[HACKERS] Please help, pgAdmin3 on Debian!

2006-03-28 Thread lmyho
   Dear All,

I am totally new to PostgreSQL and pgAdmin, and I really need your help. I just installed PostgreSQL 8.1 and pgAdmin3 on a Debian system using apt-get. Apparently the initial database and the user "postgres" were also created automatically during installation, which is good. But I am having big trouble logging in to this initial database as the auto-created user "postgres" through pgAdmin :( The first try failed with "Ident authentication failed", so I followed the suggestion in pgAdmin3's pop-up window and changed the ident method in the pg_hba.conf file to md5 and tried again, but now the database asks me for a password, which I couldn't figure out. So I tried creating rules in the pg_ident.conf file to map both the ordinary user and the root user of the Debian system to postgres, and tried again. But it still failed, with "ident authentication failed" again! I've tried everything I could think of and failed every time. By the way, before each try I did "pg_ctl reload", and I could see the failure reason change after the reload.

I've sent mail to another list but got no answer back. I believe people in this group must know the reason and the solution, so would you please help me? If there is an auto-created password for this auto-created postgres user, would anyone please tell me what it is? Otherwise, how can I log in using the "ident" auth method through pgAdmin3 (just locally) on the Debian system?

Any help would be greatly appreciated! Thank you so much for your help!

leo

Re: [HACKERS] Please help, pgAdmin3 on Debian!

2006-03-28 Thread Adrian Maier
On 3/28/06, lmyho [EMAIL PROTECTED] wrote:
  Dear All,

  Which is good.  But I've got big trouble to login to this initial db by
 using this auto-created username "postgres" through pgAdmin :(((  The first
 try failed due to "Ident authentication failed", so I follow the suggestion
 on the pop-up window of pgAdmin3, and changed the ident method in the
 pg_hba.conf file all to md5 to try again, but the database now ask me for
 the password!! which I couldn't figure out the passwd so I tried to created
 rules in the pg_ident.conf file to map both the ordinary user and root user
 of Debian system to postgres, and tried again.  But still failed :(((
 "ident authentication failed" again!!! :(((  I've tried many times for all I
 could think and failed everytime failed :(((  By the way each time before I
 try, I did "pg_ctl reload", and I could see the failure reason changed after
 I do reload.

  I've sent mail to other list but no anwser back.  I believe people in this
 group must know what's the reason and solution.  So would you please help
 me?  So if there is auto-created password for this auto-created postgres
 user, please anyone tell me what it is??


You could try changing the ident method to trust (in pg_hba.conf). This
should allow you to log in.
Then set the password of the postgres user (ALTER USER postgres WITH PASSWORD
'blabla1212';).  Then you can change the ident method back to md5.
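
For the archives, that sequence might look like the following sketch (the pg_hba.conf line is illustrative; match the TYPE/DATABASE/USER columns to your own file, and run "pg_ctl reload" after each edit):

```sql
-- pg_hba.conf, temporarily:   local   all   all   trust
-- then connect (e.g. psql -U postgres template1) and set a password:
ALTER USER postgres WITH PASSWORD 'blabla1212';
-- pg_hba.conf, afterwards:    local   all   all   md5
```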


Adrian Maier

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Simon Riggs
On Mon, 2006-03-27 at 22:03 -0500, Tom Lane wrote:
 Greg Stark [EMAIL PROTECTED] writes:
  Tom Lane [EMAIL PROTECTED] writes:
  I think what's happened here is that VACUUM FULL moved the only tuple
  off page 1 of the relation, then truncated off page 1, and now
  heap_update_redo is panicking because it can't find page 1 to replay the
  move.  Curious that we've not seen a case like this before, because it
  seems like a generic hazard for WAL replay.
 
  This sounds familiar
  http://archives.postgresql.org/pgsql-hackers/2005-05/msg01369.php

Yes, I remember that also.

 After further review I've concluded that there is not a systemic bug
 here, but there are several nearby local bugs.  

IMHO it's amazing to find so many bugs in a code review of existing
production code. Cool.

 The reason it's not
 a systemic bug is that this scenario is supposed to be handled by the
 same mechanism that prevents torn-page writes: the first XLOG record
 that touches a given page after a checkpoint is supposed to rewrite
 the entire page, rather than update it incrementally.  Since XLOG replay
 always begins at a checkpoint, this means we should always be able to
 write a fresh copy of the page, even after relation deletion or
 truncation.  Furthermore, during XLOG replay we are willing to create
 a table (or even a whole tablespace or database directory) if it's not
 there when touched.  The subsequent replay of the deletion or truncation
 will get rid of any unwanted data again.

That will all work, agreed.

 The subsequent replay of the deletion or truncation
 will get rid of any unwanted data again.

Trouble is, it is not a watertight assumption that there *will be* a
subsequent truncation, even if it is a strong one. If there is no
later truncation, we will just ignore what we now ought to know is an
error and then try to continue as if the database were fine, which it
would not be.

The overall problem is that auto extension fails to take action or
provide notification with regard to file system corruptions. Clearly we
would like xlog replay to work even in the face of strong file
corruptions, but we should make attempts to identify this situation and
notify people that this has occurred.

I'd suggest both WARNING messages in the log and something more extreme
still: anyone touching a corrupt table should receive a NOTICE saying
"database recovery displayed errors for this table. HINT: check the
database logfiles for specific messages". Indexes should get a logged
WARNING saying "database recovery displayed errors for this index.
HINT: use REINDEX to rebuild this index".

So I guess I had better help if we agree this is beneficial.

 Therefore, there is no systemic bug --- unless you are running with
 full_page_writes=off.  I assert that that GUC variable is broken and
 must be removed.

On this analysis, I would agree for current production systems. But what
this says is something deeper: we must log full pages, not because we
fear a partial page write has occurred, but because the xlog mechanism
intrinsically depends upon the existence of those full pages after each
checkpoint.

The writing of full pages in this way is a serious performance issue
that it would be good to improve upon. Perhaps this is the spur to
discuss a new xlog format that would support higher performance logging
as well as log-mining for replication?

 There are, however, a bunch of local bugs, including these:

...

 Notice that these are each, individually, pretty low-probability
 scenarios, which is why we've not seen many bug reports.  

Most people don't file bug reports. If we have a recovery mode that
ignores file system corruptions we'll get even fewer, because any errors
that do occur will be blamed on gamma rays or some other excuse.

 a systemic bug 

Perhaps we do have one systemic problem: systems documentation.

The xlog code is distinct from other parts of the codebase in that it
has almost zero comments with it and the overall mechanisms are
relatively poorly documented in README form. Methinks there are very few
people who could attempt such a code review and even fewer who would
find any bugs by inspection. I'll think some more on that...

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Please help, pgAdmin3 on Debian!

2006-03-28 Thread Martin Pitt
Hi lmyho,

lmyho [2006-03-28  0:17 -0800]:
   I am totally new to the PostgreSQL, and pgAdmin.  I really need
   your help.
  
  I just installed the PostgreSQL8.1 and pgAdmin3 on a Debian system,
  using the apt-get install command.  Apparently, the initial
  database and the user postgres have also been automatically
  created during the installation.  

Please feel free to mail me directly
([EMAIL PROTECTED]) with questions related to the
Debian packages. They might be regarded as noise on the upstream lists.

  Which is good.  But I've got big trouble to login to this initial
  db by using this auto-created username postgres through
  pgAdmin:(((  The first try failed due to Ident authentication
  failed

The 'postgres' user in Debian is a system user with a locked password,
since it is not recommended to use it for normal work with the
database. As /usr/share/postgresql-common/README.Debian describes, you
should first create your own database user and work with that. Then
the default 'ident' authentication scheme will work, and you are free
to set a password for your db user as well (so that connecting from a
remote computer over TCP works, too).

If you really need to connect as user postgres to do administrative
tasks, then the easiest solution is to set a password for the user
postgres, as already mentioned in the previous reply.
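
A minimal sketch of that recommended setup (the user and database names here are just examples; run these via psql as the postgres superuser, e.g. after "su - postgres"):

```sql
-- Create a database user matching your Debian login, so 'ident' auth works:
CREATE USER leo;
-- Optionally set a password so TCP connections (e.g. from pgAdmin3) work too:
ALTER USER leo WITH PASSWORD 'secret';
-- Give the user a database of their own:
CREATE DATABASE leodb OWNER leo;
```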

HTH,

Martin

-- 
Martin Pitt   http://www.piware.de
Ubuntu Developer   http://www.ubuntu.com
Debian Developer   http://www.debian.org

In a world without walls and fences, who needs Windows and Gates?




Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tatsuo Ishii
 Tatsuo Ishii [EMAIL PROTECTED] writes:
  I don't mind having encoding conversions be named within schemas,
  but I propose that any given encoding pair be allowed to have only
  one default conversion, period, and that when we are looking for
  a default conversion we find it by a non-namespace-aware search.
 
  That doesn't sound good idea to me.
 
 What does it mean to have different default encoding conversions in
 different schemas?  Even if this had a sensible interpretation, I don't
 think the existing code implements it properly.

  Then why do we have CREATE DEFAULT CONVERSION command at all?
 
 So you can create the one you're allowed to have, of course ...

If you allow only one default conversion for encodings A and B
regardless of schema, then how can one have a different default
conversion for A and B?

I'm sure we need more than one default conversion for encodings A and
B. For example, different vendors provide different conversion maps
for SJIS and UTF-8: M$ has its own, Apple has another one, etc. The
differences are not huge, but some customers might think the difference
is critical. In that case they could create their own conversion in
their schema.
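
To illustrate that last point, a vendor-specific map could live in its own schema; the schema, conversion, and function names below are hypothetical (the conversion function itself would have to be provided separately):

```sql
-- A default SJIS -> UTF8 conversion kept in a vendor-specific schema.
-- apple_sjis_to_utf8_proc is a hypothetical conversion function.
CREATE SCHEMA apple_maps;
CREATE DEFAULT CONVERSION apple_maps.apple_sjis_to_utf8
    FOR 'SJIS' TO 'UTF8' FROM apple_sjis_to_utf8_proc;
```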
--
Tatsuo Ishii
SRA OSS, Inc. Japan



[HACKERS] Tru64/Alpha problems

2006-03-28 Thread Andrew Dunstan


Honda Shigehiro has diagnosed the longstanding problems with his 
Tru64/Alpha buildfarm member (bear). See below.


First, it appears that there is a problem with the system getaddrinfo(), 
which configure reports as usable, but turns out not to be. Our current 
configure test checks the return value of getaddrinfo("", "", NULL, 
NULL), but I am wondering if we should test for "localhost" instead of "" 
as the first parameter.


Second, this platform apparently doesn't handle Infinity 
and NaN well. The regression diffs are attached.


cheers

andrew

 Original Message 
Subject:Re: postgresql buildfarm member bear
Date:   Tue, 28 Mar 2006 21:53:15 +0900 (JST)
From:   Honda Shigehiro [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]





I found the cause. Tru64's getaddrinfo seems to be broken.
(I use version 5.0, but according to a Google search this is the same up
to version 5.1B.) I had only used it with Unix-domain sockets.


So I succeed in starting the server over a Unix-domain socket (e.g. make check).
But with listen_addresses = 'localhost', it fails with:
 LOG:  could not translate host name "localhost", service "5432" to address: 
servname not supported for ai_socktype

To solve this, I changed the build to use src/port/getaddrinfo.c.
(I have little knowledge about autoconf... so this is ugly.)
Is there a smarter way that does not need code changes?

(1) change configure script and run it
bash-2.05b$ diff configure.aaa configure
14651c14651
< #define HAVE_GETADDRINFO 1
---
> /* #define HAVE_GETADDRINFO 1 */


(2) run make 
It fails with some undefined symbols. After the failure, change directory 
to src/port and type:

cc -std   -I../../src/port  -I../../src/include -I/usr/local/include -c 
getaddrinfo.c -o getaddrinfo.o
ar crs libpgport.a isinf.o getopt_long.o copydir.o dirmod.o exec.o noblock.o 
path.o pipe.o pgsleep.o pgstrcasecmp.o sprompt.o thread.o getaddrinfo.o
ar crs libpgport_srv.a isinf.o getopt_long.o copydir.o dirmod_srv.o exec_srv.o 
noblock.o path.o pipe.o pgsleep.o pgstrcasecmp.o sprompt.o thread_srv.o 
getaddrinfo.o

(3) re-run make command

(4) run make check and make installcheck
The float4 and float8 tests fail in both cases. 





*** ./expected/float4.out   Thu Apr  7 10:51:40 2005
--- ./results/float4.outTue Mar 28 21:03:10 2006
***
*** 35,69 
  ERROR:  invalid input syntax for type real: 1235
  -- special inputs
  SELECT 'NaN'::float4;
!  float4 
! 
! NaN
! (1 row)
! 
  SELECT 'nan'::float4;
!  float4 
! 
! NaN
! (1 row)
! 
  SELECT '   NAN  '::float4;
!  float4 
! 
! NaN
! (1 row)
! 
  SELECT 'infinity'::float4;
!   float4  
! --
!  Infinity
! (1 row)
! 
  SELECT '  -INFINiTY   '::float4;
!   float4   
! ---
!  -Infinity
! (1 row)
! 
  -- bad special inputs
  SELECT 'N A N'::float4;
  ERROR:  invalid input syntax for type real: N A N
--- 35,54 
  ERROR:  invalid input syntax for type real: 1235
  -- special inputs
  SELECT 'NaN'::float4;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  SELECT 'nan'::float4;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  SELECT '   NAN  '::float4;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  SELECT 'infinity'::float4;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  SELECT '  -INFINiTY   '::float4;
! ERROR:  floating-point exception
! DETAIL:  An invalid floating-point operation was signaled. This probably 
means an out-of-range result or an invalid operation, such as division by zero.
  -- bad special inputs
  SELECT 'N A N'::float4;
  ERROR:  invalid input syntax for type real: N A N
***
*** 72,90 
  SELECT ' INFINITYx'::float4;
  ERROR:  invalid input syntax for type real:  INFINITYx
  SELECT 'Infinity'::float4 + 100.0;
! ERROR:  type double precision value out of range: overflow
  SELECT 'Infinity'::float4 / 'Infinity'::float4;
!  ?column? 
! --
!   NaN
! (1 row)
! 
  SELECT 'nan'::float4 / 'nan'::float4;
!  ?column? 
! --
!   NaN
! (1 row)
! 
  SELECT '' AS five, * FROM FLOAT4_TBL;
   five | f1  
  --+-
--- 57,70 
  SELECT ' INFINITYx'::float4;
  ERROR:  invalid input syntax for type real:  INFINITYx
  SELECT 'Infinity'::float4 + 100.0;
! ERROR:  floating-point exception
! DETAIL:  An 

Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Mon, 2006-03-27 at 22:03 -0500, Tom Lane wrote:
 The subsequent replay of the deletion or truncation
 will get rid of any unwanted data again.

 Trouble is, it is not a watertight assumption that there *will be* a
 subsequent truncation, even if it is a strong one.

Well, in fact we'll have correctly recreated the page, so I'm not
thinking that it's necessary or desirable to check this.  What's the
point?  "PANIC: we think your filesystem screwed up.  We don't know
exactly how or why, and we successfully rebuilt all our data, but
we're gonna refuse to start up anyway."  Doesn't seem like robust
behavior to me.  If you check the archives you'll find that we've
backed off panic-for-panic's-sake behaviors in replay several times
before, after concluding they made the system less robust rather than
more so.  This just seems like another one of the same.

 Perhaps we do have one systemic problem: systems documentation.

I agree on that ;-).  The xlog code is really poorly documented.
I'm going to try to improve the comments for at least the xlogutils
routines while I'm fixing this.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Tablespaces oddity?

2006-03-28 Thread Tom Lane
Philip Yarra [EMAIL PROTECTED] writes:
 Someone else might be able to see a better way to write this query, but I 
 think it would be good if \d could show this information, when you really 
 want to know which tablespace an object is on. 

If \d doesn't say anything then the table is in the database's default
tablespace.  I see nothing wrong with that, and I do object to
cluttering \d output with information that will be of no interest to
people not using tablespaces.

 Note also that \l won't show you the tablespace for a DB, so you need
 to query pg_database to even know which is the default tablespace for
 a DB.

I wouldn't object to adding default tablespace to \l output, or maybe \l+.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Shared memory

2006-03-28 Thread Thomas Hallgren

Hi Simon,
Thanks for your input. All good points. I actually did some work using Java stored 
procedures on DB2 a while back, but I had managed to forget (or repress :-) ) all about the 
FENCED/NOT FENCED stuff. The current discussion definitely puts it in a different 
perspective. I think PL/Java has a pretty good 'NOT FENCED' implementation, as do many 
other PLs, but no PL has yet come up with a FENCED solution.


This FENCED/NOT FENCED terminology would be a good way to differentiate between the two 
approaches. Any chance of that syntax making it into the PostgreSQL grammar, should the need 
arise?


Some more comments inline:

Simon Riggs wrote:

Just some thoughts from afar: DB2 supports in-process and out-of-process
external function calls (UDFs) that it refers to as UNFENCED and FENCED
procedures. For Java only, IBM have moved to supporting *only* FENCED
procedures for Java functions, i.e. having a single JVM for all
connections.


Are you sure about this? As I recall it, a FENCED stored procedure executed in a remote JVM 
of its own. A parameter could be used that either caused a new JVM to be instantiated for 
each stored procedure call or kept it for the duration of the session. The former would 
yield really horrible performance but keep memory utilization at a minimum. The latter would 
get more acceptable performance but waste more memory (on par with PL/Java today).




Each connection's Java function runs as a thread on a
single dedicated JVM-only process. 

If that was true, then different threads could share dirty session data. I wanted to do that 
using DB2 but found it impossible. That was a while back though.



That approach definitely does increase the invocation time, but it
significantly reduces the resources associated with the JVM, as well as
allowing memory management to be more controllable (bliss...). So the
overall picture could be more CPU and memory resources for each
connection in the connection pool.

My very crude measurements indicate that the overhead of using a separate JVM is between 
6-15MB of real memory per connection. Today, you get about 10MB/$ and servers configured 
with 4GB RAM or more are not uncommon.


I'm not saying that the overhead doesn't matter. Of course it does. But the time when you 
needed to be extremely conservative with memory usage has passed. It might be far less 
expensive to buy some extra memory than to invest in SMP architectures to minimize IPC overhead.


My point is, even fairly large app-servers (using connection pools with up to 200 
simultaneous connections) can run using relatively inexpensive boxes such as an AMD64 based 
server with 4GB RAM and show very good throughput with the current implementation.




If you have a few small Java functions centralisation would not be good,
but if you have a whole application architecture with many connections
executing reasonable chunks of code then this can be a win.

One thing to remember is that a 'chunk of code' that executes in a remote JVM and uses 
JDBC will be hit by the IPC overhead on each interaction over the JDBC connection. I.e. the 
overhead is not just limited to the actual call of the UDF; it is also imposed on all 
database accesses that the UDF makes in turn.




In that environment we used Java for major database functions, with SQL
functions for small extensions.


My guess is that those major database functions did a fair amount of JDBC. Am I 
right?



Also, the Java invocation time we should be celebrating is that, by having
Java in the database, the Java-to-DB time is much less than it would be if
we had a Java stack sitting on another server.



I think the cases when you have a Tomcat or JBoss sitting on the same physical server as the 
actual database are very common. One major reason being that you don't want network overhead 
between the middle tier and the backend. Moving logic into the database instead of keeping 
it in the middle tier is often done to get rid of the last hurdle, the overhead of IPC.



Regards,
Thomas Hallgren


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 I'm sure we need more than one default conversion for encoding A and
 B. For example, different vendors provide different conversion maps
 for SJIS and UTF-8. M$ has its own and Apple has another one, etc. The
 differences are not huge but some customers might think the difference
 is critical. In this case they could create their own conversion in
 their schema.

Well, being able to switch to a different conversion is fine, but I don't
think that's a good argument for tying it to the schema search path.
What would make more sense to me is a command specifically setting the
conversion to use --- perhaps a GUC variable, since then ALTER USER SET
and ALTER DATABASE SET would provide convenient ways of controlling it.
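
If such a GUC existed ("default_conversion" below is purely hypothetical, just sketching the suggestion), it would be controlled like any other setting:

```sql
-- Hypothetical GUC: pick a per-user or per-database default conversion.
ALTER USER tanaka SET default_conversion = 'apple_sjis_to_utf8';
ALTER DATABASE jpdb SET default_conversion = 'ms_sjis_to_utf8';
```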

regards, tom lane



Re: [HACKERS] Tru64/Alpha problems

2006-03-28 Thread Andrew Dunstan

Tom Lane wrote:

Andrew Dunstan [EMAIL PROTECTED] writes:

Honda Shigehiro has diagnosed the longstanding problems with his 
Tru64/Alpha buildfarm member (bear). See below.

First, it appears that there is a problem with the system getaddrinfo(), 
which configure reports as usable, but turns out not to be. Our current 
configure test checks the return value of getaddrinfo("", "", NULL, 
NULL), but I am wondering if we should test for "localhost" instead of "" 
as the first parameter.

Huh?  That's just an AC_TRY_LINK test, we don't actually execute it.
If we did, the test would fail on machines where resolution of "localhost"
is broken, which we already know is a not-so-rare disease ...

I'm not sure that I believe the "getaddrinfo doesn't work" diagnosis
anyway, seeing that bear gets through make check okay.  Wouldn't that
fail too if there were a problem there?

 



Now that I look further into it, this machine was working just fine 
until we made a change in configure, allegedly to get things right on 
Tru64. The first build that went wrong was the one right after 
configure.in version 1.450. I see a report from Albert Chin that this 
patch worked, but the buildfarm member seems to provide counter-proof.



cheers

andrew



Re: [HACKERS] Tru64/Alpha problems

2006-03-28 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I'm not sure that I believe the "getaddrinfo doesn't work" diagnosis
 anyway, seeing that bear gets through make check okay.  Wouldn't that
 fail too if there were a problem there?

 Now that I look further into it, this machine was working just fine 
 until we made a change in configure, allegedly to get things right on 
 Tru64. The first build that went wrong was the one right after 
 configure.in version 1.450. I see a report from Albert Chin that this 
 patch worked, but the buildfarm member seems to provide counter-proof.

Ugh.  So probably it depends on just which version of Tru64 you're using
:-(.  Maybe earlier versions of Tru64 have a broken getaddrinfo and it's
fixed in later ones?  How would we tell the difference?

regards, tom lane



Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tatsuo Ishii
 Tatsuo Ishii [EMAIL PROTECTED] writes:
  I'm sure we need more than one default conversion for encoding A and
  B. For example, different vendors provide different conversion maps
  for SJIS and UTF-8. M$ has its own and Apple has another one, etc. The
  differences are not huge but some customers might think the difference
  is critical. In this case they could create their own conversion in
  their schema.
 
 Well, being able to switch to a different conversion is fine, but I don't
 think that's a good argument for tying it to the schema search path.
 What would make more sense to me is a command specifically setting the
 conversion to use --- perhaps a GUC variable, since then ALTER USER SET
 and ALTER DATABASE SET would provide convenient ways of controlling it.

If it does work, then it's ok. However, I'm still not sure why the
current method is evil.

BTW, what does the standard say about conversions vs. schemas? Don't
conversions belong to schemas? If so, then a schema-specific default
conversion seems the more standard-friendly way.
--
Tatsuo Ishii
SRA OSS, Inc. Japan

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Jim C. Nasby
On Tue, Mar 28, 2006 at 10:07:35AM -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Mon, 2006-03-27 at 22:03 -0500, Tom Lane wrote:
  The subsequent replay of the deletion or truncation
  will get rid of any unwanted data again.
 
  Trouble is, it is not a watertight assumption that there *will be* a
  subsequent truncation, even if it is a strong one.
 
 Well, in fact we'll have correctly recreated the page, so I'm not
 thinking that it's necessary or desirable to check this.  What's the
 point?  "PANIC: we think your filesystem screwed up.  We don't know
 exactly how or why, and we successfully rebuilt all our data, but
 we're gonna refuse to start up anyway."  Doesn't seem like robust
 behavior to me.  If you check the archives you'll find that we've
 backed off panic-for-panic's-sake behaviors in replay several times
 before, after concluding they made the system less robust rather than
 more so.  This just seems like another one of the same.

Would the suggestion made in
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01374.php help
in this regard? (Sorry, much of this is over my head, but not everyone
may have read that...)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 Well, being able to switch to a different conversion is fine, but I don't
 think that's a good argument for tying it to the schema search path.

 If it does work, then it's ok. However still I'm not sure why current
 method is evil.

Because with the current definition, any change in search_path really
ought to lead to repeating the lookup for the default conversion proc.
That's a bad idea from a performance point of view and I don't think
it's a particularly good idea from the definitional point of view
either --- do you really want the client conversion changing because
some function altered the search path?

 BTW, what does the standard say about conversion vs. schema? Doesn't
 conversion belong to schema? If so, then schema specific default
 conversion seems more standard-friendly way.

AFAICT we invented the entire concept of conversions ourselves.  I see
nothing about CREATE CONVERSION in the SQL spec.  There is a CREATE
TRANSLATION in SQL2003, which we'd probably not seen when we invented
CREATE CONVERSION, but it does *not* have a DEFAULT clause.  I don't
think you can point to the spec to defend our current method of
selecting which conversion to use.

regards, tom lane



Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 On Tue, Mar 28, 2006 at 10:07:35AM -0500, Tom Lane wrote:
 Well, in fact we'll have correctly recreated the page, so I'm not
 thinking that it's necessary or desirable to check this.

 Would the suggestion made in
 http://archives.postgresql.org/pgsql-hackers/2005-05/msg01374.php help
 in this regard?

That's exactly what we are debating: whether it's still necessary/useful
to make such a check, given that we now realize the failures are just
isolated bugs and not a systemic problem with truncated files.

regards, tom lane



Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Martijn van Oosterhout
On Wed, Mar 29, 2006 at 01:09:08AM +0900, Tatsuo Ishii wrote:
 BTW, what does the standard say about conversion vs. schema? Doesn't
 conversion belong to schema? If so, then schema specific default
 conversion seems more standard-friendly way.

The standard says nothing about conversions. They're only used when
communicating between the client and the server. By having them belong
to a schema you suggest that your queries be interpreted differently
character set-wise depending on the schema.

SELECT * FROM myschema.mytable;
SET search_path=otherschema;
SELECT * FROM myschema.mytable;

So the second may produce a different output, because the schema changed
and the data sent to the client will be encoded in a different encoding.
Of course, if the client and server are using the same encoding then the
queries will produce the same result. That sounds broken to me.

The reason it doesn't happen now is that (as Tom said) we only do
the lookup once. But you can trigger it if you're careful.
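
For reference, the default-conversion lookup is driven by the client encoding, which a session sets like this (the search_path games only matter because the lookup result is then cached):

```sql
SET client_encoding TO 'SJIS';  -- triggers the default-conversion lookup
SHOW client_encoding;
```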

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] Shared memory

2006-03-28 Thread Tom Lane
Thomas Hallgren [EMAIL PROTECTED] writes:
 This FENCED/NOT FENCED terminology would be a good way to
 differentiate between the two approaches. Any chance of that syntax
 making it into the PostgreSQL grammar, should the need arise?

Of what value would it be to have it in the grammar?  The behavior would
be entirely internal to any particular PL in any case.

regards, tom lane



Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tatsuo Ishii
  If it does work, then it's ok. However still I'm not sure why current
  method is evil.
 
 Because with the current definition, any change in search_path really
 ought to lead to repeating the lookup for the default conversion proc.
 That's a bad idea from a performance point of view and I don't think
 it's a particularly good idea from the definitional point of view
 either --- do you really want the client conversion changing because
 some function altered the search path?

That argument does not strike me as very strong. I cannot imagine a
case where search_path changes that frequently.

 AFAICT we invented the entire concept of conversions ourselves.  I see
 nothing about CREATE CONVERSION in the SQL spec.  There is a CREATE
 TRANSLATION in SQL2003, which we'd probably not seen when we invented
 CREATE CONVERSION, but it does *not* have a DEFAULT clause.  I don't
 think you can point to the spec to defend our current method of
 selecting which conversion to use.

SQL's CONVERT and TRANSLATE are different things. CONVERT changes
encodings, while TRANSLATE changes character sets. However, the
difference between encodings and character sets is sometimes vague,
as for encodings such as LATIN* and SJIS.
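
For reference, the PostgreSQL syntax under discussion looks roughly like
this (a sketch only; the conversion name and function below are made-up
examples, not real objects):

```sql
-- PostgreSQL's own CREATE CONVERSION, including the DEFAULT clause that
-- has no counterpart in SQL2003's CREATE TRANSLATION.  The conversion
-- name and function here are hypothetical:
CREATE DEFAULT CONVERSION my_latin1_to_utf8
    FOR 'LATIN1' TO 'UTF8' FROM my_latin1_to_utf8_func;
```

The conversion is created in the current schema, which is why the
default-conversion lookup ends up entangled with search_path in the
first place.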
--
Tatsuo Ishii
SRA OSS, Inc. Japan



Re: [HACKERS] Why are default encoding conversions

2006-03-28 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 Because with the current definition, any change in search_path really
 ought to lead to repeating the lookup for the default conversion proc.
 That's a bad idea from a performance point of view and I don't think
 it's a particularly good idea from the definitional point of view
 either --- do you really want the client conversion changing because
 some function altered the search path?

 That argument does not strike me as very strong. I cannot imagine a
 case where search_path changes that frequently.

I can.  There's been talk for example of having a search path associated
with every function definition, so that it might need to be changed at
every function call and return.  In any case I don't like the notion
that the client conversion is tied to search_path; they really should
be independent.

regards, tom lane



Re: [HACKERS] Shared memory

2006-03-28 Thread Thomas Hallgren

Tom Lane wrote:

Thomas Hallgren [EMAIL PROTECTED] writes:
  

This FENCED/NOT FENCED terminology would be a good way to
differentiate between the two approaches. Any chance of that syntax
making it into the PostgreSQL grammar, should the need arise?



Of what value would it be to have it in the grammar?  The behavior would
be entirely internal to any particular PL in any case.

  
Not necessarily but perhaps the term FENCED is incorrect for the concept 
that I have in mind.


All languages that are implemented using a VM could benefit from the 
same remote UDF protocol. Java, C#, perhaps even Perl or Ruby. The flag 
that I'd like to have would control 'in-process' versus 'remote'.


I'm not too keen on the term FENCED since, in the PL/Java case, it will
lead to poorer isolation. Multiple threads running in the same JVM will
be able to share data, and a JVM crash will affect all connected sessions.


Then again, perhaps it's a bad idea to have this in the function
declaration in the first place. A custom GUC parameter might be a better
choice. It will not be possible to have some functions use the
in-process approach while others execute remotely, but I doubt that will
matter much.


I'm still eager to hear what it is in the current PL/Java that you
consider a fundamental, unresolvable problem.


Regards,
Thomas Hallgren




Re: [HACKERS] Shared memory

2006-03-28 Thread Simon Riggs
On Tue, 2006-03-28 at 17:48 +0200, Thomas Hallgren wrote:

 Simon Riggs wrote:
  Just some thoughts from afar: DB2 supports in-process and out-of-process
  external function calls (UDFs) that it refers to as UNFENCED and FENCED
  procedures. For Java only, IBM have moved to supporting *only* FENCED
  procedures for Java functions, i.e. having a single JVM for all
  connections.
  
 Are you sure about this? 

Yes.

 As I recall it a FENCED stored procedure executed in a remote JVM of its
 own. A parameter could be used that either caused a new JVM to be
 instantiated for each stored procedure call or to be kept for the
 duration of the session. The former would yield really horrible
 performance but keep memory utilization at a minimum. The latter would
 get a more acceptable performance but waste more memory (on par with
 PL/Java today).

In the previous release, yes.

  That approach definitely does increase the invocation time, but it
  significantly reduces the resources associated with the JVM, as well as
  allowing memory management to be more controllable (bliss...). So the
  overall picture could be more CPU and memory resources for each
  connection in the connection pool.
  
 My very crude measurements indicate that the overhead of using a
 separate JVM is between 6-15MB of real memory per connection. Today, you
 get about 10MB/$ and servers configured with 4GB RAM or more are not
 uncommon.
 
 I'm not saying that the overhead doesn't matter. Of course it does. But
 the time when you needed to be extremely conservative with memory usage
 has passed. It might be far less expensive to buy some extra memory than
 to invest in SMP architectures to minimize IPC overhead.
 
 My point is, even fairly large app-servers (using connection pools with
 up to 200 simultaneous connections) can run using relatively inexpensive
 boxes such as an AMD64-based server with 4GB RAM and show very good
 throughput with the current implementation.

Memory is cheap, memory bandwidth is not.

All CPUs have limited cache resources, so the more mem you waste, the
less efficient your CPUs will be.

That affects the way you do things, sure. 1GB lookup table: no problem.
10MB wasted memory retrieval: lots of dead CPU time.

  If you have a few small Java functions centralisation would not be good,
  but if you have a whole application architecture with many connections
  executing reasonable chunks of code then this can be a win.
  
 One thing to remember is that a 'chunk of code' that executes in a
 remote JVM and uses JDBC will be hit by the IPC overhead on each
 interaction over the JDBC connection. I.e. the overhead is not just
 limited to the actual call of the UDF; it's also imposed on all database
 accesses that the UDF makes in turn.
 
 
  In that environment we used Java for major database functions, with SQL
  functions for small extensions.
  
 My guess is that those major database functions did a fair amount of JDBC. Am 
 I right?

Not once I'd reviewed them...

  Also the Java invocation time we should be celebrating is that by having
  Java in the database the Java-DB time is much less than it would be if
  we had a Java stack sitting on another server.
  
 
 I think the cases when you have a Tomcat or JBoss sitting on the same
 physical server as the actual database are very common. One major reason
 being that you don't want network overhead between the middle tier and
 the backend. Moving logic into the database instead of keeping it in the
 middle tier is often done to get rid of the last hurdle, the overhead of
 IPC.

I can see the performance argument for both, but supporting both,
especially in a mix-and-match architecture is much harder.

Anyway, just trying to add some additional perspective.

Best Regards, Simon Riggs




Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Simon Riggs
On Tue, 2006-03-28 at 10:07 -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Mon, 2006-03-27 at 22:03 -0500, Tom Lane wrote:
  The subsequent replay of the deletion or truncation
  will get rid of any unwanted data again.
 
  Trouble is, it is not a watertight assumption that there *will be* a
  subsequent truncation, even if it is a strong one.
 
 Well, in fact we'll have correctly recreated the page, so I'm not
 thinking that it's necessary or desirable to check this.  What's the
 point?  

We recreated *a* page but we are shying away from exploring *why* we
needed to in the first place. If there was no later truncation then
there absolutely should have been a page there already and the fact
there wasn't one needs to be reported.

I don't want to write that code either, I just think we should.

 PANIC: we think your filesystem screwed up.  We don't know
 exactly how or why, and we successfully rebuilt all our data, but
 we're gonna refuse to start up anyway.  Doesn't seem like robust
 behavior to me.  

Agreed, which is why I explicitly said we shouldn't do that.

grass_up_filesystem = on should be the only setting we support, but
you're right that we can't know why it's wrong; the sysadmin might, though.

  Perhaps we do have one systemic problem: systems documentation.
 
 I agree on that ;-).  The xlog code is really poorly documented.
 I'm going to try to improve the comments for at least the xlogutils
 routines while I'm fixing this.

I'll take a look also.

Best Regards, Simon Riggs




Re: [HACKERS] Shared memory

2006-03-28 Thread Dave Cramer


On 28-Mar-06, at 10:48 AM, Thomas Hallgren wrote:


Hi Simon,
Thanks for your input. All good points. I actually did some work using
Java stored procedures on DB2 a while back but I had managed to forget
(or repress :-) ) all about the FENCED/NOT FENCED stuff. The current
discussion definitely puts it in a different perspective. I think
PL/Java has a pretty good 'NOT FENCED' implementation, as do many other
PLs, but no PL has yet come up with a FENCED solution.


What exactly is a FENCED solution? If it is simply a remote connection
to a single JVM, then pl-j already does that.




Re: [HACKERS] Shared memory

2006-03-28 Thread Dave Cramer


On 28-Mar-06, at 12:11 PM, Thomas Hallgren wrote:


Tom Lane wrote:

Thomas Hallgren [EMAIL PROTECTED] writes:


This FENCED/NOT FENCED terminology would be a good way to
differentiate between the two approaches. Any chance of that syntax
making it into the PostgreSQL grammar, should the need arise?



Of what value would it be to have it in the grammar?  The behavior  
would

be entirely internal to any particular PL in any case.


Not necessarily but perhaps the term FENCED is incorrect for the  
concept that I have in mind.


All languages that are implemented using a VM could benefit from  
the same remote UDF protocol. Java, C#, perhaps even Perl or Ruby.  
The flag that I'd like to have would control 'in-process' versus  
'remote'.


I'm not too keen on the term FENCED since, in the PL/Java case, it will
lead to poorer isolation. Multiple threads running in the same JVM will
be able to share data, and a JVM crash will affect all connected sessions.
When was the last time you saw a JVM crash? They are very rare now.
In any case, if one does fail it's a JVM bug, which can happen with any
running code and will take the server down if the JVM is in-process.




[HACKERS] autovacuum: could not access status of transaction

2006-03-28 Thread Nichlas Löfdahl



Hello!


PostgreSQL 8.1.1 on x86_64-pc-linux-gnu

I've been running a server with autovacuum enabled for quite a while now
(months) without problems. But recently the server slowed down, and after
investigation I found the following repeated error message in the log:

LOG:  autovacuum: processing database "template0"
ERROR:  could not access status of transaction 3541181801
DETAIL:  could not open file "pg_clog/0D31": No such file or directory

I assume that the autovacuum process halts at this point, which means no
vacuum and/or analyze for the other databases? Which would explain the
slowdown.

What is the best way to proceed with this? Stop the postmaster, create a
zero-filled pg_clog/0D31 and restart?

Regards
Nichlas


Re: [HACKERS] Shared memory

2006-03-28 Thread Thomas Hallgren

Dave Cramer wrote:


What exactly is a FENCED solution ? If it is simply a remote 
connection to a single JVM then pl-j already does that.
Last time I tried to use pl-j (in order to build a mutual test 
platform), I didn't manage to make it compile due to missing artifacts 
and it wasn't ported to Windows. Laszlo filed a JIRA bug on that but 
since then (August last year) I've seen no activity in the project. Is 
it still alive? Is anyone using it?


Regards,
Thomas Hallgren




Re: [HACKERS] Shared memory

2006-03-28 Thread Dave Cramer

The last time I talked to him Laszlo said he is working on it again.

Dave


Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Tom Lane
I wrote:
 * log_heap_update decides that it can set XLOG_HEAP_INIT_PAGE instead
 of storing the full destination page, if the destination contains only
 the single tuple being moved.  This is fine, except it also resets the
 buffer indicator for the *source* page, which is wrong --- that page
 may still need to be re-generated from the xlog record.  This is the
 proximate cause of the bug report that started this thread.

I have to retract that particular bit of analysis: I had misread the
log_heap_update code.  It seems to be doing the right thing, and in any
case, given Alex's output

LOG:  REDO @ D/19176644; LSN D/191766A4: prev D/19176610; xid 81148979: Heap - 
move: rel 1663/16386/16559898; tid 1/1; new 0/10

we can safely conclude that log_heap_update did not set the INIT_PAGE
bit, because the new tid doesn't have offset=1.  (The fact that the
WAL_DEBUG printout doesn't report the bit's state is an oversight I plan
to fix, but anyway we can be pretty sure it's not set here.)

What we should be seeing, and don't see, is an indication of a backup
block attached to this WAL record.  Furthermore, I don't see any
indication of a backup block attached to *any* of the WAL records in
Alex's printout.  The only conclusion I can draw is that he had
full_page_writes turned OFF, and as we have just realized that that
setting is completely unsafe, that is the explanation for his failure.

 Clearly, we need to go through the xlog code with a fine tooth comb
 and convince ourselves that all pages touched by any xlog record will
 be properly reconstituted if they've later been truncated off.  I have
 not yet examined any of the code except the above.

I've finished going through the xlog code looking for related problems,
and AFAICS this is the score:

* full_page_writes = OFF doesn't work.

* btree_xlog_split and btree_xlog_delete_page should pass TRUE not FALSE
  to XLogReadBuffer for all pages that they are going to re-initialize.

* the recently-added gist xlog code is badly broken --- it pays no
  attention whatever to preventing torn pages :-(.  It's not going to be
  easy to fix, either, because the page split code assumes that a single
  WAL record can describe changes to any number of pages, which is not
  the case.

Everything else seems to be getting it right.

regards, tom lane



Re: [HACKERS] Shared memory

2006-03-28 Thread Thomas Hallgren

Dave Cramer wrote:




I'm not too keen on the term FENCED since, in the PL/Java case, it will
lead to poorer isolation. Multiple threads running in the same JVM will
be able to share data, and a JVM crash will affect all connected sessions.

When was the last time you saw a JVM crash? They are very rare now.
I think that's somewhat dependent on which JVM you're using. For the
commercial ones (BEA, IBM, and Sun), I fully agree.


In any case if it does fail, it's a JVM bug and can happen to any code 
running and take the server down if it is in process.
Crash is perhaps not the right word. My point concerned the level of
isolation. Badly written code may have a serious impact on other threads
in the same JVM. Say you cause an OutOfMemoryError or an endless loop:
the former renders the whole JVM useless and the latter causes low
scheduling priority. If the same thing happens using an in-process JVM,
the problem is isolated to that one session.


Regards,
Thomas Hallgren




[HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Greetings,
After helping a user on irc, I was wondering if there would be any
objection
to my making a patch that would:

1) expose DEFAULT_PGSOCKET_DIR via a libpq call
2) add this information to the psql --version output (or some other
switch, I'm agnostic).

For those weird times when some distro changes it and you then
overwrite parts of an install, it would be useful for diagnostics.

Comments?

LER


-- 
Larry Rosenman  
Database Support Engineer

PERVASIVE SOFTWARE. INC.
12365B RIATA TRACE PKWY
3015
AUSTIN TX  78727-6531 

Tel: 512.231.6173
Fax: 512.231.6597
Email: [EMAIL PROTECTED]
Web: www.pervasive.com 



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Tom Lane
Larry Rosenman [EMAIL PROTECTED] writes:
 1) expose DEFAULT_PGSOCKET_DIR via a libpq call
 2) add this information to the psql --version output (or some other
 switch, I'm agnostic).

pg_config would seem to be the appropriate place, not libpq nor psql.

regards, tom lane



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Darcy Buskermolen
On Tuesday 28 March 2006 14:07, Larry Rosenman wrote:
 Greetings,
 After helping a user on irc, I was wondering if there would be any
 objection
 to my making a patch that would:

 1) expose DEFAULT_PGSOCKET_DIR via a libpq call
 2) add this information to the psql --version output (or some other
 switch, I'm agnostic).

 for those weird times when some distro changes it, and you then
 overwrite parts of it,
 it would be useful for diagnostics.

Is it not already shown by pg_config --configure (if it's not the
default of /tmp)?


 Comments?

 LER

-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Tom Lane wrote:
 Larry Rosenman [EMAIL PROTECTED] writes:
 1) expose DEFAULT_PGSOCKET_DIR via a libpq call
 2) add this information to the psql --version output (or some other
 switch, I'm agnostic).
 
 pg_config would seem to be the appropriate place, not libpq nor psql.

The issue is when you overwrite PIECES of an install and they're
inconsistent.  I want to put it in libpq, since that is what makes the
connection to the server.

pg_config doesn't link to libpq at all.

The issue is what psql (and any libpq-using program) is going to use to
find the UNIX socket.

We have the unix_socket_directory GUC, but that doesn't show the
DEFAULT_PGSOCKET_DIR that libpq is using, and in fact there is nowhere
that the server exposes its default, either.  I'm wondering if we should
expose its default in unix_socket_directory when the config doesn't set
it.
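
As a quick illustration of the current behavior (assuming a stock build
where the config file leaves the setting alone), the GUC simply comes
back empty rather than showing the compiled-in default:

```sql
-- With unix_socket_directory unset in postgresql.conf, the GUC shows an
-- empty string rather than the compiled-in DEFAULT_PGSOCKET_DIR
-- (typically /tmp on most builds):
SHOW unix_socket_directory;
```

So neither the client-side nor the server-side compiled-in value is
visible anywhere at runtime today.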

LER


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Darcy Buskermolen wrote:
 On Tuesday 28 March 2006 14:07, Larry Rosenman wrote:
 Greetings,
 After helping a user on irc, I was wondering if there would be
 any objection to my making a patch that would:
 
 1) expose DEFAULT_PGSOCKET_DIR via a libpq call
 2) add this information to the psql --version output (or some other
 switch, I'm agnostic). 
 
 for those weird times when some distro changes it, and you then
 overwrite parts of it, it would be useful for diagnostics.
 
 Is it not already shown by pg_config --configure (if it's not the
 default of /tmp)?


See my reply to Tom, which I just posted.  This is for diagnostic use,
when partial overwrites and/or pathing issues cause a particular
libpq/psql combination to disagree about what's where.

I want to expose exactly what libpq is using.

LER
 
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893




Re: [HACKERS] Tablespaces oddity?

2006-03-28 Thread Philip Yarra
On Wed, 29 Mar 2006 01:36 am, Tom Lane wrote:
 Philip Yarra [EMAIL PROTECTED] writes:
  Someone else might be able to see a better way to write this query, but I
  think it would be good if \d could show this information, when you really
  want to know which tablespace an object is on.

 If \d doesn't say anything then the table is in the database's default
 tablespace.  I see nothing wrong with that, and I do object to
 cluttering \d output with information that will be of no interest to
 people not using tablespaces.

OK, how about on \d+, if the object is not on pg_default or pg_global, print 
the tablespace that this object is on? That way, people not using tablespaces 
won't ever see it.

  Note also that \l won't show you the tablespace for a DB, so you need
  to query pg_database to even know which is the default tablespace for
  a DB.

 I wouldn't object to adding default tablespace to \l output, or maybe \l+.

OK, not fussed which one it's on, so long as it's there - this should do it 
for \l+

SELECT d.datname as Name,
r.rolname as Owner,
pg_catalog.pg_encoding_to_char(d.encoding) as Encoding,
pg_catalog.obj_description(d.oid, 'pg_database') as Description,
t.spcname as Tablespace
FROM pg_catalog.pg_database d
LEFT JOIN pg_catalog.pg_roles r ON d.datdba = r.oid
LEFT JOIN pg_catalog.pg_tablespace t on d.dattablespace = t.oid;

On a related note: is there a simple way to show all objects on a given 
tablespace? If not, would other people also see this as useful?
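
For what it's worth, here is one possible sketch against the system
catalogs (untested; 'mytablespace' is a hypothetical name, and note that
relations living in the database's default tablespace have
reltablespace = 0 and so won't match):

```sql
-- List relations explicitly placed on a given tablespace.
SELECT n.nspname, c.relname, c.relkind
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid
LEFT JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
WHERE t.spcname = 'mytablespace';   -- hypothetical tablespace name
```

A backslash command wrapping something like this might be the friendlier
interface.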

Regards, Philip.

-- 

Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it. - Brian W. Kernighan





Re: [HACKERS] Please help, pgAdmin3 on Debian!

2006-03-28 Thread lmyho


 You could try to change the ident method to trust (in pg_hba.conf). This
 should allow you to login.
 Then, set the password of the postgres user (alter user postgres with password
 'blabla1212' ; ).  Then you could change the ident method back to md5 .
 
Hi Adrian,

Thank you for help!!  I've made the change and I am able to login using pgAdmin3
now.:)

Trying to learn more about PostgreSQL!

Thanks!!!
leo




[HACKERS] How are tables stored fisically in HD?

2006-03-28 Thread Bruno Cassol
Hi, my teacher wants me to find out and explain how PgSQL stores data
physically! I've done searches but could not find it. Please help me!











Re: [HACKERS] How are tables stored fisically in HD?

2006-03-28 Thread Jim C. Nasby
On Tue, Mar 28, 2006 at 08:40:27AM -0300, Bruno Cassol wrote:
 Hi, my teacher wants me to find out and explain how PgSQL stores data
 physically! I've done searches but could not find it. Please help me!

http://www.postgresql.org/docs/8.1/interactive/storage.html
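
The short version, sketched in SQL (paths assume a default layout, and
'mytable' is a hypothetical table name):

```sql
-- Each table is stored in files named after its relfilenode, under
-- $PGDATA/base/<database oid>/.  To find the pieces for a table:
SELECT oid FROM pg_database WHERE datname = current_database();
SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
-- The table's heap then lives in $PGDATA/base/<oid>/<relfilenode>,
-- split into 1GB segment files (<relfilenode>.1, .2, ...) as it grows.
```

The storage chapter linked above covers the page layout inside those
files.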
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Shared memory

2006-03-28 Thread Mark Dilger
Thomas Hallgren wrote:
 Martijn,
 
 I tried a Socket approach. Using the new IO stuff that arrived with Java
 1.4 (SocketChannel etc.), the performance is really good. Especially on
 Linux where an SMP machine show a 1 to 1.5 ratio between one process
 doing ping-pong between two threads and two processes doing ping-pong
 using a socket. That's acceptable overhead indeed and I don't think I'll
 be able to trim it much using a shared memory approach (the thread
 scenario uses Java monitor locks. That's the most efficient lightweight
 locking implementation I've come across).
 
 One downside is that on a Windows box, the ratio between the threads and
 the processes scenario seems to be 1 to 5 which is a bit worse. I've
 heard that Solaris too is less efficient than Linux in this respect.
 
 The real downside is that a call from SQL to PL/Java using the current
 in-process approach is really fast. It takes about 5 microseconds on my
 2.8GHz i386 box. The overhead of an IPC call on that box is about 18
 microseconds on Linux and 64 microseconds on Windows. That's an overhead of
 between 440% and 1300% due to context switching alone. Yet, for some
 applications, perhaps that overhead is acceptable? It should be compared
 to the high memory consumption that the in-process approach undoubtedly
 results in (which in turn might lead to less optimal use of CPU caches
 and, if memory is insufficient, more time spent doing swapping).
 
 Given those numbers, it would be interesting to hear what the community
 as a whole thinks about this.

Assuming that by "community" you mean developers not normally involved in
hackers, then:

1) As a developer, the required debugging time increases greatly when one
session can affect (or crash) all the other sessions.  This in turn drives up
the cost of development.  Unless some guarantees could be had against this sort
of intermittent runtime bugginess, I would be less likely to opt for PL/Java and
exposing myself to the potential cost overruns.

2) As a speed freak, I'm going to code things in C, not Java.  So the appeal of
Java must come from something other than speed, such as stability and faster
development cycles.

My opinion is that it all depends whether you can hammer down a reliable
solution that has the necessary stability guarantees.  Splitting the middle,
trying to get performance benefits at the cost of stability, would seem to make
PL/Java a sort of lukewarm solution on the speed side, and a lukewarm solution
on the stability side.  I doubt I could get excited about it.

mark



Re: [HACKERS] [GENERAL] PANIC: heap_update_redo: no block

2006-03-28 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] wrote

 What we should be seeing, and don't see, is an indication of a backup
 block attached to this WAL record.  Furthermore, I don't see any
 indication of a backup block attached to *any* of the WAL records in
 Alex's printout.  The only conclusion I can draw is that he had
 full_page_writes turned OFF, and as we have just realized that that
 setting is completely unsafe, that is the explanation for his failure.


This might be the answer. I tried the fill-checkpoint-vacuum-crash sequence
as you suggested, but recovery was still clean. That's because, IMHO, even
after a checkpoint, the moved page will still be written to WAL (since it is
new again relative to the checkpoint) if full_page_writes is on.

Regards,
Qingqing





Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Tom Lane
Larry Rosenman ler@lerctr.org writes:
 Tom Lane wrote:
 pg_config would seem to be the appropriate place, not libpq nor psql.

 The issue is what psql (and any libpq using program) is going to use to find
 the UNIX socket. 

No, the issue is where the server put the socket.  libpq is the wrong
place because libpq is not the only thing people use to connect to the
server.

If the DBA sets a non-default unix_socket_directory via postgresql.conf
then you're screwed no matter what: no client-side code can hope to tell
you where it is.  The only thing that is useful to inspect is the
server's compile-time default, and pg_config is the right mechanism
to inspect that with.

regards, tom lane



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Tom Lane wrote:
 Larry Rosenman ler@lerctr.org writes:
 Tom Lane wrote:
 pg_config would seem to be the appropriate place, not libpq nor
 psql. 
 
 The issue is what psql (and any libpq using program) is going to use
 to find the UNIX socket.
 
 No, the issue is where the server put the socket.  libpq is the wrong
 place because libpq is not the only thing people use to connect to
 the server.  
 
 If the DBA sets a non-default unix_socket_directory via
 postgresql.conf then you're screwed no matter what: no client-side
 code can hope to tell you where it is.  The only thing that is useful
 to inspect is the server's compile-time default, and pg_config is the
 right mechanism to inspect that with.
 
   regards, tom lane

The other issue is borked installs where the server and libpq disagree.
What I'm looking for is to expose what libpq has for its default as well
as what the server is using.  There is currently no way to determine what
libpq has for its default.  What happened in the IRC case was a partial
re-install with non-matching server and libpq.

LER


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Tom Lane
Larry Rosenman ler@lerctr.org writes:
 The other issue is borked installs where the server and libpq disagree.
 What I'm looking for is to expose what libpq has for its default as well
 as what the server is using.  There is currently no way to determine what
 libpq has for its default.  What happened in the IRC case was a partial
 re-install with non-matching server and libpq.

[ shrug... ]  So?  There isn't going to be any way that
random-app-using-libpq is going to have a way to tell the user what the
underlying copy of libpq is using for this default --- adding a call for
that will be nothing more nor less than a waste of code space.  You'd be
best off running strings(1) over the libpq.so file when the question
comes up.

regards, tom lane



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Tom Lane wrote:
 Larry Rosenman ler@lerctr.org writes:
 The other issue is borked installs where the server and libpq
 disagree.  What I'm looking for is to expose what libpq has for its
 default as well as what the server is using.  There is currently no
 way to determine what libpq has for its default.  What happened in
 the IRC case was a partial re-install with non-matching server and
 libpq.
 
 [ shrug... ]  So?  There isn't going to be any way that
 random-app-using-libpq is going to have a way to tell the user what
 the underlying copy of libpq is using for this default --- adding a
 call for that will be nothing more nor less than a waste of code
 space.  You'd be best off running strings(1) over the libpq.so file
 when the question comes up.

That's making the assumption that you know which libpq.  I was hoping to
have a psql command-line switch to dump the info, but with your
objection(s), I'll just crawl back under my rock.

 
 
   regards, tom lane



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Tom Lane
Larry Rosenman ler@lerctr.org writes:
 That's making the assumption that you know which libpq.  I was hoping to
 have a psql command-line switch to dump the info, but with your
 objection(s), I'll just crawl back under my rock.

It's not that I don't feel your pain ... but if you don't know what
version of libpq you're using, I don't see where you get to assume that
psql is invoking the same version as your app-that's-actually-broken.
Seems like there's not any substitute for some forensic effort here.

On the server side, recent discussions about getting pg_ctl to behave
sanely in the face of non-default configurations have been leading me to
think about a proposal like this:

postmaster --show-value guc-variable-name other-switches

with the behavior of parsing the postgresql.conf file, interpreting the
other-switches (which might include -D or -c that'd affect the result)
and then printing the value of the guc-variable to stdout and exiting.
This would allow pg_ctl to deal with issues such as non-default
unix_socket_directory.  Doesn't fix your problem of client-side
configuration variation, but would do a bit for the server side.

regards, tom lane



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Tom Lane wrote:
 Larry Rosenman ler@lerctr.org writes:
 That's making the assumption that you know which libpq.  I was hoping
 to have a psql command-line switch to dump the info, but with your
 objection(s), I'll just crawl back under my rock.
 
 It's not that I don't feel your pain ... but if you don't know what
 version of libpq you're using, I don't see where you get to assume
 that psql is invoking the same version as your
 app-that's-actually-broken. Seems like there's not any substitute for
 some forensic effort here.   

The particular case was psql not being able to connect to a running
postmaster on the unix socket, because of the mismatch.

What's the harm of a (pseudo code):

 const char *PQgetunixsocketdir(void)
 {
     return DEFAULT_PGSOCKET_DIR;    /* compile-time default socket directory */
 }

In libpq, and a psql command line switch to call it. 

 
 On the server side, recent discussions about getting pg_ctl to behave
 sanely in the face of non-default configurations have been leading me
 to think about a proposal like this:  
 
   postmaster --show-value guc-variable-name other-switches
 
 with the behavior of parsing the postgresql.conf file, interpreting
 the other-switches (which might include -D or -c that'd affect the
 result) and then printing the value of the guc-variable to stdout and
 exiting. This would allow pg_ctl to deal with issues such as
 non-default unix_socket_directory.  Doesn't fix your problem of
 client-side configuration variation, but would do a bit for the
 server side.

This would help as well. 



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Tom Lane
Larry Rosenman ler@lerctr.org writes:
 What's the harm of a (pseudo code):

  const char *PQgetunixsocketdir(void)
  {
      return DEFAULT_PGSOCKET_DIR;
  }

 In libpq, and a psql command line switch to call it. 

By the time you get done adding the infrastructure and documentation for
those two layers of features, you're talking about many hundreds of
lines of stuff, not four.  There are also definitional issues (what does
this do on platforms without Unix sockets) and future proofing (will we
always have DEFAULT_PGSOCKET_DIR).  So "what's the harm" is not the
appropriate measure --- especially when this proposal clearly doesn't
help in a lot of the scenarios in which one might wish to know the
information.

regards, tom lane



Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Jeremy Drake wrote:
 
 When I encounter such behavior, my tool of choice tends to be
 strace(1) rather than strings(1).  That way, you know what exactly
 the thing it wants that it is not finding is...  
That assumes that the user has strace(1) installed.  Yes, I've run into
systems that don't have it, and have no idea where the RPM/etc is for it :(.

There are also the differences between Linux (strace), SVR4 (truss), and
*BSD (ktrace), whereas a command-line switch to psql and the one-line
function I proposed would be standard across at least all the unix-like
systems (since I think that the Windows code doesn't enable
HAVE_UNIX_SOCKETS, and therefore even if the library returns a string,
it's useless).

LER


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Larry Rosenman
Tom Lane wrote:
 Larry Rosenman ler@lerctr.org writes:
 What's the harm of a (pseudo code):
 
  const char *PQgetunixsocketdir(void)
  {
      return DEFAULT_PGSOCKET_DIR;
  }
 
 In libpq, and a psql command line switch to call it.
 
 By the time you get done adding the infrastructure and documentation
 for those two layers of features, you're talking about many hundreds
 of lines of stuff, not four.  There are also definitional issues
 (what does this do on platforms without Unix sockets) and future
 proofing (will we always have DEFAULT_PGSOCKET_DIR).  So "what's the
 harm" is not the appropriate measure --- especially when this
 proposal clearly doesn't help in a lot of the scenarios in which one
 might wish to know the information.

I know that it's not just the four-line function, etc.  However, there is
currently no way to find out if that non-standard setting has been changed.
Is it safe to assume that we will always have a default unix socket that we
connect to if no hostname is specified?

However, as I said a couple of messages back, this isn't gonna fly, based
on your objections, so I'm gonna drop it.
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 512-248-2683 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3683 US




Re: [HACKERS] Exposing DEFAULT_PGSOCKET_DIR via a libpq function?

2006-03-28 Thread Jeremy Drake
On Tue, 28 Mar 2006, Tom Lane wrote:

 Larry Rosenman ler@lerctr.org writes:
  The other issue is borked installs where the server and libpq disagree.
  What I'm looking for is to expose what libpq has for its default as well
  as what the server is using.  There is currently no way to determine what
  libpq has for its default.  What happened in the IRC case was a partial
  re-install with non-matching server and libpq.

 [ shrug... ]  So?  There isn't going to be any way that
 random-app-using-libpq is going to have a way to tell the user what the
 underlying copy of libpq is using for this default --- adding a call for
 that will be nothing more nor less than a waste of code space.  You'd be
 best off running strings(1) over the libpq.so file when the question
 comes up.

When I encounter such behavior, my tool of choice tends to be strace(1)
rather than strings(1).  That way, you know what exactly the thing it
wants that it is not finding is...


-- 
Nothing astonishes men so much as common sense and plain dealing.
-- Ralph Waldo Emerson



[HACKERS] Issue in Mapping varchar datatype of PostgreSQL to Oracle

2006-03-28 Thread vidisha . shah

Hi,

We are trying to fetch records from PostgreSQL tables.

We are successfully able to build connectivity.  We are using the
PostgreSQL ODBC driver (Unicode).

But when I query the PostgreSQL tables, it is unable to map the varchar
datatype of the source table.

We got the following reply from Oracle Support.

-----

Hi,

DATA COLLECTED
===

TRACE FILE: mylog_3388.log

ISSUE VERIFICATION
===

Verified the issue by the trace file mylog_3388.log, which displays:

[4464] PGAPI_DescribeCol: res = 22422104, stmt-status = 2, !finished=1, !premature=0
[4464]getCharColumnSize: type=1043, col=2, unknown = 0
[4464]describeCol: col 2 fieldname = 'name'
[4464]describeCol: col 2 fieldtype = 1043
[4464]describeCol: col 2 column_size = 50
[4464]getCharColumnSize: type=1043, col=2, unknown = 0
[4464]describeCol: col 2 *pfSqlType = -9
[4464]describeCol: col 2 *pcbColDef = 50
[4464]describeCol: col 2 *pibScale = 0
[4464]describeCol: col 2 *pfNullable = 1

CAUSE DETERMINATION
===

The VARCHAR datatype from PostgreSQL is translated by your ODBC driver
into a datatype that is not supported by HSODBC.


CAUSE JUSTIFICATION
===

In the trace file, you get the datatype from PostgreSQL:

    1043 VARCHAR(50)

then you get the datatype that the ODBC driver maps it to on the way back
to Oracle:

    [4464]describeCol: col 2 *pfSqlType = -9

If you look at Note 252548.1, -9 is SQL_WVARCHAR, and unfortunately this
ODBC datatype is not supported by the Generic Connectivity agent (HSODBC).

To get confirmation, please have a look at the documentation:

Oracle® Database Heterogeneous Connectivity Administrator's Guide,
10g Release 2 (10.2), Part Number B14232-01,
Appendix B, "Data Type Mapping for Generic Connectivity",
B.1 "Mapping ANSI Data Types to Oracle Data Types Through an ODBC Interface"

POTENTIAL SOLUTION(S)
===

Please check if in your ODBC driver there is any option to map the
SQL_WVARCHAR datatype differently.
-----

Can you please suggest why the varchar datatype is not correctly
identified, and what is the path to a solution?

Thanks & Regards

Vidisha B Shah

Vidisha B Shah
Tata Consultancy Services Limited
Mailto: [EMAIL PROTECTED]
Website: http://www.tcs.com
