[JDBC] Fastpath error on solaris 2.8 pgsql 7.1.3

2001-08-27 Thread T . R . Missner

Anyone seen this error before?

It doesn't happen every time I insert a blob, so
I assume the code is correct; it only happens occasionally.
I have no idea how to troubleshoot
this problem.  Any help would be appreciated.

FastPath call returned ERROR:  lo_write: invalid large obj descriptor (0)

I don't know whether this is related to JDBC or not, but the code I
am using to insert the blob is JDBC.



t.r. missner
level 3 communications




Re: [JDBC] JDBC changes for 7.2 - wish list item

2001-08-27 Thread Rene Pijlman

On Mon, 27 Aug 2001 08:48:52 +1000 (EST), you wrote:
It's been mentioned before, but a set of error numbers for database errors
would make trapping exceptions and dealing with them gracefully a LOT
simpler. I have java code that runs against Oracle, Informix, PostgreSQL,
MS SQL Server and Cloudscape. All(?) the others have an error code as well
as an error message and it's a lot easier to get the error code.

I agree. It's on the list at
http://lab.applinet.nl/postgresql-jdbc/#SQLException. This
requires new functionality in the backend.

Of course, they all have *different* error codes for the same error (e.g.
a primary key violation). Nothing is ever simple.

Perhaps the SQLState string in SQLException can make this easier
(if we can support this with PostgreSQL). This is supposed to
contain a string identifying the exception, following the Open
Group SQLState conventions. I'm not sure how useful these are.
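
For illustration, trapping a specific error via SQLState might look like
the sketch below. It assumes the driver fills in SQLState, which (as
noted) needs backend support first; the table name and values are made
up, and "23" is the X/Open class for integrity constraint violations.

    import java.sql.*;

    class SQLStateExample
    {
        // Hypothetical: recover from a primary key violation portably.
        static void insertAccount(Connection conn) throws SQLException
        {
            Statement stmt = conn.createStatement();
            try {
                stmt.executeUpdate(
                    "INSERT INTO accounts VALUES (1, 'duplicate')");
            } catch (SQLException e) {
                String state = e.getSQLState(); // null with current driver
                if (state != null && state.startsWith("23")) {
                    // X/Open class 23: integrity constraint violation;
                    // recover gracefully, e.g. update instead of insert.
                } else {
                    throw e; // not an error we know how to handle
                }
            } finally {
                stmt.close();
            }
        }
    }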

Regards,
René Pijlman




RE: [JDBC] Fastpath error on solaris 2.8 pgsql 7.1.3

2001-08-27 Thread chris markiewicz

I have exactly the same problem...it happens randomly, it seems.  maybe 5%
of the time.  also happens on selects (lo_read though).  the only advice
that i've seen on the topic is to make sure that autocommit is set to false,
which i've done, but i still see the problem.

unfortunately, my next attempt at a fix was going to be to upgrade to 7.1
(i'm currently on 7.0.3 on linux)...but i see that you already use 7.1...

sorry i can't help...

chris




Re: [JDBC] Re: [BUGS] Bug #428: Another security issue with the JDBC driver.

2001-08-27 Thread Barry Lind

No, this is not the way to do this.  Elsewhere, when the driver has 
different functionality/requirements for JDBC1 vs. JDBC2, this is 
implemented via subclassing (see the jdbc1 and jdbc2 packages).  That 
pattern should be followed here, not the kludgy fake ifdef support 
provided by configure.  Driver.java needs this, as it determines which 
Connection object is used, but from there on there shouldn't be any 
other uses of .in files.
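
As a rough illustration of that subclassing pattern (the package layout
matches the driver, but the loading code shown here is a sketch, not the
driver's actual code):

    // Shared, version-neutral code lives in org.postgresql.Connection;
    // org.postgresql.jdbc1.Connection and org.postgresql.jdbc2.Connection
    // each extend it. Driver.java, generated at build time, just picks
    // the edition's subclass by name.
    class EditionExample
    {
        static org.postgresql.Connection open(String edition)
            throws Exception
        {
            // edition is "jdbc1" or "jdbc2", fixed when Driver.java.in
            // is expanded by the build.
            String cls = "org.postgresql." + edition + ".Connection";
            return (org.postgresql.Connection)
                Class.forName(cls).newInstance();
        }
    }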

thanks,
--Barry

David Daney wrote:
 Sorry about that, things are never as easy as they seem.  The answer 
 appears to be to filter PG_Stream.java in a similar manner as is done to 
 Driver.java
 
 Attached please find two files.
 
 1) diffs for build.xml.
 
 2) PG_Stream.java.in
 
 I hope this can now be put to bed.
 
 David Daney.
 
 
 Bruce Momjian wrote:
 
Patch reversed.  Please advise how to continue.

Please pull this patch.  It breaks JDBC1 support.  The JDBC1 code no 
longer compiles, due to objects being referenced in this patch that do 
not exist in JDK1.1.

thanks,
--Barry


  [copy] Copying 1 file to /home/blind/temp/pgsql/src/interfaces/jdbc/org/postgresql
  [echo] Configured build for the JDBC1 edition driver

compile:
 [javac] Compiling 38 source files to /home/blind/temp/pgsql/src/interfaces/jdbc/build
 [javac] /home/blind/temp/pgsql/src/interfaces/jdbc/org/postgresql/PG_Stream.java:33: Interface org.postgresql.PrivilegedExceptionAction of nested class org.postgresql.PG_Stream.PrivilegedSocket not found.
 [javac]   implements PrivilegedExceptionAction
 [javac]              ^
 [javac] /home/blind/temp/pgsql/src/interfaces/jdbc/org/postgresql/PG_Stream.java:63: Undefined variable or class name: AccessController
 [javac]   connection = (Socket)AccessController.doPrivileged(ps);
 [javac]                        ^
 [javac] /home/blind/temp/pgsql/src/interfaces/jdbc/org/postgresql/PG_Stream.java:65: Class org.postgresql.PrivilegedActionException not found in type declaration.
 [javac]   catch(PrivilegedActionException pae){
 [javac]        ^
 [javac] 3 errors

BUILD FAILED



Bruce Momjian wrote:

Patch applied.  Thanks.


I am sorry to keep going back and forth on this, but:

The original patch is correct and does the proper thing.  I should have 
tested this before sounding the alarm.

AccessController.doPrivileged()

Propagates SecurityExceptions without wrapping them in a 
PrivilegedActionException, so it appears there is no possibility of a 
ClassCastException.

David Daney.


Bruce Momjian wrote:


OK, patch removed from queue.


It is now unclear to me whether the

catch(PrivilegedActionException pae)

part of the patch is correct.  If a SecurityException is thrown in 
Socket() (as might happen if the policy file did not give the proper 
permissions), then it might be converted into a ClassCastException, 
which is probably the wrong thing to do.

Perhaps I should look into this a bit further.

David Daney.


Bruce Momjian wrote:


Your patch has been added to the PostgreSQL unapplied patches list at:

http://candle.pha.pa.us/cgi-bin/pgpatches

I will try to apply it within the next 48 hours.


David Daney ([EMAIL PROTECTED]) reports a bug with a severity of 3
The lower the number the more severe it is.

Short Description
Another security issue with the JDBC driver.

Long Description
The JDBC driver requires

permission java.net.SocketPermission host:port, connect;

in the policy file of the application using the JDBC driver 
in the postgresql.jar file.  Since the Socket() call in the
driver is not protected by AccessController.doPrivileged() this
permission must also be granted to the entire application.

The attached diff fixes it so that the connect permission can be
restricted to just the postgresql.jar codeBase if desired.
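
For reference, such a grant in a Java policy file would look something
like this (the jar path and host:port below are placeholders):

    // java.policy fragment; adjust the codeBase URL and host:port
    grant codeBase "file:/usr/local/pgsql/share/java/postgresql.jar" {
        permission java.net.SocketPermission "dbhost:5432", "connect";
    };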

Sample Code
*** PG_Stream.java.orig	Fri Aug 24 09:27:40 2001
--- PG_Stream.java	Fri Aug 24 09:42:14 2001
***************
*** 5,10 ****
--- 5,11 ----
  import java.net.*;
  import java.util.*;
  import java.sql.*;
+ import java.security.*;
  import org.postgresql.*;
  import org.postgresql.core.*;
  import org.postgresql.util.*;
***************
*** 27,32 ****
--- 28,52 ----
     BytePoolDim1 bytePoolDim1 = new BytePoolDim1();
     BytePoolDim2 bytePoolDim2 = new BytePoolDim2();
  
+    private static class PrivilegedSocket
+       implements PrivilegedExceptionAction
+    {
+       private String host;
+       private int port;
+ 
+       PrivilegedSocket(String host, int port)
+       {
+          this.host = host;
+          this.port = port;
+       }
+ 
+       public Object run() throws Exception
+       {
+          return new Socket(host, port);
+       }
+    }
+ 
   /**
    * Constructor:  Connect to the PostgreSQL back end and return
    * a stream connection.
***************
*** 37,43 ****
    */
   public PG_Stream(String host, int port) throws IOException
   {
!    connection = new Socket(host, port);
  
     // Submitted by 

[JDBC] Re: Proposal to fix Statement.executeBatch()

2001-08-27 Thread Barry Lind

Rene,

I see your statements below as incorrect:

  The intended behaviour is to send a set of update/insert/delete/DDL
  statements in one round trip to the database. Unfortunately,
  this optional JDBC feature cannot be implemented correctly with
  PostgreSQL, since the backend only returns the update count of
  the last statement sent in one call with multiple statements.
  JDBC requires it to return an array with the update counts of
  all statements in the batch.

The intended behaviour is certainly to send all of the statements in one 
round trip, and the JDBC 2.1 spec certainly allows postgres to do just 
that.  Here is how I would suggest this be done in a way that is spec 
compliant (note: I haven't looked at the patch you submitted yet, so 
forgive me if you have already done it this way, but based on your 
comments in this email, my guess is that you have not).


Statements should be batched together in a single statement with 
semicolons separating the individual statements (this will allow the 
backend to process them all in one round trip).

The result array should return an element with the row count for each 
statement, however the value for all but the last statement will be 
'-2'.  (-2 is defined by the spec to mean the statement was processed 
successfully but the number of affected rows is unknown).

In the event of an error, the driver should return an array the size of 
the submitted batch with values of -3 for all elements.  -3 is defined 
by the spec to mean the corresponding statement failed to execute 
successfully, or could not be processed for some reason.  Since in 
postgres, when one statement fails (in non-autocommit mode), the entire 
transaction is aborted, this is consistent with a return value of -3 in 
my reading of the spec.
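
A minimal sketch of the proposal (illustrative only: it assumes the
batched SQL strings are collected in a Vector named batch, that this is
a Statement subclass, and that connection.ExecSQL() sends one query
string to the backend; -2 and -3 are written as literals):

    public int[] executeBatch() throws SQLException
    {
        // Join the batch into one semicolon-separated statement so the
        // backend processes it in a single round trip.
        StringBuffer sql = new StringBuffer();
        for (int i = 0; i < batch.size(); i++) {
            if (i > 0)
                sql.append("; ");
            sql.append((String) batch.elementAt(i));
        }
        int[] counts = new int[batch.size()];
        try {
            connection.ExecSQL(sql.toString(), this);  // one round trip
            for (int i = 0; i < counts.length - 1; i++)
                counts[i] = -2;                // processed, count unknown
            counts[counts.length - 1] = getUpdateCount(); // only real count
        } catch (SQLException e) {
            for (int i = 0; i < counts.length; i++)
                counts[i] = -3;                // batch failed
            throw new BatchUpdateException(e.getMessage(), counts);
        } finally {
            batch.removeAllElements();
        }
        return counts;
    }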

I believe this approach makes the most sense because:
1) It implements batches in one round trip (the intention of the feature)
2) It is compliant with the standard
3) It is compliant with the current functionality of the backend

thanks,
--Barry


Rene Pijlman wrote:
 I've finished the section on batch updates in the JDBC 2.0
 compliance documentation on
 http://lab.applinet.nl/postgresql-jdbc/ (see the quote of the
 relevant part below).
 
 In the short term I think two things need to be fixed:
 1) don't begin, commit or rollback a transaction implicitly in
 Statement.executeBatch()
 2) have executeBatch() throw a BatchUpdateException when it is
 required to do so by the JDBC spec
 
 If there are no objections from this list I intend to submit a
 patch that fixes 1), and perhaps also 2). 
 
 Note that this may cause backward compatibility issues with JDBC
 applications that have come to rely on the incorrect behaviour.
 OTOH, there have been complaints on this list before, and those
 people would certainly be happy about the fix. E.g.
 http://fts.postgresql.org/db/mw/msg.html?mid=83832
 
 In the long run it would be nice if the backend would support
 returning one update count (and perhaps an OID) per statement
 sent in a semicolon-separated multi-statement call. Would this
 be something for the backend TODO list? OTOH, I'm not sure if
 this (small?) performance improvement is worth the trouble.
 
 Batch updates
 
 The driver supports batch updates with the addBatch, clearBatch
 and executeBatch methods of Statement, PreparedStatement and
 CallableStatement. DatabaseMetaData.supportsBatchUpdates()
 returns true.
 
 However, executing statements in a batch does not provide a
 performance improvement with PostgreSQL, since all statements
 are internally sent to the backend and processed one-by-one.
 That defeats the purpose of the batch methods. The intended
 behaviour is to send a set of update/insert/delete/DDL
 statements in one round trip to the database. Unfortunately,
 this optional JDBC feature cannot be implemented correctly with
 PostgreSQL, since the backend only returns the update count of
 the last statement sent in one call with multiple statements.
 JDBC requires it to return an array with the update counts of
 all statements in the batch. Even though the batch processing
 feature currently provides no performance improvement, it should
 not be removed from the driver for reasons of backward
 compatibility.
 
 The current implementation of Statement.executeBatch() in
 PostgreSQL starts a new transaction and commits or aborts it.
 This is not in compliance with the JDBC specification, which
 does not mention transactions in the description of
 Statement.executeBatch() at all. The confusion is probably
 caused by a JDBC tutorial from Sun with example code which
 disables autocommit before calling executeBatch so that the
 transaction will not be automatically committed or rolled back
 when the method executeBatch is called. This comment in the
 tutorials appears to be a misunderstanding. A good reason to
 disable autocommit before calling executeUpdate() is to be able
 to commit or rollback all statements in a 

Re: [JDBC] Fastpath error on solaris 2.8 pgsql 7.1.3

2001-08-27 Thread Tom Lane

[EMAIL PROTECTED] writes:
 FastPath call returned ERROR:  lo_write: invalid large obj descriptor (0)

Usually this indicates that you didn't have the lo_open ... lo_write
... lo_close sequence wrapped in a transaction block (BEGIN/COMMIT
SQL commands).  Since it's erratic for you, I'd bet that some of your
application control paths have the BEGIN and some don't.
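
In JDBC that means turning off autocommit around the large object work,
for example (a minimal sketch: the table/column names and byte data are
placeholders, and the cast follows the driver's documented large object
API — with autocommit off, the driver brackets the fastpath lo_* calls
with one BEGIN/COMMIT):

    import java.sql.*;
    import org.postgresql.largeobject.*;

    Connection conn = DriverManager.getConnection(url, user, pass);
    conn.setAutoCommit(false);            // driver now issues BEGIN

    LargeObjectManager lom =
        ((org.postgresql.Connection) conn).getLargeObjectAPI();
    int oid = lom.create(LargeObjectManager.READ | LargeObjectManager.WRITE);
    LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
    byte[] data = { 1, 2, 3 };            // placeholder blob contents
    lo.write(data);                       // fastpath lo_write, inside txn
    lo.close();

    PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO images (name, data) VALUES (?, ?)");
    ps.setString(1, "photo");
    ps.setInt(2, oid);
    ps.executeUpdate();
    ps.close();

    conn.commit();                        // COMMIT ends the lo transaction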

regards, tom lane




Re: [JDBC] Re: Proposal to fix Statement.executeBatch()

2001-08-27 Thread Rene Pijlman

On Mon, 27 Aug 2001 11:07:55 -0700, you wrote:
[executeBatch() implemented as one round trip]
Here is how I would suggest this be done in a way that is spec 
compliant (note: I haven't looked at the patch you submitted yet, so 
forgive me if you have already done it this way, but based on your 
comments in this email, my guess is that you have not).

Indeed, I have not implemented this.

Statements should be batched together in a single statement with 
semicolons separating the individual statements (this will allow the 
backend to process them all in one round trip).

The result array should return an element with the row count for each 
statement, however the value for all but the last statement will be 
'-2'.  (-2 is defined by the spec to mean the statement was processed 
successfully but the number of affected rows is unknown).

Ah, I see. I hadn't thought of that solution.

In the event of an error, the driver should return an array the size of 
the submitted batch with values of -3 for all elements.  -3 is defined 
by the spec to mean the corresponding statement failed to execute 
successfully, or could not be processed for some reason.  Since in 
postgres, when one statement fails (in non-autocommit mode), the entire 
transaction is aborted, this is consistent with a return value of -3 in 
my reading of the spec.

Not quite. A statement in a batch may also fail because it's a
successful SELECT as far as the server is concerned (you can't
have SELECTs in a batch). But that situation can also be handled
correctly by setting the update count for that particular
statement to -3. It's then up to the application to decide if it
wants to roll back, I would say.

But what to do when an error occurs with autocommit enabled?
This is not recommended, but allowed by the spec, if I
understand it correctly.

What exactly is the behaviour of the backend in that scenario?
Does it commit every separate SQL statement in the
semicolon-separated list, or does it commit the list as a whole?
Does it abort processing the statement list when an error occurs
in one statement? And if it continues, does it return an error
when only one statement in the middle of the list had an error?

I believe this approach makes the most sense because:
1) It implements batches in one round trip (the intention of the feature)
2) It is compliant with the standard
3) It is compliant with the current functionality of the backend

If we can come up with an acceptable solution for an error with
autocommit enabled, I agree. Otherwise, I'm not sure.

However, it would mean a change in behaviour of the driver that
may break existing JDBC applications: the driver will no longer
return update counts for all statements in a batch like it
currently does, it will return unknown for most statements.
I'm not sure if the performance improvement justifies this
non-backwardly-compatible change, though I agree this is the
intention of the feature. What do you think?

Regards,
René Pijlman




[JDBC] Re: [PATCHES] Attempt to clean up ExecSql() in JDBC

2001-08-27 Thread Barry Lind

I looked at the patch and it looks fine.  As for what fqp and hfr stand 
for, I don't have a clue.  I was looking through the fe/be protocol 
documentation and think I might have a clue about what they are being 
used for.

thanks,
--Barry

Anders Bengtsson wrote:
 Hi,
 
 Attached is my attempt to clean up the horrors of the ExecSQL() method in
 the JDBC driver.
 
 I've done this by extracting it into a new method object called
 QueryExecutor (should go into org/postgresql/core/) and then taking it
 apart into different methods in that class.
 
 A short summary:
 
 * Extracted ExecSQL() from Connection into a method object called
   QueryExecutor.
 
 * Moved ReceiveFields() from Connection to QueryExecutor.
 
 * Extracted parts of the original ExecSQL() method body into smaller
   methods on QueryExecutor.
 
 * Bug fix: The instance variable pid in Connection was used in two
   places with different meaning. Both were probably in dead code, but it's
   fixed anyway.
 
 /Anders
 
 
 PS.: If anyone has any idea what the variable names fqp and hfr stand
 for, please tell me! :)
 
 _
 A n d e r s  B e n g t s s o n   [EMAIL PROTECTED]
 Stockholm, Sweden
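
For readers skimming the attached diff, the method-object shape being
described is roughly the following sketch; only the class name and
package follow the patch, while the fields and method names here are
illustrative:

    package org.postgresql.core;

    // Holds what used to be locals of Connection.ExecSQL() as fields,
    // so the big method body can be split into smaller named steps.
    public class QueryExecutor
    {
        private final String sql;
        private final java.sql.Statement statement;

        public QueryExecutor(String sql, java.sql.Statement statement)
        {
            this.sql = sql;
            this.statement = statement;
        }

        public java.sql.ResultSet execute() throws java.sql.SQLException
        {
            sendQuery();             // was inline in ExecSQL()
            return receiveResults(); // includes the old ReceiveFields()
        }

        private void sendQuery() throws java.sql.SQLException
        {
            // send 'Q' plus the query string to the backend
        }

        private java.sql.ResultSet receiveResults()
            throws java.sql.SQLException
        {
            // read response messages until the backend is done
            return null;
        }
    }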
 
 
 
 
 Index: src/interfaces/jdbc/org/postgresql/Connection.java
 ===================================================================
 RCS file: /home/projects/pgsql/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Connection.java,v
 retrieving revision 1.26
 diff -c -r1.26 Connection.java
 *** src/interfaces/jdbc/org/postgresql/Connection.java	2001/08/24 16:50:12	1.26
 --- src/interfaces/jdbc/org/postgresql/Connection.java	2001/08/26 18:33:48
 ***************
 *** 8,14 ****
   import org.postgresql.fastpath.*;
   import org.postgresql.largeobject.*;
   import org.postgresql.util.*;
 ! import org.postgresql.core.Encoding;
   
   /**
    * $Id: Connection.java,v 1.26 2001/08/24 16:50:12 momjian Exp $
 --- 8,14 ----
   import org.postgresql.fastpath.*;
   import org.postgresql.largeobject.*;
   import org.postgresql.util.*;
 ! import org.postgresql.core.*;
   
   /**
    * $Id: Connection.java,v 1.26 2001/08/24 16:50:12 momjian Exp $
 ***************
 *** 348,513 ****
    * @return a ResultSet holding the results
    * @exception SQLException if a database error occurs
    */
 ! public java.sql.ResultSet ExecSQL(String sql, java.sql.Statement stat) throws SQLException
   {
 !   // added Jan 30 2001 to correct maxrows per statement
 !   int maxrows=0;
 !   if(stat!=null)
 !     maxrows=stat.getMaxRows();
 !
 !   // added Oct 7 1998 to give us thread safety.
 !   synchronized(pg_stream) {
 !     // Deallocate all resources in the stream associated
 !     // with a previous request.
 !     // This will let the driver reuse byte arrays that have already
 !     // been allocated instead of allocating new ones in order
 !     // to gain performance improvements.
 !     // PM 17/01/01: Commented out due to race bug. See comments in
 !     // PG_Stream
 !     //pg_stream.deallocate();
 !
 !     Field[] fields = null;
 !     Vector tuples = new Vector();
 !     byte[] buf = null;
 !     int fqp = 0;
 !     boolean hfr = false;
 !     String recv_status = null, msg;
 !     int update_count = 1;
 !     int insert_oid = 0;
 !     SQLException final_error = null;
 !
 !     buf = encoding.encode(sql);
 !     try
 !     {
 !       pg_stream.SendChar('Q');
 !       pg_stream.Send(buf);
 !       pg_stream.SendChar(0);
 !       pg_stream.flush();
 !     } catch (IOException e) {
 !       throw new PSQLException("postgresql.con.ioerror", e);
 !     }
 !
 !     while (!hfr || fqp > 0)
 !     {
 !       Object tup = null;   // holds rows as they are received
 !
 !       int c = pg_stream.ReceiveChar();
 !
 !       switch (c)
 !       {
 !       case 'A':   // Asynchronous Notify
 !         pid = pg_stream.ReceiveInteger(4);
 !         msg = pg_stream.ReceiveString(encoding);
 !         break;
 !       case 'B':   // Binary Data Transfer
 !         if (fields == null)
 !           throw new PSQLException("postgresql.con.tuple");
 !         tup = pg_stream.ReceiveTuple(fields.length, true);
 !         // This implements Statement.setMaxRows()
 !         if (maxrows == 0 || tuples.size() < maxrows)
 !           tuples.addElement(tup);
 !         break;
 !       case 'C':   // Command Status
 !         recv_status = 

Re: [JDBC] Re: Proposal to fix Statement.executeBatch()

2001-08-27 Thread Barry Lind

  What exactly is the behaviour of the backend in that scenario?
  Does it commit every separate SQL statement in the
  semicolon-separated list, or does it commit the list as a whole?
  Does it abort processing the statement list when an error occurs
  in one statement? And if it continues, does it return an error
  when only one statement in the middle of the list had an error?

I do not know what the server does if you have autocommit enabled and 
you issue multiple statements in one try.  However, I would be OK with 
the driver issuing the statements one by one with autocommit on.  If you 
are running in this mode you just wouldn't get any performance improvement.

  However, it would mean a change in behaviour of the driver that
  may break existing JDBC applications: the driver will no longer
  return update counts for all statements in a batch like it
  currently does, it will return unknown for most statements.
  I'm not sure if the performance improvement justifies this
  non-backwardly-compatible change, though I agree this is the
  intention of the feature. What do you think?

I wouldn't worry about this 'change in behavior', because a 
JDBC-compliant caller should be coded to handle the new behavior, since 
it is compliant with the spec.

thanks,
--Barry




 


