On 25 November 2011 07:54, Mikko Tiihonen mikko.tiiho...@nitorcreations.com wrote:
<=BE ParameterStatus(binary_minor = 23)
FE=> Execute(SET binary_minor = 20)
Yeah this was almost exactly what I was thinking about how to retrofit
it, except it might be clearer to have, say,
On 24 November 2011 05:36, Tom Lane t...@sss.pgh.pa.us wrote:
Now it's possible we could do that without formally calling it a
protocol version change, but I don't care at all for the idea of coming
up with one-off hacks every time somebody decides that some feature is
important enough that
On 23 November 2011 10:47, Mikko Tiihonen mikko.tiiho...@nitorcreations.com wrote:
Here is a patch that adds a new flag to the protocol that is set when all
elements of the array are of the same fixed size.
When the bit is set, the 4-byte length is sent only once, not for each
element. Another
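(A rough sketch, in Python, of the wire-format saving being described; the framing below is illustrative only, not the actual patch:)

```python
import struct

def encode_elems_per_element(elems, elem_size):
    # Baseline v3 binary array format: each element carries its own
    # 4-byte (big-endian) length prefix.
    out = b""
    for e in elems:
        out += struct.pack("!i", elem_size) + e
    return out

def encode_elems_fixed(elems, elem_size):
    # Proposed: with the fixed-size flag set, the 4-byte length is sent
    # once up front, and the elements follow back to back.
    out = struct.pack("!i", elem_size)
    for e in elems:
        out += e
    return out

elems = [struct.pack("!i", n) for n in range(1000)]  # 1000 int4 values
baseline = encode_elems_per_element(elems, 4)
compact = encode_elems_fixed(elems, 4)
print(len(baseline), len(compact))  # 8000 4004
```

For fixed-width types like int4, the per-element length prefixes cost as many bytes as the payload itself, which is the saving the flag targets.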
Lukas Eder wrote:
The result set meta data correctly state that there are 6 OUT columns.
But only the first 2 are actually fetched (because of a nested UDT)...
The data mangling was just a plpgsql syntactic issue, wasn't it?
Oliver
--
Sent via pgsql-hackers mailing list
Florian Pflug wrote:
On Feb17, 2011, at 01:14 , Oliver Jowett wrote:
Any suggestions about how the JDBC driver can express the query to get
the behavior that it wants? Specifically, the driver wants to call a
particular function with N OUT or INOUT parameters (and maybe some other
IN parameters
On 17/02/11 23:18, rsmogura wrote:
Yes, but the driver checks the number of declared out parameters and the
number of resulting parameters (it even checks their types), to prevent
programming errors.
And..?
Oliver
On 18/02/11 00:37, rsmogura wrote:
On Fri, 18 Feb 2011 00:06:22 +1300, Oliver Jowett wrote:
On 17/02/11 23:18, rsmogura wrote:
Yes, but the driver checks the number of declared out parameters and the
number of resulting parameters (it even checks their types), to prevent
programming errors
On 18/02/11 00:52, rsmogura wrote:
On Fri, 18 Feb 2011 00:44:07 +1300, Oliver Jowett wrote:
On 18/02/11 00:37, rsmogura wrote:
On Fri, 18 Feb 2011 00:06:22 +1300, Oliver Jowett wrote:
On 17/02/11 23:18, rsmogura wrote:
Yes, but the driver checks the number of declared out parameters and the number
On 18/02/11 01:08, Florian Pflug wrote:
Well, the JDBC driver does know how many OUT parameters there are before
execution happens, so it could theoretically do something different for 1
OUT vs. many OUT parameters.
Right, I had forgotten that JDBC must be told about OUT parameter with
On 17/02/11 00:58, Robert Haas wrote:
On Wed, Feb 16, 2011 at 3:30 AM, Lukas Eder lukas.e...@gmail.com wrote:
I'm not trying to fix the signature. I want exactly that signature. I want
to return 1 UDT as an OUT parameter from a function.
Somewhere between JDBC and the database, this signature
On 17/02/11 01:10, Robert Haas wrote:
If you do SELECT function_with_one_out_parameter() rather than SELECT
* FROM function_with_one_out_parameter(), you'll get just one
argument. Does that help at all?
Unfortunately, not really, because it doesn't work for cases where
there's more than one
On 17/02/11 04:23, Tom Lane wrote:
Florian Pflug f...@phlo.org writes:
Hm, I've browsed through the code and it seems that the current behaviour
was implemented on purpose.
Yes, it's 100% intentional. The idea is to allow function authors to
use OUT-parameter notation (in particular, the
On 17/01/11 17:27, Robert Haas wrote:
On Wed, Jan 12, 2011 at 5:12 AM, rsmogura rsmog...@softperience.eu wrote:
Dear hackers :) Could you look at this thread from General.
---
I say the backend, if you have a one-row-type output result, treats it as the
full output result; it's really bad if you
Tom Lane wrote:
Dave Cramer [EMAIL PROTECTED] writes:
This is a server bug, I will post to hackers for you, it has little
to do with JDBC, however the ? can't be a column in a prepared statement
I cannot reproduce any problem using what I think is equivalent in libpq:
I thought we got
Tom Lane wrote:
NULL,/* let the backend deduce param type */
I think the JDBC driver will be passing the int4 OID for the param type
in this case.
Best thing is probably for the OP to run with loglevel=2 and see exactly
what's being sent, though.
-O
On Sun, 13 Nov 2005, Joost Kraaijeveld wrote:
I have a connection that is created with prepareThreshold=1 in the
connection string. I use a prepared statement that I fill with
addbatch() and that I execute with executeBatch() (for full source: see
application.java attachment).
LOG: statement:
Bruce Momjian wrote:
Simon's page is in the patches queue. What would you like changed,
exactly?
I'm not going to have time to comment on this any time soon, sorry :( ..
I guess I will try to look at it for 8.2.
-O
---(end of broadcast)---
TIP
Thomas Hallgren wrote:
PL/Java runs a JVM. Since a JVM is multi threaded, PL/Java goes to
fairly extreme measures to ensure that only one thread at a time can
access the backend. So far, this has worked well, but there is one small
problem. [...]
I assume this means you have a single lock
Bruce Momjian wrote:
We don't have a log_statement = verbose mode.
Please see my earlier email where I suggested adding one if you really
wanted all this protocol-level detail logged.
-O
Bruce Momjian wrote:
I think it is more verbose because no FETCH is logged in this type of
prepare/execute. The goal, I think, is for these types of queries to
look as similar to normal PREPARE/EXECUTE and DECLARE/FETCH as possible.
I do not understand why this is a useful thing to do as part
Bruce Momjian wrote:
Well, from the application writer's perspective, you are right that it
doesn't make sense,
This is exactly what the end user is going to say.
but this is only because jdbc is using prepare internally.
Isn't this mostly irrelevant to the result we want to see? It's a detail
of
Merlin Moncure wrote:
I've noticed that trying to parameterize a fetch statement via
ExecParams returns a syntax error:
fetch $1 from my_cursor;
This is not really a big deal, but maybe it should be documented which
statements can be parameterized and which can't
Currently the
Simon Riggs wrote:
Are we sure there are just 3 cases?
I haven't exhaustively checked, but I think those are the main cases.
Even if case (3) is not that common, I still want to know it is
occurring, to see what effect or overhead it has.
I don't want it to be more verbose than the other
James William Pye wrote:
The use case primarily applies to custom clients(non-libpq, atm) that
support multiple PQ versions that may be implemented in separate
modules/libraries. (Avoid loading client-2.0 code for a 3.0 connection,
and/or future versions.)
libpq automatically negotiates
Simon Riggs wrote:
Looking more closely, I don't think either is correct. Both can be reset
according to rewind operations - see DoPortalRewind().
We'd need to add another bool onto the Portal status data structure.
AFAIK this is only an issue with SCROLLABLE cursors, which v3 portals
Simon Riggs wrote:
Subsequent calls to the same portal are described as FETCHes rather than
as EXECUTEs. The portal name is still given and number of rows is
provided also.
I wonder if it might be better to only log the first Execute.. It's not
immediately clear to me that it's useful to see
8.1-beta1 produces some odd results with statement logging enabled when
the extended query protocol is used (e.g. when using the JDBC driver).
Repeatedly running a simple query with log_statement = 'all' produces this:
LOG: statement: PREPARE AS SELECT 'dummy statement'
LOG: statement: BIND
Sivakumar K wrote:
Do we have an API like mysql_ping to check whether the server is up and
running after the connection has been established?
At the protocol level, you could send Sync and wait for ReadyForQuery.
-O
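(The Sync/ReadyForQuery ping suggested above can be sketched in Python using the v3 message framing: a one-byte type code followed by an int32 length that includes itself. This is an illustrative framing helper, not driver code:)

```python
import struct

def sync_message():
    # v3 frontend Sync: type byte 'S', then an int32 length that
    # counts itself; Sync carries no body, so the length is 4.
    return b"S" + struct.pack("!i", 4)

def is_ready_for_query(msg_type, payload):
    # Backend ReadyForQuery: type 'Z' with one status byte:
    # 'I' idle, 'T' in a transaction, 'E' in a failed transaction.
    return msg_type == b"Z" and payload in (b"I", b"T", b"E")

assert sync_message() == b"S\x00\x00\x00\x04"
assert is_ready_for_query(b"Z", b"I")
```

A client that sends Sync and then sees ReadyForQuery knows the backend is alive and has drained any pending errors, which is exactly the "ping" being asked for.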
Tom Lane wrote:
So, the low-tech solution to these gripes seems to be:
* uncomment all the entries in postgresql.conf
* add comments to flag the values that can't be changed by SIGHUP
Can we agree on taking these measures?
Doesn't this still mean that a SIGHUP may give you a
Peter Eisentraut wrote:
Also, let's say I have apps now in 7.4/8.0, and I want them to be
forward-compatible. Should I make a type called E so that the E''
notation will work, and then use that for strings? What is the right
way to do it?
To be standards-conforming, don't use any backslash
Tom Lane wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
per my linux/socket.h:
/* Setsockoptions(2) level. Thanks to BSD these must match IPPROTO_xxx */
#define SOL_IP 0
/* #define SOL_ICMP 1 No-no-no! Due to Linux :-) we cannot use
SOL_ICMP=1 */
#define SOL_TCP
Andrew - Supernews wrote:
On 2005-07-31, Oliver Jowett [EMAIL PROTECTED] wrote:
I'm not worried about changing values; I think that representing the
option level as an IP protocol number, in an interface that
encompasses non-IP protocols, is a bad API design decision.
The interpretation
Larry Rosenman wrote:
I think Tom's fix to use IPPROTO_TCP will fix firefly.
Ah, I forgot about the "we'll just use IP protocol numbers as socket
option levels" behaviour (BSD-derived?). My Linux man page only talks
about SOL_TCP, but I have run into this before and should have
remembered.. my
Simon Riggs wrote:
I agree we *must* have the GUC, but we also *must* have a way for crash
recovery to tell us for certain that it has definitely worked, not just
maybe worked.
Doesn't the same argument apply to the existing fsync = off case? i.e.
we already have a case where we don't provide
Tom Lane wrote:
It appears that somebody has changed things so that the -L switches
appear after the -l switches (ie, too late). I'm too tired to
investigate now, but my money is on Autoconf 2.59 being the problem ...
Perhaps this:
Heikki Linnakangas wrote:
On Fri, 1 Jul 2005, Oliver Jowett wrote:
Heikki Linnakangas wrote:
branch id: Branch Identifier. Every RM involved in the global
transaction is given a *different* branch id.
Hm, I am confused then -- the XA spec definitely talks about enlisting
multiple RMs
Tom Lane wrote:
regression=# \h commit prepared
Command: COMMIT PREPARED
Description: commit a transaction that was earlier prepared for two-phase
commit
Syntax:
COMMIT PREPARED transaction_id
Ah, I was looking under '\h commit', '\h prepare' etc.
-O
Heikki Linnakangas wrote:
On Fri, 1 Jul 2005, Oliver Jowett wrote:
That implies it's valid (in fact, normal!) to enlist many different RMs
in the same transaction branch. Am I interpreting that correctly?
I see. No, I don't think that's the correct interpretation, though now
that you
Dave Cramer wrote:
Do the transaction id's used in 2PC need to be unique across all sessions?
They are global IDs, yes.
Do we provide a mechanism for this ?
If not shouldn't we provide a way to create a unique transaction id ?
Well, in XA the XIDs are assigned by the TM, the individual
Dave Cramer wrote:
I'm thinking of the situation where one transaction occurs on more than
one backend, and there is
more than one transaction manager.
XA XIDs are *global* IDs, i.e. they are unique even with more than one
TM involved. It's the responsibility of the TM to generate a
Oliver Jowett wrote:
If you have two different databases involved in the same global
transaction, then yes, the two backends could be told to use the same
global XID. That's normal. (they don't *have* to be given the same XID
as they could be participating in two independent branches
Tom Lane wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
Can we make the GID-to-internal-xid mapping for prepared transactions
1:N rather than the current 1:1?
No.
Ok, so how do we get XA working when a single global transaction
involves two databases on the same cluster?
The scenario
Tom Lane wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
Ok, so how do we get XA working when a single global transaction
involves two databases on the same cluster?
It's the TM's responsibility to deal with that. I would expect it to
hand out transaction IDs that consist of a common
Oliver Jowett wrote:
Tom Lane wrote:
It's the TM's responsibility to deal with that. I would expect it to
hand out transaction IDs that consist of a common prefix and a
per-database suffix, if it does not know which resources it's dealing
with might share a common GID namespace.
I don't know
Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
But it still requires me to send some data (such as a dummy query) to
the backend before it exits. This is because server side libpq blocks
when reading and ignores signals at this time. I believe the fix for
this would be to pass a flag
Bruce Momjian wrote:
I have received very few replies to my suggestion that we implement E''
for escaped strings, so eventually, after a few major releases, we can
have '' treat backslashes literally like the SQL standard requires.
Just checking: with this plan, a client needs to know what
Christopher Kings-Lynne wrote:
What would be absolutely ideal is a reset connection command, plus some
way of knowing via the protocol if it's needed or not.
And a way of notifying the client that a reset has happened.
-O
Alon Goldshuv wrote:
I think that the basic issue is that there are some database users that would
like to take their data and put it into the database without pre-processing
it [...]
The only responsibility of these users is to explicitly escape any delimiter
or 0x0A (LF) characters that
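(A sketch of the kind of pre-escaping a loader has to apply for COPY's text format: backslash first, then the delimiter and line-ending bytes. The helper name is hypothetical:)

```python
def escape_copy_text(field, delimiter="\t"):
    # Escape the backslash itself before anything else, then the
    # delimiter, newline (0x0A), and carriage return, matching what
    # COPY's text format expects from client-supplied data.
    return (field.replace("\\", "\\\\")
                 .replace(delimiter, "\\" + delimiter)
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))

print(escape_copy_text("a\tb\nc"))
```

Ordering matters: escaping the backslash last would double-escape the backslashes just introduced for the delimiter and LF.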
Luke Lonergan wrote:
I propose an extended syntax to COPY with a change in semantics to remove
the default of WITH ESCAPE '\'.
Er, doesn't this break existing database dumps?
-O
Tom Lane wrote:
On the other hand, it seems to me a client-side SO_KEEPALIVE would only
be interesting for completely passive clients (perhaps one that sits
waiting for NOTIFY messages?) A normal client will try to issue some
kind of database command once in awhile, and as soon as that
Tom Lane wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
Peter Eisentraut wrote:
That would cripple a system that many users are perfectly content with now.
Well, I wasn't thinking of using a 7-bit encoding always, just as a
replacement for the cases where we currently choose SQL_ASCII
Tom Lane wrote:
We should wait and see what field experience is like with
that, rather than insisting on anything as anal-retentive as disallowing
8-bit data in SQL_ASCII.
I didn't suggest changing the behaviour of SQL_ASCII..
-O
Peter Eisentraut wrote:
On Thursday, 12 May 2005 04:42, Oliver Jowett wrote:
I suppose that we can't change the semantics of SQL_ASCII without
backwards compatibility problems. I wonder if introducing a new encoding
that only allows 7-bit ascii, and making that the default, is the way to
go
Peter Eisentraut wrote:
On Thursday, 12 May 2005 14:57, Oliver Jowett wrote:
My 8.0.0 (what I happen to have on hand) initdb creates a SQL_ASCII
cluster by default unless I specify -E.
Then you use the locale C. We could create a 7-bit encoding and map it to
locale C, I suppose.
Ok
The SQL_ASCII-breaks-JDBC issue just came up yet again on the JDBC list,
and I'm wondering if we can do something better on the server side to
help solve it.
The problem is that people have SQL_ASCII databases with non-7-bit data
in them under some encoding known only to a (non-JDBC) application.
Madison Kelly wrote:
Is there a way to store the name in raw binary?
Yes: bytea.
-O
Dave Held wrote:
So it seems that a possible solution to that problem is to
have a separate connection for keepalive packets that doesn't
block and doesn't interfere with normal client/server
communication.
What does this do that TCP keepalives don't? (other than add extra
connection
Tom Lane wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm not convinced that Postgres ought to provide
a way to second-guess the TCP stack ...
Would you be ok with a patch that allowed configuration of the
TCP_KEEPCNT / TCP_KEEPIDLE / TCP_KEEPINTVL socket options on backend
Neil Conway wrote:
Is there a way to change the
socket timeout for some subset of the processes on the machine without
hacking the client or server source?
The only ways I can see of tuning the TCP idle parameters on Linux are
globally via sysfs, or per-socket via setsockopt().
You could
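(The per-socket setsockopt route can be sketched in Python; TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific, so the sketch guards for their absence. The tuning function and its default values are illustrative:)

```python
import socket

def tune_keepalive(sock, idle=60, interval=10, count=5):
    # Enable keepalives, then tighten the Linux-specific probe timers:
    # idle seconds before the first probe, seconds between probes, and
    # the number of failed probes before the connection is dropped.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    for name, value in (("TCP_KEEPIDLE", idle),
                        ("TCP_KEEPINTVL", interval),
                        ("TCP_KEEPCNT", count)):
        opt = getattr(socket, name, None)  # not present on all platforms
        if opt is not None:
            sock.setsockopt(socket.IPPROTO_TCP, opt, value)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tune_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

With these values a dead peer is detected after roughly idle + interval * count seconds, instead of the system-wide default of two hours or more.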
Peter Eisentraut wrote:
Neil Conway wrote:
The specific scenario this feature is intended to resolve is
idle-in-transaction backends holding on to resources while the
network connection times out;
I was under the impression that the specific scenario is
busy-in-transaction backends
Tom Lane wrote:
Wouldn't it be reasonable to expect the cluster liveness machinery to
notify the database server's kernel that connections to A are now dead?
No, because it's a node-level liveness test, not a machine-level
liveness. It's possible that all that happened is the node's VM
Tom Lane wrote:
Wouldn't it be reasonable to expect the cluster liveness machinery to
notify the database server's kernel that connections to A are now dead?
I find it really unconvincing to suppose that the above problem should
be solved at the database level.
Actually, if you were to
Chuck McDevitt wrote:
Why not just use SO_KEEPALIVE on the TCP socket?
We already do, but the default keepalive interval makes it next to useless.
-O
Neil Conway wrote:
[EMAIL PROTECTED] wrote:
statement_timeout is not a solution if many processes are
waiting the resource.
Why not?
I think the only problem with using statement_timeout for this purpose
is that the client connection might die during a long-running
transaction at a point when
Tom Lane wrote:
I'm not convinced that Postgres ought to provide
a way to second-guess the TCP stack ... this looks to me like I can't
convince the network software people to provide me an easy way to
override their decisions, so I'll beat up on the database people to
override 'em instead.
Simon Riggs wrote:
I assume this replaces the current logging on Parse to avoid duplicate
logging?
Well, I'm open to discussion, but that isn't what the patch does.
I guess I'll wait for your patch and take a look rather than try to
guess about what it does, then.
My thinking was to add
Tom Lane wrote:
Yeah? Cool. Does John's proposed patch do it correctly?
http://candle.pha.pa.us/mhonarc/patches2/msg00076.html
Some comments on that patch:
Doesn't pg_utf2wchar_with_len need changes for the longer sequences?
UtfToLocal also appears to need changes.
If we support
Simon Riggs wrote:
I've got a patch to submit that logs the EXEC phase, so you get just the
SQL, not the parameters. [...]
I assume this replaces the current logging on Parse to avoid duplicate
logging?
What happens on syntax errors? It's useful to log the statement that
failed, but you will
Simon Riggs wrote:
OK, that's what I hoped you'd say. With a prepared query all of the
statements execute the same plan, so you don't need to know the exact
parameters.
This isn't true in 8.0 if you are using the unnamed statement (as the
JDBC driver does in some cases): the plan chosen
Neil Conway wrote:
Christopher Kings-Lynne wrote:
I think he has a really excellent point. It should log the parameters
as well.
neilc=# prepare foo(int, int) as select $1 + $2;
PREPARE
neilc=# execute foo(5, 10);
...
neilc=# execute foo(15, 20);
...
% tail
Greg Stark wrote:
Palle Girgensohn [EMAIL PROTECTED] writes:
When setting log_statement = all, and using JDBC PreparedStatements, I get $n
in the log where the real arguments used to be in previous versions of
postgresql:
You might want to look into JDBC options to disable use of prepared
Tom Lane wrote:
You could make a
good case that we just ought to save query text and start from there in
any replanning; it'd be the most compact representation, the easiest to
copy around, and the least likely to break.
What happens if (for example) DateStyle changes between the two parses?
(not
Neil Conway wrote:
- it is the responsibility of the call site managing the prepared plan
to check whether a previously prepared plan is invalid or not -- and to
take the necessary steps to replan it when needed.
Does this mean that clients that use PREPARE/Parse need to handle plan
invalidated
Francisco Figueiredo Jr. wrote:
After some testing, I could send an Execute message with 2 as the max
number of rows. After the second execute I get the following:
portal does not exist
Severity: ERROR
Code: 34000
I noticed that I could only get it working if I explicitly create a
transaction.
I
Karel Zak wrote:
Yes, I think we should fix it and remove UNICODE and WIN encoding names
from PG code.
The JDBC driver asks for a UNICODE client encoding before it knows the
server version it is talking to. How do you avoid breaking this?
-O
Karel Zak wrote:
On Sat, 2005-02-19 at 00:27 +1300, Oliver Jowett wrote:
Karel Zak wrote:
Yes, I think we should fix it and remove UNICODE and WIN encoding names
from PG code.
The JDBC driver asks for a UNICODE client encoding before it knows the
server version it is talking to. How do you avoid
Evgeny Rodichev wrote:
Write cache has been enabled under Linux by default for as long as I
have dealt with it (since 1993).
It doesn't interfere with fsync(), as the Linux kernel uses a cache
flush for fsync.
The problem is that most IDE drives lie (or perhaps you could say the
specification is ambiguous)
Greg Stark wrote:
Oliver Jowett [EMAIL PROTECTED] writes:
So Linux is indeed doing a cache flush on fsync
Actually I think the root of the problem was precisely that Linux does not
issue any sort of cache flush commands to drives on fsync. There was some talk
on linux-kernel of how
Richard Huxton wrote:
Oliver Jowett wrote:
I'm currently trying to find a clean way to deal with network-dead
clients that are in a transaction and holding locks etc.
Have you come across the pgpool connection-pooling project?
http://pgpool.projects.postgresql.org/
I've looked at it, haven't
I'm currently trying to find a clean way to deal with network-dead
clients that are in a transaction and holding locks etc.
The normal client closes socket case works fine. The scenario I'm
worried about is when the client machine falls off the network entirely
for some reason (ethernet
Marc G. Fournier wrote:
On Sat, 5 Feb 2005, Matthew T. O'Connor wrote:
Well I'm positive I submitted all my pg_autovacuum patches to the
patches list, however searching the archives for autovacuum I can't
find anything that old. How far back to the searchable archives go?
back to 96 or so ...
Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
How is what you're suggesting more portable?
Well, the driver would be free to implement $sth-last_insert_id() using
whatever proprietary extensions it has available. The non-portableness would
at least be
Alvaro Herrera wrote:
On Wed, Jan 26, 2005 at 05:10:09PM -0500, Tom Lane wrote:
I don't think we have a lot of choices: we have to destroy (or at least
mark FAILED) all such cursors for the time being.
I don't see a lot of difference between marking the portal FAILED and
destroying it (maybe I'm
Oliver Jowett wrote:
Having the close fail
because of an intervening savepoint rollback isn't great -- the error
will cause an unexpected failure of the current transaction.
Never mind -- I just reread the protocol docs, and it's safe to close a
nonexistent portal. Did this previously issue
(cc'ing -hackers)
Karel Zak wrote:
I think the command status is common and nice feedback for the client. I
think it's simpler to change something in JDBC than to change a protocol
that is shared between more tools.
There is a bit of a queue of changes that would be nice to have but
require a protocol
8.0.0rc1 builds and passes 'make check' on Gentoo Linux (amd64) with the
dependencies I have to hand (no tcl or kerberos):
$ ./configure --prefix=/home/oliver/pg/8.0.0rc1 --with-pgport=5800
--enable-thread-safety --with-perl --with-python --with-pam --with-openssl
$ uname -a
Linux extrashiny
Tom Lane wrote:
strk [EMAIL PROTECTED] writes:
==15489== Syscall param write(buf) contains uninitialised or unaddressable byte(s)
Valgrind is fairly useless for debugging postgres, because it doesn't
know the difference between alignment-pad bytes in a struct and real
data. What you've got here
Bruce Momjian wrote:
[... SIGPIPE suppression in libpq ...]
Linux also has MSG_NOSIGNAL as a send() flag that might be useful. It
suppresses generation of SIGPIPE for just that call. No, it doesn't work
for SSL and it's probably not very portable, but it might be a good
platform-specific
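(A minimal sketch of the per-call SIGPIPE suppression described above; MSG_NOSIGNAL is Linux-specific, so the sketch falls back to 0 where the flag does not exist:)

```python
import socket

# MSG_NOSIGNAL suppresses SIGPIPE generation for this one send() call
# (Linux-specific; fall back to 0, i.e. no flag, elsewhere).
NOSIGPIPE = getattr(socket, "MSG_NOSIGNAL", 0)

a, b = socket.socketpair()
sent = a.send(b"ping", NOSIGPIPE)
print(sent)  # 4
a.close(); b.close()
```

When the peer has gone away, a send with this flag fails with EPIPE instead of delivering SIGPIPE, which is the behaviour a library like libpq wants without touching the process-wide signal disposition.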
Tom Lane wrote:
If the C library does support queued signals then we will read the
existing SIGPIPE condition and leave our own signal in the queue. This
is no problem to the extent that one pending SIGPIPE looks just like
another --- does anyone know of platforms where there is additional info
Barry Lind wrote:
I also have the test case (in java) down to the bare minimum that
generated the following output (that test case is attached). (Note that
if the FETCH in the test case is not executed then the backend crashes;
with the FETCH you get an error: ERROR: unrecognized node type: 0)
I
Oliver Jowett wrote:
Perhaps PerformCursorOpen should copy the query tree before planning, or
plan in a different memory context?
Patch attached. It moves query planning inside the new portal's memory
context. With this applied I can run Barry's testcase without errors,
and valgrind seems OK
Seen in passing when running valgrind against a CVS HEAD build:
==28598== Syscall param write(buf) contains uninitialised or unaddressable
byte(s)
==28598==at 0x1BABC558: write (in /lib/libc-2.3.4.so)
==28598==by 0x1BA7165D: (within /lib/libc-2.3.4.so)
==28598==by 0x1BA715FE:
Rod Taylor wrote:
Our local admin tried compiling a 64bit PostgreSQL on Solaris 9 using
the below environment:
export
PATH=:/usr/bin/sparcv9:/usr/ccs/bin/sparcv9:/usr/sfw/bin/sparcv9:/usr/local/bin/sparcv9:/usr/bin:/usr/local/bin:/usr/sfw/bin:/usr/ccs/bin
export
Barry Lind wrote:
Environment #1: WinXP 8.0beta4 server, 8.0jdbc client
2004-11-19 12:19:06 ERROR: unrecognized node type: 25344832
Environment #2: Sun Solaris 7.4.3 server, 8.0jdbc client
ERROR: no value found for parameter 1
From memory the 7.4.3 behaviour you see can happen if you DECLARE
Greg Stark wrote:
What purpose is there to returning both columns to the outer query? The
columns become effectively inaccessible. There's no syntax for disambiguating
any reference.
I think postgres should treat the second alias as hiding the first. Currently
there's no way to selectively
Tom Lane wrote:
It's really a
performance issue: do you want to pay the penalty associated with
reassembling messages that exceed the loopback MTU [...]
BTW, the loopback MTU here is quite large:
[EMAIL PROTECTED]:~$ /sbin/ifconfig lo | grep MTU
UP LOOPBACK RUNNING MTU:16436 Metric:1
Tom Lane wrote:
AFAICS the only nondestructive way to do this is to cvs delete and cvs
add, with a commit comment saying where the files were moved from. Then
when you are looking at them in CVS, you'd have to navigate over to the
previous location (by hand, probably; the commit comment isn't
Heikki Linnakangas wrote:
The Linux fsync man page says:
It does not necessarily ensure that the entry in the directory
containing the file has also reached disk. For that an explicit fsync on
the file descriptor of the directory is also needed.
AFAIK, we don't care about it at the moment. The
Tom Lane wrote:
I wrote:
Yeah. The intent of the protocol design was that the recipient could
skip over the correct number of bytes even if it didn't have room to
buffer them, but the memory allocation mechanism in the backend makes
it difficult to actually do that. Now that we have PG_TRY,
(Tom: this is not as severe a problem as I first thought)
If a client sends a V3 message that is sufficiently large to cause a
memory allocation failure on the backend when allocating space to read
the message, the backend gets out of sync with the protocol stream.
For example, sending this:
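(The quoted example is cut off above; as a rough Python sketch of the skip-rather-than-buffer idea the earlier snippet describes, with a hypothetical framing helper:)

```python
import io
import struct

def read_message(stream, max_buffer=1024):
    # v3 framing: one type byte, then an int32 length that counts itself.
    # If the body is too large to buffer, consume and discard it anyway,
    # so the stream stays aligned with message boundaries.
    msg_type = stream.read(1)
    (length,) = struct.unpack("!i", stream.read(4))
    body_len = length - 4
    if body_len > max_buffer:
        while body_len > 0:
            chunk = stream.read(min(body_len, 4096))
            body_len -= len(chunk)
        return msg_type, None  # body skipped, position still correct
    return msg_type, stream.read(body_len)

big = b"Q" + struct.pack("!i", 4 + 2000) + b"x" * 2000
small = b"S" + struct.pack("!i", 4)
stream = io.BytesIO(big + small)
print(read_message(stream, max_buffer=1024))  # oversized body is skipped
print(read_message(stream, max_buffer=1024))  # next message still parses
```

Because the recipient honours the declared length even when it cannot allocate a buffer, an oversized message does not leave the parser reading mid-body bytes as a new message header.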