Re: [HACKERS] Client deadlocks when connecting via ssl
On Sun, Jan 25, 2015 at 11:13 PM, Heikki Linnakangas <hlinnakan...@vmware.com> wrote:
> I couldn't come up with any good solutions. I would suggest speaking to the OpenSSL folk. They must be aware of the issue, so ask what they recommend, whether they have any plans to fix it, and whether there's anything you can do to help them.

Thanks. We'll give that a shot.
Re: [HACKERS] Client deadlocks when connecting via ssl
Hello,

I wanted to follow up on a bug report [1] from October. In a reply, Heikki conceded that there does seem to be an issue with how Postgres interacts with OpenSSL, but that the nature of the API makes it difficult to fix.

I'm with Heroku Postgres, and though the issue is not Heroku-specific, it does seem to affect a lot of our customers, especially ones using the Python Gunicorn web server. One particular account of this is written up in a blog post [2]. It's a frustrating issue that's rather hard to diagnose.

Is there anything we can do to help resolve this? Are there any recommended workarounds?

Thanks,
Maciek

[1]: http://www.postgresql.org/message-id/5436991b.5020...@vmware.com
[2]: http://blog.maestropublishing.com/2015/01/16/gunicorn-dyno-death-spiral-on-heroku-part-ii/
[HACKERS] [PATCH] Doc fix for VACUUM FREEZE
VACUUM FREEZE sets both vacuum_freeze_min_age and vacuum_freeze_table_age to 0, but only the former is documented. This patch notes that the other setting is also affected.
Re: [HACKERS] [PATCH] Doc fix for VACUUM FREEZE
(now with patch--sorry about that)

diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index eb71581..5f03343 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -100,8 +100,9 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER"
      <para>
       Selects aggressive <quote>freezing</quote> of tuples.
       Specifying <literal>FREEZE</literal> is equivalent to performing
-      <command>VACUUM</command> with the
-      <xref linkend="guc-vacuum-freeze-min-age"> parameter
+      <command>VACUUM</command> with both
+      <xref linkend="guc-vacuum-freeze-table-age"> and
+      <xref linkend="guc-vacuum-freeze-min-age">
       set to zero.
      </para>
     </listitem>

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] [PATCH] bgworker doc typo fixes
Hi,

I noticed a couple of missing words (at least to my reading) in the bgworker docs. This changes "...is started including the module name..." to "...is started by including the module name..." and "...to obtain information the status of the worker." to "...to obtain information regarding the status of the worker."

Thanks,
Maciek

bgworker-doc-typo-fixes.patch
[HACKERS] Protocol forced to V2 in low-memory conditions?
One of our customers seems to be running into exactly the issue hypothesized by Tom here: http://www.postgresql.org/message-id/8040.1314403...@sss.pgh.pa.us

That is, the server is in low-memory conditions, and the client occasionally issues an error saying it can't complete a certain action due to the protocol version:

  PG::UnableToSend: function requires at least protocol version 3.0: SELECT ...

The server is 9.2.4, and the client is the Ruby pg gem using a 9.2.4 libpq. Was the possibility of an inadvertent protocol downgrade addressed as part of that patch? I read through the thread, but it wasn't entirely clear.

Thanks,
Maciek
Re: [HACKERS] Fwd: [Still Failing] pg-quilter/postgres#111 (master - 82b0102)
I apologize. I guess the Travis CI integration is a little better than I expected. I'm traveling but will turn off notifications as soon as I have a chance to. Per Peter's comment there is more info in the GitHub project, and I welcome any feedback. I'll follow up with more once this is a little more mature.
Re: [HACKERS] [GENERAL] Floating point error
On Tue, Mar 5, 2013 at 12:03 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
> I don't think that it is about looking nice. C doesn't promise you more than FLT_DIG or DBL_DIG digits of precision, so PostgreSQL cannot either. If you allow more, that would mean that if you store the same number on different platforms and query it, it might come out differently. Among other things, that would be a problem for the regression tests.

Thank you: I think this is what I was missing, and what wasn't clear from the proposed doc patch. But then how can pg_dump assume that it's always safe to set extra_float_digits = 3? Why the discrepancy between the default behavior and what pg_dump gets? It can't know whether the dump is to be restored into the same system or a different one (and AFAICT, there's not even an option to tweak extra_float_digits there).
Re: [HACKERS] [GENERAL] Floating point error
On Tue, Mar 5, 2013 at 10:23 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Why the discrepancy between default behavior and what pg_dump gets?
>
> Basically, the default behavior is tuned to the expectations of people who think that what they put in is what they should get back, ie we don't want the system doing this by default:
>
> regression=# set extra_float_digits = 3;
> SET
> regression=# select 0.1::float4;
>    float4
> -------------
>  0.100000001
> (1 row)
>
> regression=# select 0.1::float8;
>        float8
> ---------------------
>  0.10000000000000001
> (1 row)
>
> We would get a whole lot more bug reports, not fewer, if that were the default behavior.

Isn't this a client rendering issue, rather than an on-the-wire encoding issue?
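(For illustration, a standalone sketch in Python rather than psql: the "garbage" digits Tom shows fall directly out of IEEE 754 single precision, where the stored value nearest to 0.1 is not 0.1 itself, and nine significant digits are exactly enough to expose and round-trip it.)

```python
import struct

def float4(x):
    # Round-trip through a 32-bit IEEE 754 float, mimicking how the
    # server stores a float4 value.
    return struct.unpack('!f', struct.pack('!f', x))[0]

stored = float4(0.1)
# FLT_DIG = 6 significant digits: the default output looks clean...
print('%.6g' % stored)            # 0.1
# ...but with extra_float_digits = 3 (6 + 3 = 9 digits) the trailing
# digits of the actual stored value become visible:
print('%.9g' % stored)            # 0.100000001
# and those nine digits are what lets the stored value round-trip:
assert float4(float('%.9g' % stored)) == stored
```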
Re: [HACKERS] [GENERAL] Floating point error
On Sun, Mar 3, 2013 at 9:14 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> The real difficulty is that there may be more than one storable value that corresponds to 1.23456 to six decimal digits. To be certain that we can reproduce the stored value uniquely, we have to err in the other direction, and print *more* decimal digits than the underlying precision justifies, rather than a bit less. Some of those digits are going to look like garbage to the naked eye.

I think part of the difficulty here is that psql (if I understand this correctly) conflates the wire-format text representations with what should be displayed to the user. E.g., a different driver might parse the wire representation into a native representation, and then format that native representation when it is to be displayed.

> That's what the JDBC driver does, so it doesn't care about how the wire format actually looks. pg_dump cares about reproducing values exactly, and not about whether things are nice-looking, so it cranks up extra_float_digits. The JDBC driver might be justified in doing likewise, to ensure that the identical binary float value is stored on both client and server --- but that isn't even a valid goal unless you assume that the server's float implementation is the same as Java's, which is a bit of a leap of faith, even if IEEE 754 is nigh universal these days.

I would hope that any driver cares about reproducing values exactly (or at least as exactly as the semantics of the client and server representations of the data type allow). Once you start talking operations, sure, things get a lot more complicated and you're better off not relying on any particular semantics. But IEEE 754 unambiguously defines certain bit patterns to correspond to certain values, no? If both client and server talk IEEE 754 floating point, it should be possible to round-trip values with no fuss and end up with the same bits you started with (and as far as I can tell, it is, as long as extra_float_digits is set to the max), even if the implementations of actual operations on these numbers behave very differently on client and server. I think given that many ORMs can cause UPDATEs on tuple fields that have not changed as part of saving an object, stable round trips seem like a desirable feature.

> We could have dumbed it down to a boolean "look nice" versus "reproduce the value exactly" switch, but it seemed like there might be applications that could use some additional flexibility. In any case, it's not Postgres' fault that there is an issue here; it's fundamental to the use of binary rather than decimal stored values.

It seems like getting things to look nice should be the client's job, no? Why does that factor into wire protocol data representations (and yes, I know part of the answer here--presumably literals are intimately tied to the same code paths, so it's not quite so simple)?

Going back to the documentation patch, what should the advice be? How about something along these lines:

  Due to the nature of floating point numeric values, a faithful textual
  representation of a <type>real</type> or <type>double precision</type>
  value requires some decimal digits that are generally insignificant,
  impairing readability of common values. Because of this, Postgres
  supports a limited output precision for floating point numbers by
  default. In order to preserve floating point values more exactly, you
  can use the <xref linkend="guc-extra-float-digits"> setting to adjust
  this.

Is that reasonable? It still feels like extra_float_digits should be opt-out rather than opt-in (leaving any formatting issues to clients), but this could be a start. It doesn't address non-IEEE 754 platforms, but the note in the other proposed patch is so high-level as to just be hand-waving.
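(To make the round-trip claim concrete, a standalone Python sketch rather than driver code: for IEEE 754 doubles, 17 significant decimal digits are always enough to recover the exact stored bits, while the default 15, DBL_DIG, is not.)

```python
import math
import random
import struct

def to_text(x, digits):
    """Format a float with the given number of significant digits,
    roughly what the server does with DBL_DIG + extra_float_digits."""
    return '%.*g' % (digits, x)

# A value whose 15-digit rendering does not round-trip:
x = 0.1 + 0.2                      # stored as 0.30000000000000004
assert float(to_text(x, 15)) != x  # '0.3' reads back as a different double
assert float(to_text(x, 17)) == x  # 17 digits recovers the exact bits

# Spot-check the 17-digit guarantee on random double bit patterns:
random.seed(1)
for _ in range(10000):
    bits = random.getrandbits(64)
    y = struct.unpack('!d', struct.pack('!Q', bits))[0]
    if math.isfinite(y):
        assert float(to_text(y, 17)) == y
```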
Re: [HACKERS] [GENERAL] Floating point error
While having more docs around extra_float_digits is a great idea, I don't think the patch really clarifies much. (Disclaimer: I honestly have only a vague idea of the reasoning behind extra_float_digits existing in the first place, but perhaps that means I'm a good target audience for the doc patch. Also, I apologize if anything below sounds abrasive--having worked on both my own driver and a few things here and there on the JDBC one, plus running into this in assorted places in the wild, I still find extra_float_digits baffling at best. I immensely appreciate the effort to make improvements here.)

That is, the patch explains some of the reasoning behind the setting, but it doesn't really help the reader identify where setting this is useful and/or necessary. E.g., the JDBC driver just indiscriminately sets extra_float_digits to 3 (the maximum allowable) if the server supports it. Is this a bogus approach?

> + The (inherently inaccurate) textual representation of a <type>real</type>
> + or <type>double precision</type> value...

Isn't full fidelity possible assuming sensible rounding semantics and enough characters of precision? Isn't that the point of extra_float_digits?

> + will by default contain only as many
> + significant digits as can be represented without losing precision
> + on any platform supported by PostgreSQL.

How would providing more digits lose precision? "Platform" as in a non-IEEE 754 server restoring a dump? A non-IEEE 754 client running queries? Something more subtle? And how does having more precision in the serialized value cause these to lose precision when decoding it?

I think extra_float_digits is an awkward, surprising implementation detail--at first glance it seems like a "flip this switch for correct behavior" knob. I'd love to have a clear explanation in the docs of why it's needed, and when you should and should not set it (e.g., is the JDBC driver wrong to set it all the time? should every driver set it all the time?).
Thanks,
Maciek
Re: [HACKERS] Runtime SHAREDIR for testing CREATE EXTENSION
On Fri, Feb 24, 2012 at 3:31 PM, Daniel Farina <dan...@heroku.com> wrote:
> Having been in this position once before in a different but similar situation, there's one big caveat: initdb is *really* slow, so it is really painful for people who write Postgres-linked code that is compiled separately, whereby the initdb binary is stable. The frustration is not unlike when I have to run ./configure, except I had to do it all the time, every time. A ccache-like solution has worked well for at least one person I know, and wasn't hard to implement(?) for personal use.

Being the person in question: I don't have the full context so I'm not sure how useful this will be, but here's hoping it's not just line noise. I'm probably not at liberty to share the original code, but it was a fairly trivial shell script that did more or less the following:

1. Capture all arguments (to pass to initdb itself) except the cluster path, and figure out the path to the cluster (by looking for -D/--pgdata/$PGDATA)
2. Create a hash of the concatenation of the arguments (again, sans cluster path)
3. Create a template path based on a local cache directory and the argument hash
4. a) If the directory at that path does not exist, run initdb with -D set to it and the rest of the arguments as captured; then copy it to the *actual* cluster path
   b) If it does exist, that's because you've run initdb with these arguments before: just copy it to the cluster path you captured in step (1) instead of starting fresh

This was not terribly robust (e.g., I remember that Dan had pointed out several shortcomings), but none of these were practical issues for me, and I never had a problem. There is some more work around capturing the cluster path in the initdb output and mangling it back from the template version into the original (so the output of the cached initdb looks like the output of a real initdb), but that's not a big deal. The approach could probably be made solid enough for general-purpose use.
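(A rough reconstruction of the steps above, sketched in Python rather than the original shell; the cache directory name and hashing details are my own assumptions, not the original script's.)

```python
import hashlib
import os
import shutil
import subprocess

CACHE_DIR = os.path.expanduser('~/.initdb-cache')  # assumed location

def split_cluster_path(argv, env=os.environ):
    """Step 1: pull the cluster path (-D/--pgdata/$PGDATA) out of the
    argument list, returning (cluster_path, remaining_args)."""
    path, rest = env.get('PGDATA'), []
    args = iter(argv)
    for a in args:
        if a in ('-D', '--pgdata'):
            path = next(args)
        elif a.startswith('--pgdata='):
            path = a.split('=', 1)[1]
        else:
            rest.append(a)
    return path, rest

def cached_initdb(argv):
    path, rest = split_cluster_path(argv)
    # Step 2: hash the arguments (sans cluster path)...
    key = hashlib.sha256('\0'.join(rest).encode()).hexdigest()
    # Step 3: ...and derive a template path from them.
    template = os.path.join(CACHE_DIR, key)
    # Step 4: run initdb only on a cache miss, then copy the template
    # to the actual cluster path.
    if not os.path.isdir(template):
        subprocess.check_call(['initdb', '-D', template] + rest)
    shutil.copytree(template, path)
```

As in the original, this skips the output-mangling step and most error handling.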
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
Haven't really gotten much further, but an interesting note: the named / unnamed prepared statement and portal stuff seems to be a red herring. I can add a name to the portal, or move to an unnamed prepared statement, and I still see the same thing. Which is interesting, since that's not what Radosław saw (his original test only failed once named prepared statements kicked in)...

---
Maciek Sakrejda | System Architect | Truviso
1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
(650) 242-3500 Main
www.truviso.com
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
Interesting. I think you're right. Looking at the Wireshark traffic again, the driver seems to issue a portal-variant Describe when using unnamed prepared statements, but as soon as the named prepared statements kick in (per the prepare threshold), the Describe is a statement-variant Describe, with the expected behavior you've noted. Actually, the driver sends Parse / Bind / Describe with the unnamed statement, but for some reason switches to Parse / Describe / Bind with named ones.
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
Okay, looking at the JDBC side of things, I think JDBC doesn't actually need that information (since it always used text results before Radosław's patch--the previous binary support was for parameters only, right?). From looking at QueryExecutorImpl (specifically sendOneQuery), it's clear that it *does* need the Describe before the Bind when using named prepared statements. I can't quite follow why, but I think the upshot of this is that there is no bug in either the JDBC driver or the backend. This is just a limitation of the existing implementation in the JDBC driver that Radosław would need to work around for binary result processing (probably by storing the requested format somewhere and using that instead of what comes back from Describe).

I think the lessons are (1) the bug is probably *not* in the established library or backend code, and (2) it's a little confusing that the result codes for "I don't know what format this will be in" and "this will be in text format" are aliased. It's too late to do anything about the second one, and it's only a minor quibble.

Thanks for the help in making sense of this.
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
> So, to summarise, I shouldn't believe server DescribeRow (in context of format), in this situation, but only I should look at this what I asked for, isn't it? If I asked for columns in binary format, I need to do binary reading regarding what server has responded?

Yes, because in this case 0 doesn't mean "the result will be in text"; it means "you issued the statement-variant of Describe, so I'm not sure what the result format will be yet".

> If I asked for odd columns in text, even in binary do I need to choose proper format basing only on my request?

I don't quite understand this question, but I think so. I don't think there's ever a situation where the server will ignore your result format requests.

> But to the last part of cited protocol specification, when I've sent message with statement parameter's type int4, int8, varchar the format field wasn't set to 0, but 1.

I wasn't able to reproduce that with my standalone test case. When I changed the parameter oid to 23, I still got the same behavior. Can you alter my test case to reproduce the error?
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
> OTOH, it seems possible that the JDBC driver might behave differently depending on whether parameter types were prespecified or not --- it might issue Describe earlier in order to get the parameter types, perhaps.

Ah. Bingo:

    boolean describeStatement = describeOnly ||
        (!oneShot && paramsHasUnknown && queryHasUnknown && !query.isStatementDescribed());
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
> 21:43:02.264 (26) FE=> Describe(statement=S_1)

You're still doing the statement-flavor Describe. As Tom pointed out, this won't tell you the result types because it doesn't know them. Actually, technically, if you issue a statement-flavor Describe *after* a Bind, the server does have this information, but I'm not surprised that it doesn't send it correctly, since it seems pointless to send the statement variation after already doing a Bind. You may be able to change the JDBC code to call sendDescribePortal() instead.
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
> Result is oid=23, format=(0) T, value = 0x00,0x00,0x00,0x02

What do you mean regarding the format? Are you just inferring that from the data? If memory serves, the format of a particular column is not specified anywhere other than the RowDescription, and according to your JDBC log output above, the server is telling you the format is text (1) (which is your point--it doesn't match the resulting data--but I want to make sure we're clear on what's actually going on).

Also, can you narrow this down to a simple, self-contained test case (with code)? Even if it's against a custom driver build, that would be easier to investigate.
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
> Text format is marked as 0, binary format is 1.

Sorry--I misread the docs. This is consistent, and something does look fishy. Thanks for the tarball. I can take a look tonight.
Re: [HACKERS] [JDBC] JDBC and Binary protocol error, for some statements
I've run your test and I can confirm the error. I've looked at the protocol traffic with Wireshark, and the back-end is clearly lying about the format of the results in this particular case: as you stated, the row description says text, but the data is in binary. I also wrote a simple Java program (on github at https://github.com/deafbybeheading/postgresql-result-format-testcase ; I didn't know if uninvited attachments were kosher) to minimize the test case. Note that the program relies on trust authentication being on. Since triggering the set of FEBE messages that leads to this was tied deep into the guts of JDBC, I opted for the raw wire protocol.

It looks like the following sequence of messages from the client leads to this result format mixup:

1. Parse, with statement name S_1, sql SELECT $1::int, and parameter type oid 0 (ask server to guess)
2. Describe S_1
3. Bind S_1 to the anonymous portal, with param formats [ 0 (text) ], param values [ '2' ], result formats [ 1 (binary) ]
4. Execute the anonymous portal (returning all rows)
5. Sync

I have not tried to narrow this further yet. This essentially reproduces what Radosław was seeing, but outside of the JDBC driver. That is, the server responds with:

1. Parse completion
2. Parameter description, with type oids [ 23 ]
3. Row description, with empty table data, type oids [ 23 ], type length 4, type mod -1, and format 0 (text)
4. Bind completion
5. DataRow, with the result in binary, as requested, but not as described in message (3) above
6. Command completion
7. RFQ (ReadyForQuery)

For what it's worth, I'm running this against an 8.4 server: PostgreSQL 8.4.5 on i486-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit

I'll try to look into this some more, but first of all, is the client sequence above valid? As far as I can tell, it is. I'm guessing that the root of the problem is providing the parameter as text and requesting it back in binary, but I'll freely admit my knowledge of the internals here (well, beyond the protocol itself) is very limited.

Thanks,
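(For reference, the client message sequence above can be built by hand. Here is a hypothetical Python sketch of just the message construction, with byte layouts per the v3 frontend/backend protocol; it stops short of actually opening a socket to a server.)

```python
import struct

def cstr(s):
    # Null-terminated string, as used throughout the protocol.
    return s.encode() + b'\x00'

def msg(tag, payload):
    # Every FEBE message: 1-byte tag, then int32 length (self-inclusive).
    return tag + struct.pack('!I', len(payload) + 4) + payload

# 1. Parse: statement S_1, "SELECT $1::int", one param with oid 0 (guess)
parse = msg(b'P', cstr('S_1') + cstr('SELECT $1::int') +
            struct.pack('!H', 1) + struct.pack('!I', 0))
# 2. Describe, statement variant ('S'), of S_1
describe = msg(b'D', b'S' + cstr('S_1'))
# 3. Bind S_1 to the anonymous portal: param format text, value '2',
#    result format binary
bind = msg(b'B', cstr('') + cstr('S_1') +
           struct.pack('!HH', 1, 0) +         # 1 param format: 0 (text)
           struct.pack('!HI', 1, 1) + b'2' +  # 1 param value: '2'
           struct.pack('!HH', 1, 1))          # 1 result format: 1 (binary)
# 4. Execute the anonymous portal with no row limit; 5. Sync
execute = msg(b'E', cstr('') + struct.pack('!I', 0))
sync = msg(b'S', b'')

wire = parse + describe + bind + execute + sync
```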