Re: [GENERAL] SELECT x'00000000F'::int leading zeros causes "integer out of range"
On 2/25/17 at 6:56 AM, Gavin Flower wrote:
>On 25/02/17 08:39, John McKown wrote:
>>On Fri, Feb 24, 2017 at 1:25 PM, David G. Johnston wrote:
>>>On Friday, February 24, 2017, Tom Lane wrote:
>>>>Justin Pryzby writes:
>>>>>Is this expected behavior ?
>>>>>ts=# SELECT x'00000000F'::int;
>>>>>ERROR: 22003: integer out of range
>>>>>LOCATION: bittoint4, varbit.c:1575
>>>>Yes. The provided operation is "convert a bitstring of up to 32 bits
>>>>to an integer". It's not "guess whether it's okay to throw away some
>>>>bits to make an integer".
>>>IME the error message itself is to blame here - we are checking for a
>>>malformed (too many characters) integer varbit representation but then
>>>reporting that we somehow got a valid integer but that it is "out of
>>>range". A better reply would be good.
>>Another possibility is for the parser to remove unneeded leading zeros.
>[...]
>I think the latter would be a good idea!

This is interesting in that the views expressed range from something close to "every bit is sacred" through to something resembling "drop what's needed to make it work". My take is PostgreSQL is already pragmatic:

pendari=# select ((x'')::bigint)::int;
 int4
------
   -1
(1 row)

Clearly we've quietly dropped a lot of bits moving across this line.
The problem posed in the OP example happens when the bit pattern under-specifies a long value (or over-specifies a short value), and, in an ideal world, the correct behaviour should be close to what all well-behaved CPUs are already doing (MSB = most significant bit):

Opclass              Operand        Action
===================  =============  ========================================
logical/bitwise      Small->Large   Zero fill most significant, but
                     Large->Small   check which "standard" applies
arithmetic/signed    Small->Large   Propagate sign bit to left
                     Large->Small   Truncate sign bits, error if sign bits
                                    are not all equal, and not equal to
                                    MSB of result
arithmetic/unsigned  Small->Large   Zero fill most significant part
                     Large->Small   Truncate from MSB, error if any
                                    truncated bit is not zero

To my mind Tom's reply resembles the bitwise case, but I think the OP's example should ideally have been interpreted in an arithmetic manner (i.e., treating the extra bits as representing the sign and nothing more) since the desired result was to be a signed integer.

But! This gets problematic for something like:

    x'FFF67'::bigint

My analogy would have this interpreted as x'FF67'::bigint, whereas the current behaviour is equivalent to x'000FFF67'::bigint, and I doubt anyone has the appetite to change this. (Of course we have always known using bit masks across architectures with different word sizes was never an easy or safe activity. :)

So, getting back to the OP problem... what's a good parser to do? I suggest:

1. the error message might be better (i.e., help get the focus onto the real problem); and/or,
2. consider dropping excess leading zeros when building an integer value. (I don't think this breaks anything.)

Other than that there really isn't a realisable consistent behaviour beyond the current strict bitwise interpretation. Specifically, any behaviour which tries to promote or truncate some "sign" bits in an arithmetically consistent manner is going to break existing behaviour.
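For reference, the behaviours under discussion can be reproduced directly in psql (results indicative; the exact error text varies by version):

```sql
-- A 4-bit string fits comfortably in a 32-bit integer:
SELECT x'F'::int;          -- 15

-- Nine hex digits make a 36-bit string: rejected outright,
-- even though the excess bits are all leading zeros:
SELECT x'00000000F'::int;  -- ERROR: integer out of range

-- Exactly 32 bits: accepted, with the high bit taken as the sign:
SELECT x'FFFFFFFF'::int;   -- -1
```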
Regards Gavan Schneider -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to select a column?
On 2/18/17 at 3:33 AM, Egon Frerich wrote:
>I have a table with two columns with type money. If column 'a' has an
>amount > 0 then this amount is wanted else the amount from column 'b'.

Examples in 4.2.14:

    SELECT CASE WHEN a > 0 THEN a ELSE b END FROM <table> WHERE <condition>;

Regards
Gavan Schneider
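Spelled out against a hypothetical table (names invented for illustration):

```sql
CREATE TABLE accounts (a money, b money);
INSERT INTO accounts VALUES ('5.00', '1.00'), ('0.00', '2.50');

-- take a when it is positive, otherwise fall back to b
SELECT CASE WHEN a > '0'::money THEN a ELSE b END AS wanted
FROM accounts;
```

This should return $5.00 for the first row and $2.50 for the second.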
Re: [GENERAL] BEFORE UPDATE trigger doesn't change column value
On 5/4/13 at 6:59 AM, Clemens Eisserer wrote:
>Sorry for this newbie-question, I am trying for quite some time now to
>get the following trigger-function to work properly:
>
>CREATE OR REPLACE FUNCTION update_synced_column()
>  RETURNS trigger AS
>$BODY$
>BEGIN
>    IF NEW.synced IS NULL THEN
>        NEW.synced := false;
>    END IF;
>    RETURN NEW;
>END;
>$BODY$
>LANGUAGE plpgsql VOLATILE COST 100;
>
>The idea is basically to always update the synced column to false,
>unless a value has been provided manually in the UPDATE-clause. Synced
>is defined as BOOLEAN DEFAULT FALSE; I execute this function in a
>BEFORE UPDATE trigger FOR EACH ROW, however it doesn't seem to have
>any effect.

You could also add 'NOT NULL' to the declaration of synced so this column can never be set to NULL, and this would further ensure the trigger function has nothing to do. By way of sanity testing, do you get any rows when doing something like:

    SELECT * FROM relevant_table WHERE synced IS NULL;

>Any ideas what could be wrong here?

If the above does not apply, and at the risk of being too obvious (specifically not wishing to cause offence):

Has the trigger itself been declared? refer:
http://www.postgresql.org/docs/9.2/static/sql-createtrigger.html

Is the function ever getting called? refer:
http://www.postgresql.org/docs/9.2/static/plpgsql-errors-and-messages.html

Regards
Gavan Schneider
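For completeness, a trigger declaration of the kind being asked about might look like this (table name hypothetical):

```sql
CREATE TRIGGER set_synced_default
    BEFORE UPDATE ON some_table
    FOR EACH ROW
    EXECUTE PROCEDURE update_synced_column();
```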
[GENERAL] Why not cascade? (was: Using varchar primary keys)
On 3/4/13 at 1:49 PM, dix1wji...@sneakemail.com (Julian tempura-at-internode.on.net |pg-gts/Basic|) wrote:
>... having to really think it out is probably a good sign that you
>should stick to a surrogate unless you are really sure. (again I don't
>advocate ON UPDATE CASCADE as a solution should you change your mind)

OK, this is interesting. Why not cascade?

Assuming someone makes the dB design as straightforward as possible, avoids obfuscation of key values (since this mostly only gets the present and the next developer into trouble, not the mythical external hacker), and has constraints with cascaded updates in place to keep it all consistent: something changes in the real world, the DBA makes the dB reflect this change, and the cascade ensures everything is still consistent. Where is the problem with this? When there is a lot of work involved this needs to be taken into account, but what is the basis for such a general prohibition on a modern SQL dB? Why not use the feature?

Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
On 30/3/13 at 11:09 PM, D'Arcy J.M. Cain wrote:
>I am formulating Cain's Law. Something like:
>If a discussion lasts long enough, someone will mention Godwin's Law.

+1

More formally: As an online discussion grows longer, the probability of Godwin's Law being mentioned approaches one.

Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
On 30/3/13 at 12:58 AM, D'Arcy J.M. Cain wrote:
>On Sat, 30 Mar 2013 12:04:21 +1100 Gavan Schneider wrote:
>>No MONEY column would be complete without the ability to specify
>>whether it is normally DEBIT or CREDIT (or in my preferred case
>That seems extreme. What use case would there ever be for making a
>column always debit or always credit? I have a G/L system and most
>money columns either don't know about Dr/Cr or else there is another
>column with the G/L account which implies Dr/Cr. Where do you see a
>column that would be dedicated to one or the other?

If you have a credit card and a bank account you are already familiar with the concept of Debit and Credit columns. If the balance figures on your bank account are negative you have become overdrawn (i.e., it's a Credit column); if the balance of your credit card becomes negative it means you have paid them too much money (i.e., it's a Debit column). Notice how the sign is different when money is paid to the bank account (+ve) as opposed to the credit card (-ve).

On the G/L system you probably have all the liabilities listed, and when added up they come to a positive number. The assets should also add up to a positive number. Adding the two together in simple arithmetic terms should produce a nice big positive number, which is not useful. The accounting convention is to negate all Debit values before adding them to Credit values, i.e., the result represents how much assets exceed liabilities. Obviously a negative number here means bad news for unsecured creditors if the company is in receivership.

Most people don't notice this process since it is part of an accounting framework. Deep inside the application is a lookup table or application code or some other device that applies this Debit/Credit convention every time it's needed.
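The negate-the-debits convention just described, reduced to plain SQL over a hypothetical single-table ledger (names invented):

```sql
-- side is 'D' (debit) or 'C' (credit); amounts stored unsigned
CREATE TABLE ledger (account text, side char(1), amount numeric);

-- negate debits before summing: the result is credits less debits
SELECT sum(CASE side WHEN 'D' THEN -amount ELSE amount END)
       AS net_position
FROM ledger;
```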
My proposal is to make this part of the column characteristic so this logic is moved to the table design phase (and handled by the backend) rather than the application needing to keep track of which column values need to be negated and when.

Basically, if MONEY is to be a useful tool it should really handle money matters in a way that makes accountants happy. If it can't do that then nobody is going to bother using it for serious work, since NUMERIC and INTEGER will do the job just as well without the surprises.

Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
On 31/3/13 at 5:20 AM, D'Arcy J.M. Cain wrote:
>On Sun, 31 Mar 2013 21:57:49 +1100 Gavan Schneider wrote:
>>On 30/3/13 at 12:58 AM, D'Arcy J.M. Cain wrote:
>>>That seems extreme. What use case would there ever be for making a
>>>column always debit or always credit? I have a G/L system and most
>>>money columns either don't know about Dr/Cr or else there is another
>>>column with the G/L account which implies Dr/Cr. Where do you see a
>>>column that would be dedicated to one or the other?
>>If you have a credit card and a bank account you are already familiar
>>with the concept of Debit and Credit columns. If the
>I am *very* familiar with debit and credit columns. In addition, I
>don't confuse columns on a display or piece of paper with columns in a
>database.

OK. My assumption was that my previous comment was not understood and I needed to plod through a basic example to make my meaning clear. While English is my first language I don't claim perfection in its usage.

>>it can't do that then nobody is going to bother using it for serious
>>work since NUMERIC and INTEGER will do the job just as well without
>>the surprises.
>What surprises? It is much faster than numeric and it does the
>formatting for you rather than requiring code like integer would.
>Other than that they fail your test exactly like the money type.

Sorry. I know you authored the type. And mine are not the only comments along these lines. The MONEY type is in the system and any and all are welcome to use it as is. From the discussion it does not suit many, and my only motive was to explore ways in which it could cover a wider audience without losing its advantages, i.e., speed and specificity.

Regards
Gavan Schneider
Re: [GENERAL] Using varchar primary keys.
On 1/4/13 at 10:35 AM, Tim Uckun wrote:
>Consider the following scenario.

Since I don't consider myself an expert I am talking in basic terms below. Please don't misunderstand this plodding as anything other than me attempting to be as clear as possible. These comments reflect my understanding of Joe Celko's view as expressed in his books. I fully understand that many respectable SQL masters (respectfully) disagree with Celko in this. My take is to use his approach as a starting point and only deviate when a different approach is obviously much easier for me. (I doubt it's better. :) YMMV.

>I have a typical tagging structure. There is a table called tags,
>there is a table called taggings. The taggings table acts as a many to
>many join table between the taggers and the tags. The tags table has
>two fields id and tag. id is a serial data type.

Celko would suggest the table name is the plural of its primary key, so a table named tags has tag as the name of its primary key. And this is a big hint: the sequential index is not needed. And he further (very strongly) suggests that primary keys assume their natural values and not be referenced by arbitrary (sequential) values, i.e., avoid pseudo-keys, as neither SQL nor modern databases need them.

>The taggings has a tag_id field along with the context, tagger_id etc.

So this would instead have two columns named tag and tagger (each referencing the column of the same name in tables tags and taggers) and whatever is implied by the etc. If the pair (tag, tagger) is meant to be unique then this is a good candidate for primary key; otherwise a further term(s) needs to be dragged into the composite index (e.g., a timestamp). Once again there is no logical need for a sequential integer to be stored as well.

>Given that you can still set foreign keys and cascade commands to
>adjust child records either way the tables would be properly
>normalized so I am wondering what I am gaining by using these serial
>ID fields. Is this false economy?
I think so. If the tags table has both the sequential integer and the unique tag value then there is likely storage and I/O associated with two indexes, along with storage of both the tag and its associated integer.

If the list of tags is short enough it might be suitable to use an ENUM to enforce both integrity and brevity. If the list of tags is longer and/or might need updating then a single-column table will allow for this. In general, I think lists of things (e.g., tag values) should be kept within the dB so they can be used for integrity checking, etc. I don't like the idea of externalising this job to the interface application.

Also, as you have mentioned, you can elegantly handle table renovation, e.g., if a given tag needs its name changed, it can be done via a foreign key constraint with cascade. (Elegant here refers to the syntax; the actual I/O implications may be very ugly depending on the scale of data that needs rewriting. :)

In another situation the tags table might well contain columns such as: tag, tag_full_name, tag_origin, tag_authority_certificate/expiry/whatever. In such an instance the tag is better if it's human readable/meaningful so reports can be useful without always back joining the full name. Sequential integers rarely fulfil this role, as implied by the original question.

Regards
Gavan Schneider
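A minimal sketch of the natural-key arrangement argued for above (table names taken from the thread, column details assumed):

```sql
CREATE TABLE tags (
    tag text PRIMARY KEY            -- natural key; no serial column
);

CREATE TABLE taggers (
    tagger text PRIMARY KEY
);

CREATE TABLE taggings (
    tag       text NOT NULL REFERENCES tags    ON UPDATE CASCADE,
    tagger    text NOT NULL REFERENCES taggers ON UPDATE CASCADE,
    tagged_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tag, tagger)       -- or (tag, tagger, tagged_at)
);
```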
Re: [GENERAL] Money casting too liberal?
Some people wrote:
>... Hmm... This should optionally apply to time.
>... for anything that really matters, I'll work with UTC.

Is there a Godwin's law (http://en.wikipedia.org/wiki/Godwin's_law) equivalent for when our conversations end up with timezones getting mentioned? :)

Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
Some thoughts.

The current MONEY type might be considered akin to ASCII: perfect for a basic US-centric accounting system where there are cents and dollars and no need to carry smaller fractions. As discussed, there are some details that could be refined.

When it comes to this type being used in full-blown money systems it lacks the ability to carry fractions of cents and keep track of currencies. It also needs to play nicer with other exact types such as numeric, i.e., no intermediate calculations as real.

Therefore the discussion is really about the desired role for the MONEY type. Should it be refined in its current dollar-and-cents mode? Or be promoted to a more universal role (akin to a shift from ASCII to UTF)?

If there is merit in making MONEY work for most situations involving financial transactions I think the following might apply:

- keep integer as the underlying base type (for performance)
- generalise the decimal multiplier of a MONEY column so a specific MONEY column can be what its creator wants (from partial cents to millions of dollars/Yen/Other, along with rounding/truncating rules as required by the user or his/her external agencies)
- define the currency for a given column and only allow this to change in defined ways, and specifically forbid implicit changes such as would arise from altering LOCALE information
- ensure the MONEY type plays nice with other exact precision types, i.e., convert to REAL/FLOAT as a very last resort

Personally I don't think it is appropriate for the MONEY type to have variable characteristics (such as different currencies) within a given column; rather, the column definition should fix the currency along with the desired decimal multiplier and whatever else is required. The actual values within the column remain as simple integers. This is mostly based on performance issues. If the MONEY type is to be used it has to offer real performance benefits over bespoke NUMERIC applications.
Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
>-- need to allow for multiple rules here, sometimes
>-- cents are just dropped, otherwise it can be
>-- required that rounding is up or down
>[, OTHER? ]]])

I have left the display characteristics out (they could be there as a default) but column values are going to be displayed however the application wants them, and this only applies at the time of reporting. Each currency can carry the conventional defaults, and the application should have formatting tools to alter this during output.

Inputting money values, i.e., text to MONEY, should follow the conventions of the target currency. Specifically, the input conversion routine should handle the symbol (or no symbol) and all the usual conventions for negative values, decimals and separators. It should throw an error if asked to add a value to a USD column but finds a yen symbol in the text. (There is no such help for all of us sharing the $ symbol. :) Also it should parse such things as 123.456,00 (Europe) and 123,456.00 (Anglo) properly. Errors need to be thrown when it looks wrong, e.g., 123,456.789.00 -- since this is likely to be corrupted data, and finally gets me back to the issue raised by the OP. :)

Hope this hasn't been too much of a ramble.

Regards, and happy (Western) Easter to all,
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
On 30/3/13 at 9:30 AM, I wrote:
>I have sketched something of a notation for MONEY columns along these
>lines:
>
>amt_received MONEY (
>    CURRENCY  -- e.g., 'USD' 'AUD' 'YEN' ...
>    [, SCALE  -- default as per currency, e.g. USD 2 decimals
>              -- but could be used to see money in bigger units
>              -- such as '000s (e.g., that end-of-month view)
>    [, ROUND  -- need to allow for multiple rules here, sometimes
>              -- cents are just dropped, otherwise it can be
>              -- required that rounding is up or down
>    [, OTHER? ]]])

Addition... No MONEY column would be complete without the ability to specify whether it is normally DEBIT or CREDIT (or, in my preferred case, NATURAL, i.e., no sign is pre-applied before any arithmetic between columns). This is possibly the best use case for the type since it really allows for the DB/CR (IMNSHO arcane) conventions to be properly handled within established industry traditions, and has special benefits with externally provided data... values will enter the dB with sign conventions properly observed.

Regards
Gavan Schneider
Re: [GENERAL] Money casting too liberal?
On 27/3/13 at 9:12 AM, Steve Crawford wrote:
>In contrast to certain other open-source databases, PostgreSQL leans
>toward protecting data from surprises ...

And long may this continue.

>But it appears that the philosophy does not extend to the money type.
>...
>select ',123,456,,7,8.1,0,9'::money;
>     money
>----------------
> $12,345,678.11

In general terms I would hate for such probable garbage to appear as legitimate data in the dB.

>Somewhat more worrisome is the fact that it automatically rounds input
>(away from zero) to fit.
>
>select '123.456789'::money;
>  money
>---------
> $123.46
>
>select '$-123.456789'::money;
>  money
>----------
> -$123.46
>
>Thoughts? Is this the no surprises way that money input should behave?

I would defer to a CPA on the correct conventions for rounding. However I have a vague notion there are circumstances when rounding is always up, always down, and (only sometimes) to the nearest. If the money type is meant to be serious then these conventions need to be followed/settable on a column-by-column basis. And money is done in whole dollars, thousands of dollars, and fractional cents according to the situation, i.e., not just two decimal places... another setting.

Personally I have ignored the money type in favour of numeric. Money seemed to do too much behind the scenes for my taste, but that's me being lazy as well; I haven't spent much time trying to understand its features.

Regards
Gavan Schneider
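For contrast, numeric behaves in the "no surprises" manner: rounding only happens when explicitly asked for, and garbage input is an error rather than a guess:

```sql
SELECT round(123.456789::numeric, 2);  -- 123.46 (explicit rounding)
SELECT trunc(123.456789::numeric, 2);  -- 123.45 (explicit truncation)

SELECT ',123,456,,7,8.1,0,9'::numeric; -- ERROR: invalid input syntax
```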
Re: [GENERAL] DEFERRABLE NOT NULL constraint
On Friday, February 8, 2013 at 19:34, Albe Laurenz wrote:
>Gavan Schneider wrote:
>>Referring to:
>>http://www.postgresql.org/docs/current/static/sql-createtable.html
>>I really must have missed something so am standing by for the
>>'gotcha'... please supply :)
>Further down on the page you quote, it says: ...

Thank you, it had to be somewhere. :)

And this leads to a thought. Why is it that in this chapter the documentation gives a synopsis which is not correct for the current implementation, but relies on a negation much further down the page to properly describe the actual behaviour? Mostly the manual follows the pattern of a correct synopsis (where correct means what this version will actually do) followed by a section setting out the differences from the standard and/or other implementations. While this chapter of the current documentation is not in error, overall it's a bit misleading.

Of course, if anything is going to change, my preference would be to leave the synopsis in its SQL-conformant state and bring the implementation up to standard in this area, meaning we could drop the contradicting/'correcting' paragraph. And, no, I'm not holding my breath on this just now.

Regards
Gavan Schneider
Re: [GENERAL] Swapping volumes under tablespaces: supported?
On Friday, February 8, 2013 at 10:58, Tom Lane wrote:
>If it breaks you get to keep both pieces.

Tom is an optimist. My (unscheduled) attempt at this resulted in a lot more than two pieces, all of which appeared broken in their own right. If you want to (re)start a conversation about making mount/unmount/move tablespace a reality be my guest, but, be warned, there seem to be some very fundamental barriers.

Regards
Gavan Schneider
[GENERAL] Re: [NOVICE] Problems with ñ and tildes / CSV import problems in PostgreSQL 9.1
On Wednesday, February 6, 2013 at 11:56, 2jt6w5k...@sneakemail.com (Zach Seaman znseaman-at-gmail.com |pg-gts/Basic|) wrote:
>This a similar question to this one
>http://www.postgresql.org/message-id/4dda42060512140509xe8b13...@mail.gmail.com,
>so I have encoded a database with LATIN-1 as suggested but can't copy
>a CSV file into a table within the database.

I may have missed something here... why would anyone suggest LATIN-1 in modern times? UTF-8 will do all of LATIN-1 and everything else as well. Except for legacy support, why would anyone use anything other than UTF-8? (Of course there are those where UTF-16 is a better choice for their dominant language use, e.g. Chinese.)

Suggest you use UTF-8 database encoding, and if there are no problems importing the .csv stay with UTF-8. OTOH if there are still problems when using UTF-8, stay with UTF-8 while you work out what it is in the .csv file that's causing the problem.

Regards
Gavan
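As a sketch of the suggested approach (database, table and path names hypothetical; COPY's ENCODING option is available from 9.1):

```sql
-- create the database with UTF-8 encoding
CREATE DATABASE mydb ENCODING 'UTF8' TEMPLATE template0;

-- if the CSV itself really is LATIN-1, say so and let the server
-- transcode into the UTF-8 database on the way in:
COPY mytable FROM '/path/to/data.csv'
    WITH (FORMAT csv, HEADER true, ENCODING 'LATIN1');
```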
Re: [GENERAL] DEFERRABLE NOT NULL constraint
Getting back to the OP (Andreas):

On Tuesday, February 5, 2013 at 20:22, Andreas Joseph Krogh wrote:
>På tirsdag 05. februar 2013 kl. 09:59:54, skrev Albe Laurenz:
>>Andreas Joseph Krogh wrote:
>>>... Are there any plans to make NOT NULL constraints deferrable so
>>>one can avoid the trigger boilerplate?
>>Not that I know of. There's an entry in the TODO list that recognizes
>>that it would be desirable to make NOT NULL a regular constraint (you
>>can do that today by using CHECK (col IS NOT NULL) instead).
>But CHECK constraints are also not deferrable... Is there any "I want
>to sponsor development of feature-X with $xxx" mechanism?

On Thursday, February 7, 2013 at 18:45, Albe Laurenz wrote:
>... the standard caters for deferrable NOT NULL constraints.

So, notwithstanding the many expressions of personal preference and several suggested 'work arounds' needed to compensate for this implied SQL compliance failure, there seems to be no good reason why this 'entry in the TODO list' couldn't be sponsored for development.

But I feel I have missed something here. Referring to:
http://www.postgresql.org/docs/current/static/sql-createtable.html

where column_constraint is:

    [ CONSTRAINT constraint_name ]
    { NOT NULL |
      NULL |
      CHECK ( expression ) [ NO INHERIT ] |
      DEFAULT default_expr |
      UNIQUE index_parameters |
      PRIMARY KEY index_parameters |
      REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
        [ ON DELETE action ] [ ON UPDATE action ] }
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

and table_constraint is:

    [ CONSTRAINT constraint_name ]
    { CHECK ( expression ) [ NO INHERIT ] |
      UNIQUE ( column_name [, ... ] ) index_parameters |
      PRIMARY KEY ( column_name [, ... ] ) index_parameters |
      EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] )
        index_parameters [ WHERE ( predicate ) ] |
      FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
        [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
        [ ON DELETE action ] [ ON UPDATE action ] }
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

CHECK constraints, NOT NULL constraints and FOREIGN KEY constraints all look very deferrable in this definition. If that's the case, why are we having this discussion if the requested functionality/compliance is already present?

(As I have said already) I really must have missed something so am standing by for the 'gotcha'... please supply :)

Regards
Gavan Schneider
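The 'gotcha' in question can be demonstrated directly (behaviour as of the 9.x releases under discussion; the exact error wording may differ by version):

```sql
-- The synopsis reads as though this should be accepted,
-- but the server rejects the clause at definition time:
CREATE TABLE t (
    a integer NOT NULL DEFERRABLE INITIALLY DEFERRED
);
-- ERROR: only certain constraint types (UNIQUE, PRIMARY KEY,
-- EXCLUDE, FOREIGN KEY) may actually be marked DEFERRABLE
```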
Re: [GENERAL] DEFERRABLE NOT NULL constraint
On Wednesday, February 6, 2013 at 23:31, 00jkxma...@sneakemail.com (Alban Hertroys haramrae-at-gmail.com |pg-gts/Basic|) wrote:
>On 6 February 2013 12:56, Chris Angelico ros...@gmail.com wrote:
>>If you get into a taxi and ask to be driven to New Zealand within the
>>hour, no amount of begging will get you what you want.
>Unless you get into a taxi in New Zealand. Which makes the request
>effectively NULL; planning to do this makes it DEFERRABLE.

Taking a different tangent... Is there anything in the SQL standards about NOT NULL constraints being deferrable? To my mind we should not consider implementing non-standard behaviour, but if something is in the standard I can't see why it shouldn't be implemented, esp. when there is no compulsion for it to be used.

Regards
Gavan Schneider
Re: [GENERAL] Cast double precision to integer check for overflow
On Saturday, January 26, 2013 at 08:13, Ian Pilcher wrote:
>I need to cast a double precision into an integer, and I want to check
>that the value will actually fit (modulo rounding). Coming from a
>C/Java background, this seems like something that should be utterly
>trivial. In my searching, however, I can't seem to find any SQL
>equivalent of INT_MAX, Integer.MAX_VALUE, etc.

So far I haven't seen such defined constants but am happy to share the RTFM moment. :-) I am sure you have already found this:
http://www.postgresql.org/docs/9.2/static/datatype-numeric.html#DATATYPE-NUMERIC-TABLE

>Do I have to hard-code this value?

Or cast the double to both numeric(13000,0) and integer and compare them. A stored function could encapsulate this, along with raising the exception when required. If design is still fluid and performance allows, the numeric type could do the job without fear of overflow.

>(And yes, I do feel stupid having to ask this question here.)

If in doubt, the Novice list is designed for those questions where feelings of impending stupidity lurk.

Regards
Gavan Schneider (who considers himself a novice)
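One way to encapsulate the hard-coded bounds, as suggested (function name invented for illustration; the limits come from the numeric-types table in the docs):

```sql
CREATE OR REPLACE FUNCTION double_to_int_checked(v double precision)
RETURNS integer AS $$
BEGIN
    -- int4 range check before the cast, so we raise our own
    -- error instead of relying on the cast to fail
    IF v < -2147483648::double precision
       OR v > 2147483647::double precision THEN
        RAISE EXCEPTION 'value % out of range for integer', v;
    END IF;
    RETURN round(v)::integer;   -- round, then cast safely
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```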
Re: [GENERAL] Throttling Streamming Replication
On Friday, January 25, 2013 at 23:59, Rodrigo Pereira da Silva wrote:
>We are having a problem with our infrastructure provider because the
>network traffic between master and slave server is reaching more than
>30k packets per second (SLA says 20k/second).

I note the later post where the infrastructure provider has determined it is their problem; however, you might well benefit from this as a warning and trim your traffic anyway.

>Is there any way to throttle the streaming replication? I meant, any
>parameter that sets the max number of megabytes sent to the standby
>server per second?

I am guessing (i.e., no knowledge of your setup) you would prefer to not have a lot of delay between the master and slave servers. This is especially so if the slave has to become master, since people mostly want that transition to appear seamless to the outside world. So increasing time settings may not be suitable even if it did reduce bandwidth.

It is possible to get serious bandwidth savings without increasing latency. ref:
http://permalink.gmane.org/gmane.comp.db.postgresql.general/164874

If you put each WAL file through pg_clearxlogtail (it zeros out the unused part of the fixed-length WAL file) then compress-transmit-decompress the result, you will get much better use of the available bandwidth between master and slave servers. Specifically you will only be sending information that is needed, and smaller data chunks are faster data chunks.

Regards
Gavan Schneider
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Monday, January 21, 2013 at 18:11, bgd39h5...@sneakemail.com (Nathan Clayton nathanclayton-at-gmail.com |pg-gts/Basic|) wrote: I only wish. I work with a transactional system from the 70s on a daily basis that decided to store something like a work date and work time. The date changes whenever they decide to dateroll the system. Until then the time field continues to grow, so you see times like 25:00 and 26:00 all the time. SELECT execute(relevant_dba) FROM the_70s WITH tardis WHERE working_tardis = true; Exceptions abound. At least that can't be blamed on a government, and we can only hope ISO-8601 will prevent more examples being created. You sound as though you really need, and/or already have, a dedicated datatype... if only to stop 'the system' from 'fixing' such weirdness. Regards Gavan Schneider
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On 01/21/2013 07:40 PM, Gavan Schneider wrote: ... The points raised by Adrian have prompted some more research on my part and I am intrigued to learn that on one day of the year in many countries (e.g., Brazil) where the daylight saving change happens over midnight, the local-time version of midnight as start of day does not exist. Basically the last day of unadjusted time ends at midnight and rolls directly into 01:00:00 the next day (i.e., time 00:00:00 never happens on this one day). So the current date → date+time system must already have some added complexity/overhead to check for this rare special case. (If not, there's a bug that needs fixing!) Basically midnight is not safe as a target entity once timezones and daylight saving get involved. Midday, on the other hand, is a very solid proposition: no checks required, 12:00:00 will happen in all time zones on every day of the year! Basically nobody messes with their clocks in the middle of the day. So restating: '2013-10-20'::timestamp == 2013-10-20 12:00:00 can never be wrong; but '2013-10-20'::timestamp == 2013-10-20 00:00:00 is wrong in some places. Wrong times occur in every time zone that changes offsets at various points of the year. Here in California, 02:00:00-02:59:59 March 10, 2013 are wrong but PostgreSQL uses a reasonable interpretation to yield a point-in-time:

select '2013-03-10 0230'::timestamptz;
      timestamptz
------------------------
 2013-03-10 03:30:00-07

And it does the exact same thing in Brazil:

set timezone to 'Brazil/West';
select '1993-10-17 00:00'::timestamptz;
      timestamptz
------------------------
 1993-10-17 01:00:00-03

select '1993-10-17'::timestamptz;
      timestamptz
------------------------
 1993-10-17 01:00:00-03

Note, too, that in both zones when the input is interpreted in the local zone and displayed in the local zone, the date portion of the point-in-time is the same as the input date.
(While I suppose some politician somewhere could decide that fall-back could cross date boundaries, I am unaware of any place that has ever done something so pathological as to have the same date occur in two non-contiguous pieces once every year.) Cheers, Steve
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Monday, January 21, 2013 at 06:53, Adrian Klaver wrote: On 01/21/2013 11:27 AM, Tom Lane wrote: Note that that default is local midnight according to your current timezone setting (from which we may guess that Adrian lives on the US west coast, or somewhere in that general longitude). Not sure you can change the default supplied by Postgres, SET timezone ought to do it ... I took Rich's question to mean: can you change the time portion supplied by Postgres, so that instead of '2013-01-21' having the time portion set to local midnight it could be set to a user-supplied value, say 08:00:00. That is not possible, correct. In the absence of a time portion, a date string supplied to timestamp will always get local midnight? Thanks to all for the discussion of timestamps with/without timezones; I have been learning a lot from the sidelines. Taking another tangent, I would much prefer the default time to be 12:00:00 for the conversion of a date to timestamp(+/-timezone). Propose: '2013-12-25'::timestamp == 2013-12-25 12:00:00 The benefit of the midday point is that the actual date will not change when going through the timezone conversion. This has implications for time-of-day insensitive data such as birthdays and other calendar values. I am still resolving off-by-one-day errors that crept into many entries in my calendar and contacts from several years ago when data was added while travelling across multiple time zones (and I did report it as a bug back then). With this lesson learnt, the workaround for me in my own applications since has been to store such dates as point-in-time for midday while keeping track of the input/output so it only gets used as a date... sometimes tedious, and a last resort. Mostly I have been actively avoiding anything with the taint of timezone due to this bad experience. It's time to reconsider, I guess, since this can cause other forms of silly behaviour. Aesthetically (and/or mathematically) the midday point is more accurate.
It is the middle of the relevant interval (i.e., 24 hours) implied by a date. Midnight is the extreme edge of any date (i.e., not what you would consider as mid-target). Midnight also has confusing English semantics, since it can belong to either of its adjacent days. I don't know if the current behaviour will be deemed too rusted in place for change, or if this proposal has too many adverse consequences, but hope springs eternal. :) Regards Gavan Schneider
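The midday convention argued for here can be had today without any change to the default cast, by building the timestamp explicitly from the date (a sketch; the zone names are just examples):

```sql
-- Current behaviour: a bare date casts to local midnight.
-- SELECT '2013-12-25'::timestamp;      -- 2013-12-25 00:00:00

-- Midday variant, built explicitly: date + time yields a
-- timestamp without time zone anchored at noon.
SELECT '2013-12-25'::date + time '12:00' AS midday_ts;
--       midday_ts
-- ---------------------
--  2013-12-25 12:00:00

-- When zones do get involved, anchoring on midday keeps the date
-- portion stable across most zone conversions:
SELECT (('2013-12-25'::date + time '12:00')
        AT TIME ZONE 'UTC') AT TIME ZONE 'Australia/Sydney';
```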
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Tuesday, January 22, 2013 at 09:48, I wrote: (and I did report it as a bug back then) Didn't pick this up on my pre-post re-read: the bug report was _NOT_ against PostgreSQL. It was some very early incarnation of OSX iCal, etc. which showed this behaviour. Apologies for the noise/confusion. Regards Gavan Schneider
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Monday, January 21, 2013 at 12:06, Kevin Grittner wrote: Adrian Klaver wrote: [Actually Gavan Schneider wrote this, don't blame Adrian :] I see where my confusion lies. There are two proposals at work in the above: Taking another tangent I would much prefer the default time to be 12:00:00 for the conversion of a date to timestamp(+/-timezone) Propose: '2013-12-25'::timestamp == 2013-12-25 12:00:00 For the timestamp (alias for timestamp without time zone) case the date does not change. For timestamp with time zone it might. Well, the big problem here is in trying to use either version of timestamp when what you really want is a date. It will be much easier to get the right semantics if you use the date type for a date. This is the cleanest solution. And I did not want to imply the following... Adrian Klaver wrote: If I was following Gavan correctly, he wanted to have a single timestamp field to store calendar dates and datetimes. In other words to cover both date-only situations like birthdays and datetime situations like an appointment. My discussion really only applies to some notion of the best (or, more exactly, the least wrong) time to attribute to a date when conversion to timestamp happens for whatever reason. And, as indicated in my original post, I have been stung when dates got (badly) mixed into a datetime timezone-aware context. The points raised by Adrian have prompted some more research on my part and I am intrigued to learn that on one day of the year in many countries (e.g., Brazil) where the daylight saving change happens over midnight, the local-time version of midnight as start of day does not exist. Basically the last day of unadjusted time ends at midnight and rolls directly into 01:00:00 the next day (i.e., time 00:00:00 never happens on this one day). So the current date → date+time system must already have some added complexity/overhead to check for this rare special case. (If not, there's a bug that needs fixing!)
Basically midnight is not safe as a target entity once timezones and daylight saving get involved. Midday, on the other hand, is a very solid proposition, no checks required, 12:00:00 will happen in all time zones on every day of the year! Basically nobody messes with their clocks in the middle of the day. So restating: '2013-10-20'::timestamp == 2013-10-20 12:00:00 can never be wrong; but, '2013-10-20'::timestamp == 2013-10-20 00:00:00 is wrong in some places. Regards Gavan Schneider
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Monday, January 21, 2013 at 10:53, Steve Crawford wrote: On 01/21/2013 02:48 PM, Gavan Schneider wrote: Taking another tangent I would much prefer the default time to be 12:00:00 for the conversion of a date to timestamp(+/-timezone). Propose: '2013-12-25'::timestamp == 2013-12-25 12:00:00 The benefit of the midday point is that the actual date will not change when going through the timezone conversion. Just like it doesn't change now? (I just checked against all of the more than 1,100 zones in PG without seeing a problem.) I find this result strange to say the least... our conversation is straddling Monday (you)/Tuesday (me). We shared the time point 2013-01-22 01:30 UTC, but called it different things, viz., 2013-01-22 12:30 and 2013-01-21 17:30. And any definition based on midnight (UTC) will cast to either side of the date line depending on the local timezone. This is not a problem per se. It just brings me back to my point that sometimes the date is more important than the notion of a point in time. Hence: This has implications for time-of-day insensitive data such as birthdays and other calendar values. I am still resolving off-by-one-day errors that crept into many entries in my calendar and contacts from several years ago when data was added while travelling across multiple time zones (and I did report it as a bug back then). With this lesson learnt the workaround for me in my own applications since has been to store such dates as point-in-time for midday while keeping track of the input/output so it only gets used as a date... sometimes tedious, and a last resort. Mostly I have been actively avoiding anything with the taint of timezone due to this bad experience. It's time to reconsider, I guess, since this can cause other forms of silly behaviour. Date/time is not trivial. ... Total agreement here. And, as I said, I am going to school on this with a lot more insight after yours and others' input. ...
Meanwhile if I'm up at that hour and try to schedule a job ... or possibly one of your machines is on the other side of the planet and running on tomorrow's time Aesthetically (and/or mathematically) the midday point is more accurate. It is the middle of the relevant interval (i.e., 24 hours) implied by a date. Midnight is the extreme edge of any date (i.e., not what you would consider as mid-target). Midnight also has confusing English semantics since it can belong to either of its adjacent days. Except for days that are 23-hours long, or 25, or other (it's a big world with all sorts of timezone rules). The day's length may change but I don't believe there is anywhere that allows for the local time of day to equal or be greater than 24:00:00 without rolling over to the next day. How would that fit with ISO-8601? http://en.wikipedia.org/wiki/ISO_8601#Times It's also very useful for common queries (select ... from somelog where logtime >= current_date) and provides a known starting-point from which you can easily calculate the offsets you desire. Agree, but aren't we better writing something like: SELECT ... FROM somelog WHERE logtime::date = CURRENT_DATE; and not relying on an implementation detail for correct behaviour. Timestamps can always be busted back to lesser precision, i.e., date only, but adding time information to a date is extrapolation. IMNSHO this sort of thing should be avoided. I don't know if the current behaviour will be deemed to be too rusted in place for change, or if this proposal has too many adverse consequences, but hope springs eternal. :) Obviously there is no discussion if current PostgreSQL behaviour is SQL standards compliant. I don't think anyone should ask that existing standards compliance be undone. It would sure break a lot of my queries.
And for the many people who want/expect the date to cast to date at 00:00:00 local time it would lead to a load of pitfalls such as naively subtracting 12-hours or requiring the programmer to add complexity to determine how many hours to subtract based on local time zone and current date. This is assuming that someone would need to correct the hour when there was never any time of day information originally present. The naivety here is in attempting to correct something that is arbitrary. This is already a problem with the current system when attempting to correct times in all timezones, i.e., how many hours to add for a least wrong estimate of the time? But you are, of course, free to use the capability that PostgreSQL gives you to define pretty much any data-type you want along with your desired casting rules if you so desire. Just don't expect the built-in definitions to change. Thinking only, but it's way too early on my learning curve to venture there since such a data-type still has to play correctly with the rest of the system. And once I better know the system I may well have learnt to mitigate correctly in the relevant
Re: [GENERAL] Yet Another Timestamp Question: Time Defaults
On Monday, January 21, 2013 at 15:33, Tom Lane wrote: I think it is also arguably contrary to the SQL standard... 17) If TD is the datetime data type TIMESTAMP WITHOUT TIME ZONE, then let TSP be the timestamp precision of TD. b) If SD is a date, then the primary datetime fields hour, minute, and second of TV are set to 0 (zero) and the primary datetime fields year, month, and day of TV are set to their respective values in SV. That has to be the trump card. ... let's just define a new GUC parameter that selects the behavior, with a backwards-compatible default setting. ... Robust application code has to be made to cope with any possible setting of such a GUC, which makes them not nearly such a cheap fix as they seem initially. ... and, why go to significant trouble to implement standards non-compliance when there is no legacy code to support? I could always wish the SQL committee had thought along my lines all those years ago, and then again, I could just do something useful. :) On Monday, January 21, 2013 at 11:38, Adrian Klaver wrote: I must be missing something. I too am in PST:

test=# \d ts_test
     Table "utility.ts_test"
 Column |           Type           | Modifiers
--------+--------------------------+-----------
 ts_fld | timestamp with time zone |

test=# INSERT INTO ts_test VALUES ('2012-01-21');
test=# SELECT * FROM ts_test;
         ts_fld
------------------------
 2012-01-21 00:00:00-08

test=# set timezone = 'AKST9AKDT';
test=# SELECT ts_fld FROM ts_test;
         ts_fld
------------------------
 2012-01-20 23:00:00-09

The only thing missed is we are saying much the same thing. There is no problem with the conversion. It is, as we see from Tom, fully SQL compliant. The only problem is when you are more interested in the date itself and not the point in time. This is just one of several scenarios where the date might get changed in ways that could be difficult to trace... caveat coder. Thanks again everyone for a lot more clarity in my thinking about dates, times and timezones.
Regards Gavan Schneider
[GENERAL]
On Saturday, January 12, 2013 at 04:49, Gavin Flower wrote: On 12/01/13 06:45, Bosco Rama wrote: Shouldn't the value for theta be: 2 * pi() * random() Bosco. Very definitely! :-) One could also ask if the value for theta shouldn't be: tau() * random() http://tauday.com/ :-) Regards Gavan
Re: [GENERAL] query by partial timestamp
On Wednesday, January 9, 2013 at 04:42, Michael Nolan wrote: On 1/8/13, Gavan Schneider wrote: 2. SELECT ... WHERE '2011-01-01'::TIMESTAMP <= col_of_type_timestamp AND col_of_type_timestamp <= '2011-12-31'::TIMESTAMP; This won't quite work, because '2011-12-31'::TIMESTAMP is the same as 2011-12-31 00:00:00.0, so records timestamped later in the day on the 31st would not get selected. SELECT ... WHERE '2011-01-01'::TIMESTAMP <= col_of_type_timestamp AND col_of_type_timestamp < '2012-01-01'::TIMESTAMP; would get all records with a 2011 timestamp. Thank you. I was wondering where Tom and Depesz were coming from when they both said less-than-or-equal-to '2011-12-31'::TIMESTAMP would miss data. I was giving it a rest before re-reading, testing, and/or asking 'the right question'. You have supplied the missing part to my puzzle. Mostly I use DATE so have not had much practice wrestling the TIMESTAMP edge cases. I also prefer the closed-open equality tests as you suggest, especially as they are the 'only way to go' when grouping data on a monthly basis. My only 'defense' is that I tried to craft my examples as close as possible to the OP statement and not introduce the 'next year' unless forced... lame I know. :) Regards Gavan Schneider
Re: [GENERAL] query by partial timestamp
On Tuesday, January 8, 2013 at 09:26, Raymond O'Donnell wrote: On 08/01/2013 22:19, Kirk Wythers wrote: I have a column of type TIMESTAMP, I'd like to query all records from 2011. If it were text I could use a partial such as: WHERE text ~ '2011' There must be a simple way to pull the year part out of a timestamp format. Thanks in advance. You want the extract() function. From my perspective there are at least three ways to attack this problem: (I have not tested these, so apologies for the stupid syntax errors.) 1. SELECT ... WHERE 2011 = extract(YEAR FROM col_of_type_timestamp); 2. SELECT ... WHERE '2011-01-01'::TIMESTAMP <= col_of_type_timestamp AND col_of_type_timestamp <= '2011-12-31'::TIMESTAMP; 3. SELECT ... WHERE (col_of_type_timestamp, col_of_type_timestamp) OVERLAPS (DATE '2011-01-01', DATE '2012-01-01'); Is this the full list? So... generalizing the original question: which approach would yield the best performance and/or compliance with SQL standards? I note Steve Crawford has (strongly) hinted that direct date comparison is more likely to use an index (when available) so I suspect this is the way to go, but would an index based on extract(YEAR...) negate this difference? Regards Gavan Schneider
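A sketch of the two indexing strategies the closing question raises (table and index names are my own):

```sql
-- Plain index on the timestamp column: a closed-open range
-- predicate can use it directly.
CREATE INDEX somelog_ts_idx ON somelog (logtime);
SELECT count(*) FROM somelog
WHERE logtime >= '2011-01-01' AND logtime < '2012-01-01';

-- Expression index: makes the extract() form indexable too, at the
-- cost of maintaining a second, single-purpose index.
CREATE INDEX somelog_year_idx ON somelog (extract(YEAR FROM logtime));
SELECT count(*) FROM somelog
WHERE extract(YEAR FROM logtime) = 2011;
```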
Re: [GENERAL] question about varchar
On Tuesday, December 18, 2012 at 10:15, Rui Li wrote: /* ** text 'cleaned' to remove message digest cruft ** apologies for any errors */ In postgresql, we can create a column as varchar(n), where n is the size of the field, or we can just define the column as varchar:

rli=# CREATE TABLE test (v1 varchar(100), v2 varchar);
CREATE TABLE
rli=# \d test
        Table "public.test"
 Column |          Type          | Modifiers
--------+------------------------+-----------
 v1     | character varying(100) |
 v2     | character varying      |

so my question is: why should we even declare character varying(n) in postgresql when there's an easier option of declaring character varying (without n)? Is there any performance difference between them, or some other reason we should use varchar(n) instead of just varchar? thanks for any help The short answer seems to be: just use text, e.g., CREATE TABLE test (v1 text, v2 text); and only go to varchar(n) if there is a very special need. Lots of good reasons to not use char(n) if you cannot guarantee to fill the field every time (otherwise those right-padded blank characters will just cause problems). There is no speed performance difference between them, but lots of design advantages in favour of text (it's much more 'future proof'). These ideas are much better explained, and tested, here: http://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/ Regards Gavan Schneider
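If a length cap really is part of the business rule, one common pattern (a sketch, not from the thread) is a text column plus a named CHECK constraint, which is easier to change later than a varchar(n) column type:

```sql
CREATE TABLE test (
    v1 text CONSTRAINT test_v1_len CHECK (length(v1) <= 100),  -- explicit limit
    v2 text                                                    -- unconstrained
);

-- Relaxing the limit later is a cheap constraint swap, not a
-- column-type change:
-- ALTER TABLE test DROP CONSTRAINT test_v1_len;
-- ALTER TABLE test ADD CONSTRAINT test_v1_len CHECK (length(v1) <= 200);
```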
Re: [GENERAL] youtube video on pgsql integrity
On Friday, November 30, 2012 at 02:30, czisg0s...@sneakemail.com (Chris Angelico rosuav-at-gmail.com |pg-gts/Basic|) wrote: I like his quoting of the error messages. MySQL: now()/0 - NULL; PostgreSQL: now()/0 - dude, what are you doing. Hmmm... that looked amusing in a perverse way, and a diversion from test cases, so I gave it a try mid psql session, as follows:

pendari=# select '2149Q1'::text similar to '((19)|(20))[0-9]{2}Q[1-4]'::text;
 ?column?
----------
 f
(1 row)

pendari=# select now()/0;
Bus error: 10
pendari:~ gavan$ psql --user=book_keeper --host=localhost pendari
psql: could not connect to server: Connection refused
	Is the server running on host localhost (127.0.0.1) and accepting TCP/IP connections on port 5434?
could not connect to server: Connection refused
	Is the server running on host localhost (::1) and accepting TCP/IP connections on port 5434?
could not connect to server: Connection refused
	Is the server running on host localhost (fe80::1) and accepting TCP/IP connections on port 5434?
pendari:~ gavan$

Ouch! That is a strange way to say "dude, what are you doing", even if it is totally accurate. PostgreSQL: 9.2.1 System: Mac OS X Server Lion 10.7.5 (11G63) Processor: 2.66 GHz Intel Core 2 Duo RAM: 8 GB 1067 MHz DDR3 Nothing got to the log files. On the assumption that stupid user input should not crash the server I consider this to be a bug, but also assume there must be some details in my configuration that have brought this to light. So what extra information is needed to complete the picture? Since I did the build myself I have the ./configure output files, and will see if a core dump has turned up somewhere. Anything else to add? and, what is the next step? Regards Gavan Schneider PS. Of course I will smile bravely if this is the postgresql equivalent of being Rick-rolled :)
Re: [GENERAL] youtube video on pgsql integrity
Tom, thanks for the reply On Thursday, November 29, 2012 at 12:58, Tom Lane wrote: Gavan Schneider writes: pendari=# select now()/0; Bus error: 10 [ scratches head... ] I get the expected error report on my own OS X 10.7.5 machine (though it's just plain Lion not Server). As of Lion the Server functionality is a simple add-on to the base system. Seems the underlying stuff is always there, just you get the management tools and extra bells'n'whistles to configure. This looks like it's a psql-side problem, not a server-side problem, particularly since you say nothing showed up in the server log. Agree that psql had a problem since it left crash logs for both of the instances I had running in separate shells. Impressive! almost quantum (aka spooky action at a distance) effect. Execute stupid statement in one shell and kill your own and the other process as well. I have found an instance of my server process still hanging on but it is not accepting connections from psql or pgadmin3, and at almost zero CPU time, is likely detritus from a failed startup test. So the psql problem also took the server down... to me this all seems to point to a common piece of code with poor interprocess separation, so the misaligned libraries idea seems to have merit. From the psql crash log:

Process:         psql [94318]
Path:            /Volumes/VOLUME/*/psql
Identifier:      psql
Version:         ??? (???)
Code Type:       X86-64 (Native)
Parent Process:  bash [69168]
Date/Time:       2012-11-30 09:48:53.886 +1100
OS Version:      Mac OS X Server 10.7.5 (11G63)
Report Version:  9
Crashed Thread:  0  Dispatch queue: com.apple.main-thread

Exception Type:  EXC_BAD_ACCESS (SIGBUS)
Exception Codes: 0x000a, 0x000100938622

VM Regions Near 0x100938622:
    mapped file    00010093-00010098c000 [ 368K] r-x/rwx SM=COW  Object_id=270c5ca0
    VM_ALLOCATE    00010098c000-000100993000 [  28K] rw-/rwx SM=PRV

Application Specific Information:
objc[94318]: garbage collection is OFF

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   ???    0x000100938622  0 + 4304635426
1   ???    0x000100939807  0 + 4304640007
2   ???    0x000100940df6  0 + 4304670198
3   ???    0x000100931124  0 + 4304605476

Thread 0 crashed with X86 Thread State (64-bit): ...
Logical CPU: 0

Binary Images:
    +psql (??? - ???) /Volumes/VOLUME/*/psql
    ...
    libssl.0.9.8.dylib (44.0.0 - compatibility 0.9.8) /usr/lib/libssl.0.9.8.dylib   *==
    ...
    libedit.3.dylib (3.0.0 - compatibility 2.0.0) /usr/lib/libedit.3.dylib
    ...

From this can I conclude: -- the Apple buggy version is the one in use? -- specifically, that my postgres build would not normally have installed this library in this location? and, -- do you have the GNU readline installed on your system? (While I am having so many adventures doing the normal thing, I hope you can understand why I don't want to be a pioneer as well. :) Regards Gavan Schneider
Re: [GENERAL] ERROR: query has no destination for result data
On Friday, November 23, 2012 at 21:36, Peter Kroon wrote: Hello, I wish to return the SELECT statement. How can I achieve this?

DO $$
DECLARE
    v_some_id int = 14;
BEGIN
    /* more queries here... */
    SELECT 'this is text';
END
$$ LANGUAGE plpgsql;

Best, Peter Kroon Reinterpreting the question and taking the pseudocode semi-literally, is the following closer to what was asked? ref. 39.2. Structure of PL/pgSQL http://www.postgresql.org/docs/9.2/static/plpgsql-structure.html

pendari=# CREATE FUNCTION somefunc() RETURNS text AS $$
pendari$#
pendari$# DECLARE
pendari$#     v_some_id int = 14;
pendari$# BEGIN
pendari$#     /*
pendari$#     more queries here...
pendari$#     */
pendari$#     RETURN 'this is text'::text;
pendari$# END;
pendari$# $$ LANGUAGE plpgsql;
CREATE FUNCTION
pendari=# select somefunc();
   somefunc
--------------
 this is text
(1 row)

Regards Gavan
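If the intent was to return the rows of a SELECT rather than a scalar, a set-returning function is the usual shape (a sketch; the function name and column list are invented):

```sql
CREATE OR REPLACE FUNCTION somerows()
RETURNS TABLE (id int, label text)
LANGUAGE plpgsql
AS $$
DECLARE
    v_some_id int := 14;
BEGIN
    /* more queries here... */
    -- RETURN QUERY hands the result set back to the caller
    RETURN QUERY SELECT v_some_id, 'this is text'::text;
END;
$$;

-- SELECT * FROM somerows();
```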
Re: [GENERAL] [BUGS] Prepared Statement Name Truncation
On Sunday, November 18, 2012 at 01:10, David Johnston wrote: Can the system be made smart enough to not allow inter-schema collisions in addition to same-schema ones? That would seem to be the area of greatest concern - particularly around the usage of truncate/delete/drop. My summary FWIW: 1. Potential exists for internally generated names to exceed maxlen; and 2. this maxlen is shorter than the SQL standard specification; but 3. it may not be worth the performance hit to be SQL compliant in this; with 4. potential for (undetected) name collision and unintended consequences. May I suggest an idea from the days when memory was counted in (tiny int) kB: represent the over-maxlen identifiers as-is up to maxlen-8 bytes; use those last 8 bytes for a 40-bit hash in Base32 for disambiguation; and, if a 1 in 10^12 residual collision risk is considered too high, a side list of overlong names would allow for a second hash disambiguation process. Notes: 1. The choice of Base32 encoding may be a matter of personal preference http://en.wikipedia.org/wiki/Base32, and, if so, I suggest using the Crockford encoding http://www.crockford.com/wrmg/base32.html. (I am impressed: his design is excellent, while also averting some accidental obscenities. None of the others offer this feature :) 2. Something along these lines, with the side table to track the (hopefully) occasional overlong identifiers, could give standards compliance in identifier length while still keeping the working tables compact. 3. (Wild speculation) There may be a sweet spot using even shorter identifiers than is the case now, with full disambiguation, which might improve overall performance. Regards Gavan Schneider
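A rough illustration of the scheme's shape, using md5/hex in place of a Crockford Base32 hash (so the 8-character suffix here carries 32 bits rather than the 40 proposed, and all names are mine):

```sql
-- Disambiguate identifiers longer than a 63-byte "maxlen" by keeping
-- the leading 55 characters and appending an 8-character hash suffix.
SELECT CASE
         WHEN length(ident) <= 63 THEN ident
         ELSE left(ident, 55) || substr(md5(ident), 1, 8)
       END AS stored_name
FROM (VALUES
        ('short_name'),
        (repeat('very_long_prepared_statement_name_', 4))
     ) AS t(ident);
```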