Re: [GENERAL] Questions on dynamic execution and sqlca

2014-08-07 Thread Guillaume Lelarge
2014-08-07 7:24 GMT+02:00 David Johnston david.g.johns...@gmail.com:


   - What are the differences among PL/SQL, PL/pgSQL and pgScript?
 
  The first two are languages you write functions in.  pgScript is simply
 an
  informal way to group a series of statements together and have them
 execute
  within a transaction.
 

 AFAICT, this isn't true. pgScript is a client-specific language. There is
 a whole description of what it can do in pgadmin's manual. This was
 interesting when PostgreSQL didn't have the DO statement. Now that we do,
 it's rather pointless.
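
 For reference, a minimal DO block (available since 9.0) covering the
 typical "run a few statements in one transaction" case might look like
 this sketch (table and data are made up):

```sql
-- Anonymous server-side code block; runs as a single statement,
-- so everything inside executes in one transaction.
DO $$
BEGIN
    CREATE TABLE IF NOT EXISTS demo (id int);
    INSERT INTO demo SELECT g FROM generate_series(1, 10) AS g;
END
$$ LANGUAGE plpgsql;
```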



 Yeah, I probably should have either researched the answer or just left it
 alone.  I am not all that familiar with pgAdmin - I figured it was just a
 souped up script runner with maybe a couple of features like variables but
 otherwise allowing only SQL commands.


No problem :)

pgscript reference is available on
http://pgadmin.org/docs/1.18/pgscript.html.

Note that pgScript isn't specific to pgAdmin, AFAIK.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] Connecting with PostgreSQL 9.1 using the Ubuntu machine user and peer authentication method

2014-08-07 Thread Jorge Arevalo
On Thu, Aug 7, 2014 at 1:14 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/06/2014 03:43 PM, Jorge Arevalo wrote:

 Hello,

 I want to connect to my local installation of PostgreSQL 9.1 using my
 machine user (who is vagrant). So, after reading PostgreSQL
 documentation, I thought I just needed to:

 1. Add username map in pg_ident.conf:

 # MAPNAME   SYSTEM-USERNAME   PG-USERNAME
 vp          vagrant           postgres


 2. Using the map in pg_hba.conf

 # TYPE  DATABASE  USER  ADDRESS  METHOD
 local   all       all            peer map=vp

 But I'm getting the error

 psql: FATAL:  Peer authentication failed for user "vagrant"

 when I try to connect to my server using psql.

 I guess I'm misunderstanding the PostgreSQL manual. But, how could I get
 what I need? (locally connect with the user vagrant like if it was the
 postgres user)


 What OS are you on?

 Per:
 http://www.postgresql.org/docs/9.1/interactive/auth-methods.html#AUTH-PEER

 Peer authentication is only available on operating systems providing the
 getpeereid() function, the SO_PEERCRED socket parameter, or similar
 mechanisms. Currently that includes Linux, most flavors of BSD including
 Mac OS X, and Solaris.



Linux system  (Ubuntu 12.04). Also tested in Mac OS 10.8.

Forgot to mention: in pg_hba.conf there is a previous line:

local postgres peer

No map specified for that line.





 --
 Jorge Arevalo

 http://about.me/jorgeas80



 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
Jorge Arevalo
Freelance developer

http://about.me/jorgeas80


[GENERAL] Upgrading 9.0 to 9.3 - permission denied to pg_upgrade_utility.log

2014-08-07 Thread Russell Keane
Hi,

We're trying to upgrade a 9.0 database to 9.3 using pg_upgrade. The upgrade 
must be automated so a manual pg_dump / pg_restore is not an option.
We use the following command:
D:\PostgreSQL\9.3\Data> "c:\Program Files (x86)\PostgreSQL\9.3\bin\pg_upgrade.exe" -d D:\PostgreSQL\Data -D D:\PostgreSQL\9.3\Data -b "C:\Program Files (x86)\PostgreSQL\9.0\bin" -B "C:\Program Files (x86)\PostgreSQL\9.3\bin" -u postgres -c -P 5432

This is on a Windows 7 64 bit environment.
We've given the postgres windows user full permissions to all folders involved 
and are running the upgrade in a cmd prompt opened with runas  /user:postgres

The upgrade works all the way up until the final hurdle where, in the 
pg_upgrade_utility.log the following statement appears:
command: "C:\Program Files (x86)\PostgreSQL\9.3\bin/initdb" --sync-only "D:\PostgreSQL\9.3\Data" >> "pg_upgrade_utility.log" 2>&1
syncing data to disk ... initdb: could not open file 
D:/PostgreSQL/9.3/Data/pg_upgrade_utility.log: Permission denied

It seems odd that the log states that it has no permission to the file that 
it's writing the error in but I guessed it might be because both pg_upgrade and 
initdb are both trying to write to the same file. I therefore tried the initdb 
command manually and it still fails with the same message.

After this failure, if we try to run PG 9.3 it starts up fine and everything 
seems to be in order. All the data exists and there are no errors in the PG log 
at startup. I don't really want to simply ignore this error but it doesn't seem 
to be causing any issues.

Any ideas?


Regards,

Russell Keane
INPS

Tel: +44 (0)20 7501 7277

Follow ushttps://twitter.com/INPSnews on twitter | visit 
www.inps.co.ukhttp://www.inps.co.uk/



Registered name: In Practice Systems Ltd.
Registered address: The Bread Factory, 1a Broughton Street, London, SW8 3QJ
Registered Number: 1788577
Registered in England
Visit our Internet Web site at www.inps.co.uk
The information in this internet email is confidential and is intended solely 
for the addressee. Access, copying or re-use of information in it by anyone 
else is not authorised. Any views or opinions presented are solely those of the 
author and do not necessarily represent those of INPS or any of its affiliates. 
If you are not the intended recipient please contact is.helpd...@inps.co.uk



Re: [GENERAL] Connecting with PostgreSQL 9.1 using the Ubuntu machine user and peer authentication method

2014-08-07 Thread Jorge Arevalo
On Thu, Aug 7, 2014 at 1:08 AM, John R Pierce pie...@hogranch.com wrote:

 On 8/6/2014 3:43 PM, Jorge Arevalo wrote:


 I want to connect to my local installation of PostgreSQL 9.1 using my
 machine user (who is vagrant). So, after reading PostgreSQL documentation,
 I thought I just needed to:



 wouldn't it be easier to ...

 create user vagrant superuser;
 create database vagrant owner vagrant;

 ?



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



That's an option, yes. I just wanted to avoid the creation of a new
superuser, if I can identify my vagrant machine user with db postgres user



-- 
Jorge Arevalo
Freelance developer

http://about.me/jorgeas80


Re: [GENERAL] Recursive CTE trees + Sorting by votes

2014-08-07 Thread Gregory Taylor
Hello Martijn,

Thanks for the reply, my responses are inline below.

On Wed, Aug 6, 2014 at 5:38 PM, Martijn van Oosterhout klep...@svana.org
wrote:

 On Wed, Aug 06, 2014 at 05:28:09PM -0400, Gregory Taylor wrote:
  We are working on a threaded comment system, and found this post by
 Disqus
  to be super helpful:
 
 
 http://cramer.io/2010/05/30/scaling-threaded-comments-on-django-at-disqus/
 
  The CTE works wonderfully, and we're really happy with the results. The
  last obstacle is figuring out how to sort by a votes field, meanwhile
  preserving the tree structure.

 What do you mean exactly? Do you mean that want everything at the same
 level to be sorted by vote?


Each level of the tree should be sorted by vote, while retaining the
correct hierarchy. So the top level entry with the most votes should be at
the top, plus all of the items beneath it (with each level of the tree
under that row being sorted correctly).



  If we ORDER BY path, votes (assuming we have the same structure as in
 the
  article), we never need tie-breaking on path, so the votes part of
 this
  doesn't even come into the equation.
 
  I suspect we need to do some path manipulation, but I'm not too sure of
  where to begin with this. I attempted incorporating votes into the
 path,
  but I failed pretty badly with this. It's probably way off, but here's my
  last (failed) attempt:
 
  https://gist.github.com/gtaylor/e3926a90fe108d52a4c8

 I think what you need to do is do the ordering withing the CTE itself.
 Something like:

 WITH RECURSIVE cte () AS (
SELECT ... ORDER BY vote DESC
 UNION ALL
SELECT ... JOIN cte ... ORDER BY vote DESC
 ) SELECT * from cte;


It looks like you can't order within a CTE.



 Or another idea, add a column that is the path of the parent:

 WITH RECURSIVE cte () AS (
SELECT array[] as path_parent, array[id] as path, ... ORDER BY vote DESC
 UNION ALL
SELECT cte.path as path_parent, cte.path || comments.id as path, ...
 JOIN cte ... ORDER BY vote DESC
 ) SELECT * from cte order by path, votes desc;


I got this recommendation from someone else, and think that it's probably
the way to go. I've been playing with it unsuccessfully so far, though.
Most certainly because I've got something weirded up. Here's what I have:


WITH RECURSIVE cte (
id, discussion_id, body, num_votes,
class_section_id, modified_time,
author_id, reply_parent_id,
path, votes_path, depth
)  AS (
SELECT  discussion_response.id, discussion_response.discussion_id,
discussion_response.body, discussion_response.num_votes,
discussion_response.last_edited_time,
discussion_response.class_section_id,
discussion_response.author_id,
discussion_response.reply_parent_id,
array[id] AS path,
array[num_votes, id] AS votes_path,
1 AS depth
FROMdiscussion_response
WHERE   reply_parent_id IS NULL AND discussion_id=2763

UNION ALL

SELECT  discussion_response.id, discussion_response.discussion_id,
discussion_response.body, discussion_response.num_votes,
discussion_response.last_edited_time,
discussion_response.class_section_id,
discussion_response.author_id,
discussion_response.reply_parent_id,
cte.path || discussion_response.id,
cte.votes_path || discussion_response.num_votes ||
discussion_response.id,
cte.depth + 1 AS depth
FROMdiscussion_response
JOIN cte ON discussion_response.reply_parent_id = cte.id
WHERE discussion_response.discussion_id=2763
)
SELECT * FROM cte ORDER BY votes_path DESC, path DESC LIMIT 50 OFFSET 0;

The problem with this is that non-root level (depth > 1) rows end up at the
top because of the ordering by votes_path. For example:

id=292839, num_votes=0, reply_parent_id=211957,
votes_path={2,211957,0,292839}, path={211957,292839}, depth=2
id=211957, num_votes=2, reply_parent_id=NULL, votes_path={2,211957},
path={211957}, depth=1

I understand why it is ordered this way, it's just not what I was hoping
for. Ideally this ends up like this:

id=211957, num_votes=2, reply_parent_id=NULL, votes_path={2,211957},
path={211957}, depth=1
id=292839, num_votes=0, reply_parent_id=211957,
votes_path={2,211957,0,292839}, path={211957,292839}, depth=2

Sorting by path causes the correct tree structure to be returned and in
the right order, but obviously it's not
sorted at all by votes.
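
For the archives, one tweak I may try (an untested sketch, adapting the
query above): store the *negated* vote count in the path, so a plain
ascending sort ranks high-vote siblings first while every parent still
sorts as a prefix of its own children.

```sql
-- Sketch only: replaces the two votes_path expressions and the final
-- ORDER BY in the query above; column names are as in that query.
--   non-recursive term:
--     array[-num_votes, id] AS votes_path,
--   recursive term:
--     cte.votes_path || -discussion_response.num_votes
--                    || discussion_response.id,
SELECT * FROM cte ORDER BY votes_path ASC, path ASC LIMIT 50 OFFSET 0;
```

With negation, a parent like {-2,211957} is a strict prefix of its child
{-2,211957,0,292839}, so ascending array comparison keeps the tree intact.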

-- 
Greg Taylor
http://gc-taylor.com


Re: [GENERAL] Recursive CTE trees + Sorting by votes

2014-08-07 Thread Vik Fearing
On 08/07/2014 01:22 PM, Gregory Taylor wrote:
 I got this recommendation from someone else, and think that it's
 probably the way to go. I've been playing with it unsuccessfully so far,
 though. Most certainly because I've got something weirded up. Here's
 what I have:
 
 
 WITH RECURSIVE cte (
 id, discussion_id, body, num_votes,
 class_section_id, modified_time,
 author_id, reply_parent_id,
 path, votes_path, depth
 )  AS (
 SELECT  discussion_response.id, discussion_response.discussion_id,
 discussion_response.body, discussion_response.num_votes,
 discussion_response.last_edited_time,
 discussion_response.class_section_id,
 discussion_response.author_id,
 discussion_response.reply_parent_id,
 array[id] AS path,
 array[num_votes, id] AS votes_path,
 1 AS depth
 FROMdiscussion_response
 WHERE   reply_parent_id IS NULL AND discussion_id=2763
 
 UNION ALL
 
 SELECT  discussion_response.id,
 discussion_response.discussion_id,
 discussion_response.body, discussion_response.num_votes,
 discussion_response.last_edited_time,
 discussion_response.class_section_id,
 discussion_response.author_id,
 discussion_response.reply_parent_id,
 cte.path || discussion_response.id,
 cte.votes_path || discussion_response.num_votes ||
 discussion_response.id,
 cte.depth + 1 AS depth
 FROMdiscussion_response
 JOIN cte ON discussion_response.reply_parent_id = cte.id
 WHERE discussion_response.discussion_id=2763
 )
 SELECT * FROM cte ORDER BY votes_path DESC, path DESC LIMIT 50 OFFSET 0;
 
 The problem with this is that non-root level (depth > 1) rows end up at
 the top because of the ordering by votes_path. For example:
 
 id=292839, num_votes=0, reply_parent_id=211957,
 votes_path={2,211957,0,292839}, path={211957,292839}, depth=2
 id=211957, num_votes=2, reply_parent_id=NULL, votes_path={2,211957},
 path={211957}, depth=1
 
 I understand why it is ordered this way, it's just not what I was hoping
 for. Ideally this ends up like this:
 
 id=211957, num_votes=2, reply_parent_id=NULL, votes_path={2,211957},
 path={211957}, depth=1
 id=292839, num_votes=0, reply_parent_id=211957,
 votes_path={2,211957,0,292839}, path={211957,292839}, depth=2
 
 Sorting by path causes the correct tree structure to be returned and
 in the right order, but obviously it's not
 sorted at all by votes.

Just export the order from your CTE.

WITH RECURSIVE tree AS (
SELECT dr.id,
   ...,
   array[dr.id] as path,
   1 as depth,
   row_number() over (order by dr.num_votes desc) as sort_order
FROM discussion_response AS dr
WHERE dr.reply_parent_id IS NULL
  AND dr.discussion_id = 2763

UNION ALL

SELECT dr.id,
   ...,
   tree.path || dr.id,
   tree.depth + 1,
   row_number() over (order by dr.num_votes desc)
FROM discussion_response AS dr
JOIN tree ON tree.id = dr.reply_parent_id
WHERE NOT array[dr.id] <@ tree.path
)
SELECT *
FROM tree
ORDER BY depth, sort_order
LIMIT 50;
-- 
Vik


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recursive CTE trees + Sorting by votes

2014-08-07 Thread Gregory Taylor
On Thu, Aug 7, 2014 at 8:12 AM, Vik Fearing vik.fear...@dalibo.com wrote:

 Just export the order from your CTE.

 WITH RECURSIVE tree AS (
 SELECT dr.id,
...,
array[dr.id] as path,
1 as depth,
row_number() over (order by dr.num_votes desc) as sort_order
 FROM discussion_response AS dr
 WHERE dr.reply_parent_id IS NULL
   AND dr.discussion_id = 2763

 UNION ALL

 SELECT dr.id,
...,
tree.path || dr.id,
 tree.depth + 1,
row_number() over (order by dr.num_votes desc)
 FROM discussion_response AS dr
 JOIN tree ON tree.id = dr.reply_parent_id
 WHERE NOT array[dr.id] <@ tree.path
 )
 SELECT *
 FROM tree
 ORDER BY depth, sort_order
 LIMIT 50;


It looks like this clobbers the hierarchy by sorting by depth first. I'm
trying to preserve said hierarchy so I can paginate using OFFSET/LIMIT
easily. I'm not sure what I'm shooting for is even possible, though.

-- 
Greg Taylor
http://gc-taylor.com


Re: [GENERAL] Reindex taking forever, and 99% CPU

2014-08-07 Thread Kevin Grittner
Phoenix Kiula phoenix.ki...@gmail.com wrote:

 We spent some time to do some massive cleaning of the data from
 this table. Brought it down to around 630 million rows. Overall
 size of the table including indexes is about 120GB anyway.

Deleting rows that you don't need is good, and once a vacuum has a
chance to run (to completion) against the table it should help with
performance, but unless there is a lot of free space right at the
end of the table, it won't release any disk space back to the OS --
it will be tracked as free space within the table, and reused for
future inserts and updates.  This is generally a good thing,
because it is faster to use space already allocated to the table
than to request that new space is added to the table before using
it.

 modify_date    | timestamp without time zone | default now()

Sooner or later you will realize that this should have been
timestamp with time zone, but that will be a different discussion.

 Rules:
 __track_bigtb_deleted AS
 ON DELETE TO bigtb

It is usually safer to create triggers rather than rules.
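
For example, a delete-tracking rule like the one above could be replaced
by a trigger along these lines (a sketch; the audit table and its columns
are assumptions, not taken from your schema):

```sql
-- Hypothetical trigger-based replacement for an ON DELETE rule.
CREATE TABLE bigtb_deleted (
    alias      text,
    deleted_at timestamptz DEFAULT now()
);

CREATE FUNCTION track_bigtb_deleted() RETURNS trigger AS $$
BEGIN
    INSERT INTO bigtb_deleted (alias) VALUES (OLD.alias);
    RETURN OLD;  -- allow the DELETE to proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bigtb_track_delete
    BEFORE DELETE ON bigtb
    FOR EACH ROW EXECUTE PROCEDURE track_bigtb_deleted();
```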

 Suppose we might have to explore partitioning, which would
 probably be via first letter of the alias? This would lead to
 around 26 + 9 = 35 sub-tables. Is this too many?

No; I wouldn't worry about less than about 100.  As has already
been mentioned, though, this machine is very underpowered for what
you seem to want to do with it.  Don't expect miracles.  In
particular, this is not likely to make most queries any faster, but
will help a lot with maintenance operations, like vacuuming and
indexing.

 max_connections    = 180

 temp_buffers    = 32MB

 work_mem    = 64MB

I just want to make sure you realize that temp_buffers is how much
RAM *each connection* is allowed to reserve indefinitely for
caching temporary tables.  So if all 180 allowed connections were
in use, and they had all used temporary tables of significant size,
then *even when all connections are idle* they would have 5.76GB of
RAM reserved exclusively for caching temp tables.  On a machine
with 4GB RAM that would probably cause things to crash.

Also, work_mem is questionable.  This is not limited to one per
connection; there can be one allocation of that size for each plan
node of an active query which needs working memory (sorts, hash
maps, etc.).  So one connection can be using a number of these at
one time, although only when a query is active.  Because one
connection may be using many, while others are using none, it is
often a good idea to start from the assumption that it should be
sized on the assumption of one allocation per connection.  64MB *
180 = 11.52GB.  This is in addition to the 5.76GB you allow for
temp_buffers.  It is no wonder you are seeing crashes -- you have
configured the database so that it is allowed to use 4x the
machine's RAM just for these two things!

In my experience, a good starting point for work_mem is 25% of
machine RAM / max_connections.  You can adjust from there based on
workload.  That suggests 5.5MB would be about right on your
machine.  I would probably set temp_buffers = 2MB or maybe 3MB.
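
As a sketch, on your 4GB / 180-connection machine that works out to
something like the following (shown as per-session SETs; the same values
would go in postgresql.conf for a cluster-wide default):

```sql
-- 25% of 4GB / 180 connections ≈ 5.5MB per sort/hash allocation
SET work_mem = '5MB';
-- worst case: 180 connections * 2MB = 360MB pinned for temp tables
SET temp_buffers = '2MB';
```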

 enable_indexscan    = on

These should all be on in the config file, always.  (That is the
default if the entries are commented out, of course.)  The enable_*
settings are mostly intended for diagnostic purposes, although in
extreme cases people have been known to disable a specific setting
just for the duration of a specific query; there is usually a
better solution than that, however.
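
For example, the duration-of-one-query case usually looks like this
sketch (the table and WHERE clause are hypothetical):

```sql
-- Diagnostic only: discourage sequential scans for this session,
-- compare the plans, then restore the default.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM bigtb WHERE alias = 'abc';
RESET enable_seqscan;
```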

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Connecting with PostgreSQL 9.1 using the Ubuntu machine user and peer authentication method

2014-08-07 Thread Ray Stell

On Aug 6, 2014, at 6:56 PM, Frank Pinto fr...@ayalo.co wrote:
 
 and make sure you restarted the server so your changes take effect.
 

reload will get it, also, and is sometimes easier to come by. 

The pg_hba.conf file is read on start-up and when the main server process 
receives a SIGHUP signal. If you edit the file on an active system, you will 
need to signal the postmaster (using pg_ctl reload or kill -HUP) to make it 
re-read the file.
http://www.postgresql.org/docs/9.3/static/auth-pg-hba-conf.html
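
If you already have a superuser session open, the same SIGHUP can be sent
from SQL:

```sql
-- Equivalent to pg_ctl reload; returns true if the signal was sent.
SELECT pg_reload_conf();
```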





Re: [GENERAL] Connecting with PostgreSQL 9.1 using the Ubuntu machine user and peer authentication method

2014-08-07 Thread Adrian Klaver

On 08/07/2014 01:39 AM, Jorge Arevalo wrote:



What OS are you on?

Per:

http://www.postgresql.org/__docs/9.1/interactive/auth-__methods.html#AUTH-PEER
http://www.postgresql.org/docs/9.1/interactive/auth-methods.html#AUTH-PEER

Peer authentication is only available on operating systems
providing the getpeereid() function, the SO_PEERCRED socket
parameter, or similar mechanisms. Currently that includes Linux,
most flavors of BSD including Mac OS X, and Solaris.



Linux system  (Ubuntu 12.04). Also tested in Mac OS 10.8.

Forgot to mention: in pg_hba.conf there is a previous line:

local postgres peer


That would be the issue, assuming you are doing something along the lines 
of psql -d some_db -U postgres, per:


http://www.postgresql.org/docs/9.1/interactive/auth-pg-hba-conf.html

 The first record with a matching connection type, client address, 
requested database, and user name is used to perform authentication. 
There is no fall-through or backup: if one record is chosen and the 
authentication fails, subsequent records are not considered.
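
In other words, with an ordering like this sketch, a local connection as 
postgres matches the first record and the map on the second record is 
never consulted:

```
# TYPE  DATABASE  USER      METHOD
local   all       postgres  peer          # matched first for user postgres
local   all       all       peer map=vp   # never reached for that user
```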



If you are not connecting as above, you will need to show us your 
connection string.




No map specified for that line.



--
Jorge Arevalo





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Adrian Klaver

On 08/06/2014 11:29 PM, Gaurav Tiwari G wrote:

Hi,

Plz find my comment inline below.

One more point: if we fire the insert query directly, it works fine, but
through the application it shows the problem.


Points at a configuration issue in the application.



Regards,

Gaurav Tiwari




  We are facing some inconsistence behaviour of Postgres. We have

  deployed our database on a server where timezone is GMT+3 hours.

What Postgres version?

Gaurav→ Postgres Version 9.1

How was Postgres installed and on what OS?

Gaurav→ Solaris x86

 

  We have application which is running on the same server.

What is the application using to connect to the server, ODBC, JDBC, etc?

Gaurav→ JDBC4


The Postgres JDBC driver?
Is there anything showing up in the application logs?
Is this going through something else, Tomcat for instance?



 

  When application starts, it is inserting the correct timestamp in the

  table but after running few minutes/hours we have observed that 3

  hours is added into the timestamp in table. The problem resolved once

  the application restarted.

So once the application is restarted the timestamp offset never changes
or do you have to restart periodically to reset?

Gaurav→ We don't change it manually (don't know if there is any mechanism
provided by Postgres itself to reset it periodically).


I am not following. You start the application, it has timestamp issues, 
you restart it, the issues go away.


So how do they come back, unless the application is restarted?

If there is something in either Postgres or the application causing a 
restart, there should be something in the logs. Is there?





 

  Our application is putting correct data (checked by the insert query)

  and in DB it is seen that 3 hours is added. The figure 3 is slightly

  important as the server is deployed GMT+3 hours.

Can you show an example of an INSERT.


An example of an INSERT statement and the corresponding inserted row in 
the table would go along way in helping to troubleshoot this.




 

  Property in Postgres.conf:

 

  #timezone = '(defaults to server environment setting)'

Is that the actual setting from postgresql.conf?

Gaurav→ Yes, that is the actual setting.

There is not a timezone specified?

Gaurav→ Yes, no time zone is specified.


Forgot that Postgres did not start setting the timezone in 
postgresql.conf until 9.2+




 

  Table Structure:

  Column      | Type                        | Modifiers
  ------------+-----------------------------+-----------
  msisdn      | character varying(100)      |
  offerid     | character varying(100)      |
  expdatetime | timestamp without time zone |

If you are interested in timezone(time) correctness this should be 'with
timezone'.

Gaurav→ But we don't want it with time zone, as the requirement is without time zone.

  Regards,

 

  M Tarkeshwar Rao




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Adrian Klaver

On 08/07/2014 07:49 AM, Gaurav Tiwari G wrote:

Hi,

1. Postgres configuration might be the suspect area, but we are not
specifying any timezone-related property.


I was not talking about Postgres, it is not the application but the 
server. I was referring to whatever Java application you are running 
against Postgres.




2. We are using Postgres Jdbc4 driver.

3. When the application starts everything runs fine, but after a few
minutes the issue occurs and persists until the application is restarted.
Once restarted, everything works fine again for a few requests.

One more point: the DB is deployed at GMT+3 (Saudi Arabia, where the
timezone is Asia/Riyadh).

So do we need to set the timezone in postgres.conf by putting timezone =
'Asia/Riyadh' there? Will that make any sense, or is there any other way
to do so?


From previous posts the timezone is set to the correct one, as Postgres 
is picking up the machine's timezone environment variable.


Dates and times are a complex subject with a lot of moving parts. To 
help narrow down the number of parts we really need to see what is being 
provided to the Postgres server and what the result is. Otherwise, 
there will not be any progress toward a solution.




Regards,

Gaurav Tiwari





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] order by question

2014-08-07 Thread Steve Clark

Hello,

I am confused by how postgres 8.4.13 is sorting my data.

\d test
Table public.test
 Column | Type | Modifiers
+--+---
 data   | text |

select * from test order by data;
   data
--

 -
 --
 1
 11
 11F
 1F
 a
 b
 C
 F
 -F
  Feneric
  Generic
(14 rows)

The first row is a single space, the next row a single -, the next two -- .
What I don't understand is why the '-F', the ' Feneric' and the ' Generic' sort 
where they do.

I would expect the output to be like this:

   data
--

  Feneric
  Generic
 -
 --
 -F
 1
 11
 11F
 1F
 a
 b
 C
 F
(14 rows)

foxboxconfig=# show client_encoding;
 client_encoding
-----------------
 SQL_ASCII

foxboxconfig=# show lc_collate;
 lc_collate
-------------
 en_US.UTF-8

foxboxconfig=# show lc_ctype;
  lc_ctype
-------------
 en_US.UTF-8


Thanks for any clarification.


--
Stephen Clark









Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Adrian Klaver

On 08/07/2014 08:09 AM, Gaurav Tiwari G wrote:

Hi,

Java application configuration has no issue, we have cross check it.

On the third point, what data do you require? You mentioned "what is being
provided to the Postgres server and what the result is". I think 
postgres.conf would be sufficient, or will any other files be required?


A sample of the actual data being INSERTed (the actual INSERT query) into 
the table and then the data as it is in the table. So a before and after 
of the timestamp.


The reason I am asking to see the actual data has, among other things, 
to do with the Postgres JDBC driver. In recent versions it adds a 
TimeZone setting to the connection string. Below is a link to a message 
in a previous thread that dealt with a JDBC timezone issue, that details 
the code involved:


http://www.postgresql.org/message-id/538b4f3b.5070...@aklaver.com

Not sure if this is involved or not, but looking at what is being 
inserted would help determine that.




Regards,
Gaurav Tiwari





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] order by question

2014-08-07 Thread Kevin Grittner
Steve Clark scl...@netwolves.com wrote:

 I am confused by how postgres 8.4.13 is sorting my data.

 select * from test order by data;
 data
 --

   -
   --
   1
   11
   11F
   1F
   a
   b
   C
   F
   -F
   Feneric
   Generic
 (14 rows)

 The first row is a single space, the next row a single -, the next two -- .
 What I don't understand is why the '-F', the ' Feneric' and
 the ' Generic' sort where they do.

 I would expect the output to be like this:

 data
 --

   Feneric
   Generic
   -
   --
   -F
   1
   11
   11F
   1F
   a
   b
   C
   F
 (14 rows)

   lc_collate
 -
   en_US.UTF-8

PostgreSQL uses the OS collations.  What you are getting matches my
Ubuntu 14.04 machine:

kgrittn@Kevin-Desktop:~/pg/master$ echo $LANG
en_US.UTF-8
kgrittn@Kevin-Desktop:~/pg/master$ sort <<XXX

   Feneric
   Generic
   -
   --
   -F
   1
   11
   11F
   1F
   a
   b
   C
   F
 XXX

  -
  --
  1
  11
  11F
  1F
  a
  b
  C
  F
  -F
  Feneric
  Generic

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] The dangers of streaming across versions of glibc: A cautionary tale

2014-08-07 Thread Matthew Kelly
We are currently running with the en_US.UTF-8 collation.  It was a decision 
made long ago, and seeing as we never actually rely on the sort order of 
internationalized strings (other than for stability, apparently), we have never 
had any motivation to change this practice.

Some way of versioning collations, which is not tied to glibc, seems immensely 
appealing.  Without a good way of testing the differences between glibc sort 
versions, it seems the only safe thing to do at the moment is to guarantee all 
streaming replicas run from the exact same OS image, which is fine until you 
want to upgrade your OS, and need to do a dump-restore instead of being able to 
do that in a rolling fashion.



To Bruce's point the way I was able to test for this issue in a particular 
index was (approximately):
-- Assuming textfield is what the index is on; this causes the query planner to
-- scan the index and give the position in the index.
CREATE TABLE index_order AS (SELECT textfield, dense_rank() OVER (ORDER BY 
textfield) AS i_order FROM the_table);
-- No index here, postgres must sort
CREATE TABLE both_orders AS (SELECT textfield, i_order, dense_rank() OVER (ORDER 
BY textfield) AS sort_order FROM index_order);
-- If this doesn't return zero, you have a problem
SELECT count(*) FROM both_orders WHERE i_order <> sort_order;

This method is really slow on a big table, and I'm not going to promise it 
always works, but that is how we found the root cause.

- Matt K

On Aug 6, 2014, at 9:46 PM, Peter Geoghegan peter.geoghega...@gmail.com
 wrote:

 On Wed, Aug 6, 2014 at 6:30 PM, Tatsuo Ishii is...@postgresql.org wrote:
 Another idea could be having our own collation data to isolate any
 changes from outside world. I vaguley recall this had been discussed
 before.
 
 That's probably the best solution. It would not be the first time that
 we decided to stop relying on the operating system's facilities due to
 various problems (e.g. we used to use the C standard library qsort()
 until about 2006). The only problem is that it's a lot of work. One
 possible solution that has been proposed is to adopt ICU [1]. That
 might allow us to say this is the official way that PostgreSQL 9.6
 sorts Japanese; you may use the old way if you want, but it's
 incompatible with the new way. ICU would give us a standard
 versioning interface [2]. They seem to take this seriously, and are
 aware of our considerations around B-Tree indexes on text.
 
 [1] https://wiki.postgresql.org/wiki/Todo:ICU
 [2] http://userguide.icu-project.org/collation/architecture#TOC-Versioning
 -- 
 Regards,
 Peter Geoghegan





Re: [GENERAL] order by question

2014-08-07 Thread Steve Clark

On 08/07/2014 11:36 AM, Kevin Grittner wrote:

Steve Clark scl...@netwolves.com wrote:


I am confused by how postgres 8.4.13 is sorting my data.
select * from test order by data;
  data
--

-
--
1
11
11F
1F
a
b
C
F
-F
Feneric
Generic
(14 rows)

The first row is a single space, the next row a single -, the next two -- .
What I don't understand is why the '-F', the ' Feneric' and
the ' Generic' sort where they do.

I would expect the output to be like this:

  data
--

Feneric
Generic
-
--
-F
1
11
11F
1F
a
b
C
F
(14 rows)
lc_collate
-
en_US.UTF-8

PostgreSQL uses the OS collations.  What you are getting matches my
Ubuntu 14.04 machine:

kgrittn@Kevin-Desktop:~/pg/master$ echo $LANG
en_US.UTF-8
kgrittn@Kevin-Desktop:~/pg/master$ sort <<XXX

Feneric
Generic
-
--
-F
1
11
11F
1F
a
b
C
F
XXX

   -
   --
   1
   11
   11F
   1F
   a
   b
   C
   F
   -F
   Feneric
   Generic

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Hi Kevin,

Thanks for the response. I get the same thing as postgres if I sort from the 
command line too. But I don't understand why.

I would expect '-F' to sort immediately after '-' and '--', not after 'F', and
' Feneric' (space + Feneric) and ' Generic' (space + Generic) to sort 
immediately after ' ' (space).

It is like the space character and the - in -Letter is ignored.


--
Stephen Clark





Re: [GENERAL] Recursive CTE trees + Sorting by votes

2014-08-07 Thread Paul Jungwirth
 Or another idea, add a column that is the path of the parent:

I don't think this will work. The problem is you need the full path to
keep the children with their parents, but you also need the score. If
you make the path an array of (-votes, id) tuples (perhaps flattened
for simplicity), then you get the correct ordering. That way at every
stage you are sorting by votes, but still keeping children with their
parents:

comments= WITH RECURSIVE cte (id, message, author, path, parent_id,
depth, votes)  AS (
SELECT  id,
message,
author,
array[-votes,id] AS path,
parent_id,
1 AS depth, votes
FROMcomments
WHERE   parent_id IS NULL
UNION ALL
SELECT  comments.id,
comments.message,
comments.author,
cte.path || -comments.votes || comments.id,
comments.parent_id,
cte.depth + 1 AS depth, comments.votes
FROMcomments
JOIN cte ON comments.parent_id = cte.id
)
SELECT id, message, author, path, depth, votes FROM cte
ORDER BY path;
 id |   message   | author |   path| depth | votes
+-++---+---+---
  5 | Very interesting post!  | thedz  | {-3,5}| 1 | 3
  8 | Fo sho, Yall| Mac| {-3,5,-12,8}  | 2 |12
  7 | Agreed  | G  | {-3,5,-5,7}   | 2 | 5
  6 | You sir, are wrong  | Chris  | {-3,5,-3,6}   | 2 | 3
  1 | This thread is really cool! | David  | {-1,1}| 1 | 1
  3 | I agree David!  | Daniel | {-1,1,-4,3}   | 2 | 4
  2 | Ya David, we love it!   | Jason  | {-1,1,-3,2}   | 2 | 3
  4 | gift Jason  | Anton  | {-1,1,-3,2,-15,4} | 3 |15
(8 rows)

Time: 0.966 ms

Paul


On Wed, Aug 6, 2014 at 2:38 PM, Martijn van Oosterhout
klep...@svana.org wrote:
 On Wed, Aug 06, 2014 at 05:28:09PM -0400, Gregory Taylor wrote:
 We are working on a threaded comment system, and found this post by Disqus
 to be super helpful:

 http://cramer.io/2010/05/30/scaling-threaded-comments-on-django-at-disqus/

 The CTE works wonderfully, and we're really happy with the results. The
 last obstacle is figuring out how to sort by a votes field, meanwhile
 preserving the tree structure.

 What do you mean exactly? Do you mean that you want everything at the same
 level to be sorted by vote?

 If we ORDER BY path, votes (assuming we have the same structure as in the
 article), we never need tie-breaking on path, so the votes part of this
 doesn't even come into the equation.

 I suspect we need to do some path manipulation, but I'm not too sure of
 where to begin with this. I attempted incorporating votes into the path,
 but I failed pretty badly with this. It's probably way off, but here's my
 last (failed) attempt:

 https://gist.github.com/gtaylor/e3926a90fe108d52a4c8

 I think what you need to do is do the ordering within the CTE itself.
 Something like:

 WITH RECURSIVE cte () AS (
SELECT ... ORDER BY vote DESC
 UNION ALL
SELECT ... JOIN cte ... ORDER BY vote DESC
 ) SELECT * from cte;

 Or another idea, add a column that is the path of the parent:

 WITH RECURSIVE cte () AS (
SELECT array[] as path_parent, array[id] as path, ... ORDER BY vote DESC
 UNION ALL
SELECT cte.path as path_parent, cte.path || comments.id as path, ... JOIN 
 cte ... ORDER BY vote DESC
 ) SELECT * from cte order by path, votes desc;

 Hope this helps,
 --
 Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
-- Arthur Schopenhauer





-- 
_
Pulchritudo splendor veritatis.




Re: [GENERAL] order by question

2014-08-07 Thread Kevin Grittner
Steve Clark scl...@netwolves.com wrote:

 It is like the space character and the - in -Letter is ignored.

Yes, that is how the en_US collation is defined.  I think the goal
is to make it something like phone book ordering.  If you still
have a white pages book around, look at how a business name with
a hyphen (or a hyphenated last name) is sorted there.
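One way to see the two behaviours side by side, assuming the same test table, is to force byte-order comparison with an explicit COLLATE clause:

```sql
-- Locale-aware ordering (en_US): whitespace and '-' get low weight,
-- so '-F' lands next to 'F'.
SELECT data FROM test ORDER BY data;

-- Byte-by-byte ordering: ' Feneric', '-', '--' and '-F' sort first,
-- matching the ordering you expected.
SELECT data FROM test ORDER BY data COLLATE "C";
```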

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey
I have a database with the following structure:

Create table bar...
Create function subset_of_bar ... (which does a select on a subset of bar)
Create table foo...
Alter table foo add constraint mycheck check subset_of_bar(id);

I pg_dumped my database, and tried to pg_restore it on another server.  When I 
do that, foo is being loaded before bar (and since there is no data in bar, the 
inserts to foo fail).  I suspect that this is because the check constraint 
calls a function, which hides the dependency between the tables, so pg_dump 
does not get the required ordering of tables.

I thought (hoped?) that using --disable-triggers *might* help with that, so I 
tried to do the pg_restore in two steps (one with --schema-only, and then 
another one with --data-only and --disable-triggers), but --disable-triggers does 
not seem to disable constraints (which is not surprising).

I've done some searching and am coming up empty.  Is there a way to get 
pg_restore to apply constraints AFTER loading all the tables (I'd even be fine 
with a multi-step reload process of table structures, then data, then indexes, 
then constraints).  I'd rather not go down the path of having to maintain an 
explicit ordered list of tables (but I might have to).
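For what it's worth, pg_restore can already split a restore into phases with --section (9.2+), though a CHECK constraint written on the table itself travels with CREATE TABLE in the pre-data section, so this alone doesn't separate it from the table:

```shell
# Sketch only; assumes a custom-format dump named mydb.dump.
pg_restore --section=pre-data  -d mydb mydb.dump  # tables, functions
pg_restore --section=data      -d mydb mydb.dump  # table contents
pg_restore --section=post-data -d mydb mydb.dump  # indexes, FKs, triggers
```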


Every time you report an issue without creating a ticket, God kills a kitten.  
Please think of the kittens.

Disclaimer

THIS IS A CONFIDENTIAL COMMUNICATION. The information contained in this e-mail 
is attorney-client privileged and confidential, intended only for the use of 
the individual or entity named above. If the reader of this message is not the 
intended recipient, or the employee or agent responsible to deliver it to the 
intended recipient, you are here by notified that any dissemination, 
distribution or copying of this communication is strictly prohibited. If you 
have received this communication in error, please notify us immediately by 
reply e-mail or by telephone (908) 233-8500.Thank you..


Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Gaurav Tiwari G
Hi,



Plz find my comment inline below.



One more point: if we fire the insert query directly, it works fine, but 
through the application it shows the problem.



Regards,

Gaurav Tiwari





-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, August 06, 2014 6:42 PM
To: M Tarkeshwar Rao; pgsql-general@postgresql.org
Cc: Gaurav Tiwari G
Subject: Re: [GENERAL] Adding 3 hours while inserting data into table



On 08/06/2014 03:50 AM, M Tarkeshwar Rao wrote:

 Hi Team,



 We are facing some inconsistence behaviour of Postgres. We have

 deployed our database on a server where timezone is GMT+3 hours.



What Postgres version?

Gaurav-- Postgres Version 9.1

How was Postgres installed and on what OS?

Gaurav-- Solaris x86







 We have application which is running on the same server.



What is the application using to connect to the server, ODBC, JDBC, etc?

Gaurav-- JDBC4





 When application starts, it is inserting the correct timestamp in the

 table but after running few minutes/hours we have observed that 3

 hours is added into the timestamp in table. The problem resolved once

 the application restarted.



So once the application is restarted the timestamp offset never changes or do 
you have to restart periodically to reset?

Gaurav-- We don't change it manually (don't know if there is any mechanism provided by 
Postgres itself to reset it periodically)



 Our application is putting correct data (checked by the insert query)

 and in DB it is seen that 3 hours is added. The figure 3 is slightly

 important as the server is deployed GMT+3 hours.



Can you show an example of an INSERT.





 Below are some observations and some command output along with table

 structure.



 JEDEMM02:/# date;



 Tue Aug  5 16:41:52 AST 2014







 *Property in Postgres.conf*



 #timezone = '(defaults to server environment setting)'



Is that the actual setting from postgresql.conf?

Gaurav-- Yes, it is the actual setting.

There is not a timezone specified?

Gaurav-- Yes, no time zone is specified.



 **



 *Table Structure:*



 *Column |Type | Modifiers*



 *+-+---*



 *msisdn | character varying(100)  |*



 *offerid| character varying(100)  |*



 *expdatetime| timestamp without time zone |*



If you are interested in timezone(time) correctness this should be 'with 
timezone'.

Gaurav-- But we don't want with timezone, as requirement is without timezone



 Regards,



 M Tarkeshwar Rao







--

Adrian Klaver

adrian.kla...@aklaver.commailto:adrian.kla...@aklaver.com


[GENERAL] ENCODING = 'LATIN1' LC_CTYPE?

2014-08-07 Thread Preuß , Sylvia
Dear list,

I'd like to create a database with ENCODING LATIN1 .
CREATE DATABASE z_latin1
  WITH OWNER = admin
   ENCODING = 'LATIN1'
   TABLESPACE = pg_default
   LC_COLLATE = 'German_Germany.1252'
   LC_CTYPE = 'German_Germany.1252'
   CONNECTION LIMIT = -1;

FEHLER: Kodierung »LATIN1« stimmt nicht mit Locale »German_Germany.1252« überein
SQL Status: 22023
Detail: Die gewählte LC_CTYPE-Einstellung verlangt die Kodierung »WIN1252«.
(In English: ERROR: encoding "LATIN1" does not match locale "German_Germany.1252";
the chosen LC_CTYPE setting requires the encoding "WIN1252".)

PostgreSQL 9.2.8, compiled by Visual C++ build 1600, 64-bit

Which LC_COLLATE and LC_CTYPE entry do I need to fit latin1?
(It won't help to set the encoding to UTF8.)

Thanks in advance Sylvia





[GENERAL] {xml}

2014-08-07 Thread Ramesh T
Hello,
  when I ran the following query on Postgres 9.3,
SELECT  xmlagg(xmlelement(name e,part_id||',')) from part;

result
..
{xml}

how to get part_id's..? please let me know ..
advance thanks,
R..


Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Gaurav Tiwari G
Hi,

Java application configuration has no issue; we have cross-checked it.

On the third point, what data do you require? You mentioned "what is being 
provided to the Postgres server and what the result is". I think 
postgres.conf would be sufficient, or will any other files be required?

Regards,
Gaurav Tiwari


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Thursday, August 07, 2014 8:30 PM
To: Gaurav Tiwari G; M Tarkeshwar Rao; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adding 3 hours while inserting data into table

On 08/07/2014 07:49 AM, Gaurav Tiwari G wrote:
 Hi,

 1. Postgres configuration might be the suspect area but as we are not 
 specifying any timezone related property.

I was not talking about Postgres, it is not the application but the server. I 
was referring to whatever Java application you are running against Postgres.


 2. We are using Postgres Jdbc4 driver.

 3. When Application starts everything is running fine, but after few
 minutes issue occurs and issue remain persist until application restart.
 Once the application restarted everything is working fine for few request.

 One more point, as DB is deployed on GMT+3 hr (Saudi Arabic where
 timezone is Asia/Riyadh).

 So we need to set the timzone in postgres.conf  by *timezone =
 'Asia/Riyadh' *in**postgres.conf.  So will it make any sense or is there
 any way to do so. ??**

 From previous posts the timezone is set to the correct one, as Postgres 
is picking up the machines timezone environment variable.

Dates and times are a complex subject with a lot of moving parts. To 
help narrow down the number of parts we really need to see what is being 
provided to the Postgres server and what the result is. Otherwise , 
there will not be any progress to a solution.


 Regards,

 Gaurav Tiwari




-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Adding 3 hours while inserting data into table

2014-08-07 Thread Gaurav Tiwari G
Hi,



1. Postgres configuration might be the suspect area but as we are not 
specifying any timezone related property.

2. We are using Postgres Jdbc4 driver.

3. When the application starts, everything runs fine, but after a few minutes 
the issue occurs and persists until the application is restarted. Once the 
application is restarted, everything works fine for a few requests.



One more point, as DB is deployed on GMT+3 hr (Saudi Arabic where timezone is 
Asia/Riyadh).

So we need to set the timezone in postgres.conf with timezone = 'Asia/Riyadh'. 
Will that make any difference, or is there another way to do so?



Regards,

Gaurav Tiwari





-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Thursday, August 07, 2014 8:05 PM
To: Gaurav Tiwari G; M Tarkeshwar Rao; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adding 3 hours while inserting data into table



On 08/06/2014 11:29 PM, Gaurav Tiwari G wrote:

 Hi,



 Plz find my comment inline below.



 One more point, if we direct fired insert query, then it is working

 fine but with application, it is showing problem.



Points at a configuration issue in the application.





 Regards,



 Gaurav Tiwari





   We are facing some inconsistence behaviour of Postgres. We have



   deployed our database on a server where timezone is GMT+3 hours.



 What Postgres version?



 Gaurav-- Postgres Version 9.1



 How was Postgres installed and on what OS?



 Gaurav-- Solaris x86



  



   We have application which is running on the same server.



 What is the application using to connect to the server, ODBC, JDBC, etc?



 Gaurav-- JDBC4



The Postgres JDBC driver?

Is there anything showing up in the application logs?

Is this going through something else, Tomcat for instance?





  



   When application starts, it is inserting the correct timestamp in

 the



   table but after running few minutes/hours we have observed that 3



   hours is added into the timestamp in table. The problem resolved

 once



   the application restarted.



 So once the application is restarted the timestamp offset never

 changes or do you have to restart periodically to reset?



 Gaurav-- We don't change it manually (don't know if there is any mechanism

 provided by Postgres itself to reset it periodically)



I am not following. You start the application, it has timestamp issues, you 
restart it, the issues go away.



So how do they come back, unless the application is restarted?



If there is something in either Postgres or the application causing a restart, 
there should be something in the logs. Is there?







  



   Our application is putting correct data (checked by the insert query)



   and in DB it is seen that 3 hours is added. The figure 3 is slightly



   important as the server is deployed GMT+3 hours.



 Can you show an example of an INSERT.



An example of an INSERT statement and the corresponding inserted row in

the table would go along way in helping to troubleshoot this.





  



   *Property in Postgres.conf*



  



   #timezone = '(defaults to server environment setting)'



 Is that the actual setting from postgresql.conf?



 Gaurav-- Yes, it is the actual setting.



 There is not a timezone specified?



 Gaurav-- Yes, no time zone is specified.



Forgot that Postgres did not start setting the timezone in

postgresql.conf until 9.2+





  



   **



  



   *Table Structure:*



  



   *Column |Type | Modifiers*



  



   *+-+---*



  



   *msisdn | character varying(100)  |*



  



   *offerid| character varying(100)  |*



  



   *expdatetime| timestamp without time zone |*



 If you are interested in timezone(time) correctness this should be 'with

 timezone'.



 Gaurav-- But we don't want with timezone, as the requirement is without timezone



   Regards,



  



   M Tarkeshwar Rao







--

Adrian Klaver

adrian.kla...@aklaver.commailto:adrian.kla...@aklaver.com


Re: [GENERAL] Upgrading 9.0 to 9.3 - permission denied to pg_upgrade_utility.log

2014-08-07 Thread Bruce Momjian
On Thu, Aug  7, 2014 at 09:42:13AM +0100, Russell Keane wrote:
 Hi,
 
  
 
 We’re trying to upgrade a 9.0 database to 9.3 using pg_upgrade. The upgrade
 must be automated so a manual pg_dump / pg_restore is not an option.
 
 We use the following command:
 
 D:\PostgreSQL\9.3\Data> "c:\Program Files (x86)\PostgreSQL\9.3\bin\
 pg_upgrade.exe" -d "D:\PostgreSQL\Data" -D "D:\PostgreSQL\9.3\Data" -b "C:\
 Program Files (x86)\PostgreSQL\9.0\bin" -B "C:\Program Files (x86)\PostgreSQL\
 9.3\bin" -u postgres -c -P 5432
 
  
 
 This is on a Windows 7 64 bit environment.
 
 We’ve given the postgres windows user full permissions to all folders involved
 and are running the upgrade in a cmd prompt opened with runas  /user:postgres
 
  
 
 The upgrade works all the way up until the final hurdle where, in the
 pg_upgrade_utility.log the following statement appears:
 
 command: "C:\Program Files (x86)\PostgreSQL\9.3\bin/initdb" --sync-only "D:\
 PostgreSQL\9.3\Data" >> "pg_upgrade_utility.log" 2>&1
 
 syncing data to disk ... initdb: could not open file D:/PostgreSQL/9.3/Data/
 pg_upgrade_utility.log: Permission denied

Uh, it would appear you are running pg_upgrade from _inside_ the 9.3
data directory.  That should work, but it would probably be better to
run it in another directory where you also have write permission.  I
think the problem is that initdb --sync-only is syncing those files to
disk as you are writing to the log file.

We have had Windows problems of two processes writing to the same file,
but that is usually a different error message, e.g.

 * For some reason, Windows issues a file-in-use error if we write data to
 * the log file from a non-primary thread just before we create a
 * subprocess that also writes to the same log file.  One fix is to sleep
 * for 100ms.  A cleaner fix is to write to the log file _after_ the
 * subprocess has completed, so we do this only when writing from a
 * non-primary thread.  fflush(), running system() twice, and pre-creating
 * the file do not seem to help.

I think that returns a share violation error.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Shaun Thomas

On 08/07/2014 10:00 AM, Chris Curvey wrote:


I’ve done some searching and am coming up empty.  Is there a way to get
pg_restore to apply constraints AFTER loading all the tables


Kinda. PostgreSQL applies constraints with hidden system-level triggers. 
An easy way to turn them off is to use this syntax:


ALTER TABLE foo DISABLE TRIGGER ALL;

Then you just need to generate one of these for each of your tables, and 
run it between your table schema restore, and the data import. You can 
generate a script that does it all with something like this:


COPY (
  SELECT 'ALTER TABLE ' || schemaname || '.' || quote_ident(tablename)
 || ' DISABLE TRIGGER ALL;'
FROM pg_tables
) TO '/tmp/stop_triggers.sql'

After your data is loaded, just change DISABLE to ENABLE, and run the 
script again.


--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email




Re: [GENERAL] The dangers of streaming across versions of glibc: A cautionary tale

2014-08-07 Thread Bruce Momjian
On Thu, Aug  7, 2014 at 03:07:04PM +, Matthew Kelly wrote:
 We are currently running with the en_US.UTF-8 collation.  It was a decision 
 made long ago, and seeing as we never actually rely on the sort order of 
 internationalized strings (other than for stability, apparently), we have 
 never had any motivation to change this practice.
 
 Some way of versioning collations, which is not tied to glibc seems immensely 
 appealing.  Without a good way of testing the differences between glibc sort 
 versions, it seems the only safe thing to do at the moment is to guarantee 
 all streaming replica's run from the exact same OS image.  Which is fine 
 until you want to upgrade your OS, and need to do a dump-restore instead of 
 being able to do that in a rolling fashion.
 
 
 
 To Bruce's point the way I was able to test for this issue in a particular 
 index was (approximately):
 --Assuming textfield is what the index is on, this causes the query planner 
 --to scan the index and give the position in the index.
 CREATE TABLE index_order AS (SELECT textfield, dense_rank() OVER (ORDER BY 
 textfield) AS i_order FROM table);
 --No index here, Postgres must sort
 CREATE TABLE both_order AS (SELECT textfield, i_order, dense_rank() OVER 
 (ORDER BY textfield) AS sort_order FROM index_order);
 -- If this doesn't return zero, you have a problem
 SELECT count(*) FROM both_order WHERE i_order <> sort_order;
 
 This method is really slow on a big table, and I'm not going to promise it 
 always works, but that is how we found the root cause.

We could walk the index looking for inconsistent btree splits, e.g. the
split doesn't match the ordering returned by the existing collation
functions.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Shaun Thomas

On 08/07/2014 01:09 PM, Chris Curvey wrote:


The disable trigger statement runs without error, but does not seem
to have any effect.


:(

Apparently this trick only works for disabling foreign keys. I'm not 
sure how to temporarily disable check constraints. You might have to 
drop the constraints after initializing the schemas and re-add them at 
the end after the data import is complete.
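A sketch of that approach, using the constraint from the original post (names assumed; NOT VALID, available for CHECK constraints since 9.2, skips re-checking existing rows until you VALIDATE):

```sql
-- Before loading data: drop the function-based CHECK constraint.
ALTER TABLE foo DROP CONSTRAINT mycheck;

-- ... restore the data for bar and foo here ...

-- After loading: re-add it without immediately scanning existing rows.
ALTER TABLE foo ADD CONSTRAINT mycheck CHECK (subset_of_bar(id)) NOT VALID;
-- Optionally verify the restored rows once bar is fully populated.
ALTER TABLE foo VALIDATE CONSTRAINT mycheck;
```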


--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey


 -Original Message-
 From: Shaun Thomas [mailto:stho...@optionshouse.com]
 Sent: Thursday, August 07, 2014 12:43 PM
 To: Chris Curvey; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] dump/restore with a hidden dependency?

 On 08/07/2014 10:00 AM, Chris Curvey wrote:

  I've done some searching and am coming up empty.  Is there a way to
  get pg_restore to apply constraints AFTER loading all the tables

 Kinda. PostgreSQL applies constraints with hidden system-level triggers.
 An easy way to turn them off is to use this syntax:

 ALTER TABLE foo DISABLE TRIGGER ALL;

 Then you just need to generate one of these for each of your tables, and run
 it between your table schema restore, and the data import. You can
 generate a script that does it all with something like this:

 COPY (
SELECT 'ALTER TABLE ' || schemaname || '.' || quote_ident(tablename)
   || ' DISABLE TRIGGER ALL;'
  FROM pg_tables
 ) TO '/tmp/stop_triggers.sql'

 After your data is loaded, just change DISABLE to ENABLE, and run the script
 again.

The disable trigger statement runs without error, but does not seem to have 
any effect.   Here's a simple test case.

-- create table and constraint
create table foo (a int);
alter table foo add constraint foobar check (a < 0);

-- this should fail, and does
insert into foo (a) values (1);

-- disable trigger and try again
alter table foo disable trigger all;

-- this should work, but still fails.
insert into foo (a) values (1);

select version()
EnterpriseDB 9.3.4.10, compiled by Visual C++ build 1600, 64-bit

(and just for giggles, I tried it on another machine running  PostgreSQL 
9.3.1, compiled by Visual C++ build 1600, 64-bit)

 --
 Shaun Thomas
 OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
 312-676-8870
 stho...@optionshouse.com





Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Tom Lane
Chris Curvey ccur...@zuckergoldberg.com writes:
 I have a database with the following structure:
 Create table bar...
 Create function subset_of_bar ... (which does a select on a subset of bar)
 Create table foo...
 Alter table foo add constraint mycheck check subset_of_bar(id);

Basically, that's broken in any number of ways, not only the one you
tripped across.  CHECK constraint conditions should never, ever, depend
on anything except the contents of the specific row being checked.
When you try to fake a foreign-key-like constraint with a CHECK, Postgres
will check it at inappropriate times (as per your pg_dump problem) and
fail to check it at other times when it really needs to be checked
(in this case, when you modify table bar).

You need to restructure so that you can describe the table relationship
as a regular foreign key.  Anything else *will* bite you on the rear.

regards, tom lane




Re: [GENERAL] Avoid WAL archiving when idle?

2014-08-07 Thread Jeff Janes
On Wed, Aug 6, 2014 at 8:49 PM, Laurence Rowe l...@lrowe.co.uk wrote:

 I have WAL archiving setup on Postgres 9.3.2 using WAL-E on CentOS 6.4
 using the postgresql.org RPM. This is working fine, except I see a lot of
 spurious activity in the S3 bucket with wal files being backed up every 5
 minutes even when the database is idle. This can make restoring to a dev
 server really slow if it's been a long time since the last base backup. The
 only non-default configuration is:

 wal_level = archive
 archive_mode = on
 archive_command = '/usr/local/bin/envdir /etc/wal-e.d/env
 /tools/python/current/bin/wal-e wal-push %p'
 archive_timeout = 60

 The 5 minute interval matches the default checkpoint_timeout, so I guess
 I'm seeing the same problem as mentioned here:
 http://www.postgresql.org/message-id/CAMkU=1wcyn7jnotxcncqpultznfv8zwh5bqrqzha+ugb1x-...@mail.gmail.com

 Is there anyway I can configure PostgreSQL to avoid continuously archiving
 WAL files while idle but still place a limit on the time until a database
 write is archived?


I changed guc.c so that I could set checkpoint_timeout to 100h, and then
set it that high.
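For reference, stock 9.3 builds cap checkpoint_timeout at 1h, which is why the guc.c change was needed; the resulting configuration looks roughly like:

```
# postgresql.conf sketch -- assumes a server patched to accept this range,
# since unpatched 9.3 builds limit checkpoint_timeout to 1h
checkpoint_timeout = 100h
```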

Not the ideal solution, perhaps.

Cheers,

Jeff


[GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
Hi,

I am getting the logs as follows:

LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of relation
663326 of database 475999 after 1123.028 ms


But when I execute SQL to find the row in that table using ctid
= '(153420,5)', I get no rows.


Any idea, why?


Thanks.


Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Chris Curvey
 -Original Message-
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]
 Sent: Thursday, August 07, 2014 2:50 PM
 To: Chris Curvey
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] dump/restore with a hidden dependency?

 Chris Curvey ccur...@zuckergoldberg.com writes:
  I have a database with the following structure:
  Create table bar...
  Create function subset_of_bar ... (which does a select on a subset of
  bar) Create table foo...
  Alter table foo add constraint mycheck check subset_of_bar(id);

 Basically, that's broken in any number of ways, not only the one you tripped
 across.  CHECK constraint conditions should never, ever, depend on
 anything except the contents of the specific row being checked.
 When you try to fake a foreign-key-like constraint with a CHECK, Postgres
 will check it at inappropriate times (as per your pg_dump problem) and fail
 to check it at other times when it really needs to be checked (in this case,
 when you modify table bar).

 You need to restructure so that you can describe the table relationship as a
 regular foreign key.  Anything else *will* bite you on the rear.

   regards, tom lane

Thanks for the heads-up.  Given that my requirement doesn't change (entries in 
foo must not only reference a row in bar, but must reference a row in a subset of 
bar), what would be the recommended path forward?  You can't reference a view.  
Using table inheritance feels like the wrong solution.

Perhaps a pair of triggers?  An insert-or-update trigger on foo, and a 
delete-or-update trigger on bar?

Any other ideas?




Re: [GENERAL] Recursive CTE trees + Sorting by votes

2014-08-07 Thread Gregory Taylor
On Thu, Aug 7, 2014 at 11:57 AM, Paul Jungwirth p...@illuminatedcomputing.com
 wrote:

  Or another idea, add a column that is the path of the parent:

 I don't think this will work. The problem is you need the full path to
 keep the children with their parents, but you also need the score. If
 you make the path an array of (-votes, id) tuples (perhaps flattened
 for simplicity), then you get the correct ordering. That way at every
 stage you are sorting by votes, but still keeping children with their
 parents:

 comments= WITH RECURSIVE cte (id, message, author, path, parent_id,
 depth, votes)  AS (
 SELECT  id,
 message,
 author,
 array[-votes,id] AS path,
 parent_id,
 1 AS depth, votes
 FROMcomments
 WHERE   parent_id IS NULL
 UNION ALL
 SELECT  comments.id,
 comments.message,
 comments.author,
 cte.path || -comments.votes || comments.id,
 comments.parent_id,
 cte.depth + 1 AS depth, comments.votes
 FROMcomments
 JOIN cte ON comments.parent_id = cte.id
 )
 SELECT id, message, author, path, depth, votes FROM cte
 ORDER BY path;
   id |           message           | author |       path        | depth | votes
  ----+-----------------------------+--------+-------------------+-------+-------
    5 | Very interesting post!      | thedz  | {-3,5}            |     1 |     3
    8 | Fo sho, Yall                | Mac    | {-3,5,-12,8}      |     2 |    12
    7 | Agreed                      | G      | {-3,5,-5,7}       |     2 |     5
    6 | You sir, are wrong          | Chris  | {-3,5,-3,6}       |     2 |     3
    1 | This thread is really cool! | David  | {-1,1}            |     1 |     1
    3 | I agree David!              | Daniel | {-1,1,-4,3}       |     2 |     4
    2 | Ya David, we love it!       | Jason  | {-1,1,-3,2}       |     2 |     3
    4 | gift Jason                  | Anton  | {-1,1,-3,2,-15,4} |     3 |    15
  (8 rows)

 Time: 0.966 ms


This is outstanding, Paul. I'm still checking things over, but it looks
like this is going to work. It looks like I was really close, but didn't
think to go negative, and I had one of my arrays flip-flopped from what
you've got. I made those two changes and it would appear that this is
perfect.

Much appreciated, I would have been beating my head against this for a lot
longer without the help!


Re: [GENERAL] not finding rows using ctid

2014-08-07 Thread Adrian Klaver

On 08/07/2014 12:40 PM, AI Rumman wrote:


Hi,

I am getting the logs as follows:

LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of
relation 663326 of database 475999 after 1123.028 ms


But, when I am executing sqls to find the row on that table using the
ctid = '(153420,5)', I get no rows.


Any idea, why?


http://www.postgresql.org/docs/9.3/static/ddl-system-columns.html
ctid

The physical location of the row version within its table. Note 
that although the ctid can be used to locate the row version very 
quickly, a row's ctid will change if it is updated or moved by VACUUM 
FULL. Therefore ctid is useless as a long-term row identifier. The OID, 
or even better a user-defined serial number, should be used to identify 
logical rows.


Something changed the row between the time you saw it in the log and the 
time you did the query.
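
To illustrate Adrian's point that ctid is only a transient physical address, here is a hedged sketch of re-locating such a row by a stable key instead (the table name "mytable" and key column "id" are invented placeholders for the real relation and its identifier):

```sql
-- If the row was updated since the log entry, its ctid has moved.
-- Re-find it by a stable key rather than by physical location:
SELECT ctid, xmin, id
FROM   mytable
WHERE  id = 12345;

-- Querying the old physical address is expected to return nothing
-- once the tuple has been superseded by a newer version:
SELECT * FROM mytable WHERE ctid = '(153420,5)';
```

A changed xmin between two looks at the same logical row is a quick confirmation that it was indeed updated in the meantime.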





Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread David G Johnston
Chris Curvey-3 wrote
 -Original Message-
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]
 Sent: Thursday, August 07, 2014 2:50 PM
 To: Chris Curvey
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] dump/restore with a hidden dependency?

 Chris Curvey ccur...@zuckergoldberg.com writes:
  I have a database with the following structure:
  Create table bar...
  Create function subset_of_bar ... (which does a select on a subset of
  bar) Create table foo...
  Alter table foo add constraint mycheck check subset_of_bar(id);

 Basically, that's broken in any number of ways, not only the one you
 tripped
 across.  CHECK constraint conditions should never, ever, depend on
 anything except the contents of the specific row being checked.
 When you try to fake a foreign-key-like constraint with a CHECK, Postgres
 will check it at inappropriate times (as per your pg_dump problem) and
 fail
 to check it at other times when it really needs to be checked (in this
 case,
 when you modify table bar).

 You need to restructure so that you can describe the table relationship
 as a
 regular foreign key.  Anything else *will* bite you on the rear.

   regards, tom lane
 
  Thanks for the heads-up.  Given that my requirement doesn't change
 (entries in foo must not only reference a row in bar, but must reference
 row in a subset of bar), what would be the recommended path forward?  You
 can't reference a view.  Using table inheritance feels like the wrong
 solution.
 
 Perhaps a pair of triggers?  An insert-or-update trigger on foo, and a
 delete-or-update trigger on bar?
 
 Any other ideas?

In no particular order:

Triggers

A compound FK that applies the check of the first field to the subset
defined by the second,
i.e., ... FOREIGN KEY (b_id, b_scope) REFERENCES bar (b_id, b_scope)

David J.
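
To make the compound-FK idea concrete, here is a minimal sketch (all names are invented for illustration):

```sql
-- Sketch only; table and column names are hypothetical.
CREATE TABLE bar (
    b_id    integer NOT NULL,
    b_scope text    NOT NULL,
    PRIMARY KEY (b_id),
    UNIQUE (b_id, b_scope)           -- target for the compound FK
);

CREATE TABLE foo (
    id      integer PRIMARY KEY,
    b_id    integer NOT NULL,
    -- Pinning b_scope to a constant restricts foo to that subset of bar.
    b_scope text    NOT NULL DEFAULT 'allowed'
            CHECK (b_scope = 'allowed'),
    FOREIGN KEY (b_id, b_scope) REFERENCES bar (b_id, b_scope)
);
```

The single-row CHECK on foo.b_scope pins foo to one scope, so the two-column foreign key can only match bar rows in that subset; since it is an ordinary FK, pg_dump restores it in the right order.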







--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/dump-restore-with-a-hidden-dependency-tp5814072p5814118.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
I didn't execute any VACUUM FULL, and I tried to get the row three hours
after the issue.

Thanks.


On Thu, Aug 7, 2014 at 1:51 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/07/2014 12:40 PM, AI Rumman wrote:


 Hi,

 I am getting the logs as follows:

 LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of
 relation 663326 of database 475999 after 1123.028 ms


 But, when I am executing sqls to find the row on that table using the
 ctid = '(153420,5)', I get no rows.


 Any idea, why?


 http://www.postgresql.org/docs/9.3/static/ddl-system-columns.html
 ctid

 The physical location of the row version within its table. Note that
 although the ctid can be used to locate the row version very quickly, a
 row's ctid will change if it is updated or moved by VACUUM FULL. Therefore
 ctid is useless as a long-term row identifier. The OID, or even better a
 user-defined serial number, should be used to identify logical rows.

 Something changed the row between the time you saw it in the log and the
 time you did the query.



 Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] not finding rows using ctid

2014-08-07 Thread Adrian Klaver

On 08/07/2014 02:14 PM, AI Rumman wrote:

I didn't execute any Vacuum Full and I tried to get the row after 3
hours of the issue.


Also, ...a row's ctid will change if it is updated...



Thanks.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] {xml}

2014-08-07 Thread David G Johnston
Ramesh T wrote
 Hello,
   When I ran the following query on Postgres 9.3:
 SELECT  xmlagg(xmlelement(name e,part_id||',')) from part;
 
 result
 ..
 {xml}
 
 How do I get the part_ids? Please let me know.
 Thanks in advance,
 R..

Are you using psql?  What version of PostgreSQL?

I am guessing {xml} is the client's way of saying that what you have is
XML.  Try casting the final result to text and see what you get.

Otherwise...

version
PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro
4.6.3-1ubuntu5) 4.6.3, 64-bit

Since this:

WITH vals (ids) AS (
VALUES ('1'),('2')
)
SELECT xmlagg(xmlelement(name tag, ids))::text FROM vals;

works, you probably need to provide a self-contained example for someone to
look at.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/xml-tp5814076p5814126.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Re: Feature proposal and discussion: full-fledged column/function equivalence

2014-08-07 Thread Tom Lane
David Johnston david.g.johns...@gmail.com writes:
 On Fri, Aug 1, 2014 at 6:22 PM, Chris Travers chris.trav...@gmail.com
 wrote:
 On Fri, Aug 1, 2014 at 12:19 PM, David G Johnston 
 david.g.johns...@gmail.com wrote:
 More to the point: if you are writing a multiple-relation query and have
 testfunction functions defined for at least two of the relations used in
 the query how would the system decide which one to use?

 Same way you do it for columns.  Throw an error that it is ambiguous.

 I'd rather approach the first-class issue by being able to say:  ALTER
 TABLE test ADD COLUMN testfunction(test) -- maybe with an AS col_alias...

The real reason not to do this is that there is already a SQL standard
feature for computed columns (they're called generated columns IIRC).
We don't need to, and shouldn't, invent nonstandard syntax for that.

We inherited the notion that a.b is equivalent to b(a) from PostQUEL;
it's nowhere to be seen in SQL.  While I don't feel a need to incur the
backwards compatibility hit of taking that out, I also don't feel a need
to extend it, especially not in directions that risk breaking existing
applications.

regards, tom lane




Re: [GENERAL] dump/restore with a hidden dependency?

2014-08-07 Thread Kevin Grittner
Chris Curvey ccur...@zuckergoldberg.com wrote:

 Perhaps a pair of triggers?  An insert-or-update trigger on foo, and a
 delete-or-update trigger on bar?

Using a foreign key constraint is best if that can do the right
thing.  If that doesn't work, triggers like you describe are
probably the best option, but you need to cover race conditions.
See this recent post for suggestions:

http://www.postgresql.org/message-id/1406836331.34944.yahoomail...@web122303.mail.ne1.yahoo.com
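
For the insert side, a hedged sketch of such a trigger follows (table and column names are invented; the linked post covers the locking subtleties in more detail):

```sql
-- Sketch: validate foo.bar_id against a subset of bar, locking the
-- referenced row so a concurrent change to bar cannot slip past
-- the check before this transaction commits.
CREATE OR REPLACE FUNCTION foo_check_bar() RETURNS trigger AS $$
BEGIN
    PERFORM 1 FROM bar
     WHERE bar.id = NEW.bar_id
       AND bar.is_allowed          -- the "subset" condition
     FOR SHARE;                    -- blocks concurrent update/delete
    IF NOT FOUND THEN
        RAISE EXCEPTION 'bar row % not found in allowed subset', NEW.bar_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_check_bar
    BEFORE INSERT OR UPDATE OF bar_id ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_check_bar();
```

FOR SHARE makes a concurrent update or delete of the referenced bar row block until this transaction finishes; a companion trigger on bar is still needed to reject changes that would move a referenced row out of the subset.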

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] The dangers of streaming across versions of glibc: A cautionary tale

2014-08-07 Thread Peter Geoghegan
On Thu, Aug 7, 2014 at 9:46 AM, Bruce Momjian br...@momjian.us wrote:
 We could walk the index looking for inconsistent btree splits, e.g. the
 split doesn't match the ordering returned by the existing collation
 functions.

I'm not sure I follow. I don't think that a tool like my btreecheck
tool will necessarily be able to catch anything like this on a
standby. Maybe it will, but that isn't guaranteed. For example, the
difference in collation rules in question might just not have cropped
up yet, but it's still a ticking time bomb. Or there may be differences
affecting only values on internal pages. Things break down very quickly.

In general, once there is an undetected inconsistency in collations
between replicas, that means that the varlena B-Tree support function
number 1 can violate various invariants that all operator classes must
obey. I doubt we want to get into the business of working backwards
from a broken state of affairs like that to figure out there is a
problem. Rather, I really do think we're compelled to offer better
versioning of collations using a versioning interface like Glibc's
LC_IDENTIFICATION. There is no other way to properly fix the problem.
This is a problem that is well understood, and anticipated by
the Unicode consortium.

-- 
Regards,
Peter Geoghegan




Re: [GENERAL] How to get PG 9.3 for a RaspberryPI (Debian Wheezy)?

2014-08-07 Thread Tom Lane
Andreas maps...@gmx.net writes:
 When I checked the logfile of the local PG-9.3 server I found an error 
 on startup that complained it couldn't bind an IPv4 socket for 5432.
 Another error says autovacuum is not running.

It would help if you provided the exact log messages, but what this sounds
like is a networking configuration problem.  Do you have DNS service on
this machine, and if so does it correctly resolve localhost?

There has been some talk in the past about kluging things so that PG would
work even if localhost didn't resolve, but the case hasn't seemed worth
putting much effort into.

regards, tom lane




Re: [GENERAL] postgresql referencing and creating types as record

2014-08-07 Thread vpmm2007
 In reply to this post by vpmm2007
create or replace package CUM_A_TYPES
as
  type LT_PAYMENT is record
    (BASIC      number,
     DP         number,
     GRADE_PAY  number
    );
  TYPE TYPE_CALC_TAX is record
    (FIN_ROLE_ID        number(8),
     CALC_FOR_ROLE_CODE number(4));

I need to convert this to Postgres. Can anybody please help me? I am new
to Postgres.
Thanks and regards,
VPMM 
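
For what it's worth, a rough sketch of a PostgreSQL translation (not a drop-in equivalent: PostgreSQL has no packages, but composite types can play the role of Oracle record types, and a schema can group them the way a package would):

```sql
-- Sketch: Oracle record types rewritten as PostgreSQL composite types.
-- Oracle's unconstrained NUMBER maps to numeric.
CREATE TYPE lt_payment AS (
    basic     numeric,
    dp        numeric,
    grade_pay numeric
);

CREATE TYPE type_calc_tax AS (
    fin_role_id        numeric(8),
    calc_for_role_code numeric(4)
);
```

These can then be used as variable or parameter types in PL/pgSQL functions, e.g. `DECLARE p lt_payment;`.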



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/postgresql-referencing-and-creating-types-as-record-tp5813901p5814176.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

