Re: [GENERAL] unexpected error tables can have at most 1600 columns

2015-04-13 Thread Day, David
Pavel,

Thanks so much. This seems to help explain the problem.

I can say that the patch file had a later statement with an error related to
search_path setup.
Given that the logic that applies patches retries them for an extended
period of time,
I speculate that the rollback of the patch leaves these invisible columns
behind?
Otherwise I fail to see where these columns originate from.

When I drop the database and recreate it with the schema reference issues
resolved (i.e. issues related to search_path setup), the database builds
without issue.

These patches were not an issue for ongoing developers because at some point
in the installation the default search_path gets set up, so this error in the
patch writing is masked.
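
For reference, a minimal sketch of how such invisible columns accumulate
(table and column names here are hypothetical, not from our schema):

    CREATE TABLE t (a integer, b integer);
    -- each committed ADD/DROP cycle permanently consumes an attribute slot
    ALTER TABLE t ADD COLUMN c integer;
    ALTER TABLE t DROP COLUMN c;
    -- the column no longer shows in \d, but it still counts toward the
    -- 1600-column limit:
    SELECT count(*) FROM pg_attribute
    WHERE attrelid = 't'::regclass AND attnum > 0 AND attisdropped;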


Thanks so much.



Regards


Dave Day



select attname from pg_attribute where
attrelid='log.conference_history'::regclass and attnum > 0;
    attname
---------------
 pg.dropped.11
 pg.dropped.13
 pg.dropped.14
 pg.dropped.15
 pg.dropped.41
 pg.dropped.56
 pg.dropped.42
 pg.dropped.43 ...

select count(*) from pg_attribute where
attrelid='log.conference_history'::regclass and attnum > 0 and attisdropped;
 count
-------
  1598
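
If recreating the database were not an option, one way to reclaim the
attribute slots would be to rebuild the table, since a plain VACUUM FULL
rewrites the heap but does not remove the dropped-column entries from
pg_attribute. A rough sketch (assuming the table can be locked briefly):

    BEGIN;
    CREATE TABLE log.conference_history_new
        (LIKE log.conference_history INCLUDING ALL);
    INSERT INTO log.conference_history_new
        SELECT * FROM log.conference_history;
    DROP TABLE log.conference_history;
    ALTER TABLE log.conference_history_new RENAME TO conference_history;
    COMMIT;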

From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: Monday, April 13, 2015 12:06 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] unexpected error  tables can have at most 1600 columns



2015-04-13 17:57 GMT+02:00 Day, David d...@redcom.com:
Situation

I have a co-developer installing a new Virtual Machine and encountering a 
postgres error during the installation.
One of our SQL patch files is failing unexpectedly.

The patch is attempting to add columns to a table. The table involved
currently has only 2 columns.
Interactively I can reproduce the same error in his current state.

psql -h ohio -U redcom ace_db
psql (9.3.6)
Type "help" for help.

ace_db=# select * from log.conference_history;
conf_id | max_size
-+--
(0 rows)

ace_db=# ALTER TABLE log.conference_history ADD talker_limit integer DEFAULT 0;
ERROR:  tables can have at most 1600 columns
ace_db=#
ace_db=#

There can be removed (invisible) columns:

select attname from pg_attribute where attrelid = 'test'::regclass and attnum >
0;

postgres=# select attname from pg_attribute where attrelid = 'test'::regclass
and attnum > 0;
┌─────────┐
│ attname │
╞═════════╡
│ a       │
│ c       │
│ d       │
└─────────┘
(3 rows)



alter table test drop column a, drop column c;


postgres=# select attname from pg_attribute where attrelid = 'test'::regclass
and attnum > 0;
┌──────────────┐
│   attname    │
╞══════════════╡
│ pg.dropped.1 │
│ pg.dropped.2 │
│ d            │
└──────────────┘
(3 rows)

postgres=# select count(*) from pg_attribute where attrelid = 'test'::regclass
and attnum > 0 and attisdropped;
┌───────┐
│ count │
╞═══════╡
│     2 │
└───────┘
(1 row)
So maybe this can be the reason for the issue?
Pavel






Puzzled?


Any thoughts?


Regards


Dave Day



Re: [GENERAL] Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Guillaume Lelarge
On 12 Apr 2015 16:50, Ilya Ashchepkov koc...@gmail.com wrote:

 Hello.

 I'm setting up hot standby slave.
 It recovers from wal archive files, but I can't connect to it:
 $ psql
 psql: FATAL:  the database system is starting up

 On master:
 # select name,setting from pg_settings where name like 'wal_level';
     name    |   setting
  -----------+-------------
   wal_level | hot_standby


 My slave recovery.conf:
 $ cat recovery.conf
 # Note that recovery.conf must be in $PGDATA directory.
 # It should NOT be located in the same directory as postgresql.conf

 # Specifies whether to start the server as a standby. In streaming
replication,
 # this parameter must be set to on.
 standby_mode  = 'on'

 # Specifies a connection string which is used for the standby server to
connect
 # with the primary.
 primary_conninfo  = 'host=192.168.0.101 port=5432 user=replication
password=*'

 # Specifies a trigger file whose presence should cause streaming
replication to
 # end (i.e., failover).
 trigger_file = '/media/psqlbak/101/main/standup'

 # Specifies a command to load archive segments from the WAL archive. If
 # wal_keep_segments is a high enough number to retain the WAL segments
 # required for the standby server, this may not be necessary. But
 # a large workload can cause segments to be recycled before the standby
 # is fully synchronized, requiring you to start again from a new base
backup.
 restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby -t
/tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'


Don't use pg_standby if you want to use streaming. Use cp, scp, rsync, or
anything else but not pg_standby. Streaming starts when archive recovery
fails to get next archive.
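
For example, something along these lines (an untested sketch reusing the
paths from your recovery.conf):

    standby_mode = 'on'
    primary_conninfo = 'host=192.168.0.101 port=5432 user=replication password=*'
    restore_command = 'cp /media/psqlbak/101/wals/main/%f %p'
    trigger_file = '/media/psqlbak/101/main/standup'

With a plain cp, once the archive no longer contains the next segment the
restore_command fails and the standby switches to streaming via
primary_conninfo.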

 I tried to comment 'restore_command' in recovery.conf on slave, then
slave connects
 to master and starts receiving data, but I think it's not a very good way.
 What should I change to receive data through connection and reach
consistent
 state on slave?



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Adrian Klaver

On 04/13/2015 11:25 AM, Ilya Ashchepkov wrote:

On Mon, 13 Apr 2015 10:06:05 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:


On 04/13/2015 09:42 AM, Ilya Ashchepkov wrote:

On Sun, 12 Apr 2015 17:30:44 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:










If a connection is not being made:

1) Does user replication have REPLICATION rights?
2) Is the pg_hba.conf on the master set up to allow a connection
from the standby for user replication and database replication?


I commented 'restore_command' in recovery.conf and after start slave
connected to master.
Then I uncomment it back. Is it possible to have both a streaming
connection and restore from WAL files from the NFS share?


Yes:

http://www.postgresql.org/docs/9.3/interactive/warm-standby.html#STREAMING-REPLICATION

I wonder if your master is recycling WALs fast enough that the
streaming can't find them and the standby has to go to the archive
instead.

What is your wal_keep_segments on the master set to?:

# select name,setting from pg_settings where name like 'wal_keep_segments';
       name        | setting
-------------------+---------
 wal_keep_segments | 128


I run tcpdump -ni eth0 port 5432 on slave and didn't see any packet
from slave to master after restart.


Just to be clear:

1) When you comment out the restore_command the standby connects to the 
master, correct?


2) When you uncomment restore_command you do not see a standby 
connection, correct?


So:

1) When you are changing the restore_command status do you restart the 
standby server?


2) What does "select * from pg_stat_replication" show, in either case?

www.postgresql.org/docs/9.3/static/monitoring-stats.html#PG-STAT-REPLICATION-VIEW

3) I may have missed it, but what is your archive_command on the master?





http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER





Where are the WAL files coming from?


NFS share on master.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Ilya Ashchepkov
On Mon, 13 Apr 2015 10:06:05 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 04/13/2015 09:42 AM, Ilya Ashchepkov wrote:
  On Sun, 12 Apr 2015 17:30:44 -0700
  Adrian Klaver adrian.kla...@aklaver.com wrote:
 
 
 
 
  Oh! I missed this! Thank you!
  Now slave reached consistent state some time after start, but
  still no connection to master server and still restoring
  wal-files.
 
  Not quite sure what you are getting at.
 
  You are not seeing the streaming connection happening?
 
  Yes, no streaming connection.
 
  If a connection is not being made:
 
  1) Does user replication have REPLICATION rights?
  2) Is the pg_hba.conf on the master set up to allow a connection
  from the standby for user replication and database replication?
 
  I commented 'restore_command' in recovery.conf and after start slave
  connected to master.
  Then I uncomment it back. Is it possible to have a both, streaming
  connection and restoring from wal files from NFS share?
 
 Yes:
 
 http://www.postgresql.org/docs/9.3/interactive/warm-standby.html#STREAMING-REPLICATION
 
 I wonder if your master is recycling WALs fast enough that the
 streaming can't find them and the standby has to go to the archive
 instead.
 
 What is your wal_keep_segments on the master set to?:
# select name,setting from pg_settings where name like 'wal_keep_segments';
        name        | setting
 -------------------+---------
  wal_keep_segments | 128


I run tcpdump -ni eth0 port 5432 on slave and didn't see any packet
from slave to master after restart.

 
 http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER
 
 
 
  Where are the WAL files coming from?
 
  NFS share on master.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bigserial continuity safety

2015-04-13 Thread Pawel Veselov
Hi.

If I have a table created as:

CREATE TABLE xq_agr (
  id    BIGSERIAL PRIMARY KEY,
  node  text not null
);

and that multiple applications insert into. The applications never
explicitly specify the value for 'id'.
Is it safe to, on a single connection, do:

- open transaction (default transaction isolation)
- Open cursor for select * from xq_agr order by id asc
- do something with current record
- advance the cursor (and repeat something), but stop at some point (id =
LAST_ID), and
- delete from xq_agr where id <= LAST_ID;
- commit

safe to means - whether the cursor will not miss any records that were
deleted at the end.

I'm suspecting that depending on the commit order, I may have situations
when:
- TX1 insert ID 1
- TX2 insert ID 2
- TX2 commits
- TX3 scans 2
- TX1 commits
- TX3 deletes <= 2
- record ID1 is deleted, but never processed.

Is that sequence of events as listed above possible? If yes, is there a
transaction isolation  I can use to avoid that?

Table and sequence definition, as present in the DB:

db=> \d+ xq_agr_id_seq
              Sequence "public.xq_agr_id_seq"
    Column     |  Type   |        Value        | Storage
---------------+---------+---------------------+---------
 sequence_name | name    | xq_agr_id_seq       | plain
 last_value    | bigint  | 139898829           | plain
 start_value   | bigint  | 1                   | plain
 increment_by  | bigint  | 1                   | plain
 max_value     | bigint  | 9223372036854775807 | plain
 min_value     | bigint  | 1                   | plain
 cache_value   | bigint  | 1                   | plain
 log_cnt       | bigint  | 27                  | plain
 is_cycled     | boolean | f                   | plain
 is_called     | boolean | t                   | plain

db=> \d xq_agr
                           Table "public.xq_agr"
 Column |  Type  |                     Modifiers
--------+--------+------------------------------------------------------
 id     | bigint | not null default nextval('xq_agr_id_seq'::regclass)
 node   | text   | not null
Indexes:
    "xq_agr_pkey" PRIMARY KEY, btree (id)


Re: [GENERAL] bigserial continuity safety

2015-04-13 Thread David G. Johnston
On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov pawel.vese...@gmail.com
wrote:

 Hi.

 If I have a table created as:

 CREATE TABLE xq_agr (
   id    BIGSERIAL PRIMARY KEY,
   node  text not null
 );

 and that multiple applications insert into. The applications never
 explicitly specify the value for 'id'.
 Is it safe to, on a single connection, do:

 - open transaction (default transaction isolation)
 - Open cursor for select * from xq_agr order by id asc
 - do something with current record
 - advance the cursor (and repeat something), but stop at some point (id =
 LAST_ID), and
 - delete from xq_agr where id <= LAST_ID;
 - commit

 safe to means - whether the cursor will not miss any records that were
 deleted at the end.

 I'm suspecting that depending on the commit order, I may have situations
 when:
 - TX1 insert ID 1
 - TX2 insert ID 2
 - TX2 commits
 - TX3 scans 2
 - TX1 commits
 - TX3 deletes <= 2
 - record ID1 is deleted, but never processed.


Going to ignore the MVCC question for the moment and describe a better
state transition mechanism to consider.

pending -> active -> completed

If you ensure you never delete (i.e., transition to completed) something
that isn't active then you can never delete an item in pending.

​Limit the locking to the state transitions only.

The downside is the need to deal with active items that have been
abandoned by whatever process marked them active.

Back to your question: you should probably not use <= in your where
clause.  However, in READ COMMITTED TX3 cannot see ID1 since the snapshot
it took out was created before TX1 committed.  I am not fluent enough to
work through the entire scenario in my head.  I'd suggest you actually open
up 3 psql sessions and play with them to see how things really behave.

For me, a simple SELECT FOR UPDATE / UPDATE WHERE command in a function
solves the problem at small scale with minimal performance degradation.
The transition from pending to active is effectively serialized and the
transition from active to completed only occurs when the process has
been performed, so it is not possible to have two clients simultaneously
processing the same work.
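
A rough sketch of that pattern (the status column and values here are
hypothetical, not part of the original table):

    -- assumes xq_agr has an added status column: pending / active / completed
    UPDATE xq_agr
       SET status = 'active'
     WHERE id = (SELECT id FROM xq_agr
                  WHERE status = 'pending'
                  ORDER BY id
                  LIMIT 1
                  FOR UPDATE)
    RETURNING id, node;

    -- ... process the returned row ...

    UPDATE xq_agr SET status = 'completed' WHERE id = :processed_id;

With concurrent workers the inner SELECT ... FOR UPDATE can block and then
return nothing for the second worker, so a retry (or, on 9.5+, FOR UPDATE
SKIP LOCKED) is usually needed.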

David J.


Re: [GENERAL] Help with slow table update

2015-04-13 Thread David G. Johnston
On Mon, Apr 13, 2015 at 5:01 PM, Pawel Veselov pawel.vese...@gmail.com
wrote:


 r_agrio_hourly - good, r_agrio_total - bad.

  Update on r_agrio_hourly  (cost=0.42..970.32 rows=250 width=329) (actual
 time=2.248..2.248 rows=0 loops=1)
    ->  Index Scan using u_r_agrio_hourly on r_agrio_hourly
  (cost=0.42..970.32 rows=250 width=329) (actual time=0.968..1.207 rows=1
 loops=1)
  Index Cond: ((tagid = 1002::numeric) AND (unitid = 1002::numeric)
 AND ((rowdate)::text = '2015-04-09T23'::text) AND (device_type =
 3::numeric) AND (placement = 2::numeric))
  Total runtime: 2.281 ms
  Update on r_agrio_total  (cost=0.42..45052.56 rows=12068 width=321)
 (actual time=106.766..106.766 rows=0 loops=1)
    ->  Index Scan using u_r_agrio_total on r_agrio_total
  (cost=0.42..45052.56 rows=12068 width=321) (actual time=0.936..32.626
 rows=1 loops=1)
  Index Cond: ((tagid = 1002::numeric) AND (unitid = 1002::numeric)
 AND (device_type = 3::numeric) AND (placement = 2::numeric))
  Total runtime: 106.793 ms


What is it you expect to see here?

​What are the results (count and times) for:

SELECT count(*) FROM r_agrio_total WHERE tagid = 1002 and unitid = 1002;
SELECT count(*) FROM r_agrio_hourly WHERE tagid = 1002 and unitid = 1002;

​More queries along this line might be needed.  The underlying question is
how many index rows need to be skipped over on total to get the final
result - or rather are the columns in the index in descending order of
cardinality?

Any chance you can perform a REINDEX - maybe there is some bloat
present?  There are queries to help discern if that may be the case; I do
not know them off the top of my head, but just doing it might be acceptable
and is definitely quicker if so.

​I'm still not really following your presentation but maybe my thoughts
will spark something.​

​David J.
​


[GENERAL] recovery of a windows archive in linux

2015-04-13 Thread Guillaume Drolet
Dear list,

I have a base backup and archive logs from a Windows 7 PGSQL 9.3 install.
The machine OS disk started to show signs of failure so I replaced it and
now I want to switch for a Linux system.

My question is: will I be able to play my archive logs and point-in-time
recover on the Linux machine?

That would spare some time rebuilding a Windows machine, recovering my db,
and dumping for a Linux box.

Thanks for your advice.

Guillaume


Re: [GENERAL] bigserial continuity safety

2015-04-13 Thread Jim Nasby

On 4/13/15 7:45 PM, David G. Johnston wrote:

On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov pawel.vese...@gmail.com
mailto:pawel.vese...@gmail.comwrote:

Hi.

If I have a table created as:

CREATE TABLE xq_agr (
   id    BIGSERIAL PRIMARY KEY,
   node  text not null
);

and that multiple applications insert into. The applications never
explicitly specify the value for 'id'.
Is it safe to, on a single connection, do:

- open transaction (default transaction isolation)
- Open cursor for select * from xq_agr order by id asc
- do something with current record
- advance the cursor (and repeat something), but stop at some point
(id = LAST_ID), and
- delete from xq_agr where id <= LAST_ID;
- commit

safe to means - whether the cursor will not miss any records that
were deleted at the end.

I'm suspecting that depending on the commit order, I may have
situations when:
- TX1 insert ID 1
- TX2 insert ID 2
- TX2 commits
- TX3 scans 2
- TX1 commits
- TX3 deletes <= 2
- record ID1 is deleted, but never processed.


​Going to ignore the MVC question for the moment and describe a better
state transition mechanism to consider.

pending -> active -> completed

If you ensure you never delete (i.e., transition to completed) something
that isn't active then you can never delete an item in pending.

​Limit the locking to the state transitions only.

The downside is the need to deal with active items that have been
abandoned by whatever process marked them active.


Another option is DELETE RETURNING. Instead of an initial SELECT to find
records to work on, you would do a DELETE ... WHERE ... RETURNING * and deal
with those records. I don't know if that's safe with a cursor though; I
believe the DELETE fully materializes before records start coming back.
So you need to handle all the rows from the SELECT or abort.
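
A sketch of that approach, using the table from the original post (LAST_ID
is a placeholder as before):

    WITH batch AS (
        DELETE FROM xq_agr
        WHERE id <= LAST_ID
        RETURNING id, node
    )
    SELECT * FROM batch ORDER BY id;

The deleted rows only disappear if the surrounding transaction commits, so
the client has to finish processing (or re-queue) everything the statement
returned before committing.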



Back to your question: you should probably not use <= in your where
clause.  However, in READ COMMITTED TX3 cannot see ID1 since the
snapshot it took out was created before TX1 committed.  I am not fluent


Actually, that's not necessarily true. It depends on when TX3 actually
takes its snapshot, which is NOT when it runs BEGIN. I believe there are
other problems you'd run into as well. Basically, READ COMMITTED does
nothing to protect you from phantom reads.


REPEATABLE READ should protect you from phantom reads, but it won't help 
you if someone changes the data. If you're going to try and go this 
route, SERIALIZABLE is your best bet.



enough to work through the entire scenario in my head.  I'd suggest you
actually open up 3 psql sessions and play with them to see how things
really behave.


That's really not safe enough. There's just too many different race 
conditions you can encounter, and I'd bet that you couldn't even 
reproduce some of them from a client.



For me, a simple SELECT FOR UPDATE / UPDATE WHERE command in a
function solves the problem at small scale with minimal performance
degradation.  The transition from pending to active is effectively
serialized and the transition from active to completed only occurs
when the process has been performed and it is not possible to have two
clients simultaneously processing the same work.


Note that that isn't safe from repeatable reads. What you're describing 
is only safe if the WHERE clause on the update is guaranteed to always 
find only one row (ie, in this example, by using xq_agr.id = something). 
Anything other than that is asking for trouble.


BTW, since it looks like you're just looking for a queue, you should 
take a look at PgQ (https://wiki.postgresql.org/wiki/PGQ_Tutorial). It 
uses something more efficient than bulk deletes to handle its queue,
and you can set it up so it will handle a large amount of queued items 
rather well (increase the number of tables). The one downside you may 
run into is you MUST consume every event in a single batch. There used 
to be support for the concept of retrying an event, but that may have 
been removed. It would be safe for you to put failed events into a 
second queue.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with slow table update

2015-04-13 Thread Pawel Veselov
On Sun, Apr 12, 2015 at 5:40 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 4/9/15 6:18 PM, Pawel Veselov wrote:

 Hi.

 I have a plpgsql procedure that updates a few similar tables.
 for some reason, updates on one of the tables take a lot longer the
 updates on the other ones. The difference is, say, 7 seconds vs. 80
 milliseconds.

 the procedure uses cursors and record variables to do the updates. For
 example:

  update r_agrio_total set
unserved = unserved + (agrow->'unserved')::numeric(38),
r_brkconn = mush_brk_conn(r_brkconn, q_item.r_brkconn),
  where
tagid = _tagid and
unitid = (akey->'unitid')::numeric and
placement = (akey->'placement')::numeric and
device_type = (akey->'device_type')::numeric;

 There is another table (xq_agr) that is read record by record, and for
 each of those records, such update is executed.

 I was trying to select analyze the updates to see where the time could
 be spent.
 There are only 24 row in the bad table, and 3,400 rows in good
 table. So, for the bad table, most of the updates will be on the same
 rows. The times were measured on processing 100 original records.

 When I'm analyzing pure update statements, I don't see anything strange.

 bad table: explain analyze update r_agrio_total set unconfirmed =
 unconfirmed +0 where tagid = 1000 and unitid = 1000 and placement = 0
 and device_type = 100;

 RESULT:
   Update on r_agrio_total  (cost=0.42..4.46 rows=1 width=321) (actual
 time=0.253..0.253 rows=0 loops=1)
 ->  Index Scan using tag_r_agrio_total on r_agrio_total
   (cost=0.42..4.46 rows=1 width=321) (actual time=0.037..0.041 rows=1
 loops=1)
   Index Cond: (tagid = 1000::numeric)
   Filter: ((unitid = 1000::numeric) AND (placement = 0::numeric)
 AND (device_type = 100::numeric))
   Rows Removed by Filter: 7
   Total runtime: 0.282 ms

 good table: explain analyze update r_agrio_hourly set unconfirmed =
 unconfirmed +0 where tagid = 1000 and unitid = 1000 and placement = 0
 and device_type = 100 and rowdate = '2015-02-23T13';

 RESULT:
   Update on r_agrio_hourly  (cost=0.42..17.36 rows=6 width=329) (actual
 time=0.102..0.102 rows=0 loops=1)
 ->  Index Scan using u_r_agrio_hourly on r_agrio_hourly
   (cost=0.42..17.36 rows=6 width=329) (actual time=0.047..0.048 rows=1
 loops=1)
   Index Cond: ((tagid = 1000::numeric) AND (unitid =
 1000::numeric) AND ((rowdate)::text = '2015-02-23T13'::text) AND
 (device_type = 100::numeric) AND (placement = 0::numeric))
   Total runtime: 0.135 ms

 When I try doing it with WITH statement (really, to apply the actual
 data that the plpgsql function uses), there is something strange in the
 bad table.

 explain analyze
 with SRC as (select * from xq_agr where id = 914830)
  update r_agrio_total set
unconfirmed = unconfirmed +
 (SRC.r_agrio->'unconfirmed')::numeric(38)
  from SRC
  where
tagid = (SRC.r_agrio->'key'->'tagid')::numeric and
unitid = (SRC.r_agrio->'key'->'unit')::numeric and
placement = (SRC.r_agrio->'key'->'placement')::numeric and
device_type = (SRC.r_agrio->'key'->'device_type')::numeric;

 RESULT:
   Update on r_agrio_total  (cost=8.91..32777.51 rows=19331 width=409)
 (actual time=0.107..0.107 rows=0 loops=1)
 CTE src
   ->  Index Scan using xq_agr_pkey on xq_agr  (cost=0.42..8.44
 rows=1 width=379) (actual time=0.026..0.027 rows=1 loops=1)
 Index Cond: (id = 914830)
 ->  Nested Loop  (cost=0.46..32769.07 rows=19331 width=409) (actual
 time=0.107..0.107 rows=0 loops=1)
   ->  CTE Scan on src  (cost=0.00..0.02 rows=1 width=88) (actual
 time=0.032..0.033 rows=1 loops=1)
   ->  Index Scan using u_r_agrio_total on r_agrio_total
   (*cost=0.46..32285.78 rows=19331* width=321) (actual time=0.001..0.001

 rows=0 loops=1)
 Index Cond: ((tagid = (((src.r_agrio -> 'key'::text) ->
 'tagid'::text))::numeric) AND (unitid = (((src.r_agrio -> 'key'::text)
 -> 'unit'::text))::numeric) AND (device_type = (((src.r_agrio ->
 'key'::text) -> 'device_type'::text))::numeric) AND (placement =
 (((src.r_agrio -> 'key'::text) -> 'placement'::text))::numeric))
   Total runtime: 0.155 ms

 explain analyze
 with SRC as (select * from xq_agr where id = 914830)
  update r_agrio_hourly set
unconfirmed = unconfirmed +
 (SRC.r_agrio->'unconfirmed')::numeric(38)
  from SRC
  where
tagid = (SRC.r_agrio->'key'->'tagid')::numeric and
unitid = (SRC.r_agrio->'key'->'unit')::numeric and
placement = (SRC.r_agrio->'key'->'placement')::numeric and
device_type = (SRC.r_agrio->'key'->'device_type')::numeric
 and
rowdate = (SRC.r_agrio->'key'->'rowdate');

 RESULT:
   Update on r_agrio_hourly  (cost=8.91..52.91 rows=20 width=417) (actual
 time=0.123..0.123 rows=0 loops=1)
 CTE src
   ->  Index Scan using xq_agr_pkey on xq_agr

Re: [GENERAL] recovery of a windows archive in linux

2015-04-13 Thread Adrian Klaver

On 04/13/2015 04:37 PM, Guillaume Drolet wrote:

Dear list,

I have a base backup and archive logs from a Windows 7 PGSQL 9.3
install. The machine OS disk started to show signs of failure so I
replaced it and now I want to switch for a Linux system.

My question is: will I be able to play my archive logs and point-in-time
recover on the Linux machine?


No. The machines have to be the same OS/architecture.



That would spare some time rebuilding a Windows machine, recovering my
db, and dumping for a Linux box.

Thanks for your advice.

Guillaume




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] With Update From ... vs. Update ... From (With)

2015-04-13 Thread David G. Johnston
Hello!

Is there any non-functional difference between these two forms of Update?

WITH name AS ( SELECT )
UPDATE tbl SET ...
FROM name
WHERE tbl.id = name.id

and

UPDATE tbl SET ...
FROM ( WITH qry AS ( SELECT ) SELECT * FROM qry ) AS name
WHERE tbl.id = name.id

They both better give the same results but does the backend treat them
differently?

Does the answer to my question depend on the version of PostgreSQL?

Thanks!

David J.


Re: [GENERAL] recovery of a windows archive in linux

2015-04-13 Thread Jim Nasby

On 4/13/15 6:37 PM, Guillaume Drolet wrote:

Dear list,

I have a base backup and archive logs from a Windows 7 PGSQL 9.3
install. The machine OS disk started to show signs of failure so I
replaced it and now I want to switch for a Linux system.

My question is: will I be able to play my archive logs and point-in-time
recover on the Linux machine?

That would spare some time rebuilding a Windows machine, recovering my
db, and dumping for a Linux box.


No. You can only use a PITR backup on an install that's completely 
binary compatible with the installation the backup was taken from. Not 
only does the OS have to match, you could actually run into problems 
switching to a different CPU architecture (among other things, whether 
the architecture is big- or small-endian matters).

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] With Update From ... vs. Update ... From (With)

2015-04-13 Thread Jim Nasby

On 4/13/15 8:12 PM, David G. Johnston wrote:

Hello!

Is there any non-functional difference between these two forms of Update?

WITH name AS ( SELECT )
UPDATE tbl SET ...
FROM name
WHERE tbl.id = name.id

and

UPDATE tbl SET ...
FROM ( WITH qry AS ( SELECT ) SELECT * FROM qry ) AS name
WHERE tbl.id = name.id

They both better give the same results but does the backend treat them
differently?

Does the answer to my question depend on the version of PostgreSQL?


Well, they're both ugly... but see what EXPLAIN or EXPLAIN VERBOSE shows.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with slow table update

2015-04-13 Thread Jim Nasby

On 4/13/15 7:01 PM, Pawel Veselov wrote:

Cursors tend to make things slow. Avoid them if you can.


Is there an alternative to iterating over a number of rows, where a
direct update query is not an option?

I really doubt that either the actual processing logic, including use of
types has anything to do with my problem. This is based on the fact that
out of the tables that are being changed, only one is exhibiting the
problem. All of the involved tables have nearly the same structure, and
have the same logical operations performed on them. I thought may be the
bad table is slow because it was first in the list, and Postgres was
caching the functions results, but I moved things around, and pattern is
the same.


I'm guessing that you're essentially processing a queue. Take a look at 
http://www.postgresql.org/message-id/552c750f.2010...@bluetreble.com for 
some ideas. Basically, not only do cursors have non-trivial overhead, 
doing a ton of single-row queries is going to have a non-trivial 
overhead itself.



As for your specific question, I suggest you modify the plpgsql
function so that it's doing an EXPLAIN ANALYZE on the slow table.
EXPLAIN ANALYZE actually returns a recordset the same way a SELECT
would, with a single column of type text. So you just need to do
something with that output. The easiest thing would be to replace
this in your function:

UPDATE slow_table SET ...

to this (untested)

RETURN QUERY EXPLAIN ANALYZE UPDATE slow_table SET ...

and change the function so it returns SETOF text instead of whatever
it returns now.
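
A bare-bones sketch of such a wrapper (untested; table, column and parameter
names are placeholders rather than the actual function):

    CREATE OR REPLACE FUNCTION debug_slow_update(_tagid numeric)
    RETURNS SETOF text
    LANGUAGE plpgsql AS $$
    BEGIN
        RETURN QUERY EXPLAIN ANALYZE
            UPDATE r_agrio_total
               SET unconfirmed = unconfirmed + 0
             WHERE tagid = _tagid;
    END;
    $$;

    -- SELECT * FROM debug_slow_update(1002);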


Thank you, that made it a lot easier to see into what's really going on.
But the outcome is somewhat the same. The bad table analysis shows a
very high cost, and thousands of rows, where the table contains only 24
rows. This time, however, the actual run time is shown, and one can see
where the time is spent (I was using just a sum of clock_time()s around
the update statements to see where the problem is).

r_agrio_hourly - good, r_agrio_total - bad.

  Update on r_agrio_hourly  (cost=0.42..970.32 rows=250 width=329)
(actual time=2.248..2.248 rows=0 loops=1)
  ->  Index Scan using u_r_agrio_hourly on r_agrio_hourly
  (cost=0.42..970.32 rows=250 width=329) (actual time=0.968..1.207
rows=1 loops=1)
  Index Cond: ((tagid = 1002::numeric) AND (unitid =
1002::numeric) AND ((rowdate)::text = '2015-04-09T23'::text) AND
(device_type = 3::numeric) AND (placement = 2::numeric))
  Total runtime: 2.281 ms
  Update on r_agrio_total  (cost=0.42..45052.56 rows=12068 width=321)
(actual time=106.766..106.766 rows=0 loops=1)
  ->  Index Scan using u_r_agrio_total on r_agrio_total
  (cost=0.42..45052.56 rows=12068 width=321) (actual time=0.936..32.626
rows=1 loops=1)
  Index Cond: ((tagid = 1002::numeric) AND (unitid =
1002::numeric) AND (device_type = 3::numeric) AND (placement = 2::numeric))
  Total runtime: 106.793 ms


Keep in mind that the estimated cost is not terribly useful; it's the 
actual times that matter.


I suspect what's happening here is a combination of things. First, the 
hourly table is basically living in cache, but the total table is not. 
That means that when you go to find a row in the total table you're 
actually hitting the disk instead of pulling the data from memory.


Second, you may have a lot of dead rows in the total table. I suspect 
this because of the very large amount of time the index scan is taking. 
Unless you're running on an old 10MB MFM drive you'd be pretty hard 
pressed for even 2 IO operations (one for the index leaf page and one 
for the heap page) to take 32ms. I suspect the index scan is having to 
read many dead rows in before it finds a live one, and incurring 
multiple IOs. Switching to EXPLAIN (analyze, buffers) would help confirm
that.
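
For example, for the slow update above:

    EXPLAIN (ANALYZE, BUFFERS)
    UPDATE r_agrio_total SET unconfirmed = unconfirmed + 0
     WHERE tagid = 1002 AND unitid = 1002
       AND device_type = 3 AND placement = 2;

The Buffers: lines then show how many shared blocks were found in cache
(hit) versus read from disk for the index and heap accesses.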


Third, I think something odd is happening with the update itself. I'm 
pretty sure that the index scan itself is visiting the heap pages, so 
each page should be in shared buffers by the time each tuple hits the 
update node. That makes me wonder what on earth is taking 60ms to update 
the tuple. I suspect it's going into either finding a free buffer to put 
the new tuple on, or waiting to try and extend the relation. Selecting 
ctid from the freshly updated rows and comparing the first number to the 
total number of pages in the heap would show if the new tuples are all 
ending up at the end of the heap.
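
A sketch of that check (relpages is only as fresh as the last
vacuum/analyze):

    -- current size of the heap, in pages
    SELECT relpages FROM pg_class WHERE oid = 'r_agrio_total'::regclass;

    -- where the freshly updated tuples land
    UPDATE r_agrio_total SET unconfirmed = unconfirmed + 0
     WHERE tagid = 1002 AND unitid = 1002
       AND device_type = 3 AND placement = 2
    RETURNING ctid;

If the block numbers in the returned ctids sit at or beyond the old relpages
value, the new tuple versions are being appended at the end of the heap.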

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Ilya Ashchepkov
On Mon, 13 Apr 2015 12:24:11 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 04/13/2015 11:25 AM, Ilya Ashchepkov wrote:
  On Mon, 13 Apr 2015 10:06:05 -0700
  Adrian Klaver adrian.kla...@aklaver.com wrote:
 
  On 04/13/2015 09:42 AM, Ilya Ashchepkov wrote:
  On Sun, 12 Apr 2015 17:30:44 -0700
  Adrian Klaver adrian.kla...@aklaver.com wrote:
 
 
 
 
 
  If a connection is not being made:
 
  1) Does user replication have REPLICATION rights?
  2) Is the pg_hba.conf on the master set up to allow a connection
  from the standby for user replication and database replication?
 
  I commented 'restore_command' in recovery.conf and after start
  slave connected to master.
  Then I uncomment it back. Is it possible to have a both, streaming
  connection and restoring from wal files from NFS share?
 
  Yes:
 
  http://www.postgresql.org/docs/9.3/interactive/warm-standby.html#STREAMING-REPLICATION
 
  I wonder if your master is recycling WALs fast enough that the
  streaming can't find them and the standby has to go to the archive
  instead.
 
  What is your wal_keep_segments on the master set to?:
  # select name,setting from pg_settings where name like 'wal_keep_segments';
         name        | setting
  -------------------+---------
   wal_keep_segments | 128
 
 
  I run tcpdump -ni eth0 port 5432 on slave and didn't see any packet
  from slave to master after restart.
 
 Just to be clear:
 
 1) When you comment out the restore_command the standby connects to
 the master, correct?

Yes.

 
 2) When you uncomment restore_command you do not see a standby 
 connection, correct?

Yes.

 
 So:
 
 1) When you are changing the restore_command status do you restart
 the standby server?

Yes.

 
 2) What does  select * from pg_stat_replication show, in either case?
 
 www.postgresql.org/docs/9.3/static/monitoring-stats.html#PG-STAT-REPLICATION-VIEW

0 rows on master
0 rows on slave

 
 3) I may have missed it, but what is your archive_command on the
 master?

# select name,setting from pg_settings where name like 'archive_command';
      name       |                            setting
-----------------+---------------------------------------------------------------
 archive_command | test ! -f /media/psqlbak/wals/main/%f && cp %p
/media/psqlbak/wals/main/%f


 
 
 
  http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER
 
 
 
  Where are the WAL files coming from?
 
  NFS share on master.
 
 





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bigserial continuity safety

2015-04-13 Thread David G. Johnston
On Mon, Apr 13, 2015 at 7:01 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 4/13/15 7:45 PM, David G. Johnston wrote:

 On Mon, Apr 13, 2015 at 3:05 PM, Pawel Veselov pawel.vese...@gmail.com
 mailto:pawel.vese...@gmail.comwrote:


 Hi.

 If I have a table created as:

 CREATE TABLE xq_agr (
    id    BIGSERIAL PRIMARY KEY,
node  text not null
 );

 and that multiple applications insert into. The applications never
 explicitly specify the value for 'id'.
 Is it safe to, on a single connection, do:

 - open transaction (default transaction isolation)
 - Open cursor for select * from xq_agr order by id asc
 - do something with current record
 - advance the cursor (and repeat something), but stop at some point
 (id = LAST_ID), and
  - delete from xq_agr where id <= LAST_ID;
 - commit

 safe to means - whether the cursor will not miss any records that
 were deleted at the end.

 I'm suspecting that depending on the commit order, I may have
 situations when:
 - TX1 insert ID 1
 - TX2 insert ID 2
 ​​
 - TX2 commits
 - TX3 scans 2
 - TX1 commits
  - TX3 deletes <= 2
 - record ID1 is deleted, but never processed.


 ​Going to ignore the MVC question for the moment and describe a better
 state transition mechanism to consider.

  pending -> active -> completed

 If you ensure you never delete (i.e., transition to completed) something
 that isn't active then you can never delete an item in pending.

 ​Limit the locking to the state transitions only.

 The downside is the need to deal with active items that have been
 abandoned by whatever process marked them active.


 Another option is DELETE RETURNING. Instead of an initial SELECT to find
 records to work on, you would do DELETE FROM WHERE RETURNING * and deal
 with those records. I don't know if that's safe with a cursor though; I
 believe the DELETE fully materializes before records start coming back. So
 you need to handle all the rows from the SELECT or abort.

  Back to your question: you should probably not use <= in your where
 clause.  However, in READ COMMITTED TX3 cannot see ID1 since the
 snapshot it took out was created before TX1 committed.  I am not fluent


 Actually, that's not necessarily true. It depends when TX3 actually takes
 it's snapshot, which is NOT when it runs BEGIN. I believe there's other
 problems you'd run into as well. Basically, READ COMMITTED does nothing to
 protect you from phantom reads.


​I was basing that off of:

​​- TX2 commits
- TX3 scans 2
- TX1 commits

​The scanning causes the snapshot to be taken and occurs before TX1.​

As for the isolation levels I apparently got the two READ ones reversed in
my head...my bad :(

REPEATABLE READ should protect you from phantom reads, but it won't help
 you if someone changes the data. If you're going to try and go this route,
 SERIALIZABLE is your best bet.


While this is likely true, if there is no other use of SERIALIZABLE in the
existing codebase then doing so requires learning/adding transaction retry
to the necessary skills and tools.  It's worth considering other approaches
to avoid the cognitive overhead of serializable.


  enough to work through the entire scenario in my head.  I'd suggest you
 actually open up 3 psql sessions and play with them to see how things
 really behave.


 That's really not safe enough. There's just too many different race
 conditions you can encounter, and I'd bet that you couldn't even reproduce
 some of them from a client.


​I guess there is a difference between knowing something is obviously wrong
because this simple testing failed and not realizing that you still have a
problem because there was no way to reasonably test the condition you are
hitting.

The question becomes whether you rephrase the solution to make it simpler
and thus not as exposed to race conditions and the like (or fails
gracefully if it is - no deadlocks and hopefully minimal waiting) or
whether to simply prevent them outright (and quickly?) by failing with a
serialization exception.

 For me, a simple SELECT FOR UPDATE / UPDATE WHERE command in a
 function solves the problem at small scale with minimal performance
 degradation.  The transition from pending to active is effectively
 serialized and the transition from active to completed only occurs
 when the process has been performed and it is not possible to have two
 clients simultaneously processing the same work.


 Note that that isn't safe from repeatable reads. What you're describing is
 only safe if the WHERE clause on the update is guaranteed to always find
 only one row (ie, in this example, by using xq_agr.id = something).
 Anything other than that is asking for trouble.


​Yes, the result of the select returns an ID of (one or more) pending
jobs which has a LIMIT 1 applied to it and the first record is immediately
updated to reflect its active status.​  At the scale I 

Re: [GENERAL] Pgagent

2015-04-13 Thread Ramesh T
Hi,
in the step definition:
do
$body$
begin
perform delete_empty_parts();
end;
$body$

delete_empty_parts is the function

I selected the SQL option for the step. I right-clicked on the job and chose
"run now", and it gets marked failed (for on-error I selected "fail"; if I
select "success" it gets marked success).

I'm running the function on a particular database and selected today's date,

but the function wasn't executed.

Let me know how to set up the function in pgagent.

On Mon, Apr 13, 2015 at 7:26 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 04/13/2015 06:53 AM, Ramesh T wrote:

 Hi all,
   I have a function to delete data from a table. Where do I need to
place the function in pgagent? In the definition section can
I select SQL or BATCH?

 or else any other method.?


 See here:

 http://www.pgadmin.org/docs/dev/pgagent.html

  Any Help..



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] Pgagent

2015-04-13 Thread Ramesh T
Hi all,
 I have a function to delete data from a table. Where do I need to place
the function in pgagent? In the definition section can I select SQL or BATCH?

Or is there any other method?
Any Help..


Re: [GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
Apologies for the typo of your name in my last post, Tom.

On Mon, Apr 13, 2015 at 12:16 PM, Dennis Jenkins 
dennis.jenkins...@gmail.com wrote:

 Doh.  I found my answer.  Tome posted it years ago..
 http://www.postgresql.org/message-id/18994.1325874...@sss.pgh.pa.us

 I have to connect to the offending database and try to drop role again
 to get the list of actual objects.  One database cannot query the catalog
 of another.

 On Mon, Apr 13, 2015 at 12:05 PM, Dennis Jenkins 
 dennis.jenkins...@gmail.com wrote:

 I am attempting to remove a role from Postgresql-9.3.6.  I've already
 reassigned ownership for the role's tables, functions, sequences, types,
 views, etc...  However, I am still unable to remove the role.  Postgresql
 reports that 8 objects in the database 'postgres' depend on this role.

 How do I locate the database objects that depend on the 'DEADUSER' so
 that I can remedy the situation?

 ps- I've tried REASSIGN OWNED BY DEADUSER TO pgsql; with no success.


 (env vars set for PGDATABASE and PGUSER, target role name changed to
 protect the guilty)

 # psql -c "drop role DEADUSER;"
 ERROR:  role "DEADUSER" cannot be dropped because some objects depend on
 it
 DETAIL:  8 objects in database postgres

 # pg_dump --schema-only postgres | grep -i DEADUSER
 ## No results





Re: [GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
Doh.  I found my answer.  Tome posted it years ago..
http://www.postgresql.org/message-id/18994.1325874...@sss.pgh.pa.us

I have to connect to the offending database and try to drop role again to
get the list of actual objects.  One database cannot query the catalog of
another.
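
In other words, something along these lines (run against the database named
in the DETAIL line, with the role names from the original post):

    psql -d postgres
    postgres=# REASSIGN OWNED BY DEADUSER TO pgsql;
    postgres=# DROP OWNED BY DEADUSER;   -- also revokes its grants
    postgres=# DROP ROLE DEADUSER;

REASSIGN OWNED and DROP OWNED only act on the database you are currently
connected to, which is why running them elsewhere left the 8 objects behind.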

On Mon, Apr 13, 2015 at 12:05 PM, Dennis Jenkins 
dennis.jenkins...@gmail.com wrote:

 I am attempting to remove a role from Postgresql-9.3.6.  I've already
 reassigned ownership for the role's tables, functions, sequences, types,
 views, etc...  However, I am still unable to remove the role.  Postgresql
 reports that 8 objects in the database 'postgres' depend on this role.

 How do I locate the database objects that depend on the 'DEADUSER' so that
 I can remedy the situation?

 ps- I've tried REASSIGN OWNED BY DEADUSER TO pgsql; with no success.


 (env vars set for PGDATABASE and PGUSER, target role name changed to
 protect the guilty)

 # psql -c "drop role DEADUSER;"
 ERROR:  role "DEADUSER" cannot be dropped because some objects depend on it
 DETAIL:  8 objects in database postgres

 # pg_dump --schema-only postgres | grep -i DEADUSER
 ## No results




Re: [GENERAL] Pgagent

2015-04-13 Thread Ramesh T
No error messages; I checked at PostgreSQL\9.4\data\pg_log.

It didn't delete data from the table after the scheduled time.

One thing I don't understand: I have set the timings and the definition
section, and the function works fine when run individually.

any help..




On Mon, Apr 13, 2015 at 8:47 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 04/13/2015 07:22 AM, Ramesh T wrote:

 Hi,
 in stepsdefinition-
 do
 $body$
 begin
 perform delete_empty_parts();
 end;
 $body$

 delete_empty_parts is the function

 i do selected SQL option in step process.Right clicked on job chosen run
 now it getting failed(On error i selected failed,if select success it
 getting success)


 Is there an error message in the Postgres logs?


 I'm running function on particular database and selected today dates..

 but function wasn't executed.


 How do you know?


 let me know how to set it function in pgagent..

 On Mon, Apr 13, 2015 at 7:26 PM, Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com wrote:

 On 04/13/2015 06:53 AM, Ramesh T wrote:

 Hi all,
   I have a function to delete data from a table. Where do I
 need to
 place the function in pgagent?
 In the definition section can
 I select SQL or BATCH?

 or else any other method.?


 See here:

 http://www.pgadmin.org/docs/__dev/pgagent.html
 http://www.pgadmin.org/docs/dev/pgagent.html

 Any Help..



 --
 Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] schema or database

2015-04-13 Thread Pavel Stehule
2015-04-13 10:43 GMT+02:00 Albe Laurenz laurenz.a...@wien.gv.at:

 Michael Cheung wrote:
  I have many similar database to store data for every customer.
  Structure of database is almost the same.
  As I use same application to control all these data, so I can only use
  one database user to connect to these database.
  And I have no needs to query table for different customer together.
 
  I wonder which I should use, different schema or different database to
 store data?
 
  I'd like to know the advantage and disadvantage of using schema or
 database.

 In addition to what others have said:

 If you use multiple schemas within one database, the danger is greater that
 data are written to or read from the wrong schema if your application has
 a bug
and does not make sure to always set search_path or qualify every access
 with a
 schema name.

 With multiple databases you are guaranteed not to access data from a
 different
 database.

 The main downside that I see to multiple databases is the overhead: each of
 the databases will have its own pg_catalog tables.


It can be an advantage - if your schema is pretty complex (thousands of
procedures and tables), then a separate pg_catalog can be better - there are
issues with pg_dump and pg_restore.

So it depends on catalog size and complexity.

Regards

Pavel





 Yours,
 Laurenz Albe

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] schema or database

2015-04-13 Thread John R Pierce

On 4/13/2015 12:07 AM, Alban Hertroys wrote:

That's easier to backup, sure, but you can't restore a single customer's schema 
easily that way. So if one customer messes up their data big time, you'll need 
to restore a backup for all customers in the DB.


if you use pg_dump -Fc, then you can specify the schema at pg_restore time.
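
For example (hypothetical database and schema names):

    pg_dump -Fc -f alldb.dump mydb
    # later, restore just one customer's schema:
    pg_restore -d mydb -n customer_a --clean alldb.dump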



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema or database

2015-04-13 Thread Alban Hertroys

 On 13 Apr 2015, at 4:20, Ian Barwick i...@2ndquadrant.com wrote:
 
 On 13/04/15 11:08, Michael Cheung wrote:
 hi, all;
 
 I am new here. And I need some suggestion.
 
 I have many similar database to store data for every customer.
 Structure of database is almost the same.
 As I use same application to control all these data, so I can only use
 one database user to connect to these database.
 And I have no needs to query table for different customer together.
 
  I wonder which I should use, different schema or different database to store
  data?
  
  I'd like to know the advantage and disadvantage of using schema or
 database.
 
 If as you say access to the database is via a single application database
 user, it will probably make more sense to use multiple schemas rather than
 multiple databases. Keeping everything in one database will simplify
 administration (e.g. making backups - you'll just need to dump the one
 database
 rather than looping through a variable number) and will make life easier if 
 you
 ever need to do some kind of query involving multiple customers.

That's easier to backup, sure, but you can't restore a single customer's schema 
easily that way. So if one customer messes up their data big time, you'll need 
to restore a backup for all customers in the DB.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting user from changing its own attributes

2015-04-13 Thread David G. Johnston
On Sun, Apr 12, 2015 at 10:23 PM, Sameer Kumar sameer.ku...@ashnik.com
wrote:


 On Mon, Apr 13, 2015 at 1:03 PM Jim Nasby jim.na...@bluetreble.com
 wrote:


 No. I suspect the community would support at least a hook for GUC
 changes, if not a full-on permissions system. A hook would make it
 fairly easy to add event trigger support.


 I hope someone out there is listening :)

 I hope I have made my concern clear, I currently don't have a way to
 control users from changing the parameter values for their own settings,
 which allows each user to set inappropriate values, e.g. for work_mem.


If work_mem is the only example you can describe then I'm doubtful that any
kind of urgency is going to be associated with this request.  Your actual
request does nothing because the same user can simply issue SET work_mem
at session start and bypass the user defaults that you want to prevent.

You haven't provided enough meat for anyone to offer advice regarding the
scenario you are encountering that you think has "restrict alter role" as a
solution.  If you want to take the time to actually paint us a picture then
maybe suggestions or motivation for change will result.  But, in the end,
the current architecture of PostgreSQL means that people with credentials
to the database have the capability to DoS the server.  work_mem is simply
one possible avenue and, in reality, one where an inappropriate value can
be either too large or too small.

The useful solution here is not restricting work_mem but rather having a
process in place that provides data regarding excessive memory utilization
AND disk I/O and associating that data with the work_mem value and
executing user.  The event triggers would also allow for monitoring,
without setting an excessive log_statements level, changes and their values.

If you really care about the per-role settings aspect you can at least set up
a cron shell script to query the catalog and report any undesirable settings
and maybe even remove the offending entry.  You are still open to SET
work_mem during the session though...
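
A sketch of such a check against the catalog (the filter here is arbitrary):

    SELECT coalesce(r.rolname, '<all roles>') AS role,
           coalesce(d.datname, '<all databases>') AS database,
           s.setconfig
      FROM pg_db_role_setting s
      LEFT JOIN pg_roles r ON r.oid = s.setrole
      LEFT JOIN pg_database d ON d.oid = s.setdatabase
     WHERE s.setconfig::text LIKE '%work_mem%';

    -- and to clear an offending per-role default (role name hypothetical):
    ALTER ROLE someuser RESET work_mem;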

David J.


Re: [GENERAL] schema or database

2015-04-13 Thread Albe Laurenz
Michael Cheung wrote:
 I have many similar database to store data for every customer.
 Structure of database is almost the same.
 As I use same application to control all these data, so I can only use
 one database user to connect to these database.
 And I have no needs to query table for different customer together.
 
 I wonder which I should use, different schema or different database to store
 data?
 
 I'd like to know the advantage and disadvantage of using schema or database.

In addition to what others have said:

If you use multiple schemas within one database, the danger is greater that
data are written to or read from the wrong schema if your application has a bug
and does not make sure to always set search_path or qualify every access with a
schema name.
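
One way to reduce that risk is to pin search_path at the role or database
level instead of relying on the application (role, database and schema names
here are hypothetical):

    ALTER ROLE customer_a_app IN DATABASE appdb SET search_path = customer_a;
    -- or force fully qualified access everywhere:
    ALTER DATABASE appdb SET search_path = '';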

With multiple databases you are guaranteed not to access data from a different
database.

The main downside that I see to multiple databases is the overhead: each of
the databases will have its own pg_catalog tables.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELinux context of PostgreSQL connection process

2015-04-13 Thread Мартынов Александр
If the user is given the necessary permissions, then can the connection process
get the context of the user?
I mean a category and a level (sensitivity) of the context.
Does the architecture of PostgreSQL permit changing the context of a
connection process to the context of the connecting user?

25.03.2015, 17:38, Мартынов Александр m--...@yandex.ru:
 If the user is given the necessary rights, then can the connection process 
 get a context of the user?
 Is there the possibility in principle?

 24.03.2015, 21:11, John R Pierce pie...@hogranch.com:
  On 3/24/2015 5:16 AM, Мартынов Александр wrote:
   There is postgres db with sepgsql enabled. When user connect to postgres 
 db with psql, postgres create new process for each connection. These 
 processes have selinux context unconfined_u:unconfined_r:postgresql_t.

   Is there a way to assign the process a context of user that connected to 
 db?
  what if that user is on a different system connecting over the network?

  no, the only user the postgres server processes should run as are those
  of the postgres server itself as it needs to read and write files in the
  postgres data directory tree.

  --
  john, recycling bits in santa cruz

  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema or database

2015-04-13 Thread Anil Menon
In addition to all these comments

- If you use multiple databases and you want to keep some common tables
(example counties_Table, My_company_details), it's going to be a pain
- if you want to access tables across databases, you might need to start
using FDWs (which is going to be an administrative pain - syncing passwords
and stuff)
- you could set up security more easily with multiple schemas - example userA
can only use schema A and no access to other schemas

Regards
AK



On Mon, Apr 13, 2015 at 4:48 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:



 2015-04-13 10:43 GMT+02:00 Albe Laurenz laurenz.a...@wien.gv.at:

 Michael Cheung wrote:
  I have many similar database to store data for every customer.
  Structure of database is almost the same.
  As I use same application to control all these data, so I can only use
  one database user to connect to these database.
  And I have no needs to query table for different customer together.
 
  I wonder which I should use, different schema or different database to
 store data?
 
  I'd like to know the advantage and disadvantage of using schema or
 database.

 In addition to what others have said:

 If you use multiple schemas within one database, the danger is greater
 that
 data are written to or read from the wrong schema if your application has
 a bug
 ans does not make sure to always set search_path or qualify every access
 with a
 schema name.

 With multiple databases you are guaranteed not to access data from a
 different
 database.

 The main downside that I see to multiple databases is the overhead: each
 of
 the databases will have its own pg_catalog tables.


  It can be an advantage - if your schema is pretty complex - thousands of
  procedures and tables - then a separate pg_catalog can be better - there are
  issues with pg_dump and pg_restore.

 So it depends on catalog size and complexity.

 Regards

 Pavel





 Yours,
 Laurenz Albe






Re: [GENERAL] schema or database

2015-04-13 Thread Jim Nasby

On 4/13/15 6:21 AM, Anil Menon wrote:

In addition to all these comments

- If you use multiple databases and want to keep some common
tables (for example counties_Table, My_company_details), it's going to be a pain
- if you want to access tables across databases - you might need to
start using FDWs (which is going to be an administrative pain - syncing
passwords and stuff)
- you can set up security more easily with multiple schemas - for example userA
can only use schema A and has no access to other schemas


Please don't top-post.


On Mon, Apr 13, 2015 at 4:48 PM, Pavel Stehule pavel.steh...@gmail.com wrote:



2015-04-13 10:43 GMT+02:00 Albe Laurenz laurenz.a...@wien.gv.at:

Michael Cheung wrote:
 I have many similar databases storing data for each customer.
 The structure of each database is almost the same.
 As I use the same application to control all this data, I can only use
 one database user to connect to these databases.
 And I have no need to query tables for different customers together.

 I wonder which I should use, different schemas or different databases to
store the data?

 I'd like to know the advantages and disadvantages of using schemas or
databases.

In addition to what others have said:

If you use multiple schemas within one database, the danger is greater that
data are written to or read from the wrong schema if your application has a bug
and does not make sure to always set search_path or qualify every access with a
schema name.

With multiple databases you are guaranteed not to access data
from a different
database.

The main downside that I see to multiple databases is the
overhead: each of
the databases will have its own pg_catalog tables.


It can be an advantage - if your schema is pretty complex - thousands of
procedures and tables - then a separate pg_catalog can be better - there
are issues with pg_dump and pg_restore.

So it depends on catalog size and complexity.


Two things no one has mentioned. First, you could also use row-level 
security. If you plan on each customer having a fairly small amount of 
data, this is by far your most efficient option. Anything else will 
result in either huge catalogs or a lot of wasted catalog space.
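
A minimal sketch of the row-level-security approach (RLS policies arrived
with PostgreSQL 9.5's CREATE POLICY, so this assumes a release that has it;
the table, column and policy names below are made up):

CREATE TABLE customer_data (
    customer_role name NOT NULL DEFAULT current_user,
    payload       text
);
ALTER TABLE customer_data ENABLE ROW LEVEL SECURITY;
-- each customer's login role sees only its own rows
CREATE POLICY customer_isolation ON customer_data
    USING (customer_role = current_user);
-- note: superusers and the table owner bypass the policy by default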


Second, if you do per-database, that makes it trivial to scale across 
multiple servers.


Regarding backups: you can easily do partial backups either way with pg_dump;
there's really no difference. You can't do a partial restore with PITR, but that's
true for both schemas and databases.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Pgagent

2015-04-13 Thread Adrian Klaver

On 04/13/2015 06:53 AM, Ramesh T wrote:

Hi all,
   I have a function to delete data from a table. Where do I need to
place the function in pgagent? In the Definition section, can
I select SQL or BATCH?

Or is there any other method?


See here:

http://www.pgadmin.org/docs/dev/pgagent.html


Any Help..



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Ilya Ashchepkov
On Sun, 12 Apr 2015 17:30:44 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 04/12/2015 08:25 AM, Ilya Ashchepkov wrote:
  On Sun, 12 Apr 2015 08:10:48 -0700
  Adrian Klaver adrian.kla...@aklaver.com wrote:
 
  On 04/12/2015 07:47 AM, Ilya Ashchepkov wrote:
  Hello.
 
  I'm setting up hot standby slave.
  It recovers from wal archive files, but I can't connect to it:
  $ psql
  psql: FATAL:  the database system is starting up
 
  On master:
   # select name,setting from pg_settings where name like 'wal_level';
      name    |   setting
   -----------+-------------
    wal_level | hot_standby
 
 
  My slave recovery.conf:
  $ cat recovery.conf
   # Note that recovery.conf must be in $PGDATA directory.
   # It should NOT be located in the same directory as postgresql.conf

   # Specifies whether to start the server as a standby. In streaming
   # replication, this parameter must be set to on.
   standby_mode  = 'on'

   # Specifies a connection string which is used for the standby server
   # to connect with the primary.
   primary_conninfo  = 'host=192.168.0.101 port=5432 user=replication password=*'

   # Specifies a trigger file whose presence should cause streaming
   # replication to end (i.e., failover).
   trigger_file = '/media/psqlbak/101/main/standup'

   # Specifies a command to load archive segments from the WAL archive. If
   # wal_keep_segments is a high enough number to retain the WAL segments
   # required for the standby server, this may not be necessary. But a large
   # workload can cause segments to be recycled before the standby is fully
   # synchronized, requiring you to start again from a new base backup.
   restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby -t /tmp/pgsql.trigger.5432 /media/psqlbak/101/wals/main/ %f %p %r'
 
   I tried commenting out 'restore_command' in recovery.conf on the slave;
   then the slave connects to the master and starts receiving data, but I
   think that's not a very good way. What should I change to receive data
   through the connection and reach a consistent state on the slave?
 
  What have you set for hot_standby on the standby server?:
 
  http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY
 
 
  Oh! I missed this! Thank you!
   Now the slave reaches a consistent state some time after starting, but there is
   still no connection to the master server and it is still restoring wal files.
 
 Not quite sure what you are getting at.
 
 You are not seeing the streaming connection happening?

Yes, no streaming connection.

 If a connection is not being made:
 
 1) Does user replication have REPLICATION rights?
 2) Is the pg_hba.conf on the master set up to allow a connection from 
 the standby for user replication and database replication?

I commented out 'restore_command' in recovery.conf and after a restart the slave
connected to the master.
Then I uncommented it again. Is it possible to have both a streaming
connection and restoring from wal files from the NFS share?

 
 Where are the WAL files coming from?

NFS share on master.



[GENERAL] PG-9.3.6, unable to drop role because some objects depend on it

2015-04-13 Thread Dennis Jenkins
I am attempting to remove a role from Postgresql-9.3.6.  I've already
reassigned ownership for the role's tables, functions, sequences, types,
views, etc...  However, I am still unable to remove the role.  Postgresql
reports that 8 objects in the database 'postgres' depend on this role.

How do I locate the database objects that depend on the 'DEADUSER' so that
I can remedy the situation?

ps- I've tried REASSIGN OWNED BY DEADUSER TO pgsql; with no success.


(env vars set for PGDATABASE and PGUSER, target role name changed to
protect the guilty)

# psql -c "drop role DEADUSER;"
ERROR:  role "DEADUSER" cannot be dropped because some objects depend on it
DETAIL:  8 objects in database postgres

# pg_dump --schema-only postgres | grep -i DEADUSER
## No results
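
One way to see what those 8 objects are is to query the shared dependency
catalog while connected to the database named in the DETAIL line (a sketch,
untested here; note that REASSIGN OWNED and DROP OWNED likewise only act on
the database you are currently connected to):

SELECT d.classid::regclass AS catalog, d.objid, d.objsubid, d.deptype
FROM pg_shdepend d
JOIN pg_roles r     ON r.oid  = d.refobjid
JOIN pg_database db ON db.oid = d.dbid
WHERE r.rolname = 'DEADUSER'
  AND db.datname = current_database();
-- deptype 'o' = ownership (handled by REASSIGN OWNED),
--         'a' = a granted privilege (handled by DROP OWNED)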


Re: [GENERAL] Re: Hot standby problems: consistent state not reached, no connection to master server.

2015-04-13 Thread Adrian Klaver

On 04/13/2015 09:42 AM, Ilya Ashchepkov wrote:

On Sun, 12 Apr 2015 17:30:44 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:







Oh! I missed this! Thank you!
Now the slave reaches a consistent state some time after starting, but there is
still no connection to the master server and it is still restoring wal files.


Not quite sure what you are getting at.

You are not seeing the streaming connection happening?


Yes, no streaming connection.


If a connection is not being made:

1) Does user replication have REPLICATION rights?
2) Is the pg_hba.conf on the master set up to allow a connection from
the standby for user replication and database replication?


I commented out 'restore_command' in recovery.conf and after a restart the slave
connected to the master.
Then I uncommented it again. Is it possible to have both a streaming
connection and restoring from wal files from the NFS share?


Yes:

http://www.postgresql.org/docs/9.3/interactive/warm-standby.html#STREAMING-REPLICATION
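
For what it's worth, a recovery.conf along these lines should allow both at
once (a sketch reusing the paths from earlier in the thread; the 9.3 docs
advise against using pg_standby as restore_command when standby_mode is on,
because pg_standby waits for a missing segment instead of returning
immediately, which can keep streaming from ever taking over):

standby_mode      = 'on'
primary_conninfo  = 'host=192.168.0.101 port=5432 user=replication password=*'
# plain cp fails straight away when a segment is not in the archive,
# so the standby falls back to streaming replication
restore_command   = 'cp /media/psqlbak/101/wals/main/%f "%p"'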

I wonder if your master is recycling WALs so fast that the streaming 
can't find them and the standby has to go to the archive instead.


What is your wal_keep_segments on the master set to?:

http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER





Where are the WAL files coming from?


NFS share on master.

--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] unexpected error tables can have at most 1600 columns

2015-04-13 Thread Day, David
Situation

I have a co-developer installing a new Virtual Machine and encountering a 
postgres error during the installation.
One of our  SQL patch files is failing unexpectedly.

The patch is attempting to add columns to a table. The table involved
currently has only 2 columns.
Interactively I can generate the same error in his current state.

psql -h ohio -U redcom ace_db
psql (9.3.6)
Type "help" for help.

ace_db=# select * from log.conference_history;
conf_id | max_size
---------+----------
(0 rows)

ace_db=# ALTER TABLE log.conference_history ADD talker_limit integer DEFAULT 0;
ERROR:  tables can have at most 1600 columns
ace_db=#
ace_db=#



Puzzled ?


Any thoughts ?


Regards


Dave Day


Re: [GENERAL] unexpected error tables can have at most 1600 columns

2015-04-13 Thread Pavel Stehule
2015-04-13 17:57 GMT+02:00 Day, David d...@redcom.com:

 Situation



 I have a co-developer installing a new Virtual Machine and encountering a
 postgres error during the installation.

 One of our  SQL patch files is failing unexpectedly.



 The patch is attempting to add columns to a table. The table involved
  currently has only 2 columns.

 Interactively I can generate the same error in his current state.



 psql -h ohio -U redcom ace_db

 psql (9.3.6)

 Type "help" for help.



 ace_db=# select * from log.conference_history;

 conf_id | max_size

 ---------+----------

 (0 rows)



 ace_db=# ALTER TABLE log.conference_history ADD talker_limit integer
 DEFAULT 0;

 ERROR:  tables can have at most 1600 columns

 ace_db=#

 ace_db=#


There can be removed (invisible) columns:

select attname from pg_attribute where attrelid = 'test'::regclass and
attnum > 0;

postgres=# select attname from pg_attribute where attrelid =
'test'::regclass and attnum  0;
┌─────────┐
│ attname │
╞═════════╡
│ a       │
│ c       │
│ d       │
└─────────┘
(3 rows)



alter table test drop column a, drop column c;


postgres=# select attname from pg_attribute where attrelid =
'test'::regclass and attnum > 0;
┌──────────────┐
│   attname    │
╞══════════════╡
│ pg.dropped.1 │
│ pg.dropped.2 │
│ d            │
└──────────────┘
(3 rows)

postgres=# select count(*) from pg_attribute where attrelid =
'test'::regclass and attnum > 0 and attisdropped;
┌───────┐
│ count │
╞═══════╡
│     2 │
└───────┘
(1 row)

So maybe this can be the reason for the issue?

Pavel
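
Dropped columns still count against the 1600-column limit and their attribute
numbers are never reused, so once a table has accumulated that many the usual
way out is to recreate it. A rough sketch using the table from the original
post (any views, foreign keys or grants that reference the old table would
have to be recreated afterwards):

BEGIN;
CREATE TABLE log.conference_history_new
    (LIKE log.conference_history INCLUDING ALL);  -- gets fresh attribute numbers
INSERT INTO log.conference_history_new
    SELECT * FROM log.conference_history;
DROP TABLE log.conference_history;
ALTER TABLE log.conference_history_new RENAME TO conference_history;
COMMIT;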

 Puzzled ?





 Any thoughts ?





 Regards





 Dave Day



Re: [GENERAL] Pgagent

2015-04-13 Thread Adrian Klaver

On 04/13/2015 07:22 AM, Ramesh T wrote:

Hi,
In the step's Definition section I have:

do
$body$
begin
  perform delete_empty_parts();
end;
$body$

delete_empty_parts is the function.

I selected the SQL option for the step, right-clicked on the job and chose
run now, and it gets failed (for On error I selected failed; if I select
success it reports success).


Is there an error message in the Postgres logs?
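
As an aside - assuming delete_empty_parts() takes no arguments - the DO block
is not strictly needed; a pgAgent SQL step can contain the call directly:

SELECT delete_empty_parts();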



I'm running the function on a particular database and selected today's date,
but the function wasn't executed.


How do you know?



Let me know how to set up the function in pgagent.

On Mon, Apr 13, 2015 at 7:26 PM, Adrian Klaver
adrian.kla...@aklaver.com wrote:

On 04/13/2015 06:53 AM, Ramesh T wrote:

Hi all,
   I have a function to delete data from a table. Where do I need to
place the function in pgagent? In the Definition section, can
I select SQL or BATCH?

Or is there any other method?


See here:

http://www.pgadmin.org/docs/dev/pgagent.html

Any Help..



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com

