Re: [GENERAL] DATA Integrity & Recovery

2017-07-12 Thread stevenchang1213
I just finished a point-in-time recovery from an rsync physical data-file backup plus a WAL archive. Of course it works! You can also achieve the same goal through pg_basebackup.
From your description, your vendor seems to use logical backups taken with pg_dump. In any case, you can only achieve zero data loss through a physical backup with WAL archiving.
Check your contract with your vendor, and file a suit if a penalty clause exists.
Steven
-------- Original message --------
From: chris faber
Date: 2017/7/12 22:27 (GMT+08:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] DATA Integrity & Recovery
Postgres 9.2
We have a Postgres database that we have been backing up via incremental backups.
We had an incident where we had to recover from backup. Our software vendor has completed a restore and we have lost 10 days of data. There is no explanation as to the reason we have sustained this loss.
I would appreciate the community's help with the following:
1. Determine whether data from the incremental backups can be restored or recovered.
2. Determine whether data can be recovered from individual files backed up from the main Postgres data directory.
Thanks in advance,
Chris






Re: [GENERAL] have trouble understanding xmin and xmax with update operations from two different sessions

2017-07-01 Thread stevenchang1213
This chapter introduces MVCC:
http://www.interdb.jp/pg/pgsql05.html
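In short: the xmax you see on the new tuple (0,4) records the row lock Session2 took before its WHERE re-check failed, not a deletion; a hint bit in the tuple header marks the xmax as lock-only, so the row stays visible. A sketch for inspecting that flag with the pageinspect extension (assuming it is installed; 'numbers' is the table from the question):

-- examine raw tuple headers on page 0 of the 'numbers' table
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT lp, t_xmin, t_xmax,
       (t_infomask & 128) <> 0 AS xmax_is_lock_only  -- 128 = 0x0080 = HEAP_XMAX_LOCK_ONLY
FROM heap_page_items(get_raw_page('numbers', 0));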


-------- Original message --------
From: rajan
Date: 2017/7/1 14:32 (GMT+08:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] have trouble understanding xmin and xmax with update operations from two different sessions

So, as shown below, Session2 tries to update a row that has already been updated with a different value, and its update fails with *UPDATE 0*.

But from Session3, I see that the xmax value is Session2's txid. Why is it like that?
Could we not put Session2's txid in xmin instead (although the update failed)?
And if we accept that xmax is updated with Session2's txid because the update failed, then why bother updating the xmax?

Please help me understand this.

*Session1*
testdb=# BEGIN;
BEGIN
testdb=# select ctid, xmin, xmax, * from numbers;
 ctid  |  xmin  | xmax | number
-------+--------+------+--------
 (0,2) | 519107 |    0 |     14
 (0,3) | 519112 |    0 |     23
(2 rows)

testdb=# select txid_current();
 txid_current
--------------
       519114
(1 row)

testdb=# update numbers set number=24 where number=14;
UPDATE 1
testdb=# COMMIT;
COMMIT


*Session 2*
testdb=# BEGIN;
BEGIN
testdb=# select txid_current();
 txid_current
--------------
       519115
(1 row)

testdb=# select ctid, xmin, xmax, * from numbers;
 ctid  |  xmin  | xmax | number
-------+--------+------+--------
 (0,2) | 519107 |    0 |     14
 (0,3) | 519112 |    0 |     23
(2 rows)

testdb=# select ctid, xmin, xmax, * from numbers;
 ctid  |  xmin  |  xmax  | number
-------+--------+--------+--------
 (0,2) | 519107 | 519114 |     14
 (0,3) | 519112 |      0 |     23
(2 rows)

testdb=# update numbers set number=25 where number=14;
UPDATE 0
testdb=# COMMIT;
COMMIT

*Session 3*
testdb=# select txid_current();
 txid_current
--------------
       519116
(1 row)

testdb=# select ctid, xmin, xmax, * from numbers;
 ctid  |  xmin  |  xmax  | number
-------+--------+--------+--------
 (0,3) | 519112 |      0 |     23
 (0,4) | 519114 | 519115 |     24
(2 rows)




--
Thanks,
Rajan.
--
View this message in context: 
http://www.postgresql-archive.org/have-trouble-understanding-xmin-and-xmax-with-update-operations-from-two-different-sessions-tp5969629.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Oracle database into PostgreSQL using Ora2PG tool.

2017-06-02 Thread stevenchang1213


Tell me, where is this function add_job_history()? Actually, I don't think you can count on ora2pg to transform your PL/SQL code into PL/pgSQL (or another trusted or untrusted procedural language) automatically; it's not that simple. You can type "\df add_job_history" in a psql session to check its existence if it belongs to the public schema, or access it using a fully qualified name.
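That said, the immediate syntax error is a PL/pgSQL rule: a function call whose result is discarded must be written with PERFORM. A sketch of the corrected trigger function (assuming add_job_history() has itself been ported to PostgreSQL):

CREATE OR REPLACE FUNCTION trigger_fct_update_job_history() RETURNS trigger AS $BODY$
BEGIN
  -- PERFORM evaluates the call and discards the result; a bare call is a syntax error
  PERFORM add_job_history(OLD.employee_id, OLD.hire_date, LOCALTIMESTAMP,
                          OLD.job_id, OLD.department_id);
  RETURN NEW;
END
$BODY$ LANGUAGE plpgsql;

The NOTICE about "employees" simply means the tables must be created before this script is run.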

Steven
Sent from my Samsung Galaxy smartphone.
-------- Original message --------
From: PAWAN SHARMA
Date: 2017/6/2 16:16 (GMT+08:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] Oracle database into PostgreSQL using Ora2PG tool.
Hi All,
I am migrating Oracle database into PostgreSQL using Ora2PG tool.
So I am facing an issue with a trigger in the script output generated from the Oracle database.
Oracle:

CREATE OR REPLACE TRIGGER UPDATE_JOB_HISTORY
  AFTER UPDATE OF JOB_ID, DEPARTMENT_ID ON EMPLOYEES
  FOR EACH ROW
BEGIN
  add_job_history(:old.employee_id, :old.hire_date, sysdate,
                  :old.job_id, :old.department_id);
END;
/
The script generated by the Ora2PG tool:

DROP TRIGGER IF EXISTS update_job_history ON employees CASCADE;

CREATE OR REPLACE FUNCTION trigger_fct_update_job_history() RETURNS trigger AS $BODY$
BEGIN
  add_job_history(OLD.employee_id, OLD.hire_date, LOCALTIMESTAMP,
                  OLD.job_id, OLD.department_id);
  RETURN NEW;
END
$BODY$ LANGUAGE 'plpgsql';

CREATE TRIGGER update_job_history
  AFTER UPDATE ON employees
  FOR EACH ROW
  EXECUTE PROCEDURE trigger_fct_update_job_history();

When I try to run the above generated script, it shows the error below:

ERROR:  syntax error at or near "add_job_history"
LINE 4:   add_job_history(OLD.employee_id, OLD.hire_date, LOCALTIMES...
          ^
NOTICE:  relation "employees" does not exist, skipping


Please suggest or help to resolve it.
-Pawan


 



Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

2017-05-24 Thread stevenchang1213
For your reference:
https://madusudanan.com/blog/understanding-postgres-caching-in-depth/


Sent from my Samsung Galaxy smartphone.
-------- Original message --------
From: stevenchang1213 <stevenchang1...@gmail.com>
Date: 2017/5/24 23:48 (GMT+08:00)
To: Bill Moran <wmo...@potentialtech.com>
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?


1. Cache misses occur because large-volume data I/O goes through a 256 KB ring buffer rather than the whole of shared_buffers; that is why prewarming tools were developed to cover the issue.
2. pg_prewarm has been in the contrib modules since 9.4 (a patch exists for 9.2); it loads relation data into shared buffers or the OS cache.
3. pgfincore is an external module (e.g., an RPM on Red Hat) that provides a Postgres extension for checking a relation's cache stats in the OS, with extra control such as evicting a relation from the OS cache.

I ran into the same situation as yours, also on 9.2, at least two years ago; that is why I studied shared_buffers in depth to figure it out. I suggest you search the web for Postgres shared-buffer issues; you will find what you want. A short sketch of the two tools follows.
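A minimal sketch (assuming pg_prewarm, 9.4+, and pgfincore are installed; mytable is a placeholder):

-- load every block of mytable into PostgreSQL shared buffers
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('mytable', 'buffer');

-- report how many of mytable's pages sit in the OS page cache
CREATE EXTENSION IF NOT EXISTS pgfincore;
SELECT segment, rel_os_pages, pages_mem FROM pgfincore('mytable');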
Steven
Sent from my Samsung Galaxy smartphone.
-------- Original message --------
From: Bill Moran <wmo...@potentialtech.com>
Date: 2017/5/24 22:52 (GMT+08:00)
To: stevenchang1213 <stevenchang1...@gmail.com>
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?
On Wed, 24 May 2017 21:02:45 +0800
stevenchang1213 <stevenchang1...@gmail.com> wrote:

> hello, at most 40% total memory, official doc also says so.

The docs say, "it is unlikely that an allocation of more than 40% of RAM
to shared_buffers will work better than a smaller amount" which is a far
cry from defining a maximum amount, or even warning that there are dangers
for high values. Also, that is in the same paragraph that starts with
"If you have a dedicated database server with 1GB or more of RAM ..." so
I don't need to check the RCS logs to predict that that paragraph hasn't
been updated in a while.

> you can testify it using pg_prewarm and pgfincore.

Not sure how those tools are going to predict whether I'm going to see
database stalls or other performance inversions from adding more
shared_buffers. For the purposes of seeing if shared_buffers are being
used effectively, I need only graph the block hits and misses to see
that a huge number of cache pages are satisfying requests, but also that
the amount of cache misses is still high enough for me to know that my
working set does _not_ fit in shared_buffers. What I _don't_ know is
whether increasing shared_buffers (to say, 128G on a 750G machine) is
still going to result in the same, weird performance inversion I saw
back in the 9.2 days.
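(For the record, the graphing is driven by something along these lines; a sketch against the cumulative counters in pg_stat_database:)

SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE datname = current_database();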

> btw, numa supported? if so, extra care is necessary when starting db cluster.

Egad how I hate Linux's default NUMA policy. But I do know how to manage
it, and it's not part of the issue.

> Sent from my Samsung Galaxy smartphone.
> -------- Original message --------
> From: Bill Moran <wmo...@potentialtech.com>
> Date: 2017/5/24 20:24 (GMT+08:00)
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?
> 
> A few years ago, I was working with "big" servers. At least, they were
> big for that age, with *128G* of RAM!!!1 Holy mackerel, right?!!?
> 
> Anyway, at that time, I tried allocating 64G to shared buffers and we
> had a bunch of problems with inconsistent performance, including "stall"
> periods where the database would stop responding for 2 or 3 seconds.
> After trying all sorts of tuning options that didn't help, the problem
> finally went away after reducing shared_buffers to 32G. I speculated, at
> the time, that the shared buffer code hit performance issues managing
> that much memory, but I never had the opportunity to really follow up
> on it.
> 
> Now, this was back in 2012 or thereabouts. Seems like another lifetime.
> Probably PostgreSQL 9.2 at that time.
> 
> Nowadays, 128G is a "medium sized" server. I just got access to one
> with 775G. It would appear that I could order from Dell with 1.5T of
> RAM if I'm willing to sell my house ...
> 
> Yet, all the docs and advice I'm able to find online seem to have been
> written pre 2008 and say things like "if your server has more than 1G
> of RAM ..."
> 
> I feel like it's time for a documentation update ;) But I, personally
> don't have the experience recently enough to know what sort of
> recommendations to make. 
> 
> What are people's experience with modern versions of Postgres on hardware
> this size? Do any of the experts have specific recommendations on large
> shared_buffers settings? Any developers care to comment on any work
> that's been done since 2012 to make large values work better?
> 
> -- 
> Bill Moran <wmo...@potentialtech.com>
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Bill Moran <wmo...@potentialtech.com>



Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

2017-05-24 Thread stevenchang1213
Hello. At most 40% of total memory; the official docs say so as well. You can verify it using pg_prewarm and pgfincore.
By the way, is NUMA in play? If so, extra care is necessary when starting the db cluster; see the sketch below.
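A common mitigation on a NUMA Linux box (a sketch; the data directory path is an assumption):

# interleave shared memory across NUMA nodes instead of filling one node
numactl --interleave=all pg_ctl -D /var/lib/pgsql/data start
# and stop the kernel from aggressively reclaiming per-node pages
sysctl -w vm.zone_reclaim_mode=0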


Sent from my Samsung Galaxy smartphone.
-------- Original message --------
From: Bill Moran
Date: 2017/5/24 20:24 (GMT+08:00)
To: pgsql-general@postgresql.org
Subject: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

A few years ago, I was working with "big" servers. At least, they were
big for that age, with *128G* of RAM!!!1 Holy mackerel, right?!!?

Anyway, at that time, I tried allocating 64G to shared buffers and we
had a bunch of problems with inconsistent performance, including "stall"
periods where the database would stop responding for 2 or 3 seconds.
After trying all sorts of tuning options that didn't help, the problem
finally went away after reducing shared_buffers to 32G. I speculated, at
the time, that the shared buffer code hit performance issues managing
that much memory, but I never had the opportunity to really follow up
on it.

Now, this was back in 2012 or thereabouts. Seems like another lifetime.
Probably PostgreSQL 9.2 at that time.

Nowadays, 128G is a "medium sized" server. I just got access to one
with 775G. It would appear that I could order from Dell with 1.5T of
RAM if I'm willing to sell my house ...

Yet, all the docs and advice I'm able to find online seem to have been
written pre 2008 and say things like "if your server has more than 1G
of RAM ..."

I feel like it's time for a documentation update ;) But I, personally
don't have the experience recently enough to know what sort of
recommendations to make. 

What are people's experience with modern versions of Postgres on hardware
this size? Do any of the experts have specific recommendations on large
shared_buffers settings? Any developers care to comment on any work
that's been done since 2012 to make large values work better?

-- 
Bill Moran 

