Re: [GENERAL] error messages not getting logged when running script from cron

2015-11-24 Thread Adrian Klaver

On 11/23/2015 04:33 PM, anj patnaik wrote:

I am seeing a bizarre behavior. The command works fine when called
directly from the prompt, but when invoked via cron, there is no output
and no mail.

This works fine:
script 2>&1 | tee /tmp/ff ; mailx -s "email" -myem...@company.com < /tmp/ff

Now, I place the identical line in the crontab file and the output file
/tmp/ff does not get generated.


Does the script run?

Remember cron has a limited environment. In your actual crontab line, do
you have a complete path to 'script', or does it rely on the crontab PATH?


To get your above code to run I had to do something like:

cat cron_txt.txt 2>&1 | tee /tmp/ff ; mailx -s "email" aklaver...

If you are doing this locally, have you thought about setting MAILTO in
the crontab and letting cron do the mailing? You would probably have to
use pg_restore -v ... if you are not already.
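
For illustration (the schedule and script path here are stand-ins, not
from the original message), a minimal crontab that lets cron do the
mailing might look like:

MAILTO=myem...@company.com
0 20 * * * /usr/local/bin/db_backup.sh

With MAILTO set, cron mails anything the job writes to stdout or stderr,
so no explicit mailx pipe is needed.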




Inside script, I am running a pg_restore which dumps output to stdout.
Using tee is the only way I have found to get that output into a file.

Any idea here? Thanks

On Mon, Nov 23, 2015 at 3:29 PM, Melvin Davidson wrote:

You could also try tweaking the following attached backup script.
Caution: I wrote this quickly from a skeleton script and it has not
been tested.

On Mon, Nov 23, 2015 at 3:21 PM, Tom Lane wrote:

anj patnaik writes:
> My cron tab entry:

> 0 20 * * * db_backup.sh 1> /dev/null 2>&1 | mail -s "backup cron"
> myem...@comp.com

> I am re-directing stderr to stdout and then sending that to email.

Uh, read it again: you're redirecting stdout to /dev/null and then
redirecting stderr to go where stdout goes.  So all output is
going to the bit bucket, not the pipe.

 regards, tom lane
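
For comparison, a crontab entry that keeps all output on the pipe
(rather than sending stdout to /dev/null) would be something like this;
the full path is a stand-in, since cron runs with a limited PATH:

0 20 * * * /path/to/db_backup.sh 2>&1 | mail -s "backup cron" myem...@comp.com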






--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] JSON path wild cards?

2015-11-24 Thread Oleg Bartunov
On Tue, Nov 24, 2015 at 12:39 PM, Dennis  wrote:

> Is there a way to specify a wild card in a json path?
>
> For example I have the following json doc:
>
> [ {"a":1,"b": [ {"x":7,"y":8,"z":9} ] }, {"a":2,"b": [ {"x":4,"y":5,"z":6} ] }, … ]
>
> How do I write a select clause that returns the values of all b.x
> entries, something like [{b:x}], that would return all the b:x values
> in the array? E.g. 7 and 4 ...
>
> Also is there a definition of the syntax of a proper json path for use in
> postgres?
>

Not in postgres currently, but you may try our jsquery extension
https://github.com/postgrespro/jsquery.


Oleg
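
For the example above, the stock jsonb functions can already reach the
nested values without a wild card. A sketch, assuming the documents are
stored in a jsonb column named doc of a hypothetical table t:

SELECT b_elem->'x' AS x
FROM t,
     jsonb_array_elements(t.doc) AS obj,
     jsonb_array_elements(obj->'b') AS b_elem;

For the sample document this returns 7 and 4 as jsonb values.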




Re: [GENERAL] "trust" authentication in pg_hba.conf

2015-11-24 Thread John R Pierce

On 11/24/2015 1:33 PM, Chris Withers wrote:


What's the default contents of pg_hba.conf that postgres ships with?

I've been told it contains 'trust' for all local connections.
Is this wise? Anyone who can get a shell on your database server can 
connect to any database as any user? 




it varies with distributions, and it can be specified via the -A/--auth 
argument to initdb.   most distributions I've seen use 'peer' for local 
connections by default.
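
for example, an initdb run without -A produces local entries roughly
like this (details vary by version), with packaged installs typically
substituting 'peer' on the 'local' line:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             all                                     trust
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust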



--
john r pierce, recycling bits in santa cruz





[GENERAL] Can row level security policies also be implemented for views?

2015-11-24 Thread Caleb Meredith
I'm developing an application where strict control of my data is important.
Views let me build a strict custom reading experience: I can add computed
columns and hide private and metadata columns. Row level security gives me
strict write control over my data. However, I can't use both technologies
together. Why?

It seems easy conceptually: RLS just adds a WHERE clause to queries, if I'm
not mistaken, and conceptually a view is just a query. The CURRENT_USER
issue is valid, but personally it's not too big a problem for me, as most
auth is done through database parameters.
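
A concrete sketch of the combination being described (the table, policy,
and view names here are hypothetical):

CREATE TABLE notes (id serial PRIMARY KEY, owner name, body text);
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;
CREATE POLICY owner_rows ON notes
    USING (owner = current_user);
-- Queries through the view run with the view owner's privileges,
-- so it is the owner's policies, not the caller's, that get applied.
CREATE VIEW notes_view AS
    SELECT id, body, length(body) AS body_length FROM notes;

That last point -- a view is checked against its owner rather than the
calling user -- is the usual explanation for why the two features don't
compose directly.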


Re: [GENERAL] error messages not getting logged when running script from cron

2015-11-24 Thread anj patnaik
I am seeing a bizarre behavior. The command works fine when called directly
from the prompt, but when invoked via cron, there is no output and no mail.

This works fine:
script 2>&1 | tee /tmp/ff ; mailx -s "email" -myem...@company.com < /tmp/ff

Now, I place the identical line in the crontab file and the output file
/tmp/ff does not get generated.

Inside script, I am running a pg_restore which dumps output to stdout.
Using tee is the only way I have found to get that output into a file.

Any idea here? Thanks

On Mon, Nov 23, 2015 at 3:29 PM, Melvin Davidson wrote:

> You could also try tweaking the following attached backup script.
> Caution: I wrote this quickly from a skeleton script and it has not been
> tested.
>
> On Mon, Nov 23, 2015 at 3:21 PM, Tom Lane  wrote:
>
>> anj patnaik  writes:
>> > My cron tab entry:
>>
>> > 0 20 * * * db_backup.sh 1> /dev/null 2>&1 | mail -s "backup cron"
>> > myem...@comp.com
>>
>> > I am re-directing stderr to stdout and then sending that to email.
>>
>> Uh, read it again: you're redirecting stdout to /dev/null and then
>> redirecting stderr to go where stdout goes.  So all output is
>> going to the bit bucket, not the pipe.
>>
>> regards, tom lane
>>
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: [GENERAL] What could cause CREATE TEMP... "could not read block" error?

2015-11-24 Thread Chris Richards
> Are you running the latest minor release for your PostgreSQL version?

No. 9.3.9, whereas the latest is 9.3.10.

> Were there any crashes recently?

No[*]. See comments below about LVM snapshots.

> Do you have "fsync = on"?

Yes.

> Did you make sure that you have a reliable storage system?

RAID-6 SSDs on ext4


I've observed the problem on tables other than "pg_depend". A normal
table (called "blocks") had the problem. A "REINDEX TABLE blocks" did not
fix the problem; however, a "VACUUM FULL" did "fix" it.

We thought we had figured out what might have caused the problem. We were
taking LVM snapshots while the database was running, and then at some time
later we reverted the snapshot (while postgres was stopped). Our theory was
that since postgres was running, we captured its backing store in an
indeterminate state. We changed our snapshot process to shut down
postgres, take the LVM snapshot, and then start postgres.

Unfortunately, the problem cropped up again.

We had an empty database except for our created tables, took the snapshot,
and then populated the tables using normal INSERT/UPDATE/DELETEs while the
system ran. Then we rebooted and reverted the LVM snapshot--back to the
empty database. We then populated the tables using a series of
"COPY ... FROM ..." statements. The only special things about this are
that we dropped one foreign key constraint, and that with our data files
we may issue back-to-back COPYs to the same table (each COPY pulling in
different data, as we pre-process the data before COPYing it). The amount
of data from our COPYs is relatively small:

dirtable - 1 row
dirpath - 1 row
cloud - 940 rows
blocks - 176 rows (before it died)

Indexes were not disabled during the bulk import.

Chris


On Mon, Nov 23, 2015 at 2:59 AM, Albe Laurenz wrote:

> Chris Richards wrote:
> > Adrian is correct. This worked by itself whereas using it in the
> > creation of the temporary table failed.
> > mdb-> SELECT pq.* FROM policyqueue AS pq
> > mdb-> JOIN seed_progress AS sp ON pq.id = sp.polidx;
> >
> > I checked the query Albe suggested; there were two `relfilenode`s
> > (11936 and 11937) that exhibited the error. Respectively, they were
> > pg_depend_depender_index and pg_depend_reference_index.
> >
> > Unfortunately, I didn't disable the nightly processes and something
> > must have(?) fixed the glitch; at midnight GMT the query ran
> > successfully. Ugh.
> >
> > If it crops up again, I have some tools to try and capture data
> > immediately, and will try the suggested REINDEX since both appear to
> > be indices.
>
> These indexes are both on the system table "pg_depend".
>
> That explains why the query ran without problems but the CREATE TABLE
> didn't:
> Creating an object creates dependencies, and you have data corruption in
> the
> system table that tracks dependencies.
>
> I would be pretty worried in your place if I had a corrupted catalog,
> even if it seems to have "fixed itself".  There might be other tables
> with corruption.
>
> I would do two things:
> 1) Wait for a moment when there is little database traffic and
>    run "REINDEX TABLE pg_depend;" to rebuild these indexes.
>
> 2) As soon as you can schedule some downtime, pg_dumpall the database
>    cluster, stop the server, throw away the database cluster, create a
>    new one with "initdb" and restore the dump into that.
>    That will get rid of any lurking data corruption.
>    Watch out for error messages during the pg_dumpall!
>
>
> The main question is of course how you got the corruption in the first
> place.
> Are you running the latest minor release for your PostgreSQL version?
> Were there any crashes recently?
> Do you have "fsync = on"?
> Did you make sure that you have a reliable storage system?
>
> Yours,
> Laurenz Albe
>
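
The dump-and-recreate cycle Laurenz describes, as a rough shell sketch
(the data directory path is an assumption):

pg_dumpall -U postgres > cluster.sql        # watch for error messages!
pg_ctl -D /var/lib/pgsql/9.3/data stop
mv /var/lib/pgsql/9.3/data /var/lib/pgsql/9.3/data.corrupt
initdb -D /var/lib/pgsql/9.3/data
pg_ctl -D /var/lib/pgsql/9.3/data start
psql -U postgres -f cluster.sql postgres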


Re: [GENERAL] Problems with pg_upgrade after change of unix user running db.

2015-11-24 Thread Bruce Momjian
On Mon, Nov 23, 2015 at 11:12:25AM +, Benedikt Grundmann wrote:
> I got this error trying to upgrade one of our database clusters (happily in
> testing) from 9.2 to 9.4:
> 
> Old and new cluster install users have different values for pg_authid.oid
> 
> Important background here is that we used to run the database as the
> postgres unix user, but recently we had changed it to run as a different
> user (because we have several different databases all running as the
> postgres user on different machines, and we wanted each logically
> separate database to run as a different unix user created for that
> purpose -- this simplifies internal administration).
> 
> We had done this by adding a new superuser to the database (with the
> name of the unix user it will run as in the future), turning off the
> database, running chown -R ... databasedir, and starting the database.

Your description is very clear.  In 9.4 and earlier, Postgres checks
that the user running pg_upgrade has the same pg_authid.oid in the old
and new clusters.  In 9.5 we check that the user is the
BOOTSTRAP_SUPERUSERID (10) on both the old and new cluster.

Therefore, what I suggest you do, before running pg_upgrade, is to
rename the pg_authid.oid = 10 row to be your new install user instead of
'postgres', and make your new user row equal 'postgres', e.g. something
like:

--  You already did this first one
--> test=> create user my_new_install_user;
--> CREATE ROLE

select oid from pg_authid where rolname = 'my_new_install_user';
  oid
---
 16385
(1 row)

select oid from pg_authid where rolname = 'postgres';
 oid
-
  10
(1 row)

-- 'XXX' prevents duplicate names
update pg_authid set rolname = 'XXX' where oid = 10;
UPDATE 1
update pg_authid set rolname = 'postgres' where oid = 16385;
UPDATE 1
update pg_authid set rolname = 'my_new_install_user' where oid = 10;
UPDATE 1

What this does is make your new install user the bootstrap user,
which is a requirement for 9.5 pg_upgrade.  You would do this _before_
running pg_upgrade as my_new_install_user.  However, keep in mind that
once you do this, everything owned by my_new_install_user and postgres
is swapped.  This is basically what you need to do after changing
the ownership of the Postgres file system files.
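
As a quick sanity check after the swap, this should now return
'my_new_install_user':

select rolname from pg_authid where oid = 10;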

You can see the 9.5 requirements in the pg_upgrade function
check_is_install_user().  You might as well just honor what that
requires as you will eventually be moving to 9.5.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




[GENERAL] "trust" authentication in pg_hba.conf

2015-11-24 Thread Chris Withers

Hi All,

What's the default contents of pg_hba.conf that postgres ships with?

I've been told it contains 'trust' for all local connections.
Is this wise? Anyone who can get a shell on your database server can 
connect to any database as any user?


cheers,

Chris




Re: [GENERAL] ??: postgres cpu 100% need help

2015-11-24 Thread 657985...@qq.com
hi kevin:
Thanks for your help, but the following config indicates that THP is disabled on my OS:
[root@pg1 ~]# grep -i HugePages_Total /proc/meminfo 
HugePages_Total:   0
[root@pg1 ~]# cat /proc/sys/vm/nr_hugepages
0



657985...@qq.com
 
From: Kevin Grittner
Date: 2015-11-23 23:30
To: 657985...@qq.com
CC: Merlin Moncure; Bill Moran; pgsql-general
Subject: Re: [GENERAL] ??: postgres cpu 100% need help
On Mon, Nov 23, 2015 at 12:20 AM, 657985...@qq.com <657985...@qq.com> wrote:
 
> [root@pg1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
> [always] madvise never
> [root@pg1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
> [always] madvise never
 
There's your problem.  You need to set those to "never".  You can
get immediate relief by echoing 'never' to those pseudo-files, but
you need to configure your system to set them at OS boot time, too.
 
-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
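
The immediate-relief commands Kevin refers to, for reference (standard
on RHEL/CentOS; add the same two lines to /etc/rc.local or a boot-time
unit so they persist across reboots):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag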
 


Re: [GENERAL] ??: postgres cpu 100% need help

2015-11-24 Thread Tom Lane
"657985...@qq.com" <657985...@qq.com> writes:
> Thanks for your help, but the following config indicates that THP is disabled on my OS:
> [root@pg1 ~]# grep -i HugePages_Total /proc/meminfo 
> HugePages_Total:   0
> [root@pg1 ~]# cat /proc/sys/vm/nr_hugepages
> 0

Those values reflect whether *explicit* huge pages are enabled.  They do
not control *transparent* huge pages.

Red Hat's version of all this is documented here:
https://access.redhat.com/solutions/46111

regards, tom lane




Re: [GENERAL] ??: postgres cpu 100% need help

2015-11-24 Thread 657985...@qq.com
Oh, thanks, I understand. But I still have a question:
[root@pg1 pgdata]# uname -a
Linux pg1 3.10.0-123.9.3.el7.x86_64 #1 SMP Thu Nov 6 15:06:03 UTC 2014 x86_64
x86_64 x86_64 GNU/Linux
[root@pg1 pgdata]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

My OS is CentOS 7. Is there a THP problem in it?



657985...@qq.com
 
From: Tom Lane
Date: 2015-11-25 10:21
To: 657985...@qq.com
CC: Kevin Grittner; Merlin Moncure; Bill Moran; pgsql-general
Subject: Re: [GENERAL] ??: postgres cpu 100% need help
"657985...@qq.com" <657985...@qq.com> writes:
> Thanks for your help, but the following config indicates that THP is disabled on my OS:
> [root@pg1 ~]# grep -i HugePages_Total /proc/meminfo 
> HugePages_Total:   0
> [root@pg1 ~]# cat /proc/sys/vm/nr_hugepages
> 0
 
Those values reflect whether *explicit* huge pages are enabled.  They do
not control *transparent* huge pages.
 
Red Hat's version of all this is documented here:
https://access.redhat.com/solutions/46111
 
regards, tom lane


[GENERAL] full_page_writes on SSD?

2015-11-24 Thread Marcin Mańk
I saw this:
http://blog.pgaddict.com/posts/postgresql-on-ssd-4kb-or-8kB-pages

It made me wonder: if SSDs have 4kB/8kB sectors, and we'd make the Postgres
page size equal to the SSD page size, do we still need full_page_writes?

Regards
Marcin Mańk


Re: [GENERAL] full_page_writes on SSD?

2015-11-24 Thread Kevin Grittner
On Tue, Nov 24, 2015 at 12:48 PM, Marcin Mańk  wrote:

> if SSDs have 4kB/8kB sectors, and we'd make the Postgres page
> size equal to the SSD page size, do we still need full_page_writes?

If an OS write of the PostgreSQL page size has no chance of being
partially persisted (a/k/a torn), I don't think full page writes
are needed.  That seems likely to be true if pg page size matches
SSD sector size.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] full_page_writes on SSD?

2015-11-24 Thread Andres Freund
On 2015-11-24 13:09:58 -0600, Kevin Grittner wrote:
> On Tue, Nov 24, 2015 at 12:48 PM, Marcin Mańk  wrote:
> 
> > if SSDs have 4kB/8kB sectors, and we'd make the Postgres page
> > size equal to the SSD page size, do we still need full_page_writes?
> 
> If an OS write of the PostgreSQL page size has no chance of being
> partially persisted (a/k/a torn), I don't think full page writes
> are needed.  That seems likely to be true if pg page size matches
> SSD sector size.

At the very least it also needs to match the page size used by the OS
(4KB on x86).

But be generally wary of turning off fpw's if you use replication. Not
having them often turns an asynchronously batched write workload into one
containing a lot of synchronous, single-threaded reads. Even with SSDs,
that can very quickly lead to not being able to keep up with replay
anymore.
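
For reference, the knob in question lives in postgresql.conf; the
default is:

full_page_writes = on   # 'off' is safe only if torn page writes cannot happen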




Re: [GENERAL] full_page_writes on SSD?

2015-11-24 Thread John R Pierce

On 11/24/2015 10:48 AM, Marcin Mańk wrote:
I saw this: 
http://blog.pgaddict.com/posts/postgresql-on-ssd-4kb-or-8kB-pages


It made me wonder: if SSDs have 4kB/8kB sectors, and we'd make the 
Postgres page size equal to the SSD page size, do we still need 
full_page_writes?



an SSD's actual write block is much, much larger than that. they emulate
512-byte or 4k sectors, but they are not actually written in sector
order; rather, new writes are accumulated in a buffer on the drive, then
written out to a whole block, and a sector mapping table is maintained
by the drive.


--
john r pierce, recycling bits in santa cruz


