Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 4:06 PM, marcelo <marcelo.nico...@gmail.com> wrote:

> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (for the QNX environment) that caught a prompt
> and emulated the standard input. I don't want to do that again.


pg_dump can use the "PGPASSWORD" environment variable upon execution
(actually, all libpq programs can).  You could have a wrapper that sets the
environment variable and then executes pg_dump; that would get you around
the prompt.  Similarly, you could use the .pgpass file.

https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html
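
If it helps, a bare-bones wrapper might look something like this (rough
sketch; the host, user, database, dump path, and password file are
placeholders, and the password file should only be readable by the user
running the script):

#!/bin/sh
# non-interactive pg_dump wrapper -- no password prompt
export PGPASSWORD="$(cat /etc/myapp/pgpass_secret)"   # file mode 0600
pg_dump -h localhost -U app_user -Fc -f /backups/mydb.dump mydb
status=$?
unset PGPASSWORD
exit $status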

--Scott



>
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor want to change pg_dump. Rather, based on the pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which to do that, and his password. (My apps are using that same data,
>>> of course, encrypted for the common users).
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 7:51 AM, marcelo <marcelo.nico...@gmail.com> wrote:

> I would need to do a mild change to pg_dump, working against a 9.4 server
> on linux.
> Which source tree do I need? Have gcc 4.9.2 in my Lubuntu installation.
> TIA


What exactly do you need to change?  Most likely, there is a quick and easy
fix for whatever you're doing without modifying pg_dump itself.

That being said, if you really want to modify the source, download the
source tarball: https://www.postgresql.org/ftp/source/


>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Postgres 9.6 fails to start on VMWare

2017-10-23 Thread Scott Mead
On Mon, Oct 23, 2017 at 11:26 AM, Martin Moore <martin.mo...@avbrief.com>
wrote:

> It was running – not sure how dd handles this. Maybe badly… ☺
>

It doesn't handle it at all, and that would be the cause of your issue: a dd
image of a running cluster copies the data files mid-write, so the result is
not a consistent snapshot.
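
If you need to image the disk again, a safer rough sketch (assuming you can
take a short outage; the service name, device, and paths are placeholders):

# stop the cluster so the on-disk files are quiescent, then image the disk
sudo systemctl stop postgresql-9.6
sudo dd if=/dev/sdb of=/backups/pgdata.img bs=1M conv=fsync
sudo systemctl start postgresql-9.6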

--Scott



>
>
> *From: *Michael Nolan <htf...@gmail.com>
> *Date: *Monday, 23 October 2017 at 15:52
> *To: *Martin Moore <martin.mo...@avbrief.com>
> *Cc: *rob stone <floripa...@gmail.com>, "pgsql-general@postgresql.org" <
> pgsql-general@postgresql.org>
> *Subject: *Re: [GENERAL] Postgres 9.6 fails to start on VMWare
>
>
>
>
>
>
>
> On Mon, Oct 23, 2017 at 3:14 AM, Martin Moore <martin.mo...@avbrief.com>
> wrote:
>
> Same server. I tried a few times.
>
> I didn’t move the db separately, but did a ‘dd’ to copy the disk to an
> imagefile which was converted and loaded into VMWare.
>
> I ‘believed’ that this should keep the low level disk structure the same,
> but if this has corrupted the files I can drop, dump and restore, in which
> case how do I ‘drop’ the DB without postgres running?
>
> Ta,
>
> Martin.
>
>
>
> Was the server you were backing up shut down or in backup mode when you
> did the 'dd' copy?
>
> --
>
> Mike Nolan
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Scott Mead
On Thu, Oct 19, 2017 at 11:21 AM, Igal @ Lucee.org <i...@lucee.org> wrote:

> Hello,
>
> In other database servers, which I'm finally dropping in favor of
> Postgres, I can do the following (mind you that this is for illustration
> only, I do not actually write queries like that):
>
> DECLARE @query varchar(64) = 'red widget';
>
> SELECT *
> FROM products
> WHERE col1 LIKE @query
>OR col2 LIKE @query
>OR col3 LIKE @query
>OR col4 LIKE @query
>OR col5 LIKE @query
>
> The point is, though, that I can change the @query variable in one place
> which is very convenient.
>
> Is it still true (the posts I see on this subject are quite old) that I
> can not do so in Postgres outside of a stored procedure/function?
>

You can do this if you're using psql.  This doesn't work if you're using
something like JDBC, etc..., but from psql it works great.

https://www.postgresql.org/docs/9.5/static/app-psql.html#APP-PSQL-VARIABLES
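
For example (rough sketch; the database name is a placeholder, the table and
columns are from your illustration):

echo "SELECT *
        FROM products
       WHERE col1 LIKE :'q'
          OR col2 LIKE :'q'
          OR col3 LIKE :'q';" | psql -v q='%red widget%' -d mydb

psql interpolates :'q' as a properly quoted literal, so you still only change
the value in one place (the -v option).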

--Scott


> And if so, what's the reason of not adding this feature?  Seems very
> useful to me.
>
> Thanks,
>
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org <http://lucee.org/>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Using cp to back up a database?

2017-10-09 Thread Scott Mead
On Mon, Oct 9, 2017 at 1:19 PM, Ron Johnson <ron.l.john...@cox.net> wrote:

> Maybe my original question wasn't clear, so I'll try again: is it safe to
> do a physical using cp (as opposed to rsync)?
>

Yes -- however, *you must configure WAL archiving* first.  Without it, no
backup tool (cp, rsync, etc.) will provide a good backup of a running cluster.

Oh, and BTW -- The obligatory: You are on an ancient, EOL version of PG.
Upgrade.

Make sure that these are set:
- archive_mode
- archive_command


Then, when you restore the backup, you need to create a recovery.conf
and configure:
- restore_command

https://www.postgresql.org/docs/8.4/static/continuous-archiving.html
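
A minimal sketch of the pieces (the archive directory is a placeholder;
adjust for your environment):

# postgresql.conf on the server being backed up
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf on the restored copy
restore_command = 'cp /mnt/wal_archive/%f %p'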






>
>
>
> On 10/09/2017 11:49 AM, Darren Douglas wrote:
>
> Ron:
>
> Here is an explanation that may help a bit.
>
> Your script is executing a PHYSICAL backup. A physical backup is simply a
> full copy of the cluster (instance) data directory ($PGDATA). A physical
> backup is your best option when you need to backup the cluster data as well
> as all configuration for the cluster. Essentially, if you had to rebuild
> the entire computer hosting the cluster, you could just reinstall the same
> version of postgres, copy in the backup data directory, and the cluster
> would run exactly as it did before with the same data. A physical backup is
> also necessary when the databases get very large.
>
> In the backup script you posted, the 'pg_start_backup' and
> 'pg_stop_backup' commands fulfill two purposes. The first is to create a
> label for the point in time the backup was started - this is done by
> pg_start_backup. The second is to ensure that all WAL segments that have
> been written since the backup began have been safely archived. That is done
> by pg_stop_backup. This approach is necessary to accomplish an online
> physical backup.
>
> As others have mentioned pg_dump is a LOGICAL backup tool similar to any
> SQL dump you've done with another DBMS. The pg_dump command will do a SQL
> dump to recreate everything within a single database. So, if you have
> multiple databases in your cluster, its not the best option. pg_dumpall is
> the logical backup tool that will do a logical dump of all globals (schema
> + roles) along with all databases in the cluster. Because the
> pg_dump/pg_dumpall commands are not executing a physical backup, the
> pg_start_backup and pg_stop_backup commands do not apply.
>
> As for times when you would elect to do a logical backup, as others have
> mentioned, this is the only valid option when you are restoring to a
> different version of Postgres. It is also a good option to do a backup of a
> single small database or several small databases. And, if for any reason
> the backup needs to be human-readable, this is the approach of choice as
> well.
>
> Darren
>
>
>
> The first
>
> On Mon, Oct 9, 2017 at 6:40 AM, Ron Johnson <ron.l.john...@cox.net> wrote:
>
>> Hi,
>>
>> v8.4.20
>>
>> This is what the current backup script uses:
>>
>> /usr/bin/psql -U postgres -c "SELECT pg_start_backup('Incrementalba
>> ckup',true);"
>> cp -r /var/lib/pgsql/data/* $dumpdir/data/
>> /usr/bin/psql -U postgres template1 -c "SELECT pg_stop_backup();"
>>
>>
>> Should it use rsync or pg_dump instead?
>>
>> Thanks
>>
>> --
>> World Peace Through Nuclear Pacification
>>
>>
>
>
> --
> Darren Douglas
> Synse Solutions
> dar...@synsesolutions.com
> 520-661-5885 <(520)%20661-5885>
>
>
>
> --
> World Peace Through Nuclear Pacification
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Rsync to a recovering streaming replica?

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 4:08 PM, Igor Polishchuk <ora4...@gmail.com> wrote:

> Scott,
> Thank you for your insight. I do have some extra disk and network
> throughput to spare. However, my question is ‘Can I run rsync while
> streaming is running?’
>

Ahh, I see.  Sorry.

You need to stop the slave, put the master into backup mode
(pg_start_backup()), and run the parallel rsync over the existing slave's
data directory (so only the differences are transferred).

Then run pg_stop_backup() and start the slave.
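
Roughly (a sketch only; hosts, paths, and version are placeholders, and the
rsync runs on the standby):

# on the standby: stop postgres
pg_ctl -D /var/lib/pgsql/9.6/data stop -m fast

# put the master into backup mode
psql -h master-host -U postgres -c "SELECT pg_start_backup('resync', true);"

# pull only the differences; keep the standby-only files
rsync -a --delete \
      --exclude postmaster.pid --exclude recovery.conf --exclude pg_xlog \
      master-host:/var/lib/pgsql/9.6/data/ /var/lib/pgsql/9.6/data/

# take the master out of backup mode and restart the standby
psql -h master-host -U postgres -c "SELECT pg_stop_backup();"
pg_ctl -D /var/lib/pgsql/9.6/data start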

--Scott



> A streaming replica is a physical copy of a master, so why not. My concern
> is a possible silent introduction of some block corruptions, that would not
> be fixed by a block copy in wal files. I think such corruptions should not
> happen, and I saw a few instances where running rsync seemed to work.
> I’m curious if somebody is aware about a situation where a corruption is
> likely to happen.
>



>
> Igor
>
> On Sep 27, 2017, at 12:48, Scott Mead <sco...@openscg.com> wrote:
>
>
>
> On Wed, Sep 27, 2017 at 1:59 PM, Igor Polishchuk <ora4...@gmail.com>
> wrote:
>
>> Sorry, here are the missing details, if it helps:
>> Postgres 9.6.5 on CentOS 7.2.1511
>>
>> > On Sep 27, 2017, at 10:56, Igor Polishchuk <ora4...@gmail.com> wrote:
>> >
>> > Hello,
>> > I have a multi-terabyte streaming replica on a busy database. When I
>> set it up, repeated rsyncs take at least 6 hours each.
>> > So, when I start the replica, it begins streaming, but it is many hours
>> behind right from the start. It is working for hours, and cannot reach a
>> consistent state
>> > so the database is not getting opened for queries. I have plenty of WAL
>> files available in the master’s pg_xlog, so the replica never uses archived
>> logs.
>> > A question:
>> > Should I be able to run one more rsync from the master to my replica
>> while it is streaming?
>> > The idea is to overcome the throughput limit imposed by a single
>> recovery process on the replica and allow to catch up quicker.
>> > I remember doing it many years ago on Pg 8.4, and also heard from other
>> people doing it. In all cases, it seemed to work.
>> > I’m just not sure if there is no high risk of introducing some hidden
>> data corruption, which I may not notice for a while on such a huge database.
>> > Any educated opinions on the subject here?
>>
>
> It really comes down to the amount of I/O (network and disk) your system
> can handle while under load.  I've used 2 methods to do this in the past:
>
> - http://moo.nac.uci.edu/~hjm/parsync/
>
>   parsync (parallel rsync) is nice; it does all the hard work of
> parallelizing rsync for you.  It's just a pain to get all the prereqs installed.
>
>
> - rsync --itemize-changes
>   Essentially, use this to get a list of files, manually split them out
> and fire up a number of rsyncs.  parsync does this for you, but, if you
> can't get it going for any reason, this works.
>
>
> The real trick, after you do your parallel rsync, make sure that you run
> one final rsync to sync-up any missed items.
>
> Remember, it's all about I/O.  The more parallel threads you use, the
> harder you'll beat up the disks / network on the master, which could impact
> production.
>
> Good luck
>
> --Scott
>
>
>
>
>
>
>
>> >
>> > Thank you
>> > Igor Polishchuk
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com/>*
> http://openscg.com
>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Rsync to a recovering streaming replica?

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 1:59 PM, Igor Polishchuk <ora4...@gmail.com> wrote:

> Sorry, here are the missing details, if it helps:
> Postgres 9.6.5 on CentOS 7.2.1511
>
> > On Sep 27, 2017, at 10:56, Igor Polishchuk <ora4...@gmail.com> wrote:
> >
> > Hello,
> > I have a multi-terabyte streaming replica on a busy database. When I set
> it up, repeated rsyncs take at least 6 hours each.
> > So, when I start the replica, it begins streaming, but it is many hours
> behind right from the start. It is working for hours, and cannot reach a
> consistent state
> > so the database is not getting opened for queries. I have plenty of WAL
> files available in the master’s pg_xlog, so the replica never uses archived
> logs.
> > A question:
> > Should I be able to run one more rsync from the master to my replica
> while it is streaming?
> > The idea is to overcome the throughput limit imposed by a single
> recovery process on the replica and allow to catch up quicker.
> > I remember doing it many years ago on Pg 8.4, and also heard from other
> people doing it. In all cases, it seemed to work.
> > I’m just not sure if there is no high risk of introducing some hidden
> data corruption, which I may not notice for a while on such a huge database.
> > Any educated opinions on the subject here?
>

It really comes down to the amount of I/O (network and disk) your system
can handle while under load.  I've used 2 methods to do this in the past:

- http://moo.nac.uci.edu/~hjm/parsync/

  parsync (parallel rsync) is nice; it does all the hard work of
parallelizing rsync for you.  It's just a pain to get all the prereqs installed.


- rsync --itemize-changes
  Essentially, use this to get a list of files, manually split them out, and
fire up a number of rsyncs.  parsync does this for you, but if you can't
get it going for any reason, this works.


The real trick: after you do your parallel rsync, make sure that you run
one final rsync to pick up any missed items.
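
A rough sketch of the manual split approach (host and paths are
placeholders; assumes GNU split and that no file names contain spaces):

# build a file list from a dry run
rsync -a --dry-run --itemize-changes \
      master-host:/var/lib/pgsql/9.6/data/ /var/lib/pgsql/9.6/data/ \
  | awk '{print $2}' > /tmp/filelist

# split into 4 chunks and run one rsync per chunk
split -n l/4 /tmp/filelist /tmp/chunk.
for f in /tmp/chunk.*; do
  rsync -a --files-from="$f" master-host:/var/lib/pgsql/9.6/data/ /var/lib/pgsql/9.6/data/ &
done
wait

# one final pass to catch anything missed
rsync -a master-host:/var/lib/pgsql/9.6/data/ /var/lib/pgsql/9.6/data/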

Remember, it's all about I/O.  The more parallel threads you use, the
harder you'll beat up the disks / network on the master, which could impact
production.

Good luck

--Scott







> >
> > Thank you
> > Igor Polishchuk
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] WAL Archive command.

2017-09-27 Thread Scott Mead
On Wed, Sep 27, 2017 at 2:55 PM, Jerry Sievers <gsiever...@comcast.net>
wrote:

> John Britto <j...@sodexis.com> writes:
>
> > Hello,
> >
> > I have a streaming replication setup along with WAL archive.
> >
> > archive_command = 'test ! -f /var/pg_archive/%f && cp %p <archive location>/%f && scp %p postgres@192.168.0.123:/%f'
> >
> > When the SCP command fails, the master repeatedly tries to send the
> > archived WAL to standby. But during this time, the pg_xlog directly
> > grew with newer WAL files.
> >
> > The streaming replication hasn't had the problem because on my check,
> > the WAL write location on the primary was same with the last WAL
> > location received/replayed in standby.
> >
> > Since the pg_xlog in the master had few newer WAL files, the master
> > archive is lagging to pick the current pg_xlog WAL file.  When a new
> > WAL occur in the pg_xlog, Master picks the old WAL file to send to
> > the standby.
>
> Yup Pg is going to handle the unshipped WALs one at a time and it will
> do them in order, oldest (lowest file name) first.
>
> > How should I force the PostgreSQL to batch copy the lagging WAL files
> > to pg_archive and then send to standby.  Can I do this manually using
> > rsync?  I wonder how PostgreSQL knows the changes because it
> > maintains info in archive_status with extension as .ready and .done.
>
> I suggest you fix your basic archiving routine to complete and exit with
> success to postgres.
>

+1

scp %p host:/archive/%f
if [ "$?" -ne 0 ]
then
    # the remote copy failed: log it and fall back to a local archive,
    # but still return success so postgres can recycle the WAL segment
    echo "`date`:FAILED:%p" >> /var/log/failed_xlog.log
    cp %p /localarchive/%f
    exit 0
fi

  Essentially, always make sure that you are returning a 0 to postgres.  If
there is a failure, either log it or handle it separately.  This code
snippet is *NOT COMPLETE*, there's a lot more to do in order to make it
production ready and recoverable.  The biggest issue I've had with scp is
that you have to set and enforce a timeout and trap it when it fires.  Note,
the above only works until your local machine (or the /localarchive
partition) runs out of space.  It's *really* important that you have ultra
solid logging and alerting around this.
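
One way to bound the scp, as a rough sketch (uses the coreutils 'timeout'
command; the limits, host, and paths are placeholders):

# kill the transfer if it hangs for more than 30 seconds
timeout 30 scp -o ConnectTimeout=10 %p host:/archive/%f
if [ "$?" -ne 0 ]
then
    echo "`date`:FAILED_OR_TIMED_OUT:%p" >> /var/log/failed_xlog.log
    cp %p /localarchive/%f
    exit 0
fi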




> And as for archive command scripts in general, simpler is better.
>
> If you want to manually ship them in bulk, you may do so but then will
> need to remove the corresponding archive_status/$foo.ready file so that
> postgres won't keep trying to ship the same one.
>

I'm a huge fan of this strategy, especially if you're sending to a remote
datacenter.

archive_command.sh:

cp %p /localarchive/%f
if [ "$?" -ne 0 ]
then
    # local copy failed: log it, but still return success to postgres
    echo "`date`:FAILED:%p" >> /var/log/failed_xlog.log
    exit 0
fi

send_archive_to_remote.sh (run periodically, e.g. from cron):
rsync -avzP /localarchive/* host:/archive/


Of course, now you have to deal with files that are removed from the slave
and make sure they get removed from the master appropriately, but this
is fairly straightforward.

--Scott




> HTH
>
> > Please assist.
> >
> > Thanks,
> >
> > John Britto
> >
> >
> >
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Veritas cluster management

2017-08-30 Thread Scott Mead
On Wed, Aug 30, 2017 at 9:43 AM, Ron Johnson <ron.l.john...@cox.net> wrote:

> Hi,
>
> For any of you with those failover clusters, do you know if "pg_ctl
> reload" works (for compatible config file changes), or must we bounce the
> database using "hares -offline" then "hares -online"?
>

pg_ctl reload does work in this case. HOWEVER, if you do something that
causes trouble for the cluster (e.g. a pg_hba.conf change that breaks
connectivity), that could cause Veritas to try to fail over.  It's
recommended that you test your changes to avoid these scenarios.

  Technically, though, pg_ctl reload works just fine; just don't break
anything :)

--Scott



>
> Thanks
>
> --
> World Peace Through Nuclear Pacification
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-19 Thread Scott Mead


> On Aug 19, 2017, at 04:05, twoflower  wrote:
> 
> Alvaro Herrera-9 wrote
> I saw one installation with "gsutil cp" in archive_command recently. It had 
> the CLOUDSDK_PYTHON environment variable set in the archive_command itself. 
> Maybe that's a problem.
> After all, this was the solution:
> archive_command = 'CLOUDSDK_PYTHON=/usr/bin/python gsutil cp 
> /storage/postgresql/9.6/main/pg_xlog/%p gs://my_bucket/'
> as also hinted in https://github.com/GoogleCloudPlatform/gsutil/issues/402 
> 
> I still don't understand why the environments differ (the context of 
> archive_command vs. "su postgres -" and executing it there) but I am happy 
> it's working now. Thank you! 

If postgres is running under systemd, you'll have a wildly different 
environment than if you just su to postgres. 
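
A quick way to see the difference, as a rough sketch (the data directory
path is a placeholder; the first line of postmaster.pid is the postmaster's
PID):

# environment you get from su
su - postgres -c 'env | sort' > /tmp/env_su.txt

# environment the running postmaster actually has (started by systemd)
PID=$(head -1 /var/lib/postgresql/9.6/main/postmaster.pid)
sudo cat /proc/$PID/environ | tr '\0' '\n' | sort > /tmp/env_systemd.txt

diff /tmp/env_su.txt /tmp/env_systemd.txt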

--Scott




Re: [GENERAL] How to get transaction started always in WRITE mode.

2017-07-25 Thread Scott Mead
On Tue, Jul 25, 2017 at 5:32 AM, Adam Šlachta <adam.slac...@xitee.com>
wrote:

> Hello,
>
>
>
> In short: Is there any way how to setup PostgreSql 9.6 to always start a
> transaction in WRITE mode?
>
>
>
> Our related configuration:
>
> "default_transaction_isolation" --> "read committed"
>
> "default_transaction_read_only" --> "off"
>

Log in to the database with psql as the same user that your Java app
connects with and try:

show default_transaction_read_only;

This can be set per-user; it's possible you're getting tripped up there.
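
To see whether a per-role or per-database override is in play, something
like this (a sketch; the role name is a placeholder):

# show any per-role / per-database settings
psql -c "SELECT r.rolname, d.datname, s.setconfig
           FROM pg_db_role_setting s
           LEFT JOIN pg_roles r ON r.oid = s.setrole
           LEFT JOIN pg_database d ON d.oid = s.setdatabase;"

# clear it for the application user if it is set there
psql -c "ALTER ROLE app_user SET default_transaction_read_only = off;"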
Also, what happens if you run:

select pg_is_in_recovery();


This can happen if you connect to a postgres slave instead of a master.
Make sure you're always connecting to a master node for executing writes.


>
>
>
> Longer description (for those who are interested, since it is not only
> PostgreSQL related):
>
>
>
> We are facing problems with "cannot execute  in a
> read-only transaction" exception (org.postgresql.util.PSQLException).
>
> It is very likely the problem is caused by our code, however at the moment
> the fastest solution before we solve the things properly would be to setup
> WRITE mode for all started transactions on a database-setup-level.
>
>
>
> SW we use:
>
> -> Java 8
>
> -> Hibernate 5.1.2
>
> -> spring-data-jpa 1.10.4.RELEASE
>
> -> spring-beans, spring-core, other spring stuff of version 4.2.8.RELEASE
>
>
>
> Related configuration (I don't want to spam here with long list of
> configuration files so I pick-up what I consider important):
>
> Hibernate -> first & second level cache switched OFF
>
> SessionFactory -> org.springframework.orm.hibernate5.
> LocalSessionFactoryBean
>
> transactionManager -> org.springframework.orm.jpa.JpaTransactionManager
>
> Spring @Transactional(read-only) hint -> where we could we set it to
> "false"
>
> Our typical @Repository extends 
> org.springframework.data.jpa.repository.JpaRepository,
> which uses implementation from org.springframework.data.jpa.
> repository.support.SimpleJpaRepository.
>

Is it possible that your code / connect layer is setting
default_transaction_read_only to TRUE when the app connects?





>
>
> Thank you very much for any hints.
>
> Adam Slachta
>
>
>
> ---
> This e-mail message including any attachments is for the sole use of the
> intended recipient(s) and may contain privileged or confidential
> information. Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please immediately
> contact the sender by reply e-mail and delete the original message and
> destroy all copies thereof.
>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Logging at schema level

2017-07-21 Thread Scott Mead
On Fri, Jul 21, 2017 at 2:11 AM, Nikhil <nikhilsme...@gmail.com> wrote:

> Schema = tenant. So basically tenant level logging.
>

If each tenant uses a separate database user, you could filter the logs by
that user.  You can't split the log files themselves per user, but you
could use a tool like pgBadger to build a report for each individual user
(tenant) in the system and present the logs that way:

From: https://github.com/dalibo/pgbadger

   -u | --dbuser username : only report on entries for the given user.
 ....
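
For example, something like (a sketch; the log path, tenant role, and output
file are placeholders):

# one report per tenant, filtered by that tenant's database user
pgbadger -u tenant1_user -o /var/reports/tenant1.html /var/log/postgresql/postgresql-*.log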

--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com

>
> On 21-Jul-2017 11:21 AM, "Andreas Kretschmer" <andr...@a-kretschmer.de>
> wrote:
>
>> On 21 July 2017 07:10:42 GMT+02:00, Nikhil <nikhilsme...@gmail.com>
>> wrote:
>> >Hello,
>> >
>> >I am using postgresql schema feature for multi-tenancy. can we get
>> >postgresql logs at schema level. Currently it is for the whole database
>> >server (pg_log)
>> >
>>
>> What do you want to achieve? Logging of data-changes per tenant?
>>
>> Regards, Andreas.
>>
>>
>> --
>> 2ndQuadrant - The PostgreSQL Support Company
>>
>


Re: [GENERAL] Strange case of database bloat

2017-07-05 Thread Scott Mead
On Wed, Jul 5, 2017 at 7:28 AM, Chris Travers <chris.trav...@gmail.com>
wrote:

>
>
> On Wed, Jul 5, 2017 at 1:00 PM, PT <wmo...@potentialtech.com> wrote:
>
>>
>> 2x the working size for a frequently updated table isn't terrible bloat.
>> Or are
>> you saying it grows 2x every 24 hours and keeps growing? The real
>> question is
>> how often the table is being vacuumed. How long have you let the
>> experiment run
>> for? Does the table find an equilibrium size where it stops growing? Have
>> you
>> turned on logging for autovacuum to see how often it actually runs on this
>> table?
>>
>
> If it were only twice it would not bother me.  The fact that it is twice
> after 24 hrs, 3x after 48 hrs and 4x after 72 hrs is alarming.
>
>>
>> No unremovable rows does not indicate that autovaccum is keeping up. It
>> just
>> indicates that you don't have a problem with uncommitted transactions
>> holding
>> rows for long periods of time.
>>
>
> Right.  I should have specified that I also have not seen auto vacuum in
> pg_stat_activity with an unusual duration.
>

What about anything WHERE state = 'idle in transaction'?
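
Something along these lines (sketch):

psql -c "SELECT pid, usename, now() - xact_start AS xact_age, query
           FROM pg_stat_activity
          WHERE state = 'idle in transaction'
          ORDER BY xact_age DESC;"

Long-lived idle transactions will keep vacuum from reclaiming the dead rows.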



>
>> Have you looked at tuning the autovacuum parameters for this table? More
>> frequent
>> vacuums should keep things more under control. However, if the write load
>> is
>> heavy, you'll probably want to lower autovacuum_vacuum_cost_delay.
>> Personally,
>> I feel like the default value for this should be 0, but there are likely
>> those
>> that would debate that. In any event, if that setting is too high it can
>> cause
>> autovacuum to take so long that it can't keep up. In theory, setting it
>> too low
>> can cause autovaccum to have a negative performance impact, but I've
>> never seen
>> that happen on modern hardware.
>>
>
> Most of the writes are periodic (hourly?) batch updates which are fairly
> big.
>

I've had similar issues when each update makes a row larger than any of the
available free-space slots.  I had a workload (admittedly on an older
version of postgres) where we were updating every row a few times a day.
Each time, the row (a bytea field) would grow by about 0.5 - 5.0%.  This
prevented us from reusing the free space (it was all too small).  The only
way around this was:

1. Run manual table rebuilds (this was before pg_repack / reorg).  Use
pg_repack now
2. Fix the app

  Essentially, I would do targeted, aggressive vacuuming and then, once a
month (or once I hit a bloat threshold), do a repack (again, it was my own
custom process back then).  That was the bandage until I could get the app
fixed to stop churning so badly.
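
Today, a rough sketch of that combination might be (table, database, and
threshold values are placeholders):

# vacuum this table much more aggressively than the global defaults
psql -d mydb -c "ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.01,
                                            autovacuum_vacuum_cost_delay = 0);"

# rebuild the table online once bloat crosses your threshold
pg_repack --table big_table mydb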


>
>> But that's all speculation until you know how frequently autovacuum runs
>> on
>> that table and how long it takes to do its work.
>>
>
> Given the other time I have seen similar behaviour, the question in my
> mind is why free pages near the beginning of the table don't seem to be
> re-used.
>
> I would like to try to verify that however, if you have any ideas.
>
>>
>> --
>> PT <wmo...@potentialtech.com>
>>
>
>
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to install EASM postgre due to error 8023

2017-06-05 Thread Scott Mead
On Mon, Jun 5, 2017 at 9:59 PM, Garry Sim <garry@netrust.net> wrote:

> Hi Scott,
>
>
>
> Is there a difference between postgre and Entrust Authority Security
> Manager Postgresql Database? But even if end of support, anyway of letting
> me have a better understanding of the error code ?
>

Ah, you're using the bundle that comes with EASM.  I would reach out to
their support for that error code.  It's not an error code that we in the
community would recognize; it's most likely well documented by their
support team.

--Scott




>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
> *NETRUST PTE LTD*
> 70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>
>
>
>
>
> *From:* Scott Mead [mailto:sco...@openscg.com]
> *Sent:* Tuesday, June 6, 2017 9:55 AM
> *To:* Garry Sim <garry@netrust.net>
> *Cc:* pgsql-general <pgsql-general@postgresql.org>
> *Subject:* Re: [GENERAL] Unable to install EASM postgre due to error 8023
>
>
>
>
>
>
>
> On Mon, Jun 5, 2017 at 6:14 AM, Garry Sim <garry@netrust.net> wrote:
>
> Hi all,
>
>
>
> I did a search but unable to find anything in regards to this error. I am
> installing “01 SM_81SP1_Win_PostgreSQL_8323_setup.exe” but upon
> installing towards the ending, I am encountering this message. “Modifying
> the database files failed with an exit code of 8023”. I am currently
> installing at VM with OS: Windows Server 2012 R2
>
>
>
>
>
> Is that postgres version 8.3.23?  That went EOL in Feb 2013
>
>
>
> https://www.postgresql.org/support/versioning/
>
>
>
> --Scott
>
>
>
>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
>
> *NETRUST PTE LTD*70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>
>
>
>
>
> --
>
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com>*
>
> http://openscg.com
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to install EASM postgre due to error 8023

2017-06-05 Thread Scott Mead
On Mon, Jun 5, 2017 at 6:14 AM, Garry Sim <garry@netrust.net> wrote:

> Hi all,
>
>
>
> I did a search but unable to find anything in regards to this error. I am
> installing “01 SM_81SP1_Win_PostgreSQL_8323_setup.exe” but upon
> installing towards the ending, I am encountering this message. “Modifying
> the database files failed with an exit code of 8023”. I am currently
> installing at VM with OS: Windows Server 2012 R2
>
>
>

Is that postgres version 8.3.23?  That went EOL in Feb 2013

https://www.postgresql.org/support/versioning/

--Scott


>
>
> *Regards, *
>
>
>
> *Garry Sim*
>
> Professional Service Consultant
>
>
>
> *NETRUST PTE LTD*
> 70 Bendemeer Road #05-03 Luzerne Singapore 339940
>
> DID: +65 6212 1393 <+65%206212%201393> | Fax +65 6212 1366
> <+65%206212%201366>  |Website http://www.netrust.net
>
>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] logging of application level user in audit trigger

2017-05-09 Thread Scott Mead
On Tue, May 9, 2017 at 2:50 PM, Rajesh Mallah <mallah.raj...@gmail.com>
wrote:

> Hi ,
>
> I am referring to   audit trigger as described in
>
> https://wiki.postgresql.org/wiki/Audit_trigger_91plus OR
> https://wiki.postgresql.org/wiki/Audit_trigger
>
> Although there are documented limitations for these systems , but
> I would like to mention and seek suggestion on a limitation that I feel is
> biggest .
>
>
> It is very a common design pattern in  web-applications that the same
> database
> user is shared for making database changes by different "logged in users"
> of the
> web application.
>
> I feel the core of audit is all about "who"  , "when" and "what" .
>
> In the current audit trigger the "who" is essentially the ROLE which is
> the actor of
> the trigger , but in most scenarios the user associated with the
> web-application session
> is the one that is seeked.
>
> In one of my past projects I passed the web-user to the trigger by setting
> a postgres
> custom variable during the database connection and reading it inside the
> trigger
> and storing it in the audit log table.
>

This is a good method, and one of the best for just straight auditing.  The
other trick I've seen is to use the 'application_name' field.  Developers
would issue:

SET application_name = 'app_user:app_name';

This can be read from pg_stat_activity.application_name.  I believe you'd
be able to read that in a procedure with 'show application_name', and you
can see it live in pg_stat_activity as well.
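
A quick sketch of the round trip (the user/app value is a placeholder;
inside a trigger you can read the same thing with
current_setting('application_name')):

psql <<'SQL'
SET application_name = 'alice:billing_app';
-- what an audit trigger could record for this session
SELECT current_setting('application_name');
SQL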

select application_name, count(*)
  FROM pg_stat_activity
 GROUP by application_name;

  You'd be able to see each user/app pair and the number of sessions each
had open to the DB at a given time.

--Scott



>
> I am curious how others deal with the same issue , is there better or more
> inbuilt solutions
> to store the application level user in the audit trail records.
>
> Regds
> mallah.
>
> ( https://www.redgrape.tech )
>
>
>
>
>
>
>
>
>
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] GENERAL : [Can somebody explain about Base directory]

2017-05-01 Thread Scott Mead
On Mon, May 1, 2017 at 11:41 AM, VENKTESH GUTTEDAR <
venkteshgutte...@gmail.com> wrote:

> Hello All,
>
> Base directory is consuming too much memory, leading to no space on
> server and stopping the application
>
> Somebody please explain why it takes so much memory and is it safe to
> delete those files?
>
>
The base directory is where your actual data (tables, indexes) is stored.
It is NOT safe to delete from that directory manually.  It would be better
to log in to the database and see if you can either drop tables or indexes.
If you do a 'DROP TABLE <tablename>;' in the database, it will remove the
corresponding files from the base directory.  DO NOT DELETE FROM THE BASE
DIRECTORY manually unless you know what you are doing.

   It is possible that you have bloat causing your space issues, but that's
a harder thing to solve (
https://www.openscg.com/2016/11/postgresql-bloat-estimates/)
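
To see what is actually eating the space before you drop anything, something
like this (sketch; the database name is a placeholder):

psql -d mydb -c "SELECT n.nspname, c.relname,
                        pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
                   FROM pg_class c
                   JOIN pg_namespace n ON n.oid = c.relnamespace
                  WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
                  ORDER BY pg_total_relation_size(c.oid) DESC
                  LIMIT 10;"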


> --
> Regards :
> Venktesh Guttedar.
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Recover PostgreSQL database folder data

2017-04-21 Thread Scott Mead
On Fri, Apr 21, 2017 at 12:40 PM, Edson Lidorio <ed...@openmailbox.org>
wrote:

> Hi,
> There was a disaster in my development note. I was able to recover the
> data folder. PostgreSQL 9.6.2, was installed in Centos 7.
>
> Here are the procedures I'm trying to initialize Postgresql for me to do a
> backup.
>
> 1- I installed PostgreSQL 9.6.2 on a VM with Centos 7.
> 2- I stopped the PostgreSQL service: sudo systemctl stop postgresql-9.6
> 3- I renamed the /var/lib/pgsql/9.6/data folder to data_old and
> copied in the old data folder
> 4- I gave permissions on the data folder:
>  sudo chown postgres:postgres /var/lib/pgsql/9.6/data;
>

Your error is that you didn't 'chown *-R* postgres:postgres
/var/lib/pgsql/9.6/data'

--Scott


>  sudo chmod 700 /var/lib/pgsql/9.6/data
> 5 - I tried to start the service: sudo systemctl start postgresql-9.6
> It is generating the following errors:
>
> Abr 21 01:25:35 localhost.localdomain systemd[1]: Starting PostgreSQL 9.6
> database server...
> Abr 21 01:25:36 localhost.localdomain postgresql96-check-db-dir[19996]:
> cat: /var/lib/pgsql/9.6/data//PG_VER…ada
> Abr 21 01:25:36 localhost.localdomain postgresql96-check-db-dir[19996]:
> cat: /var/lib/pgsql/9.6/data//PG_VER…ada
> Abr 21 01:25:36 localhost.localdomain systemd[1]: postgresql-9.6.service:
> control process exited, code=ex...us=1
> Abr 21 01:25:36 localhost.localdomain systemd[1]: Failed to start
> PostgreSQL 9.6 database server.
> Abr 21 01:25:36 localhost.localdomain systemd[1]: Unit
> postgresql-9.6.service entered failed state.
> Abr 21 01:25:36 localhost.localdomain systemd[1]: postgresql-9.6.service
> failed.
> Hint: Some lines were ellipsized, use -l to show in full.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Re: Causeless CPU load waves in backend, on windows, 9.5.5 (EDB binary).

2017-02-13 Thread Scott Mead
On Mon, Feb 13, 2017 at 5:10 PM, Thomas Kellerer <spam_ea...@gmx.net> wrote:

> Nikolai Zhubr schrieb am 13.02.2017 um 23:03:
>
>> Maybe I should have been more specific.
>> What I need is debugging/profiling pure communication side of server
>> operation, implying huge lots of requests and replies going over the
>> wire to and from the server within some continued (valid) session,
>> but so that the server is not actually doing anything above that (no
>> sql, no locking, no synchronizing, zero usefull activity, just
>> pumping network I/O)
>>
>
My personal favorite is 'select 1'

   I know you're not looking for SQL, but you gotta start somewhere...
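
If you want to hammer the wire protocol with trivial round trips, a rough
sketch with pgbench (client count, duration, and database name are
placeholders):

# a custom script that does nothing but a trivial SELECT per transaction
echo "SELECT 1;" > /tmp/select1.sql

# 8 client connections, 4 worker threads, 60 seconds
pgbench -n -f /tmp/select1.sql -c 8 -j 4 -T 60 mydb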


>
>>
> If you are willing to drop the "no sql" requirement you could use
> something like
>
> select rpad('*', 1, '*');
>
> this will send a lot of data over the wire, the SQL overhead should be
> fairly small.
>
> You can send more data if you combine that with e.g. generate_series()
>
>
>
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Re: Building PL/Perl procedural language without --with-perl configure flag

2017-01-15 Thread Scott Mead
On Sun, Jan 15, 2017 at 6:35 AM, postgres user <postgresuser1...@gmail.com>
wrote:

> The already installed Postgres edition was built using the same
> installation procedure as mentioned in the docs, but without the use of
> --with-perl flag. the point I ask the question is because I want to install
> PL/Perl as a separate extension as one does with PostGIS and not along with
> Postgres install. Is there a way out to solve that problem of building the
> PL/Perl language by somehow creating a custom Makefile as we have for
> contrib extensions or PostGIs etc... and then giving it the path of
> pg_config hence leading to an installation?
>


The common way of doing this is the following:

1. Download the same source that you used to build your existing postgres
2. In your existing postgres 'bin' directory, run pg_config
   -- This will show you the full string passed to configure and your
CFLAGS, LDFLAGS, etc...
3. Re-run configure using the same environment as specified by pg_config,
just add --with-perl
4. Build the server
5. Install the binaries

Note: After building, you could always run a diff between the existing
installation and a new installation and only install the differences.
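
As a rough sketch of steps 2-5 (the source directory is a placeholder, and
the quoting of pg_config's output may need minor tweaks on some platforms):

cd postgresql-9.x.y
eval ./configure $(pg_config --configure) --with-perl
make
sudo make -C src/pl/plperl install   # or a full "make install"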


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Er Data Modeller for PostgreSQL

2016-12-29 Thread Scott Mead
On Thu, Dec 29, 2016 at 8:59 AM, Rich Shepard <rshep...@appl-ecosys.com>
wrote:

> On Thu, 29 Dec 2016, Nicolas Paris wrote:
>
> Hi I'd like to tell about Sql Power Architect
>>
>
> Nicholas,
>
>   SPA was going to be my next re-examination after dbeaver. Since the
> latter
> easily accomplished what I needed I stopped there.
>

I've actually used SQL Power Architect before as well.  I like SchemaSpy
because it's a quick, read-only way to get a fast report.  SQL Power
Architect is nice because it's graphical and you can interact with the
DB.  I've also used DBVisualizer for this (I actually love DBVisualizer),
but that tool is geared toward interacting with the database and has
visualization as an extra.  I'll have to try DBeaver; I've heard of it,
but haven't used it.

  SchemaSpy is great, but I always have to come back and find the exact
command needed to kick it off; its CLI is a bit over-complicated.

--Scott



>
> Thanks for the reminder,
>
>
> Rich
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Disabling inheritance with query.

2016-12-22 Thread Scott Mead
On Thu, Dec 22, 2016 at 9:51 AM, Edmundo Robles <edmu...@sw-argos.com>
wrote:

> I want to do that because,  I have  a  partitioned table  (big_table)  and
> others  (t1,t2,t3,t4)  have  foreign keys  reference to big_table  and i
> had many trobules at insert data, reading the doc:
> "A serious limitation of the inheritance feature is that indexes
> (including unique constraints) and foreign key constraints only apply to
> single tables, not to their inheritance children. "
>
> On Wed, Dec 21, 2016 at 4:58 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Edmundo Robles <edmu...@sw-argos.com> writes:
>> > i need  disable  inheritance  from many tables in a query like
>> > "delete from pg_inherits where inhparent=20473"  instead alter table ...
>> > but  is safe?   which is the risk for  database if  i  delete it?
>>
>
If you need to do it for many tables, you could write a query to generate
the ALTER TABLE statements:

select 'ALTER TABLE ' || schemaname || '.' || psut.relname || ' NO INHERIT '
       || pc.relname || ';'
  from pg_stat_user_tables psut, pg_class pc, pg_inherits pi
 where pi.inhrelid = psut.relid
   AND pi.inhparent = pc.oid
   AND pi.inhparent = 20473;

I wouldn't manually hit the catalogs, but this will write all of the ALTER
TABLE statements that you need.
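
If you save that query to a file, you can pipe the generated statements
straight back into psql (a sketch; the database name and file name are
placeholders):

# -A -t -X: unaligned, tuples only, skip psqlrc; -1 -f -: apply in one transaction
psql -AtX -d mydb -f generate_no_inherit.sql | psql -d mydb -1 -f -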


>
>> This seems really dangerous.  You're certainly missing the pg_depend
>> linkages, not to mention attribute inheritance counts in pg_attribute,
>> and there may be other things I'm not remembering offhand.
>>
>> Why can't you use the normal ALTER TABLE approach?
>>
>> regards, tom lane
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Methods to quiesce PostgreSQL DB without configuring log archival

2016-11-23 Thread Scott Mead

> On Nov 22, 2016, at 01:23, MEERA  wrote:
> 
> Hi all,
> 
> If archive_mode is not configured, and i use snapshot solution for backup of 
> the server, how can i ensure data consistency? Is there a way to quiesce all 
> the connections to DB?

If your snapshot solution is atomic, then you are *probably* okay. I would do a 
few restores to test, but atomic snapshots give you a point in time and should 
be consistent. 

Personally, I like archiving because it removes all of the maybes, but, that's 
me :)

> 
> 
> Thanks,
> Meera


Re: [GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Scott Mead


On 10/28/16 9:27 AM, Steve Clark wrote:
> On 10/28/2016 09:15 AM, Adrian Klaver wrote:
>> On 10/28/2016 05:28 AM, Steve Clark wrote:
>>> Hello List,
>>>
>>> I am occasionally seeing the following error:
>>> ALERT  3 sqlcode=-400 errmsg=deadlock detected on line 3351
>> So what exactly is it doing at line 3351?
>>
>>> from an application written using ecpg when trying an update to the table.
>>> Can autovacuum be causing this,
>>> since no one else is updating this database table.
>> Is there more then one instance of the application running?
>>
>>> Thanks,
>>>
>>>
> No. But I examined the pg_log/log_file and saw an error indicating it
> was autovacuum:
>
>
> 2016-10-27 09:47:02 EDT:srm2api:12968:LOG:  sending cancel to blocking
> autovacuum PID 12874
> 2016-10-27 09:47:02 EDT:srm2api:12968:DETAIL:  Process 12968 waits for
> ExclusiveLock on relation 955454549 of database 955447411.
> 2016-10-27 09:47:02 EDT:srm2api:12968:STATEMENT:  lock table
> t_unit_status_log in exclusive mode
> 2016-10-27 09:47:02 EDT::12874:ERROR:  canceling autovacuum task
> 2016-10-27 09:47:02 EDT::12874:CONTEXT:  automatic vacuum of table
> "srm2.public.t_unit_status_log"
> 2016-10-27 09:47:02 EDT:srm2api:9189:ERROR:  deadlock detected at
> character 8
> 2016-10-27 09:47:02 EDT:srm2api:9189:DETAIL:  Process 9189 waits for
> RowExclusiveLock on relation 955454549 of database 955447411; blocked
> by process 12968.
> Process 12968 waits for ExclusiveLock on relation 955454518 of
> database 955447411; blocked by process 9189.
> Process 9189: update t_unit_status_log set status_date = now (
> ) , unit_active = 'y' , last_updated_date = now ( ) , last_updated_by
> = current_user , devices_down = $1  where unit_serial_no = $2
> Process 12968: lock table t_unit in exclusive mode
>
> This is at the same time and same table that my application reported
> the error on.
>
> So I feel pretty confident this is the issue. I guess I should retry
> the update in my application.
>
> Thanks,

The problem is that you're doing:
'LOCK TABLE t_unit_status_log'

  If you were executing normal updates, autovacuum would be fine. 
Remove the exclusive lock from your code and you'll be fine.

--Scott


>
>
>
> -- 
> Stephen Clark
>

-- 
Scott Mead
Sr. Architect
OpenSCG



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - Postgres9.1

2016-10-27 Thread Scott Mead


On 10/27/16 10:12 AM, Joanna Xu wrote:
> We need to keep the data as it is so remving data fold is not feasible.
> If I run "pg_ctl promote" on the slave to make it fail over, this will break 
> the replication and then clean up the configuration related to replication.  
> Any feedback?

Just to [hopefully] clear up some of the confusion..

Somewhere I saw you mention 9.1.  Since you're using 9.1, you won't
have any replication slots in use.  If you were using replication slots,
the master server could start to queue up WAL files waiting for
that node to come back, which could cause you to run out of disk space.
If you were using a replication slot, you would need to (after
promotion) connect to the master and drop it:
- SELECT * FROM pg_replication_slots;
- SELECT pg_drop_replication_slot(slot_name);

https://www.postgresql.org/docs/9.4/static/functions-admin.html#FUNCTIONS-REPLICATION
https://www.postgresql.org/docs/9.4/static/catalog-pg-replication-slots.html

Again, since you're on 9.1, this doesn't matter, BUT, it's important to
keep in mind for the future.

If you are using pure streaming replication, there is nothing that needs
'clean-up' on the master.  Doing a promote will indeed do what you want
to accomplish (NB: See below if you are also archiving your WAL).

That being said, many setups of streaming will also include WAL
(pg_xlog) archiving (check the archive_mode and archive_command in your
postgresql.conf on the master).  If you have configured archiving,
chances are, you will have some sort of cleanup job on the slave that
deals with old archived logs.  You would either

A. Want to stop archiving (for the node you are promoting)
B. Ensure that cleanup is still taking place

  These steps are not done for you when you promote a slave; it's
something that you need to deal with yourself.

There is one other thing to keep in mind.  If you are promoting a
database, it's going to become read-write.  I would make sure to either
erect a firewall around the promoted slave or edit your pg_hba.conf file
to reject traffic from your applications.  If, for some reason your
application were to connect, you could end up with a split-brain
scenario which would mean that some of your data is in the old master
and some is in the newly promoted slave.  This can be very, very
dangerous.  As long as you exercise care, you will be fine; it's just an
important point that I've seen haunt people in the past.
> Thanks,
> Joanna
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
> Sent: Wednesday, October 26, 2016 7:50 PM
> To: Michael Paquier <michael.paqu...@gmail.com>; Joanna Xu 
> <joanna...@amdocs.com>
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Remove Standby (SLAVE) from Primary (MASTER) - 
> Postgres9.1
>
> On 10/26/2016 04:43 PM, Michael Paquier wrote:
>> On Wed, Oct 26, 2016 at 11:18 PM, Joanna Xu <joanna...@amdocs.com> wrote:
>>> I’ve been looking for a procedure (step by step) on how to remove 
>>> Standby
>>> (SLAVE) from Primary (MASTER) for Postgres9.1 in google and the 
>>> archived postings, but no luck.
>> Standby and master servers are two separate Postgres instances. So if 
>> you would like to remove a standby from a cluster. You just need to 
>> basically stop it, then remove its data folder. And you are done.
>> There is no complicated science here.
> Not sure that is what OP wants. From the original post:
>
> " ...
> 2.   After the standby is removed from the primary, both of nodes 
> are in standalone configuration. "
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
> This message and the information contained herein is proprietary and 
> confidential and subject to the Amdocs policy statement,
> you may review at http://www.amdocs.com/email_disclaimer.asp
>

-- 
Scott Mead
Sr. Architect
OpenSCG




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ANN: Upscene releases Database Workbench 5.2.4

2016-10-12 Thread Scott Mead
On Wed, Oct 12, 2016 at 2:51 AM, Martijn Tonies (Upscene Productions) <
m.ton...@upscene.com> wrote:

> Upscene Productions is proud to announce the availability of
>>>> the next version of the popular multi-DBMS development tool:
>>>> " Database Workbench 5.2.4 "
>>>> The 5.2 release includes support for PostgreSQL and adds several other
>>>> features and bugfixes.
>>>> Database Workbench 5 comes in multiple editions with different pricing
>>>> models, there's always a version that suits you!
>>>>
>>>
>>>
>>> I'm not sure announcements of commercial software updates belong on the
>>> pgsql-general email list.   If every vendor of a package with postgres
>>> support posted announcements of each incremental update, we'd be buried
>>> in
>>> spam.
>>>
>>
>> Yeah.. If it applies somewhere that would be pgsql-announce.
>>
>
> For the record, it was my understanding pgsql-announce was by PostgreSQL
> only.
>

Nope, pgsql-announce is for exactly these types of things.  It is a
moderated list, but that's the appropriate venue.

--Scott



>
>
>
> With regards,
>
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com
>
> Database Workbench - developer tool for Oracle, MS SQL Server, PostgreSQL,
> SQL Anywhere, MySQL, InterBase, NexusDB and Firebird.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Postgresql using lxd faild to find address

2016-10-11 Thread Scott Mead
On Mon, Oct 10, 2016 at 11:25 AM, Scottix <scot...@gmail.com> wrote:

> Hi I am using postgresql 9.5 in lxd container on Ubuntu. Using the stock
> 64bit 16.04 ubuntu image. Postgres is working just fine within the
> container, but when I try to assign the ip address associated to the
> container it fails to recognize it at boot. When I restart postgresql it
> recognizes it just fine. So it seems like it is not getting the ip address,
> maybe it is a little bit slower. Anyway to delay the boot time?
>
> Steps to reproduce:
> 1. Setup lxd container
> 2. Install postgresql-9.5
> 3. Configure ip address in the config
> 4. restart container
>
> Symptoms:
> 1. postgresql running fine
> 2. No ip address assigned to postgresql
>
> Logs:
> 2016-10-10 14:40:33 UTC [143-1] LOG:  could not bind IPv4 socket: Cannot
> assign requested address
> 2016-10-10 14:40:33 UTC [143-2] HINT:  Is another postmaster already
> running on port 5432? If not, wait a few seconds and retry.
> 2016-10-10 14:40:33 UTC [143-3] WARNING:  could not create listen socket
> for "10.0.3.51"
>

Set your listen_addresses='*'
  This means any and all available IP addresses.  If you hardcode an IP
in the config and it isn't assigned yet when postgres starts, you'll get
this type of error.  With '*', no matter what interfaces you have, you'll
get a bind.  The downside is that it will listen on ALL network interfaces
on the box.  Sometimes you explicitly don't want a certain interface to be
listening; you can fix that with a firewall or pg_hba.conf, however.

--Scott



> 2016-10-10 14:40:33 UTC [144-1] LOG:  database system was shut down at
> 2016-10-07 23:05:34 UTC
> 2016-10-10 14:40:33 UTC [144-2] LOG:  MultiXact member wraparound
> protections are now enabled
> 2016-10-10 14:40:33 UTC [143-4] LOG:  database system is ready to accept
> connections
> 2016-10-10 14:40:33 UTC [148-1] LOG:  autovacuum launcher started
>
> #systemctl restart postgresql
>
> 2016-10-10 15:17:33 UTC [2353-1] LOG:  database system was shut down at
> 2016-10-10 15:17:32 UTC
> 2016-10-10 15:17:33 UTC [2353-2] LOG:  MultiXact member wraparound
> protections are now enabled
> 2016-10-10 15:17:33 UTC [2352-1] LOG:  database system is ready to accept
> connections
> 2016-10-10 15:17:33 UTC [2357-1] LOG:  autovacuum launcher started
>
> Thanks
>
>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Scott Mead
On Mon, Oct 10, 2016 at 6:15 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Adrian Klaver <adrian.kla...@aklaver.com> writes:
> > On 10/10/2016 12:18 PM, Periko Support wrote:
> >> I was on vacation, but the issue have the same behavior:
>
> > Actually no. Before you had:
>
> > 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> > terminated by signal 9: Killed
>
> > Now you have:
>
> > 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
> > crash of another server process
>
> Most likely it *is* the same thing but the OP trimmed the second log
> excerpt too much.  The "crash of another server process" complaints
> suggest strongly that there was already another problem and this
> is just part of the postmaster's kill-all-children-and-restart
> recovery procedure.
>
> Now, if there really is nothing before this in the log, another possible
> theory is that something decided to send the child processes a SIGQUIT
> signal, which would cause them to believe that the postmaster had told
> them to commit hara-kiri.  I only bring this up because we were already
> shown a script sending random SIGKILLs ... so random SIGQUITs wouldn't be
> too hard to credit either.  But the subsequent log entries don't quite
> square with that idea; if the postmaster weren't already expecting the
> children to die, it would have reacted differently.
>


The better solution is to do this in one query, and more safely:

select pid, usename, datname, pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'openerp'
   AND now() - query_start > '15 minutes'::interval;

   This will use the built-in pg_terminate_backend() for you in one shot.

 --Scott




>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Graphical entity relation model

2016-09-28 Thread Scott Mead
On Wed, Sep 28, 2016 at 4:17 PM, jotpe <jo...@posteo.de> wrote:

> Does anybody know a Software for generating graphical entity relation
> models from existing postgresql databases?
>
>
My personal favorites (in order):
schemaspy.sf.net
https://www.dbvis.com/
http://www.sqlpower.ca/page/architect


> Best regards Johannes




-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-15 Thread Scott Mead
On Thu, Sep 15, 2016 at 4:57 PM, Arun Rangarajan <arunrangara...@gmail.com>
wrote:

> Thanks, Scott.
>
> oracle_fdw version 1.5.0 from http://pgxn.org/dist/oracle_fdw/
>
> Oracle client version: instantclient 12.1
>

I've had problems using anything > instant client 10.  Give it a shot.

--Scott



>
> /usr/lib/postgresql/9.4/lib# ldd oracle_fdw.so
> linux-vdso.so.1 =>  (0x7fff50744000)
> libclntsh.so.12.1 => /usr/lib/oracle/12.1/client64/lib/libclntsh.so.12.1
> (0x7f44769f1000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f447000)
> libnnz12.so => /usr/lib/oracle/12.1/client64/lib/libnnz12.so
> (0x7f4475f4f000)
> libons.so => /usr/lib/oracle/12.1/client64/lib/libons.so
> (0x7f4475d0b000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f4475b07000)
> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f4475884000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x7f4475668000)
> libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f447545)
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f4475247000)
> libaio.so.1 => /lib/x86_64-linux-gnu/libaio.so.1 (0x7f4475045000)
> libclntshcore.so.12.1 => 
> /usr/lib/oracle/12.1/client64/lib/libclntshcore.so.12.1
> (0x7f4474af5000)
> /lib64/ld-linux-x86-64.so.2 (0x7f447990c000)
>
>
>
> On Thu, Sep 15, 2016 at 1:10 PM, Scott Mead <sco...@openscg.com> wrote:
>
>>
>>
>> On Thu, Sep 15, 2016 at 3:25 PM, Arun Rangarajan <
>> arunrangara...@gmail.com> wrote:
>>
>>> I am following the instructions here:
>>> http://blog.dbi-services.com/connecting-your-postgresql-inst
>>> ance-to-an-oracle-database/
>>> to install Oracle foreign data wrapper, oracle_fdw, on a PostgreSQL
>>> server.
>>>
>>> ---
>>> Oracle version: Oracle Database 12c Enterprise Edition Release
>>> 12.1.0.2.0 - 64bit Production, running on Red Hat Linux 7.2
>>>
>>> PostgreSQL version: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu,
>>> compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit, running on Debian 7
>>> (wheezy).
>>> ---
>>>
>>> I was able to install sqlplus and connect from PostgreSQL server to
>>> Oracle server using sqlplus successfully, so connectivity is not a problem.
>>>
>>> But when I try to create the extension, I get the following error:
>>>
>>>
>> ---
>>> postgres=# create extension oracle_fdw;
>>> server closed the connection unexpectedly
>>> This probably means the server terminated abnormally
>>> before or while processing the request.
>>> The connection to the server was lost. Attempting reset: Failed.
>>>
>>
>> Hmm, odd that it's causing a crash.
>>
>>   Which version of the oracle_fdw and which version of the oracle
>> libraries are you linked to?  Make sure to check 'ldd oracle_fdw.so'
>>
>> --Scott
>>
>>
>>
>>> ---
>>>
>>> Then I took clues from https://github.com/dalibo/pg_qualstats/issues/1
>>> and added oracle_fdw to shared_preload_libraries in postgresql.conf like
>>> this:
>>>
>>> shared_preload_libraries = 'oracle_fdw'
>>>
>>> but now I can't restart Postgres:
>>>
>>> ---
>>> # service postgresql restart
>>> [] Restarting PostgreSQL 9.4 database server: main[] The
>>> PostgreSQL server failed to start. Please check the log output:
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL: XX000: invalid cache
>>> ID[FAILt=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:
>>> SearchSysCacheList, syscache.c:1219 ... failed!
>>>  failed!
>>> ---
>>>
>>> Looking into /var/log/postgresql/postgresql-9.4-main.log I only see
>>> these two lines:
>>>
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL:  XX000: invalid cache
>>> ID: 41
>>> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:  SearchSysCacheList,
>>> syscache.c:1219
>>>
>>> Removing oracle_fdw from shared_preload_libraries allows postgres to be
>>> restarted, so this is the one causing restart to fail.
>>>
>>> How to fix this and get the foreign data wrapper working?
>>>
>>> Thank you.
>>>
>>> ​
>>>
>>
>>
>>
>> --
>> --
>> Scott Mead
>> Sr. Architect
>> *OpenSCG <http://openscg.com>*
>> http://openscg.com
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-15 Thread Scott Mead
On Thu, Sep 15, 2016 at 3:25 PM, Arun Rangarajan <arunrangara...@gmail.com>
wrote:

> I am following the instructions here:
> http://blog.dbi-services.com/connecting-your-postgresql-
> instance-to-an-oracle-database/
> to install Oracle foreign data wrapper, oracle_fdw, on a PostgreSQL server.
>
> ---
> Oracle version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
> - 64bit Production, running on Red Hat Linux 7.2
>
> PostgreSQL version: PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled
> by gcc (Debian 4.7.2-5) 4.7.2, 64-bit, running on Debian 7 (wheezy).
> ---
>
> I was able to install sqlplus and connect from PostgreSQL server to Oracle
> server using sqlplus successfully, so connectivity is not a problem.
>
> But when I try to create the extension, I get the following error:
>
>
---
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
>

Hmm, odd that it's causing a crash.

  Which version of the oracle_fdw and which version of the oracle libraries
are you linked to?  Make sure to check 'ldd oracle_fdw.so'

--Scott



> ---
>
> Then I took clues from https://github.com/dalibo/pg_qualstats/issues/1
> and added oracle_fdw to shared_preload_libraries in postgresql.conf like
> this:
>
> shared_preload_libraries = 'oracle_fdw'
>
> but now I can't restart Postgres:
>
> ---
> # service postgresql restart
> [] Restarting PostgreSQL 9.4 database server: main[] The
> PostgreSQL server failed to start. Please check the log output:
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL: XX000: invalid cache
> ID[FAILt=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:
> SearchSysCacheList, syscache.c:1219 ... failed!
>  failed!
> ---
>
> Looking into /var/log/postgresql/postgresql-9.4-main.log I only see these
> two lines:
>
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=FATAL:  XX000: invalid cache ID:
> 41
> t=2016-09-15 11:05:42 PDT d= h= p=23300 a=LOCATION:  SearchSysCacheList,
> syscache.c:1219
>
> Removing oracle_fdw from shared_preload_libraries allows postgres to be
> restarted, so this is the one causing restart to fail.
>
> How to fix this and get the foreign data wrapper working?
>
> Thank you.
>
> ​
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Scott Mead
On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 27/07/2016 10:15, Condor wrote:
>
>> On 26-07-2016 21:04, Dorian Hoxha wrote:
>>
>>> Many comments: https://news.ycombinator.com/item?id=12166585
>>>
>>> https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
>>>
>>> On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe <guy...@gmail.com> wrote:
>>>
>>> Honestly, I've never heard of anyone doing that. But it sounds like
>>>> they had good reasons.
>>>>
>>>> https://eng.uber.com/mysql-migration/
>>>>
>>>> Thoughts?
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>>>
>>
>> They are right for upgrades.
>> It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade
>> to finish upgrade and meanwhile database is offline.
>> In some distros after upgrade of PG version you don't have old binary and
>> library, need to do full dump and restore that take time and disk space.
>>
>
> Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
> seconds. (with the -k option)
> However, be warned that the planing and testing took one full week.
>

That being said, it doesn't really provide a back-out plan.  The beauty of
replication is that you can halt the upgrade at any point if need be and
cut your (hopefully small) losses. If you use -k, you are all in.  Sure,
you could setup a new standby, stop traffic, upgrade whichever node you'd
like (using -k) and still have the other ready in the event of total
catastrophe.  More often than not, I see DBAs and sysads lead the
conversation with "well, postgres can't replicate from one version to
another, so instead " followed by a fast-glazing of management's eyes
and a desire to buy a 'commercial database'.

All in all, Evan's blog seemed to start out decently technical, but it
quickly took a turn with half-truths, outdated information and, in some
cases, downright FUD:

 "The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."


ISTM that they needed a tire swing
<http://i0.wp.com/blogs.perficient.com/perficientdigital/files/2011/07/treecomicbig.jpg>
and were using a dump truck.  Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.

--Scott


>
>
>>
>> Regards,
>> Hristo S.
>>
>>
>>
>>
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Scott Mead
On Thu, Jul 7, 2016 at 1:39 PM, Scott Mead <sco...@openscg.com> wrote:

>
>
> On Thu, Jul 7, 2016 at 1:30 PM, Dmitry Shalashov <skau...@gmail.com>
> wrote:
>
>> Hi Sameer,
>>
>> I am trying to copy-paste (and execute) random snippets of SQL to psql
>> console.
>>
>> There is another ways to do it, which do not involve copy-paste, but I am
>> wondering why is copy-paste won't work. What exactly is happening there...
>>
>>
>> Dmitry Shalashov, surfingbird.ru & relap.io
>>
>> 2016-07-07 20:26 GMT+03:00 Sameer Kumar <sameer.ku...@ashnik.com>:
>>
>>>
>>>
>>> On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov, <skau...@gmail.com>
>>> wrote:
>>>
>>>> Hi everyone.
>>>>
>>>> Let say that I have some sql file with like hundred of simple
>>>> statements in it. I `cat` it, copy it to buffer, go to my beloved psql and
>>>> insert it there.
>>>> But somewhere after first few lines it screws over:
>>>>
>>>> b2b=> BEGIN;
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
>>>> ',0,NULL,5);
>>>> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
>>>> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
>>>> ',0,NULL,5);
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
>>>> ',0,NULL,5);
>>>> '
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','
>>>> autocentre.ua',0,NULL,5);
>>>>
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
>>>> ',0,NULL,5);
>>>>
>>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>>>>
>>>> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','
>>>> calorizator.ru',0,NULL,5)
>>>>
>>>> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>>>>
>>>> How to safely insert big number of statements to psql at once?
>>>> I am aware about "execute this file" \i option of psql, that is not the
>>>> answer I am looking for, thanks :-)
>>>>
>>>
> My personal favorite for this exact thing is to use '\e'
>
>   When you are in psql, if you \e (on *nix) it will open a temp file in
> whatever your $EDITOR variable is set ( I use vim).
>

On Windows, it actually pops up notepad.exe.  Save and close; same behavior.

--Scott


> Paste your data, then save-close the file.  It will put you back into psql
> and execute the command for you.
>
> --Scott
>
>
>>
>>> What are you exactly aiming to do?
>>>
>>> Have you tried -
>>> psql  < myfile
>>>
>>>
>>>
>>>> Dmitry Shalashov, surfingbird.ru & relap.io
>>>>
>>> --
>>> --
>>> Best Regards
>>> Sameer Kumar | DB Solution Architect
>>> *ASHNIK PTE. LTD.*
>>>
>>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>>
>>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>>
>>
>>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> *OpenSCG <http://openscg.com>*
> http://openscg.com
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] pasting a lot of commands to psql

2016-07-07 Thread Scott Mead
On Thu, Jul 7, 2016 at 1:30 PM, Dmitry Shalashov <skau...@gmail.com> wrote:

> Hi Sameer,
>
> I am trying to copy-paste (and execute) random snippets of SQL to psql
> console.
>
> There is another ways to do it, which do not involve copy-paste, but I am
> wondering why is copy-paste won't work. What exactly is happening there...
>
>
> Dmitry Shalashov, surfingbird.ru & relap.io
>
> 2016-07-07 20:26 GMT+03:00 Sameer Kumar <sameer.ku...@ashnik.com>:
>
>>
>>
>> On Fri, 8 Jul 2016, 1:23 a.m. Dmitry Shalashov, <skau...@gmail.com>
>> wrote:
>>
>>> Hi everyone.
>>>
>>> Let say that I have some sql file with like hundred of simple statements
>>> in it. I `cat` it, copy it to buffer, go to my beloved psql and insert it
>>> there.
>>> But somewhere after first few lines it screws over:
>>>
>>> b2b=> BEGIN;
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','ati.su
>>> ',0,NULL,5);
>>> INSERT INTO oko_topsites VALUES('russian_federation'b2b=> INSERT INTO
>>> oko_topsites VALUES('russian_federation','atn.ua',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','audit-it.ru
>>> ',0,NULL,5);
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','aup.ru
>>> ',0,NULL,5);
>>> '
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','
>>> autocentre.ua',0,NULL,5);
>>>
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','autodoc.ru
>>> ',0,NULL,5);
>>>
>>> b2b=> INSERT INTO oko_topsites VALUES('russian_federation','calend.ru'
>>>
>>> b2b(> INSERT INTO oko_topsites VALUES('russian_federation','
>>> calorizator.ru',0,NULL,5)
>>>
>>> Unclosed quotes, unclosed parenthesis - anyway it wont work.
>>>
>>> How to safely insert big number of statements to psql at once?
>>> I am aware about "execute this file" \i option of psql, that is not the
>>> answer I am looking for, thanks :-)
>>>
>>
My personal favorite for this exact thing is to use '\e'.

  When you are in psql, \e (on *nix) opens a temp file in whatever your
$EDITOR variable is set to (I use vim).  Paste your data, then save and
close the file.  It will put you back into psql and execute the command
for you.
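
A quick sketch of the flow (database name taken from your example):

    $ EDITOR=vim psql b2b
    b2b=> \e

Paste the statements into the editor that opens, then save and quit; psql
runs them when the editor exits.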

--Scott


>
>> What are you exactly aiming to do?
>>
>> Have you tried -
>> psql  < myfile
>>
>>
>>
>>> Dmitry Shalashov, surfingbird.ru & relap.io
>>>
>> --
>> --
>> Best Regards
>> Sameer Kumar | DB Solution Architect
>> *ASHNIK PTE. LTD.*
>>
>> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>>
>> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>>
>
>


-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] dblink authentication failed

2016-06-27 Thread Scott Mead


> On Jun 27, 2016, at 03:38, Kiss Attila  wrote:
> 
> Hello,
> I have some problem with using dblink_connect_u() in my function. 
> When I provide the password in the connection string, there is no problem, 
> but when I don’t, it says:
> ERROR:  could not establish connection
> DETAIL:  fe_sendauth: no password supplied
> However the .pgpass file in the path /var/lib/postgres/.pgpass does contain 
> the proper password.
> I have tried to connect to the remote server with psql from bash with the 
> following command:
> # psql –h 192.168.1.40 –U myuser –w remote_db_name
> it works fine.
> My .pgpass file does have the line:
> 192.168.1.40:5432:*:myuser:mypassword
> It seems the psql finds the proper password in the .pgpass file but dblink 
> doesn’t.

When dblink makes the connection, the client is actually the db server itself.
This means that your .pgpass file needs to be set up on the database server,
as the OS user running the postgres process.
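
A rough sketch, reusing the values from your mail (the home directory depends
on how the postgres OS user is set up on that box):

    # on the database server, as the OS user that runs postgres
    echo '192.168.1.40:5432:*:myuser:mypassword' >> ~/.pgpass
    chmod 600 ~/.pgpass

libpq ignores a .pgpass file whose permissions are looser than 0600.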

> The local and the remote server are running on Ubuntu server 14.04, both of 
> them are PostgreSQL 9.4.
> The .pgpass file properties look this:
> # ls –la ./.pgpass
> -rw--- 1 postgres postgres 193 jún   24 13:54 ./.pgpass
> 
> Any ideas are welcome.
> Regards,
> Attila
> 
>   Mentes a vírusoktól. www.avast.com


Re: R: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Scott Mead
On Mon, Jun 20, 2016 at 6:13 AM, Andreas Kretschmer <andr...@a-kretschmer.de
> wrote:

>
>
> Am 20.06.2016 um 11:43 schrieb Job:
>
>> Hi Andreas,
>>
>> I would suggest run only autovacuum, and with time you will see a not
>>> more growing table. There is no need for vacuum full.
>>>
>> So new record, when will be pg_bulkloaded, will replace "marked-free"
>> location?
>>
>
>
> exactly, that's the task for vacuum
>
>
I believe that free space is only available to UPDATE, not INSERT.



>
>
> Andreas
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Replication

2016-06-02 Thread Scott Mead
On Thu, Jun 2, 2016 at 10:16 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> It's been a few years since I worked with slony, and you did not state
> which version of slony or PostgreSQL you are working with, nor did you
> indicate the O/S.
>

I think the OP had pointed to using streaming replication.


> That being said, you should be able to formulate a query with a join
> between sl_path & sl_node that gives you the information you need.
>
> On Thu, Jun 2, 2016 at 9:32 AM, Bertrand Paquet <
> bertrand.paq...@doctolib.fr> wrote:
>
>> Hi,
>>
>> On an hot standby streaming server, is there any way to know, in SQL, to
>> know the ip of current master ?
>> The solution I have is to read the recovery.conf file to find
>> primary_conninfo, but, it can be false.
>>
>>
I've run into this as well.  Only way is recovery.conf.
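
If you have superuser access on the standby, one hack to at least see
primary_conninfo from SQL is to read the file:

    SELECT pg_read_file('recovery.conf');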

--Scott



> Regards,
>>
>> Bertrand
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] Checkpoint Err on Startup of Rsynced System

2016-05-31 Thread Scott Mead
On Tue, May 31, 2016 at 1:13 PM, Jim Longwill <longw...@psmfc.org> wrote:

> I am trying to setup a 2nd, identical, db server (M2) for development and
> I've run into a problem with starting up the 2nd Postgres installation.
>
> Here's what I've done:
>   1) did a 'clone' of 1st (production) machine M1 (so both machines on
> Cent OS 7.2)
>   2) setup an rsync operation, did a complete 'rsync' from M1 to M2
>   3) did a final 'CHECKPOINT' command on M1 postgres
>   4) shutdown postgres on M1 with 'pg_ctl stop'
>   5) did final 'rsync' operation  (then restarted postgres on M1 with
> 'pg_ctl start')
>   6) tried to startup postgres on M2
>
> It won't start, & in the log file gives the error message:
> ...
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid primary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >LOG:  invalid secondary checkpoint record
> < 2016-05-31 09:02:52.337 PDT >PANIC:  could not locate a valid checkpoint
> record
> < 2016-05-31 09:02:53.184 PDT >LOG:  startup process (PID 26680) was
> terminated by signal 6: Aborted
> < 2016-05-31 09:02:53.184 PDT >LOG:  aborting startup due to startup
> process failure
>
> I've tried several times to do this but always get this result.  So, do I
> need to do a new 'initdb..' operation on machine M2 + restore from M1
> backups?  Or is there another way to fix this?
>

You should have stopped M1 prior to taking the backup.  If you can't do
that, it can be done online via:

   1. Set up WAL archiving
   2. select pg_start_backup('some label');
   3. rsync the data directory from M1 to M2
   4. select pg_stop_backup();

  Without archiving and the pg_[start|stop]_backup, you're not guaranteed
anything.  You could use an atomic snapshot (LVM, storage, etc...), but
it's got to be a true snapshot.  Without that, you need archiving + start /
stop backup.
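
Roughly, something like this (paths and hostname are placeholders):

    psql -c "SELECT pg_start_backup('clone_to_m2');"
    rsync -a --exclude=pg_xlog /path/to/data/ m2:/path/to/data/
    psql -c "SELECT pg_stop_backup();"

M2 then needs the archived WAL generated during the backup (via restore_command
in a recovery.conf, or copied into pg_xlog) before it can reach consistency.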

Last section of:
https://wiki.postgresql.org/wiki/Simple_Configuration_Recommendation#Physical_Database_Backups
will take you to:
https://www.postgresql.org/docs/current/static/continuous-archiving.html

--Scott


--o--o--o--o--o--o--o--o--o--o--o--o--
> Jim Longwill
> PSMFC Regional Mark Processing Center
> jlongw...@psmfc.org
> --o--o--o--o--o--o--o--o--o--o--o--o--
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com


Re: [GENERAL] postgresql embedded mode

2016-05-23 Thread Scott Mead
On Mon, May 23, 2016 at 9:51 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/23/2016 03:54 AM, aluka raju wrote:
>
>>
>> Hello,
>>
>> As given in the FAQ's that postgresql cannot be
>> embedded
>> https://wiki.postgresql.org/wiki/FAQ#Can_PostgreSQL_be_embedded.3F .
>>
>> Is their any possibility to make it embedded. Till now postgresql has
>> not done this embedded mode and i want to work on how it can be embedded
>> and contribute. please help me how to start or suggest the idea-.
>>
>
> All of this might be easier if you where to explain your design goals for
> whatever you are creating. Right now we have a series of disconnected
> references to parts of whatever it is.
>
> +1
One individual's idea of embedded is not the same as another's




--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com/>*
http://openscg.com

>
>> Thanks,
>> aluka
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-09 Thread Scott Mead
On Mon, May 9, 2016 at 5:42 PM, D'Arcy J.M. Cain  wrote:

> On Mon, 09 May 2016 17:12:22 -0400
> Tom Lane  wrote:
> > If the same user id + database combinations might be valid in both
> > cases (from both PHP and manual connections) I think your only other
> > option for distinguishing which auth method to use is to make them
> > come in on different addresses.  Can you set up a secondary IP
> > interface that only the PHP server uses, for example?
>
> I did think of that but how do I define that in pg_hba?  The host field
> only specifies the remote IP, not the local one.
>
> > There's no provision for saying "try this auth method, but if it
> > fails, try subsequent hba lines".  It might be interesting to have
> > that, particularly for methods like ident that don't involve any
> > client interaction.  (Otherwise, you're assuming that the client can
> > cope with multiple challenges, which seems like a large assumption.)
> > I don't have much of a feeling for how hard it would be to do in the
> > server.
>
> I had an idea that that wouldn't be so easy else we would have had it
> by now.  However, I am not sure that that is what is needed.  I was
> thinking of something like this:
>
> hostall   joe@nobody  192.168.151.75/32   password
> hostall   all 192.168.151.75/32   ident
>
> The "all@nobody" field is meant to specify that the remote user is
> nobody but that they are connecting as user joe.  You would be able to
> use "all" as well.  You don't even need to do an ident check unless the
> auth method is "trust" which would be silly anyway.  In fact "password"
> is the only method that even makes any sense at all.
>


So, at a high-level, you want:

- Users deploying php scripts in apache to require a password ( btw -- use
md5, not password)
- Users running php scripts from their shell accounts to connect with no
password to the database

  Is that correct?

  Why not just require that everyone use a password (again: *md5*) to
connect?  It would be significantly more secure.  Is there a requirement
that shell account users be able to connect without providing a password?

  (NB:
http://www.postgresql.org/docs/9.4/static/auth-methods.html#AUTH-PASSWORD
 password will send the password in cleartext, md5 will tell libpq to hash
the password for you.  No client-level change).
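
In other words, collapse the two example lines into a single md5 entry,
something like:

    host    all    all    192.168.151.75/32    md5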



> --
> D'Arcy J.M. Cain  |  Democracy is three wolves
> http://www.druid.net/darcy/|  and a sheep voting on
> +1 416 788 2246 (DoD#0082)(eNTP)   |  what's for dinner.
> IM: da...@vex.net, VoIP: sip:da...@druid.net
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] index build faster on 8G laptop than 30G server

2016-04-19 Thread Scott Mead
On Tue, Apr 19, 2016 at 5:28 PM, Bill Ross  wrote:

> I've been running an index build for almost an hour on my 30G server that
> takes ~ 20 mins on my puny old macbook.
>
> It seems like I've tuned all I can.. what am I missing?
>
>
Concurrent traffic on the server ? Locks / conflicts with running traffic?

From a parameter perspective, look at maintenance_work_mem.
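
On 9.2 the default is only 16MB, which can make a 33-million-row index build
crawl.  A session-level bump before the build is usually enough (the value
below is just an example -- size it to your RAM):

    SET maintenance_work_mem = '2GB';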

--Scott



> Thanks,
> Bill
>
> Records to index: 33305041
>
> --- Server:
>
>  PostgreSQL 9.2.15 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.3
> 20140
> 911 (Red Hat 4.8.3-9), 64-bit
>
> shared_buffers = 8GB# min 128kB
> temp_buffers = 2GB# min 800kB
> work_mem = 8GB# min 64kB
> checkpoint_segments = 256# in logfile segments, min 1, 16MB each
> seq_page_cost = 1.0# measured on an arbitrary scale
> random_page_cost = 1.0# same scale as above
> effective_cache_size = 20GB
>
>   PID   USERPR  NI  VIRTRES  SHR S %CPU %MEMTIME+ COMMAND
>  4069 ec2-user  20   0 8596m 1.7g 1.7g R 99.8  5.6  67:48.36 postgres
>
> Macbook:
>  PostgreSQL 9.4.4 on x86_64-apple-darwin14.3.0, compiled by Apple LLVM
> version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn), 64-bit
>
> shared_buffers = 2048MB# min 128kB
> temp_buffers = 32MB# min 800kB
> work_mem = 8MB# min 64kB
> dynamic_shared_memory_type = posix# the default is the first option
> checkpoint_segments = 32# in logfile segments, min 1, 16MB each
>
> PIDCOMMAND  %CPU TIME #TH   #WQ  #PORTS MEMPURG CMPRS  PGRP
> 52883  postgres 91.0 02:16:14 1/1   07  1427M+ 0B 622M-  52883
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Scott Mead
On Fri, Apr 8, 2016 at 9:16 AM, Marllius  wrote:

> thank you, but i need a link in official postgresql documentation
>

I'm not sure that link exists; the general rule is: if it's POSIX, it'll
work.  You'll find that most PostgreSQL-ers have strong opinions and
preferences with regard to filesystems.  Personally, I know that XFS will
work; it's not *my* preference, but to each their own.


>
> OCFS2 = oracle cluster file system 2
>
>
2016-04-08 10:00 GMT-03:00 Bob Lunney :
>
>> XFS absolutely does.  Its well supported on Redhat and CentOS 6.x and
>> 7.x.  Highly recommended.
>>
>> Don’t know about OCFS2.
>>
>> Bob Lunney
>> Lead Data Architect
>> MeetMe, Inc.
>>
>> > On Apr 8, 2016, at 8:56 AM, Marllius  wrote:
>> >
>> > Hi guys!
>> >
>> > The OCFS2 and XFS have compatibility with postgresql 9.3.4?
>> >
>>
>
I did some experimentation with ocfs2 back about 7 or 8 years ago
(admittedly, a Big-Bang away, so keep that in mind when reading my
comments).  At the time, OCFS2 was *mostly* POSIX compatible and would
indeed work with Postgres.  What we found (again, at the time) is that
OCFS2 started to have performance problems and eventually a race condition
when using a large number of [relatively] small files.  I believe the DB I
was working on had 10's of databases, each with 1,000+ tables in it, so,
lots of files.  It was really designed for use with Oracle (small number of
large files) and was passed over in favor of ASM.

If it were me, I'd stay away from OCFS2 for anything except Oracle (and in
that case, I'd use ASM).


> > I was looking the documentation but i not found it.
>> >
>> >
>> >
>>
>>
>
>
> --
> Atenciosamente,
>
> Márllius de Carvalho Ribeiro
>


Re: [GENERAL] Schema Size

2016-03-01 Thread Scott Mead
On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You should read the definitions for the functions you are using to
> retrieve the sizes.
>
> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>
> +1, you've gotta be careful with each of these, they all tend to hide
different, yet critical components of size that you may be having trouble
resolving.

 The other thing to consider is that this isn't including any on-disk space
required for your change traffic in the WAL.  Your $PGDATA will always be
larger than the sum of all your database sizes...


> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi there
>>
>> Wanna see how size a schema is in my PostgreSQL 9.2
>>
>> Got two queries - they return different values... can u please check?
>>
>> cheers;
>>
>> Query 1:
>> SELECT schema_name,
>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>> (sum(table_size) / pg_database_size(current_database())) * 100
>> as "percent"
>> FROM (
>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>  FROM   pg_catalog.pg_class
>>  JOIN pg_catalog.pg_namespace
>>  ON relnamespace = pg_catalog.pg_namespace.oid
>> ) t
>> GROUP BY schema_name
>> ORDER BY schema_name
>>
>>
> ​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
> 'vm', or 'init') of the specified table or index"
>
> The 'init' fork is (I think) non-zero but extremely small.
> TOAST for a given relation is considered its own table
>
>
>> Query 2:
>> select schemaname,
>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>> from pg_stat_user_tables
>> group by schemaname
>>
>
> pg_table_size: "Disk space used by the specified table, excluding indexes
> (but including TOAST, free space map, and visibility map)"
>

Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
pg_table_size + indexes.  It really depends on specifically what you're
trying to count.  If you're looking for the total disk space required by
your tables in a schema, I always [personally] want to include indexes in
this count to make sure I understand the total impact on disk of accessing
my relations.
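
For example, a variant of your second query that counts indexes (and TOAST)
as well -- quote_ident() is only there to survive mixed-case or odd names:

    SELECT schemaname,
           pg_size_pretty(
               sum(pg_total_relation_size(
                   quote_ident(schemaname) || '.' || quote_ident(relname)
               ))::bigint) AS total
      FROM pg_stat_user_tables
     GROUP BY schemaname;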


>
> David J.​
>
>


Re: [GENERAL] Catalog bloat (again)

2016-01-27 Thread Scott Mead


--
Scott Mead via mobile
IPhone : +1-607-765-1395
Skype  : scottm.openscg
Gtalk: sco...@openscg.com

> On Jan 27, 2016, at 22:11, Joshua D. Drake <j...@commandprompt.com> wrote:
> 
>> On 01/27/2016 03:37 PM, Ivan Voras wrote:
>> 
>> 
>> On 28 January 2016 at 00:13, Bill Moran <wmo...@potentialtech.com
>> <mailto:wmo...@potentialtech.com>> wrote:
>> 
>>On Wed, 27 Jan 2016 23:54:37 +0100
>>Ivan Voras <ivo...@gmail.com <mailto:ivo...@gmail.com>> wrote:
>> 
>>> So, question #1: WTF? How could this happen, on a regularly vacuumed
>>> system? Shouldn't the space be reused, at least after a VACUUM? The 
>> issue
>>> here is not the absolute existence of the bloat space, it's that it's
>> > constantly growing for *system* tables.
>> 
>>With a lot of activity, once a day probably isn't regular enough.
>> 
>> 
>> I sort of see what you are saying. I'm curious, though, what goes wrong
>> with the following list of expectations:
>> 
>> 1. Day-to-day load is approximately the same
>> 2. So, at the end of the first day there will be some amount of bloat
>> 3. Vacuum will mark that space re-usable
>> 4. Within the next day, this space will actually be re-used
>> 5. ... so the bloat won't grow.
>> 
>> Basically, I'm wondering why is it growing after vacuums, not why it
>> exists in the first place?
> 
> If something is causing the autovacuum to be aborted you can have this 
> problem.
Also check for long-running transactions / idle-in-transaction sessions /
prepared xacts.

  Have you considered slowing down on temp tables?  Typically, when bleeding,
it's good to find the wound and stitch it up instead of just getting more
towels.
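
A couple of quick checks for the usual suspects (9.2+ column names):

    -- open transactions, oldest first; look for 'idle in transaction'
    SELECT pid, usename, state, now() - xact_start AS xact_age, query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_age DESC;

    -- forgotten two-phase transactions
    SELECT * FROM pg_prepared_xacts;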


> 
> JD
> 
> 
> -- 
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Scott Mead


> On Jan 20, 2016, at 19:54, AI Rumman  wrote:
> 
> But, will it not create transaction wraparound for those table?
> 
> Thanks.
> 
>> On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson  
>> wrote:
>> 
>> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false, 
>> toast.autovacuum_enabled = false);
>> 
>>> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>>> Hi,
>>> 
>>> I have a table with daily partition schema on Postgresql 9.1 where we are 
>>> keeping 2 years of data.
>>> Often I experience that autovacuum process is busy with old tables where 
>>> there is no change. How can I stop it?
>>> Please advice.
>>> 
I typically run a vacuum freeze on old partitions that don't get any changes
as part of a maintenance script.  If the tables actually get no changes,
autovac should ignore them unless wraparound becomes an issue at
autovacuum_freeze_max_age... which it shouldn't if you vacuum freeze and
there are no changes.
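
Something as simple as this per old partition (table name is made up) does
the trick in the maintenance script:

    VACUUM FREEZE my_schema.partition_2014_01_01;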


>>> Thanks.
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to share my fantasy is entirely up to you. 
> 


Re: [GENERAL] How can I use a dynamic column name after NEW. or OLD. in trigger procedure?

2016-01-19 Thread Scott Mead
On Tue, Jan 19, 2016 at 5:05 PM, Peter Devoy <pe...@3xe.co.uk> wrote:

> As part of the extension I am writing I am trying to create a trigger
> procedure in which the value of the primary key of the NEW or OLD row
> is used. The trigger will be fired by arbitrary tables so the column
> name must be dynamic.  Something like:
>
> pk_column := 'foo_id'; --example assignment only
>
> EXECUTE 'INSERT INTO  bar (baz) VALUES ($1)'
> USING NEW.quote_literal(pk_column);
>
> Out of desperation I have pretty much brute forced many weird
> combinations of quote_literal, quote_ident, ::regclass, || and USING.
> Unfortunately, I have not been able to get anything to work so any
> help would be very much appreciated.
>
> Thanks for reading
>
>
...
-- Dump into proper partition
sql := 'INSERT INTO ' || v_ets_destination || ' VALUES ( ($1).*)';

-- DEBUG
--RAISE NOTICE 'SQL: %',sql;

BEGIN
 EXECUTE sql USING NEW;
...
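
And if you really do need just that one column's value out of NEW, a sketch
of the same idea (pk_value is a variable you'd declare yourself; format()
needs 9.1+):

    EXECUTE format('SELECT ($1).%I', pk_column) INTO pk_value USING NEW;
    EXECUTE 'INSERT INTO bar (baz) VALUES ($1)' USING pk_value;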

--
Scott Mead
Sr. Architect
OpenSCG
PostgreSQL, Java & Linux Experts

Desk   : (732) 339 3419 ext 116
Bridge: (585) 484-8032

http://openscg.com




> Peter Devoy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Code of Conduct: Is it time?

2016-01-10 Thread Scott Mead
On Sun, Jan 10, 2016 at 5:09 PM, Adrian Klaver 
wrote:

> On 01/10/2016 02:05 PM, Regina Obe wrote:
>
>> Gavin,
>>
>>> I once went out of my way to help someone with Mac.  They were so Mac
>>>
>> centric they did not realize that they were not giving us the right
>> information to help them, but this was not obvious until later in the
>> thread.  I made some comment about Linux - next moment they were accusing
>>
>>> everyone of not helping them properly because they were using a Mac, as
>>>
>> though we had been deliberately discriminating against them!
>>
>> I hear ya. I do the same thing setting up a Linux VM to try to help
>> someone
>> on Ubuntu or CentOS.
>> My main point was if you don't have anything helpful to say, don't say
>> anything at all.
>>
>> I recall someone posting something earlier about on the lists we should
>> have
>> a section like:
>>
>> HELP US HELP YOU
>>
>> That details the information anyone having a problem should provide to
>> make
>> it easy for others to help them.
>> Can't find that item on mailing list.
>>
>
> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems



Maybe I'm out of sync with everyone else, but, I think of list- and IRC
guidelines as distinctly separate from a code of conduct.  I see a code of
conduct as a legal document that allows the community to protect itself
(and its individuals) from illegal and possibly predatory
behavior.  Guidelines for posting: "don't top post, don't paste 500 lines
in to IRC etc... " are things that could get the community to *ignore* you,
but not necessarily cause them to participate in a legal showdown directly
or as a 'third-party'.

   ISTM that if we develop a code of conduct, it would need to be designed
to insulate the community and individuals within it from becoming targets
of legal action.  "Mike said I was bad at postgres, it hurt my consulting
and I want to sue Joe for replying-all and upping the hit-count on
google... "

--Scott

>
>
>>
>> Thanks,
>> Regina
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Does PostgreSQL support to write glusterfs by Libgfapi

2015-12-16 Thread Scott Mead
On Wed, Dec 16, 2015 at 1:26 AM, zh1029 <zh1...@sina.com> wrote:

> Hi,
>   It seems low performance PostgreSQL(9.3.6) while writing data to
> glusterFS
> distributed file system. libgfapi is provide since GlusterFS version 3.4 to
> avoid kernel visits/data copy which can improve its performance. But I
> didn't find out any instruction from the PostgreSQL web page. Do you know
> if
> PostgreSQL later release supports using libgfapi to optimized write data to
> ClusterFS file system.
>
> The real question is, why are you using GLusterFS?  It's important to be
careful since PostgreSQL provides no mechanism to allow for shared-disk
clustering (active/active).  If you are planning on using active/passive,
you must plan carefully so as not to create a split-brain scenario.


--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts
http://openscg.com <http://openscg.com>



>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Does-PostgreSQL-support-to-write-glusterfs-by-Libgfapi-tp5877793.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Loggingt psql meta-commands

2015-12-10 Thread Scott Mead
On Thu, Dec 10, 2015 at 2:50 PM, oleg yusim  wrote:

> Thanks John, I realized that and confirmed in my logs. What I'm trying to
> determine now, can I only log some SELECT statements, or I should log all
> of them or none of them.
>

You can configure this to multiple levels:

 Global, per-user, per-database

ALTER USER postgres SET log_min_duration_statement=0;
ALTER DATABASE xyz SET log_min_duration_statement=0;

  That being said, you would want to make sure that the user issuing the
largest volume of queries is not set with this, otherwise, you could
potentially flood your logs with every single query issued.  This has a
tendency to cause performance problems.  The other item of note is that,
once logged in, the user could change that value with a similar ALTER
statement.


--Scott
PostgreSQL database experts
http://www.openscg.com

>
> Oleg
>
> On Thu, Dec 10, 2015 at 1:40 PM, John R Pierce 
> wrote:
>
>> On 12/10/2015 9:58 AM, oleg yusim wrote:
>>
>>> I'm new to PostgreSQL, working on it from the point of view of Cyber
>>> Security assessment. In regards to the here is my question:
>>>
>>> Is it a way to enable logging for psql prompt meta-commands, such as
>>> \du, \dp, \z, etc?
>>>
>>
>> what the other two gentlemen are trying to say is the metacommands are
>> shortcuts for more complex SQL queries of the pg_catalog schema, so to log
>> them, you would need to log all queries and filter for accesses to the
>> pg_catalog
>>
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>>
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


Re: [GENERAL] 2 questions

2015-12-01 Thread Scott Mead


> On Nov 30, 2015, at 12:54, anj patnaik  wrote:
> 
> 1) directory listing:
> 
> /opt/nfsDir/postgres/9.4/
> /bin
> /data
> /etc
>/pgAdmin3
>   
> 

The data directory will cause you many problems.  Each data directory must be
accessed by one AND ONLY one host: you can't run instances on multiple
machines pointing at the same 'data' directory simultaneously.  That would be
an active/active shared-disk cluster, and most databases either don't support
it or require massive overhead (network/licensing, e.g. Oracle RAC) to do it.

  You *can* re-use the other directories, but it can be fraught with issues,
and you need to think carefully through upgrades, etc.

> 2) The way I am using PG now is that I have specified the directory above as 
> the location to install it (from graphical installer). 
> 
> Now, it appears that postgres places files in other directories besides the 
> one specified in the installer. For instance, there are scripts in 
> /etc/init.d to start the service. 
> 
> So in answering my own question: it appears PG places files in other dirs so 
> when I am given a new VM/different physical server with the same NFS mount I 
> would either need to copy these files over or better yet un-install the 
> current PG and re-install from scratch.
> 
> Thanks,
> ap
> 
>> On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver  
>> wrote:
>>> On 11/27/2015 01:17 PM, anj patnaik wrote:
>>> Hello,
>>> Yes, postgres is currently installed on a nfs mounted file system. So
>>> when graphical installer runs, there is a form which asks location for
>>> installation. I specified this path /opt/nfsDir/Postgres where nfsDir is
>>> a nfs mount. So currently this is where PG 9.4 lives.
>> 
>> What is under /opt/nfsDir/Postgres?
>> 
>>> 
>>> My question is when I am given a brand new VM on a different physical
>>> server, can I mount that same NFS FS and use the Postgres or do I need
>>> to re-install PG on new VM?
>> 
>> How are you using Postgres now? Please be specific, more detail is better 
>> then less at this point.
>> 
>>> 
>>> I am not sure if PG writes to any other directories besides the one
>>> where it is installed.
>> 
>> 
>>> 
>>> On the issue of logging, I see a lot of log statements because client
>>> apps do upserts and since I use Tcl I don't have a SQL proc, but rather
>>> I let it exception and then do an update on the row.
>> 
>> You can Tcl in the database:
>> 
>> http://www.postgresql.org/docs/9.4/interactive/pltcl.html
>> 
>> That will not change things if you let the database throw an exception there 
>> also.
>> 
>> 
>>> 
>>> So, you can limit the size of an individual log, but there is no way to
>>> tell PG to keep the log file short?
>>> 
>>> if i choose FATAL, I'd lose some log, right?
>>> 
>>> Thank you!
>>> ap
>> 
>> -- 
>> Adrian Klaver
>> adrian.kla...@aklaver.com
> 


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Scott Mead


> On Nov 26, 2015, at 21:29, mrtruji  wrote:
> 
> Sure thing. Below are the results from your query along with the version and 
> table info. Not sure about the index. I queried the table quite a bit before 
> adding the new column and didn't have any issues. 
> 
> Here is the result from your query: 
> 
>  nspname | relname | indexrelname | type | ?column?
> -+-+--+--+--
> (0 rows)
> 
> Version: 
> PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 
> 4.8.2-19ubuntu1) 4.8.2, 64-bit
> 
>   
> Table "public.data"
> Column| Type | Modifiers
> --+--+---
>  id   | text |
>  name | text |
>  gender   | text |
>  age  | text |
>  street   | text |
>  city | text |
>  state| text |
>  zip  | text |
>  longitude| double precision |
>  latitude | double precision |
>  geom | geometry(Point,4326) |
>  features_bin | bytea|
> Indexes:
> "ix_data_id" btree (id)
> 
> 
> 
> 
>> On Thu, Nov 26, 2015 at 6:19 PM, Melvin Davidson  
>> wrote:
>> OK, thanks for clarifying, but just for sanity sake, it would REALLY be nice 
>> if you would advise us of the exact version of PostgreSQL and the O/S you 
>> are working with.
>> 
>> A copy of the table structure would also be helpful.
>> 
>> Just one more thing, is it possible you have an index on that table that 
>> might be corrupted?
>> 
>> What does the following query return?
>> 
>> SELECT n.nspname, 
>>i.relname, 
>>i.indexrelname, 
>>CASE WHEN idx.indisprimary 
>> THEN 'pkey' 
>> WHEN idx.indisunique 
>> THEN 'uidx' 
>> ELSE 'idx' 
>> END AS type, 
>> 'INVALID' 
>>   FROM pg_stat_all_indexes i 
>>   JOIN pg_class c ON (c.oid = i.relid) 
>>   JOIN pg_namespace n ON (n.oid = c.relnamespace) 
>>   JOIN pg_index idx   ON (idx.indexrelid =  i.indexrelid ) 
>>  WHERE idx.indisvalid = FALSE  
>>AND i.relname = 'data'
>>  ORDER BY 1, 2,3;
>> 
>>> On Thu, Nov 26, 2015 at 9:10 PM, mrtruji  wrote:
>>> Hi, 
>>> 
>>> Thanks for the reply. The limit is just to simplify results for the 
>>> examples. The same behavior occurs when each of the three queries are not 
>>> limited. Whenever I try to filter by the original columns and select the 
>>> new column the resultant values for the new column are empty. Conversely, 
>>> whenever I select the new column along with original columns without any 
>>> filtering the resultant values for the original columns return empty. It's 
>>> as if the added column is disconnected to the table in some way causing 
>>> problems with queries that combine original columns and the new one.
>>> 
>>> I created and filled in the new column externally using psycopg2 in Python 
>>> so I'm not sure if that could be the source of the problem...

Did you just fire SQL ALTER statements?


Have you tried creating a new table with the bytea column and loading it with 
the same data?  Just to help narrow things down?

  Also, can you reproduce it?


>>> 
 On Thu, Nov 26, 2015 at 5:39 PM, Melvin Davidson  
 wrote:
 Is it possible you have more than one row where state = 'CA'? Putting a 
 LIMIT 1 would then restrict to only 1 row.
 Have you tried with no limit? IE: SELECT new_col FROM data;
 
 
> On Thu, Nov 26, 2015 at 7:13 PM, mrtruji  wrote:
> Just added a new bytea type column to an existing psql table and 
> populated the column entirely with row data. Running into some strange 
> query results:
> 
> When I select the newly added column by itself I get all the data as 
> expected:
> 
> SELECT new_col FROM data LIMIT 1;
> Result: \x8481e7dec3650040b
> When I try to filter with 'where' on another column in the table, I get 
> the values from the other columns as expected but empty from my 
> new_column:
> 
> SELECT id, state, new_col FROM data WHERE state='CA' limit 1;
> Result: 123456_1; CA; EMPTY ROW
> The reverse is also true. If I select my new column in combination with 
> other columns with no 'where' I get the correct value from my new column 
> but empty for the other columns:
> 
> SELECT id, state, new_col FROM data limit 1;
> Result: EMPTY ROW; EMPTY ROW; \x8481e7dec3650040b
> Thanks to anyone with advice!
> 
 
 
 
 -- 
 Melvin Davidson
 I reserve the right to fantasize.  Whether or not you 
 wish to share my fantasy is entirely up to you. 
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to 

Re: [GENERAL] Best tool to pull from mssql

2015-11-11 Thread Scott Mead
On Wed, Nov 11, 2015 at 9:37 AM, taspotts  wrote:

> Take a look at  Pentaho Kettle
>   .  They have a
> free community edition.
>
>
+10


> I use it frequently to migrate data from MSSQL to Postgres.  It has a nice
> GUI for setting everything up and the transforms/jobs can be scheduled.
>
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Best-tool-to-pull-from-mssql-tp5873415p5873509.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-10 Thread Scott Mead
On Tue, Nov 10, 2015 at 6:26 PM, anj patnaik  wrote:

> Thanks for the feedback. I have setup a second Linux VM (running RHEL
> 5.11) and Postgres 9.4. I ran some insertions today from a client running
> on Windows. The client does a loop of 30 updates.
>
> I am seeing about 10-20% increase in latency in the case where DB is on
> NFS (over TCP) compared to directly on disk.
>
> The other machine I am using to compare is running RHEL 6.5 and Postgres
> 9.4.
>
> Are there any specific tests that are recommended to test that postgres
> over NFS works well?
>
> I am planning on doing a few large data inserts and fetches.
>
> With the little testing, the DB over NFS appears fine.
>
>
Don't do it.  Period.   I've used 4 big-vendor appliances with NFS as well
as my own server.  With maybe 3 exceptions, most of the 'total-data-loss'
scenarios I've dealt with regarding transactional data was due to NFS.

--Scott


>
> Thanks for any feedback.
>
> On Tue, Nov 10, 2015 at 4:29 AM, Albe Laurenz 
> wrote:
>
>> anj patnaik wrote:
>> > Can anyone advise if there are problems running postgres over NFS
>> mounted partitions?
>> >
>> > I do need reliability and high speed.
>>
>> I have got the advice not to use NFS from a number of people who should
>> know,
>> but there are also knowledgable people who use PostgreSQL with NFS.
>>
>> You need hard foreground mounts, and you need an NFS server that is
>> guaranteed not to lose data that the client has synced to disk.
>>
>> You should probably only consider storage systems that directly
>> support NFS, and you should run performance and reliability tests.
>>
>> Yours,
>> Laurenz Albe
>>
>
>


Re: [GENERAL] Red Hat Policies Regarding PostgreSQL

2015-10-28 Thread Scott Mead
On Wed, Oct 28, 2015 at 7:10 AM, Mark Morgan Lloyd <
markmll.pgsql-gene...@telemetry.co.uk> wrote:

> Tom Lane wrote:
>
>> Michael Convey  writes:
>>
>>> Due to policies for Red Hat family distributions, the PostgreSQL
>>> installation will not be enabled for automatic start or have the database
>>> initialized automatically.
>>>
>>
>> To which policies are they referring? Licensing, security, or other?​
>>>
>>
>> Packaging policy: daemons shall not run merely by virtue of having been
>> installed.  Otherwise, if you install a boatload of software without
>> checking each package, you'd have a boatload of probably-unwanted and
>> possibly-incorrectly-configured daemons running.  Which is a performance
>> problem and likely a security hazard too.
>>
>> It's a good policy IMO (though I used to work there so no doubt I've just
>> drunk too much Red Hat koolaid).
>>
>
> Seems reasonable. In fact somewhat better than current KDE as in e.g.
> Debian "Jessie", which embeds a copy of MySQL whether the the user wants to
> use it or not.
>
>
Ubuntu 15 Desktop enables mysqld by default.  In fact, it doesn't seem to
hook up with systemd in any obvious way to disable autostart.

--Scott



> --
> Mark Morgan Lloyd
> markMLl .AT. telemetry.co .DOT. uk
>
> [Opinions above are the author's, not those of his employers or colleagues]
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] trouble downloading postgres 9.4 for RHEL 6.x

2015-10-21 Thread Scott Mead

> On Oct 21, 2015, at 12:28, Adrian Klaver  wrote:
> 
>> On 10/21/2015 09:14 AM, anj patnaik wrote:
>> Ok, i am trying to determine why I am getting errors. Is it possible
>> that my browser is corrupting the transfer?
> 
> Maybe, though I used FireFox to download also. I would go to wherever the 
> file has been downloaded on your computer and delete it and try the download 
> again. The file I got was:
> 
> 37548416 Oct 21 09:20 postgresql-9.5.0-beta1-linux-x64.run
> 
> CCing list
>> 
>> I am seeing: Installer payload initialization failed. This is likely due
>> to an incomplete or corrupt downloaded file.
>> 
>> I am a Linux newbie. Is there a way to test if the transfers are getting
>> corrupted?
>> 
>> after download, did you just change permissions and ran sudo
>> ./postgres.run file?
> 
> Yes, so:
> 
> aklaver@killi:~/Downloads> chmod 755 postgresql-9.5.0-beta1-linux-x64.run
> 
> aklaver@killi:~/Downloads> sudo ./postgresql-9.5.0-beta1-linux-x64.run
> 
>> 
>> The reason for using installer is that it goes through all the steps and
>> instantiates a sid.
> 
> What is a sid?
> 

Postgres isn't like Oracle. Use yum to install, then run:

/etc/init.d/postgresql-9.5 initdb

Then, run:
/etc/init.d/postgresql-9.5 start


  That's it.  initdb creates the instance, start starts it.  No dbca or other
such garbage is needed. Welcome to the easy life :)
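
E.g., assuming you've added the PGDG repo rpm from the download page Adrian
linked below:

sudo yum install postgresql95-server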



>> 
>> The purpose of this installation is for me to have a database to rest
>> the restoration of pg_dump.
> 
> You can also get that with a Yum install, see below for more:
> 
> http://www.postgresql.org/download/linux/redhat/
>> 
>> Thank you.
>> 
>> On Wed, Oct 21, 2015 at 12:07 PM, Adrian Klaver
>> > wrote:
>> 
>>On 10/21/2015 08:57 AM, anj patnaik wrote:
>> 
>>I used the same link:
>>http://www.enterprisedb.com/products-services-training/pgdownload
>> 
>>I chose /*Version 9.5.0 Beta 1*/ Linux x86-64
>> 
>> 
>>I downloaded(postgresql-9.5.0-beta1-linux-x64.run) and ran it. I did
>>not actually complete the install as I already have Postgres
>>installed  on this machine. This is on openSUSE 13.2, not that it
>>should matter.
>> 
>> 
>>Then, I tried 9.4 for Linux x86-64
>> 
>>Has anyone downloaded/installed within last 2 days? my OS is
>>RHEL 6.5
>> 
>>I am using the Linux machine's firefox browser to download.
>> 
>>is there a way to use yum to get the same installer program that
>>goes
>>through all the steps?
>> 
>> 
>>No, Yum will not use the installer program, it will use the RH
>>native packaging.
>> 
>>Is there a particular reason you want the installer?
>> 
>>Something specific you want to install?
>> 
>> 
>>Please advise. thanks
>> 
>>On Wed, Oct 21, 2015 at 9:08 AM, Adrian Klaver
>>
>>>>> wrote:
>> 
>> On 10/20/2015 05:48 PM, anj patnaik wrote:
>> 
>> Several weeks ago, I successfully
>> downloaded postgresql-9.4.4-3-linux-x64.run on a Linux
>>server.
>> 
>> 
>> Where did you download from?
>> 
>> 
>> Today, I attempted to download from the site. I notice
>>9.5 is
>> there, but
>> getting lots of errors:
>> 
>> 
>> Looks like you are trying to run a *.exe(Windows) file on a
>>Linux
>> machine, that is the source of the errors.
>> 
>> 
>> 1) when downloading and running latest
>> 
>> rchive:
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe
>> 
>>[/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe]
>> End-of-central-directory signature not found.
>>Either this
>> file is not
>> a zipfile, or it constitutes one disk of a multi-part
>> archive.  In the
>> latter case the central directory and zipfile
>>comment will
>> be found on
>> the last disk(s) of this archive.
>> zipinfo:  cannot find zipfile directory in one of
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe or
>> 
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe.zip, and
>> cannot find
>> 
>>/home/apatnaik/Downloads/postgresql-9.4.5-1-windows-x64.exe.ZIP,
>> period.
>> 
>> 2) Then I tried to scp from the other machine and changed
>> permissions to
>> 777 and used sudo to execute the file but get this:
>> 
>> bash-4.1$ chmod 777 postgresql-9.4.4-3-linux-x64.run
>> bash-4.1$ sudo 

Re: [GENERAL] trouble downloading postgres 9.4 for RHEL 6.x

2015-10-21 Thread Scott Mead


> On Oct 21, 2015, at 14:58, anj patnaik <patn...@gmail.com> wrote:
> 
> With the graphical installer, I had a way to create a user. Does it create 
> postgres user by default?

Yeah, it creates both the OS user and the database super-user. 


> 
> Let me know. Thx
> 
>> On Wed, Oct 21, 2015 at 1:43 PM, Scott Mead <sco...@openscg.com> wrote:
>> 
>> > On Oct 21, 2015, at 12:28, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>> >
>> >> On 10/21/2015 09:14 AM, anj patnaik wrote:
>> >> Ok, i am trying to determine why I am getting errors. Is it possible
>> >> that my browser is corrupting the transfer?
>> >
>> > Maybe, though I used FireFox to download also. I would go to wherever the 
>> > file has been downloaded on your computer and delete it and try the 
>> > download again. The file I got was:
>> >
>> > 37548416 Oct 21 09:20 postgresql-9.5.0-beta1-linux-x64.run
>> >
>> > CCing list
>> >>
>> >> I am seeing: Installer payload initialization failed. This is likely due
>> >> to an incomplete or corrupt downloaded file.
>> >>
>> >> I am a Linux newbie. Is there a way to test if the transfers are getting
>> >> corrupted?
>> >>
>> >> after download, did you just change permissions and ran sudo
>> >> ./postgres.run file?
>> >
>> > Yes, so:
>> >
>> > aklaver@killi:~/Downloads> chmod 755 postgresql-9.5.0-beta1-linux-x64.run
>> >
>> > aklaver@killi:~/Downloads> sudo ./postgresql-9.5.0-beta1-linux-x64.run
>> >
>> >>
>> >> The reason for using installer is that it goes through all the steps and
>> >> instantiates a sid.
>> >
>> > What is a sid?
>> >
>> 
>> Postgres isn't like Oracle. Use yum to install, then run
>> 
>> /etc/init.d/postgres9.5 initdb
>> 
>> Then, run:
>> /etc/init.d/postgres9.5 start
>> 
>> 
>>   That's it.  Initdb creates the instance, start - starts it.  No dbca or 
>> other such garbage is needed. Welcome to the easy life :)
>> 
>> 
>> 
>> >>
>> >> The purpose of this installation is for me to have a database to rest
>> >> the restoration of pg_dump.
>> >
>> > You can also get that with a Yum install, see below for more:
>> >
>> > http://www.postgresql.org/download/linux/redhat/
>> >>
>> >> Thank you.
>> >>
>> >> On Wed, Oct 21, 2015 at 12:07 PM, Adrian Klaver
>> >> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>> >>
>> >>On 10/21/2015 08:57 AM, anj patnaik wrote:
>> >>
>> >>I used the same link:
>> >>http://www.enterprisedb.com/products-services-training/pgdownload
>> >>
>> >>I chose /*Version 9.5.0 Beta 1*/ Linux x86-64
>> >>
>> >>
>> >>I downloaded(postgresql-9.5.0-beta1-linux-x64.run) and ran it. I did
>> >>not actually complete the install as I already have Postgres
>> >>installed  on this machine. This is on openSUSE 13.2, not that it
>> >>should matter.
>> >>
>> >>
>> >>Then, I tried 9.4 for Linux x86-64
>> >>
>> >>Has anyone downloaded/installed within last 2 days? my OS is
>> >>RHEL 6.5
>> >>
>> >>I am using the Linux machine's firefox browser to download.
>> >>
>> >>is there a way to use yum to get the same installer program that
>> >>goes
>> >>through all the steps?
>> >>
>> >>
>> >>No, Yum will not use the installer program, it will use the RH
>> >>native packaging.
>> >>
>> >>Is there a particular reason you want the installer?
>> >>
>> >>Something specific you want to install?
>> >>
>> >>
>> >>Please advise. thanks
>> >>
>> >>On Wed, Oct 21, 2015 at 9:08 AM, Adrian Klaver
>> >><adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>> >><mailto:adrian.kla...@aklaver.com
>> >><mailto:adrian.kla...@aklaver.com>>> wrote:
>> >>
>> >> On 10/20/2015 05:48 PM, anj patnaik wrote:
>> >>
>> >&g

Re: [GENERAL] [BUGS] postgresql table data control

2015-10-19 Thread Scott Mead

> On Oct 19, 2015, at 04:29, Shulgin, Oleksandr  
> wrote:
> 
>> On Sat, Oct 17, 2015 at 1:26 AM, 許耀彰  wrote:
>> Dear Support Team,
>> How can we know each table data increase day by day? It mean how do we get 
>> how many data produce today,included which data? Thank you. 
> 
> 
> [moving from bugs@]
> 
> Please refer to these SQL-level functions:
> 
> pg_relation_size()
> pg_total_relation_size()
> pg_size_pretty()

If you are looking to track this over time, you'll need to use a tool that 
captures this in snapshots and does some reporting over them. 

  I've written a script that I use frequently as part of my OpenWatch tool 
(https://bitbucket.org/scott_mead/openwatch/src/d024185b1b5585f7b4c8a5ad6b926eafed2e249e/bindings/java/sql/OpenWatch-Snapshots.sql?at=master=file-view-default).
  Run this script in the database, then run snapshots periodically:

  Select snapshots.save_snap();
 

  To see a size report between snapshots, list the available snapshots:
  
select * from list_snaps();

  To get the size report, get the snap_id numbers you want to report between 
and run:

  select * from snapshots.report_tables(<first_snap_id>, <second_snap_id>)

  If you download the full OpenWatch tool 
(https://bitbucket.org/scott_mead/openwatch/downloads) and run it from the 
shell script, you can add cpu, disk/io and memory to the report as well. 
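
  If you only need the core idea and not the tool, a minimal hand-rolled
sketch looks like this (the table and column names below are illustrative,
not OpenWatch's):

  CREATE TABLE size_snap (
      taken_at   timestamptz NOT NULL DEFAULT now(),
      relname    text        NOT NULL,
      total_size bigint      NOT NULL
  );

  -- run this periodically (cron, pgAgent, ...)
  INSERT INTO size_snap (relname, total_size)
  SELECT c.oid::regclass::text, pg_total_relation_size(c.oid)
    FROM pg_class c
   WHERE c.relkind = 'r';

  Diffing total_size between two taken_at values for the same relname then
gives you the growth over that interval.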



> 
> http://www.postgresql.org/docs/current/static/functions-admin.html
> 


Re: [GENERAL] ID column naming convention

2015-10-17 Thread Scott Mead


> On Oct 13, 2015, at 18:27, droberts  wrote:
> 
> Gavin Flower-2 wrote
>>> On 14/10/15 06:36, droberts wrote:
>>> Hi, is there a problem calling ID's different when used as a FK vs table
>>> ID?
>>> For example
>>> 
>>> 
>>> mydimtable ()
>>>  ID
>>>  name
>>>  description
>>> 
>>> 
>>> myfacttable ()
>>>   my_dim_id   # FK to ID above
>>>   total_sales
>>> 
>>> 
>>> I 'think' if I don't enforce foreign key constraints, then this practice
>>> prevents tools from being able to generate ERD diagrams right?
>>> 
>>> 
>>> 
>>> --
>>> View this message in context:
>>> http://postgresql.nabble.com/ID-column-naming-convention-tp5869844.html
>>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>> My practice is to name the PRIMARY KEY as id, and foreign keys with the 
>> original table name plus the sufiix_id.
>> 
>> By leaving the table name off the primary key name, and just using id, 
>> makes it more obvious that it is a primary key (plus it seems redundant 
>> to prefix the primary key name with its own table name!).
>> 
>> CREATE TABLE house
>> (
>> id  int PRIMARY KEY,
>> address text
>> );
>> 
>> CREATE TABLE room
>> (
>> id   int PRIMARY KEY,
>> house_id int REFERENCES house(id),
>> name text
>> );
>> 
>> 
>> There are exceptions like:
>> 
>> CREATE TABLE human
>> (
>> idint PRIMARY KEY,
>> mother_id int REFERENCES human (id),
>> father_id int REFERENCES human (id),
>> name  text
>> );
>> 
>> Cheers,
>> Gavin
>> 
>> 
>> -- 
>> Sent via pgsql-general mailing list (
> 
>> pgsql-general@
> 
>> )
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
> 
> Thanks.   My only question is how do you create a schema diagram (ERD) then? 
> The tool won't know what the relationships are unless maybe you put foreign
> key constraints on.  
That's how most tools work, usually by calling the driver API (JDBC 
DatabaseMetaData, etc.), which in turn looks at the information_schema. If you 
don't set up real referential integrity, any tool that goes by column names is 
just guessing.   

   I think dbvisualizer will 'infer' based on column names.  I KNOW that 
schemaspy has this option, but they explicitly note it is a GUESS. 

   Use foreign keys. 
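
   Once they're declared, the relationships are right there for any tool (or
you) to read out of information_schema; a rough sketch:

SELECT tc.table_name   AS child_table,
       kcu.column_name AS child_column,
       ccu.table_name  AS parent_table,
       ccu.column_name AS parent_column
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu
    ON kcu.constraint_name   = tc.constraint_name
   AND kcu.constraint_schema = tc.constraint_schema
  JOIN information_schema.constraint_column_usage ccu
    ON ccu.constraint_name   = tc.constraint_name
   AND ccu.constraint_schema = tc.constraint_schema
 WHERE tc.constraint_type = 'FOREIGN KEY';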



> BTW does anyone recommend a tool to to that?  I've been
> playing with DbVisualizer.
> 
> 
> 
> --
> View this message in context: 
> http://postgresql.nabble.com/ID-column-naming-convention-tp5869844p5869881.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question

2015-10-15 Thread Scott Mead
On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guilla...@lelarge.info>
wrote:

> 2015-10-15 20:40 GMT+02:00 anj patnaik <patn...@gmail.com>:
>
>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>
>> I get archive member too large for tar format.
>>
>> Is there a recommended command/options when dealing with very large
>> tables, aka 150K rows and half of the rows have data being inserted with
>> 22MB?
>>
>>
> Don't use tar format? I never understood the interest on this one. You
> should better use the custom method.
>

+ 1

 Use -F c


--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts


http://openscg.com


>
>
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>> fe_sendauth: no password supplied
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
>> Password:
>> pg_dump: [tar archiver] archive member too large for tar format
>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>> -bash: pg_dumpall: command not found
>> -bash: tmpdb.out-2015101510.gz: Permission denied
>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>
>>
>> Thank you so much for replying and accepting my post to this NG.
>>
>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com>
>> wrote:
>>
>>> In addition to exactly what you mean by "a long time" to pg_dump 77k of
>>> your table,
>>>
>>> What is your O/S and how much memory is on your system?
>>> How many CPU's are in your system?
>>> Also, what is your hard disk configuration?
>>> What other applications are running simultaneously with pg_dump?
>>> What is the value of shared_memory & maintenance_work_mem in
>>> postgresql.conf?
>>>
>>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
>>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I recently downloaded postgres 9.4 and I have a client application that
>>>>> runs in Tcl that inserts to the db and fetches records.
>>>>>
>>>>> For the majority of the time, the app will connect to the server to do
>>>>> insert/fetch.
>>>>>
>>>>> For occasional use, we want to remove the requirement to have a server
>>>>> db and just have the application retrieve data from a local file.
>>>>>
>>>>> I know I can use pg_dump to export the tables. The questions are:
>>>>>
>>>>> 1) is there an in-memory db instance or file based I can create that is
>>>>> loaded with the dump file? This way the app code doesn't have to
>>>>> change.
>>>>>
>>>>
>>>> No.
>>>>
>>>>
>>>>> 2) does pg support embedded db?
>>>>>
>>>>
>>>> No.
>>>>
>>>> 3) Or is my best option to convert the dump to sqlite and the import the
>>>>> sqlite and have the app read that embedded db.
>>>>>
>>>>
>>>> Sqlite tends to follow Postgres conventions, so you might be able to
>>>> use the pg_dump output directly if you use --inserts or --column-inserts:
>>>>
>>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>>
>>>>
>>>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>>>> my table. right now, the table  has 77K rows. Are there any ways to
>>>>> create automated batch files to create dumps overnight and do so
>>>>> quickly?
>>>>>
>>>>
>>>> Define long time.
>>>>
>>>> What is the pg_dump command you are using?
>>>>
>>>> Sure use a cron job.
>>>>
>>>>
>>>>> Thanks for your inputs!
>>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>>
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>>>
>>>
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize.  Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>>
>
>
> --
> Guillaume.
>   http://blog.guillaume.lelarge.info
>   http://www.dalibo.com
>


Re: [GENERAL] Creating Report for PieChart

2015-10-13 Thread Scott Mead


> On Oct 13, 2015, at 19:56, Alex Magnum  wrote:
> 
> Hello,
> I need to process some statistics for a pie chart (json) where I only want to 
> show a max of 8 slices. If I have more data points like in below table I need 
> to combine all to a slice called others. If there are less or equal 8 i use 
> them as is.
> 
> I am currently doing this with a plperl function which works well but was 
> just wondering out of curiosity if that could be done withing an sql query.
> 
> Anyone having done something similar who could point me in the right 
> direction? 
> 
> 
> SELECT count(*),country_name FROM stats_archive WHERE id=400 GROUP BY 
> country_name ORDER BY COUNT DESC;
>  count |   country_name
> ---+---
>302 | Malaysia
> 65 | Singapore
> 57 | Thailand
> 26 | Indonesia
> 15 | France
> 14 | United States
> 14 | India
> 13 | Philippines
> 12 | Vietnam
> 10 | Republic of Korea
> 10 | Canada
>  7 | Australia
>  6 | Brazil
>  6 | Czech Republic
>  5 | Switzerland
>  4 | Saudi Arabia
>  3 | Ireland
>  3 | Japan
>  3 | Sweden
>  3 | South Africa
>  3 | Belarus
>  3 | Colombia
>  3 | United Kingdom
>  1 | Peru
> 
> 
>country_name  | count | perc  
> -+---+---
>  Malaysia|   302 |  51.4 
>  Singapore   |65 |  11.0 
>  Thailand|57 |   9.7 
>  Indonesia   |26 |   4.4 
>  France  |15 |   2.6 
>  United States   |14 |   2.4 
>  India   |14 |   2.4 
>  Others  |95 |  16.1 
>  Total   |   588 |   100 
> 
> Thanks a lot for any suggestions
I would use rank to get a rank number for each record. 


> Alex


Re: [GENERAL] Creating Report for PieChart

2015-10-13 Thread Scott Mead
On Tue, Oct 13, 2015 at 9:14 PM, Scott Mead <sco...@openscg.com> wrote:

>
>
> On Oct 13, 2015, at 19:56, Alex Magnum <magnum11...@gmail.com> wrote:
>
> Hello,
> I need to process some statistics for a pie chart (json) where I only want
> to show a max of 8 slices. If I have more data points like in below table I
> need to combine all to a slice called others. If there are less or equal 8
> i use them as is.
>
> I am currently doing this with a plperl function which works well but was
> just wondering out of curiosity if that could be done withing an sql query.
>
> Anyone having done something similar who could point me in the right
> direction?
>
>
> SELECT count(*),country_name FROM stats_archive WHERE id=400 GROUP BY
> country_name ORDER BY COUNT DESC;
>  count |   country_name
> ---+---
>302 | Malaysia
> 65 | Singapore
> 57 | Thailand
> 26 | Indonesia
> 15 | France
> 14 | United States
> 14 | India
> 13 | Philippines
> 12 | Vietnam
> 10 | Republic of Korea
> 10 | Canada
>  7 | Australia
>  6 | Brazil
>  6 | Czech Republic
>  5 | Switzerland
>  4 | Saudi Arabia
>  3 | Ireland
>  3 | Japan
>  3 | Sweden
>  3 | South Africa
>  3 | Belarus
>  3 | Colombia
>  3 | United Kingdom
>  1 | Peru
>
>
>country_name  | count | perc
> -+---+---
>  Malaysia|   302 |  51.4
>  Singapore   |65 |  11.0
>  Thailand|57 |   9.7
>  Indonesia   |26 |   4.4
>  France  |15 |   2.6
>  United States   |14 |   2.4
>  India   |14 |   2.4
>  Others  |95 |  16.1
>  Total   |   588 |   100
>
> Thanks a lot for any suggestions
>
> I would use rank to get a rank number for each record.
>

  Sorry, sent the last one from my phone; here's an example:



Use 'rank' to generate the rank order of the entry.

postgres=# select country, count(1) num_entries,
rank() over (order by count(1) DESC)
from test GROUP by country ORDER BY num_entries DESC;
 country | num_entries | rank
-+-+--
 US  |  20 |1
 CA  |  15 |2
 SP  |   8 |3
 IT  |   7 |4
(4 rows)

There's probably an easier way to do this without a sub-select, but, it
works.

postgres=# SELECT country, num_entries, rank
   FROM (select country, count(1) num_entries,
   rank() over (order by count(1) DESC)
   FROM test GROUP by country
) foo WHERE rank < 4;

 country | num_entries | rank
-+-+--
 US  |  20 |1
 CA  |  15 |2
 SP  |   8 |3
(3 rows)

postgres=#
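
And to fold everything below the top slices into an 'Others' row (which was
the original ask), a rough sketch along the same lines (3 here to match the
sample data; use 7 for the original question):

SELECT CASE WHEN rank <= 3 THEN country ELSE 'Others' END AS country,
       sum(num_entries) AS num_entries,
       round(100.0 * sum(num_entries) / sum(sum(num_entries)) OVER (), 1) AS perc
  FROM (SELECT country, count(1) AS num_entries,
               rank() OVER (ORDER BY count(1) DESC)
          FROM test GROUP BY country) ranked
 GROUP BY 1
 ORDER BY min(rank);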

--
Scott Mead
OpenSCG
www.openscg.com

>
>
> Alex
>
>


Re: [GENERAL] backup.old

2015-10-07 Thread Scott Mead
On Wed, Oct 7, 2015 at 15:38, David G. Johnston wrote:

On Wed, Oct 7, 2015 at 3:29 PM, Steve Pribyl <steve.pri...@akunacapital.com> wrote:

Thank you very much. I read someplace if you run pg_start_backup twice the
backup.old will be created, but there was not much beyond that and now I can't
seem to find the reference.


backup_label gets deleted on pg_stop_backup() on the *master*. Backup_label
will still be in the *backup* itself however (or, more succinctly, a slave
server). When you start the backup / slave, it will process backup_label so
that it can start recovery. Once we don't need it anymore, the file is renamed
to backup_label.old. Typically, when you see a backup_label.old on a writable
master, it was either:
  * a backup that was restored and put into service
  * a slave server that was promoted
A pg_controldata will probably show a timeline != 1.


Scanning the docs and logic tells me that attempting to do pg_start_backup
twice in a row should result in the second attempt giving an error... but I
could be misinformed.

The file pg_start_backup creates is named "backup_label" and so I'd also
expect any attempt to add an old suffix would keep the same base name...

David J.

Re: [GENERAL] Recording exceptions within function (autonomous transactions?)

2015-10-06 Thread Scott Mead

> On Oct 6, 2015, at 05:38, Steve Pritchard <steve.pritch...@bto.org> wrote:
> 
> I am porting several stored procedures from Oracle to Postgres. In the Oracle 
> code, if an exception is thrown within a stored procedure, the exception is 
> caught and details are written to a database table using an autonomous 
> transaction (as the main transaction is rolled back).
> 
> As far as I can see from the documentation, Postgres doesn't support 
> autonomous transaction (although there is talk about it at 
> https://wiki.postgresql.org/wiki/Autonomous_subtransactions - is this 
> something that is being discussed for a future release?).
> 
> The Postgres functions that I'm writing are batch processes that will be 
> invoked via a scheduler (either cron or pgAgent).
> 
> Ideally I'd like to record the exceptions in a database table. If this isn't 
> possible then recording in a log fie would be acceptable, but I'd like to 
> keep this separate from the main postgres log.
> 
> Alternatives that I've come up with (none of them very satisfactory):
> use 'raise' to record in postgres log
> put the error recording in the client code (as invoked by scheduler) - use 
> BEGIN TRANSACTION to start a new transaction
> use COPY to output to a file
> Can anyone suggest something that would meet my requirements above?
It's hacky, and I haven't tried it in a few years.  Set up a foreign table that 
resides in the same database. When you write to the foreign table, it will be 
using a 'loopback' connection, and that transaction will be able to commit 
because it is a separate connection. 

  To be fair, I haven't actually done this since the days of dblink; I 
*believe* it should work with FDW, though. 
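
  A minimal sketch of the dblink flavour (this assumes the dblink extension is
installed, an error_log(logged_at, msg) table already exists, and pg_hba.conf
allows the loopback connection):

CREATE OR REPLACE FUNCTION log_error(p_msg text) RETURNS void AS $$
BEGIN
    -- runs on its own connection, so it survives a rollback of the caller
    PERFORM dblink_exec('dbname=' || current_database(),
                        format('INSERT INTO error_log (logged_at, msg) VALUES (now(), %L)', p_msg));
END;
$$ LANGUAGE plpgsql;

  Call it from the EXCEPTION block of the batch functions; the insert commits
even though the surrounding transaction rolls back.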

--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com

> Steve Pritchard
> British Trust for Ornithology, UK


Re: [GENERAL] How to reduce pg_dump backup time

2015-10-06 Thread Scott Mead

> On Oct 6, 2015, at 00:52, Sachin Srivastava  wrote:
> 
> Dear Team,
> 
>  
> 
> I am using PostgreSQL 9.1 on Redhat Linux on my Production environment. My 
> database size is 680 GB and it take 7 hour for completion the pg_dump backup.
> 
>  
> 
> I want that my pg_dump backup should be fast and take less time.
> 
>  
> 
> In PostgresQL 9.3 there is “ -j njobs” option is available for fast pg_dump 
> backup.
> 
>  
> 
> There is any way, that I will use “ -j njobs” option in “PostgreSQL 9.1 “ so 
> that my backup is completed in less time or any other way in ”PostgreSQL 9.1” 
> to take the backup fast. Kindly suggest?
> 
>  
> 
If you're using compression, disable it. It'll require more space, but it will be 
faster. Honestly, you should upgrade to a newer version; 9.1 is EOL in slightly 
less than 1 year. 

  For the upgrade, you can use a newer version of pg_dump. I'm not sure if the 
parallel option would be available against a 9.1 server (but it's worth a try). 
I wouldn't try restoring that dump to a 9.1 server, just use it to upgrade to 
9.4. 


>  
> 
> [root]# lsb_release –a
> 
>  
> 
> LSB Version:
> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
> 
> Distributor ID: RedHatEnterpriseServer
> 
> Description:Red Hat Enterprise Linux Server release 6.7 (Santiago)
> 
> Release:6.7
> 
> Codename:   Santiago
> 
>  
> 
> [root@CPPMOMA_DB01 even]# psql
> 
> Password:
> 
> psql.bin (9.1.2)
> 
> Type "help" for help.
> 
>  
> 
> postgres=# select version();
> 
> version
> 
> ---
> 
>  PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 
> 20080704 (Red Hat 4.1.2-46), 64-bit
> 
> (1 row)
> 
>  
> 
> postgres=#
> 
>  
> 
>  
> 
>  
> 
>  
> 
> Regards,
> 
> Sachin Srivastava


Re: [GENERAL] How to reduce pg_dump backup time

2015-10-06 Thread Scott Mead
On Tue, Oct 6, 2015 at 8:54 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/05/2015 09:52 PM, Sachin Srivastava wrote:
>
>> Dear Team,
>>
>> I am using PostgreSQL 9.1 on Redhat Linux on my Production environment.
>> My database size is 680 GB and it take 7 hour for completion the pg_dump
>> backup.
>>
>> I want that my pg_dump backup should be fast and take less time.
>>
>> In PostgresQL 9.3 there is “ -j /njobs” /option is available for fast
>> pg_dump backup.
>>
>> There is any way, that I will use *“ **-j /njobs”/*/option /in
>> “PostgreSQL 9.1 “ so that my backup is completed in less time or any
>> other way in ”PostgreSQL 9.1” to take the backup fast. Kindly suggest?
>>
>
> In addition to Scott's suggestions have you looked at replication?:
>
> http://www.postgresql.org/docs/9.1/interactive/high-availability.html
>
> Basically, continuous backup.
>

+1 - It doesn't actually lower the base backup time, but there's little
overhead for archiving (other than storage space) and restoration is just
starting an instance and replaying a few logs.  *Much* faster.  You can use
a tool like WAL-E or barman to build out a nice strategy.

   At the end of the day, you'll still want to do a periodic, logical
backup (make sure your physical backups are sane), but using physical
backups will open you up to many more options.
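
   For reference, the SQL side of a low-level base backup on 9.1 is just two
calls around an OS-level copy; this is only a sketch and assumes archive_mode
is on with a working archive_command (pg_basebackup does the same dance for
you):

SELECT pg_start_backup('nightly_base', true);   -- true = request a fast checkpoint
-- ... copy $PGDATA with rsync/tar at the OS level ...
SELECT pg_stop_backup();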

--
Scott Mead
Sr. Architect
*OpenSCG*
http://www.openscg.com <http://openscg.com/>



>> *[root]# lsb_release –a*
>>
>> **
>>
>> LSB
>>
>> Version::base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>
>> Distributor ID: RedHatEnterpriseServer
>>
>> Description:Red Hat Enterprise Linux Server release 6.7 (Santiago)
>>
>> Release:6.7
>>
>> Codename:Santiago
>>
>> *[root@CPPMOMA_DB01 even]# psql*
>>
>> Password:
>>
>> psql.bin (9.1.2)
>>
>> Type "help" for help.
>>
>> postgres=# select version();
>>
>> version
>>
>>
>> ---
>>
>> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
>> 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
>>
>> (1 row)
>>
>> postgres=#
>>
>> *Regards,*
>>
>> Sachin Srivastava
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] PG V9 on NFS

2013-02-11 Thread Scott Mead
On Mon, Feb 11, 2013 at 5:22 PM, Gauthier, Dave dave.gauth...@intel.comwrote:

  Can PG V9.1* support a DB that's on an NFS disk?

 I googled around, but nothing popped out.

 Also, would you happen to know the answer to this for MySQL v5.5*? 

 Thanks in Advance.



I've done this before, and I really wish I hadn't.  NFS has gotten speedier
and more reliable, but the tendency to lock up an entire system and crash
unexpectedly is still there.  The other thing with NFS is that it is a lot
easier for people to try to run multi-headed clusters (which isn't
supported in postgres and will cause corruption if you do it).  If
network-based storage is really a need, take a look at iSCSI or ATA over
Ethernet.  I've had great luck there.


--Scott


Re: [GENERAL] .pgpass and root: a problem

2013-02-05 Thread Scott Mead
On Tue, Feb 5, 2013 at 12:15 PM, Shaun Thomas stho...@optionshouse.comwrote:

 Hey folks,

 We're wanting to implement a more secure password policy, and so have
 considered switching to LDAP/Active Directory for passwords. Normally, this
 would be fine, but for two things:

 1. Tons of our devs use .pgpass files to connect everywhere.
 2. Several devs have root access to various environments.


I would love to see pgpass storing encrypted stuff here, that'd be great...
in the meantime...

 Is there any way that you could move your 'root-fellas' to a 'sudo' model
so that they can have *most* of what they need, without allowing identity
switches?  I was trying to come up with something clever, but if they're
root, they're root.

--Scott Mead
sco...@openscg.com
http://www.openscg.com




 So, by switching from database-stored passwords to LDAP, we open a
 security problem that currently only affects the database, to developers'
 personal LDAP password, which is the key to every service and machine they
 use in the company.

 Unfortunately I can't see any way around this at all. Ident won't really
 work on remote systems, .pgpass isn't encrypted, and you can't use
 encrypted/hashed password entries either.

 I agree that we should probably have our root access much more locked down
 than it is, but it's still a valid problem. I don't think I'd even want a
 restricted set of root users able to see my LDAP password in plain text.

 Has anyone put thought into combining LDAP and .pgpass, or has it simply
 been abandoned every time the issue has presented itself?

 Thanks in advance!

 --
 Shaun Thomas
 OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
 312-676-8870
 stho...@optionshouse.com

 __**

 See 
 http://www.peak6.com/email_**disclaimer/http://www.peak6.com/email_disclaimer/for
  terms and conditions related to this email


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/**mailpref/pgsql-generalhttp://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] OpenSCG 9.0.10 package

2012-10-11 Thread Scott Mead
Hey Ian,


On Thu, Oct 11, 2012 at 10:52 AM, Ian Harding harding@gmail.com wrote:

 I know this is the wrong place, but I tried creating an account on
 their site to contact them and it does not work.


I'll take a look at that right away, sorry it happened!


 The 9.0.10 package throws an error when I try to install it that it
 has an unsatisfied dependency on libpython.  Since it brings its own
 libraries, I'm not sure why that would be, but there it is.  Hoping
 someone from openscg sees this...

 rpm -ivh postgres-9.0.10-1.x86_64.openscg.rpm
 warning: postgres-9.0.10-1.x86_64.openscg.rpm: Header V3 DSA/SHA1
 Signature, key ID 38379aec: NOKEY
 error: Failed dependencies:
   libpython2.4.so.1.0()(64bit) is needed by postgres90-9.0.10-1.x86_64


Which Linux distribution/version are you installing on?  If you have a
package manager that can help with dependency resolution (like yum), can
you use that to either install the postgres RPM or just the libpython
package?

--Scott
OpenSCG



 Also...

 ls /usr/lib64/libpython2.6.so.1.0
 /usr/lib64/libpython2.6.so.1.0


 although I'm pretty sure that doesn't matter.

 - Ian


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Visualize database schema

2012-08-14 Thread Scott Mead
On Tue, Aug 14, 2012 at 5:24 AM, Robert Gravsjö robert.grav...@imano.sewrote:


  Can anyone advice about a tool to visualize a database schema? Ideally,
 I would like something that takes the SQL definition of a schema or database
  (essentially the output of pg_dump) and produces a graphical
 representation of the tables, constraints and indexes which can be moved
 around for ease of
  visualization (something like the Graphical query builder that comes
 with pgAdmin =1.14).

 A simple method for this, open source cross-platform and work for various
 RDBMS, is the following:
  1. Download  SQL Power Architect at
 https://code.google.com/p/power-architect/downloads/list (it's a java app
 so you need a JVM installed and JDBC drivers for the RDBMS to use)


+1 on SQL Power Architect.  I use it a lot.

--Scott


 2. Run it, right click in left pane and choose Add source connection -
 New connection... to create a connection for your db
 3. Expand the connection  and the database and then simply drag and drop
 the schema to the right pane.
 4. Click on Automatic layout in the toolbar (it's the icon that looks
 like three connected boxes and a green triangle)

 From there you can examine the database. There are some limitations,
 constraints for example, so for more complicated tools
 look into ERD-tools as suggested by John in another reply.

 Regards,
 roppert


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Two instances show same databases

2012-04-18 Thread Scott Mead
On Wed, Apr 18, 2012 at 7:45 PM, Jose Hales-Garcia 
jose.halesgar...@stat.ucla.edu wrote:


 Hello,

 I'm setting up a second instance on a server.  The instance versions are
 8.3.5 (original instance) and 9.1.3 (the new instance I'm attempting to set
 up).

 I believe I have followed the steps in setting up a second instance
 faithfully.  The new instance starts up, but I'm seeing that when I list
 the 9.1.3 databases, the 8.3.5 databases are shown.
 As the _pgsql user I run '/usr/local/postgresql/pgsql-9.1.3/bin/psql -l'
 and I get a listing of databases from the 8.3.5 instance (with the 'Access
 privileges' column added).

 I believe this can't be right and I think I've done something wrong.

 The settings I've used for the 9.1.3 instance are:

 The configuration during compile used was:
 ./configure --prefix=/usr/local/postgresql/pgsql-9.1.3
 (For 8.3.5 I used the default prefix, /usr/local/pgsql)

 In /usr/local/postgresql/pgsql-9.1.3/data/postgresql.conf
 port = 5433
 (The 8.3.5 port is the default, 5432)



Then you need to connect to the 9.1.3 DB with the 9.1.3 port and the 8.3.5
DB with the 8.3.5 port:

psql -p 5432 -l
  would show 8.3.5 databases

psql -p 5433 -l

   would show 9.1.3 databases

 --Scott




 The initdb command used is:
 /usr/local/postgresql/pgsql-9.1.3/bin/initdb -E utf8 --locale=en_US.UTF-8
 -D /usr/local/postgresql/pgsql-9.1.3/data

 The startup invocation is:
 /usr/local/postgresql/pgsql-9.1.3/bin/pg_ctl -D
 /usr/local/postgresql/pgsql-9.1.3/data -l
 /var/log/pgsql/pgsql-9.1.3/access.log start

 The lock files are in place:
 /tmp/.s.PGSQL.5432
 /tmp/.s.PGSQL.5432.lock
 /tmp/.s.PGSQL.5433
 /tmp/.s.PGSQL.5433.lock


 I appreciate any help.

 Sincerely,
 Jose
 ...
 Jose Hales-Garcia
 UCLA Department of Statistics




Re: [GENERAL] Why warm-standby doesn't work using file-based log shipping method?

2012-02-21 Thread Scott Mead
On Tue, Feb 21, 2012 at 5:09 AM, sunpeng blueva...@gmail.com wrote:

 I try to setup warm-standby using file-based log shipping method:
 Master: 5432 port
 Standby:6432 port at same machine
 Master's :

- wal_level = archive
- archive_mode = on
- archive_command = 'cp %p /home/postgres/archive/%f'

 Standby's restore_command = 'cp /home/postgres/archive/%f %p'

The slave's restore_command needs to be pg_standby (or similar) instead of
'cp'; cp will just throw an error and stop restoring on the first file it
doesn't find.

http://www.postgresql.org/docs/9.1/static/pgstandby.html

--Scott



 1. Firstly do a base backup for Standby ( successfuly )
 [postgres@localhost ~]$ /home/postgres/db/standby/pgsql/bin/postmaster -D
 /home/postgres/db/standby/pgsql/data --port=6432
 LOG:  database system was interrupted; last known up at 2012-02-21
 17:24:33 CST
 LOG:  starting archive recovery
 LOG:  restored log file 00010006 from archive
 LOG:  redo starts at 0/670
 LOG:  consistent recovery state reached at 0/700
 cp: cannot stat `/home/postgres/archive/00010007': No such
 file or directory
 LOG:  could not open file pg_xlog/00010007 (log file 0,
 segment 7): No such file or directory
 LOG:  redo done at 0/694
 LOG:  restored log file 00010006 from archive
 cp: cannot stat `/home/postgres/archive/0002.history': No such file or
 directory
 LOG:  selected new timeline ID: 2
 cp: cannot stat `/home/postgres/archive/0001.history': No such file or
 directory
 LOG:  archive recovery complete
 LOG:  database system is ready to accept connections
 LOG:  autovacuum launcher started
 2. Then, let standby running as  warm-standby mode with the following
 parameters in recovery.conf to enable it continously track the Master
 server:

- standby_mode = on
- restore_command = 'cp /home/postgres/archive/%f %p'

 [postgres@localhost ~]$ /home/postgres/db/standby/pgsql/bin/postmaster -D
 /home/postgres/db/standby/pgsql/data --port=6432
 LOG:  database system was shut down at 2012-02-21 17:29:38 CST
 cp: cannot stat `/home/postgres/archive/0003.history': No such file or
 directory
 cp: cannot stat `/home/postgres/archive/0002.history': No such file or
 directory
 LOG:  entering standby mode
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory
 WARNING:  WAL was generated with wal_level=minimal, data may be missing
 HINT:  This happens if you temporarily set wal_level=minimal without
 taking a new base backup.
 LOG:  consistent recovery state reached at 0/700013C
 LOG:  record with zero length at 0/700013C
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory
 cp: cannot stat `/home/postgres/archive/0003.history': No such file or
 directory
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory
 cp: cannot stat `/home/postgres/archive/0003.history': No such file or
 directory
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory
 cp: cannot stat `/home/postgres/archive/00020007': No such
 file or directory

 3. Then let us do some DML queries on Master, for example, I insert new
 tuples(large number, 100) to a table.
 4. Theoreticaly new tuples could also be visible on Standby server some
 time later, yet after I promote the Standby to check, it reveals nothing
 has been down.

 My question is:
 1. Are there anything I have missed ?
 2. I still can't understand why Standby(when running as standby mode)
 always try to find  `/home/postgres/archive/00020007' file,
 yet Master only copy the following files to archive directory:
 [postgres@localhost archive]$ cd /home/postgres/archive
 [postgres@localhost archive]$ ls
 00010004
 00010005
 00010006
 00010006.0020.backup
 00010007
 00010008
 00010009
 obviously there is no 00020007.


 Thanks!




Re: [GENERAL] Database system identifier via SELECT?

2011-12-08 Thread Scott Mead
On Thu, Dec 8, 2011 at 4:27 PM, Bruce Momjian br...@momjian.us wrote:

 Joshua D. Drake wrote:
 
  On 12/08/2011 12:57 PM, Bruce Momjian wrote:
  
   Chris Redekop wrote:
   Is there any way to get the database system identifier via a select
   statement?  I have a primary/secondary async replication setup, and
 I'd
   like be able to verify from the client side that the provided primary
 and
   secondary connection strings do in fact refer to the same data set...
  
   Wow, that is a reasonable thing to want available via SQL, but I can't
   see a way to get to it.
  
   The only method I can suggest is to write a server-side C function that
   calls GetSystemIdentifier().


select inet_server_addr()?
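
For example (note that both functions return NULL over a unix-domain socket,
and that this identifies the host/port you landed on rather than the
pg_controldata "Database system identifier" itself):

SELECT inet_server_addr() AS server_ip,
       inet_server_port() AS server_port;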

--Scott




 
  This seems like something we should have in core, don't you think?

 Yeah, kind of, except this is the first request we ever got for this.
 The identifier is passed as part of streaming replication, so maybe it
 will be needed more in the future.

 --
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Extending the volume size of the data directory volume

2011-11-30 Thread Scott Mead
On Wed, Nov 30, 2011 at 4:19 PM, panam pa...@gmx.net wrote:

 Had to restart the import. This time, I tried with a smaller initial disk
 size (1GB) and extended it dynamically. It did not cause any problems.
 A different reason might be, that I remounted the volume in between during
 the last update to deactivate buffer flushing. Maybe a bad combination.
 Let's see how far it gets this time...


If you remounted without restarting postgres, I would say that's your
problem

--JT



 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/Extending-the-volume-size-of-the-data-directory-volume-tp5030663p5036767.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Recommended Protocol: Adding Rows to Table

2011-11-25 Thread Scott Mead
On Fri, Nov 25, 2011 at 11:05 AM, Rich Shepard rshep...@appl-ecosys.comwrote:

  The data originated in a spreadsheet and, based on my experience, contains
 duplicate records. After reformatting there are 143,260 rows to insert in
 the table. The approach I tried seems to have problems (explained below)
 and
 I would like to learn the proper way to insert rows in either an empty
 table
 or one with existing rows since I'll need to do this procedure for my
 projects.

  The table was created with the primary key and I used INSERT INTO ... to
 load the data. Many duplicate records, so I split the file into smaller
 ones
 and re-ran the command to load them. I'd then remove the reported duplicate
 rows from the source (text) file. Between runs, I'd issue the DROP TABLE
 command within psql and check it was empty using 'select count(*) from
 waterchem;'.

  It appeared that I removed duplicates from the first couple of smaller
 files so I combined them into one file named ok.sql. But, when I tested the
 combined file it, too, reported many duplicate records. Something wrong
 here.

  Perhaps a better approach is to put the CREATE TABLE command above the
 INSERT INTO section of the file (without specifying a PK), load that using
 the command 'psql -d database -f waterchem.sql', then add the PK and
 remove duplicates as postgres presents them. Or, perhaps there is a much
 more efficient way to accomplish this task.

  Rather than my flailing around and spending a lot of time failing to load
 all non-duplicate rows into the table I'd like to learn the proper way to
 accomplish this task. Suggestions, recommendations, and your experiences
 are
 requested.


Why don't you first load the data into a table (no primary key), then use
SQL to find your dups?

once loaded:
   SELECT primary_key_column, count(1) from table group by 1 having
count(1) > 1;

 At least then, you'll really know what you're in for.  You can either
script a DELETE or... whatever you want to do, once clean, you can add the
PK.
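
 A rough sketch of that DELETE, keeping one arbitrary row per key by physical
row id (the key columns below are placeholders; substitute the real candidate
key):

DELETE FROM waterchem a
 USING waterchem b
 WHERE a.ctid > b.ctid
   AND a.site_id     = b.site_id
   AND a.sample_date = b.sample_date
   AND a.analyte     = b.analyte;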

--
Scott


 TIA,

 Rich

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/**mailpref/pgsql-generalhttp://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Scott Mead
On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:

 On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
 wrote:
  The password I am entering in the terminal is right for sure. I've
  tried it a few times, checked the caps lock, etc. Also, if the log
  carries this FATAL password authentication failed, why does the
  terminal give the vague error no working server connection?


ISTM that either your connect string to the database is bad or you already
have too many clients connected to the db.  Have you tried:

  show max_connections;

  select count(1) from pg_stat_activity;

  In postgres?  Is it possible that there are just too many clients already
connected?  If not, then it's probably just your connect string (in
pgbouncer.ini) not being quite right.  You are using 127.0.0.1 for
connecting; is postgres even listening?

  netstat -lntp | grep 5432

Good luck.

--Scott


 no working connection means that client logged into pgbouncer
 successfully,
 but pgbouncer cannot log into server.

 Please look into Postrgres log file for details.

 If you see no failures there, you have wrong connect string in
 pgbouncer.ini.

 --
 marko

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Using the internal data dictionary

2011-11-18 Thread Scott Mead
On Fri, Nov 18, 2011 at 10:54 AM, Bill Thoen bth...@gisnet.com wrote:

  Thanks, guys!

 I'll take a closer look at the information_schema and pgAdmin and Maestro.
 Reinventing the wheel isn't a problem as this job is not critical, but the
 educational experience in looking at the system from another POV may be the
 bigger prize.


If you're looking to get to know the pg_catalog, run psql with the -E flag.
 Every time you run a meta-command, you will get a SQL dump that shows you
how psql assembled the output:

scott$ bin/psql -E postgres
psql (9.2devel)
Type help for help.

postgres=# \dt+
* QUERY **
SELECT n.nspname as Schema,
  c.relname as Name,
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN
'foreign table' END as Type,
  pg_catalog.pg_get_userbyid(c.relowner) as Owner,
  pg_catalog.pg_size_pretty(pg_catalog.pg_table_size(c.oid)) as Size,
  pg_catalog.obj_description(c.oid, 'pg_class') as Description
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**

List of relations
 Schema | Name | Type  | Owner |Size| Description
+--+---+---++-
 public | joe  | table | scott | 8192 bytes |
 public | mike | table | scott | 8192 bytes |
(2 rows)

postgres=#

It's a pretty quick way to get familiar with how the catalogs are
structured.  I agree that when it comes to 'getting things done',
information_schema is quicker and more stable, but knowing pg_catalog is a
great way to step it up.
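
For the original data-dictionary goal, a first cut straight out of
information_schema looks something like:

SELECT table_schema, table_name, column_name, data_type, is_nullable
  FROM information_schema.columns
 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY table_schema, table_name, ordinal_position;

Descriptions added with COMMENT ON aren't exposed there, though; for those you
still reach into pg_catalog via obj_description()/col_description().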

--
Scott Mead
 OpenSCG, http://www.openscg.com



 - Bill



 On 11/17/2011 8:34 PM, David Johnston wrote:

 On Nov 17, 2011, at 22:17, Bill Thoen bth...@gisnet.com bth...@gisnet.com 
 wrote:


  I need to assemble a complete data dictionary for project documentation and 
 other purposes and I was wondering about the pros and cons of using the 
 pg_catalog metadata. But I hesitate to poke around in here because I don't 
 know why it's kept so out of sight and not much documented. But it seems like 
 an ideal source of information to tap with a program to generate accurate, 
 current reports of what's in the database.

 Is this a bad idea (everything I'm thinking of doing would be read only 
 except for the description fields) but I'd just like to make sure that 
 there's not some innocent looking table in there that acts as a doomsday 
 device if you so much as read its first record, etc.  I'm just not sure why 
 this isn't more widely used or talked about.

 Regards,

 Bill Thoen
 GISnethttp://gisnet.com303-786-9961
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your 
 subscription:http://www.postgresql.org/mailpref/pgsql-general

  information_schema is the more standardized point of entry into the database 
 meta-data, catalog is generally intended for internals use and thus has a 
 less stable API contract.  That said, you are reinventing the wheel if you 
 are looking for a straight dump of the current reality.  Various third-party 
 tools already do this.  I've used, but am not affiliated with, PostgreSQL 
 Maestro.  Also, pgAdmin, I think, provides access to this information as well 
 (as does psql via it's various commands).

 You should never directly update the catalog but instead use the appropriate 
 SQL command.  For descriptions you need to use COMMENT ON.  Reading it 
 should never cause a problem.

 David J.






Re: [GENERAL] monitoring sql queries

2011-11-17 Thread Scott Mead
On Thu, Nov 17, 2011 at 11:46 AM, Tomas Vondra t...@fuzzy.cz wrote:

 On 17 Listopad 2011, 17:32, hubert depesz lubaczewski wrote:
  On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:
  I am in need of a tool or method to see each/every SQL query that
  hits the PostgreSQL database.  By query I mean the query in SQL
  syntax with all the parameters passed.
 
  What I want to do is:
  1) see the query
  2) Determine how long the query takes to execute
  3) Possibly log both of the above to a log file
 
  Is there any tool internal to PostgreSQL that would allow me to do this?
 
  I cannot monitor it from the code that is actually connecting 
  sending the query.
 
  any help or pointers would be greatly appreciated.
 
  just enable logging of queries.

 As depesz mentioned, there's a log_min_duration GUC, that allows you to
 log queries that exceed some time interval. If you want to log all
 queries, you may set this to 0 but it may easily fill your log with
 garbage.


Just as a warning, on heavily-loaded systems, this logging can have a
significant impact on your performance.  Not so much because it's logging,
but due to the fact that your log files may start requiring more disk I/O
than the actual database.  If you are going to do this under any serious
load, I would recommend separating 'pg_log' onto a separate [set of]
physical disk[s].

--Scott


 There are two contrib modules that might help you - pg_stat_statements and
 auto_explain. The former one is probably more interesting in this case.

 Tomas


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] monitoring sql queries

2011-11-17 Thread Scott Mead
On Thu, Nov 17, 2011 at 4:32 PM, J.V. jvsr...@gmail.com wrote:

 How is this accomplished?

 Is it possible to log queries to a table with additional information?

 1) num rows returned (if a select)

This isn't logged


 2) time to complete the query

This is logged


 3) other info?

 Take a look at the log_line_prefix parameter


 How is enabling this actually done?


 You enable this by using a GUC (Grand Unified Configuration) variable in the
postgresql.conf file called log_min_duration_statement.
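
 To check what's currently in effect, and to switch it on for a single
database while you experiment (mydb below is a placeholder; the setting takes
effect for new sessions), something like:

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('log_min_duration_statement', 'log_line_prefix', 'log_duration');

ALTER DATABASE mydb SET log_min_duration_statement = 0;   -- 0 logs every statement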

--Scott



 On 11/17/2011 9:32 AM, hubert depesz lubaczewski wrote:

 On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:

 I am in need of a tool or method to see each/every SQL query that
 hits the PostgreSQL database.  By query I mean the query in SQL
 syntax with all the parameters passed.

 What I want to do is:
 1) see the query
 2) Determine how long the query takes to execute
 3) Possibly log both of the above to a log file

 Is there any tool internal to PostgreSQL that would allow me to do this?

 I cannot monitor it from the code that is actually connecting
 sending the query.

 any help or pointers would be greatly appreciated.

 just enable logging of queries.

 Best regards,

 depesz


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/**mailpref/pgsql-generalhttp://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Re: [GENERAL] Need Help Installing Dblink…(Desperately…)

2011-11-12 Thread Scott Mead
On Sat, Nov 12, 2011 at 1:02 PM, Jerry Levan jerry.le...@gmail.com wrote:

 I think I have somehow botched my installation of 9.1.1 on Mac OSX 10.7.2

 I have a mac running 9.1.1.

 I then built dblink from the contrib directory and I did not
 see an install option in the make file.


 The Makefile should be including the global makefile; did you try a 'make
install'?



 I tried copying the dblink.so file to the postgresql library directory
 but i cannot figure out how to install the definitions..

 google:

http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/contrib/dblink/Attic/README.dblink?rev=1.12.4.1



 Any clues would be helpful.

 Thanks

 Jerry

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Regarding the shared disk fail over cluster configuration

2011-11-02 Thread Scott Mead
On Wed, Nov 2, 2011 at 11:55 AM, Debasis Mishra debasis1...@gmail.comwrote:

 Thanks a lot Ondrej Ivanic.I have few more doubts.

 1)While installing the postgress it asks for the data directory,which i
 refer to SAN volume(Shared LUN)-(example - /dbdata/pgsqldata).

 After that i am exporting $PGDATA= SAN Volume(example - /dbdata/pgsqldata).
 Where /dbdata is the shared LUN .

 Then I am running initdb - but it is not running successfully. The Error is
 /dbdata/pgsqldata is not empty.

 Because I can understand while installation postgress creates
 files/directory in the data folder. While running initdb also it is trying
 to create some directory which is already present.

 So I am not sure how to go ahead with this scenario.


It seems that your installer has already created the data for you.  If you
have files like:

  postgresql.conf
  base/
  pg_xlog/
..

 Then your postgres data directory is already initialized, no need for
initdb.



 2) Regarding the shared disk failover clustering you mentioned that if
 primary fails , cluster should bring it down and start postgres in
 secondary
 node.

 My doubt is - Whether cluster should start the postgres service in
 secondary
 node during failover or postgress will be running always. My undersatnding
 was in both the node postgress will be running and pointing to shared
 dbdata. And if node1/primary fails then cluster software should bring
 node2/secondary up.once it will bring node2 online there postgress must be
 running already.


 Is this feasible?


Do not do this.  At first glance, you can actually run in this
configuration (i.e. you can start the postmasters) but it will cause data
corruption.  DO NOT under ANY circumstances try to run both primary and
secondary on the same data directory simultaneously.  You will corrupt your
entire database.  The only way to run postgres with shared disk clustering
is in Active/Passive mode.

--
 Scott Mead
   OpenSCG http://www.openscg.com



 Thanks,
 Debasis

 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/Regarding-the-shared-disk-fail-over-cluster-configuration-tp4952316p4958449.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Error On Slony Replication

2011-11-02 Thread Scott Mead
On Wed, Nov 2, 2011 at 2:55 PM, Raymond O'Donnell r...@iol.ie wrote:

 On 02/11/2011 18:34, Prashant Bharucha wrote:
  Hello All
 
  For replication ,Created cluster and after I run Slon command
 
   getting error on
 
  PQconnectdb(dbname=XYZ host= user=cls password=1) failed - could
  not translate host name


If you want to connect to an instance on the local machine with unix
sockets, don't include the host=

--
  Scott Mead
   OpenSCG http://www.openscg.com


 The host is missing from the above.

 Ray.


 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-10-27 Thread Scott Mead
On Thu, Oct 27, 2011 at 2:39 PM, Brian Fehrle bri...@consistentstate.comwrote:

 Hi all, need some help/clues on tracking down a performance issue.

 PostgreSQL version: 8.3.11

 I've got a system that has 32 cores and 128 gigs of ram. We have connection
 pooling set up, with about 100 - 200 persistent connections open to the
 database. Our applications then use these connections to query the database
 constantly, but when a connection isn't currently executing a query, it's
 IDLE. On average, at any given time, there are 3 - 6 connections that are
 actually executing a query, while the rest are IDLE.


Remember, when you read pg_stat_activity, it is showing you query activity
from that exact specific moment in time.  Just because it looks like only
3-6 connections are executing, doesn't mean that 200 aren't actually
executing < .1ms statements.  With such a beefy box, I would see if you can
examine any stats from your connection pooler to find out how many
connections are actually getting used.
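You can also sample it yourself; on an 8.3-era catalog something like this
works (pre-9.2 the column is current_query and idle sessions show up as
'<IDLE>'; on 9.2+ you'd look at the state column instead):

SELECT sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active,
       count(*) AS total
  FROM pg_stat_activity;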




 About once a day, queries that normally take just a few seconds slow way
 down, and start to pile up, to the point where instead of just having 3-6
 queries running at any given time, we get 100 - 200. The whole system comes
 to a crawl, and looking at top, the CPU usage is 99%.

 Looking at top, I see no SWAP usage, very little IOWait, and there are a
 large number of postmaster processes at 100% cpu usage (makes sense, at this
 point there are 150 or so queries currently executing on the database).

  Tasks: 713 total,  44 running, 668 sleeping,   0 stopped,   1 zombie
 Cpu(s):  4.4%us, 92.0%sy,  0.0%ni,  3.0%id,  0.0%wa,  0.0%hi,  0.3%si,
  0.2%st
 Mem:  134217728k total, 131229972k used,  2987756k free,   462444k buffers
 Swap:  8388600k total,  296k used,  8388304k free, 119029580k cached


 In the past, we noticed that autovacuum was hitting some large tables at
 the same time this happened, so we turned autovacuum off to see if that was
 the issue, and it still happened without any vacuums running.

That was my next question :)


 We also ruled out checkpoints being the cause.

 How exactly did you rule this out?  Just because a checkpoint is over
doesn't mean that it hasn't had a negative effect on the OS cache.  If
you're stuck going to disk, that could be hurting you (that being said, you
do point to a low I/O wait above, so you're probably correct in ruling this
out).



 I'm currently digging through some statistics I've been gathering to see if
 traffic increased at all, or remained the same when the slowdown occurred.
 I'm also digging through the logs from the postgresql cluster (I increased
 verbosity yesterday), looking for any clues. Any suggestions or clues on
 where to look for this to see what can be causing a slowdown like this would
 be greatly appreciated.

 Are you capturing table-level stats from pg_stat_user_[tables | indexes]?
Just because a server doesn't look busy doesn't mean that you're not doing
1000 index scans per second returning 1000 tuples each time.
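If not, even a simple periodic sample of the per-table counters will show it,
e.g.:

SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
       n_tup_ins, n_tup_upd, n_tup_del
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read + coalesce(idx_tup_fetch, 0) DESC
 LIMIT 20;

Two samples a minute apart, diffed, give you per-second rates.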

--Scott


 Thanks,
- Brian F

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/**mailpref/pgsql-generalhttp://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Extraneous Files

2011-10-20 Thread Scott Mead
On Thu, Oct 20, 2011 at 1:12 PM, Ian Harding harding@gmail.com wrote:

 If someone happened to accidentally end up with a lot of files that
 were NOT part of their database in the data/base/X directory, how
 could they go about getting a reliable list of files they could safely
 delete?  The files were there before the current incarnation of the
 database, so have ctime before the good ones, but may be valid because
 they were updated by rsync, but I don't know that mtime is reliable
 for every file either, since some may have been unchanged from the
 first time this someone ran rsync.

 Not a super big deal, just wasting a lot of space...


It is a big deal, one should NEVER EVER manually delete files from under the
data directory.  If you want to remove the data, you can figure out what
relations the files tie back to by examining 'relfilenode' on pg_class...

select relname from pg_class where relfilenode='XXXX'
   -- XXXX is the name of the file before a period

  If you want to delete the files, you can then 'drop table relname'.  If
you value the logical consistency of your database, I wouldn't delete
anything unless you go through the database engine with a 'drop' command.
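
If the goal is just an inventory to compare against the directory listing, the
filenode-to-relation map for the current database is roughly this (remember
that large relations also have .1/.2 segment files, plus _fsm/_vm fork files,
sitting next to the base file):

   select relfilenode, relname, relkind
     from pg_class
    where relfilenode <> 0
    order by relfilenode;

Anything in that directory which never shows up here is suspect, but I'd still
confirm what it is before touching it, and remove real relations only via DROP.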

--Scott


 Thanks!

 Ian

 postgresql 9.0.4 on linux

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] pg_dump with select command

2011-09-14 Thread Scott Mead
On Wed, Sep 14, 2011 at 5:31 AM, Adarsh Sharma adarsh.sha...@orkash.comwrote:

 Any update on below issue.


Someone already responded to you with the answer, don't top-post.



 Thanks



 Adarsh Sharma wrote:

 Dear all,

 Today I need some part ( subset ) of some tables to another database to a
 remote server.
 I need to take backup of  tables after satisfying a select query.

 Is there any option to specify query in pg_dump command.I researched in
 the manual but not able to find that.
 Please let me know if it is possible as we can can specify in mysqldump
 command.


 Thanks



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] COPY failure on directory I own

2011-08-30 Thread Scott Mead
On Tue, Aug 30, 2011 at 1:20 PM, Scott Ribe scott_r...@elevated-dev.comwrote:

 On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote:

  The permissions on that directory are 755 and it's owned by me. Since I
  have no problems writing other files to that directory I must have the
  command syntax incorrect but I don't see where.


In this case, it's not about YOU and your permissions, it's about the
server.  The COPY command writes data as the 'postgres' operating system
user (or whichever user owns the postgres backend process).
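
A quick illustration of the difference (table name and paths are made up):

   -- server-side: the backend writes the file as the postgres OS user,
   -- on the server's filesystem
   COPY mytable TO '/tmp/mytable.csv' CSV;

   -- client-side alternative in psql: the file is written by *your* OS user,
   -- on the machine psql runs on
   \copy mytable to '/home/rich/mytable.csv' csv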

--Scott




 Where is the server and where are you? You are issuing a command to the
 server to create a file at that path on the server.

 --
 Scott Ribe
 scott_r...@elevated-dev.com
 http://www.elevated-dev.com/
 (303) 722-0567 voice





 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Indexes on inheriting tables

2011-08-29 Thread Scott Mead
2011/8/24 Ondrej Ivanič ondrej.iva...@gmail.com

 Hi,

 On 25 August 2011 11:17, Toby Corkindale
 toby.corkind...@strategicdata.com.au wrote:
  Do I need to make sure I re-create every index on every child table I
  create?
  That would be.. annoying, at best.

 Yes, it is a little bit annoying, but I like it. You don't need any index
 on the parent table, but you have to create them manually. I wrote a simple
 python script which creates partitions and the required indexes in advance
 (tables are partitioned by date).

 I like the flexibility, because you can have different indexes on
 different partitions. For example, I discovered that adding an index would
 improve several queries. In production I can't afford an exclusive lock
 (building an index concurrently takes ages), so I updated and re-ran the
 script, which re-created future partitions.



My personal favorite is the LIKE syntax:

  CREATE TABLE foo_1 (LIKE foo INCLUDING INDEXES) INHERITS (foo);

  It doesn't help you change children after the fact, but your new
partitions automatically get whatever indexes you've stuck on the master
table.

--Scott



 --
 Ondrej Ivanic
 (ondrej.iva...@gmail.com)

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] dblink() from GridSQL

2011-05-05 Thread Scott Mead
On Thu, May 5, 2011 at 1:40 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Thu, May 5, 2011 at 12:13 PM, Sam Nelson s...@consistentstate.com
 wrote:
  Hi List,
  We have a customer who is trying to migrate a few PostgresPlus instances
 to
  GridSQL clusters.  They have a process that pulls data from another
 server
  using dblink every night, and we're trying to replicate that on the
 GridSQL
  instance, but grid is being a bit of a pain.
  Grid doesn't seem to allow functions in from statements, and, of course,
 it
  spits out errors about functions returning records being called in the
 wrong
  context if we just try select dblink(foo, bar); (we had to try it).
  Has anyone else run into this specific issue?


GridSQL itself doesn't support functions.


  Is there a known workaround?
   Any ideas on what else we should try?


You'd have to present the data to be partitioned to the gsql controller for
partitioning to happen properly, or use the high-speed import that it comes
with.  Could you dump the data to an intermediary csv and then push it at
the import utility?

--Scott


 have you considered wrapping the output of the dblink query in a view?

 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] pg_reorg

2011-04-20 Thread Scott Mead
On Tue, Apr 19, 2011 at 9:48 AM, Jens Wilke j...@wilke.org wrote:

 On Tue, Apr 19, 2011 at 04:02:01AM +0530, Vibhor Kumar wrote:

   IIRC vacuum full mode rewrites the indexes as well.
 
  Till 8.4 no. From 9.0 onwards yes. However VACUUM FULL still locks the
 table.

 Don't be confused with the vacuum full term.
 This has nothing to do with the postgresql vacuum full command.
 Both pg_reorg's vacuum full and cluster mode do the pretty same thing.
 They rewrite the table and all their indexes. They use triggers to update
 the new table during the reorganisation.
 The only difference is that cluster does an additional order by.
 Both of them lock the original table at the end of the reorganisation just
 for the switch.
 If the lock is not granted within -T seconds, the backends holding locks
 are canceled.

 If you run out of diskspace, it's possible to reorg table by table.
 And yes, pg_reorg does only work with tables with a primary key.
 This will change in future releases, IIRC


How does it do with tables that have huge amounts (50 - 100 GB ) of TOASTed
data?





 regards, Jens

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] pg_reorg

2011-04-18 Thread Scott Mead
On Mon, Apr 18, 2011 at 4:39 PM, Merlin Moncure mmonc...@gmail.com wrote:

 ...is an amazing tool!


I've seen it, but catalog hacks always make me nervous.  Anybody else have
good / bad experience to share?

--scott



 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Per-session memory footprint (9.0/windows)

2011-03-02 Thread Scott Mead
On Wed, Mar 2, 2011 at 11:07 AM, Hannes Erven han...@erven.at wrote:

 Folks,


 I run a PG (currently 8.4, but will shortly migrate to 9.0) database on
 Windows Server 2003 that supports a desktop application which opens a
 few long-running sessions per user. This is due to the Hibernate
 persistence layer and the one session per view pattern that is
 recommended for such applications.
 These sessions usually load a pile of data once to display to the user,
 and then occasionally query updates of this data or even fetch single
 rows over a long time (like a few hours).

 It seems that each of the server postmaster.exe processes takes up
 approx. 5 MB of server memory (the virtual memory size column in task
 manager), and I guess this truly is the private memory these processes
 require. This number is roughly the same for 8.4 and 9.0 .


Task manager is misleading as multiple processes are sharing memory.  You
need process explorer
http://technet.microsoft.com/en-us/sysinternals/bb896653 (or something like
it) to see real memory consumption per backend.  Adding up the columns in
task manager is wrong and most definitely scary if you believe it :-)

--Scott




 As there are many, many such server processes running, is there anything
 I can do to reduce/optimize the per-session memory footprint?

 I'm aware of the sort_mem etc. parameters
 (http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ) but
 these seem to only apply to the execution of queries, not to sessions
 that mainly sit around waiting, right?


 Thank you for any hints!

-hannes

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Tablespace Issue

2011-02-17 Thread Scott Mead
On Thu, Feb 17, 2011 at 4:08 AM, Adarsh Sharma adarsh.sha...@orkash.comwrote:

 Dear all,

 Today I ran into a serious issue while creating a table in a database.

 pdc_uima=# create table clause2_naxal as select * from clause2 c
 ,page_naxal_v3 p where c.source_id = p.crawled_page_id;
 ERROR:  could not extend relation pg_tblspc/17208/17216/23098672: No space
 left on device
 HINT:  Check free disk space.
 pdc_uima=# drop table clause2_naxal;
 ERROR:  table clause2_naxal does not exist

 I thought that the hard disk on which the tablespace for the pdc_uima
 database is created had become full.

 So, I attached a new hard disk (300 GB) and created a new tablespace in it.

 And after that, I issued the below command:

 ALTER DATABASE pdc_uima SET TABLESPACE ts_pdc_uima1;

 It takes a lot of time, as the database size is nearly 40 GB.

 It seems it moves the whole database to the new place. I think this is not
 good for the future. Is there any way to have new data in the new tablespace
 and leave the old data where it is?


There is a 'TABLESPACE' option to the CREATE TABLE statement.
   http://www.postgresql.org/docs/current/static/sql-createtable.html
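
Roughly, using the names from your message (CREATE TABLE ... AS accepts the
same TABLESPACE clause, and default_tablespace steers everything created after
you set it; treat this as a sketch):

   CREATE TABLE clause2_naxal TABLESPACE ts_pdc_uima1 AS
     SELECT * FROM clause2 c, page_naxal_v3 p
      WHERE c.source_id = p.crawled_page_id;

   -- or make new objects land in the new tablespace by default,
   -- without moving the old ones
   ALTER DATABASE pdc_uima SET default_tablespace = 'ts_pdc_uima1';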


--Scott



 Also, can I delete the old tablespace now?


 Thanks & Best Regards,

 Adarsh Sharma


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] monitoring warm standby lag in 8.4?

2010-12-10 Thread Scott Mead
Yeah, my website is busted.  I'll fix it for you.


On Thu, Dec 9, 2010 at 2:30 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 Hi all,

 I'm wondering if there's an accepted way to monitor a warm standby
 machine's lag in 8.4. The wiki[1] has a link[2] to a script which
 parses the output of pg_controldata, looking for a line like this:

  Time of latest checkpoint:            Thu 09 Dec 2010 01:35:46 PM EST

 But I'm not sure whether this timestamp is to be trusted as an
 indicator of how far behind the standby is in its recovery -- this
 timestamp just tells us when the standby last performed a checkpoint,
 regardless of how far behind in the WAL stream it is, right?

 I haven't come across any other monitoring suggestions for warm
 standby on 8.4. I've seen suggestions for hot standby slaves to use:
  SELECT pg_last_xlog_receive_location();
 but this won't work on an 8.4 warm standby of course. I've searched
 around and haven't found[3] any other tips on how to monitor my
 standby.

 The manual mentions[4] using pg_xlogfile_name_offset() in the context
 of implementing record-based log shipping. Would this be useful for
 monitoring standby lag? Any other ideas?

 Thanks,
 Josh


 --
 [1] http://wiki.postgresql.org/wiki/Warm_Standby
 [2] http://www.kennygorman.com/wordpress/?p=249
 [3] I was hoping this page would have some relevant info:
 http://www.scottrmead.com/blogs/scott/warm-standby-monitoring , but
 it's down now :(
 [4] 
 http://www.postgresql.org/docs/8.4/static/warm-standby.html#WARM-STANDBY-RECORD

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] where is pg_stat_activity (and others) in the documentation?

2010-11-20 Thread Scott Mead
On Mon, Nov 15, 2010 at 10:03 AM, Vick Khera vi...@khera.org wrote:

 On Mon, Nov 15, 2010 at 5:15 AM, Willy-Bas Loos willy...@gmail.com
 wrote:
  I was looking for what exactly waiting means in pg_stat_activity.



You can find out exactly what you're waiting for by correlating this to the
pg_locks table.

  Grab the 'procpid' of your waiting query and run:

  select * from pg_locks where pid=thepid and granted='f';

   Notice, in the pg_locks table, the logic works slightly differently... if
you're 'waiting' (pg_stat_activity), then you haven't been 'granted'
(pg_locks).
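
Putting the two views together (one possible query; column names as they exist
in 8.4/9.0):

   select a.procpid, a.current_query, l.locktype, l.relation::regclass, l.mode
     from pg_stat_activity a
     join pg_locks l on l.pid = a.procpid and not l.granted
    where a.waiting;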


As far as documentation:
http://www.postgresql.org/docs/current/static/monitoring-stats.html

--Scott



 waiting for a lock.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Tue, Nov 16, 2010 at 10:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Man, the number of misunderstandings in this thread is staggering.
 Let me try to explain what the proposed feature will and will not do.

 1. The system catalog entries for all tables will be wal-logged.
 So schema (DDL) will survive a crash.  There wouldn't be any way
 to make it not do that, because we can't wal-log only some updates
 to a particular table, and that includes the catalogs in particular.

 Gotcha


 2. What's proposed as the new feature is that specific non-system
 tables can be marked as unlogged, meaning that WAL entries won't
 be made for changes in those tables' contents (nor their indexes'
 contents).  So we can't guarantee that the contents of such tables
 will be correct or consistent after a crash.  The proposed feature
 deals with this by forcibly truncating all such tables after a crash,
 thus ensuring that they're consistent though not populated.  So the
 possible use-cases for such tables are limited to where (a) you can
 repopulate the tables on demand, or (b) you don't really care about
 losing data on a crash.


I would rather be allowed to decide that for myself.



 3. There's a lot of wishful thinking here about what constitutes a
 crash.  A backend crash *is* a crash, even if the postmaster keeps
 going.  Data that had been in shared buffers doesn't get written out
 in such a scenario (and if we tried, it might be corrupt anyway).  So
 unlogged tables would be corrupt and in need of truncation after such an
 event.  Obviously, the same goes for an OS-level crash or power failure.


Right, just let *me* decide, that's all.



 4. The last bit of discussion on -hackers concerned what to do in
 the case where the server got shut down cleanly.  If it was shut
 down cleanly, then any data for unlogged tables would have been
 written out from shared buffers ... but did the data make it to disk?
 There's no easy way to know that.  In the event of an OS crash or
 power failure shortly after server shutdown, it's possible that
 the unlogged tables would be corrupt.  So Robert's initial proposal
 includes truncating unlogged tables at any database startup, even
 if the previous shutdown was clean.  Some (including me) are arguing
 that that is unnecessarily strict; but you do have to realize that
 you're taking some risk with data validity if it doesn't do that.


It is too strict, it makes the feature barely more usable than a temp table.
As a DBA, I realize the implication of the feature:
   *) b0rked indexes
   *) b0rked data
   *) Not knowing what's good and what's bad
   *) Bad reports
   *) Bad BI

etc..., etc... etc...

   Still, I'd rather be allowed to make the decision here.  I think that
having the database try to enforce integrity on something I've marked as
'corruptible' (via the 'unlogged' flag) will be a constant fight between me
and the system.  In the end, I'd just not use the feature.


 The bottom line here is that you really can only use the feature
 for data that you're willing to accept losing on no notice.
 Allowing the data to persist across clean shutdowns would probably
 improve usability a bit, but it's not changing that fundamental fact.


Agreed, and that's fine.  IMHO, it improves the usability 10 fold.  Having
it truncated on server restart is useful for only a fraction of the
use-cases for this feature.

--Scott



regards, tom lane



Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Wed, Nov 17, 2010 at 10:38 AM, Scott Mead sc...@scottrmead.com wrote:

 On Tue, Nov 16, 2010 at 10:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Man, the number of misunderstandings in this thread is staggering.
 Let me try to explain what the proposed feature will and will not do.

 1. The system catalog entries for all tables will be wal-logged.
 So schema (DDL) will survive a crash.  There wouldn't be any way
 to make it not do that, because we can't wal-log only some updates
 to a particular table, and that includes the catalogs in particular.

 Gotcha


 2. What's proposed as the new feature is that specific non-system
 tables can be marked as unlogged, meaning that WAL entries won't
 be made for changes in those tables' contents (nor their indexes'
 contents).  So we can't guarantee that the contents of such tables
 will be correct or consistent after a crash.  The proposed feature
 deals with this by forcibly truncating all such tables after a crash,
 thus ensuring that they're consistent though not populated.  So the
 possible use-cases for such tables are limited to where (a) you can
 repopulate the tables on demand, or (b) you don't really care about
 losing data on a crash.


 I would rather be allowed to decide that for myself.



 3. There's a lot of wishful thinking here about what constitutes a
 crash.  A backend crash *is* a crash, even if the postmaster keeps
 going.  Data that had been in shared buffers doesn't get written out
 in such a scenario (and if we tried, it might be corrupt anyway).  So
 unlogged tables would be corrupt and in need of truncation after such an
 event.  Obviously, the same goes for an OS-level crash or power failure.


 Right, just let *me* decide, that's all.



 4. The last bit of discussion on -hackers concerned what to do in
 the case where the server got shut down cleanly.  If it was shut
 down cleanly, then any data for unlogged tables would have been
 written out from shared buffers ... but did the data make it to disk?
 There's no easy way to know that.  In the event of an OS crash or
 power failure shortly after server shutdown, it's possible that
 the unlogged tables would be corrupt.  So Robert's initial proposal
 includes truncating unlogged tables at any database startup, even
 if the previous shutdown was clean.  Some (including me) are arguing
 that that is unnecessarily strict; but you do have to realize that
 you're taking some risk with data validity if it doesn't do that.


 It is too strict, it makes the feature barely more usable than a temp
 table.
 As a DBA, I realize the implication of the feature:
*) b0rked indexes
*) b0rked data
*) Not knowing what's good and what's bad
*) Bad reports
 *) Bad BI

 etc..., etc... etc...

Still, I'd rather be allowed to make the decision here.  I think that
  having the database try to enforce integrity on something I've marked as
  'corruptible' (via the 'unlogged' flag) will be a constant fight between me
 and the system.  In the end, I'd just not use the feature.


 The bottom line here is that you really can only use the feature
 for data that you're willing to accept losing on no notice.
 Allowing the data to persist across clean shutdowns would probably
 improve usability a bit, but it's not changing that fundamental fact.


 Agreed, and that's fine.  IMHO, it improves the usability 10 fold.  Having
 it truncated on server restart is useful for only a fraction of the
 use-cases for this feature.



Now that I've just sent that last piece, what about a 'truncate on restart'
option that is defaulted to on?  That way, the community feels good knowing
that we're trying to protect people from themselves, but like the 'fsync'
feature, I can load the gun and pull the trigger if I really want to.  I'd
like to see that so even if there is a server crash, it doesn't truncate.
That way, I can rename the garbage table if I want, create a new one for all
new data and then be allowed to glean what I can from the last one.

--Scott



 --Scott



regards, tom lane





Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-17 Thread Scott Mead
On Wed, Nov 17, 2010 at 12:49 PM, Josh Berkus j...@agliodbs.com wrote:


  As was already mentioned, application logs. Unlogged tables would be
 perfect for that, provided they don't go *poof* every now and then for
 no good reason. Nobody's going to be too heart broken if a handful of
 log records go missing, or get garbled, after a server crash or power
 outage. Delete 'em all after every restart though, and that's a problem.


 That's a nice thought, but it's not how data corruption works in the event
 of a crash.  If a table is corrupted, *we don't know* how it's corrupted,
 and it's not just the last few records which are corrupted.  So for
 unlogged tables, there is never going to be any other option for crashes
 than to truncate them.

 Robert Haas did discuss the ability to synch unlogged tables on a planned
 shutdown, though.   However, that's liable to wait until 9.2, given the
 multiple steps required to make it work.

 Note that you would have the option of periodically synching an unlogged
 table to pgdump or to a logged table, via script, if you cared about
 retaining the data.  That would probably give you the behavior you want,
 above.
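
That periodic sync could be as small as something along these lines (purely a
sketch, table names invented; app_log is the unlogged table, app_log_archive
an ordinary logged one):

   INSERT INTO app_log_archive SELECT * FROM app_log;
   TRUNCATE app_log;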


In an airplane, a pilot can kill the engine mid-flight if [s]he wants to.
They can deploy the flaps /slats at cruise speed / altitude, and if they're
so minded, they can land with a full tank of gas.  Now, none of these things
are particularly wise, but that's why the pilots are given *slightly* more
learning than your average bus driver.

  If you want to have a widely usable 'unlogged' table feature, I highly
recommend that 'truncate on server crash/restart' be an option that is
defaulted to true.  That way, I can go in and push the buttons I want and
give corrupted data to whomever, whenever I like.  (Land with a full tank of
Jet-A).

Whatever the decision is about backup, doesn't really matter IMO, but I
honestly think that the benefit of an unlogged table is there for both
session data (I run my session db's in fsync mode anyway and re-initdb them
on boot) AND for logging data where I can't take WAL anymore, but would like
to be able to have them in the same cluster as other stuff.  If they just
disappear then this feature won't be useful [to me] and I'll have to either
wait for the patch or give up on it and do a flat-file / lucene project just
to deal with it (I really don't want to do that :-).

--Scott




 --
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-16 Thread Scott Mead
On Tue, Nov 16, 2010 at 7:21 PM, Glen Parker glene...@nwlink.com wrote:

 On 11/16/2010 03:24 PM, Karsten Hilbert wrote:

 PostgreSQL 9.1 is likely to have, as a feature, the ability to create
 tables which are unlogged, meaning that they are not added to the
 transaction log, and will be truncated (emptied) on database restart.
 Such tables are intended for highly volatile, but not very valuable,
 data, such as session statues, application logs, etc.


 I have been following loosely this discussion on HACKERS, but seem to have
 missed the part about truncating such tables on server restart.

 I have an immediate use for unlogged tables (application logs), but having
 them truncate after even a clean server restart would be a show stopper.  I
 keep log data for 2 months, and never back it up.  Having it disappear after
 a system melt down is acceptable, but not after a clean restart.  That would
 be utterly ridiculous!


+1  -- Is there a technical reason to do a TRUNCATE on restart?  I'd feel
better if I could just have unlogged tables that survive unless something
like a power-outage etc...  I'm in the exact same boat here, lots of big
logging tables that need to survive reboot, but are frustrating when it
comes to WAL generation.




 As to the topic of the thread, I think pg_dump needs to dump unlogged
 tables by default.

 -1 I disagree.  I'm fine with having the loaded weapon  pointed at my foot.

--Scott



 -Glen



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] port warded (iptables) postgres

2010-11-16 Thread Scott Mead
Make sure that listen_addresses='someIP' or '*'

By default, the server only listens on the unix socket and on localhost.
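
In postgresql.conf that looks roughly like this (changing listen_addresses
requires a server restart; the address below is just an example):

   listen_addresses = '*'      # or a specific address, e.g. '192.168.1.10'
   port = 5432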


--Scott


On Tue, Nov 16, 2010 at 6:41 PM, zhong ming wu mr.z.m...@gmail.com wrote:

 Hello

 I have successfully used iptables to direct ports for other services.
 For example I run apache on 8443 on 127.0.0.1 and
  use iptables to direct traffic to a public IP on 443.  Trying the same
 with postgresql does not seem to work properly.

 I was wondering whether anyone has successfully used iptables +
 postgres this way

 Thanks in advance

 Mr. Wu

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-16 Thread Scott Mead
On Tue, Nov 16, 2010 at 8:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Scott Mead sc...@scottrmead.com writes:
  +1  -- Is there a technical reason to do a TRUNCATE on restart?  I'd feel
  better if I could just have unlogged tables that survive unless something
  like a power-outage etc...  I'm in the exact same boat here, lots of big
  logging tables that need to survive reboot, but are frustrating when it
  comes to WAL generation.

 Keep in mind that these tables are *not* going to survive any type of
 backend crash.


  Not surviving a crash is fine.  IMHO, if we'd lose data in myisam files,
I'm happy to lose them on pg nologging tables.  I just want it to survive a
stop / start operation.  The benefits (think of multi-host syslog
consolidation with FTS drools ) on these tables FAR outweigh the
off-chance that a crash will cause me some heartache.


 Maybe my perceptions are colored because I deal with
 Postgres bugs all the time, but I think of backend crashes as pretty
 common, certainly much more common than an OS-level crash.  I'm afraid
 you may be expecting unlogged tables to be significantly more robust
 than they really will be.



Bugs?  What bugs :)

  Honestly, I've only had a couple of *Prod* crashes (knocks on wood), but
the need to restart occurs every now and then.

--Scott






regards, tom lane



Re: [GENERAL] creating a table based on a table in stored in another database

2010-05-18 Thread Scott Mead
On Tue, May 18, 2010 at 4:45 AM, Malm Paul paul.m...@saabgroup.com wrote:

  I can add that I would like to do this in my java application.
 I.e. just create an empty table in database1 which has the same column
 names and types as a table stored in database2.

 I tried this for a start:
 stmnt = dbConnection.prepareStatement("select dblink_connect('myconn', 'dbname=gemaps')");
 resultset = stmnt.executeQuery();

 but got ERROR: function dblink_connect(unknown, unknown) does not exist


Did you:

 A) Install the dblink contrib module?
 B) Run the dblink SQL file in that database (see the example below)?
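
Step B usually means something like this (the exact path depends on how
PostgreSQL was packaged, so treat it as a sketch):

   psql -d gemaps -f /usr/share/postgresql/contrib/dblink.sql

Once the dblink functions exist in that database, the dblink_connect() call
from your Java code should stop failing with 'does not exist'.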


--Scott

/P
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:
 pgsql-general-ow...@postgresql.org] On Behalf Of A. Kretschmer
 Sent: den 18 maj 2010 10:16
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] creating a table based on a table in stored in
 another database

 In response to Malm Paul :
   Hi list,
  in a database I have different kind of tables. I would like to take the
 meta data from one of those tables and create the same type of table (but
 empty) in another database.
  Can anyone, please, tell me how to do this?

 Create a schema-only dump and restore it into the other database.
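
 For example (just a sketch; the database and table names are placeholders):

    pg_dump -s -t the_table source_db | psql target_db

 -s dumps only the schema (no rows), and -t restricts the dump to that one
 table.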


 Regards, Andreas
 --
 Andreas Kretschmer
 Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
 GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
 changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


