Re: initdb when data/ folder has mount points

2018-02-21 Thread Michael Paquier
On Wed, Feb 21, 2018 at 07:56:38PM -0500, David Steele wrote:
> On 2/21/18 7:01 PM, Tom Lane wrote:
>> For pg_log, just put it somewhere else and set the appropriate
>> configuration option to say where to write the postmaster log files.
>> Or you could use a symlink, like the solution for pg_xlog, but
>> I don't see any advantage there.
> 
> Symlinking pg_log is not ideal because the logs end up in the backup. It
> gets pretty weird when those logs get restored to a standby and somebody
> starts reading them.

log_directory in postgresql.conf can be set to an absolute path, so there
is no actual need for a symlink for pg_log.  This also reduces the amount
of data transferred as part of base backups, since the logs are not needed
there.
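For example, a minimal sketch (the target path is just a placeholder), done
either by editing postgresql.conf directly or from SQL:

-- Sketch: point the logging collector outside the data directory.
-- Only applies when logging_collector = on; the path is an assumption.
ALTER SYSTEM SET log_directory = '/var/log/postgresql';
-- log_directory is a reload-level setting, so no restart is needed:
SELECT pg_reload_conf();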
--
Michael




Re: oracle to postgresql conversion tool

2018-02-21 Thread Pawan Sharma
On Feb 22, 2018 11:16 AM, "Pavel Stehule"  wrote:

Hi

2018-02-22 5:59 GMT+01:00 Marcin Giedz :

> Hi, there are at least 5 tools I found on the PG list, but could you
> recommend a well-tested, free one?  We need to migrate a production 30GB
> Oracle 11 DB to Postgres 9 and are looking for the best approach.  Of course,
> if there is no free/open solution, any recommended commercial tool is also
> welcome.
>

Hard to say what the best approach is.  I used ora2pg on a 60GB database
with thousands of views and thousands of procedures without problems.  It is
not too fast, but it just works.

Regards

Pavel

>
> Thank you
> Marcin
>

Hi Pavel,

You can easily migrate tables, views, indexes, materialized views, etc. with
any migration tool, but you need to do some manual work on PL/SQL objects
like procedures, triggers, functions, etc.


Re: oracle to postgresql conversion tool

2018-02-21 Thread Pavel Stehule
Hi

2018-02-22 5:59 GMT+01:00 Marcin Giedz :

> Hi, there are at least 5 tools I found on the PG list, but could you
> recommend a well-tested, free one?  We need to migrate a production 30GB
> Oracle 11 DB to Postgres 9 and are looking for the best approach.  Of course,
> if there is no free/open solution, any recommended commercial tool is also
> welcome.
>

Hard to say what the best approach is.  I used ora2pg on a 60GB database
with thousands of views and thousands of procedures without problems.  It is
not too fast, but it just works.

Regards

Pavel

>
> Thank you
> Marcin
>


Re: oracle to postgresql conversion tool

2018-02-21 Thread Venkata B Nagothi
On Thu, Feb 22, 2018 at 3:59 PM, Marcin Giedz  wrote:

> Hi, there are at least 5 tools I found on the PG list, but could you
> recommend a well-tested, free one?  We need to migrate a production 30GB
> Oracle 11 DB to Postgres 9 and are looking for the best approach.  Of course,
> if there is no free/open solution, any recommended commercial tool is also
> welcome.
>

Try using ora2pg, which is a quite popular and well-tested tool.

Regards,

Venkata B N
Database Consultant


oracle to postgresql conversion tool

2018-02-21 Thread Marcin Giedz
Hi, there are at least 5 tools I found on the PG list, but could you recommend
a well-tested, free one?  We need to migrate a production 30GB Oracle 11 DB to
Postgres 9 and are looking for the best approach.  Of course, if there is no
free/open solution, any recommended commercial tool is also welcome.

Thank you 
Marcin 


Re: Understanding query planner cpu usage

2018-02-21 Thread Tom Lane
Lucas Fairchild-Madar  writes:
> On Wed, Feb 21, 2018 at 4:14 PM, Tom Lane  wrote:
>> If so, this might be some manifestation of a problem we've seen before:
>> the planner tries to find out the current live max value of the column
>> by scanning the index, and that's really slow if there are a lot of
>> recently-dead entries at the index end, because each of them has to be
>> inspected and then hinted dead.

> I took a look further in this direction and found that our cleanup process
> and this analytical rollup both happened to run very close to each other
> twice a day. I moved the cleanup process to run at a time when it should
> never collide with analytical rollups and we'll see if the load spike
> happens again.

I'm thinking that will fix it for you.

> Perhaps this is a stupid question, but would creating the index in
> descending order solve this problem (as a unique index, not a primary key)?

I think most likely it'd just move the issue to the other end of the
index, though you could try.  See below.

> What is the planner doing when trying to find the current live max value of
> the column?

It's trying to estimate whether a mergejoin will be able to stop short of
reading all the tuples from the other side of the join.  (For instance,
if you've got 1,3,5 on one side, and 1,4,5,7,8,9,19 on the other, the
second input doesn't have to be read past "7" because once we run off the
end of the first input, we know we couldn't see any matches later on the
second input.  So the planner wants to compare the ending key value on
each side to the key distribution on the other side, to see what this might
save.)  Now, that's a unidirectional question for any particular mergejoin
plan, so that for any one cost estimate it's only going to need to look at
one end of the key range.  But I think it will consider merge joins with
both sort directions, so that both ends of the key range will get
investigated in this way.  I might be wrong though; it's late and I've
not looked at that code in a while ...
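A toy sketch of that scenario with made-up tables (an illustration only, not
anything from this thread), for anyone who wants to watch the stop-short
behaviour in an actual plan:

-- Assumed toy tables: t1 ends at 5, so a merge join never needs to read
-- past 7 on t2's side.
CREATE TABLE t1 (k int);
CREATE TABLE t2 (k int);
INSERT INTO t1 VALUES (1), (3), (5);
INSERT INTO t2 VALUES (1), (4), (5), (7), (8), (9), (19);
ANALYZE t1;
ANALYZE t2;
-- Disable the other join methods so the tiny tables still get a merge join:
SET enable_hashjoin = off;
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 USING (k);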

regards, tom lane



Re: PostgreSQL backup strategies

2018-02-21 Thread Melvin Davidson
On Wed, Feb 21, 2018 at 10:00 PM, Luis Marin 
wrote:

> Dear Friends,
>
> Please, does somebody know a good source of information about PostgreSQL 9
> backup strategies for production systems?  If you have read a good book on
> this subject, could you share the author and name of the book?
>
> Thanks
>

>... a good source of information about PostgreSQL 9 backup strategies

That is a very broad topic.  It is also highly dependent on your particular
configuration.

What is your O/S?

How big is your database?

What is your high availability (replication) configuration?

What are the hours for production access?

Do you intend to use a 3rd party solution, e.g. Barman?

You might want to start here ->
https://www.compose.com/articles/postgresql-backups-and-everything-you-need-to-know/
-- 
Melvin Davidson

Maj. Database & Exploration Specialist

Universe Exploration Command – UXC

Employment by invitation only!


PostgreSQL backup strategies

2018-02-21 Thread Luis Marin
Dear Friends,

Please, does somebody know a good source of information about PostgreSQL 9
backup strategies for production systems?  If you have read a good book on
this subject, could you share the author and name of the book?

Thanks


Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-21 Thread Tomas Vondra

On 02/21/2018 05:00 PM, Bill Moran wrote:
> On Wed, 21 Feb 2018 13:33:18 +0100
> Alexander Farber  wrote:
> 
>> Hi Martin -
>>
>> On Wed, Feb 21, 2018 at 1:26 PM, Martin Moore 
>> wrote:
>>
>>> I’m no expert but I’d think it unlikely an index would be considered for a
>>> table with only 100 rows in. Also I’m pretty sure only one index per table
>>> is used, so you’d want to put state1 and state2 in one index.
>>
>> I hope to have more records in the words_games table later when my game is
>> launched (currently in soft launch/beta).
> 
> To elaborate on and clarify Martin's comments:
> 
> The index will not be used while the table is small because it's actually
> slower to process an index than it is to just read the entire table.
> However, as the table gets more rows, these timings will reverse and
> Postgres will start using the indexes. It's probably best to just create
> them even though the table is small. The performance improvement you'll get
> when the table grows will be well worth it, and it avoids the problem of
> trying to remember to create them later.
> 
> However, Martin's other comment about only using a single index is incorrect.
> Postgres can use multiple indexes per query, so it's often good practice to
> put indexes on every column that might ever be used in a WHERE clause.
> 

I call this practice "shotgun" and generally discourage people from
using it. It seems attractive, but not every WHERE condition can be
evaluated using an index, and every index has maintenance overhead.

There are cases when it's somewhat reasonable (e.g. when you don't know
which columns will be referenced by WHERE conditions, and data ingestion
has lower priority than queries). But that doesn't seem to be the case
here - you know the WHERE conditions, and people are likely sending a
lot of inserts (and expecting low latency responses).
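As an aside on the "multiple indexes per query" point above: PostgreSQL
combines several btree indexes for an OR predicate with a bitmap OR.  A
minimal sketch against the words_games table from this thread (the plan
shape in the comments is illustrative only, assuming the table has grown
large enough for the indexes to win):

-- Once words_games is big enough, a plan shaped roughly like this is possible:
--   Bitmap Heap Scan on words_games
--     Recheck Cond: ((state1 = 'won') OR (state2 = 'won'))
--     ->  BitmapOr
--           ->  Bitmap Index Scan on words_games_state1_index
--           ->  Bitmap Index Scan on words_games_state2_index
EXPLAIN
SELECT gid, state1, state2
  FROM words_games
 WHERE state1 = 'won' OR state2 = 'won';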

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: initdb when data/ folder has mount points

2018-02-21 Thread David Steele

On 2/21/18 7:01 PM, Tom Lane wrote:
> Ron Johnson  writes:
>> Apparently, initdb assumes that data/ is one big mount point. However, we
>> have four mount points:
>> /var/lib/pgsql/9.6/data/backup
>> /var/lib/pgsql/9.6/data/base
>> /var/lib/pgsql/9.6/data/pg_log
>> /var/lib/pgsql/9.6/data/pg_xlog
> 
> Don't do that.

Agreed.

> There's no reason for backup storage to be under the data directory (and
> lots of good reasons for it not to be).  Just put it somewhere else.

Yes -- in this configuration your backups would be backed up with every
backup.  It's pretty obvious where that would go.

> The supported way to put pg_xlog on a separate volume is to put that
> mount point somewhere else, and make $PGDATA/pg_xlog be a symlink to
> it.  IIRC, there's an initdb option to help with that, though you can
> also make it so manually after initdb.

initdb supports linking pg_xlog/pg_wal with the --xlogdir/--waldir option.

> For pg_log, just put it somewhere else and set the appropriate
> configuration option to say where to write the postmaster log files.
> Or you could use a symlink, like the solution for pg_xlog, but
> I don't see any advantage there.

Symlinking pg_log is not ideal because the logs end up in the backup.
It gets pretty weird when those logs get restored to a standby and
somebody starts reading them.

> I don't see any point in making base/ be its own mount point.  Once
> you get rid of those other subdirectories there's not going to be
> enough "global" storage left to justify its own volume.

Agreed.

--
-David
da...@pgmasters.net



Re: Understanding query planner cpu usage

2018-02-21 Thread Lucas Fairchild-Madar
On Wed, Feb 21, 2018 at 4:14 PM, Tom Lane  wrote:

> Lucas Fairchild-Madar  writes:
> > I'm having a perplexing issue in PG 10.1 wherein deleting a large amount
> > of rows from a table causes query planning time to spike dramatically
> for a
> > while. This happens with or without autovacuums so vacuuming isn't the
> > issue.
>
> Would the deleted rows happen to be the extremal values of some indexed
> column that is a join key in the slowly-planned queries?
>

Yes. Basically, this is sort of a moving set of several weeks of data. We
have a job that goes through (currently, every 3 hours) and periodically
wipes out data from the very beginning. At the same time, more data is
continually being appended to the table. All of these rows are immutable.
The vast majority of the work happening in the database happens in the last
two or three million rows of these tables; the rest is just there for
auditing and analytics.


> If so, this might be some manifestation of a problem we've seen before:
> the planner tries to find out the current live max value of the column
> by scanning the index, and that's really slow if there are a lot of
> recently-dead entries at the index end, because each of them has to be
> inspected and then hinted dead.  You'd pay that overhead at some point
> anyway, of course.  The cases where it becomes a problem are where the
> planner inspects these values but *can't* hint them dead, such as when
> the deletion hasn't committed yet, or they're newly inserted rows that
> likewise aren't committed.  Then each incoming query does the work
> over again until the transient state is resolved.
>
> We've done various things to ameliorate this, but maybe you've found
> some new way to cause it to be a pain point.  Is there anything special
> about the way you're deleting the rows?  Maybe there's a long-running
> transaction in the background that can still see the deleted rows?
>
>
This sounds like the trigger here. We have a long-running (~20 minute)
transaction that's doing analytical rollups on these sets of tables. It's
doing them only on very recent data, but I assume having an open
transaction means the tuples can't be marked as dead yet because the
queries within that transaction could still potentially access them.
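As a quick way to spot such long-running transactions while a spike is
happening, a sketch (the 20-minute cutoff is an arbitrary example):

-- Transactions that have been open longer than 20 minutes:
SELECT pid, xact_start, state, query
  FROM pg_stat_activity
 WHERE xact_start < now() - interval '20 minutes'
 ORDER BY xact_start;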

I took a look further in this direction and found that our cleanup process
and this analytical rollup both happened to run very close to each other
twice a day. I moved the cleanup process to run at a time when it should
never collide with analytical rollups and we'll see if the load spike
happens again. When this happens, the long-running query takes almost an
hour so the problem compounds.

It would be helpful to have a way to introspect into what the query
planner is doing. For instance, if I could explain (analyze, verbose,
queryplan) the query and the queryplan told me that it spent 5000ms
skipping over dead tuples in the index, knowing that would arm me with more
information to tune the system better.

Perhaps this is a stupid question, but would creating the index in
descending order solve this problem (as a unique index, not a primary key)?
What is the planner doing when trying to find the current live max value of
the column?

Thanks,
Lucas


Re: initdb when data/ folder has mount points

2018-02-21 Thread Tom Lane
Ron Johnson  writes:
> On 02/21/2018 06:01 PM, Tom Lane wrote:
>> Ron Johnson  writes:
>>> Apparently, initdb assumes that data/ is one big mount point. However, we
>>> have four mount points:
>>> /var/lib/pgsql/9.6/data/backup
>>> /var/lib/pgsql/9.6/data/base
>>> /var/lib/pgsql/9.6/data/pg_log
>>> /var/lib/pgsql/9.6/data/pg_xlog

>> Don't do that.

> I'm replicating the structure in our existing systems.  Is there an (up to 
> date) Best Practices document for these kinds of issues?

Well, initdb has never allowed a non-empty target (at least not in this
century).  So your existing setup must have been achieved by manual
hacking post-initdb, which you could certainly do again if you're dead
set on that layout.  I remain of the opinion that it's not a good
design, though.

regards, tom lane



Re: initdb when data/ folder has mount points

2018-02-21 Thread Ron Johnson

On 02/21/2018 06:01 PM, Tom Lane wrote:

Ron Johnson  writes:

Apparently, initdb assumes that data/ is one big mount point. However, we
have four mount points:
/var/lib/pgsql/9.6/data/backup
/var/lib/pgsql/9.6/data/base
/var/lib/pgsql/9.6/data/pg_log
/var/lib/pgsql/9.6/data/pg_xlog

Don't do that.

There's no reason for backup storage to be under the data directory (and
lots of good reasons for it not to be).  Just put it somewhere else.

The supported way to put pg_xlog on a separate volume is to put that
mount point somewhere else, and make $PGDATA/pg_xlog be a symlink to
it.  IIRC, there's an initdb option to help with that, though you can
also make it so manually after initdb.

For pg_log, just put it somewhere else and set the appropriate
configuration option to say where to write the postmaster log files.
Or you could use a symlink, like the solution for pg_xlog, but
I don't see any advantage there.

I don't see any point in making base/ be its own mount point.  Once
you get rid of those other subdirectories there's not going to be
enough "global" storage left to justify its own volume.


I'm replicating the structure in our existing systems.  Is there an (up to 
date) Best Practices document for these kinds of issues?



--
Money Angular momentum makes the world go 'round.


Re: Understanding query planner cpu usage

2018-02-21 Thread Tom Lane
Lucas Fairchild-Madar  writes:
> I'm having a perplexing issue in PG 10.1 wherein deleting a large amount
> of rows from a table causes query planning time to spike dramatically for a
> while. This happens with or without autovacuums so vacuuming isn't the
> issue.

Would the deleted rows happen to be the extremal values of some indexed
column that is a join key in the slowly-planned queries?

If so, this might be some manifestation of a problem we've seen before:
the planner tries to find out the current live max value of the column
by scanning the index, and that's really slow if there are a lot of
recently-dead entries at the index end, because each of them has to be
inspected and then hinted dead.  You'd pay that overhead at some point
anyway, of course.  The cases where it becomes a problem are where the
planner inspects these values but *can't* hint them dead, such as when
the deletion hasn't committed yet, or they're newly inserted rows that
likewise aren't committed.  Then each incoming query does the work
over again until the transient state is resolved.

We've done various things to ameliorate this, but maybe you've found
some new way to cause it to be a pain point.  Is there anything special
about the way you're deleting the rows?  Maybe there's a long-running
transaction in the background that can still see the deleted rows?

Or I might be barking up the wrong tree entirely.  But this sure
sounds reminiscent of that class of problems.

regards, tom lane



Re: initdb when data/ folder has mount points

2018-02-21 Thread Tom Lane
Ron Johnson  writes:
> Apparently, initdb assumes that data/ is one big mount point. However, we 
> have four mount points:
> /var/lib/pgsql/9.6/data/backup
> /var/lib/pgsql/9.6/data/base
> /var/lib/pgsql/9.6/data/pg_log
> /var/lib/pgsql/9.6/data/pg_xlog

Don't do that.

There's no reason for backup storage to be under the data directory (and
lots of good reasons for it not to be).  Just put it somewhere else.

The supported way to put pg_xlog on a separate volume is to put that
mount point somewhere else, and make $PGDATA/pg_xlog be a symlink to
it.  IIRC, there's an initdb option to help with that, though you can
also make it so manually after initdb.

For pg_log, just put it somewhere else and set the appropriate
configuration option to say where to write the postmaster log files.
Or you could use a symlink, like the solution for pg_xlog, but
I don't see any advantage there.

I don't see any point in making base/ be its own mount point.  Once
you get rid of those other subdirectories there's not going to be
enough "global" storage left to justify its own volume.

regards, tom lane



Re: initdb when data/ folder has mount points

2018-02-21 Thread Rich Shepard

On Wed, 21 Feb 2018, Ron Johnson wrote:


Apparently, initdb assumes that data/ is one big mount point. However, we
have four mount points:
/var/lib/pgsql/9.6/data/backup
/var/lib/pgsql/9.6/data/base
/var/lib/pgsql/9.6/data/pg_log
/var/lib/pgsql/9.6/data/pg_xlog


Ron,

  What command do you use? Here, on Slackware-14.2, /var/lib/pgsql/10.2/data/
contains:

PG_VERSIONpg_ident.conf  pg_snapshots  pg_wal
base  pg_logical pg_stat   pg_xact
globalpg_multixact   pg_stat_tmp   postgresql.auto.conf
pg_commit_ts  pg_notify  pg_subtrans   postgresql.conf
pg_dynshmem   pg_replslotpg_tblspc postmaster.opts
pg_hba.conf   pg_serial  pg_twophase   postmaster.pid

  The 9.6 version was the same.

  The command I use (as user postgres) is: 
initdb -D /var/lib/pgsql/10.2/data &


HTH,

Rich





Understanding query planner cpu usage

2018-02-21 Thread Lucas Fairchild-Madar
I'm having a perplexing issue in PG 10.1 wherein deleting a large amount
of rows from a table causes query planning time to spike dramatically for a
while. This happens with or without autovacuums so vacuuming isn't the
issue.

CPU usage during this time spikes as well. I can't determine if the query
planning issue is caused by CPU starvation or if the CPU starvation is
caused by the query planning. Either way, query planning spikes from 15ms
to nearly 7 seconds, and the load average on the machine spikes from around
1.0 to 80 or 90. Despite query planning time increasing by several orders
of magnitude, query execution time remains very similar.

The schema and query are similar to what's below, with each table
containing about 60 million rows. The load spike happens after deleting
approximately 1% of the table data from each of the tables in a
transaction. It happens immediately after the transaction commits, not
during the transaction itself. The transaction takes about a minute to
execute.

It's possible there are other longer lived transactions that may be working
on data in these tables, and I think there's probably a clue to the cause
here.

Does anyone have any clues or pointers for what I can look for? Is there a
way to interrogate the query planner and see what it's doing? (explain
analyze only gives the final chosen plan and how long query planning took
to execute)

CREATE TABLE data (
data_id bigint NOT NULL PRIMARY KEY
);

CREATE TABLE segment (
data_id bigint NOT NULL,
segment_id bigint NOT NULL PRIMARY KEY
);

CREATE TABLE raw (
segment_id bigint NOT NULL,
raw_id bigint NOT NULL PRIMARY KEY
);

CREATE TABLE processed (
segment_id bigint NOT NULL,
raw_id bigint NOT NULL,
processed_id bigint NOT NULL PRIMARY KEY,
magical_id integer
);

All primary keys are auto-incrementing sequences.

data -> segment is 1:many (but typically 1:1)
segment -> raw is 1:many (but almost always 1:1)
raw -> processed is 1:1

select * from processed
  left join raw using (raw_id, segment_id)
  left join segment using (segment_id)
  left join data using (data_id)
  where processed.magical_id = [magical_id]
  order by processed_id desc;


initdb when data/ folder has mount points

2018-02-21 Thread Ron Johnson

Hi,

v9.6.6

Apparently, initdb assumes that data/ is one big mount point. However, we 
have four mount points:

/var/lib/pgsql/9.6/data/backup
/var/lib/pgsql/9.6/data/base
/var/lib/pgsql/9.6/data/pg_log
/var/lib/pgsql/9.6/data/pg_xlog

They are all empty.  How do I convince it to ignore the fact that the 
directories exist?


Thanks

--
Money Angular momentum makes the world go 'round.


Re: Installing 9.6.6 to a RHEL 6.7 server with no Internet access

2018-02-21 Thread Ron Johnson

It does.  Thank you.

On 02/21/2018 10:03 AM, Alvaro Aguayo Garcia-Rada wrote:

Not sure if this may help you. A few months ago, I made an automated OpenSUSE 
installer which also had to install PostgreSQL 9.6 server using the RHEL PGDG 
releases. There's no YUM there, so I did it instead with RPM files directly.

The RPM files I included on my autoinstall ISO are the following:

postgresql96-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-contrib-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-devel-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-libs-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-server-9.6.2-2PGDG.rhel7.x86_64.rpm

Of course, you may want or need to use more recent versions. Also, you may omit 
the contrib & devel packages, as they may not always be needed (in my case, they 
were).

Regarding dependencies, with a quick look, I can say these will require a few 
common packages:

- zlib
- pthreads
- readline
- openssl
- libxml2
- libxslt
- libuuid
- krb5
- libldap / openldap

That seems to be all required packages. Hope this helps.

Regards,

Alvaro Aguayo
Operations Manager
Open Comb Systems E.I.R.L.

Office: (+51-1) 3377813 | Mobile: (+51) 995540103 | (+51) 954183248
Web: www.ocs.pe

- Original Message -
From: "Ron Johnson" 
To: "PostgreSql-general" 
Sent: Wednesday, 21 February, 2018 10:49:00
Subject: Installing 9.6.6 to a RHEL 6.7 server with no Internet access

Hi.

According to https://www.postgresql.org/download/linux/redhat/ I must first
install the repository.  However, since that system doesn't have Internet
access, I need to manually copy the files from my Windows laptop to the RHEL
6.7 server and then localinstall them.

So, the question: what packages do I need to download from
https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6.7-x86_64/ in
order to "yum localinstall postgresql96" and "yum localinstall
postgresql96-server"?

Thanks




--
Money Angular momentum makes the world go 'round.


Building PostgreSQL old version from source to test vulnerability CVE-2017-7546

2018-02-21 Thread Julián Jiménez González
Hello.

I'm trying to build a vulnerable PostgreSQL system in a Docker container in
order to be able to exploit this vulnerability (CVE-2017-7546). I'm testing
with a C# app which simply connects using a connstring through the npgsql
driver (this is one of the vulnerable drivers, since it's not based on
libpq):

string connstring = String.Format(
    "Server={0};Port={1};User Id={2};Password={3};Database={4};PersistSecurityInfo=true",
    //"10.5.0.73", "10005", "postgres", // Docker inside Linux Mint's VM, port mapped like "docker run -p 10005:5432 ..."
    //"10.5.0.73", "5432", "postgres",  // Linux Mint's VM
    "10.5.0.163", "5432", "postgres",   // Ubuntu server's VM
    "", "postgres");

I've been able to exploit it in my Linux Mint VM building from this commit,
which is the parent of the commit which introduces the fix for the
vulnerability, but I couldn't do it in any of these setups:

   1. Building from the same commit's zip file in another VM (Ubuntu
   server).
   2. Using DockerHub's versions which are expected to be vulnerable (like
   9.2.20, 9.6.3 and 9.6.2).
   3. With a custom Docker container based on debian:jessie (also tried
   with ubuntu:latest).


I've checked that the ports I use are not conflicting with ports from another
process in any of those systems:

sudo lsof -i -P -n | grep LISTEN

and I've checked they all have these configurations:

/usr/local/pgsql/data/pg_hba.conf: host all all all password (this is the last
line, and I also tried with md5).

/usr/local/pgsql/data/postgresql.conf: listen_addresses = '*'

I'm actually completely sure I'm connecting to the postgres instance I
expect, since the responses make complete sense when I adjust the config or
shut down the instances, except for those with empty passwords which should
log in successfully, but throw this exception instead:

No password has been provided but the backend requires one (in plaintext)


(The message changes "plaintext" to "md5" when I change pg_hba.conf's config.)
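For anyone reproducing this, the server-side piece the test depends on is a
role whose password is the empty string; a minimal sketch (an assumption
about the setup, since the post does not show it):

-- Assumed test setup: a role with an empty password.  The CVE-2017-7546 fix
-- makes the server treat empty passwords as invalid for password/md5
-- authentication, so an empty-password login only succeeds on unpatched builds.
CREATE ROLE cve_test LOGIN;
ALTER ROLE cve_test PASSWORD '';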

It looks like the version I'm using is patched, but it makes no sense why
the hell I could exploit it in my Linux Mint VM and I can't in my Ubuntu
Server VM, since I'm using the exact same commit. Regarding the official
Docker images, they are 7 and 9 months old, which is before 7th August (the
day the patch came in), so I can't understand this either.

I need and would greatly appreciate any help tracking this problem down.

Thanks in advance.


Julián Jiménez González

Investigador - Desarrollador | Área de Servicios y Aplicaciones

Researcher - Developer | Services & Applications Department

Ph. (+34) 986 120 430  Ext. 3021
jjime...@gradiant.org  |  www.gradiant.org



Take care of the environment. Try not to print this email.
The information contained in this email message may be confidential
information, and may also be the subject of legal professional privilege.
If you are not the intended recipient, any use, interference with,
disclosure or copying of this material is unauthorized and prohibited.
Please inform us immediately and destroy the email. Thank you for your
cooperation.


Re: Installing 9.6.6 to a RHEL 6.7 server with no Internet access

2018-02-21 Thread Alvaro Aguayo Garcia-Rada
Not sure if this may help you. A few months ago, I made an automated OpenSUSE 
installer which also had to install PostgreSQL 9.6 server using the RHEL PGDG 
releases. There's no YUM there, so I did it instead with RPM files directly.

The RPM files I included on my autoinstall ISO are the following:

postgresql96-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-contrib-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-devel-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-libs-9.6.2-2PGDG.rhel7.x86_64.rpm
postgresql96-server-9.6.2-2PGDG.rhel7.x86_64.rpm

Of course, you may want or need to use more recent versions. Also, you may 
omit the contrib & devel packages, as they may not always be needed (in my 
case, they were).

Regarding dependencies, with a quick look, I can say these will require a few 
common packages:

- zlib
- pthreads
- readline
- openssl
- libxml2
- libxslt
- libuuid
- krb5
- libldap / openldap

That seems to be all required packages. Hope this helps.

Regards,

Alvaro Aguayo
Operations Manager
Open Comb Systems E.I.R.L.

Office: (+51-1) 3377813 | Mobile: (+51) 995540103 | (+51) 954183248
Web: www.ocs.pe

- Original Message -
From: "Ron Johnson" 
To: "PostgreSql-general" 
Sent: Wednesday, 21 February, 2018 10:49:00
Subject: Installing 9.6.6 to a RHEL 6.7 server with no Internet access

Hi.

According to https://www.postgresql.org/download/linux/redhat/ I must first 
install the repository.  However, since that system doesn't have Internet 
access, I need to manually copy the files from my Windows laptop to the RHEL 
6.7 server and then localinstall them.

So, the question: what packages do I need to download from 
https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6.7-x86_64/ in 
order to "yum localinstall postgresql96" and "yum localinstall 
postgresql96-server"?

Thanks

-- 
Money Angular momentum makes the world go 'round.



Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-21 Thread Martin Moore
My point was that your EXPLAIN output would be the same with indexes as 
without, as they won't be used.

 

Martin.

 

From: Alexander Farber 
Date: Wednesday, 21 February 2018 at 12:33
Cc: pgsql-general 
Subject: Re: Not sure if I should CREATE INDEX for text columns on which I plan 
to filter later

 

Hi Martin -

 

On Wed, Feb 21, 2018 at 1:26 PM, Martin Moore  wrote:

I’m no expert but I’d think it unlikely an index would be considered for a 
table with only 100 rows in. Also I’m pretty sure only one index per table is 
used, so you’d want to put state1 and state2 in one index.

 

I hope to have more records in the words_games table later when my game is 
launched (currently in soft launch/beta). 

Regards

Alex 



Installing 9.6.6 to a RHEL 6.7 server with no Internet access

2018-02-21 Thread Ron Johnson

Hi.

According to https://www.postgresql.org/download/linux/redhat/ I must first 
install the repository.  However, since that system doesn't have Internet 
access, I need to manually copy the files from my Windows laptop to the RHEL 
6.7 server and then localinstall them.


So, the question: what packages do I need to download from 
https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6.7-x86_64/ in 
order to "yum localinstall postgresql96" and "yum localinstall 
postgresql96-server"?


Thanks

--
Money Angular momentum makes the world go 'round.


Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-21 Thread Alexander Farber
Here is the current DESC of the table (I already use few joins):

words=> \d words_games
  Table "public.words_games"
  Column  |   Type   | Collation | Nullable
| Default
--+--+---+--+--
 gid  | integer  |   | not null |
nextval('words_games_gid_seq'::regclass)
 created  | timestamp with time zone |   | not null |
 finished | timestamp with time zone |   |  |
 player1  | integer  |   | not null |
 player2  | integer  |   |  |
 played1  | timestamp with time zone |   |  |
 played2  | timestamp with time zone |   |  |
 state1   | text |   |  |
 state2   | text |   |  |
 score1   | integer  |   | not null |
 score2   | integer  |   | not null |
 hand1| character(1)[]   |   | not null |
 hand2| character(1)[]   |   | not null |
 pile | character(1)[]   |   | not null |
 letters  | character(1)[]   |   | not null |
 values   | integer[]|   | not null |
 bid  | integer  |   | not null |
 reason   | text |   |  |
Indexes:
"words_games_pkey" PRIMARY KEY, btree (gid)
"words_games_reason_index" btree (reason)
"words_games_state1_index" btree (state1)
"words_games_state2_index" btree (state2)
Check constraints:
"words_games_check" CHECK (player1 <> player2)
"words_games_score1_check" CHECK (score1 >= 0)
"words_games_score2_check" CHECK (score2 >= 0)
Foreign-key constraints:
"words_games_bid_fkey" FOREIGN KEY (bid) REFERENCES words_boards(bid)
ON DELETE CASCADE
"words_games_player1_fkey" FOREIGN KEY (player1) REFERENCES
words_users(uid) ON DELETE CASCADE
"words_games_player2_fkey" FOREIGN KEY (player2) REFERENCES
words_users(uid) ON DELETE CASCADE
Referenced by:
TABLE "words_chat" CONSTRAINT "words_chat_gid_fkey" FOREIGN KEY (gid)
REFERENCES words_games(gid) ON DELETE CASCADE
TABLE "words_moves" CONSTRAINT "words_moves_gid_fkey" FOREIGN KEY (gid)
REFERENCES words_games(gid) ON DELETE CASCADE
TABLE "words_scores" CONSTRAINT "words_scores_gid_fkey" FOREIGN KEY
(gid) REFERENCES words_games(gid) ON DELETE CASCADE


Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-21 Thread Alexander Farber
Hi Martin -

On Wed, Feb 21, 2018 at 1:26 PM, Martin Moore 
wrote:

> I’m no expert but I’d think it unlikely an index would be considered for a
> table with only 100 rows in. Also I’m pretty sure only one index per table
> is used, so you’d want to put state1 and state2 in one index.
>

I hope to have more records in the words_games table later when my game is
launched (currently in soft launch/beta).

Regards
Alex


Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-21 Thread Alexander Farber
Hello,

in a 2 player game I store all games in the following PostgreSQL 10.2 table:

CREATE TABLE words_games (
gid SERIAL PRIMARY KEY,

created timestamptz NOT NULL,
finished timestamptz,

player1 integer REFERENCES words_users(uid) ON DELETE CASCADE NOT
NULL CHECK (player1 <> player2),
player2 integer REFERENCES words_users(uid) ON DELETE CASCADE,

played1 timestamptz,
played2 timestamptz,

state1  text, -- tie, winning, losing, draw, won, lost
state2  text, -- tie, winning, losing, draw, won, lost
reason  text, -- regular, resigned, expired, banned

score1  integer NOT NULL CHECK (score1 >= 0),
score2  integer NOT NULL CHECK (score2 >= 0),

hand1   char[7]   NOT NULL,
hand2   char[7]   NOT NULL,
pile    char[116] NOT NULL,

letters char[15][15] NOT NULL,
values  integer[15][15] NOT NULL,
bid integer NOT NULL REFERENCES words_boards ON DELETE CASCADE
);

In the 3 text columns state1, state2 and reason I store all possible
game/player states.

For example: player1 has failed to make her move in time, so that would
result in:
state1 = 'lost',
state2 = 'won',
reason = 'expired',
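In SQL terms that example corresponds to an update along these lines (a
sketch; the gid value is made up and setting finished here is an assumption):

UPDATE words_games
   SET finished = now(),
       state1   = 'lost',
       state2   = 'won',
       reason   = 'expired'
 WHERE gid = 42;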

On advice I got from this mailing list I am explicitly not using
enums (in case I need to add unforeseen states).

The purpose of these 3 text columns is for me to display player stats
later, by querying the columns.

As you can imagine, mostly I perform SELECT on the words_games table - to
send updates to the game clients (PC and mobile).

And, more rarely, I update these 3 text columns - when a move is
performed or a game gets finished or expires.

My question please:

Should I add the 3 indices as in:

CREATE INDEX words_games_state1_index on words_games(state1);
CREATE INDEX words_games_state2_index on words_games(state2);
CREATE INDEX words_games_reason_index on words_games(reason);

I am asking because, as an inexperienced database user, I fail to see any
difference when I run EXPLAIN:

words=> select gid, state1, state2 from words_games where state1='won' or
state2='won';
 gid | state1 | state2
-++
 146 | lost   | won
 144 | lost   | won
 145 | lost   | won
 150 | won| lost
..
 256 | won| lost
 255 | won| lost
  35 | lost   | won
(100 rows)

words=> explain select gid, state1, state2 from words_games where
state1='won' or state2='won';
  QUERY PLAN
---
 Seq Scan on words_games  (cost=0.00..109.85 rows=96 width=12)
   Filter: ((state1 = 'won'::text) OR (state2 = 'won'::text))
(2 rows)

words=> CREATE INDEX words_games_state1_index on words_games(state1);
CREATE INDEX

words=> CREATE INDEX words_games_state2_index on words_games(state2);
CREATE INDEX

words=> explain select gid, state1, state2 from words_games where
state1='won' or state2='won';
  QUERY PLAN
---
 Seq Scan on words_games  (cost=0.00..109.85 rows=96 width=12)
   Filter: ((state1 = 'won'::text) OR (state2 = 'won'::text))
(2 rows)

Thank you for any insights
Alex