Re: master-> 2 hot standbys

2019-03-11 Thread Julie Nishimura
Hello,
I have another question about the scenario we are considering.
We had master -> standby (both on the same version, PostgreSQL 9.6.2 on
x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.3.1-14ubuntu2) 5.3.1 20160413,
64-bit). We almost ran out of space and could not extend our current storage,
so the decision was made to build a new, bigger standby and add it as a second
hot standby (let's call it "db_3"). We put our master into archive mode,
started log shipment, created the second standby with pg_basebackup, rsynced
WALs, and were able to start it. Our setup is now master -> 2 standbys.
However, the second standby was built on PostgreSQL 9.6.11 on
x86_64-pc-linux-gnu (Ubuntu 9.6.11-1.pgdg16.04+1), compiled by gcc
(Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit. Replication is
currently up and running from the master (db_1) to both standbys (db_2 and
db_3). We would now like to break the replication, remove db_3 from this
picture, and let it function as a stand-alone server, leaving the remaining
databases on db_1 and replicating to standby db_2 (we have multiple databases
on that server; only one large beast of 19 TB is our concern, and that is the
one we are trying to run stand-alone on db_3).

However, we are concerned about write performance on db_3. Currently we are
unable to test it, since db_3 is in read-only mode (as a standby). If we do
what we would like to and let it run for 2-3 days, and we are not satisfied
with the performance, how can we fail back to db_1? Is that even possible?
There are about 30 different databases on that cluster, but as I mentioned
before, only the 19 TB one will be running on db_3; the rest will still be
running independently on db_1.

I don't think it is possible - it looks like it would require a merge in case
of a rollback... How can we test the write performance on db_3, do you think?
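One rough way to get a write baseline - a hedged sketch, and only meaningful
once db_3 has been promoted and accepts writes (a hot standby rejects them) -
is pgbench against a throwaway database; the database name and numbers below
are purely illustrative:

createdb bench_db
pgbench -i -s 100 bench_db           # initialize pgbench tables at scale factor 100 (roughly 1.5 GB)
pgbench -c 16 -j 4 -T 300 bench_db   # 16 clients, 4 worker threads, 5 minutes of TPC-B-like writes
dropdb bench_db

That exercises db_3's own disks and WAL, though of course not your real 19 TB
workload.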

Thanks for ANY ideas and suggestions


From: Ben Chobot 
Sent: Monday, March 4, 2019 5:08 PM
To: Julie Nishimura
Cc: pgsql-general@lists.postgresql.org
Subject: Re: master-> 2 hot standbys


On Mar 4, 2019, at 1:59 PM, Julie Nishimura wrote:

Hello,
Our current 9.2 master has two active standbys. Can you please help me out with
the right sequence of events if we would like to promote one of the current
standbys to master and convert the old master to a standby?

It depends on how you've set things up, of course, but generally you can run
"pg_ctl promote" on one of your slaves. This will make that slave start a new
timeline and begin to act as a master. To restore your old master to a standby,
you will need to rebuild it with something like pg_basebackup, as I do not
believe pg_rewind is supported with 9.2.

https://www.postgresql.org/docs/9.5/warm-standby-failover.html
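For a 9.2-era pair, a rough sketch of that sequence (paths, hostnames and the
replication user are placeholders, not taken from this thread) might be:

# On the standby that should become the new master:
pg_ctl -D /var/lib/postgresql/9.2/main promote

# On the old master: stop it, set the old data directory aside, then rebuild
# it from the new master with pg_basebackup (9.2 has no pg_rewind).
pg_ctl -D /var/lib/postgresql/9.2/main stop -m fast
mv /var/lib/postgresql/9.2/main /var/lib/postgresql/9.2/main.old
pg_basebackup -h new-master -U replication -D /var/lib/postgresql/9.2/main -P --xlog
# then create recovery.conf pointing at the new master before starting it as a standby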

Terminating client connections on the old master to avoid splitbrain and then 
making those clients reconnect to your new master is left as an exercise to the 
reader - how you do it really depends on your environment - but be sure you do 
it right, because splitbrain is a real danger. In a similar vein, you don't 
want two masters trying to save the same wals to the same wal archive.

Also, if your two slaves are using asynchronous replication, be aware that the 
slave you choose to become the new master has a chance to be behind the other 
slave in wal replay when you promote it, unless you take precautions. This 
would mean your other slave won't be able to sync up with the newly promoted 
master.
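A quick way to check that before promoting, assuming 9.2 function and column
names:

# On each standby, see how far WAL replay has progressed:
psql -h standby1 -c "SELECT pg_last_xlog_replay_location();"
psql -h standby2 -c "SELECT pg_last_xlog_replay_location();"

# Or compare both standbys from the master:
psql -h master -c "SELECT application_name, replay_location FROM pg_stat_replication;"

Promote the one with the highest replay location (or pause writes briefly so
both catch up first).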


Re: Tools to migrate data from Json files to PostgreSQL DB.

2019-03-11 Thread github kran
Thank you all for your tips. I will see whether I can make use of them
from my Java application.

On Fri, Mar 8, 2019 at 1:33 AM Tony Shelver  wrote:

> You may want to assess how you want to store and access the data in
> Postgres before deciding on an import strategy.
>
> I have a system with a mix of relational and JSON data.  The data was
> originally sourced in flat file format.
> I wrote a few Python programs to take the data, then format to JSON, which
> I then submitted to pg functions.
> To make life easier, I submitted it as 1 JSON field to be decomposed by
> Postgres into relational data, and another to be loaded straight into pg
> JSON columns.
>
> The functions then either strip out the data from JSON using the PG JSON
> functions and store it relationally, or plug it straight into a JSON data
> element.
>
> Not terribly difficult to do, especially if you are not navigating complex
> JSON structures in pl/pgsql to strip out the data.
>
> Plenty of python JSON examples out there. Less so for PG :)
>
> On Thu, 7 Mar 2019 at 23:21, github kran  wrote:
>
>> Hello PostgreSQL Team,
>>
>> Are there any tools to migrate data present in JSON files to the
>> PostgreSQL database?
>> We have data in flat files, about 2 billion records across multiple
>> files.
>>
>> 1) What is the easiest way I can transfer this data to a relational
>> database?
>> 2) Are there any tools I can use, or better ways to do it?
>>
>> Appreciate your help.
>>
>>
>> Thanks
>> Kran,
>>
>
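As a concrete sketch of the staging-then-decompose approach Tony describes
above - the table and column names here are invented, and jsonb needs
PostgreSQL 9.4 or later:

psql <<'SQL'
CREATE TABLE staging_json (doc jsonb);
-- assumes one JSON document per line, with no raw tabs or backslashes in the data
\copy staging_json (doc) FROM 'orders.ndjson'
CREATE TABLE orders (order_id bigint, customer text, total numeric);
INSERT INTO orders (order_id, customer, total)
SELECT (doc->>'order_id')::bigint,
       doc->>'customer',
       (doc->>'total')::numeric
FROM   staging_json;
SQL

For billions of records you would load in batches and add indexes on the
target tables afterwards.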


Re: POSTGRES/MYSQL

2019-03-11 Thread Michael Nolan
On Mon, Mar 11, 2019 at 2:20 PM Gavin Flower 
wrote:

> On 12/03/2019 05:35, Michael Nolan wrote:
> [...]
> >  MySQL is better at isolating users from each other and requires less
> > expertise to administer.
>
> [...]
>
> I keep reading that MySQL is easier to administer, but never seen any
> evidence of that.  And in my very limited experience of both, I've found
> PostgreSQL easier to set up & administer.
>

I did not say MySQL is easier to administer; I said it requires less
expertise to administer.

Perhaps I should have said it requires less expertise to administer it well
enough for it to function adequately.  Consider the number of packages that
run on MySQL (like Joomla, WordPress and phpBB3) and the level of expertise
of those running those systems.

I've never worked with DB2, but I did spend 10 years as a DBA for Oracle
systems, and I'd place PostgreSQL somewhere in between MySQL and Oracle for
the amount of expertise it takes to administer it properly.  And the more
complex the application, the more that expertise is needed.

MySQL's data issues are well-documented and while they can generally be
dealt with at the application level, I prefer to handle both data issues
and implementation of business rules at the database level, because that
forestalls a lot of problems.
--
Mike Nolan


Re: POSTGRES/MYSQL

2019-03-11 Thread Gavin Flower

On 12/03/2019 09:40, Thiemo Kellner wrote:


Quoting Adrian Klaver :


On 3/11/19 9:31 AM, Sonam Sharma wrote:

Hi Adrian,
Ours is retail company and the DB size is Max 30gb, currently we are 
using db2.


Things to consider:

1) Migration tools for DB2 --> MySQL/Postgresql. I have not done 
this, so someone else will have to comment.


2) The clients/frameworks/ORMs you use now to connect to the 
database. Do they also work with Postgresql/MySQL?


It is also worth considering whether the architecture/model of DB2 fits 
PostgreSQL/MySQL. And while at it, how about getting rid of all the 
itching quirks of the current solution anyway? I see the danger of 
getting disappointed by either PostgreSQL or MySQL if the current 
solution uses DB2 features that cannot be easily mapped to the 
contenders' features.


The bottom line of my suggestion is that the migration tool could be less 
of a concern if you get the opportunity to overhaul your application.


Kind two dimes

Thiemo


Yes, I'm aware that different RDBMSs have their own quirks that 
people either exploit as benefits, or have to work around in a new DB.  
So naively converting one DB to another may bring huge performance hits, 
and possibly unexpected results (MySQL, I'm looking at you!) -- even if 
both DBs were technically equally good!


I think a conversion tool is a good starting point.  However, I strongly 
agree with Thiemo that you should carefully review your existing 
database's design/implementation -- so as to take maximum advantage of 
the facilities of Postgres, and avoid any pitfalls created by naively 
importing isms that are specific to your old db -- they might well be 
counterproductive in PostgreSQL.



Cheers,
Gavin





Re: POSTGRES/MYSQL

2019-03-11 Thread Tim Clarke
+1 for Postgres here. We moved to it from Oracle 15 years ago and have
never looked back for a second.

mySQL databases over the same period have needed occasional repairs that
Postgres never has. We couldn't possibly choose mySQL over Postgres now;
the feature list of mySQL is so inferior we'd need to come up with some
major engineering patches.


Tim Clarke
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420

On 11/03/2019 19:20, Gavin Flower wrote:
> On 12/03/2019 05:35, Michael Nolan wrote:
> [...]
>>  MySQL is better at isolating users from each other and requires less
>> expertise to administer.
>
> [...]
>
> I keep reading that MySQL is easier to administer, but never seen any
> evidence of that.  And in my very limited experience of both, I've
> found PostgreSQL easier to set up & administer.
>
> From what I've read about problems with MySQL, I think that if you
> value your data, just don't use MySQL.   At least 4 times, since 2001,
> I've searched for PostgreSQL vs MySQL comparisons, and each time found
> PostgreSQL to be superior in terms of performance, reliability,
> robustness, and ease of use.
>
> There is a definite trend of people moving from MySQL to PostgreSQL,
> and its not just because of Oracle (MySQL diehards are moving to
> MariaDB).
>
> I have a lot more confidence in PostgreSQL, than MySQL/MariaDB.
>
>
> Cheers,
> Gavin
>
>
> P.S. Don't top post!  As commenting at the bottom, is the norm for
> PostgreSQL mailing lists.
>
>




Re: POSTGRES/MYSQL

2019-03-11 Thread Thiemo Kellner



Quoting Adrian Klaver :


On 3/11/19 9:31 AM, Sonam Sharma wrote:

Hi Adrian,
Ours is retail company and the DB size is Max 30gb, currently we  
are using db2.


Things to consider:

1) Migration tools for DB2 --> MySQL/Postgresql. I have not done  
this, so someone else will have to comment.


2) The clients/frameworks/ORMs you use now to connect to the  
database. Do they also work with Postgresql/MySQL?


It is also worth considering whether the architecture/model of DB2 fits  
PostgreSQL/MySQL. And while at it, how about getting rid of all the  
itching quirks of the current solution anyway? I see the danger of  
getting disappointed by either PostgreSQL or MySQL if the current  
solution uses DB2 features that cannot be easily mapped to the  
contenders' features.


The bottom line of my suggestion is that the migration tool could be less  
of a concern if you get the opportunity to overhaul your application.


Kind two dimes

Thiemo


--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Finding older RPMs of current releases

2019-03-11 Thread pbj
Is there any place to find 10.6 RPMs such as

postgresql10-contrib-10.6-1PGDG.rhel7.x86_64.rpm
postgresql10-contrib-10.6-2PGDG.rhel7.x86_64.rpm

We have a 10.6 installation that we need the contribs for but
can't easily upgrade to 10.7 at this time.

Thanks,
PJ



Re: Autovacuum Transaction Wraparound

2019-03-11 Thread Perumal Raj
Hi Adrian

What was the full message?

   autovacuum: VACUUM (to prevent wraparound)

Though I am running vacuum manually nowadays, and autovacuum runs
perfectly once its threshold is reached.

What will happen if my DB reaches a 200M transaction age again? (My
understanding is that there are no dead tuples to clean up -- I may be missing
the full concept, please correct me if I am wrong.)

What will be the impact on DB performance during a vacuum freeze? (My
assumption is that autovacuum will run "vacuum freeze" once the DB age reaches
200M.)

When should I consider increasing the pg_settings values related to
autovacuum?
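One hedged way to keep an eye on this, assuming 9.2 catalog names (run it in
each database you care about; the database name here is a placeholder):

psql -d prod -c "
SELECT relname, age(relfrozenxid) AS xid_age
FROM   pg_class
WHERE  relkind = 'r'
ORDER  BY age(relfrozenxid) DESC
LIMIT  10;"

Tables whose xid_age approaches autovacuum_freeze_max_age will receive a
forced anti-wraparound autovacuum even if they have no dead tuples.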

Regards,



On Mon, Mar 11, 2019 at 12:45 PM Adrian Klaver 
wrote:

> On 3/11/19 12:16 PM, Perumal Raj wrote:
> > Hi Adrian/Joshua
> >
> > Sorry to mention in the previous thread,
> >
> > Auto-vacuum is already enabled in the cluster and it's doing the job
> > perfectly. The only thing is that a manual vacuum is now scheduled (weekly,
> > cluster-wide) after noticing a 'Transaction Wraparound' message during an
> > autovacuum run.
>
> What was the full message?
>
> >
> > Version : 9.2.24
>
> FYI 9.2 is 1+ years past EOL.
>
> >
> > Query :
> >
> > SELECT datname, age(datfrozenxid) FROM pg_database;
> >   datname   |    age
> >  -----------+-----------
> >   template1 | 133492380
> >   template0 | 180987489
> >   postgres  |  93330701
> >   nagio     | 109936658
> >   arch__old | 109936658
> >   prod      | 151621905
>
> > So at some point the server will force a VACUUM to freeze IDs and
> > prevent wraparound once the age reaches your autovacuum_freeze_max_age
> > below. That might even have been the message you saw.
>
> >
> > Settings :
> >
> >               name               |  setting  | unit
> > ---------------------------------+-----------+------
> >  autovacuum                      | on        |
> >  autovacuum_analyze_scale_factor | 0.05      |
> >  autovacuum_analyze_threshold    | 50        |
> >  autovacuum_freeze_max_age       | 200000000 |
> >  autovacuum_max_workers          | 3         |
> >  autovacuum_naptime              | 60        | s
> >  autovacuum_vacuum_cost_delay    | 20        | ms
> >  autovacuum_vacuum_cost_limit    | -1        |
> >  autovacuum_vacuum_scale_factor  | 0.2       |
> >  autovacuum_vacuum_threshold     | 50        |
> >  log_autovacuum_min_duration     | -1        |
> >
> > Regards,
> >
> >
> > On Mon, Mar 11, 2019 at 12:07 PM Adrian Klaver wrote:
> >
> > On 3/11/19 11:51 AM, Perumal Raj wrote:
> >  > Hi Experts
> >  >
> >  > I have noticed in my Database that, there is no regular Vacuum
> >  > maintenance happening
> >
> > What Postgres version?
> >
> >  > So i started Weekly Job across cluster. But still i am seeing
> > gradual
> >  > growth on transacation ID.
> >
> > What query are you using?
> >
> >  >
> >  > DB is still using default autovacuum_freeze_min_age &
> >  > autovacuum_freeze_table_age.
> >
> > What are the actual settings for?:
> >
> > https://www.postgresql.org/docs/10/runtime-config-autovacuum.html
> >
> >  >
> >  > Question : Since i am running regularly vacuum job ( weekly) and
> the
> >  > Transaction age is gradually growing , What is next once i hit
> 200M
> >  > limit ( default ).
> >  > Should i increase my default value ? If so any calculation for
> > increase
> >  > the value based on my DB transaction growth.
> >  >
> >  > Thanks,
> >  > Raj
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: POSTGRES/MYSQL

2019-03-11 Thread Edson Carlos Ericksson Richter



Em 11/03/2019 16:20, Gavin Flower escreveu:

On 12/03/2019 05:35, Michael Nolan wrote:
[...]
 MySQL is better at isolating users from each other and requires less 
expertise to administer.


[...]

I keep reading that MySQL is easier to administer, but never seen any 
evidence of that.  And in my very limited experience of both, I've 
found PostgreSQL easier to set up & administer.



If you allow me an aside, "easy to administer" must also mean that we 
have no referential-integrity hell - which MySQL doesn't guarantee. I 
had problems in the past even using InnoDB, which is supposed to work... 
Easy to administer also means not having to deal with internal tables 
whenever MySQL decides to stop accepting our administrator passwords...


So, IMHO, easy to administer means a system that doesn't drive me crazy, and 
_for me_ that system is PostgreSQL - at least in my very special case 
(which is to reach old age without becoming nuts).


;-)


Regards,


Edson





Re: Autovacuum Transaction Wraparound

2019-03-11 Thread Adrian Klaver

On 3/11/19 12:16 PM, Perumal Raj wrote:

Hi Adrian/Joshua

Sorry to mention in the previous thread,

Auto-vacuum is already enabled in the cluster and it's doing the job 
perfectly. The only thing is that a manual vacuum is now scheduled (weekly, 
cluster-wide) after noticing a 'Transaction Wraparound' message during an 
autovacuum run.


What was the full message?



Version : 9.2.24


FYI 9.2 is 1+ years past EOL.



Query :

SELECT datname, age(datfrozenxid) FROM pg_database;
  datname   |    age
 -----------+-----------
  template1 | 133492380
  template0 | 180987489
  postgres  |  93330701
  nagio     | 109936658
  arch__old | 109936658
  prod      | 151621905


So at some point the server will force a VACUUM to freeze IDs and 
prevent wraparound once the age reaches your autovacuum_freeze_max_age 
below. That might even have been the message you saw.




Settings :

              name               |  setting  | unit
---------------------------------+-----------+------
 autovacuum                      | on        |
 autovacuum_analyze_scale_factor | 0.05      |
 autovacuum_analyze_threshold    | 50        |
 autovacuum_freeze_max_age       | 200000000 |
 autovacuum_max_workers          | 3         |
 autovacuum_naptime              | 60        | s
 autovacuum_vacuum_cost_delay    | 20        | ms
 autovacuum_vacuum_cost_limit    | -1        |
 autovacuum_vacuum_scale_factor  | 0.2       |
 autovacuum_vacuum_threshold     | 50        |
 log_autovacuum_min_duration     | -1        |

Regards,


On Mon, Mar 11, 2019 at 12:07 PM Adrian Klaver wrote:


On 3/11/19 11:51 AM, Perumal Raj wrote:
 > Hi Experts
 >
 > I have noticed in my Database that, there is no regular Vacuum
 > maintenance happening

What Postgres version?

 > So i started Weekly Job across cluster. But still i am seeing
gradual
 > growth on transacation ID.

What query are you using?

 >
 > DB is still using default autovacuum_freeze_min_age &
 > autovacuum_freeze_table_age.

What are the actual settings for?:

https://www.postgresql.org/docs/10/runtime-config-autovacuum.html

 >
 > Question : Since i am running regularly vacuum job ( weekly) and the
 > Transaction age is gradually growing , What is next once i hit 200M
 > limit ( default ).
 > Should i increase my default value ? If so any calculation for
increase
 > the value based on my DB transaction growth.
 >
 > Thanks,
 > Raj


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Upgrade standby after starting cluster using rsync

2019-03-11 Thread Martín Fernández
Hello,

I wrote a couple of questions around pg_upgrade and updating standbys using 
rsync last week. We were able to successfully upgrade half of our cluster (the 
other half was kept for failover) from pg92 with postgis 1.5.8 to pg10 with 
postgis 2.4. It was a really interesting challenge because of postgis binary 
incompatibility for geometry data types.

The rsync call that we used looked exactly like this (taken from pg_upgrade man 
page basically):

`rsync --verbose --verbose --progress --archive --delete --hard-links 
--size-only  --no-inc-recursive /var/lib/postgres/9.2 /var/lib/postgres/10 
$REPLICA_IP:/var/lib/postgres`

We are now in the journey of upgrading the other half of the cluster since we 
have concluded that the upgrade was successful. 

We are planning on using the same rsync call to upgrade the rest of the 
standbys (in combination with pg_start_backup/pg_stop_backup low level api). My 
only concern is that I'm not 100% sure the `--size-only` flag will be enough 
to guarantee that the files are the same. On the initial set of standbys that we 
upgraded this shouldn't cause an issue, since the standbys were at the same 
last checkpoint as the master and we did the rsync call before starting the 
primary (after running pg_upgrade).

Are there any potential issues that could show up if we do it with --size-only? 
Should we use the default rsync mechanism that checks size and 
timestamps instead?
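One hedged alternative, if the shutdown-state guarantees from the pg_upgrade
procedure are in doubt, is to drop --size-only and pay for full checksums
instead (much slower on multi-TB data, but it catches same-size files with
different contents):

rsync --archive --delete --hard-links --checksum --no-inc-recursive \
      /var/lib/postgres/9.2 /var/lib/postgres/10 \
      $REPLICA_IP:/var/lib/postgres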

Hoping someone has some better experience than me on upgrading standbys using 
rsync.

Thanks for all the help as usual!

Best,

Martín

Re: POSTGRES/MYSQL

2019-03-11 Thread Gavin Flower

On 12/03/2019 05:35, Michael Nolan wrote:
[...]
 MySQL is better at isolating users from each other and requires less 
expertise to administer.


[...]

I keep reading that MySQL is easier to administer, but never seen any 
evidence of that.  And in my very limited experience of both, I've found 
PostgreSQL easier to set up & administer.


From what I've read about problems with MySQL, I think that if you 
value your data, just don't use MySQL.   At least 4 times, since 2001, 
I've searched for PostgreSQL vs MySQL comparisons, and each time found 
PostgreSQL to be superior in terms of performance, reliability, 
robustness, and ease of use.


There is a definite trend of people moving from MySQL to PostgreSQL, and 
it's not just because of Oracle (MySQL diehards are moving to MariaDB).


I have a lot more confidence in PostgreSQL, than MySQL/MariaDB.


Cheers,
Gavin


P.S. Don't top post!  Commenting at the bottom is the norm for 
PostgreSQL mailing lists.





Re: Autovacuum Transaction Wraparound

2019-03-11 Thread Adrian Klaver

On 3/11/19 11:51 AM, Perumal Raj wrote:

Hi Experts

I have noticed that in my database there is no regular vacuum 
maintenance happening


What Postgres version?

So I started a weekly job across the cluster. But I am still seeing gradual 
growth in transaction ID age.


What query are you using?



DB is still using default autovacuum_freeze_min_age & 
autovacuum_freeze_table_age.


What are the actual settings for?:

https://www.postgresql.org/docs/10/runtime-config-autovacuum.html



Question: Since I am running a vacuum job regularly (weekly) and the 
transaction age is still gradually growing, what happens once I hit the 200M 
limit (default)?
Should I increase the default value? If so, is there any calculation for 
increasing the value based on my DB transaction growth?


Thanks,
Raj



--
Adrian Klaver
adrian.kla...@aklaver.com



Autovacuum Transaction Wraparound

2019-03-11 Thread Perumal Raj
Hi Experts

I have noticed that in my database there is no regular vacuum maintenance
happening, so I started a weekly job across the cluster. But I am still
seeing gradual growth in transaction ID age.

DB is still using default autovacuum_freeze_min_age &
autovacuum_freeze_table_age.

Question: Since I am running a vacuum job regularly (weekly) and the
transaction age is still gradually growing, what happens once I hit the 200M
limit (default)?
Should I increase the default value? If so, is there any calculation for
increasing the value based on my DB transaction growth?

Thanks,
Raj


Re: POSTGRES/MYSQL

2019-03-11 Thread Sameer Kumar
On Tue 12 Mar, 2019, 1:58 AM Adrian Klaver, 
wrote:

> On 3/11/19 9:31 AM, Sonam Sharma wrote:
> > Hi Adrian,
> > Ours is retail company and the DB size is Max 30gb, currently we are
> > using db2.
>
> Things to consider:
>
> 1) Migration tools for DB2 --> MySQL/Postgresql. I have not done this,
> so someone else will have to comment.
>

This will be fairly simple for a 30 GB database if you can afford some
downtime. I have a bit of experience with this (DB2 9.7 to PostgreSQL). You
can use db2look to export the DDL, then make some minor modifications
and run the SQL (more or less) as is on Postgres.

Use "db2 export" command to export data in CSV format (separate file for
each table).

Use Postgres COPY command to copy data from CSV format.

Make sure that you set the PostgreSQL client encoding to something that aligns
with the database codepage on DB2 and the LC_LANG setting. If DB2 is hosted on
AIX or Linux, avoid using a Windows client to fire db2 export. Migrating off
Windows could be tricky. The codepage and encoding issues might need a
few retries to align. So please test in UAT with a prod-like data sample.
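A rough sketch of that flow (object names are invented and the DB2 flags are
from memory, so verify them against the DB2 documentation):

# Extract the DDL from DB2, then adapt it by hand for Postgres:
db2look -d MYDB -e -o mydb_ddl.sql

# Export one table to a delimited file, then load it with psql's \copy:
db2 "EXPORT TO orders.csv OF DEL SELECT * FROM myschema.orders"
psql -d mydb -c "\copy myschema.orders FROM 'orders.csv' WITH (FORMAT csv)"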


> 2) The clients/frameworks/ORMs you use now to connect to the database.
> Do they also work with Postgresql/MySQL?
>
>
> >
> > On Mon, Mar 11, 2019, 8:21 PM Adrian Klaver  > > wrote:
> >
> > On 3/11/19 4:31 AM, Sonam Sharma wrote:
> >  > Hi All,
> >  >
> >  > We are planning to migrate our database into any open source DB.
> >  > Can someone please help me in knowing which one will be better
> among
> >  > POSTGRESQL and MYSQL.
> >  >
> >  > In what terms postgres is better than MYSQL.
> > This cannot really be answered until more information is provided
> about
> > how your current database is used and how you want to continue in the
> > future.
> >
> >  >
> >  > Regards,
> >  > Sonam
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
> --

-- 

Best Regards,

*Sameer Kumar | Senior Solution Architect*

*ASHNIK PTE. LTD.*

36 Robinson Road, #14-04 City House, Singapore 068877

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   M: +65 8110 0350


Re: POSTGRES/MYSQL

2019-03-11 Thread Adrian Klaver

On 3/11/19 9:31 AM, Sonam Sharma wrote:

Hi Adrian,
Ours is retail company and the DB size is Max 30gb, currently we are 
using db2.


Things to consider:

1) Migration tools for DB2 --> MySQL/Postgresql. I have not done this, 
so someone else will have to comment.


2) The clients/frameworks/ORMs you use now to connect to the database. 
Do they also work with Postgresql/MySQL?





On Mon, Mar 11, 2019, 8:21 PM Adrian Klaver > wrote:


On 3/11/19 4:31 AM, Sonam Sharma wrote:
 > Hi All,
 >
 > We are planning to migrate our database into any open source DB.
 > Can someone please help me in knowing which one will be better among
 > POSTGRESQL and MYSQL.
 >
 > In what terms postgres is better than MYSQL.
This cannot really be answered until more information is provided about
how your current database is used and how you want to continue in the
future.

 >
 > Regards,
 > Sonam


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: POSTGRES/MYSQL

2019-03-11 Thread Christopher Browne
On Mon, 11 Mar 2019 at 12:36, Michael Nolan  wrote:
> There isn't a simple answer to this, it's like asking 'which is better for 
> cooking:  aluminum or stainless steel'.  The answer is 'it depends on what 
> you're trying to do'.

Metaphors can be dangerous (especially when automotive ones get
chosen!), but this is a pretty good one.

Often, it doesn't terribly much matter which kind of cookware you use;
both aluminum and steel will allow you to cook your dinner, and if
your needs are not challenging, the differences may make little
difference.  That seems likely to be true here; both Postgres and
MySQL have a sizable set of relatively common facilities where they
would function in broadly similar ways.  This wasn't so true in the
MySQL 3 versus PostgreSQL 6 days, when they had enormously different
expressions of basic usage patterns.  (e.g. - in MySQL, you'd get data
eaten by weak data types, or performance killed when writes demand
full table locks, but PostgreSQL would lose performance when VACUUM
would lock the whole table).  Modern MySQL is a lot less sloppy than
it used to be, and Modern PostgreSQL performs way better than it
used to.

And you can certainly fry bacon in either a steel or aluminum pan;
performance of that is liable to depend more on the qualities of stove
and of the bacon than on the qualities of the fry pan.

Paralleling the notion that performance and reliability might depend
more on the qualities of the server, between CPUs, amounts and speed
of RAM, and whether you're persisting data on SSDs versus "spinning
rust disks."  If your disk dies, the database goes away, "poof,"
irrespective of what brand of database you're using...

It is also somewhat foolish to get deep into minutiae when we have no
idea which details do or do not matter for the particular situation.

It's pretty likely that there *is* some relevant minutiae, but, when
the only details given are:

- Retail company
- DB size ~30GB
- Present database is DB2

That doesn't point all that readily at "relevant minutiae".

Sorts of things that might help:
- What kinds of data types are in use?
- What is the architecture of the system in practice?
- What kinds of transaction load are being thrown at the system?
  - Presumably some OLTP activity to record retail activities
  - Reporting on activities (which might involve replication?)
  - Data lifecycles (how does data arrive, how does it exit once irrelevant?)

There are consulting organizations out there that would be quite
prepared to help with that sort of analysis.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: POSTGRES/MYSQL

2019-03-11 Thread Michael Nolan
On Mon, Mar 11, 2019 at 6:32 AM Sonam Sharma  wrote:

> Hi All,
>
> We are planning to migrate our database into any open source DB.
> Can someone please help me in knowing which one will be better among
> POSTGRESQL and MYSQL.
>
> In what terms postgres is better than MYSQL.
>
> Regards,
> Sonam
>

There isn't a simple answer to this, it's like asking 'which is better for
cooking:  aluminum or stainless steel'.  The answer is 'it depends on what
you're trying to do'.

There are things that PostgreSQL does very well.  There are things that
MySQL does very well.  There are things that they don't do as well.

PostgreSQL is, IMHO, more robust and will handle larger databases.  MySQL
is better at isolating users from each other and requires less expertise to
administer.

Depending on your needs, the size of your organization and your budget, you
may want to bring in outside consultants to help you plan.
--
Mike Nolan
no...@tssi.com


Re: POSTGRES/MYSQL

2019-03-11 Thread Sonam Sharma
Hi Adrian,
Ours is a retail company and the DB size is max 30 GB; currently we are using
DB2.

On Mon, Mar 11, 2019, 8:21 PM Adrian Klaver 
wrote:

> On 3/11/19 4:31 AM, Sonam Sharma wrote:
> > Hi All,
> >
> > We are planning to migrate our database into any open source DB.
> > Can someone please help me in knowing which one will be better among
> > POSTGRESQL and MYSQL.
> >
> > In what terms postgres is better than MYSQL.
> This cannot really be answered until more information is provided about
> how your current database is used and how you want to continue in the
> future.
>
> >
> > Regards,
> > Sonam
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


9.6.11- could not truncate directory "pg_serial": apparent wraparound

2019-03-11 Thread Pavel Suderevsky
Hi,

PG: 9.6.11
OS: CentOS
Env: AWS EC2

I've faced the following exceptions in PostgreSQL server log:
> could not truncate directory "pg_serial": apparent wraparound
Sometimes it repeats every 5 min and the longest period was 40 min.

In fact, I can't find any suspicious events happening during those periods.
pg_wait_sampling didn't catch any events, and no long queries (more than 60s),
autovacuum workers, or transactions in "idle in transaction" state were
active at the time.

The only related I could find in archive is:
https://www.postgresql.org/message-id/CACjxUsON4Vya3a6r%3DubwmN-4qTDDfZjuwSzjnL1QjdUc8_gzLw%40mail.gmail.com
>You should not see the errors you are reporting nor
>the warning I mentioned unless a serializable transaction remains
>active long enough for about 1 billion transaction IDs to be
>consumed.

The database age is now just 18.5 million transactions.

Server has two standbys (sync and async), hot_standby_feedback is off.

Please advise what I can do to find the reason for these exceptions.
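Since the archived thread above ties this message to long-lived serializable
transactions, one hedged place to start looking is for old open transactions
and for the volume of predicate (SIReadLock) locks:

psql -c "SELECT pid, state, xact_start, now() - xact_start AS xact_age
         FROM pg_stat_activity
         WHERE xact_start IS NOT NULL
         ORDER BY xact_start
         LIMIT 10;"
psql -c "SELECT count(*) AS siread_locks FROM pg_locks WHERE mode = 'SIReadLock';"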


Re: POSTGRES/MYSQL

2019-03-11 Thread Adrian Klaver

On 3/11/19 4:31 AM, Sonam Sharma wrote:

Hi All,

We are planning to migrate our database into any open source DB.
Can someone please help me in knowing which one will be better among 
POSTGRESQL and MYSQL.


In what terms postgres is better than MYSQL.
This cannot really be answered until more information is provided about 
how your current database is used and how you want to continue in the 
future.




Regards,
Sonam



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: POSTGRES/MYSQL

2019-03-11 Thread Laurenz Albe
Sonam Sharma wrote:
> In what terms postgres is better than MYSQL.

It is easier to name the few things where MySQL might be better:
- If you need a key-value store that receives lots of updates.
- More widely used.

If I moved to an open source database, I wouldn't choose one that
is owned and controlled by Oracle, but free open source.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: POSTGRES/MYSQL

2019-03-11 Thread Thomas Kellerer
Sonam Sharma schrieb am 11.03.2019 um 12:31:
> We are planning to migrate our database into any open source DB.
> Can someone please help me in knowing which one will be better among 
> POSTGRESQL and MYSQL.
> 
> In what terms postgres is better than MYSQL.

you might want to have a look at this:

  https://stackoverflow.com/a/8182996

My personal(!) opinion: I wouldn't trust a database that thinks:

  delete from customer
  where '1fool';

is a valid SQL statement and will happily delete all customers. 
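For comparison, PostgreSQL rejects that statement outright (the table name is
hypothetical):

psql -c "DELETE FROM customer WHERE '1fool';"
# ERROR:  argument of WHERE must be type boolean, not type text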






Re:POSTGRES/MYSQL

2019-03-11 Thread chris
There are more blog posts online; here is one for your attention.
https://www.mssqltips.com/sqlservertip/5745/compare-sql-server-mysql-and-postgresql-features/


Best Regards,
Chris
yuanzefuwa...@126.com


On 03/11/2019 19:31,Sonam Sharma wrote:
Hi All,


We are planning to migrate our database into any open source DB.
Can someone please help me in knowing which one will be better among POSTGRESQL 
and MYSQL.


In what terms postgres is better than MYSQL.


Regards,
Sonam

POSTGRES/MYSQL

2019-03-11 Thread Sonam Sharma
Hi All,

We are planning to migrate our database into any open source DB.
Can someone please help me in knowing which one will be better among
POSTGRESQL and MYSQL.

In what terms postgres is better than MYSQL.

Regards,
Sonam


Re: TPC-DS queries

2019-03-11 Thread Tatsuo Ishii
> Hi,
> 
> I think that the sql is not valid. Based on the order by
> documentation, a column label cannot be used in an expression.
> 
> from    https://www.postgresql.org/docs/11/queries-order.html
>  > Note that an output column name has to stand alone, that is, it
> cannot be used in an expression.

Thanks. Yes, you are correct. The line should be something like:

   ,case when grouping(i_category)+grouping(i_class) = 0 then i_category end

> Regards
> s.
> 
> On 11.03.2019 06:30, Tatsuo Ishii wrote:
>> I played with TPC-DS and found some of them can't be executed because
>> of SQL errors and I am not sure why.
>>
>> For example with query 36:
>>
>> select
>>  sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin
>> ,i_category
>> ,i_class
>> ,grouping(i_category)+grouping(i_class) as lochierarchy
>> ,rank() over (
>>  partition by grouping(i_category)+grouping(i_class),
>>  case when grouping(i_class) = 0 then i_category end
>>  order by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as
>>  rank_within_parent
>>   from
>>  store_sales
>> ,date_dim   d1
>> ,item
>> ,store
>>   where
>>  d1.d_year = 2000
>>   and d1.d_date_sk = ss_sold_date_sk
>>   and i_item_sk  = ss_item_sk
>>   and s_store_sk  = ss_store_sk
>>   and s_state in ('TN','TN','TN','TN',
>>   'TN','TN','TN','TN')
>>   group by rollup(i_category,i_class)
>>   order by
>> lochierarchy desc
>>,case when lochierarchy = 0 then i_category end -- line 25 is here.
>>,rank_within_parent
>>limit 100;
>> psql:query_0.sql:1935: ERROR:  column "lochierarchy" does not exist
>> LINE 25:   ,case when lochierarchy = 0 then i_category end
>>
>> I have followed the instructions here.
>> https://ankane.org/tpc-ds
>>
>> PostgreSQL is master branch HEAD. For me, the SQL above looks to be
>> valid.
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese:http://www.sraoss.co.jp
>>
> 



Re: TPC-DS queries

2019-03-11 Thread reg_pg_stefanz

Hi,

I think that the sql is not valid. Based on the order by documentation, 
a column label cannot be used in an expression.


from    https://www.postgresql.org/docs/11/queries-order.html
 > Note that an output column name has to stand alone, that is, it 
cannot be used in an expression.
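A minimal illustration of that restriction, on a throwaway table:

psql <<'SQL'
CREATE TEMP TABLE t (a int, b int);
SELECT a + b AS s FROM t ORDER BY s;      -- OK: the output column name stands alone
SELECT a + b AS s FROM t ORDER BY s + 1;  -- ERROR: column "s" does not exist
SQL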


Regards
s.

On 11.03.2019 06:30, Tatsuo Ishii wrote:

I played with TPC-DS and found some of them can't be executed because
of SQL errors and I am not sure why.

For example with query 36:

select
 sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin
,i_category
,i_class
,grouping(i_category)+grouping(i_class) as lochierarchy
,rank() over (
partition by grouping(i_category)+grouping(i_class),
case when grouping(i_class) = 0 then i_category end
order by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as 
rank_within_parent
  from
 store_sales
,date_dim   d1
,item
,store
  where
 d1.d_year = 2000
  and d1.d_date_sk = ss_sold_date_sk
  and i_item_sk  = ss_item_sk
  and s_store_sk  = ss_store_sk
  and s_state in ('TN','TN','TN','TN',
  'TN','TN','TN','TN')
  group by rollup(i_category,i_class)
  order by
lochierarchy desc
   ,case when lochierarchy = 0 then i_category end -- line 25 is here.
   ,rank_within_parent
   limit 100;
psql:query_0.sql:1935: ERROR:  column "lochierarchy" does not exist
LINE 25:   ,case when lochierarchy = 0 then i_category end

I have followed the instructions here.
https://ankane.org/tpc-ds

PostgreSQL is master branch HEAD. For me, the SQL above looks to be
valid.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp






Re: Ran out of memory retrieving query results.

2019-03-11 Thread Andreas Kretschmer




On 11.03.19 at 06:44, Nanda Kumar wrote:

Hello Team,

Can you please help with the issue below? The error below occurred when I ran 
a SELECT statement on a huge data volume.

  Error Details :

  Ran out of memory retrieving query results.


you should provide more details, for instance (at least)
* os and pg version
* how much ram contains the machine
* config-parameters (shared_buffers, work_mem, max_connections)
* running activities
* the query itself
* table-definition
* how large are the tables
* the EXPLAIN of the query
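A hedged starting point for collecting most of that (adjust connection options
as needed):

psql -c "SELECT version();"
psql -c "SHOW shared_buffers;"
psql -c "SHOW work_mem;"
psql -c "SHOW max_connections;"
free -h                              # total RAM on the machine
# plus a plain EXPLAIN of the failing query, to show the plan without running it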


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




RE: Ran out of memory retrieving query results.

2019-03-11 Thread Nanda Kumar
Hello Team,

Can you please help with the issue below? The error below occurred when I ran 
a SELECT statement on a huge data volume.

 Error Details :

 Ran out of memory retrieving query results.

Regards
Nanda Kumar.M
SmartStream Technologies
Eastland Citadel | 5th Floor | 102 Hosur Road | Banglore 560 095 | India
nanda.ku...@smartstream-stp.com | www.smartstream-stp.com
Mob
+91 99720 44779
Tel
+91 80617 64107



-Original Message-
From: Stephen Frost [mailto:sfr...@snowman.net]
Sent: 09 March 2019 00:19
To: Nanda Kumar
Cc: pgsql-general Owner
Subject: Re: Ran out of memory retrieving query results.

Greetings,

You need to email pgsql-general@lists.postgresql.org with your question, this 
address is for the moderators.

Thanks!

Stephen

* Nanda Kumar (nanda.ku...@smartstream-stp.com) wrote:
> Hello Team,
>
>
>
> I am getting error when I try to run the select query. Kindly help me in 
> fixing this issue.
>
>
>
> Error Details :
>
>
>
> Ran out of memory retrieving query results.
>
>
>
> Screenshot of the error :
>
>
>
> (screenshot attachment omitted)
>
>
>
> Regards
>
> Nanda Kumar.M
>
> SmartStream Technologies
>
> Eastland Citadel | 5th Floor | 102 Hosur Road | Banglore 560 095 |
> India
>
> nanda.ku...@smartstream-stp.com | www.smartstream-stp.com
>
> Mob +91 99720 44779
>


