[GENERAL] PostgreSQL 9.5 showing unexpected behavior when installed with a locale other than the default

2017-11-08 Thread Mayank Agrawal
Hi,

On Windows 7, I am trying to install PostgreSQL 9.5 (
*postgresql-9.5.2-1-windows-x64.exe*) with different locales, but it always
gets installed with the default locale, `*English_United States.1252*`. I
tried both methods, GUI and unattended/silent, and the results are the same.

I followed these steps:

1. Set the system locale to the desired locale, e.g. `*English, Australia*`.
2. Restarted the system.
3. Installed PostgreSQL with the following command, passing the `*--locale*`
parameter:

* start /w postgresql-9.5.2-1-windows-x64.exe --mode unattended
--unattendedmodeui none --superaccount "postgres" --superpassword
"postgrespwd" --prefix "C:\Program Files\PostgreSQL\9.5" --datadir
"C:\Program Files\PostgreSQL\9.5\data" --serverport "5432"
--install_runtimes 1 --locale "English, Australia"*

4. Executed the command `*SHOW lc_collate*` and got the output
`*English_United States.1252*`.

Similarly, I followed the same steps while installing in GUI mode (selected
the option `English, Australia`) and got the same unexpected result.

Here are the relevant lines from the installer log file bitrock_installer.log
(GUI mode):


Executing cscript //NoLogo "C:\Program
Files\PostgreSQL\9.5/installer/server/initcluster.vbs" "NT
AUTHORITY\NetworkService" "postgres" "" "C:\Program
Files\PostgreSQL\9.5" "C:\Program Files\PostgreSQL\9.5\data" 5432 "English,
Australia" 0
Script exit code: 0
.
.
.
The database cluster will be initialized with locale "English_United
States.1252".

5. Only when I selected `*[Default Locale]*` in GUI mode (PostgreSQL 9.5) did
it get installed with the `*English, Australia*` locale; the output of
`*SHOW lc_collate*` was then `*English_Australia.1252*`.
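
For completeness, the cluster-wide and per-database locale settings can also
be checked with a couple of catalog queries (a sketch; pg_settings and
pg_database are standard catalogs in 9.5):

-- locale-related settings for the current database/session
SELECT name, setting FROM pg_settings WHERE name LIKE 'lc%';

-- per-database collation and ctype recorded when the cluster/database was created
SELECT datname, datcollate, datctype FROM pg_database;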

When I installed PostgreSQL 8.4.8 under the same conditions (in GUI mode),
there were no issues: it was installed with the English, Australia locale.

Any idea what could be the reason behind this?


Thanks and Regards,
Mayank Agrawal




Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

On 11/8/2017 6:25 PM, Igal @ Lucee.org wrote:

On 11/8/2017 5:27 PM, Allan Kamau wrote:
Maybe using NUMERIC without explicitly stating the precision is 
recommended. This would allow for values with many decimal places to 
be accepted without truncation. Your field may need to capture very 
small values such as those in bitcoin trading or some banking fee or 
interest.


That's a very good idea.  For some reason I thought that I tried that 
earlier and it didn't work as expected, but I just tested it (again?) 
and it seems to work well, so that's what I'll do.
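
For reference, if the column has already been created as money, a minimal
sketch of converting it in place (`orders` is a placeholder table name here;
`discount` is the column from earlier in the thread, and money casts to
numeric directly):

-- switch to unconstrained NUMERIC, converting the existing money values
ALTER TABLE orders
    ALTER COLUMN discount TYPE numeric
    USING discount::numeric;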


Another odd thing that I noticed:

Another column, "total_charged", was migrated properly as a `money` type, yet 
when I run `sum(total_charged::money)` I get `null`, while casting it to 
numeric, i.e. `sum(total_charged::numeric)`, gives the expected sum.


Is there a logical explanation for that?


Igal




Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

On 11/8/2017 5:27 PM, Allan Kamau wrote:
Maybe using NUMERIC without explicitly stating the precision is 
recommended. This would allow for values with many decimal places to 
be accepted without truncation. Your field may need to capture very 
small values such as those in bitcoin trading or some banking fee or 
interest.


That's a very good idea.  For some reason I thought that I tried that 
earlier and it didn't work as expected, but I just tested it (again?) 
and it seems to work well, so that's what I'll do.


Thank you,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Allan Kamau
On Nov 9, 2017 04:12, "Igal @ Lucee.org"  wrote:

Thank you all for your help:


On 11/8/2017 4:45 PM, Tom Lane wrote:

"Igal @ Lucee.org"   writes:

The value in the offending insert is:  0.0

Why does Postgres decide that 0.0 is "double precision" (which is a
weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.


Looks like you are correct.  Kettle shows me the INSERT statement and when
I execute it outside of Kettle (in a regular SQL client), the INSERT
succeeds.

On 11/8/2017 4:45 PM, David G. Johnston wrote:

The lack of quotes surrounding the value is significant.  Money input
requires a string literal.  Only (more or less) integer and double literal
values can be written without the single quotes.


That didn't work.  I CAST'ed the value in the SELECT to VARCHAR(16) but all
it did was change the error message to say that it expected `money` but
received `character varying`.

On 11/8/2017 4:52 PM, Allan Kamau wrote:

 On Nov 9, 2017 03:46, "Tom Lane" 

Maybe using NUMERIC without explicitly stating the precision is
recommended. This would allow for values with many decimal places to be
accepted without truncation. Your field may need to capture very small
values such as those in bitcoin trading or some banking fee or interest.

Allan.


Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

Thank you all for your help:

On 11/8/2017 4:45 PM, Tom Lane wrote:

"Igal @ Lucee.org"  writes:

The value in the offending insert is:  0.0

Why does Postgres decide that 0.0 is "double precision" (which is a
weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.


Looks like you are correct.  Kettle shows me the INSERT statement and 
when I execute it outside of Kettle (in a regular SQL client), the 
INSERT succeeds.


On 11/8/2017 4:45 PM, David G. Johnston wrote:

The lack of quotes surrounding the value is significant.  Money input 
requires a string literal.  Only (more or less) integer and double 
literal values can be written without the single quotes.


That didn't work.  I CAST'ed the value in the SELECT to VARCHAR(16) but 
all it did was change the error message to say that it expected `money` 
but received `character varying`.


On 11/8/2017 4:52 PM, Allan Kamau wrote:


 On Nov 9, 2017 03:46, "Tom Lane" 



Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Allan Kamau
On Nov 9, 2017 03:46, "Tom Lane"  wrote:

"Igal @ Lucee.org"  writes:
> I have a column named "discount" of type money in SQL Server.  I created
> the table in Postgres with the same name and type, since Postgres has a
> type named money, and am transferring the data by using PDI (Pentaho
> Data Integration) Kettle/Spoon.

> Kettle throws an error though:  column "discount" is of type money but
> expression is of type double precision.

> The value in the offending insert is:  0.0

> Why does Postgres decide that 0.0 is "double precision" (which is a
> weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.

regression=# create table m (m1 money);
CREATE TABLE
regression=# insert into m values (0.0);
INSERT 0 1
regression=# insert into m values (0.0::numeric);
INSERT 0 1
regression=# insert into m values (0.0::float8);
ERROR:  column "m1" is of type money but expression is of type double
precision
LINE 1: insert into m values (0.0::float8);
                              ^
HINT:  You will need to rewrite or cast the expression.

You'll need to look at the client-side code to see where it's going wrong.

> The only solution I found is to set the column in Postgres to DOUBLE
> PRECISION instead of MONEY, but I'm not sure if there are negative side
> effects to that?

Well, it's imprecise.  Most people don't like that when it comes to
monetary amounts.

regards, tom lane





You could try using the NUMERIC datatype for such a field.

Allan


Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread David G. Johnston
On Wednesday, November 8, 2017, Igal @ Lucee.org  wrote:
>
> Kettle throws an error though:  column "discount" is of type money but
> expression is of type double precision.
>
> The value in the offending insert is:  0.0
>
> Why does Postgres decide that 0.0 is "double precision" (which is a weird
> name in my opinion -- why can't it just be double) and not money?
>
The lack of quotes surrounding the value is significant.  Money input
requires a string literal.  Only (more or less) integer and double literal
values can be written without the single quotes.
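
A small sketch illustrating the point, reusing the test table `m` from Tom's
example earlier in the thread:

-- a quoted literal is an untyped string that Postgres can coerce to money;
-- an unquoted 0.0 is a numeric literal, which also casts to money,
-- but a value explicitly typed as double precision does not
INSERT INTO m VALUES ('0.0');        -- works: string literal coerced to money
INSERT INTO m VALUES (0.0);          -- works: numeric literal, numeric -> money
INSERT INTO m VALUES (0.0::float8);  -- ERROR: expression is of type double precision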

David J.


Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> I have a column named "discount" of type money in SQL Server.  I created 
> the table in Postgres with the same name and type, since Postgres has a 
> type named money, and am transferring the data by using PDI (Pentaho 
> Data Integration) Kettle/Spoon.

> Kettle throws an error though:  column "discount" is of type money but 
> expression is of type double precision.

> The value in the offending insert is:  0.0

> Why does Postgres decide that 0.0 is "double precision" (which is a 
> weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.

regression=# create table m (m1 money);
CREATE TABLE
regression=# insert into m values (0.0);
INSERT 0 1
regression=# insert into m values (0.0::numeric);
INSERT 0 1
regression=# insert into m values (0.0::float8);
ERROR:  column "m1" is of type money but expression is of type double precision
LINE 1: insert into m values (0.0::float8);
                              ^
HINT:  You will need to rewrite or cast the expression.

You'll need to look at the client-side code to see where it's going wrong.

> The only solution I found is to set the column in Postgres to DOUBLE 
> PRECISION instead of MONEY, but I'm not sure if there are negative side 
> effects to that?

Well, it's imprecise.  Most people don't like that when it comes to
monetary amounts.

regards, tom lane




[GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

Hello,

I am migrating a database from MS SQL Server to Postgres.

I have a column named "discount" of type money in SQL Server.  I created 
the table in Postgres with the same name and type, since Postgres has a 
type named money, and am transferring the data by using PDI (Pentaho 
Data Integration) Kettle/Spoon.


Kettle throws an error though:  column "discount" is of type money but 
expression is of type double precision.


The value in the offending insert is:  0.0

Why does Postgres decide that 0.0 is "double precision" (which is a 
weird name in my opinion -- why can't it just be double) and not money?


I have control over the SELECT but not over the INSERT.  Is there any 
way to cast the value on the SELECT side in MS SQL Server so that it 
matches the Postgres money column type?


The only solution I have found is to set the column in Postgres to DOUBLE 
PRECISION instead of MONEY, but I'm not sure whether there are negative 
side effects to that.


Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] database size changed after restoring using pg_restore

2017-11-08 Thread John R Pierce

On 11/8/2017 2:59 PM, Dylan Luong wrote:


Hi

I am upgrading some databases from a PostgreSQL 9.2 server to a new 
PostgreSQL 9.6 server.


I used pg_dump and pg_restore for the upgrade.

.

But when I listed the size of the database (postgres=# \l+) between 
the 9.2 and the upgraded 9.6, they were different.


on 9.2 it was 3776 MB
on 9.6 it was 1570 MB

I also did a few more databases using the same steps and they all 
appeared to be smaller. Is that normal?



Yes. A freshly restored database will be all contiguous, with no 
embedded free tuples left over from operations. Databases that get 
updates (or inserts/deletes) tend to bloat.
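
If you want a rough idea of how much of that difference is bloat, a sketch you
could run on the old 9.2 server is to look at the dead-tuple counters (purely
informational; that space is normally reused rather than returned to the OS):

-- tables with the most dead tuples accumulated since the last stats reset
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;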



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] database size changed after restoring using pg_restore

2017-11-08 Thread Andreas Joseph Krogh
On Wednesday 08 November 2017 at 23:59:40, Dylan Luong 
<dylan.lu...@unisa.edu.au> wrote:
Hi 

I am upgrading some databases from a PostgreSQL 9.2 server to a new 
PostgreSQL 9.6 server.

 

I used pg_dump and pg_restore for the upgrade.

 

This is the command I used to dump the database. I ran the command from the 
9.6 server.

pg_dump -h 9.2server -j 16 --format=directory -f 
/pg_backup/backup/dump/mydb-20171108.dmp -U postgres -W mydb

 

This is the command I used to restore the database on the 9.6server.

pg_restore -j 16 -C -d postgres /pg_backup/backup/dump/mydb-20171108.dmp

 

Everything appears ok, the dump and restore completed without and errors.

 

But when I listed the size of the database (postgres=# \l+) between the 9.2 
and the upgraded 9.6, they were different.

 

on 9.2 it was 3776 MB

on 9.6 it was 1570 MB

 

I also did a few more databases using the same steps and they all appeared to 
be smaller. Is that normal?


Yep.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


 


[GENERAL] database size changed after restoring using pg_restore

2017-11-08 Thread Dylan Luong
Hi
I am upgrading some databases from a PostgreSQL 9.2 server to a new PostgreSQL 
9.6 server.

I used pg_dump and pg_restore for the upgrade.

This is the command I used to dump the database. I ran the command from the 9.6 
server.
pg_dump -h 9.2server -j 16 --format=directory -f 
/pg_backup/backup/dump/mydb-20171108.dmp -U postgres -W mydb

This is the command I used to restore the database on the 9.6 server.
pg_restore -j 16 -C -d postgres /pg_backup/backup/dump/mydb-20171108.dmp

Everything appears OK; the dump and restore completed without any errors.

But when I compared the database size (postgres=# \l+) between the 9.2 server 
and the upgraded 9.6 server, the sizes were different.

on 9.2 it was 3776 MB
on 9.6 it was 1570 MB

I also did a few more databases using the same steps and they all appeared to 
be smaller. Is that normal?

Regards
Dylan


Re: [GENERAL] Why is PostgreSQL compiled on old OS versions?

2017-11-08 Thread John R Pierce

On 11/8/2017 11:28 AM, DrakoRod wrote:

Which is the best OS version to compile on, with the goal of building binaries 
that are "standard" or "more compatible"?



That's very difficult, because library versions change between major 
distro releases.  Stuff compiled for RHEL6/CentOS6 will run on 
RHEL7/CentOS7 if you install the corresponding -compat libraries, but 
that's about as good as it gets.




--
john r pierce, recycling bits in santa cruz





[GENERAL] wal configuration setting for fast streaming replication with logical decoding

2017-11-08 Thread Weiping Qu

Hi,

I intend to increase the speed of streaming replication with logical 
decoding using the following configuration:


wal_level = logical
fsync = on
synchronous_commit = off
wal_sync_method = fdatasync
wal_buffers = 256MB
wal_writer_delay = 2s

checkpoint_timeout = 15min
max_wal_size=10GB

The intention is to first let WAL records be buffered in the WAL buffers 
(by increasing wal_buffers to 256MB), turning off synchronous_commit, and 
increasing wal_writer_delay to 2 seconds.
The target WAL records should then be fetched directly from RAM through 
streaming replication to the external nodes, thus reducing I/O.
Also, to avoid expensive checkpoints, checkpoint_timeout and max_wal_size 
are increased.


However, as suggested online, wal_buffers should be no more than one 
WAL segment file, which is 16MB, and wal_writer_delay should be at the 
millisecond level.
Therefore, I would like to hear your opinions.
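
For comparison, a sketch of the same settings with more conventional values 
(everything not shown here is assumed to stay at its default):

wal_level = logical
synchronous_commit = off
wal_buffers = 16MB          # one WAL segment; larger values rarely help
wal_writer_delay = 200ms    # the default; flushes the WAL buffers sub-second
checkpoint_timeout = 15min
max_wal_size = 10GB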

In addition, I would like to fetch WAL records periodically (say every 
150 ms), which would cause WAL records to pile up in memory during each 
wal_writer_delay interval.
As also described online, when XLogInsertRecord is called, a new record 
is inserted into the WAL buffers; if there is no space, a few WAL records 
are moved to the kernel cache (buffer cache). Should I also set 
vm.dirty_background_ratio = 5 and vm.dirty_ratio = 80 to avoid disk I/O?


Looking forward to your kind help.
Best,
Weiping





Re: [GENERAL] Why is PostgreSQL compiled on old OS versions?

2017-11-08 Thread José María Terry Jiménez
On 8/11/17 at 20:28, DrakoRod wrote:

> Hi everyone!
>
> I want to develop an installer for several purposes, but I have a question:
> when I review the current PostgreSQL versions, I see Ubuntu 5 or RHEL 4,
> while currently we have Ubuntu 16 or RHEL 7. For example:
>
> PostgreSQL 9.6.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> 20120313 (Red Hat 4.4.7-16), 64-bit
>
> PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit
>
> Is this a standard, a convention, or for compatibility? Which is the best
> OS version to compile on, with the goal of building binaries that are
> "standard" or "more compatible"?
>
> Thanks for your help!
>
>
Hello

You're mistaken: those are gcc versions, not OS versions.

For example, on my CentOS 6 box:

cat /etc/redhat-release
CentOS release 6.9 (Final)

gcc -v
[...trimmed...]
gcc versión 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)

Best




[GENERAL] Why is PostgreSQL compiled on old OS versions?

2017-11-08 Thread DrakoRod
Hi everyone!

I want to develop an installer for several purposes, but I have a question:
when I review the current PostgreSQL versions, I see Ubuntu 5 or RHEL 4,
while currently we have Ubuntu 16 or RHEL 7. For example:

PostgreSQL 9.6.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-16), 64-bit

PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

Is this a standard, a convention, or for compatibility? Which is the best OS
version to compile on, with the goal of building binaries that are "standard"
or "more compatible"?

Thanks for your help! 



-
Dame un poco de fe, eso me bastará.
Rozvo Ware Solutions 




Re: [GENERAL] filter records by substring match of an postgresql array column

2017-11-08 Thread Jeff Janes
On Wed, Nov 8, 2017 at 4:28 AM, Arup Rakshit 
wrote:

>
> And to do I wanted to add an index like:
>
> CREATE INDEX trgm_idx_video_tags ON videos USING gist
> ((array_to_string(tags, ', ')) gist_trgm_ops)
>
> But on running this I get an error as:
>
> ERROR:  functions in index expression must be marked IMMUTABLE
>
> How can I fix this?
>
>
Wrap array_to_string with a text[] argument in a function and mark it as
immutable:

create function txt_array_to_string (text[]) returns text language sql
immutable as $$ select array_to_string($1,', ') $$;

And then build your index on that function.

I don't think there are any caveats with this.  array_to_string is not
marked immutable because it can work with dates and numbers, whose output
can change with configuration settings such as the timezone.  But when
given a text[] argument, I think it actually is immutable.
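
Putting it together, the index and query from the earlier message could then
look like this (a sketch using the wrapper above; gist_trgm_ops still comes
from the pg_trgm extension):

CREATE INDEX trgm_idx_video_tags ON videos
    USING gist (txt_array_to_string(tags) gist_trgm_ops);

-- the WHERE clause must use the same immutable wrapper for the index to be usable
SELECT v.*
FROM videos v
WHERE txt_array_to_string(v.tags) ILIKE '%web shorts%';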

Cheers,

Jeff


Re: [GENERAL] filter records by substring match of an postgresql array column

2017-11-08 Thread Arup Rakshit
I enabled the extension `pg_trgm`.

I thought to query like:

SELECT "videos".*
FROM "videos"
WHERE array_to_string(tags, ', ') ILIKE '%web shorts%'
  AND EXISTS (
        SELECT
        FROM unnest(tags) AS u(val)
        WHERE u.val ILIKE '%web shorts%'
      );

And to do that, I wanted to add an index like:

CREATE INDEX trgm_idx_video_tags ON videos
    USING gist ((array_to_string(tags, ', ')) gist_trgm_ops)

But on running this I get an error as:

ERROR:  functions in index expression must be marked IMMUTABLE

How can I fix this?



> On Nov 8, 2017, at 4:02 PM, Arup Rakshit  wrote:
> 
> Hi,
> 
> I have a videos table with a column called `tags` of type array. I would 
> like to select all videos where any string inside the tags column matches a 
> given substring. What method should I use? The *contains (`@>`) operator* 
> does full-string comparisons, as far as I understand.





Re: [GENERAL] filter records by substring match of an postgresql array column

2017-11-08 Thread Jeff Janes
On Nov 8, 2017 02:34, "Arup Rakshit"  wrote:

Hi,

I have a videos table with a column called `tags` of type array.
I would like to select all videos where any string inside the tags column
matches a given substring. What method should I use? The *contains (`@>`)
operator* does full-string comparisons, as far as I understand.



The extension parray_gin (
https://pgxn.org/dist/parray_gin/doc/parray_gin.html) offers the @@>
operator.

Cheers,

Jeff


[GENERAL] filter records by substring match of an postgresql array column

2017-11-08 Thread Arup Rakshit
Hi,

I have a videos table with a column called `tags` of type array. I would 
like to select all videos where any string inside the tags column matches a 
given substring. What method should I use? The *contains (`@>`) operator* 
does full-string comparisons, as far as I understand.



[GENERAL] DROP INDEX hangs

2017-11-08 Thread Arnaud L.

Hi all

Using PostgreSQL 9.3.4, compiled by Visual C++ build 1600, 32-bit (on 
Windows, obviously).
I noticed that my vacuumdb was hanging on a specific index, so I decided 
that the easiest path was to create a duplicate of it (CREATE INDEX 
CONCURRENTLY myindex_v2 ...), and drop the original one (DROP INDEX 
myindex).
Bad idea: the DROP INDEX command is hanging, doing no I/O but using a 
full CPU core.
It does not look like it is waiting for a lock (pg_locks shows very few 
locks right now, and absolutely none related to this pid).


The index is no longer listed in the table description, though 
(something was definitely wrong with this index).


I tried to cancel the query nicely via pgAdmin, but that does not help.
What can I do? Should I terminate this backend? What is the risk of 
terminating a DROP INDEX operation?
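
If it comes to that, a minimal sketch of the usual escalation path from
another session (4242 is a placeholder for the hanging backend's pid):

-- ask the backend to cancel its current query first
SELECT pg_cancel_backend(4242);

-- only if the cancel request has no effect, terminate the backend;
-- its open transaction is then rolled back
SELECT pg_terminate_backend(4242);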


Thanks for your advice!

--
Arnaud

