Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-13 Thread Chris Travers
On Wed, Nov 14, 2018 at 8:06 AM Sachin Kotwal  wrote:

> Hi PostgreSQL lovers,
>
> I heard news that Red Hat is going to be acquired by IBM. IBM has its own
> database. And they have a history of selling applications with their own
> hardware (software and hardware together).
>
> As far as I know, the PostgreSQL community has better support for the Red
> Hat family than for any other platform.
>

I don't know that that is the case.  PostgreSQL support is different on Debian,
RedHat, Gentoo, etc.  The Debian folks, for example, have a lot of tooling
around running many different instances of different versions on the same
host (as does Gentoo, by the way).  In that respect, RedHat seems behind ;-)
On the other hand, that lack of functionality means running it on RedHat
is a bit simpler, if more limited.
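
As a concrete illustration (an editorial aside, not part of the original
message), Debian's postgresql-common exposes that multi-cluster support through
commands such as these; the cluster name and port below are placeholders:

$ pg_lsclusters                               # list clusters: version, name, port, status
$ pg_createcluster 11 secondary --port 5433   # create a second PostgreSQL 11 cluster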

I say this as someone who has a lot of experience running PostgreSQL on all
of the above.  The platforms are different in how they approach PostgreSQL
and what they want you to be able to do with it.


>
> Is the community going to support/focus more on Debian platforms?
>

As the community is growing I suspect that all platforms will get more
focus.  But usually the question is what the distro does to support
PostgreSQL for its own use cases.


> Does the community have any plan to switch its main supported platform?
>

That question makes no sense.  We don't have a main supported platform.


> Please share if there is any other plan.
>
>
> --
>
> Thanks and Regards,
> Sachin Kotwal
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-13 Thread Sachin Kotwal
Hi PostgreSQL lovers,

I heard news that Red Hat is going to be acquired by IBM. IBM has its own
database. And they have a history of selling applications with their own
hardware (software and hardware together).

As far as I know, the PostgreSQL community has better support for the Red Hat
family than for any other platform.

Is the community going to support/focus more on Debian platforms?
Does the community have any plan to switch its main supported platform?
Please share if there is any other plan.


-- 

Thanks and Regards,
Sachin Kotwal


Re: Move cluster to new host, upgraded version

2018-11-13 Thread Adrian Klaver

On 11/13/18 3:47 PM, Rich Shepard wrote:

On Sun, 11 Nov 2018, Rich Shepard wrote:


Looks like the pg_dumpall '-h' option will act on the other host's data
directory.


   Worked as advertised. Just read the dumped file into the new 11.1 data
directory.


Just realized the question I should have asked is:

How did you get the pg_dumpall file processed by Postgres?

In other words how did you do it without a password?



   The new installation is asking for my password to access my databases.
Where do I turn this off?

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Move cluster to new host, upgraded version

2018-11-13 Thread Adrian Klaver

On 11/13/18 4:41 PM, Rich Shepard wrote:

On Tue, 13 Nov 2018, Adrian Klaver wrote:


No:
" Each of the first four fields can be a literal value, or *, which 
matches anything. "


Adrian,

   Okay.


If the record starts with local then that is for socket connections.
If you are connecting to a host, e.g. -h localhost, then you need to look
at the host records.


   I'm connecting from the same host on which the server is installed. I've
always used 'psql ' and been connected. The new desktop's


Please show complete connection command.


pg_hba.conf is identical to the old desktop's pg_hba.conf, but only the new
one is now asking for a password. Permissions on both are the same.


Can you show us the new pg_hba.conf?



Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Move cluster to new host, upgraded version

2018-11-13 Thread Rich Shepard

On Tue, 13 Nov 2018, Adrian Klaver wrote:


No:
" Each of the first four fields can be a literal value, or *, which matches 
anything. "


Adrian,

  Okay.


If the record starts with local then that is for socket connections.
If you are connecting to a host, e.g. -h localhost, then you need to look at the
host records.


  I'm connecting from the same host on which the server is installed. I've
always used 'psql ' and been connected. The new desktop's
pg_hba.conf is identical to the old desktop's pg_hba.conf, but only the new
one is now asking for a password. Permissions on both are the same.

Thanks,

Rich



Re: Move cluster to new host, upgraded version

2018-11-13 Thread Adrian Klaver

On 11/13/18 4:24 PM, Rich Shepard wrote:

On Tue, 13 Nov 2018, Adrian Klaver wrote:


You have two options:

1) The preferred one. Keep the password and create a .pgpass file to 
hold the password:

https://www.postgresql.org/docs/10/libpq-pgpass.html


Adrian,

   That's database-specific if I read the manual page correctly.


No:

" Each of the first four fields can be a literal value, or *, which 
matches anything. "





My guess is you had one on the other machine.


   Nope. I've been running postgres since 1997 and never used a password
since I'm the only one using the databases.


2) Change your auth method in pg_hba.conf:
https://www.postgresql.org/docs/10/client-authentication.html


   /var/lib/pgsql/11/data/pg_hba.conf has trust as the local authentication


If the record starts with local then that is for socket connections.

If you are connecting to a host, e.g. -h localhost, then you need to look
at the host records.


method for all databases. Perhaps I need to restart the server after loading
the databases. Will try that.
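
For illustration (an editorial sketch, not part of the original reply), the
difference looks like this in pg_hba.conf; with these two records a socket
connection such as 'psql mydb' needs no password, while 'psql -h localhost mydb'
matches the host line and is challenged:

# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  trust
host    all       all   127.0.0.1/32   md5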

Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Move cluster to new host, upgraded version

2018-11-13 Thread Rich Shepard

On Tue, 13 Nov 2018, Adrian Klaver wrote:


You have two options:

1) The preferred one. Keep the password and create a .pgpass file to hold the 
password:

https://www.postgresql.org/docs/10/libpq-pgpass.html


Adrian,

  That's database-specific if I read the manual page correctly.


My guess is you had one on the other machine.


  Nope. I've been running postgres since 1997 and never used a password
since I'm the only one using the databases.


2) Change your auth method in pg_hba.conf:
https://www.postgresql.org/docs/10/client-authentication.html


  /var/lib/pgsql/11/data/pg_hba.conf has trust as the local authentication
method for all databases. Perhaps I need to restart the server after loading
the databases. Will try that.

Thanks,

Rich



Re: Move cluster to new host, upgraded version

2018-11-13 Thread Adrian Klaver

On 11/13/18 3:47 PM, Rich Shepard wrote:

On Sun, 11 Nov 2018, Rich Shepard wrote:


Looks like the pg_dumpall '-h' option will act on the other host's data
directory.


   Worked as advertised. Just read the dumped file into the new 11.1 data
directory.

   The new installation is asking for my password to access my databases.
Where do I turn this off?


You have two options:

1) The preferred one. Keep the password and create a .pgpass file to 
hold the password:


https://www.postgresql.org/docs/10/libpq-pgpass.html

My guess is you had one on the other machine.
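
As an editorial aside, a .pgpass entry is a single colon-separated line and the
file must not be group- or world-readable; the user name and password below are
placeholders:

# hostname:port:database:username:password
localhost:5432:*:rich:secret

$ chmod 600 ~/.pgpass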

2) Change your auth method in pg_hba.conf:

https://www.postgresql.org/docs/10/client-authentication.html


Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: TOAST : partial detoasting : only a small part of a toasted value (for pgpointcloud)

2018-11-13 Thread Tom Lane
Rémi Cura  writes:
> So the pgpointcloud store sometimes very large groups of points into one
> row (TOASTED), something along few kB to few MB. TOAST would be in EXTERNAL
> mode (no compression).
> Sometimes we only want to access a part of this data (one or several blocks
> within the full data).
>> From what I  understand, for the moment it requires to fetch all the data
> from disk, then de-toast it, then select only the part of the data we are
> interested in.

> Yet I think it is possible to detoast only a subset of the data (and thus
> fetch only some part of the data), considering that the [doc on toast](
> https://www.postgresql.org/docs/11/storage-toast.html) says
>> " Use of EXTERNAL will make substring operations on wide text and bytea
> columns faster
>> (at the penalty of increased storage space) because these operations are
> optimized
>> to fetch only the required parts of the out-of-line value when it is not
> compressed. "

> So my question is: how does it work, and how easy would it be to implement
> for pgpointcloud?

See PG_DETOAST_DATUM_SLICE and users of that macro.

regards, tom lane



TOAST : partial detoasting : only a small part of a toasted value (for pgpointcloud)

2018-11-13 Thread Rémi Cura
Hi dear list,
I have a tricky question about TOASTED memory in Postgres related to the
[pgpointcloud](https://github.com/pgpointcloud/pointcloud) extension.
(using Postgres 11 if it matters)

So pgpointcloud sometimes stores very large groups of points in one
row (TOASTed), anywhere from a few kB to a few MB. TOAST would be in EXTERNAL
mode (no compression).

Sometimes we only want to access a part of this data (one or several blocks
within the full data).
From what I understand, for the moment it requires fetching all the data
from disk, then de-toasting it, then selecting only the part of the data we are
interested in.

Yet I think it is possible to detoast only a subset of the data (and thus
fetch only some part of the data), considering that the [doc on toast](
https://www.postgresql.org/docs/11/storage-toast.html) says
> " Use of EXTERNAL will make substring operations on wide text and bytea
columns faster
> (at the penalty of increased storage space) because these operations are
optimized
> to fetch only the required parts of the out-of-line value when it is not
compressed. "

So my question is: how does it work, and how easy would it be to implement
for pgpointcloud?
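
(As an editorial sketch of the documented substring optimization at the SQL
level; the table and column names below are made up:)

create table patches (pa bytea);
alter table patches alter column pa set storage external;  -- out-of-line, uncompressed
-- with EXTERNAL storage this fetches only the TOAST chunks covering the
-- requested byte range instead of detoasting the whole value
select substring(pa from 1 for 4096) from patches;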

Many thanks for your time and help
Remi-C


Re: Move cluster to new host, upgraded version

2018-11-13 Thread Rich Shepard

On Sun, 11 Nov 2018, Rich Shepard wrote:


Looks like the pg_dumpall '-h' option will act on the other host's data
directory.


  Worked as advertised. Just read the dumped file into the new 11.1 data
directory.
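
(For the record, an editorial sketch of that sequence, run from the new host;
"oldhost" and the file name are placeholders:)

$ pg_dumpall -h oldhost -U postgres > cluster.sql
$ psql -U postgres -d postgres -f cluster.sql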

  The new installation is asking for my password to access my databases.
Where do I turn this off?

Rich



Re: WTF with hash index?

2018-11-13 Thread Alvaro Herrera
On 2018-Nov-13, Олег Самойлов wrote:

> Much better. What about copy-pasting the algorithm from
> gin(jsonb_path_ops) into the hash index?

You're welcome to submit patches.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: WTF with hash index?

2018-11-13 Thread Олег Самойлов
I am just doing an experiment to see which type, and which index, is most
suitable for enumerations in PostgreSQL. And this effect looked very strange to
me. There is another hash-based index in PostgreSQL: gin(jsonb_path_ops) for
the jsonb type. It also uses hashes internally, but it is much better.
Here is an example, based on the previous one.

create table jender (jdoc jsonb);

insert into jender (jdoc) select ('{"gender": "'||gender||'"}')::jsonb from 
gender;

create index jender_hash on jender using gin (jdoc jsonb_path_ops);

=> \d+
                  List of relations
 Schema |  Name  | Type  | Owner | Size  | Description
--------+--------+-------+-------+-------+-------------
 public | gender | table | olleg | 35 MB |
 public | jender | table | olleg | 54 MB |
(2 rows)

=> \di+
                           List of relations
 Schema |     Name     | Type  | Owner | Table  |  Size   | Description
--------+--------------+-------+-------+--------+---------+-------------
 public | gender_btree | index | olleg | gender | 21 MB   |
 public | gender_hash  | index | olleg | gender | 47 MB   |
 public | jender_hash  | index | olleg | jender | 1104 kB |
(3 rows)

Much better. What about copy-pasting the algorithm from gin(jsonb_path_ops)
into the hash index?

Re: WTF with hash index?

2018-11-13 Thread Andreas Kretschmer




On 13.11.2018 at 19:12, Ron wrote:

On 11/13/2018 12:07 PM, Andreas Kretschmer wrote:



On 13.11.2018 at 17:42, Олег Самойлов wrote:
insert into gender (gender) select case when random<0.50 then 
'female' when random<0.99 then 'male' else 'other' end from (select 
random() as random, generate_series(1,:table_size)) as subselect;


is that really your intended data distribution? 99% male?


select case when random<0.50 then 'female'
when random<0.99 then 'male'
    else 'other' end
from (select random() as random, generate_series(1,:table_size)) as 
subselect;


Shouldn't that make 49% male?



You are right, my fault :-(.
The CASE statement is exited as soon as the first matching condition is true.
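
(As a quick editorial check, counting the rows shows the roughly 50/49/1 split:)

select gender, count(*) from gender group by gender order by 2 desc;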

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: WTF with hash index?

2018-11-13 Thread Andreas Kretschmer




On 13.11.2018 at 17:42, Олег Самойлов wrote:
insert into gender (gender) select case when random<0.50 then 'female' 
when random<0.99 then 'male' else 'other' end from (select random() as 
random, generate_series(1,:table_size)) as subselect;


is that really your intended data distribution? 99% male?

Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: WTF with hash index?

2018-11-13 Thread Ron

On 11/13/2018 12:07 PM, Andreas Kretschmer wrote:



On 13.11.2018 at 17:42, Олег Самойлов wrote:
insert into gender (gender) select case when random<0.50 then 'female' 
when random<0.99 then 'male' else 'other' end from (select random() as 
random, generate_series(1,:table_size)) as subselect;


is that really your intended data distribution? 99% male?


select case when random<0.50 then 'female'
when random<0.99 then 'male'
    else 'other' end
from (select random() as random, generate_series(1,:table_size)) as subselect;

Shouldn't that make 49% male?

--
Angular momentum makes the world go 'round.



Re: WTF with hash index?

2018-11-13 Thread Laurenz Albe
Олег Самойлов wrote:
> \set table_size 100
> begin;
> create table gender (gender varchar);
> 
> insert into gender (gender) select case when random<0.50 then 'female' when 
> random<0.99 then 'male' else 'other' end from (select random() as random, 
> generate_series(1,:table_size)) as subselect;
> 
> create index gender_btree on gender using btree (gender);
> create index gender_hash on gender using hash (gender);
> commit;
> vacuum full analyze;
> 
> Vacuum full is not necessary here, just a little voodoo programming. I
> expected that the hash index would be much smaller and quicker than the btree
> index, because it doesn’t keep values inside itself, only hashes. But:
> 
> => \d+
>                   List of relations
>  Schema |  Name  | Type  | Owner | Size  | Description
> --------+--------+-------+-------+-------+-------------
>  public | gender | table | olleg | 35 MB |
> (1 row)
> 
> => \di+
>                          List of relations
>  Schema |     Name     | Type  | Owner | Table  | Size  | Description
> --------+--------------+-------+-------+--------+-------+-------------
>  public | gender_btree | index | olleg | gender | 21 MB |
>  public | gender_hash  | index | olleg | gender | 47 MB |
> (2 rows)
> 
> The hash index is not only bigger than the btree index, but also bigger than
> the table itself. What is wrong with the hash index?

I guess the problem here is that there are so few distinct values, so
all the index items end up in only three hash buckets, forming large
linked lists.

I can't tell off-hand why that would make the index so large though.

Anyway, indexes are pretty useless in such a case.

Is the behavior the same if you have many distinct values?
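
(An editorial sketch of such a test, in the style of the original script;
md5() just manufactures distinct values:)

begin;
create table many_vals (val text);
insert into many_vals (val) select md5(random()::text) from generate_series(1, 1000000);
create index many_vals_btree on many_vals using btree (val);
create index many_vals_hash  on many_vals using hash (val);
commit;
vacuum analyze many_vals;
\di+ many_vals_*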

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




WTF with hash index?

2018-11-13 Thread Олег Самойлов
CentOS 7

$ rpm -q postgresql10
postgresql10-10.6-1PGDG.rhel7.x86_64

SQL script for psql:

\set table_size 100
begin;
create table gender (gender varchar);

insert into gender (gender) select case when random<0.50 then 'female' when 
random<0.99 then 'male' else 'other' end from (select random() as random, 
generate_series(1,:table_size)) as subselect;

create index gender_btree on gender using btree (gender);
create index gender_hash on gender using hash (gender);
commit;
vacuum full analyze;

Vacuum full is not necessary here, just a little voodoo programming. I expected
that the hash index would be much smaller and quicker than the btree index,
because it doesn’t keep values inside itself, only hashes. But:

=> \d+
                  List of relations
 Schema |  Name  | Type  | Owner | Size  | Description
--------+--------+-------+-------+-------+-------------
 public | gender | table | olleg | 35 MB |
(1 row)

=> \di+
                        List of relations
 Schema |     Name     | Type  | Owner | Table  | Size  | Description
--------+--------------+-------+-------+--------+-------+-------------
 public | gender_btree | index | olleg | gender | 21 MB |
 public | gender_hash  | index | olleg | gender | 47 MB |
(2 rows)

The hash index is not only bigger than the btree index, but also bigger than
the table itself. What is wrong with the hash index?

Re: LOG: incomplete startup packet

2018-11-13 Thread Pavel Stehule
On Tue, 13 Nov 2018 at 16:42, Pavel Demidov  wrote:

> Hello,
> Thank you for your answer. If there are a lot of 'LOG: incomplete startup
> packet' messages in the postgres log, will it cause the log to overflow?
> Is it possible to filter them out or prevent them from being written?
>

It is garbage. Usually it means nothing, but it is better to live without
this garbage in the log.
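
(Editorial aside: one way to at least see where these connections come from is
to include the remote host in the log line prefix; %r is the remote host and
port.)

alter system set log_line_prefix = '%m [%p] %r ';
select pg_reload_conf();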

Pavel



> Regards,
> Paul
>
> On Thu, Nov 8, 2018 at 1:06 PM Pavel Stehule 
> wrote:
>
>> Hi
>>
>> On Thu, 8 Nov 2018 at 10:19, Pavel Demidov  wrote:
>>
>>> Hello,
>>>
>>> From time to time I find the following message in postgres.log:
>>>
>>> 01:35:22.608  LOG: incomplete startup packet
>>> 01:35:27.147  LOG: incomplete startup packet
>>> 01:35:52.593  LOG: incomplete startup packet
>>> 01:35:57.146  LOG: incomplete startup packet
>>>
>>> 01:36:22.596  LOG: incomplete startup packet
>>> 01:36:27.146  LOG: incomplete startup packet
>>> 01:36:52.593  LOG: incomplete startup packet
>>> 01:36:57.145  LOG: incomplete startup packet
>>>
>>> Is there any way to identify the source? What do I need to collect and
>>> what do I need to look at?
>>>
>>> Best Regards
>>> Paul
>>>
>>
>> some monitoring tools do that
>>
>> Regards
>>
>> Pavel
>>
>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Ravi Krishna
> 
> I apologize for top posting,  Google hid all of the other stuff.
> 

Is it only me who thinks that when it comes to destroying email as a
communication tool, no one has done a better job than effing gmail?




Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
I apologize for top posting; Google hid all of the other stuff.

George
iGLASS Networks
www.iglass.net



>>
>>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
CREATE OR REPLACE VIEW tickets AS
 SELECT *, tableoid
   FROM public.tickets
  WHERE ( ticketsid IN (SELECT ticketsid
FROM machtick));

iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 10:32 AM Adrian Klaver 
wrote:

> On 11/13/18 6:54 AM, George Woodring wrote:
> > The tickets view restricts which tickets can be seen by the schema.
> >
> > 9.3 must have created the view in the same column order as the table
> > (which is the case looking at one of our 9.3 databases which we have not
> > updated yet), which is why we never saw the issue before.
>
> What is the view definition?
>
> >
> > George
> > iGLASS Networks
> > www.iglass.net 
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: LOG: incomplete startup packet

2018-11-13 Thread Pavel Demidov
Hello,
Thank you for your answer. If there are a lot of 'LOG: incomplete startup
packet' messages in the postgres log, will it cause the log to overflow?
Is it possible to filter them out or prevent them from being written?

Regards,
Paul

On Thu, Nov 8, 2018 at 1:06 PM Pavel Stehule 
wrote:

> Hi
>
> On Thu, 8 Nov 2018 at 10:19, Pavel Demidov  wrote:
>
>> Hello,
>>
>> From time to time I find the following message in postgres.log:
>>
>> 01:35:22.608  LOG: incomplete startup packet
>> 01:35:27.147  LOG: incomplete startup packet
>> 01:35:52.593  LOG: incomplete startup packet
>> 01:35:57.146  LOG: incomplete startup packet
>>
>> 01:36:22.596  LOG: incomplete startup packet
>> 01:36:27.146  LOG: incomplete startup packet
>> 01:36:52.593  LOG: incomplete startup packet
>> 01:36:57.145  LOG: incomplete startup packet
>>
>> Is there any way to identify the source? What do I need to collect and
>> what do I need to look at?
>>
>> Best Regards
>> Paul
>>
>
> some monitoring tools do that
>
> Regards
>
> Pavel
>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Adrian Klaver

On 11/13/18 6:54 AM, George Woodring wrote:

The tickets view restricts which tickets can be seen by the schema.

9.3 must have created the view in the same column order as the table 
(which is the case looking at one of our 9.3 databases which we have not 
updated yet), which is why we never saw the issue before.


What is the view definition?



George
iGLASS Networks
www.iglass.net 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
The tickets view restricts which tickets can be seen by the schema.

9.3 must have created the view in the same column order as the table (which
is the case looking at one of our 9.3 databases which we have not updated
yet), which is why we never saw the issue before.

George
iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 9:46 AM Adrian Klaver 
wrote:

> On 11/13/18 6:27 AM, George Woodring wrote:
> > I think the issue is that the function is not putting the data into the
> > tickets%ROWTYPE correctly.  When I do \d on public.tickets and
> > iss-hackers.tickets, the columns are in a different order.
> >
>
> >
> > The error message is saying column 2 is not a timestamp, whereas in the
> > public table column 2 is a timestamp.  If I change my SELECT in the function
> > from SELECT * to SELECT opendate  I can fix my issue easily.
>
> Or change this:
>
>   SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
>
> to:
>
>   SELECT * INTO ticket FROM public.tickets WHERE ticketsid=tid;
>
> This will match the ROWTYPE:
>
> ticket public.tickets%ROWTYPE;
>
> >
> > George
> > iGLASS Networks
> > www.iglass.net 
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Adrian Klaver

On 11/13/18 6:27 AM, George Woodring wrote:
I think the issue is that the function is not putting the data into the 
tickets%ROWTYPE correctly.  When I do \d on public.tickets and 
iss-hackers.tickets, the columns are in a different order.






The error message is saying column 2 is not a timestamp, whereas in the public
table column 2 is a timestamp.  If I change my SELECT in the function
from SELECT * to SELECT opendate  I can fix my issue easily.


Or change this:

 SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;

to:

 SELECT * INTO ticket FROM public.tickets WHERE ticketsid=tid;

This will match the ROWTYPE:

ticket public.tickets%ROWTYPE;
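
Another option (an editorial aside, assuming the function should always read
the public tables regardless of the caller's search_path) is to pin the search
path on the function itself:

ALTER FUNCTION ticket_summary(int4) SET search_path = public;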



George
iGLASS Networks
www.iglass.net 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Adrian Klaver

On 11/13/18 6:27 AM, George Woodring wrote:
I think the issue is that the function is not putting the data into the 
tickets%ROWTYPE correctly.  When I do \d on public.tickets and 
iss-hackers.tickets, the columns are in a different order.


That is because you have a table tickets in the public schema and a view 
tickets in the iss-hackers schema.


Is that what you really want?
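
(An editorial illustration: this query shows which relation a bare name
resolves to under the current search_path:)

SET search_path TO "iss-hackers", public;
SELECT n.nspname, c.relkind   -- relkind 'v' means a view won the lookup
  FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.oid = 'tickets'::regclass;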




\d public.tickets
    Column    |           Type           |                         Modifiers
--------------+--------------------------+--------------------------------------------------------------
 ticketsid    | integer                  | not null default nextval('tickets_ticketsid_seq'::regclass)
 opendate     | timestamp with time zone | default now()
 state        | smallint                 | default 1
 opentech     | character varying(50)    |
 priority     | smallint                 | default 10
 severity     | smallint                 | default 30
 problem      | character varying(300)   |
 summary      | text                     |
 parent       | integer                  |
 remed        | boolean                  | default false
 remed2       | boolean                  | default false
 remed_hstart | timestamp with time zone |
 autoclean    | boolean                  | default false
 remoteid     | character varying        |
 remotesync   | timestamp with time zone |
 sla_time     | interval                 |
 sla_alarm    | boolean                  |

\d iss-hackers.tickets
             View "iss-hackers.tickets"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 ticketsid    | integer                  |
 opentech     | character varying(50)    |
 summary      | text                     |
 parent       | integer                  |
 opendate     | timestamp with time zone |
 priority     | smallint                 |
 problem      | character varying(300)   |
 autoclean    | boolean                  |
 state        | smallint                 |
 severity     | smallint                 |
 remed        | boolean                  |
 remed2       | boolean                  |
 remoteid     | character varying        |
 remotesync   | timestamp with time zone |
 sla_time     | interval                 |
 sla_alarm    | boolean                  |
 remed_hstart | timestamp with time zone |
 tableoid     | oid                      |


The error message is saying column 2 is not a timestamp, whereas in the public
table column 2 is a timestamp.  If I change my SELECT in the function
from SELECT * to SELECT opendate  I can fix my issue easily.


George
iGLASS Networks
www.iglass.net 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Db restore Error literal carriage return found hint use \r

2018-11-13 Thread Adrian Klaver

On 11/13/18 1:12 AM, Om Prakash Jaiswal wrote:

I am using postgresql 10.6
While restoring data using psql, I get the error "literal carriage return
found" at line no. 4446578, with the hint to use \r.


Can you show the entire row(line)?



I used \i  and \r , and am still getting the same error.
The backup is a 35 GB plain-format database backup.
The error occurs on a JSON column while it is being copied in during the
restore operation.


Please solve it.

Regards
Om Prakash
Bangalore

Sent from Yahoo Mail on Android 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
I think the issue is that the function is not putting the data into the
tickets%ROWTYPE correctly.  When I do \d on public.tickets and
iss-hackers.tickets, the columns are in a different order.

\d public.tickets
    Column    |           Type           |                         Modifiers
--------------+--------------------------+--------------------------------------------------------------
 ticketsid    | integer                  | not null default nextval('tickets_ticketsid_seq'::regclass)
 opendate     | timestamp with time zone | default now()
 state        | smallint                 | default 1
 opentech     | character varying(50)    |
 priority     | smallint                 | default 10
 severity     | smallint                 | default 30
 problem      | character varying(300)   |
 summary      | text                     |
 parent       | integer                  |
 remed        | boolean                  | default false
 remed2       | boolean                  | default false
 remed_hstart | timestamp with time zone |
 autoclean    | boolean                  | default false
 remoteid     | character varying        |
 remotesync   | timestamp with time zone |
 sla_time     | interval                 |
 sla_alarm    | boolean                  |

\d iss-hackers.tickets
             View "iss-hackers.tickets"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 ticketsid    | integer                  |
 opentech     | character varying(50)    |
 summary      | text                     |
 parent       | integer                  |
 opendate     | timestamp with time zone |
 priority     | smallint                 |
 problem      | character varying(300)   |
 autoclean    | boolean                  |
 state        | smallint                 |
 severity     | smallint                 |
 remed        | boolean                  |
 remed2       | boolean                  |
 remoteid     | character varying        |
 remotesync   | timestamp with time zone |
 sla_time     | interval                 |
 sla_alarm    | boolean                  |
 remed_hstart | timestamp with time zone |
 tableoid     | oid                      |


The error message is saying column 2 is not a timestamp, whereas in the public
table column 2 is a timestamp.  If I change my SELECT in the function
from SELECT * to SELECT opendate  I can fix my issue easily.

George
iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 8:44 AM Pavel Stehule 
wrote:

> Hi
>
> On Tue, 13 Nov 2018 at 14:18, George Woodring <
> george.woodr...@iglass.net> wrote:
>
>> We are having an issue with one of our plpgsql functions after migrating
>> from 9.3 to 9.6.  The function works fine until you change the search path.
>>
>> psql (9.6.10)
>> Type "help" for help.
>>
>> woody=> select ticket_summary(8154);
>>   ticket_summary
>> ---
>>  {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
>> (1 row)
>>
>> woody=> set search_path to "iss-hackers", public;
>> SET
>> woody=> select ticket_summary(8154);
>> ERROR:  invalid input syntax for type timestamp with time zone: "woodring"
>> CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL
>> statement
>>
>> It is confused about which column is which after the change.
>>
>> The tables used by the function are:
>> public.tickets - A table with 3 child tables
>> iss-hackers.tickets - A view of public.tickets with a where clause.
>> public.followups - A table with 3 child tables.
>>
>> CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
>>DECLARE
>>   tid ALIAS FOR $1;
>>   cstate public.followups.state%TYPE := 1;
>>   ticket public.tickets%ROWTYPE;
>>   followup public.followups%ROWTYPE;
>>   summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
>> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';
>>   lastdate public.followups.date%TYPE;
>>BEGIN
>>   SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
>>   IF NOT FOUND THEN
>>  return summary;
>>   END IF;
>>   lastdate := ticket.opendate;
>>   FOR followup IN SELECT * FROM public.followups WHERE ticketsid=tid
>> AND state IS NOT NULL ORDER BY date LOOP
>>  summary[cstate] := summary[cstate] + extract( EPOCH FROM
>> (followup.date - lastdate))::int;
>>  cstate := followup.state;
>>  lastdate := followup.date;
>>   END LOOP;
>>   summary[cstate] := summary[cstate] + extract( EPOCH FROM
>> (current_timestamp - lastdate))::int;
>>   RETURN summary;
>>END;
>> $$ LANGUAGE plpgsql;
>>
>> I assume I can fix this by putting the function into each of the schemas,
>> but I thought I would ask opinions before doing so.
>>
>
> It looks strange. Maybe you have some garbage in the iss-hackers schema
> created at upgrade time.
>
> Hard to say what is wrong without complete schema.
>
> Regards
>
> Pavel
>
>
>>  

Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Adrian Klaver

On 11/13/18 5:17 AM, George Woodring wrote:
We are having an issue with one of our plpgsql functions after migrating 
from 9.3 to 9.6.  The function works fine until you change the search path.


psql (9.6.10)
Type "help" for help.

woody=> select ticket_summary(8154);
                           ticket_summary
---
  {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
(1 row)

woody=> set search_path to "iss-hackers", public;
SET
woody=> select ticket_summary(8154);
ERROR:  invalid input syntax for type timestamp with time zone: "woodring"


Well, the above is the root of the issue: the string 'woodring' is being
used as a timestamp entry. Somewhere wires are getting crossed.


In the function below you have:

 SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;

This is the only place where you do not schema qualify a table.

Is there more than one tickets table?



CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL statement

It is confused about which column is which after the change.

The tables used by the function are:
public.tickets - A table with 3 child tables
iss-hackers.tickets - A view of public.tickets with a where clause.
public.followups - A table with 3 child tables.

CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
    DECLARE
       tid ALIAS FOR $1;
       cstate public.followups.state%TYPE := 1;
       ticket public.tickets%ROWTYPE;
       followup public.followups%ROWTYPE;
       summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';

       lastdate public.followups.date%TYPE;
    BEGIN
       SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
       IF NOT FOUND THEN
          return summary;
       END IF;
       lastdate := ticket.opendate;
       FOR followup IN SELECT * FROM public.followups WHERE 
ticketsid=tid AND state IS NOT NULL ORDER BY date LOOP
          summary[cstate] := summary[cstate] + extract( EPOCH FROM 
(followup.date - lastdate))::int;

          cstate := followup.state;
          lastdate := followup.date;
       END LOOP;
       summary[cstate] := summary[cstate] + extract( EPOCH FROM 
(current_timestamp - lastdate))::int;

       RETURN summary;
    END;
$$ LANGUAGE plpgsql;

I assume I can fix this by putting the function into each of the 
schemas, but I thought I would ask opinions before doing so.


  Thanks,
George Woodring
iGLASS Networks
www.iglass.net 



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Pavel Stehule
Hi

On Tue, 13 Nov 2018 at 14:18, George Woodring <
george.woodr...@iglass.net> wrote:

> We are having an issue with one of our plpgsql functions after migrating
> from 9.3 to 9.6.  The function works fine until you change the search path.
>
> psql (9.6.10)
> Type "help" for help.
>
> woody=> select ticket_summary(8154);
>   ticket_summary
> ---
>  {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
> (1 row)
>
> woody=> set search_path to "iss-hackers", public;
> SET
> woody=> select ticket_summary(8154);
> ERROR:  invalid input syntax for type timestamp with time zone: "woodring"
> CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL
> statement
>
> It is confused about which column is which after the change.
>
> The tables used by the function are:
> public.tickets - A table with 3 child tables
> iss-hackers.tickets - A view of public.tickets with a where clause.
> public.followups - A table with 3 child tables.
>
> CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
>DECLARE
>   tid ALIAS FOR $1;
>   cstate public.followups.state%TYPE := 1;
>   ticket public.tickets%ROWTYPE;
>   followup public.followups%ROWTYPE;
>   summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';
>   lastdate public.followups.date%TYPE;
>BEGIN
>   SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
>   IF NOT FOUND THEN
>  return summary;
>   END IF;
>   lastdate := ticket.opendate;
>   FOR followup IN SELECT * FROM public.followups WHERE ticketsid=tid
> AND state IS NOT NULL ORDER BY date LOOP
>  summary[cstate] := summary[cstate] + extract( EPOCH FROM
> (followup.date - lastdate))::int;
>  cstate := followup.state;
>  lastdate := followup.date;
>   END LOOP;
>   summary[cstate] := summary[cstate] + extract( EPOCH FROM
> (current_timestamp - lastdate))::int;
>   RETURN summary;
>END;
> $$ LANGUAGE plpgsql;
>
> I assume I can fix this by putting the function into each of the schemas,
> but I thought I would ask opinions before doing so.
>

It looks strange. Maybe you have some garbage in the iss-hackers schema created
at upgrade time.

Hard to say what is wrong without complete schema.

Regards

Pavel


>  Thanks,
> George Woodring
> iGLASS Networks
> www.iglass.net
>


Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
We are having an issue with one of our plpgsql functions after migrating
from 9.3 to 9.6.  The function works fine until you change the search path.

psql (9.6.10)
Type "help" for help.

woody=> select ticket_summary(8154);
  ticket_summary
---
 {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
(1 row)

woody=> set search_path to "iss-hackers", public;
SET
woody=> select ticket_summary(8154);
ERROR:  invalid input syntax for type timestamp with time zone: "woodring"
CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL statement

It is confused about which column is which after the change.

The tables used by the function are:
public.tickets - A table with 3 child tables
iss-hackers.tickets - A view of public.tickets with a where clause.
public.followups - A table with 3 child tables.

CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
   DECLARE
  tid ALIAS FOR $1;
  cstate public.followups.state%TYPE := 1;
  ticket public.tickets%ROWTYPE;
  followup public.followups%ROWTYPE;
  summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';
  lastdate public.followups.date%TYPE;
   BEGIN
  SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
  IF NOT FOUND THEN
 return summary;
  END IF;
  lastdate := ticket.opendate;
  FOR followup IN SELECT * FROM public.followups WHERE ticketsid=tid
AND state IS NOT NULL ORDER BY date LOOP
 summary[cstate] := summary[cstate] + extract( EPOCH FROM
(followup.date - lastdate))::int;
 cstate := followup.state;
 lastdate := followup.date;
  END LOOP;
  summary[cstate] := summary[cstate] + extract( EPOCH FROM
(current_timestamp - lastdate))::int;
  RETURN summary;
   END;
$$ LANGUAGE plpgsql;

I assume I can fix this by putting the function into each of the schemas,
but I thought I would ask opinions before doing so.

 Thanks,
George Woodring
iGLASS Networks
www.iglass.net


Re: pg9.6 when is a promoted cluster ready to accept "rewind" request?

2018-11-13 Thread talk to ben
Hi,

You might have to wait for pg_is_in_recovery() to return false after the
promotion (in 9.6, pg_ctl promote doesn't wait for the promotion to complete,
unlike 10).  [1]

You should CHECKPOINT between 2 and 3 (or wait for the first checkpoint to
finish).
In the thread [2], Michael Paquier explains that:

" This makes the promoted standby update its
timeline number in the on-disk control file, which is used by pg_rewind
to check if a rewind needs to happen or not. "
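
(A minimal editorial sketch of that sequence on the promoted node:)

-- after pg_ctl promote:
SELECT pg_is_in_recovery();   -- repeat until this returns false
CHECKPOINT;                   -- persists the new timeline in pg_control before pg_rewind runs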

Benoit,

[1] https://paquier.xyz/postgresql-2/postgres-10-wait-pgctl-promote/
[2]
https://www.postgresql.org/message-id/flat/20180804205026.GJ20967%40paquier.xyz#f89ffda99fb5e7355e7499f496a712f7


Db restore Error literal carriage return found hint use \r

2018-11-13 Thread Om Prakash Jaiswal
I am using PostgreSQL 10.6. While restoring data using psql, I get the error
"literal carriage return found" at line no. 4446578, with the hint to use \r.
I used \i  and \r , and am still getting the same error. The backup is a 35 GB
plain-format database backup. The error occurs on a JSON column while it is
being copied in during the restore operation.

Please solve it.

Regards,
Om Prakash
Bangalore

Sent from Yahoo Mail on Android