Re: what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
So it's an optimization to reuse existing but currently unused temp
schemas, correct?

On Mon, Jul 10, 2023 at 11:22 AM David G. Johnston
 wrote:
>
> On Mon, Jul 10, 2023 at 9:20 AM Ted Toth  wrote:
>>
>> I don't see that the schema is removed when the session is over and I
>> see other sessions come along later and use it. I'm assuming here that
>> a session is started on connect and ended when the connection is
>> closed.
>>
>
> The first time a session needs a temporary schema it is assigned one which is 
> then immediately cleared out.
>
> David J.
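
[Editor's note] The per-session assignment described above can be observed from the catalogs; a small sketch, not from the thread (pg_my_temp_schema() returns 0 until the session has been assigned a temp schema):

```sql
-- Temp namespaces that exist in the database (one pair per session slot):
SELECT nspname FROM pg_namespace
WHERE nspname LIKE 'pg_temp_%' OR nspname LIKE 'pg_toast_temp_%';

-- The temp schema assigned to the current session, if any:
SELECT pg_my_temp_schema()::regnamespace;
```

Creating a temp table in a second concurrent session makes the next numbered pair appear, which matches the isolation-per-session explanation below.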




Re: what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
I don't see that the schema is removed when the session is over and I
see other sessions come along later and use it. I'm assuming here that
a session is started on connect and ended when the connection is
closed.

On Mon, Jul 10, 2023 at 9:21 AM David G. Johnston
 wrote:
>
> On Mon, Jul 10, 2023 at 7:18 AM Ted Toth  wrote:
>>
>> When a temp table is created I see pg_temp_NNN schemas (for example
>> pg_temp_3, pg_toast_temp_3) created. When/why are additional temp
>> schemas created (pg_temp_4/pg_toast_temp_4)?
>>
>
> Temporary schemas are isolated to the session they are created in.  Hence, 
> you get multiple temporary schemas if you have concurrent sessions using 
> temporary objects.
>
> David J.




what causes new temp schemas to be created

2023-07-10 Thread Ted Toth
When a temp table is created I see pg_temp_NNN schemas (for example
pg_temp_3, pg_toast_temp_3) created. When/why are additional temp
schemas created (pg_temp_4/pg_toast_temp_4)?

Ted




Re: temp table security labels

2023-07-07 Thread Ted Toth
It seems to me that sepgsql_schema_post_create should be doing a
selabel_lookup and using that default label, if one exists, instead of
computing the label ... hmm, I'll have to think about it

On Fri, Jul 7, 2023 at 10:44 AM Ted Toth  wrote:
>
> I see it now sepgsql_schema_post_create is doing it ... it's just not
> doing what I think it should :(
>
> On Fri, Jul 7, 2023 at 8:05 AM Ted Toth  wrote:
> >
> > Also AFAICT the schema created for the temp table does not get deleted
> > at the end of the session which also causes issues.
> >
> > On Fri, Jul 7, 2023 at 7:43 AM Ted Toth  wrote:
> > >
> > > I'm seeing security labels set on db objects when a temporary table is
> > > created but they don't match the labels I've configured in my
> > > sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> > > LABEL ..." command for the temp tables but something else must be
> > > doing so because I see pg_temp_NNN in pg_seclabels. So I'm confused at
> > > how the objects are being labeled, can anyone help me understand how
> > > this is happening?
> > >
> > > Ted




Re: temp table security labels

2023-07-07 Thread Ted Toth
I see it now sepgsql_schema_post_create is doing it ... it's just not
doing what I think it should :(

On Fri, Jul 7, 2023 at 8:05 AM Ted Toth  wrote:
>
> Also AFAICT the schema created for the temp table does not get deleted
> at the end of the session which also causes issues.
>
> On Fri, Jul 7, 2023 at 7:43 AM Ted Toth  wrote:
> >
> > I'm seeing security labels set on db objects when a temporary table is
> > created but they don't match the labels I've configured in my
> > sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> > LABEL ..." command for the temp tables but something else must be
> > doing so because I see pg_temp_NNN in pg_seclabels. So I'm confused at
> > how the objects are being labeled, can anyone help me understand how
> > this is happening?
> >
> > Ted




Re: temp table security labels

2023-07-07 Thread Ted Toth
Also AFAICT the schema created for the temp table does not get deleted
at the end of the session which also causes issues.

On Fri, Jul 7, 2023 at 7:43 AM Ted Toth  wrote:
>
> I'm seeing security labels set on db objects when a temporary table is
> created but they don't match the labels I've configured in my
> sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> LABEL ..." command for the temp tables but something else must be
> doing so because I see pg_temp_NNN in pg_seclabels. So I'm confused at
> how the objects are being labeled, can anyone help me understand how
> this is happening?
>
> Ted




temp table security labels

2023-07-07 Thread Ted Toth
I'm seeing security labels set on db objects when a temporary table is
created, but they don't match the labels I've configured in my
sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
LABEL ..." command for the temp tables, but something else must be
doing so because I see pg_temp_NNN in pg_seclabels. I'm confused about
how the objects are being labeled; can anyone help me understand how
this is happening?

Ted




dynamic partition table inserts

2023-01-25 Thread Ted Toth
I'm creating partition tables on the fly following the technique described
in https://minervadb.com/index.php/postgresql-dynamic-partitioning/ .
Regardless of whether a partition table is created or already exists, all I
really want to do is change the destination table (TG_TABLE_NAME?) of the
insert to the partition table. Is this possible, or is my only option to
generate a new insert as in the example?

Ted
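
[Editor's note] As far as I know, a trigger cannot retarget its own INSERT by reassigning TG_TABLE_NAME; the TG_* variables are read-only descriptions of the firing context. The usual pattern is the one in the linked article: issue a fresh INSERT against the child and suppress the original. A minimal plpgsql sketch, with the child-naming scheme and the `ts` column hypothetical:

```sql
CREATE OR REPLACE FUNCTION route_insert() RETURNS trigger AS $$
BEGIN
    -- Hypothetical naming scheme: one child per month, parent_YYYYMM.
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   TG_TABLE_NAME || '_' || to_char(NEW.ts, 'YYYYMM'))
    USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;
```

Returning NULL from a BEFORE trigger is what drops the original row, so the generated INSERT is unavoidable in this style.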


Re: table inheritance partition and indexes

2022-11-25 Thread Ted Toth
Docs can always be better, right ;) When I went back and looked at the page
I did realize that the ATTACH INDEX command was only mentioned in the
description of declarative partitioning and not in the inheritance section,
so I should have paid closer attention to the location.

Ted

On Wed, Nov 23, 2022 at 8:13 PM David Rowley  wrote:

> On Thu, 24 Nov 2022 at 11:34, Ted Toth  wrote:
> >
> > On Wed, Nov 23, 2022 at 4:01 PM Tom Lane  wrote:
> >> Then you're stuck managing it manually.  But ATTACH PARTITION is
> >> not relevant to INHERITS-style partitioning.
> >
> > That's the part that wasn't clear to me, thanks.
>
> Would this have been more clear if [1] didn't mention both declarative
> partitioning and inheritance partition on the same page? I've wondered
> before if we should split that into two separate pages.
>
> David
>
> [1]
> https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE
>


Re: table inheritance partition and indexes

2022-11-23 Thread Ted Toth
On Wed, Nov 23, 2022 at 4:01 PM Tom Lane  wrote:

> Ted Toth  writes:
> > On Wed, Nov 23, 2022 at 1:24 PM Ron  wrote:
> >> Out of curiosity, why INHERITS in v13 instead of PARTITION BY?
>
> > Because none of the declarative partition types do what I want.
>
> Then you're stuck managing it manually.  But ATTACH PARTITION is
> not relevant to INHERITS-style partitioning.
>

That's the part that wasn't clear to me, thanks.


>
> regards, tom lane
>


Re: table inheritance partition and indexes

2022-11-23 Thread Ted Toth
Because none of the declarative partition types do what I want.

On Wed, Nov 23, 2022 at 1:24 PM Ron  wrote:

> Out of curiosity, why INHERITS in v13 instead of PARTITION BY?
>
> On 11/23/22 09:31, Ted Toth wrote:
> > I've created a table with a number of indexes and then created a
> partition
> > table that inherits from it using "CREATE TABLE... INHERITS..." . I've
> > then queried pg_indexes on the parent for its indexes and tried creating
> > matching indexes on the child and attaching them to the parent table
> > indexes. However "ALTER TABLE ... ATTACH PARTITION ..." is returning the
> > error " is not a table partitioned index".
> I've
> > followed the inheritance partitioning example and read the "CREATE
> INDEX"
> > docs but I must be missing something, can anyone help me understand what
> > I'm doing wrong? I'm using version 13.
> >
> > Ted
> >
>
> --
> Angular momentum makes the world go 'round.
>
>
>


table inheritance partition and indexes

2022-11-23 Thread Ted Toth
I've created a table with a number of indexes and then created a partition
table that inherits from it using "CREATE TABLE ... INHERITS ...". I've then
queried pg_indexes on the parent for its indexes and tried creating
matching indexes on the child and attaching them to the parent table's
indexes. However, "ALTER TABLE ... ATTACH PARTITION ..." returns the
error " is not a table partitioned index". I've
followed the inheritance partitioning example and read the "CREATE INDEX"
docs, but I must be missing something; can anyone help me understand what
I'm doing wrong? I'm using version 13.

Ted
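
[Editor's note] Per the replies above, ATTACH PARTITION only applies to declarative partitioning. With INHERITS-based children, indexes are simply created independently on each child; nothing ties them to the parent's indexes. A sketch with hypothetical names:

```sql
CREATE TABLE measurements (ts timestamptz, val numeric);
CREATE INDEX measurements_ts_idx ON measurements (ts);

-- Inheritance child: indexes are not inherited and cannot be attached;
-- create each one directly on the child.
CREATE TABLE measurements_2022 () INHERITS (measurements);
CREATE INDEX measurements_2022_ts_idx ON measurements_2022 (ts);
```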


security label and indexes

2022-11-22 Thread Ted Toth
I noticed that the 'security label' sql command does not include indexes as
objects that can be labeled, why is that? What sepgsql security class are
indexes, db_table?

Ted


what's inherited

2022-10-18 Thread Ted Toth
When I create a table that inherits from another table what all is
inherited i.e. ownership, security policy, ...?

Ted




Re: table inheritance and privileges

2022-09-28 Thread Ted Toth
I have written a 'before insert' trigger using a similar technique to
that described in section 5.11.3 of:
https://www.postgresql.org/docs/14/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE

However, my trigger queries to see if the partition table exists,
creates it if it doesn't, then inserts into it and returns 'SKIP'
(plpython3u).
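
[Editor's note] A minimal sketch of that trigger shape in plpython3u; the table, column, and naming scheme here are made up, and the real trigger presumably has more logic:

```sql
CREATE OR REPLACE FUNCTION route_to_partition() RETURNS trigger AS $$
    # Hypothetical scheme: one child table per value of the "day" column.
    part = "events_%s" % TD["new"]["day"]
    if not plpy.execute("SELECT 1 FROM pg_class WHERE relname = %s"
                        % plpy.quote_literal(part)):
        plpy.execute("CREATE TABLE %s () INHERITS (events)"
                     % plpy.quote_ident(part))
    plan = plpy.prepare(
        "INSERT INTO %s (day, payload) VALUES ($1, $2)"
        % plpy.quote_ident(part),
        ["int", "text"])
    plpy.execute(plan, [TD["new"]["day"], TD["new"]["payload"]])
    return "SKIP"   # drop the original insert into the parent
$$ LANGUAGE plpython3u;
```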

On Wed, Sep 28, 2022 at 2:49 PM Tom Lane  wrote:
>
> Ted Toth  writes:
> > I'm dynamically creating tables that inherit from another table but
> > they don't inherit the access privileges. Is there a way to have the
> > new tables inherit their parent tables access privileges?
>
> No, but do you need that?  When accessing the parent table, there's
> no need for privileges on individual children --- we only check
> tables(s) directly named in the query.
>
> regards, tom lane




table inheritance and privileges

2022-09-28 Thread Ted Toth
I'm dynamically creating tables that inherit from another table but
they don't inherit the access privileges. Is there a way to have the
new tables inherit their parent table's access privileges?

Ted




plpython/python string formatting

2022-08-16 Thread Ted Toth
I've just started playing with plpython but ran into an issue when I
was trying to use standard Python string formatting to generate a SQL
string, for example:
s = "EXECUTE format('CREATE INDEX %s ON %s USING (column_name)' %
(index_name, table_name))"

but plpython then tried to run the EXECUTE instead of just setting the
variable 's'. Why does this happen?

Ted
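
[Editor's note] One possibility, hedged since the function definition isn't shown: EXECUTE format(...) is plpgsql syntax, so if the statement "ran" rather than assigning a string, the function may actually have been created in plpgsql. In plpython, ordinary Python string formatting plus plpy.execute() is how dynamic SQL is built and run. A sketch with made-up function and column names:

```sql
CREATE OR REPLACE FUNCTION make_index(index_name text, table_name text)
RETURNS void AS $$
    # Plain Python string formatting only builds the string;
    # plpy.execute() is what actually runs it.
    sql = "CREATE INDEX %s ON %s (column_name)" % (
        plpy.quote_ident(index_name), plpy.quote_ident(table_name))
    plpy.execute(sql)
$$ LANGUAGE plpython3u;
```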




plpython questions

2022-08-16 Thread Ted Toth
Is this the right list to ask questions about plpython? If not what
would be the best list or git repo to ask questions related to
plpython?

Ted




Re: when is RLS policy applied

2020-07-24 Thread Ted Toth
I've looked for information on the leakproofness of operators but haven't
found anything; can you direct me to a source of this information?
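
[Editor's note] Leakproofness is recorded per function in pg_proc.proleakproof, and an operator's underlying function is reachable via pg_operator.oprcode, so it can be checked from the catalogs:

```sql
-- Which of these text operators' underlying functions are leakproof?
SELECT o.oprname, p.proname, p.proleakproof
FROM pg_operator o
JOIN pg_proc p ON p.oid = o.oprcode
WHERE o.oprname IN ('=', '~~')
  AND o.oprleft = 'text'::regtype;
```

Consistent with Tom's reply below, texteq is marked leakproof and textlike is not.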

On Fri, Jul 24, 2020 at 3:40 PM Ted Toth  wrote:

>
> On Fri, Jul 24, 2020 at 3:15 PM Tom Lane  wrote:
>
>> Ted Toth  writes:
>> > I'm trying to understand when RLS select policy is applied so I created
>> the
>> > follow to test but I don't understand why the query filter order is
>> > different for the 2 queries can anyone explain?
>>
>> The core reason why not is that the ~~ operator isn't considered
>> leakproof.  Plain text equality is leakproof, so it's safe to evaluate
>> ahead of the RLS filter --- and we'd rather do so because the plpgsql
>> function is assumed to be much more expensive than a built-in operator.
>>
>> (~~ isn't leakproof because it can throw errors that expose information
>> about the pattern argument.)
>>
>> regards, tom lane
>>
>
> Thanks for the explanation.
>
> Ted
>


Re: when is RLS policy applied

2020-07-24 Thread Ted Toth
On Fri, Jul 24, 2020 at 3:15 PM Tom Lane  wrote:

> Ted Toth  writes:
> > I'm trying to understand when RLS select policy is applied so I created
> the
> > follow to test but I don't understand why the query filter order is
> > different for the 2 queries can anyone explain?
>
> The core reason why not is that the ~~ operator isn't considered
> leakproof.  Plain text equality is leakproof, so it's safe to evaluate
> ahead of the RLS filter --- and we'd rather do so because the plpgsql
> function is assumed to be much more expensive than a built-in operator.
>
> (~~ isn't leakproof because it can throw errors that expose information
> about the pattern argument.)
>
>     regards, tom lane
>

Thanks for the explanation.

Ted


when is RLS policy applied

2020-07-24 Thread Ted Toth
I'm trying to understand when an RLS select policy is applied, so I created
the following test, but I don't understand why the query filter order is
different for the 2 queries. Can anyone explain?

CREATE USER bob NOSUPERUSER;
CREATE TABLE t_service (service_type text, service text);
INSERT INTO t_service VALUES
('open_source', 'PostgreSQL consulting'),
('open_source', 'PostgreSQL training'),
('open_source', 'PostgreSQL 24x7 support'),
('closed_source', 'Oracle tuning'),
('closed_source', 'Oracle license management'),
('closed_source', 'IBM DB2 training');
GRANT ALL ON SCHEMA PUBLIC TO bob;
GRANT ALL ON TABLE t_service TO bob;
CREATE FUNCTION debug_me(text) RETURNS boolean AS
$$
BEGIN
RAISE NOTICE 'called as session_user=%, current_user=% for "%" ',
session_user, current_user, $1;
RETURN true;
END;
$$ LANGUAGE 'plpgsql';
GRANT ALL ON FUNCTION debug_me TO bob;
ALTER TABLE t_service ENABLE ROW LEVEL SECURITY;
CREATE POLICY bob_pol ON t_service
FOR SELECT
TO bob
USING (debug_me(service));
SET ROLE bob;
explain analyze select * from t_service where service like 'Oracle%';
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
consulting"
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
training"
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
24x7 support"
NOTICE:  called as session_user=postgres, current_user=bob for "Oracle
tuning"
NOTICE:  called as session_user=postgres, current_user=bob for "Oracle
license management"
NOTICE:  called as session_user=postgres, current_user=bob for "IBM DB2
training"
  QUERY PLAN

--
 Seq Scan on t_service  (cost=0.00..241.00 rows=1 width=64) (actual
time=0.294..0.391 rows=2 loops=1)
   Filter: (debug_me(service) AND (service ~~ 'Oracle%'::text))
   Rows Removed by Filter: 4
 Planning time: 0.112 ms
 Execution time: 0.430 ms
(5 rows)

explain analyze select * from t_service where
t_service.service_type='open_source';
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
consulting"
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
training"
NOTICE:  called as session_user=postgres, current_user=bob for "PostgreSQL
24x7 support"
  QUERY PLAN

--
 Seq Scan on t_service  (cost=0.00..241.00 rows=1 width=64) (actual
time=0.159..0.302 rows=3 loops=1)
   Filter: ((service_type = 'open_source'::text) AND debug_me(service))
   Rows Removed by Filter: 3
 Planning time: 0.129 ms
 Execution time: 0.348 ms
(5 rows)


FDW and RLS

2020-05-22 Thread Ted Toth
Will RLS be applied to data being retrieved via a FDW?

Ted


Re: jsonb unique constraints

2020-05-05 Thread Ted Toth
I was able to create a unique index, thanks.

On Tue, May 5, 2020 at 10:38 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, May 5, 2020 at 8:33 AM Ted Toth  wrote:
>
>> Can you have unique constraints on jsonb column keys? I've looked for
>> examples but haven't found any. What is the proper syntax? Here's what I
>> tried:
>>
>> CREATE TABLE report_json (
>> recnum int,
>> id integer,
>> report jsonb,
>> PRIMARY KEY (recnum),
>> CONSTRAINT report_json_unique_constraint UNIQUE (id,((report ->>
>> 'data')::int),((report ->> 'ctc')::int),((report ->> 'dtg')::int)
>> );
>>
>> which causes a syntax error at the first '(' around 'report ->>'.
>>
>
> Documentation says constraints must reference column names - so no.  You
> can probably get what you are after by directly creating a unique index
> though - those allow expressions.
>
> David J.
>
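
[Editor's note] The working form alluded to above is an expression index: UNIQUE constraints accept only plain column names, while unique indexes accept expressions. A sketch using the columns from the original attempt:

```sql
CREATE UNIQUE INDEX report_json_unique_idx ON report_json (
    id,
    ((report ->> 'data')::int),
    ((report ->> 'ctc')::int),
    ((report ->> 'dtg')::int)
);
```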


jsonb unique constraints

2020-05-05 Thread Ted Toth
Can you have unique constraints on jsonb column keys? I've looked for
examples but haven't found any. What is the proper syntax? Here's what I
tried:

CREATE TABLE report_json (
recnum int,
id integer,
report jsonb,
PRIMARY KEY (recnum),
CONSTRAINT report_json_unique_constraint UNIQUE (id,((report ->>
'data')::int),((report ->> 'ctc')::int),((report ->> 'dtg')::int)
);

which causes a syntax error at the first '(' around 'report ->>'.


Re: performance of first exec of prepared statement

2020-04-17 Thread Ted Toth
On Fri, Apr 17, 2020 at 8:28 AM Ted Toth  wrote:

>
>
> On Thu, Apr 16, 2020 at 8:09 PM Rob Sargent  wrote:
>
>> On 4/16/20 6:15 PM, Adrian Klaver wrote:
>> > On 4/16/20 4:59 PM, Ted Toth wrote:
>> >>
>> >>
>> >> On Thu, Apr 16, 2020 at 6:29 PM Ted Toth > >> <mailto:txt...@gmail.com>> wrote:
>> >>
>> >> I've noticed that the first exec of an INSERT prepared statement
>> >> takes ~5 time longer (I'm using libpq in C and wrapping the calls
>> to
>> >> time them) then subsequent exec's is this the expected behavior and
>> >> if so is there any thing I can do to mitigate this affect?
>> >>
>> >> Ted
>> >>
>> >>
>> >> For example (in my environment) I'm seeing the prepare take ~10ms,
>> >> the first exec take ~30 ms and subsequent exec's take ~4 ms.
>> >>
>> >
>> > I don't have an answer. I believe though that to help those that might
>> > it would be helpful to show the actual code.
>> >
>> >
>> You expect the subsequent calls to benefit from the cached query parse
>> and planning.  What does your query cost without being wrapped in a
>> prepared statement (preferably from a cold start)?
>>
>>
>>
>> I thought that's what the PQprepare call was supposed to do i.e.
> parsing/planning.
>
> It's a bit difficult to get an unprepared query cost since there are a lot
> of columns :(
> #define INSERT_SQL "INSERT INTO t () VALUES
> ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87);"
>
> Ah prepare does the parsing and execute does the planning.

Another related question: my implementation uses strings for all values.
Is there a performance benefit to using actual values and specifying
their datatype?


Re: performance of first exec of prepared statement

2020-04-17 Thread Ted Toth
On Thu, Apr 16, 2020 at 8:09 PM Rob Sargent  wrote:

> On 4/16/20 6:15 PM, Adrian Klaver wrote:
> > On 4/16/20 4:59 PM, Ted Toth wrote:
> >>
> >>
> >> On Thu, Apr 16, 2020 at 6:29 PM Ted Toth  >> <mailto:txt...@gmail.com>> wrote:
> >>
> >> I've noticed that the first exec of an INSERT prepared statement
> >> takes ~5 time longer (I'm using libpq in C and wrapping the calls to
> >> time them) then subsequent exec's is this the expected behavior and
> >> if so is there any thing I can do to mitigate this affect?
> >>
> >> Ted
> >>
> >>
> >> For example (in my environment) I'm seeing the prepare take ~10ms,
> >> the first exec take ~30 ms and subsequent exec's take ~4 ms.
> >>
> >
> > I don't have an answer. I believe though that to help those that might
> > it would be helpful to show the actual code.
> >
> >
> You expect the subsequent calls to benefit from the cached query parse
> and planning.  What does your query cost without being wrapped in a
> prepared statement (preferably from a cold start)?
>
>
>
> I thought that's what the PQprepare call was supposed to do i.e.
parsing/planning.

It's a bit difficult to get an unprepared query cost since there are a lot
of columns :(
#define INSERT_SQL "INSERT INTO t () VALUES
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87);"


Re: performance of first exec of prepared statement

2020-04-16 Thread Ted Toth
On Thu, Apr 16, 2020 at 6:29 PM Ted Toth  wrote:

> I've noticed that the first exec of an INSERT prepared statement takes ~5
> times longer (I'm using libpq in C and wrapping the calls to time them)
> than subsequent execs. Is this the expected behavior, and if so, is there
> anything I can do to mitigate this effect?
>
> Ted
>

For example (in my environment) I'm seeing the prepare take ~10 ms, the
first exec take ~30 ms, and subsequent execs take ~4 ms.


performance of first exec of prepared statement

2020-04-16 Thread Ted Toth
I've noticed that the first exec of an INSERT prepared statement takes ~5
times longer (I'm using libpq in C and wrapping the calls to time them)
than subsequent execs. Is this the expected behavior, and if so, is there
anything I can do to mitigate this effect?

Ted
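
[Editor's note] The same prepare/first-execute split can be seen at the SQL level (libpq's PQprepare/PQexecPrepared map onto the same mechanism): PREPARE parses and analyzes the statement, while planning happens at execution time. A sketch against a hypothetical table:

```sql
CREATE TEMP TABLE t (a int, b text);
PREPARE ins (int, text) AS INSERT INTO t (a, b) VALUES ($1, $2);

EXECUTE ins(1, 'first');   -- first execution also plans the statement
EXECUTE ins(2, 'second');  -- later executions can reuse a cached plan
```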


Re: Ident authentication failed

2020-03-26 Thread Ted To
Thanks for the many responses I've gotten!  What a useful and welcoming 
community!


I finally managed to get it configured following: 
https://github.com/miniflux/miniflux/issues/80#issuecomment-375996546


On 3/26/20 4:24 PM, Adrian Klaver wrote:

On 3/26/20 12:40 PM, Ted To wrote:
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


Alright from here:

https://miniflux.app/docs/configuration.html
The Golang Postgres client pq is being used and:

"The default value for DATABASE_URL is user=postgres password=postgres 
dbname=miniflux2 sslmode=disable."


From here:

https://godoc.org/github.com/lib/pq#hdr-Connection_String_Parameters

"host - The host to connect to. Values that start with / are for unix
  domain sockets. (default is localhost)"

So unless you have specified a host in the conf file you need to be 
paying attention to the lines in pg_hba.conf that start with host.


Were the two lines you added above for host?

You can check what is happening on the server end by looking in the 
Postgres logs. That will also show you the connection string.



Do you have more than one instance of Postgres on the machine?



On 3/26/20 3:35 PM, Ron wrote:
You're only allowing "trust" authentication, not "ident" 
authentication.


On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out 
how to configure postgres for that purpose.  (The miniflux 
instructions are very terse.)  Regardless of whether I run the 
command as the postgres user or the miniflux user, I get similar 
errors.  I am running Centos 7 using the updated scl version of 
postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file 
includes


# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all     127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To








Re: Ident authentication failed

2020-03-26 Thread Ted To


On 3/26/20 3:35 PM, Adrian Klaver wrote:

Can you show the entire connection string used by miniflux?

The miniflux command is binary so, no, not easily.


Did you reload/restart the server after making the changes below?

Yes.




My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To






Re: Ident authentication failed

2020-03-26 Thread Ted To
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


On 3/26/20 3:35 PM, Ron wrote:

You're only allowing "trust" authentication, not "ident" authentication.

On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out how 
to configure postgres for that purpose.  (The miniflux instructions 
are very terse.)  Regardless of whether I run the command as the 
postgres user or the miniflux user, I get similar errors.  I am 
running Centos 7 using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To





Ident authentication failed

2020-03-26 Thread Ted To

Hi,

I'm trying to configure miniflux and am struggling to figure out how to 
configure postgres for that purpose.  (The miniflux instructions are 
very terse.)  Regardless of whether I run the command as the postgres 
user or the miniflux user, I get similar errors.  I am running Centos 7 
using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the password 
are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using the 
same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file uses 
the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To
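
[Editor's note] A hedged guess at the cause, since the full pg_hba.conf is not shown: the stock Red Hat/SCL file ships `host` lines using the ident method, and the `local ... trust` lines above cover only Unix-socket connections, while the Go pq driver connects over TCP to localhost by default. One plausible fix is a TCP line for the miniflux roles placed before the distribution's ident lines:

```
# TYPE  DATABASE    USER      ADDRESS         METHOD
host    miniflux    miniflux  127.0.0.1/32    md5
host    miniflux    postgres  127.0.0.1/32    md5
```

pg_hba.conf is matched top to bottom, first match wins, so line order matters; reload the server after editing.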



libpq prepared statement insert null for foreign key

2020-03-09 Thread Ted Toth
I tried setting the parameter for the foreign key to NULL:
params[i] = NULL;

and I get a 'Key not present in table ...' error. How do I insert a record
when I don't have a foreign key value yet?

Ted
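
[Editor's note] For context: a NULL pointer in PQexecParams' paramValues should arrive as a SQL NULL, which a foreign key accepts as long as the referencing column is nullable; a 'Key ... is not present in table' error suggests a non-null value actually reached the server. The SQL-level behavior, with hypothetical tables:

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id));  -- nullable

-- A NULL foreign-key value is allowed; the constraint is only
-- checked for non-null values:
INSERT INTO child VALUES (1, NULL);
```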


libpq and escaping array string literals

2020-03-06 Thread Ted Toth
I've got some C code that interacts with a table containing a field of type
text[]. Strings I've got to put in the array may be Unicode and/or contain
single or double quotes, etc. What's the best way to escape these
strings?

Ted
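
[Editor's note] One way to sidestep the array-literal quoting rules (backslashes and double quotes inside '{...}') is to let the server build the array with an ARRAY[] constructor, binding each element as an ordinary parameter; on the libpq side that means PQexecParams with $N placeholders, with PQescapeLiteral as the fallback when the statement must be assembled as text. A SQL sketch of the shape, with a hypothetical table and literals standing in for $1..$3:

```sql
CREATE TEMP TABLE docs (tags text[]);

-- Elements passed through ARRAY[] need only normal string quoting,
-- not array-literal escaping:
INSERT INTO docs (tags)
VALUES (ARRAY['has "double quotes"', 'O''Brien', 'unicode: héllo']);
```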


Re: before insert for each row trigger on upsert

2020-02-20 Thread Ted Toth
On Thu, Feb 20, 2020 at 2:32 PM Adrian Klaver 
wrote:

> On 2/20/20 12:17 PM, Ted Toth wrote:
> > I'm a little confused why the before insert trigger fires at all, but
> > since it does, is there a way to know that an update will occur?
>
> Because ON CONFLICT DO UPDATE is part of an INSERT command.
>
> > Basically I don't want the trigger affect the row on update.
>
> Not sure that a BEFORE trigger will work for that anyway as it will not
> have submitted the data yet for ON CONFLICT resolution.
>

I think you are right in that the trigger doesn't change the row being
updated (which is good), but it also updates another table, and that's
what I'd actually like to avoid on update.


> >
> > Ted
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
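
[Editor's note] One workaround consistent with the discussion above: since the BEFORE INSERT trigger fires before conflict resolution, it can itself probe for the conflicting key and skip the side-effect insert when the row already exists, i.e. when ON CONFLICT will take the UPDATE path. A sketch with hypothetical table and column names; note it assumes no concurrent inserts racing on the same key:

```sql
CREATE OR REPLACE FUNCTION track_new_rows() RETURNS trigger AS $$
BEGIN
    -- Only record the side-effect row for a genuine insert, not for
    -- the update arm of INSERT ... ON CONFLICT DO UPDATE.
    IF NOT EXISTS (SELECT 1 FROM items WHERE key = NEW.key) THEN
        INSERT INTO item_log (key, first_seen) VALUES (NEW.key, now());
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```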


before insert for each row trigger on upsert

2020-02-20 Thread Ted Toth
I'm a little confused why the before insert trigger fires at all, but since
it does, is there a way to know that an update will occur? Basically I
don't want the trigger to affect the row on update.

Ted


Autovacuum behavior with rapid insert/delete 9.6

2018-03-29 Thread Ted Filmore
Using mostly default parameters with 9.3, we are seeing reasonable
performance with our heavy transaction-based application.

However, when we upgraded to 9.6 we saw horrible performance
with the same loads using the same hardware.

Gathering the configuration details, but I think I've spotted something
that might point to the problem.  Our application has a repeated pattern of
inserting a customer invoice and its details, then sending a message to
another application, which reads the transaction, inserts it on another
database server, and then deletes the original.

In looking at pg_stat_activity, I notice that under the same load
autovacuum seems to be running way more often under 9.6 and hanging around
longer in the activity queue. Also, I notice that the IO run queue is like
10x deeper and write latency balloons from < 10 ms to around 300-400 ms.
Clearly something is slamming the IO channel.

Granted, I see there are many new autovacuum variables that likely have to
be tuned. Also, we've really got to redesign the application logic; it
seems moronic to have rapid insert/deletes in a handful of tables that are
likely to overwhelm autovacuum for no good reason.

What I am really asking to confirm, after describing the situation: is it
reasonable to focus (in the short term) on tuning autovacuum to increase
performance, or does this not make sense given the workload, so I should
look elsewhere?

Thanks.
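
[Editor's note] For the short-term tuning question: autovacuum can be made both more responsive (so queue-like tables are vacuumed before dead rows pile up) and gentler on I/O, per table, via storage parameters. A sketch with a hypothetical table name and values that would need testing under the real load:

```sql
ALTER TABLE invoice_queue SET (
    autovacuum_vacuum_scale_factor = 0.0,   -- trigger on row count alone,
    autovacuum_vacuum_threshold    = 5000,  -- after ~5000 dead rows
    autovacuum_vacuum_cost_delay   = 2      -- throttle vacuum I/O (ms)
);
```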


Re: large numbers of inserts out of memory strategy

2017-11-30 Thread Ted Toth
On Thu, Nov 30, 2017 at 4:22 AM, Peter J. Holzer <hjp-pg...@hjp.at> wrote:
> On 2017-11-29 08:32:02 -0600, Ted Toth wrote:
>> Yes I did generate 1 large DO block:
>>
>> DO $$
>> DECLARE thingid bigint; thingrec bigint; thingdataid bigint;
>> BEGIN
>> INSERT INTO thing
>> (ltn,classification,machine,source,thgrec,flags,serial,type) VALUES
>> ('T007336','THING',0,1025,7336,7,'XXX869977564',1) RETURNING id,thgrec
>> INTO thingid,thingrec;
>> INSERT INTO recnum_thing (recnum,thing_id) VALUES (thingrec,thingid);
>> INSERT INTO thingstatus
>> (thing_id,nrpts,rmks_cs,force_type_id,ftn_cs,force_code,arr_cs,mask,toi_state,plot_id,signa_cs,lastchange,des_cs,rig_cs,ownship,correlation,maxrpts,rtn_cs,ctc_cs,group_mask,dep_cs)
>> VALUES 
>> (thingid,121,'{0,0,0,0}',440,0,23,0,0,0,'{23,-1,3803,3805,-1,-1,0,6}',0,1509459164,0,0,0,0,1000,0,0,0,0);
>> INSERT INTO thinger
>> (thing_id,spe_key,cse_unc,lat_spd,cov,dtg,lng,spd,ave_spd,cse,tol,nrpts,lat,alpha,sigma,spd_unc,lng_spd)
>> VALUES 
>> (thingid,-1,0.0,-6.58197336634e-08,'{4.27624291532e-09,0.0,3.07802916488e-09,0.0,4.27624291532e-09,0.0,3.07802916488e-09,4.16110417234e-08,0.0,4.16110417234e-08}',1509459163,2.21596980095,0.000226273215958,1.0,0.0,0.1000149,121,0.584555745125,10.0,4.23079740131e-08,0.0,-2.4881907e-10);
>> INSERT INTO thingdata
>> (thing_id,category,db_num,xref,org_type,trademark,shortname,fcode,platform,callsign,type,orig_xref,shipclass,home_base,uic,service,di,lngfixed,hull,precision,alert,flag,besufx,name,mmsi,catcode,ntds,imo,pn_num,chxref,threat,sconum,latfixed,db_type,pif,echelon,jtn,quantity,overwrite)
>> VALUES 
>> (thingid,'XXX','','','','','004403704','23','','','','','UNEQUATED','','','','',0.0,'','{0,0,0,0,0}','','KS','','UNKNOWN','004403704','','','','','','AFD','',0.0,3,'','',0,0,0)
>> RETURNING id INTO thingdataid;
>> INSERT INTO thingnum (thingdata_id,thgnum,state,dtg,cmd) VALUES
>> (thingdataid,'013086',0,1502970401,'FOO');
>>
>> 
>>
>> END $$;
>>
>> Should I limit the number of 'thing' inserts within a DO block, or
>> wrap each 'thing' insert in its own DO block?
>

Thanks for the specific suggestions.

> I would suggest getting rid of the do block entirely if that is
> possible. Just create lots of insert statements. You can get the current
> value of a sequence with currval('sequence_name').

What is the downside of using a DO block? I'd have to do a nextval on
each sequence before I could use currval, right? Or I could do 'select
last_value from '.

One thing that is unclear to me is when commits occur while using psql.
Would you know where in the docs I can find information on this
subject?
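On the commit question: by default psql runs each top-level statement in its own transaction, so one giant DO block is both a single statement (parsed entirely into memory) and a single transaction. A sketch of the alternative, emitting plain INSERTs in explicitly committed batches; the sequence name and column lists are simplified assumptions about the schema (check pg_get_serial_sequence('thing', 'id')), and currval is safe here because the first INSERT's serial default already called nextval in the same session:

```python
# Sketch: emit plain INSERTs instead of one giant DO block, wrapped in
# explicit BEGIN/COMMIT batches so psql never holds the whole file's
# worth of work in one statement or one transaction.
# The sequence name 'thing_id_seq' and the column lists are assumptions.
BATCH = 5000

def emit_thing(out, rec):
    # Parent row; its serial default calls nextval('thing_id_seq').
    out.append(
        "INSERT INTO thing (thgrec, serial) "
        f"VALUES ({rec['thgrec']}, '{rec['serial']}');"
    )
    # Child row reuses the id via currval, no DO block needed.
    out.append(
        "INSERT INTO recnum_thing (recnum, thing_id) "
        f"VALUES ({rec['thgrec']}, currval('thing_id_seq'));"
    )

def emit_batched(records):
    out = []
    for i, rec in enumerate(records):
        if i % BATCH == 0:
            if i:
                out.append("COMMIT;")
            out.append("BEGIN;")
        emit_thing(out, rec)
    if records:
        out.append("COMMIT;")
    return "\n".join(out)
```

The resulting file can be fed to psql -f as before; each BEGIN/COMMIT pair bounds how much work is in flight at once.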

>
> Alternately or in addition, since you are using python, you might want
> to insert directly into the database from python using psycopg2. For
> separate insert statements that should have about the same performance.
> (It is usually much faster to write to a csv file and load that with
> copy than to insert each row, but you don't do that and it might be
> difficult in your case).

Yes, I thought about generating csv files but didn't see a way to deal
with the foreign keys.
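One way to make COPY work despite the foreign keys, not something suggested in the thread, is to assign the primary keys client-side and write one CSV per table with the ids already filled in. A sketch with the columns trimmed down and the sequence names assumed:

```python
# Sketch: assign primary keys client-side so each table can be written
# to its own CSV and bulk-loaded with COPY, with foreign keys already
# resolved.  Columns are trimmed/hypothetical versions of the schema.
import csv
import itertools

thing_ids = itertools.count(1)      # mirrors thing_id_seq (assumed name)
thingdata_ids = itertools.count(1)  # mirrors thingdata_id_seq (assumed)

def build_rows(records):
    """Turn source records into parallel row lists for thing/thingdata."""
    thing_rows, data_rows = [], []
    for rec in records:
        tid = next(thing_ids)
        did = next(thingdata_ids)
        thing_rows.append([tid, rec["thgrec"], rec["serial"]])
        data_rows.append([did, tid, rec["name"]])  # thing_id FK resolved
    return thing_rows, data_rows

def write_csv(path, rows):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

# Then, in psql, load the files and bump the sequences so later normal
# inserts don't collide with the imported ids:
#   \copy thing (id, thgrec, serial) from 'thing.csv' csv
#   \copy thingdata (id, thing_id, name) from 'thingdata.csv' csv
#   select setval('thing_id_seq', (select max(id) from thing));
```

Since the ids are generated in one pass in Python, every child row can reference its parent without any sequence round-trips to the server.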

>
> hp
>
> --
>_  | Peter J. Holzer| we build much bigger, better disasters now
> |_|_) || because we have much more sophisticated
> | |   | h...@hjp.at | management tools.
> __/   | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>



Re: large numbers of inserts out of memory strategy

2017-11-29 Thread Ted Toth
On Tue, Nov 28, 2017 at 9:59 PM, Tom Lane  wrote:
> Brian Crowell  writes:
>> On Tue, Nov 28, 2017 at 12:38 PM, Tomas Vondra wrote:
>>> So what does the script actually do? Because psql certainly is not
>>> running pl/pgsql procedures on it's own. We need to understand why
>>> you're getting OOM in the first place - just inserts alone should not
>>> cause failures like that. Please show us more detailed explanation of
>>> what the load actually does, so that we can try reproducing it.
>
>> Perhaps the script is one giant insert statement?
>
> It's pretty clear from the memory map that the big space consumption
> is inside a single invocation of a plpgsql function:
>
> SPI Proc: 2464408024 total in 279 blocks; 1672 free (1 chunks); 
> 2464406352 used
>   PL/pgSQL function context: 537911352 total in 74 blocks; 2387536 free 
> (4 chunks); 535523816 used
>
> So whatever's going on here, there's more to it than a giant client-issued
> INSERT (or COPY), or for that matter a large number of small ones.  What
> would seem to be required is a many-megabyte-sized plpgsql function body
> or DO block.
>
> Actually, the truly weird thing about that map is that the "PL/pgSQL
> function context" seems to be a child of a "SPI Proc" context, whereas
> it's entirely clear from the code that it ought to be a direct child of
> TopMemoryContext.  I have no idea how this state of affairs came to be,
> and am interested to find out.
>
> regards, tom lane


Yes I did generate 1 large DO block:

DO $$
DECLARE thingid bigint; thingrec bigint; thingdataid bigint;
BEGIN
INSERT INTO thing
(ltn,classification,machine,source,thgrec,flags,serial,type) VALUES
('T007336','THING',0,1025,7336,7,'XXX869977564',1) RETURNING id,thgrec
INTO thingid,thingrec;
INSERT INTO recnum_thing (recnum,thing_id) VALUES (thingrec,thingid);
INSERT INTO thingstatus
(thing_id,nrpts,rmks_cs,force_type_id,ftn_cs,force_code,arr_cs,mask,toi_state,plot_id,signa_cs,lastchange,des_cs,rig_cs,ownship,correlation,maxrpts,rtn_cs,ctc_cs,group_mask,dep_cs)
VALUES 
(thingid,121,'{0,0,0,0}',440,0,23,0,0,0,'{23,-1,3803,3805,-1,-1,0,6}',0,1509459164,0,0,0,0,1000,0,0,0,0);
INSERT INTO thinger
(thing_id,spe_key,cse_unc,lat_spd,cov,dtg,lng,spd,ave_spd,cse,tol,nrpts,lat,alpha,sigma,spd_unc,lng_spd)
VALUES 
(thingid,-1,0.0,-6.58197336634e-08,'{4.27624291532e-09,0.0,3.07802916488e-09,0.0,4.27624291532e-09,0.0,3.07802916488e-09,4.16110417234e-08,0.0,4.16110417234e-08}',1509459163,2.21596980095,0.000226273215958,1.0,0.0,0.1000149,121,0.584555745125,10.0,4.23079740131e-08,0.0,-2.4881907e-10);
INSERT INTO thingdata
(thing_id,category,db_num,xref,org_type,trademark,shortname,fcode,platform,callsign,type,orig_xref,shipclass,home_base,uic,service,di,lngfixed,hull,precision,alert,flag,besufx,name,mmsi,catcode,ntds,imo,pn_num,chxref,threat,sconum,latfixed,db_type,pif,echelon,jtn,quantity,overwrite)
VALUES 
(thingid,'XXX','','','','','004403704','23','','','','','UNEQUATED','','','','',0.0,'','{0,0,0,0,0}','','KS','','UNKNOWN','004403704','','','','','','AFD','',0.0,3,'','',0,0,0)
RETURNING id INTO thingdataid;
INSERT INTO thingnum (thingdata_id,thgnum,state,dtg,cmd) VALUES
(thingdataid,'013086',0,1502970401,'FOO');



END $$;

Should I limit the number of 'thing' inserts within a DO block, or
wrap each 'thing' insert in its own DO block?



Re: large numbers of inserts out of memory strategy

2017-11-28 Thread Ted Toth
On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
>
>
> On 11/28/2017 06:54 PM, Ted Toth wrote:
>> On Tue, Nov 28, 2017 at 11:22 AM, Tomas Vondra
>> <tomas.von...@2ndquadrant.com> wrote:
>>> Hi,
>>>
>>> On 11/28/2017 06:17 PM, Ted Toth wrote:
>>>> I'm writing a migration utility to move data from non-rdbms data
>>>> source to a postgres db. Currently I'm generating SQL INSERT
>>>> statements involving 6 related tables for each 'thing'. With 100k or
>>>> more 'things' to migrate I'm generating a lot of statements and when I
>>>> try to import using psql postgres fails with 'out of memory' when
>>>> running on a Linux VM with 4G of memory. If I break into smaller
>>>> chunks, say ~50K statements, then the import succeeds. I can change my
>>>> migration utility to generate multiple files each with a limited
>>>> number of INSERTs to get around this issue but maybe there's
>>>> another/better way?
>>>>
>>>
>>> The question is what exactly runs out of memory, and how did you modify
>>> the configuration (particularly related to memory).
>>>
>>> regards
>>>
>>> --
>>> Tomas Vondra  http://www.2ndQuadrant.com
>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>> I'm pretty new to postgres so I haven't changed any configuration
>> setting and the log is a bit hard for me to make sense of :(
>>
>
> The most interesting part of the log is this:
>
> SPI Proc: 2464408024 total in 279 blocks; 1672 free (1 chunks);
> 2464406352 used
>   PL/pgSQL function context: 537911352 total in 74 blocks; 2387536
> free (4 chunks); 535523816 used
>
>
> That is, most of the memory is allocated for SPI (2.4GB) and PL/pgSQL
> procedure (500MB). How do you do the load? What libraries/drivers?
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

I'm doing the load with 'psql -f'. I'm using the 9.6 el6 rpms from the
postgres repo on a CentOS VM.



large numbers of inserts out of memory strategy

2017-11-28 Thread Ted Toth
I'm writing a migration utility to move data from a non-rdbms data
source to a postgres db. Currently I'm generating SQL INSERT
statements involving 6 related tables for each 'thing'. With 100k or
more 'things' to migrate I'm generating a lot of statements, and when I
try to import them using psql, postgres fails with 'out of memory' when
running on a Linux VM with 4G of memory. If I break it into smaller
chunks, say ~50K statements, then the import succeeds. I can change my
migration utility to generate multiple files, each with a limited
number of INSERTs, to get around this issue, but maybe there's
another/better way?

Ted



RE: To all who wish to unsubscribe

2017-11-20 Thread Leavitt, Ted
Vick,
  I thought it was just me.  Yes, the sender changed:

pgsql-general-ow...@postgresql.org -> pgsql-general@lists.postgresql.org

I have updated my rules accordingly.

-Ted

From: Vick Khera [mailto:vi...@khera.org]
Sent: Monday, November 20, 2017 10:58 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: To all who wish to unsubscribe

Did the list software change? All my messages from here are not being properly 
auto-filed by the filter I have set up.

On Nov 20, 2017, at 13:48, Martin Fernau <martin.fer...@fernausoft.de> wrote:
PLS unsubscribe via https://lists.postgresql.org/manage/

Today I received dozens of unsubscribe mails, spamming my mail account :-)

thx
On 20.11.2017 at 19:42, Ramalingam, Sankarakumar wrote:


Thank you!

Kumar Ramalingam

From: Zacher, Stacy [mailto:szac...@mcw.edu]
Sent: Monday, November 20, 2017 12:56 PM
To: pgsql-general@lists.postgresql.org
Subject: unsubscribe






