Re: locks within select

2021-12-13 Thread David G. Johnston
On Monday, December 13, 2021, Marc Millas wrote:
>
> I was wondering if, for example, within a join, some kind of lock may be
> acquired so that the dataset concerned is not changed during execution?
> (for example, a delete followed by autovacuum?)
>

Read this primer on MVCC from the docs:

https://www.postgresql.org/docs/current/mvcc-intro.html
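
For anyone curious, you can also see exactly which locks a statement has taken by querying pg_locks from inside the same transaction. A minimal sketch (the table name is illustrative):

begin;
select count(*) from some_table;
select locktype, relation::regclass, mode, granted
  from pg_locks
 where pid = pg_backend_pid();
-- shows an AccessShareLock on some_table (plus entries for pg_locks itself)
commit;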

David J.


Re: log shipping with pg_receivewal

2021-12-13 Thread Dilip Kumar
On Tue, Dec 14, 2021 at 9:55 AM Marc Millas wrote:
>
> Hi,
> postgres 13 question,
>
> if I setup pg_receivewal from a primary instance to a given directory, and
> then setup a standby with a restore_command pointing to that directory, the
> changes on the primary are applied on the standby at each WAL file change. And
> this, even if I run pg_receivewal with the --synchronous option.
>
> My question: as the synchronous option is supposed to make pg_receivewal
> write transactions immediately to the WAL files, is there a way to ask the
> standby to apply them on the fly, i.e. without waiting for a WAL file change?

I don't think that can be done. The walreceiver process runs inside the
same cluster, so the server can interact with it, knows exactly up to
what LSN it has flushed and which WAL data is valid to apply, and the
walreceiver also keeps appending more WAL to the same file.
pg_receivewal, by contrast, is a separate utility, so it can only serve
as an archive location, which means we can restore complete WAL files
only.
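
For reference, a minimal sketch of the setup under discussion (host, user and paths are illustrative):

pg_receivewal -h primary.example.com -U replicator -D /archive/wal --synchronous

# on the standby, in postgresql.conf:
restore_command = 'cp /archive/wal/%f "%p"'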

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: log shipping with pg_receivewal

2021-12-13 Thread Michael Paquier
On Tue, Dec 14, 2021 at 05:25:04AM +0100, Marc Millas wrote:
> My question: as the synchronous option is supposed to make pg_receivewal
> write transactions immediately to the WAL files, is there a way to ask the
> standby to apply them on the fly, i.e. without waiting for a WAL file change?

Nope, there is no such option.  First, pg_receivewal relies heavily on
renaming a WAL segment to its final name once it is complete.  If there
is any failure, we start back at the beginning of the last partial
segment found.

That said, you could make this work with a restore_command that checks
for a .partial file directly in the path used by pg_receivewal to store
the archives; Postgres would then repeatedly ask for the segment until
the complete file appears.  But really, this is going to be expensive:
each call of the restore_command copies a full file, so that's a lot of
bandwidth wasted for nothing.  Streaming replication would likely be
your best, and cheapest, option here.
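
For illustration only, a restore_command along those lines might look like this (an untested sketch; as noted, the standby would copy the same partial segment over and over):

restore_command = 'cp /archive/wal/%f "%p" || cp /archive/wal/%f.partial "%p"'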
--
Michael




log shipping with pg_receivewal

2021-12-13 Thread Marc Millas
Hi,
postgres 13 question,

if I setup pg_receivewal from a primary instance to a given directory, and
then setup a standby with a restore_command pointing to that directory,
the changes on the primary are applied on the standby at each WAL file change.
And this, even if I run pg_receivewal with the --synchronous option.

My question: as the synchronous option is supposed to make pg_receivewal
write transactions immediately to the WAL files, is there a way to ask the
standby to apply them on the fly, i.e. without waiting for a WAL file change?

thanks,


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


locks within select

2021-12-13 Thread Marc Millas
Hi,

To my understanding, a select on a table does acquire an ACCESS SHARE lock
on that table.
My question is: is there any other kind of lock that a select may acquire?
Obviously, all explicit locking schemes like SELECT FOR UPDATE, etc. are
outside the scope of this question.

I was wondering if, for example, within a join, some kind of lock may be
acquired so that the dataset concerned is not changed during execution?
(for example, a delete followed by autovacuum?)

thanks


Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: Why can't I have a "language sql" anonymous block?

2021-12-13 Thread Adrian Klaver

On 12/13/21 13:15, Bryn Llewellyn wrote:

Well… that’s the answer: “nobody thought it’d be useful”. Thanks, Tom.

The difference between using a “language sql” anonymous block and just 
executing the contained SQL statements is partly a clear declaration of the 
intent of your code and a guarantee that all the statements are sent from 
client to server in one go. (But see what I say about “prepare” below.)


As in?:

begin;
drop table if exists t cascade;
create table t(k serial primary key, v text not null);
commit;



Here’s how I’d reason the case.

There must be a reason to prefer a “language sql” procedure over a “language 
plpgsql” procedure—otherwise the former wouldn’t be supported.


The 'Law of Minimums': use the minimum needed functionality to get the 
job done. Less chance of wandering into areas where there be dragons.




A “language sql” procedure has restricted functionality compared with a 
“language plpgsql” procedure. So I can only guess that it’s preferred when it 
lets you program what you need ‘cos simpler means quicker.


Also inlining:

https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions

https://stackoverflow.com/questions/53040170/postgresql-inline-function-behavior
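
For instance, a single-statement "language sql" function can be inlined so the planner sees straight through it (a minimal sketch; the table is hypothetical):

create function active_items() returns setof items
  language sql stable
as $$
  select * from items where active;
$$;

-- explain select * from active_items();
-- shows a plain scan of "items", not a function call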



In general, an anonymous block is preferred over a stored unit when you don’t 
want to clutter the schema with an object that’s used only occasionally. (In 
some cases, you don’t even have the privileges to create a stored unit but can 
execute an anonymous block.) So putting these two ideas together makes the case 
for a “language sql” anonymous block.

Another reason to support “language sql” anonymous blocks is to improve 
symmetry—and therefore usability: one fewer arbitrary rule to remember.

B.t.w., you mentioned the restriction that DO blocks can’t have parameters. The 
reason for allowing them is what I just referred to: you don’t want to, or simply 
cannot, create a procedure or function. Oracle Database allows binding to 
placeholders in anonymous blocks (their vocabulary for “parameters”)—and, as I 
recall, has since their very first appearance as a feature.

Might your “they don't so far” give me hope that someday they will? 
Presumably, the notion would have to include the ability to prepare once and 
execute many times using an anonymous block. (This is another counterintuitive 
restriction that, today, has to be learned.)

t...@sss.pgh.pa.us wrote:

Bryn Llewellyn writes:

Was there a deliberate decision not to allow a “language sql” anonymous block? 
Or is it just that nobody thought that it would be useful?


I think nobody thought it'd be useful.  What's the difference from just 
executing the contained SQL statements?

(If DO blocks had parameters, the conclusion might be different, but they don't 
so far.)

regards, tom lane






--
Adrian Klaver
adrian.kla...@aklaver.com






Re: When Update balloons memory

2021-12-13 Thread Tom Lane
Klaudie Willis writes:
> So, it seems like the index is the central cog here:
>> create index ind1 on alpha ((deltatime::date));
> where "alpha" is a partitioned table set, partitioned by (deltatime::date)
> The general and simple updates like:
>> update alphatable set gamma = gamma || '#postfix#'
> make the process memory balloon to the point of OOM.

That seems like a bug, but please supply a self-contained test case
rather than expecting other people to reverse-engineer one.

regards, tom lane




Re: When Update balloons memory

2021-12-13 Thread Klaudie Willis
Thanks for the insight!

I have recreated the problem on a different machine and installation where I 
was more free to experiment to isolate what causes this.
So, it seems like the index is the central cog here:
> create index ind1 on alpha ((deltatime::date));
where "alpha" is a partitioned table set, partitioned by (deltatime::date)
The general and simple updates like:
> update alphatable set gamma = gamma || '#postfix#'
make the process memory balloon to the point of OOM.

If I remove the ind1 index on "deltatime::date", and just add another one on a 
random column, the problem disappears.  So it seems like the index on the 
partition key is relevant.
Additional info: the table holds 200M rows evenly distributed across the 
partitions, and I haven't tried to see if the ::date casting is relevant to 
the problem. No, there are no triggers here; I can't vouch for what the system 
creates behind my back, though.

Is this a feature or a bug?
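
For reference, a self-contained sketch of the shape described above (names, types and row counts are illustrative; reproducing the OOM presumably needs far more rows):

create table alpha (deltatime timestamptz not null, gamma text not null)
  partition by range ((deltatime::date));
create table alpha_2021 partition of alpha
  for values from ('2021-01-01') to ('2022-01-01');
create index ind1 on alpha ((deltatime::date));

insert into alpha
  select timestamptz '2021-01-01' + (i || ' seconds')::interval, 'x'
  from generate_series(1, 1000000) i;

update alpha set gamma = gamma || '#postfix#';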

--
Klaudie

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Tuesday, December 7th, 2021 at 15:57, Tom Lane wrote:

> Klaudie Willis klaudie.wil...@protonmail.com writes:
>
> > The following statement below, when not divided up into chunks, but run
> > across all 800M rows, did trigger an OOM-kill from the OS.
>
> An UPDATE should only result in memory bloat if it's queuing trigger
> events to be processed at end-of-statement. You claim there are
> no triggers, but are you sure? (what about foreign keys?)
>
> Otherwise, it seems possible that you've identified a memory leak,
> but there's not enough detail here to investigate. Can you create
> a reproducible test case?
>
> regards, tom lane




Re: Why can't I have a "language sql" anonymous block?

2021-12-13 Thread Bryn Llewellyn
Well… that’s the answer: “nobody thought it’d be useful”. Thanks, Tom.

The difference between using a “language sql” anonymous block and just 
executing the contained SQL statements is partly a clear declaration of the 
intent of your code and a guarantee that all the statements are sent from 
client to server in one go. (But see what I say about “prepare” below.)

Here’s how I’d reason the case.

There must be a reason to prefer a “language sql” procedure over a “language 
plpgsql” procedure—otherwise the former wouldn’t be supported.

A “language sql” procedure has restricted functionality compared with a 
“language plpgsql” procedure. So I can only guess that it’s preferred when it 
lets you program what you need ‘cos simpler means quicker.

In general, an anonymous block is preferred over a stored unit when you don’t 
want to clutter the schema with an object that’s used only occasionally. (In 
some cases, you don’t even have the privileges to create a stored unit but can 
execute an anonymous block.) So putting these two ideas together makes the case 
for a “language sql” anonymous block.

Another reason to support “language sql” anonymous blocks is to improve 
symmetry—and therefore usability: one fewer arbitrary rule to remember.

B.t.w., you mentioned the restriction that DO blocks can’t have parameters. The 
reason for allowing them is what I just referred to: you don’t want to, or simply 
cannot, create a procedure or function. Oracle Database allows binding to 
placeholders in anonymous blocks (their vocabulary for “parameters”)—and, as I 
recall, has since their very first appearance as a feature.

Might your “they don't so far” give me hope that someday they will? 
Presumably, the notion would have to include the ability to prepare once and 
execute many times using an anonymous block. (This is another counterintuitive 
restriction that, today, has to be learned.)
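
For what it's worth, one common workaround today is to smuggle "parameters" into a DO block through a session setting. A minimal sketch:

select set_config('my.greeting_target', 'world', false);

do language plpgsql $$
begin
  raise notice 'hello, %', current_setting('my.greeting_target');
end;
$$;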

t...@sss.pgh.pa.us wrote:

Bryn Llewellyn writes:
> Was there a deliberate decision not to allow a “language sql” anonymous 
> block? Or is it just that nobody thought that it would be useful?

I think nobody thought it'd be useful.  What's the difference from just 
executing the contained SQL statements?

(If DO blocks had parameters, the conclusion might be different, but they don't 
so far.)

regards, tom lane





Properly handling aggregate in nested function call

2021-12-13 Thread Matt Magoffin
I am working on a C aggregate function that returns a numeric[] result. If I 
execute the aggregate and return the results directly in my SQL, I get the 
expected results. For example:

SELECT vec_agg_mean(nums) FROM measurements;
NOTICE:  avg 0 = 1.2300
NOTICE:  avg 1 = 1.9700
NOTICE:  avg 2 = 3.7000
      vec_agg_mean       
-------------------------
 {1.2300,1.9700,3.7000}
(1 row)

The NOTICE logs are there to help me verify the computed result in the 
aggregate final function, and they essentially do

DatumGetCString(DirectFunctionCall1(numeric_out, datum))

However if I nest the aggregate inside something, such as unnest(), I get what 
appear to be just memory addresses:

SELECT unnest(vec_agg_mean(nums)) FROM measurements;
NOTICE:  avg 0 = 1.2300
NOTICE:  avg 1 = 1.9700
NOTICE:  avg 2 = 3.7000
     unnest      
-----------------
 94674302945040
 94674302945052
 94674302945064
(3 rows)

You can see the NOTICE logs are still the same. Passing the aggregate result to 
any other function seems to expose the problem, for example:

SELECT ARRAY[vec_agg_mean(nums)]::numeric[] FROM measurements;
NOTICE:  avg 0 = 1.2300
NOTICE:  avg 1 = 1.9700
NOTICE:  avg 2 = 3.7000
                      array                       
--------------------------------------------------
 {{94674302928624,94674302928636,94674302928648}}
(1 row)

Any ideas what I’m doing wrong here? The source is available here:

https://github.com/SolarNetwork/aggs_for_vecs/blob/9e742cdc32a113268fd3c1f928c8ac724acec9f5/vec_agg_mean.c
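
In case it helps narrow things down: pointer-sized integers where numerics should be is exactly what you'd see if the array elements were treated as pass-by-value. For comparison, a final function assembling a numeric[] would normally go through construct_array() with numeric's real type metadata; a minimal sketch (not the actual code from the repo above):

#include "postgres.h"
#include "utils/array.h"
#include "utils/lsyscache.h"
#include "catalog/pg_type.h"

/* assemble a numeric[] from already-computed numeric Datums */
static ArrayType *
make_numeric_array(Datum *elems, int nelems)
{
    int16   typlen;
    bool    typbyval;
    char    typalign;

    /* numeric is a varlena: typlen = -1, typbyval = false */
    get_typlenbyvalalign(NUMERICOID, &typlen, &typbyval, &typalign);
    return construct_array(elems, nelems, NUMERICOID,
                           typlen, typbyval, typalign);
}

If typbyval ended up true in that call (or the elements were copied with the wrong length), the Datum pointer values themselves would land in the array, which matches the output above.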
 


Cheers,
Matt Magoffin

Re: Why can't I have a "language sql" anonymous block?

2021-12-13 Thread Tom Lane
Bryn Llewellyn writes:
> Was there a deliberate decision not to allow a “language sql” anonymous 
> block? Or is it just that nobody thought that it would be useful?

I think nobody thought it'd be useful.  What's the difference from
just executing the contained SQL statements?

(If DO blocks had parameters, the conclusion might be different,
but they don't so far.)

regards, tom lane




Why can't I have a "language sql" anonymous block?

2021-12-13 Thread Bryn Llewellyn
My question is this:

Was there a deliberate decision not to allow a “language sql” anonymous block? 
Or is it just that nobody thought that it would be useful?

Here’s what I mean. First, something that works (using PG Version 14.1):

create procedure p_plpgsql()
  security definer
  language plpgsql
as $body$
begin
  drop table if exists t cascade;
  create table t(k serial primary key, v text not null);
end;
$body$;

I can transform this trivially to an anonymous block:

do language plpgsql $body$
begin
  drop table if exists t cascade;
  create table t(k serial primary key, v text not null);
end;
$body$;

I can also transform the procedure trivially to "language sql”:

create procedure p_sql()
  security definer
  language sql
as $body$
  drop table if exists t cascade;
  create table t(k serial primary key, v text not null);
$body$;

But trying the corresponding “language sql” anonymous block:

do language sql $body$
begin
  drop table if exists t cascade;
  create table t(k serial primary key, v text not null);
end;
$body$;

Causes this error:

0A000 (feature_not_supported): language "sql" does not support inline code 
execution




Re: Where can I follow the progress of "Pluggable Storage" development?

2021-12-13 Thread Jean Baro
Thank you, I will check it out.

On Mon, Dec 13, 2021 at 16:19, Fabrízio de Royes Mello <
fabri...@timbira.com.br> wrote:

>
> On Mon, Dec 13, 2021 at 11:35, Jean Baro wrote:
> >
> > Hello there.
> >
> > I am just an enthusiast of PostgreSQL and would like to get more
> information about Pluggable Storage's progress (or not). Please.
> >
>
> We have already supported it since version 12, released in 2019, but we
> named it "Table Access Method" [1].
>
> If you want to track the development activities, you should subscribe to
> the pgsql-hackers mailing list [2] and also check the commitfest [3].
>
> Regards,
>
> [1] https://www.postgresql.org/docs/current/tableam.html
> [2] https://www.postgresql.org/list/pgsql-hackers/
> [3] https://commitfest.postgresql.org/
>
> --
> Fabrízio Mello
>


Re: Where can I follow the progress of "Pluggable Storage" development?

2021-12-13 Thread Fabrízio de Royes Mello
On Mon, Dec 13, 2021 at 11:35, Jean Baro wrote:
>
> Hello there.
>
> I am just an enthusiast of PostgreSQL and would like to get more
information about Pluggable Storage's progress (or not). Please.
>

We have already supported it since version 12, released in 2019, but we named
it "Table Access Method" [1].

If you want to track the development activities, you should subscribe to the
pgsql-hackers mailing list [2] and also check the commitfest [3].

Regards,

[1] https://www.postgresql.org/docs/current/tableam.html
[2] https://www.postgresql.org/list/pgsql-hackers/
[3] https://commitfest.postgresql.org/
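
For a quick taste, the table access method is selectable per table since version 12 ("heap" is the built-in default):

-- equivalent to a plain CREATE TABLE, since "heap" is the default table AM
CREATE TABLE t (id int) USING heap;

-- the default can also be changed for a session or cluster
SET default_table_access_method = 'heap';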

--
Fabrízio Mello


Re: Error : /usr/local/share/lua/5.1/pgmoon/init.lua:211: don’t know how to auth: 10

2021-12-13 Thread Tom Lane
Kaushal Shriyan writes:
>> Error:
>> /usr/local/share/lua/5.1/pgmoon/init.lua:211: don't know how to auth: 10

I believe this is failing on a SCRAM password challenge.  You need
to update the client-side code to a version that knows about SCRAM,
or else change your password to be MD5-hashed instead of SCRAM-hashed
(see server's password_encryption setting).  v14 changed the default
for that from md5 to scram-sha-256.
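
The latter would look roughly like this (assuming the affected role is named "kong"; pg_hba.conf must also allow md5 for it):

SET password_encryption = 'md5';
ALTER ROLE kong WITH PASSWORD 'new-password';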

regards, tom lane




Error : /usr/local/share/lua/5.1/pgmoon/init.lua:211: don’t know how to auth: 10

2021-12-13 Thread Kaushal Shriyan
Hi,

I am running CentOS Stream release 8 with kong-2.6.0-1.x86_64 and the
PostgreSQL packages below.

postgresql14-libs-14.1-1PGDG.rhel8.x86_64
postgresql14-14.1-1PGDG.rhel8.x86_64
postgresql14-server-14.1-1PGDG.rhel8.x86_64.


#kong migrations bootstrap -c /etc/kong/kong.conf --vv
> 2021/12/13 10:44:57 [verbose] Kong: 2.6.0
> 2021/12/13 10:44:57 [debug] ngx_lua: 10020
> 2021/12/13 10:44:57 [debug] nginx: 1019009
> 2021/12/13 10:44:57 [debug] Lua: LuaJIT 2.1.0-beta3
> 2021/12/13 10:44:57 [verbose] reading config file at /etc/kong/kong.conf
> 2021/12/13 10:44:57 [debug] reading environment variables
> 2021/12/13 10:44:57 [debug] admin_access_log = "logs/admin_access.log"
> 2021/12/13 10:44:57 [debug] admin_error_log = "logs/error.log"
> 2021/12/13 10:44:57 [debug] admin_listen = {"127.0.0.1:8001 reuseport
> backlog=16384","127.0.0.1:8444 http2 ssl reuseport backlog=16384"}
> 2021/12/13 10:44:57 [debug] admin_ssl_cert = {}
> 2021/12/13 10:44:57 [debug] admin_ssl_cert_key = {}
> 2021/12/13 10:44:57 [debug] anonymous_reports = true
> 2021/12/13 10:44:57 [debug] cassandra_contact_points = {"127.0.0.1"}
> 2021/12/13 10:44:57 [debug] cassandra_data_centers = {"dc1:2","dc2:3"}
> 2021/12/13 10:44:57 [debug] cassandra_keyspace = "kong"
> 2021/12/13 10:44:57 [debug] cassandra_lb_policy = "RequestRoundRobin"
> 2021/12/13 10:44:57 [debug] cassandra_port = 9042
> 2021/12/13 10:44:57 [debug] cassandra_read_consistency = "ONE"
> 2021/12/13 10:44:57 [debug] cassandra_refresh_frequency = 60
> 2021/12/13 10:44:57 [debug] cassandra_repl_factor = 1
> 2021/12/13 10:44:57 [debug] cassandra_repl_strategy = "SimpleStrategy"
> 2021/12/13 10:44:57 [debug] cassandra_schema_consensus_timeout = 1
> 2021/12/13 10:44:57 [debug] cassandra_ssl = false
> 2021/12/13 10:44:57 [debug] cassandra_ssl_verify = false
> 2021/12/13 10:44:57 [debug] cassandra_timeout = 5000
> 2021/12/13 10:44:57 [debug] cassandra_username = "kong"
> 2021/12/13 10:44:57 [debug] cassandra_write_consistency = "ONE"
> 2021/12/13 10:44:57 [debug] client_body_buffer_size = "8k"
> 2021/12/13 10:44:57 [debug] client_max_body_size = "0"
> 2021/12/13 10:44:57 [debug] client_ssl = false
> 2021/12/13 10:44:57 [debug] cluster_control_plane = "127.0.0.1:8005"
> 2021/12/13 10:44:57 [debug] cluster_data_plane_purge_delay = 1209600
> 2021/12/13 10:44:57 [debug] cluster_listen = {"0.0.0.0:8005"}
> 2021/12/13 10:44:57 [debug] cluster_mtls = "shared"
> 2021/12/13 10:44:57 [debug] cluster_ocsp = "off"
> 2021/12/13 10:44:57 [debug] cluster_v2 = false
> 2021/12/13 10:44:57 [debug] database = "postgres"
> 2021/12/13 10:44:57 [debug] db_cache_ttl = 0
> 2021/12/13 10:44:57 [debug] db_cache_warmup_entities = {"services"}
> 2021/12/13 10:44:57 [debug] db_resurrect_ttl = 30
> 2021/12/13 10:44:57 [debug] db_update_frequency = 5
> 2021/12/13 10:44:57 [debug] db_update_propagation = 0
> 2021/12/13 10:44:57 [debug] dns_error_ttl = 1
> 2021/12/13 10:44:57 [debug] dns_hostsfile = "/etc/hosts"
> 2021/12/13 10:44:57 [debug] dns_no_sync = false
> 2021/12/13 10:44:57 [debug] dns_not_found_ttl = 30
> 2021/12/13 10:44:57 [debug] dns_order = {"LAST","SRV","A","CNAME"}
> 2021/12/13 10:44:57 [debug] dns_resolver = {}
> 2021/12/13 10:44:57 [debug] dns_stale_ttl = 4
> 2021/12/13 10:44:57 [debug] error_default_type = "text/plain"
> 2021/12/13 10:44:57 [debug] go_plugins_dir = "off"
> 2021/12/13 10:44:57 [debug] go_pluginserver_exe =
> "/usr/local/bin/go-pluginserver"
> 2021/12/13 10:44:57 [debug] headers = {"server_tokens","latency_tokens"}
> 2021/12/13 10:44:57 [debug] host_ports = {}
> 2021/12/13 10:44:57 [debug] kic = false
> 2021/12/13 10:44:57 [debug] log_level = "notice"
> 2021/12/13 10:44:57 [debug] lua_package_cpath = ""
> 2021/12/13 10:44:57 [debug] lua_package_path = "./?.lua;./?/init.lua;"
> 2021/12/13 10:44:57 [debug] lua_socket_pool_size = 30
> 2021/12/13 10:44:57 [debug] lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
> 2021/12/13 10:44:57 [debug] lua_ssl_trusted_certificate = {}
> 2021/12/13 10:44:57 [debug] lua_ssl_verify_depth = 1
> 2021/12/13 10:44:57 [debug] mem_cache_size = "128m"
> 2021/12/13 10:44:57 [debug] nginx_admin_client_body_buffer_size = "10m"
> 2021/12/13 10:44:57 [debug] nginx_admin_client_max_body_size = "10m"
> 2021/12/13 10:44:57 [debug] nginx_admin_directives =
> {{name="client_max_body_size",value="10m"},{name="client_body_buffer_size",value="10m"}}
> 2021/12/13 10:44:57 [debug] nginx_daemon = "on"
> 2021/12/13 10:44:57 [debug] nginx_events_directives =
> {{name="worker_connections",value="auto"},{name="multi_accept",value="on"}}
> 2021/12/13 10:44:57 [debug] nginx_events_multi_accept = "on"
> 2021/12/13 10:44:57 [debug] nginx_events_worker_connections = "auto"
> 2021/12/13 10:44:57 [debug] nginx_http_client_body_buffer_size = "8k"
> 2021/12/13 10:44:57 [debug] nginx_http_client_max_body_size = "0"
> 2021/12/13 10:44:57 [debug] nginx_http_directives =
> 

RE: [EXTERNAL] Re: performance expectations for table(s) with 2B recs

2021-12-13 Thread Godfrin, Philippe E
>From: Peter J. Holzer hjp-pg...@hjp.at
>Sent: Friday, December 10, 2021 3:43 PM
>To: pgsql-general@lists.postgresql.org
>Subject: Re: [EXTERNAL] Re: performance expectations for table(s) with 2B recs
>
>On 2021-12-10 18:04:07 +, Godfrin, Philippe E wrote:
>> >But in my experience the biggest problem with large tables are unstable
>> >execution plans - for most of the parameters the optimizer will choose
>> >to use an index, but for some it will erroneously think that a full
>> >table scan is faster. That can lead to a situation where a query
>> >normally takes less than a second, but sometimes (seemingly at random)
>> >it takes several minutes
>[...]
>> For Peter I have a question. What exactly causes ‘unstable execution plans’?
>>
>> Besides not using bind variables and bad statistics, would you elaborate
>> on what would contribute to that instability?
>
>Not using bind variables and bad statistics are certainly big factors:
>
>On one hand, not using bind variables gives a lot more information to the
>optimizer, so it can choose a better plan at run time. On the other hand,
>that makes it hard to predict what plan it will choose.
>
>Bad statistics come in many flavours: They might just be wrong, that's
>usually easy to fix. More problematic are statistics which just don't
>describe reality very well - they may not show a correlation, causing
>the optimizer to assume that two distributions are independent when they
>really aren't (since PostgreSQL 10 you can create statistics on multiple
>columns which helps in many but not all cases) or not show some other
>peculiarity of the data. Or they may be just so close to a flipping
>point that a small change causes the optimizer to choose a wildly
>different plan.
>
>Another source is dynamically generated SQL. Your application may just
>put together SQL from fragments or it might use something like
>SQLAlchemy or an ORM. In any of these cases what looks like one query
>from a user's perspective may really be a whole family of related
>queries - and PostgreSQL will try to find the optimal plan for each of
>them. Which is generally a good thing, but it adds opportunities to mess
>up.
>
>hp
>
>--
>   _  | Peter J. Holzer    | Story must make more sense than reality.
>|_|_) |                    |
>| |   | h...@hjp.at         | -- Charles Stross, "Creative writing
>__/   | http://www.hjp.at/ |       challenge!"

Good answer Peter, I agree wholeheartedly. I was curious if there was something 
specific to PostgreSQL.
phil
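
As a footnote to Peter's point about multi-column statistics (PostgreSQL 10 and later), a minimal sketch with hypothetical names:

CREATE STATISTICS addr_city_zip (dependencies) ON city, zip FROM addresses;
ANALYZE addresses;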


Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Thank you Rainer,
I'll have to play a little bit myself, and read some serious docs about it, in 
order to fully comprehend your thoughts.
I'll revisit your points, I hope.

On 13/12/21 5:11 p.m., Rainer Duffner wrote:




On 13.12.2021 at 12:41, Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

Our setup has been open source since forever. So licenses for something that 
used to be free for ages would be hard to introduce.



That is totally understandable.
140 x 800 for the RHEL licenses alone is over 100k/year.
Though you might get a volume discount at that point ;-)



So Docker is NOT free? Please share your thoughts? I am a complete noob.



The commercial version is not free.
The „CE“ version is free, but I’m not sure how stable the API is.

Dockerhub, for what it’s worth, isn’t completely free anymore either.




Those servers I am talking about have no internet connectivity. And the 
satellite connection costs are high.
(although I think we pay a fixed amount for a certain total data transfer size).




Yes. That’s why using a more offline-friendly infrastructure might make more 
sense.

RHEL seems to be pretty well tuned running in high-secure air-gapped networks - 
which is what a ship basically is.


The question is: do you want to get there?
But maybe your developers want to get here, because they don’t want to learn 
about software-packaging (anymore) - but is that what the business wants?


Those servers live for years, the objective is to facilitate upgrades.



Yes, but docker-upgrades aren’t free either.

At least, I cannot imagine running such an infrastructure in the gung-ho style 
that a typical developer runs his docker-containers.

You’d want to run a tight ship with those ;-)

You will need a lot more tooling around this (continuous integration, continuous deployment) - which is the reason I suggested moving all this infrastructure to the ship itself. If you produce and 
mirror the artifacts locally, you should have less backhaul traffic (which is what I assume is killing you with sat-com - downstream can probably be had cheap-ish these days - or pretty soon via the 
likes of StarLink)








https://www.docker.com/blog/how-carnival-creates-customized-guest-experiences-with-docker/

Thanks for the link, I didn't quite understand what they do with docker (video 
included).


It’s passenger entertainment and engagement, from what I can see.

It’s sad and funny to read, as shortly after that huge cruises got out of 
fashion ;-)


It’s also not mission-critical.




120 docker containers in two data centers on the ship? Ours will be just a 
single linux box with limited connectivity (in some seas no connectivity ) to 
the internet/shore.



Yes, and this is IMO where you will have to get management on board to upgrade infrastructure and tooling around docker containers (and likely Kubernetes) - if your developers persist on using it 
that way.


You will end up running a small server-cluster on each ship - I would guess 
there are specialized vendors who produce sea-worthy server-equipment.





(That was pre-pandemic…)

I would make an educated guess that you’d need to have the whole 
docker-infrastructure on each ship (build-server, repository etc.pp.) to 
minimize sat-com traffic.


Hmm, I don't know about that. The hardware is given (existing) and limited.
You are like the 2nd person who warned about comms as being an issue with 
docker/containers.

Can't someone have a free docker system inside a linux server and run the 
containers (free again) until he/she decides to upgrade either docker or (more 
frequently) one of the images?
Is Docker upwards compatible? Meaning, can new Docker versions run old images?



Yes, likely. But upgrading docker itself sometimes comes with its own 
challenges.

As you said, all these servers are with limited connectivity and no local help 
available...

Again, I’m the last person you want to ask how docker actually works (but 
often, the developers themselves don’t know either, but they know which 
commands to feed it…)

Also, most of my knowledge of ships is from watching NCIS (and reading Clive 
Cussler novels...).






I mean, it looks like it could be done. But this is where the „dev“ part in the 
„devops" world has to take a step back and the „ops“ guys need to come forward.


Can you please explain in more detail?



With 140-ish servers air-gapped on vessels around the world, this is IMO a 
serious operations-problem and needs to be handled properly, with an air-tight 
(or water-tight…) operational concept.

Your infrastructure has outgrown its original design limits. Time to talk to 
management about upping the game (and the budget).




Best Regards



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

On 13/12/21 5:55 p.m., Adrian Klaver wrote:

On 12/13/21 03:41, Achilleas Mantzios wrote:

Hi Rainer,

On 13/12/21 11:50 a.m., Rainer Duffner wrote:




So Docker is NOT free? Please share your thoughts? I am a complete noob.


Take a look here:

https://www.docker.com/pricing

Thank you Adrian!



Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt







--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Hi
On 13/12/21 6:11 p.m., Benedict Holland wrote:
Check out rolling updates with Kubernetes and yes, a containerized postgresql environment is likely the best option. The really nice bit about containers is that you don't actually care about 
underlying hardware. You need a docker daemon. K8s manages clusters and it sounds like your use case would benefit from that.
Thanks; PostgreSQL's bigger challenges, as we all know, will be upgrading OIDs, upgrading the JDBC-related Java code, and maybe many more related challenges. I won't say which, but we are running an 
unsupported version of pgsql on the ships.

So apps are upgraded much more frequently than the system or pgsql.
I will check out rolling updates with Kubernetes. (Are we talking BDR here?)


On Mon, Dec 13, 2021, 11:03 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

Hi Berto,

On 13/12/21 3:03 p.m., Bèrto ëd Sèra wrote:


On Mon, 13 Dec 2021 at 12:41, Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

Hmm, I don't know about that. The hardware is given (existing) and 
limited.
You are like the 2nd person who warned about comms as being an issue 
with docker/containers.


Quick question: do those ships ever touch port? If so, it's not a fact that 
you have to rely on sat com to do updates. Just keep an on-shore table of 
updates and do the updates every time a
ship is in a port and can use a normal port wifi com. IMHO this would be 
the safest option EVEN if satcom was free of cost, as it will never be half as 
stable as a land based com line.


Yes they do touch port, otherwise they would be of little commercial value!
I am confused here: are we talking about traffic due to:
a) normal business traffic (designed by me/us, old system based on DBMirror)
b) system updates due to some decision of the system itself and/or our team
c) normal app shipping via new images, or new .ear/.war or whatever
?

And since you are not deploying the same thing everywhere (you quote 
different HW everywhere, so hardly one image could be optimised for 
everything), you are going to need to test upgrades for
each ship, which is both dangerous and expensive, if done on sat com.


We are deploying the exact same app everywhere (jboss), but of course 
exim4 and other services will be specific to every ship.



My 5p
Berto



-- 
Achilleas Mantzios

DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt




--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Postgresql + containerization possible use case

2021-12-13 Thread Benedict Holland
Check out rolling updates with Kubernetes and yes, a containerized
postgresql environment is likely the best option. The really nice bit about
containers is that you don't actually care about underlying hardware. You
need a docker daemon. K8s manages clusters and it sounds like your use case
would benefit from that.

On Mon, Dec 13, 2021, 11:03 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> Hi Berto,
>
> On 13/12/21 3:03 μ.μ., Bèrto ëd Sèra wrote:
>
>
> On Mon, 13 Dec 2021 at 12:41, Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
>> Hmm, I don't know about that. The hardware is given (existing) and
>> limited.
>> You are like the 2nd person who warned about comms as being an issue with
>> docker/containers.
>>
>
> Quick question: do those ships ever touch port? If so, it's not a fact
> that you have to rely on sat com to do updates. Just keep an on-shore table
> of updates and do the updates every time a ship is in a port and can use a
> normal port wifi com. IMHO this would be the safest option EVEN if satcom
> was free of cost, as it will never be half as stable as a land based com
> line.
>
> Yes they do touch port, otherwise they would be of little commercial value!
> I am confused here: are we talking about traffic due to:
> a) normal business traffic (designed by me/us, old system based on DBMirror)
> b) system updates due to some decision of the system itself and/or our team
> c) normal app shipping via new images, or new .ear/.war or whatever
> ?
>
> And since you are not deploying the same thing everywhere (you quote
> different HW everywhere, so hardly one image could be optimised for
> everything), you are going to need to test upgrades for each ship, which is
> both dangerous and expensive, if done on sat com.
>
>
> We are deploying the exact same app everywhere (jboss), but of course
> exim4 and other services will be specific to every ship.
>
>
> My 5p
> Berto
>
>
>
> --
> Achilleas Mantzios
> DBA, Analyst, IT Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>


Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Hi Berto,

On 13/12/21 3:03 p.m., Bèrto ëd Sèra wrote:


On Mon, 13 Dec 2021 at 12:41, Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

Hmm, I don't know about that. The hardware is given (existing) and limited.
You are like the 2nd person who warned about comms as being an issue with 
docker/containers.


Quick question: do those ships ever touch port? If so, it's not a fact that you have to rely on sat com to do updates. Just keep an on-shore table of updates and do the updates every time a ship is 
in a port and can use a normal port wifi com. IMHO this would be the safest option EVEN if satcom was free of cost, as it will never be half as stable as a land based com line.



Yes they do touch port, otherwise they would be of little commercial value!
I am confused here: are we talking about traffic due to:
a) normal business traffic (designed by me/us, old system based on DBMirror)
b) system updates due to some decision of the system itself and/or our team
c) normal app shipping via new images, or new .ear/.war or whatever
?
And since you are not deploying the same thing everywhere (you quote different HW everywhere, so hardly one image could be optimised for everything), you are going to need to test upgrades for each 
ship, which is both dangerous and expensive, if done on sat com.


We are deploying the exact same app everywhere (jboss), but of course exim4 
and other services will be specific to every ship.



My 5p
Berto



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Postgresql + containerization possible use case

2021-12-13 Thread Adrian Klaver

On 12/13/21 03:41, Achilleas Mantzios wrote:

Hi Rainer,

On 13/12/21 11:50 a.m., Rainer Duffner wrote:




So Docker is NOT free? Please share your thoughts? I am a complete noob.


Take a look here:

https://www.docker.com/pricing


Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgresql + containerization possible use case

2021-12-13 Thread Rainer Duffner


> On 13.12.2021 at 12:41, Achilleas Mantzios wrote:
> 
> Our setup has been open source since forever. So licenses for something that 
> used to be free for ages would be hard to introduce.


That is totally understandable.
140 x 800 for the RHEL licenses alone is over 100k/year.
Though you might get a volume discount at that point ;-)


> So Docker is NOT free? Please share your thoughts? I am a complete noob. 


The commercial version is not free.
The „CE“ version is free, but I’m not sure how stable the API is.

Dockerhub, for what it’s worth, isn’t completely free anymore either.



> Those servers I am talking about have no internet connectivity. And the 
> satellite connection costs are high.
> (although I think we pay a fixed amount for a certain total data transfer 
> size).
> 


Yes. That’s why using a more offline-friendly infrastructure might make more 
sense.

RHEL seems to be pretty well tuned running in high-secure air-gapped networks - 
which is what a ship basically is.

>> The question is: do you want to get there?
>> But maybe your developers want to get here, because they don’t want to learn 
>> about software-packaging (anymore) - but is that what the business wants?
>> 
> Those servers live for years, the objective is to facilitate upgrades. 


Yes, but docker-upgrades aren’t free either.

At least, I cannot imagine running such an infrastructure in the gung-ho style 
that a typical developer runs his docker-containers.

You’d want to run a tight ship with those ;-)

You will need a lot more tooling around this (continuous integration, 
continuous deployment) - which is the reason I suggested moving all this 
infrastructure to the ship itself. If you produce and mirror the artifacts 
locally, you should have less backhaul traffic (which is what I assume is 
killing you with sat-com - downstream can probably be had cheap-ish these days 
- or pretty soon via the likes of StarLink)





>> https://www.docker.com/blog/how-carnival-creates-customized-guest-experiences-with-docker/
>
> Thanks for the link, I didn't quite understand what they do with docker (video 
> included).

It’s passenger entertainment and engagement, from what I can see.

It’s sad and funny to read, as shortly after that huge cruises got out of 
fashion ;-)


It’s also not mission-critical.



> 120 docker containers in two data centers on the ship? Ours will be just a 
> single linux box with limited connectivity (in some seas no connectivity ) to 
> the internet/shore.


Yes, and this is IMO where you will have to get management on board to upgrade 
infrastructure and tooling around docker containers (and likely Kubernetes) - 
if your developers persist on using it that way.

You will end up running a small server-cluster on each ship - I would guess 
there are specialized vendors who produce sea-worthy server-equipment.



>> 
>> (That was pre-pandemic…)
>> 
>> I would make an educated guess that you’d need to have the whole 
>> docker-infrastructure on each ship (build-server, repository etc.pp.) to 
>> minimize sat-com traffic.
> 
> Hmm, I don't know about that. The hardware is given (existing) and limited.
> You are like the 2nd person who warned about comms as being an issue with 
> docker/containers.
> 
> Can't someone have a free docker system inside a linux server and run the 
> containers (free again) until he/she decides to upgrade either docker or 
> (more frequently) one of the images?
> Is Docker upwards compatible? Meaning, can new Docker versions run old images?


Yes, likely. But upgrading docker itself sometimes comes with its own 
challenges.

As you said, all these servers are with limited connectivity and no local help 
available...

Again, I’m the last person you want to ask how docker actually works (but 
often, the developers themselves don’t know either, but they know which 
commands to feed it…)

Also, most of my knowledge of ships is from watching NCIS (and reading Clive 
Cussler novels...). 


> 
>> 
>> I mean, it looks like it could be done. But this is where the „dev“ part in 
>> the „devops" world has to take a step back and the „ops“ guys need to come 
>> forward.
>> 
> Can you please explain in more detail?


With 140-ish servers air-gapped on vessels around the world, this is IMO a 
serious operations-problem and needs to be handled properly, with an air-tight 
(or water-tight…) operational concept.

Your infrastructure has outgrown its original design limits. Time to talk to 
management about upping the game (and the budget).




Best Regards

Where can I follow the progress of "Pluggable Storage" development?

2021-12-13 Thread Jean Baro
Hello there.

I am just an enthusiast of PostgreSQL and would like to get more
information about Pluggable Storage's progress (or not). Please.

Thanks


Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Hi Rainer,

On 13/12/21 11:50 a.m., Rainer Duffner wrote:




On 10.12.2021 at 13:01, Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 10/12/21 1:24 p.m., o1bigtenor wrote:



On Fri, Dec 10, 2021 at 3:24 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

Hi
we are running some 140 remote servers (in the 7 seas via satellite 
connections), and in each one of them we run:
- jboss
- postgresql
- uucp (not as a daemon)
- gpsd
- samba
- and possibly some other services

Hardware and software upgrades are very hard since there is no physical 
access to those servers by trained personnel, and also there is a diversity of 
software versions.

The idea for future upgrades is to containerize certain aspects of the 
software. The questions are (I am not skilled in docker, only minimal contact 
with lxd) :
- is this a valid use case for containerization?
- are there any gotchas around postgresql, the reliability of the system?
- since we are talking about 4+ basic services (pgsql, jboss, uucp, samba), 
is docker a good fit or should we be looking into lxd as well?
- are there any success stories of others after following a similar path?



Thanks

My experience with LXD is that upon install you are now on a regular update 
plan that is impossible to change.

Ehhmmm we are running some old versions there already (jboss, pgsql), LXD would 
not differ in this regard.
What do you mean? That the updates for LXD are huge? Short-spaced/very regular?
Can you please elaborate some more on that?



IIRC, you can’t really control which updates are installed for LXD (and snap). 
You can’t create a local mirror.

IIRC, you can delay snap updates, but you can’t really reject them.

Maybe you can these days, with landscape server?

(insert the usual rant about Enterprise != Ubuntu here)

I don’t know about LXD, but as it’s only running on Ubuntu and is apparently developed by a single guy (who might or might not work for Canonical - sorry, too lazy to check), I wouldn’t hold my 
breath as to its long-term viability.


Ubuntu will probably morph into a container-only, cloud-only OS sooner than 
later - the writing is on the wall (IMHO).

All notes taken, thank you.




This means that your very expensive data connection will be preempted for 
updates at the whim of the
canonical crew. Suggest not using such (most using such on wireless connections 
seem to have found
the resultant issues less than wonderful - - cost (on the data connection) 
being #1 and the inability to achieve
solid reliability crowding it for #2).

Crew has their own paid service. Business connection is for business not crew.



The word „crew“ was meant to say „employees of Canonical“ - I’m sure the 
allegory was not meant to mess with you...



What I am interested is, could docker be of any use in the above scenario? 
Containerization in general?
The guys (admins/mgmt) here seem to be dead set on docker, but I have to 
guarantee some basic data safety requirements.




I know very little about docker, but IMO, for ultimate stability, you could 
switch to RHEL and use their certified images:

https://catalog.redhat.com/software/containers/search?q=PostgreSQL%2012=1


My coworker says, he re-packages all his docker-images himself (with RPMs from 
his own mirror), so that he understands what’s really in them.


The big problem that I see with your use-case and docker is that docker implies 
frequent, small updates to the whole stack - including docker itself (unless 
you pay for the LTS version).

This is not what you do right now, I reckon?



Our setup has been open source since forever. So licenses for something that 
used to be free for ages would be hard to introduce.
So Docker is NOT free? Please share your thoughts? I am a complete noob.
Those servers I am talking about have no internet connectivity. And the 
satellite connection costs are high.
(although I think we pay a fixed amount for a certain total data transfer size).


The question is: do you want to get there?
But maybe your developers want to get here, because they don’t want to learn 
about software-packaging (anymore) - but is that what the business wants?


Those servers live for years, the objective is to facilitate upgrades.


https://www.docker.com/blog/how-carnival-creates-customized-guest-experiences-with-docker/

Thanks for the link, I didn't quite understand what they do with docker (video 
included).
120 docker containers in two data centers on the ship? Ours will be just a 
single linux box with limited connectivity (in some seas no connectivity ) to 
the internet/shore.


(That was pre-pandemic…)

I would make an educated guess that you’d need to have the whole 
docker-infrastructure on each ship (build-server, repository etc.pp.) to 
minimize sat-com traffic.


Hmm, I don't know about that. The hardware is given (existing) and limited.
You are like the 2nd person who warned about comms as 

Re: Postgresql + containerization possible use case

2021-12-13 Thread Rainer Duffner


> On 10.12.2021 at 13:01, Achilleas Mantzios wrote:
> 
> On 10/12/21 1:24 p.m., o1bigtenor wrote:
>> 
>> 
>> On Fri, Dec 10, 2021 at 3:24 AM Achilleas Mantzios 
>> <ach...@matrix.gatewaynet.com> wrote:
>> Hi
>> we are running some 140 remote servers (in the 7 seas via satellite 
>> connections), and in each one of them we run:
>> - jboss
>> - postgresql
>> - uucp (not as a daemon)
>> - gpsd
>> - samba
>> - and possibly some other services
>> 
>> Hardware and software upgrades are very hard since there is no physical 
>> access to those servers by trained personnel, and also there is a diversity 
>> of software versions.
>> 
>> The idea for future upgrades is to containerize certain aspects of the 
>> software. The questions are (I am not skilled in docker, only minimal 
>> contact with lxd) :
>> - is this a valid use case for containerization?
>> - are there any gotchas around postgresql, the reliability of the system?
>> - since we are talking about 4+ basic services (pgsql, jboss, uucp, samba), 
>> is docker a good fit or should we be looking into lxd as well?
>> - are there any success stories of others after following a similar path?
>> 
>> 
> Thanks
>> My experience with LXD is that upon install you are now on a regular update 
>> plan that is impossible to change. 
> Ehhmmm we are running some old versions there already (jboss, pgsql), LXD 
> would not differ in this regard.
> What do you mean? That the updates for LXD are huge? Short-spaced/very 
> regular?
> Can you please elaborate some more on that?


IIRC, you can’t really control which updates are installed for LXD (and snap). 
You can’t create a local mirror.

IIRC, you can delay snap updates, but you can’t really reject them. 

Maybe you can these days, with landscape server?

(insert the usual rant about Enterprise != Ubuntu here)

I don’t know about LXD, but as it’s only running on Ubuntu and is apparently 
developed by a single guy (who might or might not work for Canonical - sorry, 
too lazy to check), I wouldn’t hold my breath as to its long-term viability.

Ubuntu will probably morph into a container-only, cloud-only OS sooner than 
later - the writing is on the wall (IMHO).


>> This means that your very expensive data connection will be preempted for 
>> updates at the whim of the 
>> canonical crew. Suggest not using such (most using such on wireless 
>> connections seem to have found 
>> the resultant issues less than wonderful - - cost (on the data connection) 
>> being #1 and the inability to achieve 
>> solid reliability crowding it for #2). 
> Crew has their own paid service. Business connection is for business not crew.


The word „crew“ was meant to say „employees of Canonical“ - I’m sure the 
allegory was not meant to mess with you...


> What I am interested is, could docker be of any use in the above scenario? 
> Containerization in general?
> The guys (admins/mgmt) here seem to be dead set on docker, but I have to 
> guarantee some basic data safety requirements.
> 


I know very little about docker, but IMO, for ultimate stability, you could 
switch to RHEL and use their certified images:

https://catalog.redhat.com/software/containers/search?q=PostgreSQL%2012=1


My coworker says, he re-packages all his docker-images himself (with RPMs from 
his own mirror), so that he understands what’s really in them.


The big problem that I see with your use-case and docker is that docker implies 
frequent, small updates to the whole stack - including docker itself (unless 
you pay for the LTS version).

This is not what you do right now, I reckon?

The question is: do you want to get there?
But maybe your developers want to get here, because they don’t want to learn 
about software-packaging (anymore) - but is that what the business wants?


https://www.docker.com/blog/how-carnival-creates-customized-guest-experiences-with-docker/

(That was pre-pandemic…)

I would make an educated guess that you’d need to have the whole 
docker-infrastructure on each ship (build-server, repository etc.pp.) to 
minimize sat-com traffic.

I mean, it looks like it could be done. But this is where the „dev“ part in the 
„devops" world has to take a step back and the „ops“ guys need to come forward.




Rainer

LibPQ: Lifetime of PGresult passed in to PQnoticeReceiver callback

2021-12-13 Thread Dominique Devienne
Hi,

The doc at
https://www.postgresql.org/docs/current/libpq-notice-processing.html is not
clear to me on whether it should be PQclear'd or not. Who manages the
lifetime of that PGresult?

Also, the "could be called as long as either the PGconn or PGresult exist"
mention implies we do not need to clear the notice callback before
PQfinish'ing the connection (and that does not seem possible anyway, given
that passing in a NULL pointer is just a *get*, it seems).
Is that correct?

Yet we are seeing our notice callback being called with an invalid PGresult
(i.e. crash accessing it).
What could be the reasons for that? I'm assuming user-error in our code,
but I don't see why yet.
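
For what it's worth, our reading of that page is that libpq owns the PGresult it passes in: the receiver must not PQclear it and must not keep the pointer after returning. A minimal sketch of a receiver that copies what it needs before returning (under that assumption):

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Called by libpq for each notice. The PGresult is owned by libpq and,
 * as we understand it, is only valid for the duration of this call:
 * extract what you need here, and do not PQclear it.
 */
static void
my_notice_receiver(void *arg, const PGresult *res)
{
    const char *msg = PQresultErrorMessage(res);

    fprintf(stderr, "notice: %s", msg);
    (void) arg;
}

/* installed once after connecting:
 *   PQsetNoticeReceiver(conn, my_notice_receiver, NULL);
 */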

Thanks for any insights. --DD


Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

On 11/12/21 6:31 p.m., Michael Lewis wrote:
Interesting. I'm not sure that moving an image for a VM with Postgres is compatible with the goal of minimizing time/data but if upgrades are yearly or something, perhaps that is reasonable. It has 
been 9+ years since that post as well. But the full data will need to be included in the image if the goal is not requiring any (significant) tech support at each location, right? Unless the data 
that needs to be stored at each site is small, that seems a bit unlikely to be a feasible option.
The data are independent, read-write, and different for each server. Basically they (the sysadmin people) want to facilitate system/kernel upgrades, which is a must, as old hardware is harder to support with 
newer kernel versions.

Their idea is for exim4, pgsql, jboss to reside in separate docker images.
However I asked for a detailed description of the problem, in order to focus 
more formally on the problem.
I know docker and postgresql have been a hot topic for quite some time. The first 
presentations said "don't do it", but lately we see more and more positive 
experiences, articles, and success stories.

Pgsql major versions and data go hand in hand; I hate to give them the illusion 
that separating postgresql from data is something to consider.
Serious thought is needed, for sure. I am just thinking that dockerizing exim4, 
jboss, and other services might be a valid idea, while still keeping postgresql 
and data on the host?

I had so much troubles because of unreliable system setup in the past, that I'd 
hate to repeat the same (or similar) mistakes again.

--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Hi Michael, Adrian

On 10/12/21 11:28 p.m., Adrian Klaver wrote:

On 12/10/21 12:06, Michael Lewis wrote:

On Fri, Dec 10, 2021, 2:24 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

    Hi
    we are running some 140 remote servers (in the 7 seas via satellite
    connections)


How are they used? What is in Postgres? Should that all have the exact same 
read only data at all times?



A previous explanation from Achilleas Mantzios , not sure if this is still 
exactly how it works:

https://www.postgresql.org/message-id/3124961.tmAbAIay6W%40smadev.internal.net

Thank you Adrian, the situation is similar to the one you posted above 
(in 2012), albeit the sizes/number of remote servers have increased.

In order to respond to Michael,
Remote pgsql servers are masters. A large majority of tables are read-only (by software, not enforced by other means), but those servers have their very important write functions as well. And the 
data are individual to each of those 100+ servers. There are no two identical DBs in the whole topology. Every server has its own data specific to its vessel.









--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

Hi Adrian

On 10/12/21 6:01 p.m., Adrian Klaver wrote:

On 12/10/21 01:24, Achilleas Mantzios wrote:

Hi




The idea for future upgrades is to containerize certain aspects of the 
software. The questions are (I am not skilled in docker, only minimal contact 
with lxd) :
- is this a valid use case for containerization?
- are there any gotchas around postgresql, the reliability of the system?
- since we are talking about 4+ basic services (pgsql, jboss, uucp, samba), is 
docker a good fit or should we be looking into lxd as well?
- are there any success stories of others after following a similar path?


My question is: what is it going to change? The software in the containers still needs to be maintained/upgraded, and now you have added maintenance and management of the container software and 
communication between containers.


I think the rationale is that sysadmins may upgrade the system and kernel without worrying about breaking things, while the docker images of the respective services (jboss, pgsql, exim4, etc.) evolve (more) 
independently.


Thank you!

PS

For those who wonder about UUCP: UUCP was our comms solution before we installed TCP/IP on the vessels. However, to this day, it provides a nice management layer, a sort of protection before data 
leave or reach the vessel, in a user-controlled manner. So uucp stayed, as it matched the business exactly as far as data transfers and emails are concerned. It would be hard to uniformly 
manage data transfers and emails in/out in a plain TCP/IP setup (rsync, ftp, etc., sendmail/exim4/postfix or other MTA).








--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Postgresql + containerization possible use case

2021-12-13 Thread Achilleas Mantzios

On 10/12/21 4:02 p.m., o1bigtenor wrote:


The 'book' says everything is wonderful - - - - if it were me - - - no 
guarantees until 'I' am sure.
If they want it - - - - and want you to guarantee it - - - - I wouldn't touch 
it myself!! (That's my opinion and
worth all of what you paid for it. I have a pile of software installed on my main 
use system - - - I'm looking
for good stuff that works and far too much stuff in the programming world is 
hype and not function!!)

Thanks, this is more than true.


Regards



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt