On 04/03/18 11:28, Laurenz Albe wrote:
[...]
psql:testing/test.pg_sql:42: ERROR: function
WRITE_MESSAGE_TO_TABLE(i_function => text, i_message => text, i_level =>
text, i_present_user => name, i_session_user => name,
i_transaction_timestamp => timestamp with time zone, i_transaction_id =>
Hi There,
Is anybody aware of how to encrypt the bind password for LDAP authentication in
pg_hba.conf? Anonymous bind is disabled in our organization, so we have to use a
bind ID and password, but keeping them as plaintext in pg_hba.conf defeats the
security purpose. We want to either encrypt it or
On 04/03/2018 11:47 AM, PegoraroF10 wrote:
Suppose a DB with dozens of schemas with same structure.
DB
  Schema1
    Table1
    Table2
  Schema2
    Table1
    Table2
  Schema3
    Table1
    Table2
Then we want to execute an SQL statement on specific schemas, and the result of
it could be a
Make a view that joins all the things, with a column providing the name of the
schema that they came from.
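As a sketch of that suggestion (schema and table names taken from the example above; the view name is illustrative):

```sql
-- Hypothetical view combining the same table from each schema,
-- tagging each row with the schema it came from.
CREATE VIEW all_table1 AS
SELECT 'schema1' AS schema_name, t.* FROM schema1.table1 t
UNION ALL
SELECT 'schema2' AS schema_name, t.* FROM schema2.table1 t
UNION ALL
SELECT 'schema3' AS schema_name, t.* FROM schema3.table1 t;

-- Filter on schema_name to restrict a query to specific schemas:
SELECT * FROM all_table1 WHERE schema_name IN ('schema1', 'schema3');
```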
> On Apr 3, 2018, at 10:47 , PegoraroF10 wrote:
>
> Suppose a DB with dozens of schemas with same structure.
> DB
> Schema1
> Table1
> Table2
> Schema2
>
Suppose a DB with dozens of schemas with same structure.
DB
  Schema1
    Table1
    Table2
  Schema2
    Table1
    Table2
  Schema3
    Table1
    Table2
Then we want to execute an SQL statement on specific schemas, and the result
could be a UNION ALL. So, how could we write a function that runs that SQL on
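One possible sketch of such a function, using dynamic SQL to build the UNION ALL over a list of schemas (the function name, the column list, and table1 are all illustrative assumptions, not from the original thread):

```sql
-- Hypothetical helper: runs the same query against each listed schema
-- and returns the concatenated result.
CREATE OR REPLACE FUNCTION query_schemas(p_schemas text[])
RETURNS TABLE (schema_name text, id integer) AS
$$
DECLARE
    v_sql text;
BEGIN
    -- Build one SELECT per schema, quoting identifiers safely with format().
    SELECT string_agg(
               format('SELECT %L AS schema_name, id FROM %I.table1', s, s),
               ' UNION ALL ')
      INTO v_sql
      FROM unnest(p_schemas) AS s;
    RETURN QUERY EXECUTE v_sql;
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT * FROM query_schemas(ARRAY['schema1', 'schema3']);
```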
On 04/03/2018 09:40 AM, hmidi slim wrote:
I tried insert into availability values ('product x',
'[2018-02-02,2018-03-01]'::daterange); and I got the same result as with
insert into availability values ('product x', daterange('2018-02-02',
'2018-03-01', '[]')).
Yes, those are equivalent ways
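The equivalence can be checked directly; both spellings produce the same canonical range:

```sql
-- Both expressions denote the same inclusive-inclusive range;
-- Postgres canonicalizes it to [2018-02-02,2018-03-02).
SELECT '[2018-02-02,2018-03-01]'::daterange
       = daterange('2018-02-02', '2018-03-01', '[]');  -- returns true
```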
HI,
I tried insert into availability values ('product x',
'[2018-02-02,2018-03-01]'::daterange); and I got the same result as with
insert into availability values ('product x', daterange('2018-02-02',
'2018-03-01', '[]')).
On Sat, Mar 31, 2018 at 1:49 PM, Radoslav Nedyalkov
wrote:
> Hi all,
> it's very simple and intuitive case but let me describe first.
> 1. session 1 calls pg_advisory_lock(1234) and succeeds.
> 2. session 2 calls pg_advisory_lock(1234) and stops on waiting.
> All fine BUT
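The described behaviour can be reproduced like this (the key 1234 is from the mail; the non-blocking variant is an addition):

```sql
-- Session 1: acquires the lock immediately.
SELECT pg_advisory_lock(1234);

-- Session 2: blocks here until session 1 releases the lock with
-- pg_advisory_unlock(1234) or its session ends.
SELECT pg_advisory_lock(1234);

-- Non-blocking alternative: returns false instead of waiting.
SELECT pg_try_advisory_lock(1234);
```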
List,
OP here. Thank you for replying. Confirms my diagnosis that it might have
to do with analyze/vacuum.
Some debug info.
1. Loaded a CSV to fill the table with data.
2. Performed VACUUM ANALYZE on this table after uploading.
3. I do not see any reason for dead rows because I have not updated
On 04/03/2018 07:35 AM, hmidi slim wrote:
I tried it and I got the same result.
Tried what?
--
Adrian Klaver
adrian.kla...@aklaver.com
I tried it and I got the same result.
Tomas Vondra writes:
> On 04/03/2018 11:14 AM, Ranjith Ramachandra wrote:
>> it returns
>>  reltuples  | n_live_tup | n_dead_tup
>> ------------+------------+------------
>>  2.7209e+06 |    1360448 |    1360448
>>
>> If I run analyze
On 04/03/2018 11:14 AM, Ranjith Ramachandra wrote:
> I am relying on reltuples on my web app to get fast row counts.
>
> This was recommended by this article to get fast approx row
> counts: https://wiki.postgresql.org/wiki/Count_estimate
>
>
> However for some tables I am getting twice as many
However no space seems to be freed to the system.
Is there any way a bloody newbie can debug this behaviour?
In our experience, autovacuum is able to contain bloating of table data,
but not bloating of indexes.
You could see where the bloating is by running the following queries:
CREATE
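The full bloat queries are cut off above. As a simpler first look, the standard statistics views already expose per-table dead-row counts (a generic query, not the one the author intended):

```sql
-- Rough per-table dead-row counts from the statistics collector.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```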
Thiemo Kellner wrote:
> On 03/30/18 11:14, Laurenz Albe wrote:
> > You have to consume the result before you can send the next query.
>
> I changed the implementation but still get the same error, now in a different
> context. I tried to retrieve the result but failed
>
> I committed the last code
I am relying on reltuples on my web app to get fast row counts.
This was recommended by this article to get fast approx row counts:
https://wiki.postgresql.org/wiki/Count_estimate
However, for some tables I am getting twice as many values when I try to do
this. I did some more research and came
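The wiki's count-estimate approach boils down to reading reltuples from pg_class, along these lines (the table name is illustrative):

```sql
-- Fast approximate row count; accuracy depends on when the table
-- was last vacuumed or analyzed.
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE oid = 'public.mytable'::regclass;
```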
On 03/04/2018 10:54, Kein Name wrote:
> Why would you want that? Do you have any control over the application? Any
"special" patterns used in the app?
Drive is running full :/
Sadly I have no control over, and no knowledge about, the application.
I tuned the autovacuum parameters now
> Why would you want that? Do you have any control over the application?
Any "special" patterns used in the app?
Drive is running full :/
Sadly I have no control over, and no knowledge about, the application.
I tuned the autovacuum parameters now for the critical tables, to have it
run
> VACUUM <> VACUUM FULL
> Normally running VACUUM via autovacuum should help reuse free space but
not actually return it to the filesystem / OS (unless it happens to be the
last blocks in the data file(s)).
> People in normal/average types of installations/workloads no longer (since
8.2) run VACUUM
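To illustrate the difference described above (the table name is illustrative):

```sql
-- Plain VACUUM: marks dead space as reusable but does not shrink the file.
VACUUM mytable;

-- VACUUM FULL: rewrites the table and returns space to the OS,
-- but holds an exclusive lock for the duration.
VACUUM FULL mytable;

-- Check the on-disk size before and after:
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
```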
I have the data below, returned from a PostgreSQL table using this SQL:
SELECT ila.treelevel,
ila.app,
ila.lrflag,
ila.ic,
ila.price,
ila.treelevel-1 as parent,
ila.seq
FROM indexlistapp ila
WHERE ila.indexlistid IN
On 03/04/2018 09:36, Kein Name wrote:
However no space seems to be freed to the system.
Is there any way a bloody newbie can debug this behaviour?
VACUUM <> VACUUM FULL
Normally running VACUUM via autovacuum should help reuse free space but not
actually return it to the filesystem / OS
Hello List,
I inherited a machine with a Postgres database, which I am fighting with
right now.
It seems as if the database is growing bigger and bigger over time.
Once in a while I have to run a VACUUM FULL statement on a few tables,
which then releases a lot of space to the OS.
By reading the
Hi everyone,
problem is solved.
There was a delay between setup (drop replication slot) and execution.
Jakub
2018-04-03 4:43 GMT+02:00 Michael Paquier :
> On Sun, Apr 01, 2018 at 06:26:51PM +, Jakub Janeček wrote:
> > What did I do wrong? I need to stop accumulating WAL