> Also, it's not required, but I think a typical partitioning schema would
> have
> an index on the column being partitioned. I see you have an index on
> iot_data(metadata,lower(data)), so I still wonder whether you'd have better
> results partitioned on metadata, or otherwise maybe adding an ind
OK, so I found the problem, but another problem appeared.
I found out that the pg12 machine had some limits on the VM settings for
CPU and memory. Now both machines have exactly the same hardware and
don't have any limits:
CPU - 8
RAM - 32GB.
I tested it with a cold cache:
servic
>
> I tried to do something even simpler: run the query with only the
> partition column in the where clause, and the results weren't good for pg12:
>
PG12 :
postgres=# explain analyze select * from iot_data where device=51;
QUERY PLAN
--
*8 ms seems pretty slow for planning that query. Does the planning time drop
if you execute this multiple times in the same session? Does the time change
if you try again without any foreign keys?*
No one is using the system besides me; therefore, after running the query
one time,
most of the data is
I realized that the planner goes to the right partition, because "(never
executed)" is mentioned near the scans of the other partitions. However,
I'm still not sure why performance is better on pg96.
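For reference, a quick way to confirm pruning on PG12 (a minimal sketch;
enable_partition_pruning is on by default there):

SHOW enable_partition_pruning;            -- should be "on"
EXPLAIN (ANALYZE, COSTS OFF)
SELECT * FROM iot_data WHERE device = 51; -- partitions skipped at run time
                                          -- are reported as "(never executed)"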
On Sun, Mar 8, 2020 at 18:05, Mariel Cherkassky <
mariel.cherkas...@gmail.com>:
>
Hey,
I upgraded from 9.6 to 12 in our test env, and I'm seeing that for queries
that involve a join between a partitioned table and other tables
there is a degradation in performance compared to pg96.
My machine: 8 CPUs, 16GB RAM, regular HD, Linux RedHat 6
pg settings :
max_wal_size = 2GB
m
I see, thank you!
Maybe I didn't see a big difference because most of my tables aren't so big. My
db's size is 17GB and the largest table contains about 20M+ records.
Thanks again!
Hey Jeff, Andrew,
I continued testing version 12 vs version 9.6, and it seems that there
is almost no difference, and in some cases pg96 is faster than 12. I compared the
content of pg_stat_statements after each test that I did, and it seems
that the db time is almost the same and sometimes 96 i
That is the first thing I did, but it isn't logged even when this parameter
is set. I guess that is because it is canceled before it finishes running, which is
weird...
What do you mean by live queries?
If I run the following query and cancel it manually (Ctrl+C):
select pg_sleep(10)
I will see the following messages in the logs:
2019-12-08 17:16:34 IST postgres 30797 LOG: statement: select
pg_sleep(10);
2019-12-08 17:16:36 IST postgres 30797 ERROR: canceling statement due to user request
Hey all,
I'm trying to analyze a weird situation that I have seen in my db.
Sometimes my app fails to start because of the following message:
SQL State : null
Error Code : 0
Message: Cannot create PoolableConnectionFactory (ERROR: canceling
statement due to user request)
In the db at the same ti
Hey Jeff,
First of all thank you again for the quick response. I really appreciate
your comments.
Unfortunately, I installed pg from RPMs so I can't compile my current env, but
I can install it from source code and migrate the data via pg_dump. Can you
explain how I can compile the sources without thi
Hey Jeff,
This example was only used to show that pg96 had better performance than
pg12 in a very simple case.
In all the tests that I ran, most of the queries took less time on the 9.6
version. I don't know why, but as you can see, after disabling the parameter
the simple test that I did showed diffe
Hey Andrew,
It seems that changing this parameter worked for me.
Setting it to zero means that there won't be any parallel workers for a
single query, right?
Is this problem with the gatherers something familiar?
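For context, a minimal session-level check of that setting (the table name is
just an example):

SET max_parallel_workers_per_gather = 0;  -- planner stops adding Gather nodes
EXPLAIN SELECT count(*) FROM iot_data;    -- plan should show no parallel workers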
Hey all,
I'm testing the performance of two identical machines, one on 9.6 and the second
one on 12. The second machine is a clone of the first one plus a db upgrade
to 12 beta 3 (yes, I'm aware 12.1 was released).
machine stats :
32gb ram
8 cpu
regular hd (not ssd)
my postgresql.conf settings:
max_wal
Not sure how data storage is relevant here; I was only focusing on query
optimization. Let's say that most of the data isn't moving (history data).
However, objects can be changed, and therefore new revisions are added and
the previous revisions are updated (their end_date is updated). If you run
queries
But the start time doesn't indicate that the object is the most recent; it
just indicates when the object was added to your table. If your queries
involve the start_time I can understand why you set it as a partition column;
otherwise it isn't useful. In most of my queries I query by one of 2 options
Whenever I have a new revision of that object, I update the end_time of the
latest revision to be now() and I add a new record of that object with
end_date null.
The null value is used to find the most recent revisions of objects.
Thanks for the suggestion of infinity! I'll try it.
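A sketch of that flow using 'infinity' instead of NULL (table and column
names here are hypothetical):

BEGIN;
UPDATE objects
   SET end_date = now()
 WHERE object_id = 42
   AND end_date = 'infinity';
INSERT INTO objects (object_id, start_date, end_date)
VALUES (42, now(), 'infinity');
COMMIT;
-- "most recent revision" is now WHERE end_date = 'infinity', and end_date
-- can be declared NOT NULL, so it can serve as a range-partition key.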
Hey,
In PG12 I'm trying to create partitions by range on a date column that
might be null (null indicates it is the most recent version of the object). My
PK has to include the partition column; therefore, I'm getting an error that
I can't create a primary key with the specific column because it has null
Understood, thanks for the explanation, Tom!
In my query I wrapped the columns in parentheses after distinct: distinct
(extract year..., extract quarter..).
In your query you didn't wrap the columns; you just mentioned distinct. I
guess this is the difference, thanks!
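To illustrate the difference (a sketch against a hypothetical dates_table(d date)):

SELECT DISTINCT extract(year FROM d) AS y, extract(quarter FROM d) AS q
FROM dates_table;   -- two columns; duplicate (y, q) pairs removed

SELECT DISTINCT (extract(year FROM d), extract(quarter FROM d))
FROM dates_table;   -- one composite ROW-typed column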
>
Hey,
I'm working on PG12.
I have the following table :
\d dates_table
          Table "public.dates_table"
 Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+---------
 id
Hey,
This is the full output with all the columns :
WITH sequences AS (
    SELECT oid, relname FROM pg_class WHERE relkind = 'S'
)
SELECT s.oid AS seq_oid, d.*
FROM pg_depend d, sequences s
WHERE s.oid = d.objid
  AND d.deptype = 'a'
  AND d.refobjid::regclass::text = 'table_A';
seq_oid | classid | objid |
>
> There are many rows; anything specific you want to see?
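For a known column, a simpler alternative to walking pg_depend is the
built-in helper (assuming the sequence is owned by table_A.id):

SELECT pg_get_serial_sequence('table_A', 'id');
-- returns e.g. public.table_a_id_seq, or NULL if the column owns no sequence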
Hey,
I'm handling a very weird situation. I tried to check which sequences
belong to a specific table (table_A) with the following query :
WITH sequences AS (
    SELECT oid, relname FROM pg_class WHERE relkind = 'S'
)
SELECT s.oid AS seq_oid, d.objid AS objid, d.refobjid
FROM pg_depend d, sequences s
whe
I understood my problem. thanks.
On Mon, Sep 23, 2019 at 13:59, Mariel Cherkassky <
mariel.cherkas...@gmail.com>:
> Hey,
> I got the following partition structure in pg12 beta 3:
>
> postgres=# \d+ students
> Partitioned table "public.studen
Hey,
I got the following partition structure in pg12 beta 3:
postgres=# \d+ students
                Partitioned table "public.students"
 Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
Hey,
I tried to get a list of all tables that have a reference to my_table. I
used two different queries:
1)select R.*
from INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE u
inner join INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS FK
on U.CONSTRAINT_CATALOG = FK.UNIQUE_CONSTRAINT_CATALOG
and U.CON
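A third option that goes straight to the catalogs and avoids the
information_schema joins:

SELECT conrelid::regclass AS referencing_table,
       conname            AS fk_name
FROM pg_constraint
WHERE contype = 'f'                        -- foreign keys only
  AND confrelid = 'my_table'::regclass;    -- ...that point at my_table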
Well, if you have 10M rows, all your queries use the same column in the
query, and the data can be split pretty evenly between the partitions, is there any
specific reason not to use it? An index will help you reach a complexity of
O(log n), while partition + index can have a complexity of O(log m), where m = rows
in pa
Hey Michael,
first of all thanks for the quick response.
Right now the production env is on a different version (10). I'm doing all
my tests in a test environment. I'm familiar with hash partitions, but
my queries don't involve the product.id; therefore it isn't relevant. All
the queries use the
Hey,
Thanks to the new partitioning features in pg12 (referencing a partitioned table
is possible) I was trying to migrate some of my tables into a partitioned
structure.
Let's assume I have the following non-partitioned structure:
Product(id int PK,vendor int references Vendor(id),price int)
ProductPic(
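A sketch of where that migration is headed on PG12 (partition method, count,
and names below are examples only; the point is that ProductPic may now
reference the partitioned Product):

CREATE TABLE Product (
    id     int,
    vendor int REFERENCES Vendor(id),
    price  int,
    PRIMARY KEY (id)                   -- the PK must include the partition key
) PARTITION BY HASH (id);

CREATE TABLE Product_p0 PARTITION OF Product
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
-- ...and three more partitions for remainders 1-3...

CREATE TABLE ProductPic (
    product int REFERENCES Product(id),  -- allowed since PG12
    pic     bytea
);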
Hey,
I'm trying to understand when using hash partitions can be better than
using list partitions when the partition column is bigint. I understand that
if my partition column has many distinct values, then if I use list
partitions I might have a lot of partitions. On the other hand, with hash
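The trade-off in DDL form (a sketch; the modulus of 8 is arbitrary):

-- list: one partition per enumerated set of values; the count is unbounded
CREATE TABLE t_list (k bigint) PARTITION BY LIST (k);
CREATE TABLE t_list_1 PARTITION OF t_list FOR VALUES IN (1, 2, 3);

-- hash: the partition count is fixed up front by the modulus
CREATE TABLE t_hash (k bigint) PARTITION BY HASH (k);
CREATE TABLE t_hash_0 PARTITION OF t_hash
    FOR VALUES WITH (MODULUS 8, REMAINDER 0);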
Hey,
I upgraded my pg9.6 cluster to pg11.2.
It seems that after the upgrade, the duration of the same flow in my
application rose from 13 minutes to 19 minutes.
The test I did:
1. reset pg_stat_statements
2. run the applicative flow
3. collect everything from pg_stat_statements
I did this test on th
Hey,
I have a very big query that consists of 3-4 subqueries that use window
functions. There is a chance that I'll need to rewrite the query, but first
I'm trying to search for other ways to improve it, and I'll be happy to hear
if one of you has an idea.
Basically my table has the following struc
Hey all,
I have a question regarding streaming replication that I would like to
ask in order to understand the feature better:
I have 2 nodes configured with replication (primary + secondary).
In my primary I configured streaming replication + archiving. My archive
command:
gzip < %p > /var/lib/
Hi,
I wrote a script that monitored the size of a specific table of mine (dead
tuples_mb vs live tuples_mb). The script ran a query on pgstattuple every
15 minutes: select * from pgstattuple('table_name'). I know that every
night there is a huge delete query that deletes most of the table's
conte
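The sampling query, roughly as the script ran it (the function name is
pgstattuple(); the pgstattuple extension must be installed first):

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT round(tuple_len      / 1024.0 / 1024, 1) AS live_tuples_mb,
       round(dead_tuple_len / 1024.0 / 1024, 1) AS dead_tuples_mb
FROM pgstattuple('table_name');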
oot streaming replication delays — see columns
> sent_location, write_location, flush_location, and replay_location in
> pg_stat_replication and restart_lsn in pg_replication_slots. If you have
> delay in replaying, it should be seen there.
>
> On Wed, May 29, 2019 at 11:39 Mariel Cherk
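The suggested check, spelled out (9.6 column names; on PG10+ they become
sent_lsn, write_lsn, flush_lsn, replay_lsn):

SELECT application_name, state,
       sent_location, write_location, flush_location, replay_location
FROM pg_stat_replication;

SELECT slot_name, active, restart_lsn
FROM pg_replication_slots;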
ached). So
what am I missing?
On Wed, May 29, 2019 at 11:20, Fabio Pardi <
f.pa...@portavita.eu>:
>
>
> On 5/29/19 9:20 AM, Mariel Cherkassky wrote:
> > First of all thanks Fabio.
> > I think that I'm missing something :
> > In the next que
y all'. That will
> maybe help others in the community and you might also get more help from
> others.
>
> answers are in line here below
>
>
>
> On 28/05/2019 10:54, Mariel Cherkassky wrote:
> > I have pg 9.6, repmgr version 4.3 .
> > I see in the logs
n cc...
>
> settings look ok.
>
> what's in the recovery.conf file then?
>
> regards,
>
> fabio pardi
>
> On 5/27/19 11:23 AM, Mariel Cherkassky wrote:
> > Hey,
> > the configuration is the same as in the primary :
> > max_wal_size = 2GB
> > min_
Hey,
PG 9.6, I have a standalone configured. I tried to start up a secondary and
run standby clone (repmgr). The clone process took 3 hours, and during that
time WALs were generated (mostly because of the checkpoint_timeout). As a
result, when I start the secondary, I see that the secondary keep
Hey,
I have 2 nodes that are configured with streaming replication (PG 9.6,
repmgr 4.3).
I was trying to upgrade the nodes to PG11 following the doc:
https://www.postgresql.org/docs/11/pgupgrade.html
Everything goes well until I try to start the secondary, and then it fails
with the following error:
2019-05-
By rules I mean DB rules (similar to triggers but different)
On Wed, May 22, 2019 at 18:41, Tomas Vondra <
tomas.von...@2ndquadrant.com>:
> On Wed, May 22, 2019 at 06:26:49PM +0300, Mariel Cherkassky wrote:
> >Hey,
> >I'm trying to restore a cluster
Hey,
I'm trying to restore a cluster (9.2) from 3 binary dumps (pg_dump -Fc).
Each dump contains only one database.
The sizes :
A-10GB
B-20GB
C-5GB.
For an unclear reason, the restore of the third database is taking a lot of
time. It isn't stuck, but it continues creating db rules. This database has
more
primary./ secondary). Does it
mean that my backup is corrupted?
On Tue, May 21, 2019 at 16:07, Tomas Vondra <
tomas.von...@2ndquadrant.com>:
> On Tue, May 21, 2019 at 12:01:31PM +0300, Mariel Cherkassky wrote:
> >Tomas :
> >
> >Well, when you say it does
ffset dir.
On Tue, May 21, 2019 at 0:04, Tomas Vondra <
tomas.von...@2ndquadrant.com>:
> On Mon, May 20, 2019 at 08:20:33PM +0300, Mariel Cherkassky wrote:
> > Hey Greg,
> > Basically my backup was made after the first pg_resetxlog so I was
> wrong.
>
> B
er a documentation on how to
> handle WAL log fill up and resetting them.
>
> On Monday, May 20, 2019, 9:08:19 AM PDT, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>
> A backup was made after the corruption appeared but before I tried using
> the pg_resetx
A backup was made after the corruption appeared but before I tried using
the pg_resetxlog command. Basically, I just want to start the database with
the data that is available in the files (I'm OK with losing data that was
in the cache and wasn't written to disk).
My question is how I can continue fr
rive before attempting
> to do anything else !
>
> https://wiki.postgresql.org/wiki/Corruption
>
> regards,
> Flo
>
> On Mon, May 20, 2019 at 4:40 PM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Hey,
>> I'm trying to handle a corrup
Hey,
I'm trying to handle a corruption that one of our customers is facing.
His disk space was full, and as a result he decided to run
pg_resetxlog a few times (bad idea...).
When I connected to the machine I saw that the db was down.
When I started the db (service postgresql start) I saw the
Hey,
I wanted to ask a few questions regarding the parallel parameters:
max_worker_processes and max_parallel_workers_per_gather.
1) Basically, should max_worker_processes be set to the number of CPUs I
have in the machine?
2) If I set max_worker_processes to X and max_parallel_workers_per_gather to
Y
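For context, how the two settings relate (a sketch; the values are
placeholders, not recommendations):

-- max_worker_processes: cluster-wide pool for ALL background workers
-- (restart required); max_parallel_workers_per_gather: how many of those
-- a single Gather node may use (can be changed per session).
ALTER SYSTEM SET max_worker_processes = 8;             -- e.g. one per CPU
ALTER SYSTEM SET max_parallel_workers_per_gather = 2;  -- per-query cap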
Got it, thanks Laurenz !
On Wed, Mar 27, 2019 at 15:20, Laurenz Albe <
laurenz.a...@cybertec.at>:
> Mariel Cherkassky wrote:
> > Hi all,
> > I'm trying to analyze a deadlock that I have in one of our environments.
> > The deadlock message :
>
e and does it recover/handle all
> situations well, or do additional algorithms need to be implemented
> on top, e.g. for automatic recovery (by "myself").
>
> I could start another email chain, if this chain is meant more for
> something else.
>
> Best
Hey,
I was searching for a solution to scale my postgresql instance in the
cloud. I'm aware of that that I can create many read only replicas in the
cloud and it would improve my reading performance. I wanted to hear what
solution are you familiar with ? Are there any sharding solution that are
com
Hi all,
I'm trying to analyze a deadlock that I have in one of our environments.
The deadlock message :
06:15:49 EET db 14563 DETAIL: Process 14563 waits for ShareLock on
transaction 1017405468; blocked by process 36589.
Process 36589 waits for ShareLock on transaction 1017403840; blocked by
pro
t;select * from table for update" database
>
> Using the last releases of the major versions solve the bug for me.
>
> Best regards
>
> Em qua, 13 de mar de 2019 às 09:29, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> escreveu:
>
>> Hey,
>> The logs are
tored it :
drop table table_name;
psql -d db -U username -f table.sql
On Wed, Mar 13, 2019 at 14:24, Adrien NAYRAT <
adrien.nay...@anayrat.info>:
> On 3/12/19 8:58 AM, Mariel Cherkassky wrote:
> > Apparently the issue appeared again in the same database but on
>
pen ? The db was installed in that version from the first place and *no
>> upgrade was done*
>>
>> On Wed, Jan 30, 2019 at 11:14, Alvaro Herrera <
>> alvhe...@2ndquadrant.com>:
>>
>>> On 2019-Jan-30, Mariel Cherkassky wrote:
>>>
updated every hour. Only during those problematic
7 hours it wasn't updated.
On Wed, Mar 6, 2019 at 19:05, Justin Pryzby <
pry...@telsasoft.com>:
> On Wed, Mar 06, 2019 at 06:47:21PM +0200, Mariel Cherkassky wrote:
> > Those settings helped but the table still grew very
Hi,
I have the following relation in my db: A(id int, info bytea, date timestamp).
Every cell in the info column is very big, and because of that there is a
TOAST table with the data of the info column (pg_toast.pg_toast_123456).
The relation contains the login info for every user that logs into the
s
Hi,
I was testing pgstattuple, and I realized that pgstattuple works on a
TOAST table, but pgstattuple_approx raises the following error message:
ERROR: "pg_toast_18292" is not a table or materialized view
Hmm, is that because pgstattuple_approx uses the visibility map? Can
someone explain? Thanks.
able like R3 would make sense to me.
>
> *Michael Lewis*
>
> On Tue, Feb 19, 2019 at 9:34 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> I don't have any indexes on R (the table with the jsonb column). I was
>> asking if I can create any t
is int4range type, then I would expect that you could add a
> GiST and then use overlaps &&, or another operator. I would not expect that
> you could index (unnest data->>'ranges' for instance) to get the separated
> out range values.
>
>
>
> *Michae
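What that suggestion could look like (a sketch with a hypothetical separate
range table r3):

CREATE TABLE r3 (object int, r int4range);
CREATE INDEX ON r3 USING gist (r);

SELECT * FROM r3
WHERE r && int4range(5, 10);   -- "overlaps"; can be served by the GiST index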
Hi,
I have a table with a jsonb column: R(object int, data jsonb).
Example content:
 object | data
--------+------------------------------------
     50 | {"ranges": [[1, 1]]}
     51 | {"ranges": [[5, 700],[1,5],[9,10]]}
     52 | {"ranges":
But then I don't have accurate statistics on my toasted table...
On Fri, Feb 15, 2019, 3:39 PM Alvaro Herrera wrote:
> On 2019-Feb-14, Mariel Cherkassky wrote:
>
> > I meant the anaylze, if anaylze will run very often on the original
> table,
> > arent there disadvantages f
ently.
>
>
>
> *Michael Lewis *
>
>
> On Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> No I don't run vacuum manually afterwards because the autovacuum should
>> run. This process happens every night
*Michael Lewis*
>
> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Maybe by explaining the table's purpose it will be clearer. The original
>> table contains rows for sessions in my app. Every session saves for itself
&
solve the issue?
On Thu, Feb 14, 2019, 8:38 PM Michael Lewis wrote:
It is curious to me that the tuples remaining count varies so wildly. Is
> this expected?
>
>
> *Michael Lewis*
>
> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
ds ~ 17H
So autovacuum was lying down for 17h? I think that I should increase the
cost_limit to the max specifically on the toasted table. What do you think? Am
I wrong here?
On Thu, Feb 7, 2019 at 18:26, Jeff Janes <
jeff.ja...@gmail.com>:
> On Thu, Feb 7, 2019 at
I meant the analyze; if analyze runs very often on the original table,
aren't there disadvantages to it?
On Wed, Feb 13, 2019 at 18:54, Alvaro Herrera <
alvhe...@2ndquadrant.com>:
> On 2019-Feb-13, Mariel Cherkassky wrote:
>
> > To be honest, it isnt my d
e bit problematic ?
On Wed, Feb 13, 2019 at 18:13, Alvaro Herrera <
alvhe...@2ndquadrant.com>:
> On 2019-Feb-13, Mariel Cherkassky wrote:
>
> > Hey,
> > I have a very big toasted table in my db(9.2.5).
>
> Six years of bugfixes missing there ...
Hey,
I have a very big toasted table in my db (9.2.5). Autovacuum doesn't gather
statistics on it because the analyze_scale/threshold are at the defaults, and as a
result autoanalyze never runs and the statistics are wrong:
select * from pg_stat_all_Tables where relname='pg_toast_13488395';
-[ RECORD 1 ]-
> > I'm trying to understand the logic behind all of these so I would be
> happy
> > if you can confirm what I understood or correct me if I'm wrong :
> > -The commit command writes all the data in the wal_buffers
> into the wal files.
>
> All the transaction log for the transaction has
Hey,
I'm trying to understand the logic behind all of these so I would be happy
if you can confirm what I understood or correct me if I'm wrong :
-The commit command writes all the data in the wal_buffers into
the wal files.
-Checkpoints write the data itself (blocks that were changed)
mment there is a bytea column, and therefore the toasted table is the
problematic one here.
On Thu, Feb 7, 2019 at 0:34, David Rowley <
david.row...@2ndquadrant.com>:
> On Thu, 7 Feb 2019 at 02:34, Mariel Cherkassky
> wrote:
> > As I said, I set the next setting
2, Jeff Janes <
jeff.ja...@gmail.com>:
> On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>
>> Now the question is how to handle or tune it? Is there any chance that I
>> need to increase the cost_limit / cost_dela
Which one do you mean? I changed the threshold and the scale for the specific
table...
On Wed, Feb 6, 2019 at 15:36, dangal <
danielito.ga...@gmail.com>:
> Would it be nice to start changing those values found in the default
> postgres.conf so low?
>
>
>
cleaning it, or wait until the
vacuum_threshold is hit again?
On Wed, Feb 6, 2019 at 15:05, David Rowley <
david.row...@2ndquadrant.com>:
> On Thu, 7 Feb 2019 at 00:17, Laurenz Albe
> wrote:
> >
> > On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:
&
Hi,
I have a table with a bytea column, and its size is huge; that's why
postgres created a TOAST table for that column. The original table
contains about 1K-10K rows, but the toasted one can contain up to 20M rows. I
assigned the following two settings for the toasted table:
alter table orig_table set
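The settings in question, spelled out (the threshold values here are
examples, not the originals):

ALTER TABLE orig_table SET (
    toast.autovacuum_vacuum_scale_factor = 0,
    toast.autovacuum_vacuum_threshold    = 10000
);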
2019 at 11:14, Alvaro Herrera <
> alvhe...@2ndquadrant.com>:
>
>> On 2019-Jan-30, Mariel Cherkassky wrote:
>>
>> > It seems that the version of the db is 9.6.10 :
>> >
>>
> >
> > [1]
> >
> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
> >
> > On Wed, Jan 30, 2019 at 11:42, Mariel Cherkassky <
> > mariel.cherkas...@gmail.com> wrote:
> >
> > > Hey,
> > > I'm us
Hey,
I'm using postgresql 9.6.11. I wanted to ask something about the functions
I mentioned in the title:
I created the following table:
postgres=# \d students
     Table "public.students"
 Column | Type    | Modifiers
--------+---------+-----------
 id     | integer |
 name   | text    |
 age    | integ
d *no
upgrade was done*
On Wed, Jan 30, 2019 at 11:14, Alvaro Herrera <
alvhe...@2ndquadrant.com>:
> On 2019-Jan-30, Mariel Cherkassky wrote:
>
> > It seems that the version of the db is 9.6.10 :
> >
> > psql -U db -d db -c "select version()
GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
(1 row)
and the error still exists...
On Sat, Jan 26, 2019 at 12:59, Adrien NAYRAT <
adrien.nay...@anayrat.info>:
> On 1/26/19 11:56 AM, Mariel Cherkassky wrote:
> > Update to the minor version should be an e
Hey,
I noticed that pg_locks has an additional row for every transaction that is
created, with locktype "virtualxid". I tried to search for it online, but I didn't
find an explanation for this behavior. Can anyone explain why it
happens?
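Each transaction holds an exclusive lock on its own virtual transaction ID
for its whole lifetime, so that other sessions have something to wait on
until it finishes. It is easy to see from any open transaction:

SELECT locktype, virtualxid, virtualtransaction, mode, granted
FROM pg_locks
WHERE locktype = 'virtualxid';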
Hi,
I'm planning our db upgrade from 9.6. Basically, I wanted to check how
stable the pg11 version is. I'm considering upgrading from 9.6 to 10 and then
to 11 immediately. Is there a way to upgrade directly to 11 and skip 10?
Thanks.
Updating to the minor version should be an easy solution: yum update
postgresql. What did you mean by careful?
On Sat, Jan 26, 2019, 12:48 PM Adrien NAYRAT wrote:
On 1/25/19 6:20 PM, Mariel Cherkassky wrote:
> > I'm getting this issue when I try to connect to a specific db. Does it
> > m
I'm getting this issue when I try to connect to a specific db. Does it
matter what table I specify? Should I just choose a random table from the
problematic db? If I dump the db and restore it, can that help?
On Fri, Jan 25, 2019, 10:19 AM Adrien NAYRAT wrote: On 1/24/19 3:14 PM, Mariel
t? Or is there
something else that needs to be done?
On Wed, Jan 23, 2019 at 21:51, Jerry Sievers <
gsiever...@comcast.net>:
> Mariel Cherkassky writes:
>
> > Hey,
> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar
> > w
Yeah, 9.6!
On Wed, Jan 23, 2019, 9:51 PM Jerry Sievers wrote: Mariel Cherkassky writes:
>
> > Hey,
> > I'm trying to help a guy that is using pg9.6 but I'm not so familiar
> > with the error message :
> > ERROR: found xmin 16804535 from before relfrozenxid 90126
Hey,
I'm trying to help a guy that is using pg9.6, but I'm not so familiar with
the error message:
ERROR: found xmin 16804535 from before relfrozenxid 90126924
CONTEXT: automatic vacuum of table "db1.public.table_1"
It seems that the error started appearing two weeks ago. Data that I
collec
Got it, I didn't see the toast word in the command. Thanks !
On Thu, Jan 17, 2019, 10:17 PM Alvaro Herrera wrote: On 2019-Jan-17, Mariel Cherkassky wrote:
>
> > But you said that the threshold that is chosen for the toasted table is
> > identical to the original table's threshold
Right now the threshold for the original table is set to 0.05, and it is
too often for the original, but for the toasted table it isn't enough
because it has more than 10M records...
On Jan 17, 2019 9:09 PM, "Alvaro Herrera" wrote:
On 2019-Jan-17, Mariel Cherkassky wrote:
> I did i
table than the toasted table? Should they be vacuumed together?
On Jan 17, 2019 7:52 PM, "Alvaro Herrera" wrote:
On 2019-Jan-17, Mariel Cherkassky wrote:
> I tried to set the same threshold for the toasted table but got an error
> that it is a catalog table and therefore permission
Hey,
I have a table with 3 columns, one of which is of bytea type:
A(int, int, bytea).
Every row that I insert is pretty big, and that's why postgresql decided to
save that column in a TOAST table (pg_toasted_Aid). I had a lot of bloat
issues with that table, so I set the vacuum_threshold of the
commit. How will the
database handle it?
On Thu, Jan 10, 2019 at 10:55, Guillaume Lelarge <
guilla...@lelarge.info>:
> On Thu, Jan 10, 2019 at 09:07, Mariel Cherkassky <
> mariel.cherkas...@gmail.com> wrote:
>
>> Hey,
>> It is clear that when
Thanks Ken. I just wanted to make sure that it happened because of the 9.6
packages installation and not for any other reason.
On Thu, Jan 10, 2019 at 11:42, Ken Tanzer <
ken.tan...@gmail.com>:
> On Wed, Jan 9, 2019 at 7:09 AM Mariel Cherkassky <
>
. My question is, is it possible that it also deleted the 9.2
libpq file?
On Wed, Jan 9, 2019 at 18:11, Tom Lane <t...@sss.pgh.pa.us
>:
> Mariel Cherkassky writes:
> > But in both of the machines I have the same os and I used the same
> > repository - post
Hey,
It is clear that when we query some data, if that data isn't in the shared
buffers, pg will bring the relevant blocks from the disk into the shared
buffers. I wanted to ask if the same logic works with
DML (insert/update/delete). I'm familiar with the writing logic, that the
checkpointer is the
the socket dir. Do those packages include
a different libpq? Which postgres package changes the libpq?
On Wed, Jan 9, 2019 at 17:13, Jeff Janes <
jeff.ja...@gmail.com>:
> On Wed, Jan 9, 2019 at 10:09 AM Mariel Cherkassky <
> mariel.cherkas...@gmail.com> w