On 8/10/2017 1:29 AM, l...@laurent-hasson.com wrote:
Finally, my true question was whether Postgres would support something
like WORM (write once, read many) with the performance benefits of UNLOGGED,
but not the inconvenience of automatic truncation.
If you can live with the limitations, one other thing you might
On 8/10/2017 1:29 AM, l...@laurent-hasson.com wrote:
Hello George... I know about not doing top posting but was emailing
from my phone, and just recently moved to Android. I think I am still
not configured right.
Somewhat orthogonal, but any particular reason why top posts == bad,
or just
Sent from my BlackBerry - the most secure mobile device
From: gneun...@comcast.net
Sent: August 9, 2017 14:52
To: l...@laurent-hasson.com
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Unlogged tables
Please don't top post.
On 8/9/2017 2:30 PM, l...@laurent-hasson.com wrote:
On Wed, 9 Aug 2017 09:14:48 -0700, Jeff Janes wrote:
>Why doesn't the Windows scheduled shutdown signal postgres to shutdown
>cleanly and wait for it to do so? That is what is supposed to happen.
Windows *does* signal shutdown (and sleep and hibernate and wakeup).
On Tue, Aug 8, 2017 at 8:20 PM, l...@laurent-hasson.com <
l...@laurent-hasson.com> wrote:
> Hello,
>
>
> We have a fairly large static dataset that we load into Postgres. We made
> the tables UNLOGGED and saw a pretty significant performance improvement
> for the loading. This was all fantastic
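For readers skimming the thread, the load pattern Laurent describes looks roughly like this (table and file names are invented for illustration):

```sql
-- Bulk load without WAL overhead. An UNLOGGED table skips WAL writes,
-- which is where the load-time speedup comes from.
CREATE UNLOGGED TABLE facts_staging (
    id   bigint,
    data text
);

COPY facts_staging FROM '/tmp/facts.csv' WITH (FORMAT csv);

-- Caveat under discussion: after any crash recovery, this table is
-- truncated to empty.
```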
David, all,
* David G. Johnston (david.g.johns...@gmail.com) wrote:
> On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier
> wrote:
>
> > This triggers a table rewrite and makes sure that all the data gets
> > WAL-logged. The cost to pay for durability.
That's not
On Wed, Aug 9, 2017 at 3:39 AM, Michael Paquier
wrote:
> This triggers a table rewrite and makes sure that all the data gets
> WAL-logged. The cost to pay for durability.
>
> > Is there a way to get my cake and eat it too?
>
> Not completely. Making data durable will
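The rewrite Michael refers to is the SET LOGGED conversion (PostgreSQL 9.5 and later); a sketch with an invented table name:

```sql
-- After the bulk load, make the table durable. This rewrites the table
-- and WAL-logs all of its data, so it is not free, but from then on the
-- table survives crashes like any ordinary table.
ALTER TABLE facts_staging SET LOGGED;
```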
On Wed, Aug 9, 2017 at 5:20 AM, l...@laurent-hasson.com
wrote:
> We have a fairly large static dataset that we load into Postgres. We made
> the tables UNLOGGED and saw a pretty significant performance improvement for
> the loading. This was all fantastic until the server
On Tue, Apr 14, 2015 at 8:41 AM, Yves Dorfsman y...@zioup.com wrote:
On 2015-04-13 17:49, Jeff Janes wrote:
One way would be to lock dirty buffers from unlogged relations into
shared_buffers (which hardly seems like a good thing) until the start of
a
super-checkpoint and then write them
On Mon, Apr 13, 2015 at 8:28 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby jim.na...@bluetreble.com
wrote:
There's been recent discussion of adding support for read-only tables. If
we had those, we might be able to support something
David G Johnston wrote
Well, that is half right anyway. UNLOGGED tables obey checkpoints just
like any other table. The missing feature is an option to leave the table
restored to the last checkpoint. Instead, not knowing whether there were
changes since the last checkpoint, the system truncated the
On 2015-04-13 17:49, Jeff Janes wrote:
One way would be to lock dirty buffers from unlogged relations into
shared_buffers (which hardly seems like a good thing) until the start of a
super-checkpoint and then write them all out as fast as possible (which kind
of defeats
On 4/14/15 10:56 AM, dgabriel wrote:
David G Johnston wrote
Well, that is half right anyway. UNLOGGED tables obey checkpoints just
like any other table. The missing feature is an option to leave the table
restored to the last checkpoint. Instead, not knowing whether there were
changes since the last
On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com wrote:
In the event of a normal shutdown, we can flush all the writes to disk
so we know all the data has been written, so there is no need to truncate.
Isn't it possible to periodically flush data to disk and, in case of crash,
have Postgres load only the data that existed at the last flush? The periodic
flush
On Monday, April 13, 2015, Matheus de Oliveira matioli.math...@gmail.com
wrote:
On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com wrote:
In the event of a normal shutdown, we can flush all the writes to disk
so we know
That will be a very useful feature. I don't
care if I lose 1-2 hours of data. I know we could have some cron jobs to
dump the table periodically, but the table could be big, so this operation
could be expensive. Also I have to detect when Postgres crashes; I have no
idea how I can detect if
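One hedged answer to the "how do I detect that Postgres crashed?" question above: after a clean shutdown, pg_controldata reports the cluster state as "shut down". A sketch (the PGDATA path is an assumption; only the "Database cluster state" line format is relied on, and a hard-coded sample is used when pg_controldata is not available):

```shell
#!/bin/sh
# Detect an unclean previous shutdown by asking pg_controldata for the
# cluster state. "shut down" means the last stop was clean; anything
# else (e.g. "in production" on a dead cluster) suggests a crash.
PGDATA="${PGDATA:-/var/lib/postgresql/data}"   # assumed default path

if command -v pg_controldata >/dev/null 2>&1; then
    state=$(pg_controldata "$PGDATA" 2>/dev/null \
            | sed -n 's/^Database cluster state: *//p')
else
    # Fallback sample so the sketch is runnable without a cluster:
    state=$(printf 'Database cluster state:               in production\n' \
            | sed -n 's/^Database cluster state: *//p')
fi

if [ "$state" = "shut down" ]; then
    echo "last shutdown was clean"
else
    echo "possible crash (state: $state)"
fi
```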
On Mon, Apr 13, 2015 at 5:30 PM, Yves Dorfsman y...@zioup.com wrote:
In my experience postgres is very aggressive in getting rid of unlogged
tables, it does get rid of them from shutdowns that seem perfectly fine (no
crash). A lot of people get surprised by this.
Shutdowns in fast or smart
On 2015-04-13 14:16, dgabriel wrote:
That will be a very useful feature.
I agree, unlogged tables would be a lot more useful if they didn't disappear
on re-start.
could be expensive. Also I have to detect when Postgres crashes; I have no
idea how I can detect if Postgres crashed. Then I have
On 4/13/15 3:49 PM, David G. Johnston wrote:
On Monday, April 13, 2015, Matheus de Oliveira
matioli.math...@gmail.com wrote:
On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com
wrote:
Jim Nasby wrote:
Yeah, this is not something that would be very easy to accomplish, because a
buffer can get evicted and written to disk at any point. It wouldn't be too
hard to read every unlogged table during recovery and see if there are any
pages that were written after the last
On Mon, Apr 13, 2015 at 4:49 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
On Monday, April 13, 2015, Matheus de Oliveira matioli.math...@gmail.com
wrote:
On Mon, Apr 13, 2015 at 4:31 PM, dgabriel
On 4/13/15 4:13 PM, Alvaro Herrera wrote:
Jim Nasby wrote:
Yeah, this is not something that would be very easy to accomplish, because a
buffer can get evicted and written to disk at any point. It wouldn't be too
hard to read every unlogged table during recovery and see if there are any
pages
On Mon, Apr 13, 2015 at 1:49 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
On Monday, April 13, 2015, Matheus de Oliveira matioli.math...@gmail.com
wrote:
On Mon, Apr 13, 2015 at 4:31 PM, dgabriel gabriel.do...@gmail.com
wrote:
In the event of a normal shutdown, we can flush
On 4/13/15 7:32 PM, David G. Johnston wrote:
The missing feature is an option to leave the table restored to the
last checkpoint. Instead, not knowing whether there were changes
since the last checkpoint, the system truncated the relation.
What use case is there for a
On Mon, Apr 13, 2015 at 7:45 PM, Jim Nasby jim.na...@bluetreble.com wrote:
On 4/13/15 7:32 PM, David G. Johnston wrote:
That particular use-case would probably best be served with a separate
replication channel which pushes data files from the primary to the
slaves and allows for the slave
My table is a statistics counters table, so I can live with a partial data
loss, but not with a full data loss, because many counters are weekly and
monthly.
Unlogged tables can increase speed; this table has about 1.6 million
updates per hour, but unlogged with a chance of losing all
information on a crash is not a good idea for this.
pg_dump -t 'tablename' from a cron job? (Make sure to rotate dump
file names, maybe with day of week or
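The suggestion above can be sketched as a nightly cron script; the table name, database, and dump directory are made up, and day-of-week rotation keeps seven dumps before overwriting:

```shell
#!/bin/sh
# Nightly dump of one unlogged table, rotated by day of week
# (stats_counters, mydb and the dump directory are illustrative).
TABLE="stats_counters"
DB="mydb"
DUMPDIR="${DUMPDIR:-$HOME/pg_dumps}"
DOW=$(date +%a)                          # Mon, Tue, ... overwritten weekly
DUMPFILE="$DUMPDIR/${TABLE}_${DOW}.dump"

mkdir -p "$DUMPDIR"
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump -t "$TABLE" -Fc -f "$DUMPFILE" "$DB" \
        || echo "pg_dump failed (database not reachable?)"
else
    echo "would run: pg_dump -t $TABLE -Fc -f $DUMPFILE $DB"
fi
```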
Hi,
On 12 November 2011 00:18, Stephen Frost sfr...@snowman.net wrote:
In a crash, unlogged tables are automatically truncated.
BTW I wonder what for they are truncated in a crash case?
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
On 14/11/11 08:10, Sergey Konoplev wrote:
Hi,
On 12 November 2011 00:18, Stephen Frost sfr...@snowman.net wrote:
In a crash, unlogged tables are automatically truncated.
BTW I wonder what for they are truncated in a crash case?
Because they bypass the transaction-log (WAL), hence unlogged.
On 14 November 2011 12:58, Richard Huxton d...@archonet.com wrote:
Because they bypass the transaction-log (WAL), hence unlogged.
There's no way to know whether there were partial updates applied when the
system restarts.
I probably did not understand the meaning of truncate correctly. It
truncates
On 14/11/11 10:08, Sergey Konoplev wrote:
On 14 November 2011 12:58, Richard Huxton d...@archonet.com wrote:
Because they bypass the transaction-log (WAL), hence unlogged.
There's no way to know whether there were partial updates applied when the
system restarts.
I probably did not understand
On 14 November 2011 14:17, Richard Huxton d...@archonet.com wrote:
On 14/11/11 10:08, Sergey Konoplev wrote:
On 14 November 2011 12:58, Richard Huxton d...@archonet.com wrote:
Let's say you were doing something like UPDATE unlogged_table SET x=1 WHERE
y=2. If a crash occurs during this
November 2011 07:39 a.m.
To: Richard Huxton
CC: Stephen Frost; Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] unlogged tables
On 14 November 2011 14:17, Richard Huxton d...@archonet.com wrote:
On 14/11/11 10:08, Sergey Konoplev wrote:
On 14 November 2011 12:58
Anibal David Acosta a...@devshock.com wrote:
Maybe an option like "recover from file" would be useful.
So, for example, some daily process does a COPY of the entire table to a
file.
In case of a crash, Postgres recovers the content from the file.
If you need to recover file contents on a crash, then an
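The "recover from file" idea can be approximated today without any new feature, using plain COPY from a scheduled job (paths and names are invented):

```sql
-- Periodic snapshot of the unlogged table to a server-side file:
COPY stats_counters TO '/var/backups/stats_counters.copy';

-- After a crash has truncated the table, reload the last snapshot
-- (accepting the loss of anything written since):
COPY stats_counters FROM '/var/backups/stats_counters.copy';
```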
[mailto:kevin.gritt...@wicourts.gov]
Sent: Monday, November 14, 2011 02:27 p.m.
To: 'Richard Huxton'; Anibal David Acosta; 'Sergey Konoplev'
CC: pgsql-performance@postgresql.org; 'Stephen Frost'
Subject: Re: [PERFORM] unlogged tables
Anibal David Acosta a...@devshock.com wrote:
Maybe
Anibal David Acosta a...@devshock.com wrote:
I am doing asynchronous commit, but sometimes I think that there
are so many things in an insert/update transaction, for a table
that does not hold much important information.
My table is a statistics counters table, so I can live with a
partial
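A middle ground between the options discussed here: keep the table logged but commit its low-value updates asynchronously. A crash can then lose only the last few commits instead of truncating the whole table. A sketch with invented names:

```sql
-- synchronous_commit can be set per session or per transaction,
-- so only the counter traffic runs asynchronously:
SET synchronous_commit = off;
UPDATE stats_counters SET hits = hits + 1 WHERE counter_id = 42;
```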
* Anibal David Acosta (a...@devshock.com) wrote:
Unlogged tables are not memory tables, are they?
Unlogged tables are not memory tables.
If we stop the Postgres server (a normal stop) and start it again, does all
information in unlogged tables still remain?
Yes.
So, can I expect a data loss just in case of