On 11/06/2017 12:30 PM, Stephen Frost wrote:
* Lucas (luca...@gmail.com) wrote:
pg_dump was taking more than 24 hours to complete on one of my databases, so I
began to research alternatives. Parallel backup reduced the backup time to a
little less than an hour, but it failed almost every time because
On 05/11/2017 21:09, Andres Freund wrote:
On 2017-11-05 17:38:39 -0500, Robert Haas wrote:
On Sun, Nov 5, 2017 at 5:17 AM, Lucas <luca...@gmail.com> wrote:
The patch creates a "--lock-early" option which will make pg_dump issue
shared locks on all tables at the very beginning of the backup.
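For context, a parallel dump worker normally locks each table only when it starts dumping it, using NOWAIT; if another session has queued an ACCESS EXCLUSIVE lock request in the meantime, the NOWAIT lock fails and the whole dump aborts. Roughly like this simplified libpq-style sketch (an illustration modeled on pg_dump's behavior, not its actual source):

    /* Simplified sketch of how a parallel dump worker locks a table just
       before dumping it (modeled on pg_dump's behavior, not its source).
       "conn" is the worker's connection; "qualified_name" is assumed to be
       an already-quoted table name. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    lock_table_nowait(PGconn *conn, const char *qualified_name)
    {
        char        query[1024];
        PGresult   *res;

        snprintf(query, sizeof(query),
                 "LOCK TABLE %s IN ACCESS SHARE MODE NOWAIT", qualified_name);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            /* A queued ACCESS EXCLUSIVE request from another session makes
               this fail immediately, which aborts the whole parallel dump. */
            fprintf(stderr, "could not obtain lock on %s: %s",
                    qualified_name, PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);
        return 0;
    }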
Tom,
Yes, that is what I mean. It is what pg_dump uses to get things synchronized. It
seems to me a clear marker that the same task is using more than one
connection to accomplish a single job.
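Assuming this refers to pg_dump's synchronized snapshots, here is a minimal libpq-style sketch of that mechanism for reference (an illustration only, not pg_dump source): the leader exports its snapshot and each worker imports it, so every connection sees the same database state.

    /* Illustrative sketch of snapshot synchronization across connections
       (assumed to be the mechanism referred to above; not pg_dump source). */
    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    sync_worker_snapshot(PGconn *leader, PGconn *worker)
    {
        PGresult   *res;
        char        snapshot_id[64];
        char        query[128];

        /* Leader: inside its already-open REPEATABLE READ transaction. */
        res = PQexec(leader, "SELECT pg_export_snapshot()");
        snprintf(snapshot_id, sizeof(snapshot_id), "%s", PQgetvalue(res, 0, 0));
        PQclear(res);

        /* Worker: open its own transaction and adopt the leader's snapshot,
           so both connections work from the same view of the database. */
        PQclear(PQexec(worker, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
        snprintf(query, sizeof(query),
                 "SET TRANSACTION SNAPSHOT '%s'", snapshot_id);
        PQclear(PQexec(worker, query));
    }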
On 08/09/2016 6:34 PM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:
> Lucas <luca...@gmail.com> writes:
a shared lock that is already granted to another connection in the same
distributed transaction should be granted right away... make sense?
On 08/09/2016 4:15 PM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:
> Lucas <luca...@gmail.com> writes:
> > I made a small modification in pg_dump to prevent parallel backup failures
People,
I made a small modification in pg_dump to prevent parallel backup failures
due to exclusive lock requests made by other tasks.
The modification I made takes shared locks for each parallel backup worker
at the very beginning of the job. That way, any other job that attempts to
acquire
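A rough sketch of that idea, assuming the worker is handed the full table list right after it connects and opens its transaction (a hypothetical helper, not the actual patch):

    /* Hypothetical sketch of the "lock early" idea: right after a worker
       connects and starts its transaction, take ACCESS SHARE locks on every
       table it may dump, instead of locking each table only when its turn
       comes.  Once these locks are held, later exclusive-lock requests from
       other sessions simply queue behind them.  Not the actual patch. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    lock_all_tables_early(PGconn *conn, const char *const *tables, int ntables)
    {
        for (int i = 0; i < ntables; i++)
        {
            char        query[1024];
            PGresult   *res;

            snprintf(query, sizeof(query),
                     "LOCK TABLE %s IN ACCESS SHARE MODE", tables[i]);
            res = PQexec(conn, query);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "could not lock %s: %s",
                        tables[i], PQerrorMessage(conn));
                PQclear(res);
                return -1;
            }
            PQclear(res);
        }
        return 0;
    }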
, I think that the execution
time of the benchmark is irrelevant, assuming that the transactions follow
a normal distribution regarding accesses to warehouses.
On Wed, Oct 15, 2014 at 7:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Oct 15, 2014 at 6:22 AM, Lucas Lersch lucasler...@gmail.com
wrote:
On 14 October 2014 17:08, Lucas Lersch lucasler...@gmail.com wrote:
Unfortunately, in the generated trace with over 2 million buffer requests,
only ~14k different pages are being accessed, out of the 800k of the whole
database. Am I missing something here?
We can't tell what you're
I am recording the BufferDesc.tag.blockNum for the buffer, along with the
spcNode, dbNode, and relNode that are also present in the tag.
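For reference, a minimal sketch of that kind of instrumentation (assumed to live in bufmgr.c and be called when the buffer is released; not the actual code used for the trace):

    /* Minimal sketch of the buffer tracing described above (assumed
       placement: src/backend/storage/buffer/bufmgr.c; not the actual code). */
    #include "postgres.h"
    #include "storage/buf_internals.h"

    static void
    trace_buffer_tag(BufferDesc *buf)
    {
        BufferTag  *tag = &buf->tag;

        /* rnode identifies the relation (tablespace, database, relfilenode);
           blockNum is the page within that relation's fork. */
        elog(LOG, "buf trace: spc=%u db=%u rel=%u fork=%d block=%u",
             tag->rnode.spcNode, tag->rnode.dbNode, tag->rnode.relNode,
             (int) tag->forkNum, tag->blockNum);
    }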
On Wed, Oct 15, 2014 at 2:27 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 15 October 2014 12:49, Lucas Lersch lucasler...@gmail.com wrote:
Sorry for taking so long
2014 13:44, Lucas Lersch lucasler...@gmail.com wrote:
I am recording the BufferDesc.tag.blockNum for the buffer, along with the
spcNode, dbNode, and relNode that are also present in the tag.
The TPC-C I/O is random, so if you run it for longer you should see a wider set.
Caching isn't possible
...@snowman.net wrote:
* Lucas Lersch (lucasler...@gmail.com) wrote:
So is it possibly normal behavior that running TPC-C for 10 min only accesses
50% of the database? Furthermore, is there a guideline for TPC-C parameters
(# of warehouses, execution time, operation weights)?
Depends - you may
something here?
Best regards.
--
Lucas Lersch
On Tue, Oct 14, 2014 at 6:25 PM, Stephen Frost sfr...@snowman.net wrote:
* Lucas Lersch (lucasler...@gmail.com) wrote:
Unfortunately, in the generated trace with over 2 million buffer requests,
only ~14k different pages are being accessed, out of the 800k of the whole
database. Am I missing
shared_buffers is 128MB and the version of pgsql is 9.3.5
On Tue, Oct 14, 2014 at 6:31 PM, Lucas Lersch lucasler...@gmail.com wrote:
Sorry, I do not understand the question.
But I forgot to give an additional piece of information: I am printing the
page id for the trace file in ReleaseBuffer() only
Aren't heap and index requests supposed to go through the shared buffers
anyway?
On Tue, Oct 14, 2014 at 7:02 PM, Stephen Frost sfr...@snowman.net wrote:
* Lucas Lersch (lucasler...@gmail.com) wrote:
shared_buffers is 128MB and the version of pgsql is 9.3.5
I suspect you're not tracking
into the shared_buffers.
On Tue, Oct 14, 2014 at 7:21 PM, Stephen Frost sfr...@snowman.net wrote:
* Lucas Lersch (lucasler...@gmail.com) wrote:
Aren't heap and index requests supposed to go through the shared buffers
anyway?
Sure they do, but a given page in shared_buffers can be used over and over
Hi!
Does Postgres execute queries following an execution plan tree, where the
leaves are table scans and the nodes are joins?
I'm looking for a database where I can get the cardinality of a partial
result of the execution... for example, print the cardinality of the results
up to the next join.
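For what it's worth, EXPLAIN ANALYZE in PostgreSQL reports the actual row count produced at every node of that plan tree, including the inputs feeding each join. A small libpq sketch that simply prints that output (illustrative only; the connection string, tables, and join are made up):

    /* Illustrative only: run EXPLAIN ANALYZE and print the plan tree, whose
       "rows=" figures give the cardinality produced at every node.  The
       connection string, tables, and join condition are made-up examples. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=test");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        res = PQexec(conn,
                     "EXPLAIN ANALYZE "
                     "SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id");
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }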
checks and diagnostics without external dependencies.
I wonder: if the scheduler had already existed before autovacuum was
implemented, wouldn't autovacuum have been implemented as a function
executed by the in-core scheduler?
--
Lucas
--
Lucas
to be set only for the duration of the transaction or the next n commands?
--
Lucas Brito