At 2015-06-10 13:22:27 -0400, robertmh...@gmail.com wrote:
>
> I'm not clear on which of these options you are voting for:
>
> (1) include pg_log in pg_basebackup as we do currently
> (2) exclude it
> (3) add a switch controlling whether or not it gets excluded
>
> I can live with (3), but I bet
On 11 June 2015 at 16:15, Bruce Momjian wrote:
> I have committed the first draft of the 9.5 release notes. You can view
> the output here:
>
> http://momjian.us/pgsql_docs/release-9-5.html
>
>
Thanks Bruce.
Would you also be able to mention something about f15821e and d222585 ?
Regards,
On Thu, Jun 11, 2015 at 1:51 AM, Fujii Masao wrote:
> Shouldn't pg_rewind ignore that failure? If the file is not found in the
> source server, it obviously doesn't need to be copied to the destination
> server. So ISTM that pg_rewind can safely skip copying that file.
> Thought?
I thi
I ran into a typo in a comment in setrefs.c. Patch attached.
Best regards,
Etsuro Fujita
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index a7f65dd..162a52e 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@
On 2015-06-11 PM 01:15, Bruce Momjian wrote:
> I have committed the first draft of the 9.5 release notes. You can view
> the output here:
>
> http://momjian.us/pgsql_docs/release-9-5.html
>
> and it will eventually appear here:
>
> http://www.postgresql.org/docs/devel/static/r
On 2015/06/05 6:51, Robert Haas wrote:
On Mon, Jun 1, 2015 at 10:44 PM, Etsuro Fujita
wrote:
Here is a doc patch to add materialized views and foreign tables to
database objects that pg_table_is_visible() can be used with.
Good catch, as usual. Committed.
Thanks for picking this up!
Best
On 2015/06/10 20:18, Robert Haas wrote:
/*
* ALTER TABLE INHERIT
*
* Add a parent to the child's parents. This verifies that all the columns and
* check constraints of the parent appear in the child and that they have the
* same data types and expressions.
*/
static void
ATPrepAddInhe
On Thu, Jun 11, 2015 at 1:15 PM, Bruce Momjian wrote:
> I have committed the first draft of the 9.5 release notes. You can view
> the output here:
>
> http://momjian.us/pgsql_docs/release-9-5.html
>
> and it will eventually appear here:
>
> http://www.postgresql.org/docs/devel/sta
On Thu, Jun 11, 2015 at 9:45 AM, Bruce Momjian wrote:
>
> I have committed the first draft of the 9.5 release notes. You can view
> the output here:
>
> http://momjian.us/pgsql_docs/release-9-5.html
>
Thanks for writing the Release notes.
Some comments:
Have pg_basebackup use a tablesp
On Wed, Jun 10, 2015 at 12:09 PM, Fujii Masao wrote:
>
> On Tue, Jun 9, 2015 at 3:29 PM, Amit Kapila
wrote:
> > On Tue, Jun 9, 2015 at 10:56 AM, Fujii Masao
wrote:
> >> Or what about removing tablespace_map file at the beginning of recovery
> >> whenever backup_label doesn't exist?
> >
> > Yes,
I have committed the first draft of the 9.5 release notes. You can view
the output here:
http://momjian.us/pgsql_docs/release-9-5.html
and it will eventually appear here:
http://www.postgresql.org/docs/devel/static/release.html
I am ready to make suggested adjustments,
Hello,
I got the following error during DBT-3 benchmark with SF=20.
psql:query21.sql:50: ERROR: invalid memory alloc request size 1073741824
psql:query21.sql:50: ERROR: invalid memory alloc request size 1073741824
It looks to me that the Hash node tries to allocate a 1GB area using palloc0(), but it exceeds
t
On 06/09/2015 05:17 PM, Michael Paquier wrote:
> On Wed, Jun 10, 2015 at 8:41 AM, Josh Berkus wrote:
>> On 06/09/2015 04:38 PM, Michael Paquier wrote:
>>> On Wed, Jun 10, 2015 at 8:31 AM, Josh Berkus wrote:
Tom, all:
First draft of the release announcement.
Please improve
On Mon, Jun 08, 2015 at 03:15:04PM +0200, Andres Freund wrote:
> One more thing:
> Our testing infrastructure sucks. Without writing C code it's basically
> impossible to test wraparounds and such. Even if not particularly useful
> for non-devs, I really think we should have functions for creating
On Thu, Jun 11, 2015 at 2:38 AM, Fujii Masao wrote:
> * Remove invalid option character "N" from the third argument (valid option
> string) of getopt_long().
> * Use pg_free() or pfree() to free the memory allocated by pg_malloc() or
> palloc() instead of always using free().
> * Assume problem is
On Wed, Jun 10, 2015 at 8:48 PM, Rosiński Krzysztof 2 - Detal
wrote:
> How to use this optimization?
>
>
>
> select *
>
> from table join partitioned_table on (
>
> table.part_id = partitioned_table.id
>
> and hash_func_mod(table.part_id) = hash_func_mod(partitioned_table.id)
>
> )
>
If I re
On 6/6/15 10:32 PM, Alvaro Herrera wrote:
> Peter Eisentraut wrote:
>> With the recently released Perl 5.22.0, the tests fail thus:
>>
>> -ERROR: Global symbol "$global" requires explicit package name at line 3.
>> -Global symbol "$other_global" requires explicit package name at line 4.
>> +ERROR:
On 06/10/2015 06:08 PM, Josh Berkus wrote:
WFM. So the idea is that if json_pointer is implemented as a type, then
we'll have an operator for "jsonb - json_pointer"?
Right.
cheers
andrew
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscr
Currently, speculative insertion (the INSERT ... ON CONFLICT DO UPDATE
executor/storage infrastructure) uses checkUnique ==
UNIQUE_CHECK_PARTIAL for unique indexes, which is a constant
originally only used by deferred unique constraints. It occurred to me
that this has a number of disadvantages:
*
On Tue, Jun 2, 2015 at 2:45 AM, Mark Kirkwood wrote:
> On 01/06/15 05:29, Joel Jacobson wrote:
>
>> While anyone who is familiar with postgres would never do something as
>> stupid as to delete pg_xlog,
>> according to Google, there appears to be a fair amount of end-users out
>> there having mad
On 06/10/2015 12:00 PM, Andrew Dunstan wrote:
> We need to remove the ambiguity with jsonb_delete() by renaming the
> variant that takes a text[] (meaning a path) as the second argument to
> jsonb_delete_path. That seems uncontroversial.
Speaking as a user ... works for me.
> We need to rename th
On Wed, Jun 10, 2015 at 6:01 PM, deavid wrote:
> By now, my results were a bit disappointing: (comparing gin_btree against
> regular btree for a column with very low cardinality)
> - create index and updates: about 10-20% faster (i had a primary key, so
> btree unique checks may be here blurring t
Hi again. I tried to run some tests on my office computer, but after spending
2-3 hours I gave up. I'm going to need a real SSD disk to try these things.
100k rows of my "delivery notes" table use 100 MB of disk; and 2 GB of RAM
may not be enough to emulate fast IO. (I was disabling fsync, activating
On Wed, Jun 10, 2015 at 9:10 AM, Gurjeet Singh wrote:
>
> I am in the process of writing up a doc patch, and will submit that as
> well in a short while.
>
Please find attached the patch with the doc update.
Best regards,
--
Gurjeet Singh http://gurjeet.singh.im/
physical_repl_slot_activate_
On Wed, Jun 10, 2015 at 11:48 AM, Andrew Dunstan wrote:
> Sorry for the delay on this. I've been mostly off the grid, having an all
> too rare visit from Tom "Mr Enum" Dunstan, and I misunderstood what you were
> suggesting,
Thank you for working with me to address this. I've been busy with
other
On Wed, Jun 10, 2015 at 1:58 PM, Andres Freund wrote:
>> Now that we (EnterpriseDB) have this 8-socket machine, maybe we could
>> try your patch there, bound to varying numbers of sockets.
>
> It'd be a significant amount of work to rebase it ontop current HEAD. I
> guess the easiest thing would b
Andrew Dunstan writes:
> Future plans that might affect this issue: possible implementations of
> Json Pointer (rfc 6901), Json Patch (rfc 6902) and Json Merge Patch (rfc
> 7396). The last one is on this list for completeness - it seems to me a
> lot less useful than the others, but I included
On Wed, Jun 10, 2015 at 9:00, Andrew Dunstan
wrote:
This is an attempt to summarize what I think is now the lone
outstanding jsonb issue.
We need to remove the ambiguity with jsonb_delete() by renaming the
variant that takes a text[] (meaning a path) as the second argument
to jsonb_delete_p
This is an attempt to summarize what I think is now the lone outstanding
jsonb issue.
We need to remove the ambiguity with jsonb_delete() by renaming the
variant that takes a text[] (meaning a path) as the second argument to
jsonb_delete_path. That seems uncontroversial.
We need to rename th
On 06/05/2015 01:51 PM, Andrew Dunstan wrote:
On 06/05/2015 01:39 PM, Peter Geoghegan wrote:
On Thu, Jun 4, 2015 at 12:10 PM, Peter Geoghegan wrote:
But I agree that it's not a great contribution to science,
especially since
the index will be applied to the list of elements in the somewhat
On 2015-06-10 13:52:14 -0400, Robert Haas wrote:
> On Wed, Jun 10, 2015 at 1:39 PM, Andres Freund wrote:
> > Well, not necessarily. If you can write your algorithm in a way that
> > xadd etc are used, instead of a lock cmpxchg, you're actually never
> > spinning on x86 as it's guaranteed to succee
On Wed, Jun 10, 2015 at 1:39 PM, Andres Freund wrote:
> In the uncontended case lwlocks are just as fast as spinlocks now, with
> the exception of the local tracking array. They're faster if there's
> differences with read/write lockers.
If nothing else, the spinlock calls are inline, while the l
Hi,
Attached patch fixes the minor issues in pg_rewind. The fixes are
* Remove invalid option character "N" from the third argument (valid option
string) of getopt_long().
* Use pg_free() or pfree() to free the memory allocated by pg_malloc() or
palloc() instead of always using free().
* Assume
On 2015-06-10 13:19:14 -0400, Robert Haas wrote:
> On Wed, Jun 10, 2015 at 11:58 AM, Andres Freund wrote:
> > I think we should just gank spinlocks asap. The hard part is removing
> > them from lwlock.c's slow path and the buffer headers imo. After that we
> > should imo be fine replacing them wit
On 06/10/2015 10:22 AM, Robert Haas wrote:
On Wed, Jun 10, 2015 at 1:12 PM, Joshua D. Drake wrote:
On 06/10/2015 10:01 AM, Andres Freund wrote:
On 2015-06-10 09:57:17 -0700, Jeff Janes wrote:
My goal isn't that. My goal is to have a consistent backup without
having to shut down the serve
On Wed, Jun 10, 2015 at 1:12 PM, Joshua D. Drake wrote:
> On 06/10/2015 10:01 AM, Andres Freund wrote:
>> On 2015-06-10 09:57:17 -0700, Jeff Janes wrote:
>>> My goal isn't that. My goal is to have a consistent backup without
>>> having to shut down the server to take a cold one, or having to ma
On Wed, Jun 10, 2015 at 11:58 AM, Andres Freund wrote:
> I think we should just gank spinlocks asap. The hard part is removing
> them from lwlock.c's slow path and the buffer headers imo. After that we
> should imo be fine replacing them with lwlocks.
Mmmph. I'm not convinced there's any point i
On 06/10/2015 10:01 AM, Andres Freund wrote:
On 2015-06-10 09:57:17 -0700, Jeff Janes wrote:
My goal isn't that. My goal is to have a consistent backup without
having to shut down the server to take a cold one, or having to manually
juggle the pg_start_backup, etc. commands.
A basebackup
Robbie Harwood writes:
> Stephen Frost writes:
>
>> Robbie,
>>
>> * Robbie Harwood (rharw...@redhat.com) wrote:
>>
>>> We'd I think also want a new kind of HBA entry (probably something along
>>> the lines of `hostgss` to contrast with `hostssl`), but I'm not sure
>>> what we'd want to do for th
On 2015-06-10 09:57:17 -0700, Jeff Janes wrote:
> My goal isn't that. My goal is to have a consistent backup without
> having to shut down the server to take a cold one, or having to manually
> juggle the pg_start_backup, etc. commands.
A basebackup won't necessarily give you a consistent log t
On Wed, Jun 10, 2015 at 8:29 AM, Robert Haas wrote:
> On Mon, Jun 8, 2015 at 12:09 AM, Michael Paquier
> wrote:
> >> Recently, one of our customers has had a basebackup fail because pg_log
> >> contained files that were >8GB:
> >> FATAL: archive member "pg_log/postgresql-20150119.log" too large
Hi,
While testing pg_rewind, I got the following error and pg_rewind failed.
$ pg_rewind -D ... --source-server="..." -P
ERROR: could not open file "base/13243/16384" for reading: No
such file or directory
STATEMENT: SELECT path, begin,
pg_read_binary_file(path, begin, len
How to use this optimization?
select *
from table join partitioned_table on (
table.part_id = partitioned_table.id
and hash_func_mod(table.part_id) = hash_func_mod(partitioned_table.id)
)
Fujii Masao wrote:
> Agreed. The attached patch defines the macro to check whether archiver is
> allowed to start up or not, and uses it everywhere except sigusr1_handler.
> I made sigusr1_handler use a different condition because only it tries to
> start archiver in PM_STARTUP postmaster state an
On Wed, Jun 10, 2015 at 8:36 AM, Andres Freund wrote:
> On 2015-06-10 08:24:23 -0700, Gurjeet Singh wrote:
> > On Wed, Jun 10, 2015 at 8:07 AM, Andres Freund
> wrote:
> > > That doesn't look right to me. Why is this code logging a standby
> > > snapshot for physical slots?
> > >
> >
> > This is
On 2015-06-10 17:57:42 +0200, Shulgin, Oleksandr wrote:
> it turns out that the code in WalSndWriteData is setting the timestamp of
> the replication message just *after* it has been sent out to the client,
> thus the sendtime field always reads as zero.
Ugh, what a stupid bug. Thanks!
Andres
Hi Hackers,
it turns out that the code in WalSndWriteData is setting the timestamp of
the replication message just *after* it has been sent out to the client,
thus the sendtime field always reads as zero.
Attached is a trivial patch to fix this. The physical replication path
already does the co
On 2015-06-10 11:51:06 -0400, Jan Wieck wrote:
> >ret = pg_atomic_fetch_sub_u32(&buf->state, 1);
> >
> >if (ret & BM_PIN_COUNT_WAITER)
> >{
> >pg_atomic_fetch_sub_u32(&buf->state, BM_PIN_COUNT_WAITER);
> >/* XXX: deal with race that another backend has set BM_PIN_COUNT_WAITER
> > */
> >}
>
>>> As in 200%+ slower.
>> Have you tried PTHREAD_MUTEX_ADAPTIVE_NP ?
> Yes.
Ok, if this can be validated, we might have a new case now for which my
suggestion would not be helpful. Reviewed, optimized code with short critical
sections and no hotspots by design could indeed be an exception where t
On 06/10/2015 11:34 AM, Andres Freund wrote:
If you check the section where the spinlock is held there's nontrivial
code executed. Under contention you'll have the problem that if backend
tries to acquire the the spinlock while another backend holds the lock,
it'll "steal" the cacheline on which
On 2015-06-10 17:30:33 +0200, Nils Goroll wrote:
> On 10/06/15 17:17, Andres Freund wrote:
> > On 2015-06-10 16:07:50 +0200, Nils Goroll wrote:
> > Interesting. I've been able to reproduce quite massive slowdowns doing
> > this on a 4 socket linux machine (after applying the lwlock patch that's
> >
On 2015-06-10 08:24:23 -0700, Gurjeet Singh wrote:
> On Wed, Jun 10, 2015 at 8:07 AM, Andres Freund wrote:
> > That doesn't look right to me. Why is this code logging a standby
> > snapshot for physical slots?
> >
>
> This is the new function I referred to above. The logging of the snapshot
> is
On 2015-06-10 11:12:46 -0400, Jan Wieck wrote:
> The test case is that 200 threads are running in a tight loop like this:
>
> for (...)
> {
> s_lock();
> // do something with a global variable
> s_unlock();
> }
>
> That is the most contended case I can think of, yet the short and
> pr
On 10/06/15 17:17, Andres Freund wrote:
> On 2015-06-10 16:07:50 +0200, Nils Goroll wrote:
>> On larger Linux machines, we have been running with spin locks replaced by
>> generic posix mutexes for years now. I personally haven't looked at the code
>> for
>> ages, but we maintain a patch which pre
On Mon, Jun 8, 2015 at 12:09 AM, Michael Paquier
wrote:
>> Recently, one of our customers has had a basebackup fail because pg_log
>> contained files that were >8GB:
>> FATAL: archive member "pg_log/postgresql-20150119.log" too large for tar
>> format
>>
>> I think pg_basebackup should also skip
On Wed, Jun 10, 2015 at 8:07 AM, Andres Freund wrote:
> On 2015-06-10 08:00:28 -0700, Gurjeet Singh wrote:
>
> > pg_create_logical_replication_slot() prevents LSN from being
> > recycled that by looping (worst case 2 times) until there's no
> > conflict with the checkpointer recycling the segment
On 2015-06-10 16:07:50 +0200, Nils Goroll wrote:
> On larger Linux machines, we have been running with spin locks replaced by
> generic posix mutexes for years now. I personally haven't looked at the code for
> ages, but we maintain a patch which pretty much does the same thing still:
Interesting. I
On 10/06/15 17:12, Jan Wieck wrote:
> for (...)
> {
> s_lock();
> // do something with a global variable
> s_unlock();
> }
OK, I understand now, thank you. I am not sure if this test case is appropriate
for the critical sections in postgres (if it was, we'd not have the problem we
are
On 06/10/2015 11:06 AM, Nils Goroll wrote:
On 10/06/15 16:18, Jan Wieck wrote:
I have played with test code that isolates a stripped down version of s_lock()
and uses it with multiple threads. I then implemented multiple different
versions of that s_lock(). The results with 200 concurrent threa
On 10/06/15 17:01, Andres Freund wrote:
>> > - The fact that well behaved mutexes have a higher initial cost could even
>> > motivate good use of them rather than optimize misuse.
> Well. There's many locks in a RDBMS that can't realistically be
> avoided. So optimizing for no and moderate cont
On 2015-06-10 08:00:28 -0700, Gurjeet Singh wrote:
> Attached is the patch that takes the former approach (initialize
> restart_lsn when the slot is created).
If it's an option that's imo a sane approach.
> pg_create_logical_replication_slot() prevents LSN from being
> recycled that by looping (w
On 10/06/15 16:18, Jan Wieck wrote:
>
> I have played with test code that isolates a stripped down version of s_lock()
> and uses it with multiple threads. I then implemented multiple different
> versions of that s_lock(). The results with 200 concurrent threads are that
> using a __sync_val_compa
On 06/10/2015 10:59 AM, Robert Haas wrote:
On Wed, Jun 10, 2015 at 10:20 AM, Tom Lane wrote:
Jan Wieck writes:
The attached patch demonstrates that less aggressive spinning and (much)
more often delaying improves the performance "on this type of machine".
Hm. One thing worth asking is why
On 2015-06-10 16:55:31 +0200, Nils Goroll wrote:
> But still I am convinced that on today's massively parallel NUMAs, spinlocks
> are
> plain wrong:
Sure. But a large number of installations are not using massive NUMA
systems, so we can't focus on optimizing for NUMA.
We definitely have quite so
On Tue, May 5, 2015 at 5:53 PM, Andres Freund wrote:
>
> > Was there any consideration for initializing restart_lsn to the latest
> > WAL write pointer when a slot is created? Or for allowing an optional
> > parameter in pg_create_(physical|logical)_replication_slot() for
> > specifying the resta
On Wed, Jun 10, 2015 at 10:20 AM, Tom Lane wrote:
> Jan Wieck writes:
>> The attached patch demonstrates that less aggressive spinning and (much)
>> more often delaying improves the performance "on this type of machine".
>
> Hm. One thing worth asking is why the code didn't converge to a good
>
On 2015-06-10 10:25:32 -0400, Tom Lane wrote:
> Andres Freund writes:
> > Unfortunately there's no portable futex support. That's what stopped us
> > from adopting them so far. And even futexes can be significantly more
> > heavyweight under moderate contention than our spinlocks - It's rather
>
On 10/06/15 16:20, Andres Freund wrote:
> That's precisely what I referred to in the bit you cut away...
I apologize, yes.
On 10/06/15 16:25, Tom Lane wrote:
> Optimizing for misuse of the mechanism is not the way.
I absolutely agree and I really appreciate all efforts towards lockless data
stru
On 06/10/2015 10:20 AM, Tom Lane wrote:
Jan Wieck writes:
The attached patch demonstrates that less aggressive spinning and (much)
more often delaying improves the performance "on this type of machine".
Hm. One thing worth asking is why the code didn't converge to a good
value of spins_per_d
Prakash Itnal wrote:
> Hello,
>
> Recently we encountered an issue where the disk space is continuously
> increasing towards 100%. A manual vacuum then freed the disk space, but
> it is increasing again. Digging deeper, we found that auto-vacuuming
> was not running, or was stuck/hung
On Wed, Jun 10, 2015 at 11:12 PM, Alvaro Herrera
wrote:
> Fujii Masao wrote:
>> On Tue, Jun 9, 2015 at 5:21 AM, Alvaro Herrera
>> wrote:
>> > Fujii Masao wrote:
>
>> > Can't we create
>> > some common function that would be called both here and on ServerLoop?
>>
>> Agreed. So, what about the att
Andres Freund writes:
> Unfortunately there's no portable futex support. That's what stopped us
> from adopting them so far. And even futexes can be significantly more
> heavyweight under moderate contention than our spinlocks - It's rather
> easy to reproduce scenarios where futexes cause signif
On 2015-06-10 16:12:05 +0200, Nils Goroll wrote:
>
> On 10/06/15 16:05, Andres Freund wrote:
> > it'll nearly always be beneficial to spin
>
> Trouble is that postgres cannot know if the process holding the lock actually
> does run, so if it doesn't, all we're doing is burn cycles and make the
> p
Jan Wieck writes:
> The attached patch demonstrates that less aggressive spinning and (much)
> more often delaying improves the performance "on this type of machine".
Hm. One thing worth asking is why the code didn't converge to a good
value of spins_per_delay without help. The value should d
On 06/10/2015 10:07 AM, Nils Goroll wrote:
On larger Linux machines, we have been running with spin locks replaced by
generic posix mutexes for years now. I personally haven't looked at the code for
ages, but we maintain a patch which pretty much does the same thing still:
Ref: http://www.postgres
Fujii Masao wrote:
> On Tue, Jun 9, 2015 at 5:21 AM, Alvaro Herrera
> wrote:
> > Fujii Masao wrote:
> > Can't we create
> > some common function that would be called both here and on ServerLoop?
>
> Agreed. So, what about the attached patch?
No attachment ...
> > We also have sigusr1_handler
On 10/06/15 16:05, Andres Freund wrote:
> it'll nearly always be beneficial to spin
Trouble is that postgres cannot know if the process holding the lock actually
does run, so if it doesn't, all we're doing is burn cycles and make the problem
worse.
Contrary to that, the kernel does know, so for
Noah Misch writes:
> On Tue, Jun 09, 2015 at 12:24:02PM -0400, Tom Lane wrote:
>> Yeah, my first instinct was to blame ca325941 as well, but I don't think
>> any of that code executes during init_locale(). Also,
>> http://www.postgresql.org/message-id/20150326040321.2492.24...@wrigleys.postgresql
On larger Linux machines, we have been running with spin locks replaced by
generic posix mutexes for years now. I personally haven't looked at the code for
ages, but we maintain a patch which pretty much does the same thing still:
Ref: http://www.postgresql.org/message-id/4fede0bf.7080...@schokola.d
Hi,
On 2015-06-10 09:54:00 -0400, Jan Wieck wrote:
> model name : Intel(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
> numactl --hardware shows the distance to the attached memory as 10, the
> distance to every other node as 21. I interpret that as the machine having
> one NUMA bus with all cpu packag
On 06/10/2015 09:28 AM, Andres Freund wrote:
On 2015-06-10 09:18:56 -0400, Jan Wieck wrote:
On a machine with 8 sockets, 64 cores, Hyperthreaded 128 threads total, a
pgbench -S peaks with 50-60 clients around 85,000 TPS. The throughput then
takes a very sharp dive and reaches around 20,000 TPS a
On Tue, Jun 09, 2015 at 03:54:59PM -0400, David Steele wrote:
> I've certainly had quite the experience as a first-time contributor
> working on this patch. Perhaps I bit off more than I should have and I
> definitely managed to ruffle a few feathers along the way. I learned a
> lot about how the
David Rowley wrote:
> On 10 June 2015 at 02:52, Kevin Grittner wrote:
>> David Rowley wrote:
>>> The idea I discussed in the link in item 5 above gets around this
>>> problem, but it's a perhaps more surprise filled implementation
>>> as it will mean "select avg(x),sum(x),count(x) from t" is
>>>
On Wed, Jun 10, 2015 at 09:18:56AM -0400, Jan Wieck wrote:
> The attached patch demonstrates that less aggressive spinning and
> (much) more often delaying improves the performance "on this type of
> machine". The 8 socket machine in question scales to over 350,000
> TPS.
>
> The patch is meant to
On 2015-06-10 09:18:56 -0400, Jan Wieck wrote:
> On a machine with 8 sockets, 64 cores, Hyperthreaded 128 threads total, a
> pgbench -S peaks with 50-60 clients around 85,000 TPS. The throughput then
> takes a very sharp dive and reaches around 20,000 TPS at 120 clients. It
> never recovers from th
Josh,
On Tue, Jun 9, 2015 at 9:16 PM, Josh Berkus wrote:
> Dmitry, Alexander:
>
> I'm noticing a feature gap for JSONB operators; we have no way to do this:
>
> jsonb_col ? ARRAY['key1','key2','key3']
>
What documents do you expect to match this operator?
Such syntax can be interpreted in very
Hi,
I think I may have found one of the problems, PostgreSQL has on machines
with many NUMA nodes. I am not yet sure what exactly happens on the NUMA
bus, but there seems to be a tipping point at which the spinlock
concurrency wreaks havoc and the performance of the database collapses.
On a
On Wed, Jun 10, 2015 at 12:29 PM, Joshua D. Drake
wrote:
>
> On 06/09/2015 05:54 PM, Michael Paquier wrote:
>
>> Looking at the documentation what is expected is not a path to a
>> segment file, but only a segment file name:
>> http://www.postgresql.org/docs/devel/static/pgarchivecleanup.html
>>
On Wed, Jun 10, 2015 at 8:33 PM, Kouhei Kaigai wrote:
>> On 2015-06-10 PM 01:42, Kouhei Kaigai wrote:
>> >
>> > Let's assume a table which is partitioned to four portions,
>> > and individual child relations have constraint by hash-value
>> > of its ID field.
>> >
>> > tbl_parent
>> >+ tbl_c
Hello,
Recently we encountered an issue where the disk space is continuously
increasing towards 100%. A manual vacuum then freed the disk space, but
it is increasing again. Digging deeper, we found that auto-vacuuming
was not running, or was stuck/hung.
Version: 9.1.12
Auto vacuum
> On 2015-06-10 PM 01:42, Kouhei Kaigai wrote:
> >
> > Let's assume a table which is partitioned to four portions,
> > and individual child relations have constraint by hash-value
> > of its ID field.
> >
> > tbl_parent
> >+ tbl_child_0 ... CHECK(hash_func(id) % 4 = 0)
> >+ tbl_child_1 ..
/*
* ALTER TABLE INHERIT
*
* Add a parent to the child's parents. This verifies that all the columns and
* check constraints of the parent appear in the child and that they have the
* same data types and expressions.
*/
static void
ATPrepAddInherit(Relation child_rel)
{
if (child_rel->rd_
On Wed, Jun 10, 2015 at 8:02 PM, Andres Freund wrote:
> Does somebody mind me backpatching the missing XLOG_DEBUG &&?
ISTM that it is a good idea to have it in REL9_4_STABLE as well.
Regards,
--
Michael
Hi,
When compiling with WAL_DEBUG defined, but wal_debug set to off, there's
a lot of DEBUG1 spew like
DEBUG: initialized 1 pages, upto 40/3977E000
DEBUG: initialized 9 pages, upto 40/3979
DEBUG: initialized 1 pages, upto 40/39792000
DEBUG: initialized 1 pages, upto 40/39794000
DEBUG: ini
On 2015-06-10 PM 05:53, Atri Sharma wrote:
> On Wed, Jun 10, 2015 at 2:16 PM, Amit Langote > wrote:
>
>>
>> Perhaps the qual needs to be pushed all the way down
>> to the Hash's underlying scan if that makes sense.
>>
>
> And that is a Pandora's box of troubles IMHO unless done in a very careful
On Wed, Jun 10, 2015 at 2:16 PM, Amit Langote wrote:
>
> Perhaps the qual needs to be pushed all the way down
> to the Hash's underlying scan if that makes sense.
>
And that is a Pandora's box of troubles IMHO unless done in a very careful
manner.
On 2015-06-10 PM 01:42, Kouhei Kaigai wrote:
>
> Let's assume a table which is partitioned to four portions,
> and individual child relations have constraint by hash-value
> of its ID field.
>
> tbl_parent
>+ tbl_child_0 ... CHECK(hash_func(id) % 4 = 0)
>+ tbl_child_1 ... CHECK(hash_fun
On 2015-06-10 01:57:22 -0400, Noah Misch wrote:
> I think I agree with everything after your first sentence. I liked your
> specific proposal to split StartupXLOG(), but making broad-appeal
> restructuring proposals is hard. I doubt we would get good results by casting
> a wide net for restructur
On 2015-06-10 11:20:19 +1200, Thomas Munro wrote:
> I was wondering about this in the context of the recent multixact
> work, since such configurations could leave you with different SLRU
> files on disk which in some versions might change the behaviour in
> interesting ways.
Note that trigger a r
On Tue, Jun 9, 2015 at 8:37 PM, Andrew Dunstan wrote:
>
> On 06/08/2015 11:19 PM, Amit Kapila wrote:
>
>>
>> I think Robert and Alvaro also seems to be inclined towards throwing
>> error for such a case, so let us do that way, but one small point is that
>> don't you think that similar code in de