Re: [HACKERS] things I learned from working on memory allocation

2014-07-15, Robert Haas
On Mon, Jul 14, 2014 at 12:19 PM, Andres Freund <and...@2ndquadrant.com> wrote: On 2014-07-14 11:24:26 -0400, Robert Haas wrote: On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund <and...@2ndquadrant.com> wrote: The actual if (lock != NULL) bit costs significant amounts of cycles? I'd have assumed ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-14, Robert Haas
On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund <and...@2ndquadrant.com> wrote: The actual if (lock != NULL) bit costs significant amounts of cycles? I'd have assumed that branch prediction takes care of that. Or is it actually the icache not keeping up? Did you measure icache vs. dcache misses?
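
The branch being asked about is the per-allocation test of whether the context needs locking at all. A minimal sketch of that pattern, with hypothetical names and a plain mutex standing in for the spinlock or LWLock a real shared context would use (this is not code from the patch):

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical, simplified stand-in for a context that may or may not be shared. */
    typedef struct shared_region
    {
        pthread_mutex_t *lock;      /* NULL when the region is backend-private */
        char            *free_ptr;  /* next unallocated byte */
        char            *end_ptr;   /* one past the end of the region */
    } shared_region;

    static void *
    region_alloc(shared_region *region, size_t size)
    {
        void   *result = NULL;

        if (region->lock != NULL)   /* the "if (lock != NULL)" branch in question */
            pthread_mutex_lock(region->lock);

        if ((size_t) (region->end_ptr - region->free_ptr) >= size)
        {
            result = region->free_ptr;
            region->free_ptr += size;
        }

        if (region->lock != NULL)
            pthread_mutex_unlock(region->lock);

        return result;
    }

Whether that test, and the extra code it pulls into the hot path, costs anything measurable is exactly what the branch-prediction and icache-vs-dcache questions above are probing.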

Re: [HACKERS] things I learned from working on memory allocation

2014-07-14, Andres Freund
On 2014-07-14 11:24:26 -0400, Robert Haas wrote: On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund <and...@2ndquadrant.com> wrote: The actual if (lock != NULL) bit costs significant amounts of cycles? I'd have assumed that branch prediction takes care of that. Or is it actually the icache not ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-13, Andres Freund
Hi Robert, On 2014-07-07 15:57:00 -0400, Robert Haas wrote: 1. I tried to write a single allocator which would be better than aset.c in two ways: first, by allowing allocation from dynamic shared memory; and second, by being more memory-efficient than aset.c. Heikki told me at PGCon that he ...
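
One concrete way an allocator living in dynamic shared memory has to differ from aset.c: the segment can be mapped at a different address in each backend, so internal links are better stored as offsets from the segment base than as raw pointers. A small illustrative sketch, with made-up names (not code from the thread):

    #include <stddef.h>

    typedef size_t relptr;                  /* offset from the segment base; 0 means "null" */

    /* Hypothetical chunk header for a freelist kept inside the shared segment. */
    typedef struct chunk_header
    {
        size_t  size;                       /* usable bytes in this chunk */
        relptr  next_free;                  /* next freelist entry, as an offset */
    } chunk_header;

    static inline void *
    relptr_to_pointer(char *segment_base, relptr r)
    {
        return (r == 0) ? NULL : (void *) (segment_base + r);
    }

    static inline relptr
    pointer_to_relptr(char *segment_base, void *ptr)
    {
        return (ptr == NULL) ? 0 : (relptr) ((char *) ptr - segment_base);
    }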

Re: [HACKERS] things I learned from working on memory allocation

2014-07-11, Robert Haas
On Thu, Jul 10, 2014 at 1:05 AM, Amit Kapila <amit.kapil...@gmail.com> wrote: On Tue, Jul 8, 2014 at 1:27 AM, Robert Haas <robertmh...@gmail.com> wrote: 6. In general, I'm worried that it's going to be hard to keep the overhead of parallel sort from leaking into the non-parallel case. With the ...
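
For context, GetMemoryChunkSpace() is the call tuplesort uses to charge each in-memory tuple against work_mem. A simplified, hypothetical call-site sketch (not the actual tuplesort.c code) of the kind of hot path where an added indirection for the parallel case could leak overhead into ordinary serial sorts:

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Hypothetical, simplified stand-in for the relevant tuplesort bookkeeping. */
    typedef struct SortMemAccounting
    {
        long        availMem;       /* bytes of work_mem still available */
    } SortMemAccounting;

    static void
    account_for_tuple(SortMemAccounting *acct, void *tuple)
    {
        /* GetMemoryChunkSpace() reports the chunk's size including its header. */
        acct->availMem -= (long) GetMemoryChunkSpace(tuple);
    }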

Re: [HACKERS] things I learned from working on memory allocation

2014-07-11, Amit Kapila
On Fri, Jul 11, 2014 at 11:15 PM, Robert Haas <robertmh...@gmail.com> wrote: On Thu, Jul 10, 2014 at 1:05 AM, Amit Kapila <amit.kapil...@gmail.com> wrote: If there is a noticeable impact, then do you think having separate file/infrastructure for parallel sort can help, basically non-parallel ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-09, Peter Geoghegan
On Mon, Jul 7, 2014 at 7:29 PM, Peter Geoghegan <p...@heroku.com> wrote: I do think that's a problem with our sort implementation, but it's not clear to me whether it's *more* of a problem for parallel sort than it is for single-backend sort. If you'll forgive me for going on about my patch on ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-09, Amit Kapila
On Tue, Jul 8, 2014 at 1:27 AM, Robert Haas <robertmh...@gmail.com> wrote: 6. In general, I'm worried that it's going to be hard to keep the overhead of parallel sort from leaking into the non-parallel case. With the no-allocator approach, every place that uses GetMemoryChunkSpace() or ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-07, Peter Geoghegan
On Mon, Jul 7, 2014 at 12:57 PM, Robert Haas <robertmh...@gmail.com> wrote: 5. It's tempting to look at other ways of solving the parallel sort problem that don't need an allocator - perhaps by simply packing all the tuples into a DSM one after the next. But this is not easy to do, or at least ...
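
A minimal sketch of the "pack the tuples one after the next" idea, with illustrative names only (not code from the thread): a bump pointer into a single shared area, with a length prefix per tuple so the data can be walked back later. Note what the sketch does not give you: no pfree or reuse, no easy way to grow the area, and still some per-tuple bookkeeping for variable-length data.

    #include <stddef.h>
    #include <string.h>

    typedef struct PackedTupleArea
    {
        char   *base;       /* start of the shared segment */
        size_t  used;       /* bytes consumed so far */
        size_t  capacity;   /* total bytes in the segment */
    } PackedTupleArea;

    /* Append one tuple; returns its offset within the area, or (size_t) -1 when full. */
    static size_t
    packed_area_append(PackedTupleArea *area, const void *tuple, size_t len)
    {
        size_t  offset = area->used;

        if (len + sizeof(size_t) > area->capacity - area->used)
            return (size_t) -1;     /* full: would need another segment or a spill */

        memcpy(area->base + area->used, &len, sizeof(size_t));          /* length prefix */
        memcpy(area->base + area->used + sizeof(size_t), tuple, len);   /* tuple payload */
        area->used += sizeof(size_t) + len;
        return offset;
    }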

Re: [HACKERS] things I learned from working on memory allocation

2014-07-07, Robert Haas
On Mon, Jul 7, 2014 at 5:37 PM, Peter Geoghegan <p...@heroku.com> wrote: On Mon, Jul 7, 2014 at 12:57 PM, Robert Haas <robertmh...@gmail.com> wrote: 5. It's tempting to look at other ways of solving the parallel sort problem that don't need an allocator - perhaps by simply packing all the tuples ...

Re: [HACKERS] things I learned from working on memory allocation

2014-07-07, Peter Geoghegan
On Mon, Jul 7, 2014 at 7:04 PM, Robert Haas <robertmh...@gmail.com> wrote: The testing I did showed about a 5% overhead on REINDEX INDEX pgbench_accounts_pkey from one extra tuple copy (cf. 9f03ca915196dfc871804a1f8aad26207f601fd6). Of course that could vary by circumstance for a variety of ...