On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Are we sure that in such cases we will consume work_mem during
execution? In the case of parallel workers, we are sure to an extent
that if we reserve the workers then we will use them during execution.
Nonetheless, I have
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many, then I could see that being worth a warning or notice
or something. Not sure what EXPLAIN has to do with it, though.
That seems
On Sun, Jan 11, 2015 at 9:09 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
2. To enable two types of shared memory queues (error queue and
tuple queue), we need to ensure that we switch to the appropriate queue
during
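To make that concrete, here is a minimal sketch of two-queue switching, assuming one shm_mq per purpose in a shared DSM segment; the struct and the routing function are made up for illustration, and only shm_mq_send() is the real shm_mq API:

#include "postgres.h"
#include "storage/shm_mq.h"

/* Hypothetical bookkeeping: one handle per queue, both attached by the
 * worker at startup. */
typedef struct WorkerQueues
{
    shm_mq_handle *error_mqh;   /* errors and notices flow here */
    shm_mq_handle *tuple_mqh;   /* scanned tuples flow here */
} WorkerQueues;

/* Route a message to the queue matching its kind, so tuple data never
 * interleaves with error data. */
static shm_mq_result
worker_send(WorkerQueues *queues, bool is_error, Size nbytes, const void *data)
{
    shm_mq_handle *mqh = is_error ? queues->error_mqh : queues->tuple_mqh;

    return shm_mq_send(mqh, nbytes, data, false);
}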
On Fri, Jan 9, 2015 at 10:54 PM, Stephen Frost sfr...@snowman.net wrote:
* Amit Kapila (amit.kapil...@gmail.com) wrote:
In our case, as we currently don't have a mechanism to reuse parallel
workers, we need to account for that cost as well. So based on that,
I am planning to add three new
On Sat, Jan 10, 2015 at 2:45 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
On 01/09/2015 08:01 PM, Stephen Frost wrote:
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby jim.na...@bluetreble.com
wrote:
I agree, but we should try
On 1/9/15, 3:34 PM, Stephen Frost wrote:
* Stefan Kaltenbrunner (ste...@kaltenbrunner.cc) wrote:
On 01/09/2015 08:01 PM, Stephen Frost wrote:
Now, for debugging purposes, I could see such a parameter being
available but it should default to 'off/never-fail'.
not sure what it really would be
On 1/9/15, 11:24 AM, Stephen Frost wrote:
What I was advocating for up-thread was to consider multiple parallel
paths and to pick whichever ends up being the lowest overall cost. The
flip-side to that is increased planning time. Perhaps we can come up
with an efficient way of working out where
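For illustration, trying several worker counts and letting the planner keep the cheapest could be sketched as below; add_path() is the planner's real entry point, while create_parallelscan_path() is a hypothetical constructor standing in for whatever the patch provides:

#include "postgres.h"
#include "optimizer/pathnode.h"

/* Hypothetical constructor for a parallel seq scan path. */
extern Path *create_parallelscan_path(PlannerInfo *root, RelOptInfo *rel,
                                      int nworkers);

/* Sketch: generate one candidate path per worker count and let
 * add_path() keep whichever survives on cost. */
static void
consider_parallel_paths(PlannerInfo *root, RelOptInfo *rel, int max_degree)
{
    int         nworkers;

    for (nworkers = 1; nworkers <= max_degree; nworkers *= 2)
        add_path(rel, (Path *) create_parallelscan_path(root, rel, nworkers));
}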
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby jim.na...@bluetreble.com wrote:
I agree, but we should try and warn the user if they set
parallel_seqscan_degree close to max_worker_processes, or at least give
some indication of what's going
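A sketch of what such a warning could look like at plan time; this is illustrative only, with ereport() and max_worker_processes being real PostgreSQL facilities and the surrounding logic assumed:

/*
 * Illustrative plan-time warning, not the patch's actual code:
 * parallel_seqscan_degree is the patch's GUC; max_worker_processes
 * and ereport() already exist.
 */
if (parallel_seqscan_degree >= max_worker_processes)
    ereport(WARNING,
            (errmsg("parallel_seqscan_degree (%d) is not less than max_worker_processes (%d)",
                    parallel_seqscan_degree, max_worker_processes),
             errhint("Not all requested workers may be available at execution time.")));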
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Fri, Dec 19, 2014 at 7:57 PM, Stephen Frost sfr...@snowman.net wrote:
There's certainly documentation available from the other RDBMS' which
already support parallel query, as one source. Other academic papers
exist (and once you've
On 01/09/2015 08:01 PM, Stephen Frost wrote:
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby jim.na...@bluetreble.com wrote:
I agree, but we should try and warn the user if they set
parallel_seqscan_degree close to max_worker_processes, or at
* Stefan Kaltenbrunner (ste...@kaltenbrunner.cc) wrote:
On 01/09/2015 08:01 PM, Stephen Frost wrote:
Now, for debugging purposes, I could see such a parameter being
available but it should default to 'off/never-fail'.
not sure what it really would be useful for - if I execute a query I
On Fri, Jan 9, 2015 at 1:02 AM, Jim Nasby jim.na...@bluetreble.com wrote:
On 1/5/15, 9:21 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
I think it's right to view this in the same way we view work_mem. We
plan on the assumption that an amount of memory equal to
On Fri, Dec 19, 2014 at 7:57 PM, Stephen Frost sfr...@snowman.net wrote:
There's certainly documentation available from the other RDBMS' which
already support parallel query, as one source. Other academic papers
exist (and once you've linked into one, the references and prior work
helps
On Mon, Jan 5, 2015 at 8:31 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 2, 2015 at 5:36 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Thu, Jan 1, 2015 at 11:29 PM, Robert Haas robertmh...@gmail.com
wrote:
On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
On Thu, Jan 8, 2015 at 5:12 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Jan 5, 2015 at 8:31 PM, Robert Haas robertmh...@gmail.com wrote:
Sorry for the incomplete mail sent prior to this; I just hit the send button
by mistake.
4. Sending ReadyForQuery() after completely sending the
On 1/5/15, 9:21 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
I think it's right to view this in the same way we view work_mem. We
plan on the assumption that an amount of memory equal to work_mem will
be available at execution time, without actually reserving it.
* Jim Nasby (jim.na...@bluetreble.com) wrote:
On 1/5/15, 9:21 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
I think it's right to view this in the same way we view work_mem. We
plan on the assumption that an amount of memory equal to work_mem will
be available at
On Fri, Jan 2, 2015 at 5:36 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Jan 1, 2015 at 11:29 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
Can we check the number of free bgworker slots to set
* Robert Haas (robertmh...@gmail.com) wrote:
I think it's right to view this in the same way we view work_mem. We
plan on the assumption that an amount of memory equal to work_mem will
be available at execution time, without actually reserving it.
Agreed. This seems like a good approach for
On 1 January 2015 at 17:59, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
Can we check the number of free bgworker slots to set the max workers?
The real solution here is that this patch can't throw an error
On 2 January 2015 at 11:13, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Jan 2, 2015 at 4:09 PM, Thom Brown t...@linux.com wrote:
On 1 January 2015 at 10:34, Amit Kapila amit.kapil...@gmail.com wrote:
Running it again, I get the same issue. This is with
parallel_seqscan_degree
On Fri, Jan 2, 2015 at 4:09 PM, Thom Brown t...@linux.com wrote:
On 1 January 2015 at 10:34, Amit Kapila amit.kapil...@gmail.com wrote:
Running it again, I get the same issue. This is with
parallel_seqscan_degree set to 8, and the crash occurs with 4 and 2 too.
This doesn't happen if I
On Wed, Dec 31, 2014 at 9:46 PM, Thom Brown t...@linux.com wrote:
Another issue (FYI, pgbench2 initialised with: pgbench -i -s 100 -F 10
pgbench2):
➤ psql://thom@[local]:5488/pgbench2
# explain (analyse, buffers, verbose) select distinct bid from
pgbench_accounts;
server closed the
On Thu, Jan 1, 2015 at 12:00 PM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
Can we check the number of free bgworker slots to set the max workers?
The real solution here is that this patch can't throw an error if it's
unable to obtain the desired number of background workers. It
I think one thing we could do to minimize the chance of such an
error is to set the number of parallel workers used for the plan equal
to max_worker_processes if parallel_seqscan_degree is greater
than max_worker_processes. Even if we do this, such an
error can still occur if the user has registered
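In code, that clamp amounts to a one-liner; a sketch, with Min() being the real macro from c.h and the variable names illustrative:

/* Sketch of the proposed clamp: never plan for more workers than the
 * cluster could possibly provide. */
int     planned_workers = Min(parallel_seqscan_degree, max_worker_processes);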
On 18 December 2014 at 16:03, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost
On 31 December 2014 at 14:20, Thom Brown t...@linux.com wrote:
On 18 December 2014 at 16:03, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Dec 18, 2014 at 9:22 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Mon, Dec 8, 2014 at 10:40 AM, Amit Kapila amit.kapil...@gmail.com
On Wed, Dec 31, 2014 at 7:50 PM, Thom Brown t...@linux.com wrote:
When attempting to recreate the plan in your example, I get an error:
➤ psql://thom@[local]:5488/pgbench
# create table t1(c1 int, c2 char(500)) with (fillfactor=10);
CREATE TABLE
Time: 13.653 ms
➤
On 12/21/14, 12:42 AM, Amit Kapila wrote:
On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost sfr...@snowman.net wrote:
a. Instead of passing the value array, just pass the tuple id, but retain the
buffer pin till the master backend reads the tuple based on the tuple id.
This has side
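A hypothetical sketch of option (a), just to make the bookkeeping concrete; none of this is the patch's actual code:

#include "postgres.h"
#include "storage/buf.h"
#include "storage/itemptr.h"

/* Hypothetical queue entry for option (a): the worker sends only the
 * tuple's TID and keeps the buffer pinned until the master has read
 * the tuple and acknowledged it, at which point the worker calls
 * ReleaseBuffer(). */
typedef struct QueuedTupleRef
{
    ItemPointerData tid;        /* location of the tuple */
    Buffer          buf;        /* pinned by the worker until the ack */
} QueuedTupleRef;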
On Mon, Dec 22, 2014 at 7:34 AM, Jim Nasby jim.na...@bluetreble.com wrote:
On 12/21/14, 12:42 AM, Amit Kapila wrote:
On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost sfr...@snowman.net wrote:
a. Instead of passing the value array, just pass the tuple id, but retain the
buffer
On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost sfr...@snowman.net wrote:
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
1. Parallel workers help a lot when there is an expensive qualification
to be evaluated; the more expensive the qualification, the better the
results.
I'd
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
1. Parallel workers help a lot when there is an expensive qualification
to be evaluated; the more expensive the qualification, the better the
results.
I'd certainly hope so. ;)
2. It works well for low selectivity quals and as the
On Fri, Dec 19, 2014 at 7:51 AM, Stephen Frost sfr...@snowman.net wrote:
3. After a certain point, having more workers won't
help and will rather have a negative impact; refer to Test-4.
Yes, I see that too and it's also interesting. Have you been able to
identify why? What is the
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
On Fri, Dec 19, 2014 at 7:51 AM, Stephen Frost sfr...@snowman.net wrote:
3. After a certain point, having more workers won't
help and will rather have a negative impact; refer to Test-4.
Yes, I see that too and it's also
On 12/19/14 3:27 PM, Stephen Frost wrote:
We'd have to coach our users to
constantly be tweaking the enable_parallel_query (or whatever) option
for the queries where it helps and turning it off for others. I'm not
so excited about that.
I'd be perfectly (that means 100%) happy if it just
* Marko Tiikkaja (ma...@joh.to) wrote:
On 12/19/14 3:27 PM, Stephen Frost wrote:
We'd have to coach our users to
constantly be tweaking the enable_parallel_query (or whatever) option
for the queries where it helps and turning it off for others. I'm not
so excited about that.
I'd be
On Fri, Dec 19, 2014 at 9:39 AM, Stephen Frost sfr...@snowman.net wrote:
Perhaps we should reconsider our general position on hints, then, and
add them so users can define the plan to be used. For my part, I don't
see this as all that much different.
Consider if we were just adding HashJoin
On 12/19/2014 04:39 PM, Stephen Frost wrote:
* Marko Tiikkaja (ma...@joh.to) wrote:
On 12/19/14 3:27 PM, Stephen Frost wrote:
We'd have to coach our users to
constantly be tweaking the enable_parallel_query (or whatever) option
for the queries where it helps and turning it off for others. I'm
On 20/12/14 03:54, Heikki Linnakangas wrote:
On 12/19/2014 04:39 PM, Stephen Frost wrote:
* Marko Tiikkaja (ma...@joh.to) wrote:
On 12/19/14 3:27 PM, Stephen Frost wrote:
We'd have to coach our users to
constantly be tweaking the enable_parallel_query (or whatever) option
for the queries
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
On Fri, Dec 19, 2014 at 9:39 AM, Stephen Frost sfr...@snowman.net wrote:
Perhaps we should reconsider our general position on hints, then, and
add them so users can define the plan to be used. For my part, I don't
see this as all that
* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
On 12/19/2014 04:39 PM, Stephen Frost wrote:
* Marko Tiikkaja (ma...@joh.to) wrote:
I'd be perfectly (that means 100%) happy if it just defaulted to
off, but I could turn it up to 11 whenever I needed it. I don't
believe I'm the only
On Tue, Dec 9, 2014 at 12:46 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I agree with this. For a first version, I think it's OK to start a
worker up for a particular sequential scan and have it help with that
sequential scan until the scan is completed, and then exit. It should
not, as
On Sat, Dec 6, 2014 at 12:13 AM, David Rowley dgrowle...@gmail.com wrote:
It's bare-bones core support for allowing one aggregate state to be merged
with another. I would imagine that if a query such
as:
SELECT MAX(value) FROM bigtable;
was run, then a series of
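For MAX the merge step itself would be trivial; a sketch of what a combine function could look like, with the name and signature purely illustrative:

#include "postgres.h"

/* Illustrative combine step for MAX over int4: each worker computes
 * the max of its share of the table, and the master folds the partial
 * results together with this. */
static int32
int4_max_combine(int32 state1, int32 state2)
{
    return (state1 > state2) ? state1 : state2;
}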
On Sat, Dec 6, 2014 at 1:50 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I think we have access to this information in the planner (RelOptInfo->pages);
if we want, we can use that to eliminate small relations from
parallelism, but the question is how big a relation we want to consider
for
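Presumably the check would amount to something like the following in the planner; parallel_threshold_pages is a made-up knob, while rel->pages is the real RelOptInfo field:

/* Sketch: don't generate a parallel path at all for small relations;
 * parallel_threshold_pages is hypothetical. */
if (rel->pages < parallel_threshold_pages)
    return;                     /* worker startup cost won't pay off */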
On Sat, Dec 6, 2014 at 7:07 AM, Stephen Frost sfr...@snowman.net wrote:
For my 2c, I'd like to see it support exactly what the SeqScan node
supports and then also what Foreign Scan supports. That would mean we'd
then be able to push filtering down to the workers which would be great.
Even
On Mon, Dec 8, 2014 at 11:21 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Dec 6, 2014 at 1:50 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
I think we have access to this information in the planner
(RelOptInfo->pages); if we want, we can use that to eliminate small
relations from
On Mon, Dec 8, 2014 at 11:27 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Dec 6, 2014 at 7:07 AM, Stephen Frost sfr...@snowman.net wrote:
For my 2c, I'd like to see it support exactly what the SeqScan node
supports and then also what Foreign Scan supports. That would mean we'd
On Sat, Dec 6, 2014 at 5:37 PM, Stephen Frost sfr...@snowman.net wrote:
* Amit Kapila (amit.kapil...@gmail.com) wrote:
1. As the patch currently stands, it just shares the relevant
data (like relid, target list, block range each worker should
perform on, etc.) with the worker, and then the worker
* Amit Kapila (amit.kapil...@gmail.com) wrote:
1. As the patch currently stands, it just shares the relevant
data (like relid, target list, block range each worker should
perform on, etc.) with the worker, and then the worker receives that
data and forms the planned statement which it will execute and
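Concretely, that shared state might be packaged roughly as below; the field names are illustrative, not the patch's actual layout:

#include "postgres.h"
#include "storage/block.h"

/* Hypothetical layout of what the master shares with each worker:
 * enough information to rebuild and execute its slice of the scan. */
typedef struct ParallelScanShared
{
    Oid         relid;          /* relation to scan */
    BlockNumber start_block;    /* first block assigned to this worker */
    BlockNumber end_block;      /* one past the last assigned block */
    /* a serialized target list and quals would follow here */
} ParallelScanShared;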
José,
* José Luis Tallón (jltal...@adv-solutions.net) wrote:
On 12/04/2014 07:35 AM, Amit Kapila wrote:
The number of worker backends that can be used for
parallel seq scan can be configured by using a new GUC
parallel_seqscan_degree, the default value of which is zero
which means parallel
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
postgres=# explain select c1 from t1;
QUERY PLAN
----------------------------------------------------
Seq Scan on t1 (cost=0.00..101.00 rows=100 width=4)
(1 row)
postgres=# set parallel_seqscan_degree=4;
SET
On 12/5/14, 9:08 AM, José Luis Tallón wrote:
Moreover, when load goes up, the relative cost of parallel working should go
up as well.
Something like:
p = number of cores
l = 1min-load
additional_cost = tuple estimate * cpu_tuple_cost * (l+1)/(p-1)
(for p > 1, of course)
...
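Plugging in numbers, assuming 8 cores, a 1-minute load of 3, an estimate of 100,000 tuples, and the default cpu_tuple_cost of 0.01:

additional_cost = 100000 * 0.01 * (3 + 1) / (8 - 1)
                = 1000 * 4 / 7
                ≈ 571.4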
On Fri, Dec 5, 2014 at 8:38 PM, José Luis Tallón jltal...@adv-solutions.net
wrote:
On 12/04/2014 07:35 AM, Amit Kapila wrote:
[snip]
The number of worker backends that can be used for
parallel seq scan can be configured by using a new GUC
parallel_seqscan_degree, the default value of which
On 4 December 2014 at 19:35, Amit Kapila amit.kapil...@gmail.com wrote:
Attached patch is just to facilitate the discussion about the
parallel seq scan and maybe some other dependent tasks like
sharing of various states like combocid, snapshot with parallel
workers. It is by no means ready
On Fri, Dec 5, 2014 at 8:46 PM, Stephen Frost sfr...@snowman.net wrote:
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
postgres=# explain select c1 from t1;
QUERY PLAN
----------------------------------------------------
Seq Scan on t1 (cost=0.00..101.00
On Fri, Dec 5, 2014 at 8:43 PM, Stephen Frost sfr...@snowman.net wrote:
José,
* José Luis Tallón (jltal...@adv-solutions.net) wrote:
On 12/04/2014 07:35 AM, Amit Kapila wrote:
The number of worker backends that can be used for
parallel seq scan can be configured by using a new GUC
On Sat, Dec 6, 2014 at 12:27 AM, Jim Nasby jim.na...@bluetreble.com wrote:
On 12/5/14, 9:08 AM, José Luis Tallón wrote:
Moreover, when load goes up, the relative cost of parallel working
should go up as well.
Something like:
p = number of cores
l = 1min-load
On Sat, Dec 6, 2014 at 10:43 AM, David Rowley dgrowle...@gmail.com wrote:
On 4 December 2014 at 19:35, Amit Kapila amit.kapil...@gmail.com wrote:
Attached patch is just to facilitate the discussion about the
parallel seq scan and maybe some other dependent tasks like
sharing of various