statements. This patch would be helpful only if we consider the popular
dump-restore method mentioned in the thread
(https://www.postgresql.org/message-id/E1VYMqi-0001P4-P4%40wrigleys.postgresql.org),
i.e. pg_dumpall -g followed by individual pg_dump runs.
Regards,
Rafia Sabih
EnterpriseDB
On Tue, Oct 17, 2017 at 3:22 AM, Andres Freund wrote:
> Hi Rafia,
>
> On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:
>> head:
>> explain analyse select * from t where i < 3000;
>> QUERY PLAN
>
adding any \
> chars at the end of the line would also mean cut-and-paste of the RHS
> content would work.
>
> Thanks for the feedback!
>
> Christoph
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
more
> the scanner because on each token the next state (continued or not) must be
> decided.
>
> --
> Fabien.
>
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
On Wed, Nov 9, 2016 at 3:28 PM, Fabien COELHO wrote:
>
> +1. My vote is for backslash continuations.
>>
>
> I'm fine with that!
>
> --
> Fabien.
>
Looks good to me also.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
iled
> reason refer ExecReScanIndexScan.
>
Done.
Please find the attached patch for the revised version.
Just an FYI: in my recent tests at TPC-H scale factor 300, Q16's execution
time improved from 830 seconds to 730 seconds with this patch when used
with the parallel merge-join patch [1].
On Sun, Feb 19, 2017 at 10:11 PM, Robert Haas wrote:
> On Thu, Feb 16, 2017 at 6:41 PM, Kuntal Ghosh
> wrote:
> > On Thu, Feb 16, 2017 at 5:47 PM, Rafia Sabih
> > wrote:
> >> Other than that I updated some comments and other cleanup things. Please
> >> fin
s. So, either you should change query_data as const
> char*, or as Robert suggested, you can directly use
> estate->es_sourceText.
>
Done.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pass_queryText_to_workers_v7.patch
Description: Binary data
parallel plans
for the queries in these functions after this patch. This might be
helpful in understanding the parallelism restrictions this patch
relaxes for PL functions.
Thanks to my colleagues Amit Kapila and Dilip Kumar for discussions in
this regard.
--
Regards,
Rafia Sabih
EnterpriseDB: http
On Wed, Feb 22, 2017 at 12:25 PM, Robert Haas wrote:
> Looks fine to me. Committed. I did move es_queryText to what I think
> is a more appropriate location in the structure definition.
>
> Thanks.
>
Many thanks to Robert for committing and to Kuntal and Amit for reviewing.
--
ting the patch.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
> CURSOR_OPT_PARALLEL_OK);
> + else
> + exec_res = SPI_execute(querystr, estate->readonly_func, 0);
> + }
>
> The last parameter of SPI_execute is the tuple count, not cursorOption;
> you need to fix this. Also, this line is crossing the 80-character boundary.
>
Oops, corrected.
>
rows=1
loops=1300126)
Index Cond: (s_suppkey = lineitem.l_suppkey)
Planning time: 2.440 ms
Execution time: 6057329.179 ms
I hope there is some way out for such cases that retains the
benefits of the commit without hurting other cases (like this one)
this badly.
Thoughts, comments?
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
d at required places.
Next is the patch for allowing execution of such queries in parallel mode,
which involves infrastructural changes along the lines mentioned upthread
(pl_parallel_exec_support_v1.patch).
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pl_parallel_exec_support_v1.patch
Description:
completion.
I find these two things contradictory. So, was this point
missed, or is there some deeper reasoning behind it?
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
= 0;
> > + SpinLockRelease(&barrier->mutex);
> > +
> > + if (release)
> > + ConditionVariableBroadcast(&
> barrier->condition_variable);
> > +
> > + return last;
> > +}
> >
> > Doesn't this, a
not.
[1]
https://www.postgresql.org/message-id/ca+tgmobxehvhbjtwdupzm9bvslitj-kshxqj2um5gpdze9f...@mail.gmail.com
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pl_parallel_opt_support_v2.patch
Description: Binary data
ipt anyway? Also, instead of so many different files for errors, why don't
you combine them into one?
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
ster &&
dynamic_shared_memory_type != DSM_IMPL_NONE &&
parse->commandType == CMD_SELECT &&
!parse->hasModifyingCTE &&
max_parallel_workers_per_gather > 0 &&
!IsParallelWorker() &&
!IsolationIsSerializable())
>
> On Fri, Mar 10, 2017 at 5:38 PM, Rafia Sabih
&
ding so many hashes
comes to be more costly than having an append and then a join. I thought
it might be helpful to consider this case in the design of the
algorithm. Please feel free to point out if I missed something.
Test details:
commit: b4ff8609dbad541d287b332846442b076a25a
d be good if we can keep an eye on this so that it doesn't exceed the
computational bounds for a really large number of tables.
Please find the attached .out file to check the output I witnessed, and
let me know if any more information is required.
Schema and data were similar to the preciou
3] LOG: duration: 2584.282 ms plan:
Query Text: select not_parallel();
Result (cost=0.00..0.26 rows=1 width=8) (actual
time=2144.315..2144.316 rows=1 loops=1)
not_parallel
--
0
(1 row)
Hence, it appears lazyEval is the main reason behind it, and it should
definitely be fixed i
>>> problem if the plan is a parallel plan.
>>
>> And you also need to test this case what Robert have mentioned up thread.
>
> +1
Checked; no, ExecutorRun is called only once in this case, and
execute_once is true here.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterpr
eqo_eval (root=0x2df7550, tour=0x2ff2430,
num_gene=6) at geqo_eval.c:102
#11 0x0074288a in random_init_pool (root=0x2df7550,
pool=0x2ff23d0) at geqo_pool.c:109
#12 0x007422a6 in geqo (root=0x2df7550, number_of_rels=6,
initial_rels=0x2ff22d0) at geqo_main.c:114
#13 0x00747f19 in make_rel_from_joinlist (root=0x2df7550,
joinlist=0x2dce940) at allpaths.c:2333
#14 0x00744e7e in make_one_rel (root=0x2df7550,
joinlist=0x2dce940) at allpaths.c:182
#15 0x00772df9 in query_planner (root=0x2df7550,
tlist=0x2dec2c0, qp_callback=0x777ce1 ,
qp_extra=0x7fffe6b4e700)
at planmain.c:254
Please let me know if any more information is required on this.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
hus segfaulting in functions like sort_inner_and_outer() which
> use those.
>
> Here's patch fixing both the issues. Please let me know if it fixes
> the issues you are seeing.
I tested the applied patch; it fixes the reported issue.
--
Regards,
Rafia Sabih
EnterpriseDB:
es->lazyEval=true(And so far we have
> parallelism only for select). But here we are calling the parameter to
> ExecutorRun as execute_once, so !fcache->returnsSet || !es->lazyEval
> is the correct one and future-proof.
>
Agree, done.
--
Regards,
Rafia Sabih
Ente
for error why don't you combine
>> it into one.
>
>
> Because a pgbench script stops on the first error, and I wanted to test
> what happens with several kinds of errors.
>
if (my_command->argc > 2)
+ syntax_error(source, lineno, my_command->line, my_command->
function, and if we happen to
> pass false, it's not going to matter, because exec_run_select() is
> going to find the plan already initialized.
>
True, fixed.
The attached patch is to be applied over [1].
[1]
https://www.postgresql.org/message-id/CA%2BTgmoZ_Zu
ut and checked the regression tests with force_parallel_mode = regress,
and all test cases are passing now.
This makes me wonder whether we should re-check all the system-defined
functions to confirm they are actually parallel-safe.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
001033a8a8 in BitmapHeapNext (node=0x1001b5187a8) at
nodeBitmapHeapscan.c:143
#13 0x1032a094 in ExecScanFetch (node=0x1001b5187a8,
accessMtd=0x1033a6c8 , recheckMtd=0x1033bab8
) at execScan.c:95
#14 0x1032a194 in ExecScan (node=0x1001b5187a8,
accessMtd=0x1033a6c8 , recheck
find attached a v8 which hopefully fixes these two issues.
Looks good to me, marking as ready for committer.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
On Mon, Mar 27, 2017 at 5:54 PM, Robert Haas wrote:
>
> If it's just that they are relying on unsynchronized global variables,
> then it's sufficient to mark them parallel-restricted ('r'). Do we
> really need to go all the way to parallel-unsafe ('u
this direction.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
system_defined_fn_update_v3.patch
Description: Binary data
> It's a bit hard for me to piece through these plans, the
> formatting kind of got messed up - things are wrapped. Could you
> possibly attach the plans as attachments?
>
Sure, please find attached file for the plans before and after commit.
--
Regards,
Rafia Sabih
EnterpriseDB: http:/
On Tue, Mar 28, 2017 at 11:11 AM, Rafia Sabih
wrote:
> On Mon, Mar 27, 2017 at 12:20 PM, Thomas Munro
> wrote:
>>
>> On Sun, Mar 26, 2017 at 3:56 PM, Thomas Munro
>> wrote:
>> > But... what you said above must be a problem for Windows. I believe
>> >
aQpf%2BKS76%2Bsu7-sG_NQZGRPJkQg%40mail.gmail.com#cafitn-vxhvvi-rmjfoxkgznaqpf+ks76+su7-sg_nqzgrpj...@mail.gmail.com
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
Join Filter:
((partsupp.ps_availqty)::numeric > ((0.5 * sum(lineitem.l_quantity))))
So, it looks like in the problematic area, it is not improving much.
Please
>
> should be reproducible. I'd suggest additionally adding one test that
> throws the EXPLAIN output away, but actually enables parallelism.
>
> Greetings,
>
> Andres Freund
>
> [1]
> https://coverage.postgresql.org/src/backend/executor/execParallel.c.gcov.
the nestloop-with-inner-index we already offer at the
> leaf level today.
>
> Regards,
> Jeff Davis
>
Looks like an interesting idea; however, in an attempt to test this patch I
found the following error when compiling:
selfuncs.c: In function ‘mergejoinscansel’:
selfuncs.c:2901:12: error: ‘op_strategy’ undeclared (first use in this
function)
&op_strategy,
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
latest head (commit
08aed6604de2e6a9f4d499818d7c641cbf5eb9f7).
It might need rebasing.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
like it needs rebasing.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
3449457.34..3449457.34 rows=119994934 width=8) (actual
time=180858.448..180858.448 rows=119994608 loops=3)
Buckets: 33554432
Batches: 8 Memory Usage: 847911kB
Overall, this doesn't look like a problem of partition-wise join p
On Thu, Jul 20, 2017 at 2:44 PM, Ashutosh Bapat
wrote:
>
> On Thu, Jul 20, 2017 at 11:46 AM, Amit Langote
> wrote:
> > On 2017/07/20 15:05, Ashutosh Bapat wrote:
> >> On Wed, Jul 19, 2017 at 9:54 AM, Rafia Sabih
> >> wrote:
> >>>
> >>> P
On Wed, Jul 26, 2017 at 10:58 AM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:
> On Tue, Jul 25, 2017 at 11:01 AM, Rafia Sabih
> wrote:
>
> > Query plans for the above mentioned queries is attached.
> >
>
> Can you please share plans for all the queri
On Wed, Jul 26, 2017 at 11:06 AM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:
> On Wed, Jul 26, 2017 at 11:00 AM, Rafia Sabih
> wrote:
> >
> >
> > On Wed, Jul 26, 2017 at 10:58 AM, Ashutosh Bapat
> > wrote:
> >>
> >> On
On Wed, Jul 26, 2017 at 10:38 AM, Ashutosh Bapat
wrote:
>
> On Tue, Jul 25, 2017 at 9:39 PM, Dilip Kumar wrote:
> > On Tue, Jul 25, 2017 at 8:59 PM, Robert Haas wrote:
> >> On Tue, Jul 25, 2017 at 1:31 AM, Rafia Sabih
> >> wrote:
> >>> - other
e then. A different partitioning
> scheme may be required there.
>
Good point, will look into this direction as well.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
"speedups for hash joins". More detail
> than that simply isn't useful to end users; and as a rule, our release
> notes are too long anyway.
>
> regards, tom lane
>
>
Just wondering if the mention of commit
0414b26bac09379a4cbf1fbd847d1cee2293c5e
C-H queries; in the meantime I would appreciate some feedback on the design,
etc.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
faster_gather.patch
Description: Binary data
k about in terms of
implementation and the cases where this can help or regress; I will be
glad to know the opinion of more people on this.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
Let's see if anybody else shares my gut feeling. :)
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
mance significantly. One way to do
this is to expose this parameter as another GUC, just like
min_parallel_table_scan_size, etc.
The attached .txt file gives the plans at head and with this patch;
additionally, a patch is attached that sets PARALLEL_TUPLE_QUEUE_SIZE
to 6553600.
Thoughts?
--
Regards,
Rafia
less
experimentation to ascertain something, I'll continue the experiments
and would be grateful to have more suggestions on that.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
sn't suit at
some places, particularly in pgfdw_subxact_callback; I'm not sure whether
we should change the comment or the variable name.
/* Disarm abort_cleanup_incomplete if it all worked. */
+ entry->changing_xact_state = abort_cleanup_failure;
Also, should we perhaps add a test case for this?
--
Regards,
Rafia Sa
>> + * If DEFAULT is the only partiton for the table then this returns TRUE.
>> + *
>>
> Updated.
>
> [1] http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg315573.html
>
Hi Beena,
I had a look at the patch from the angle of aesthetics and there are a
few cos
s the version without those changes.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
cosmetic_range_default_partition_v2.patch
Description: Binary data
with, say, shared_buffers =
>> 8GB?
>
> I have tried same on my local machine with ssd as a storage.
>
> settings: shared_buffers = 8GB, loaded data with pg_bench scale_factor=1000.
>
> Total blocks got dumped
> autoprewarm_dump_now
> --
> 1048576
>
> 5 different load time based logs
>
> 1.
> 2017-06-04 11:30:26.460 IST [116253] LOG: autoprewarm has started
> 2017-06-04 11:30:43.443 IST [116253] LOG: autoprewarm load task ended
> -- 17 secs
>
> 2
> 2017-06-04 11:31:13.565 IST [116291] LOG: autoprewarm has started
> 2017-06-04 11:31:30.317 IST [116291] LOG: autoprewarm load task ended
> -- 17 secs
>
> 3.
> 2017-06-04 11:32:12.995 IST [116329] LOG: autoprewarm has started
> 2017-06-04 11:32:29.982 IST [116329] LOG: autoprewarm load task ended
> -- 17 secs
>
> 4.
> 2017-06-04 11:32:58.974 IST [116361] LOG: autoprewarm has started
> 2017-06-04 11:33:15.017 IST [116361] LOG: autoprewarm load task ended
> -- 17secs
>
> 5.
> 2017-06-04 12:15:49.772 IST [117936] LOG: autoprewarm has started
> 2017-06-04 12:16:11.012 IST [117936] LOG: autoprewarm load task ended
> -- 22 secs.
>
> So mostly from 17 to 22 secs.
>
> But I think I need to do tests on a larger set of configuration on
> different storage types. I shall do same and upload later. I have also
> uploaded latest performance test results (on my local machine ssd
> drive)
> configuration: shared_buffer = 8GB,
> test setting: scale_factor=300 (data fits to shared_buffers) pgbench clients
> =1
>
> TEST
> PGBENCH_RUN="./pgbench --no-vacuum --protocol=prepared --time=5 -j 1
> -c 1 --select-only postgres"
> START_TIME=$SECONDS; echo TIME, TPS; while true; do TPS=$($PGBENCH_RUN
> | grep excluding | cut -d ' ' -f 3); TIME=$((SECONDS-START_TIME));
> echo $TIME, $TPS; done
>
>
I had a look at the patch from a stylistic/formatting point of view;
please find the attached patch for the suggested modifications.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
cosmetic_autoprewarm.patch
Description: Binary data
tself seems out-of-place. Also, I'd
> change "at max" to "at most" and maybe reword the sentence a little.
> There's a lot of little things like this which I have tended be quite
> strict about changing before commit; I occasionally wonder whether
> it'
Index Cond: (o_orderkey = lineitem.l_orderkey)
Planning time: 3.498 ms
Execution time: 19661.054 ms
(15 rows)
This suggests that with such an idea, the selectivity range in which
parallelism is used can be extended, improving the performance of the
queries.
Credits:
Would like
ly I want to point out that I also applied patch [1],
which I forgot to mention before.
[1]
https://www.postgresql.org/message-id/CAEepm%3D3%3DNHHko3oOzpik%2BggLy17AO%2Bpx3rGYrg3x_x05%2BBr9-A%40mail.gmail.com
> On 16 August 2017 at 18:34, Robert Haas wrote:
>> Thanks for the benchmarking results!
resh database the selectivity estimates and plans are as
reported by you, and those with the patched version I posted seem to be the
correct ones. I'll see if I can check the performance of these queries once
again to verify this.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
ave a look at this patch and enlighten me
with your suggestions. :-)
[1] -
https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
h Dilip that having a similar mechanism for 'insert into
select ...' statements would add more value to the patch, but even then
this looks like a good idea to extend parallelism to at least a few of
the write operations.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
--
On Wed, Sep 13, 2017 at 2:29 PM, Haribabu Kommi
wrote:
>
>
> On Wed, Sep 13, 2017 at 4:17 PM, Rafia Sabih
> wrote:
>>
>> On Fri, Sep 1, 2017 at 12:31 PM, Haribabu Kommi
>> wrote:
>> >
>> > Hi All,
>> >
>> > Attached a reb
On Tue, Sep 19, 2017 at 3:50 PM, Alvaro Herrera wrote:
> Rafia Sabih wrote:
>
>> On completing the benchmark for all queries for the above mentioned
>> setup, following performance improvement can be seen,
>> Query | Patch | Head
>> 3 | 1455 | 1631
>> 4 |
On Sun, Sep 17, 2017 at 9:10 PM, Dilip Kumar wrote:
> On Wed, Sep 6, 2017 at 4:14 PM, Rafia Sabih
> wrote:
>
>> I worked on this idea of using local queue as a temporary buffer to
>> write the tuples when master is busy and shared queue is full, and it
>> gives qu
On Thu, Sep 21, 2017 at 10:34 PM, Dilip Kumar wrote:
> On Thu, Sep 21, 2017 at 4:50 PM, Rafia Sabih
> wrote:
>> On Sun, Sep 17, 2017 at 9:10 PM, Dilip Kumar wrote:
>>> On Wed, Sep 6, 2017 at 4:14 PM, Rafia Sabih
>>> wrote:
>>>
>>
>> Pl
omething like: if a backslash is followed by some spaces and then
end-of-line, the scanner should ignore those spaces and read the next line,
at least with this new meaning of backslash. Otherwise, it should be
mentioned in the docs that a backslash must not be followed by a space.
--
Regards,
Rafia Sabih
where
l_shipdate <= date '1998-12-01' - interval '119' day
group by
l_returnflag,
l_linestatus
order by
l_returnflag,
l_linestatus
LIMIT 1;
Inputs of all sorts are encouraged.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pass_queryText_to_workers_
Hi Thomas,
I was trying to analyse the performance of TPC-H queries with your patch
and came across the following results:
Q9 and Q21 were crashing; both had the following backtrace in the core dump
(I thought it might be helpful):
#0 0x10757da4 in pfree (pointer=0x3fff78d11000) at mcxt.c:1012
#1 0x1032c574 in ExecHashIncreaseNumBatches
(hashtable=0x1003af6da60) at nodeHash.c:1124
#2 0x1032d518 in ExecHashTableInsert (hashtable=0x1003af6da60,
slot=0x1003af695c0, hashvalue=2904801109, preload=1 '\001') at
nodeHash.c:1700
#3 0x10330fd4 in ExecHashJoinPreloadNextBatch
(hjstate=0x1003af39118) at nodeHashjoin.c:886
#4 0x103301fc in ExecHashJoin (node=0x1003af39118) at
nodeHashjoin.c:376
#5 0x10308644 in ExecProcNode (node=0x1003af39118) at
execProcnode.c:490
#6 0x1031f530 in fetch_input_tuple (aggstate=0x1003af38910) at
nodeAgg.c:587
#7 0x10322b50 in agg_fill_hash_table (aggstate=0x1003af38910) at
nodeAgg.c:2304
#8 0x1032239c in ExecAgg (node=0x1003af38910) at nodeAgg.c:1942
#9 0x10308694 in ExecProcNode (node=0x1003af38910) at
execProcnode.c:509
#10 0x10302a1c in ExecutePlan (estate=0x1003af37fa0,
planstate=0x1003af38910, use_parallel_mode=0 '\000', operation=CMD_SELECT,
sendTuples=1 '\001', numberTuples=0,
direction=ForwardScanDirection, dest=0x1003af19390) at execMain.c:1587
In case you want to know, I was using TPC-H at scale factor 20. Please
let me know if you want any more information on this.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
0039a96808,
hashvalue=0x3fffd4a5639c) at nodeHashjoin.c:936
I was using TPC-H with scale factor 20, please let me know if there is
anything more you require in this regard.
[1]
https://www.postgresql.org/message-id/CAEepm%3D1vGcv6LBrxZeqPb_rPxfraidWAF_8_4z2ZMQ%2B7DOjj9w%40mail.gmail.com
--
Regards,
Ra
using tenk1_unique1 on tenk1
> ! (5 rows)
>
> IIUC, the parallel operation being performed here is fine, as the
> parallel-restricted function occurs above the Gather node
> and just the expected output needs to be changed.
>
True, fixed it, please find the attached file for the latest
.
>
>
> Hmmm. This is not the behavior of backslash continuation in bash or python;
> I do not think it is desirable to have a different behavior.
>
Okay, seems sensible.
>> Otherwise, it should be mentioned in the docs that backslash should not be
>> followe
allelGetQueryDesc and
ParallelWorkerMain as before (in version 1 of patch).
Please let me know your feedback over the same.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pass_queryText_to_workers_v2.patch
Description: Binary data
ons operated when the file is downloaded
> and saved, because it is a text file?
>
I think this is delaying the patch unnecessarily. I have attached a
version; please see if you can apply it successfully, and then we can
safely proceed with that...
--
Regards,
Rafia Sabih
EnterpriseDB: h
mmitter,
I am getting whitespace errors in v3 of the patch, which I corrected in
v4; however, Fabien is of the opinion that v3 is clean and that the
whitespace errors are due to downloader or similar issues in my setup.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
On Fri, Jan 13, 2017 at 2:19 PM, Rafia Sabih
wrote:
> On Thu, Jan 12, 2017 at 5:39 PM, Rahila Syed wrote:
>> Hello,
>>
>> On applying the patch on latest master branch and running regression tests
>> following failure occurs.
>> I applied it on latest parallel i
On Thu, Feb 2, 2017 at 1:19 AM, Thomas Munro
wrote:
> On Thu, Feb 2, 2017 at 3:34 AM, Rafia Sabih
> wrote:
>> 9 | 62928.88 | 59077.909
>
> Thanks Rafia. At first glance this plan is using the Parallel Shared
> Hash in one place where it should pay off, that
nce attached is the patch
to remove this second comment.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
redundant_comment.patch
Description: Binary data
SET max_parallel_workers_per_gather=4;
> SELECT count(*) FROM t1;
>
> It is showing the following warning.
> WARNING: problem in alloc set ExecutorState: detected write past
> chunk end in block 0x14f5310, chunk 0x14f6c50
Fixed.
Thanks a lot Kuntal for the review, pleas
e
> can also be moved to execParallel.c.
>
> Agree and fixed.
> Another question: don't we need to set debug_query_string in the worker?
In the updated version I am setting it in ParallelQueryMain.
Please find the attached file for the revised version.
--
Regards,
Rafia Sabih
EnterpriseDB: htt
sure that the
>> nodeIndexOnlyScan.c changes match what was done there. In particular,
>> he's got this:
>>
>> if (reset_parallel_scan && node->iss_ScanDesc->parallel_scan)
>> index_parallelrescan(node->iss_ScanDesc);
>>
>&
On Thu, Feb 16, 2017 at 3:40 PM, Rafia Sabih
wrote:
>
> On Thu, Feb 16, 2017 at 1:26 PM, Rahila Syed
> wrote:
>
>> I reviewed the patch. Overall it looks fine to me.
>>
>> One comment,
>>
>> >- if (index->amcanparallel &&
,
> PARALLEL_KEY_QUERY_TEXT));
> Just one lookup is sufficient.
>
> Fixed.
Other than that I updated some comments and other cleanup things. Please
find the attached patch for the revised version.
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
pass_queryText_to_workers_
. Clearly, the
performance of queries improved significantly with this new operator, and
the changes required on top of the parallel index scan patches are small
when weighed against the performance improvement it offers.
Attached file:
--
1. parallel_index_only_v1.patch
This pat
to pile on.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
parallel_index_only_v2.patch
Description: Binary data
A rebased patch for parallel index-only scan, based on the latest version of
parallel index scan [1], is attached.
[1] https://www.postgresql.org/message-id/CAA4eK1LiNi7_Z1%2BPCV4y06o_v%3DZdZ1UThE%2BW9JhthX4B8uifnA%40mail.gmail.com
On Sat, Dec 24, 2016 at 7:55 PM, Rafia Sabih
wrote:
> Extrem
dy of TPC-H and TPC-DS queries, I am confident that this will
> be helpful in certain queries at higher scale factors.
>
I agree, as then we do not need to disable parallelism for particular
relations, as we currently do for the supplier relation in Q16 of TPC-H.
[1]
https://www.postgresql.org/message-id/CAA4eK1