ory if the planner gives a bad estimate.
Of course, it's true (as Tomas points out) that this issue already
exists today to some degree, and it's also true (as he also points
out) that 1GB is an arbitrary limit. But it's also true that we use
that same arbitrary 1GB limit in a lot of
On 09/09/2015 03:55 PM, Robert Haas wrote:
On Tue, Sep 8, 2015 at 5:02 PM, Tomas Vondra
wrote:
Also, I'm not sure what other places you have in mind (could you list
some examples?), but I'd bet we limit the allocation to 1GB because of the
palloc() limit and not because of fe
n suggest there's a possible gain. But perhaps partitioning
the checkpoints is not the best approach?
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-
's unable to drop just one of them.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 09/11/2015 06:55 PM, Robert Haas wrote:
On Wed, Sep 9, 2015 at 11:54 AM, Tomas Vondra
wrote:
Secondly, we limit the number of buckets to INT_MAX, so about 16GB (because
buckets are just pointers). No matter how awful an estimate you get (or how
insanely high you set work_mem) you can
On 09/11/2015 07:16 PM, Robert Haas wrote:
On Fri, Sep 11, 2015 at 1:12 PM, Tomas Vondra
wrote:
I'm arguing for fixing the existing bug, and then addressing the case of
over-estimation separately, with proper analysis.
Well, this is part of how we're looking at it differently. I
the plans easier to process by additional tools.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 14.1.2014 00:38, Tomas Vondra wrote:
> On 13.1.2014 18:07, Alexander Korotkov wrote:
>> On Sat, Jan 11, 2014 at 6:15 AM, Tomas Vondra <t...@fuzzy.cz> wrote:
>>
>> On 8.1.2014 22:58, Alexander Korotkov wrote:
>> > Thanks for reporting.
I
mentioned in the post about OOM errors), and index_table is an empty
table with a GIN index. And indeed it fails, but only if I run the block
in multiple sessions in parallel.
regards
Tomas
On 21.1.2014 22:21, Heikki Linnakangas wrote:
> On 01/21/2014 11:35 AM, Heikki Linnakangas wrote:
>> On 01/21/2014 04:02 AM, Tomas Vondra wrote:
>>> On 20.1.2014 19:30, Heikki Linnakangas wrote:
>>>>
>>>> Attached is a yet another version, with more b
Are there any obvious rules what queries will benefit most from this?
The queries generated by the tool I'm using for testing are mostly of
this form:
SELECT id FROM messages
WHERE body_tsvector @@ plainto_tsquery('english', 'word1 word2 ...')
ORDER BY ts_rank(...) DESC LIMIT :n;
with varying number of words and LIMIT values. During the testing today
I haven't noticed any obvious performance difference, but I haven't
spent much time on that.
regards
Tomas
itely. I suspect it's not perfectly stable, i.e, the
this query may work fine / another one will block. In that case try to
run this [http://www.fuzzy.cz/tmp/random-queries.sql] - it's a file with
1000 generated queries, at least one of them should block (that's how I
discovered the issue).
regards
Tomas
maybe not.
I've repeated the test several times, checked all I could think of, but
I've found nothing so far. The flags were exactly the same in both cases
(just --enable-debug and nothing else).
regards
Tomas
e patches was below 90% of the duration on 9.3.
IMHO the table suggests that with the last patch we're fine - the majority
of queries (~66%) are faster than on 9.3, and the tail is very short. There
are just 2 queries that took more than 15% longer, compared to 9.3. And
we're talking about 20ms vs
On 28.1.2014 08:29, Heikki Linnakangas wrote:
> On 01/28/2014 05:54 AM, Tomas Vondra wrote:
>> Then I ran those scripts on:
>>
>> * 9.3
>> * 9.4 with Heikki's patches (9.4-heikki)
>> * 9.4 with Heikki's and first patch (9.4-alex-1)
>>
On 2.2.2014 11:45, Heikki Linnakangas wrote:
> On 01/30/2014 01:53 AM, Tomas Vondra wrote:
>> (3) A file with explain plans for 4 queries suffering ~2x slowdown,
>> and explain plans with 9.4 master and Heikki's patches is available
>> here:
>>
>&
a
dataset or a complete real-world test case at hand. Any ideas?
I certainly agree that it'd be very sad to lose the performance gain for
hstore/json. OTOH my fear is that to achieve that gain, we'll noticeably
slow down other important use cases (e.g. full-text search), which is one
of t
mber of results are high, most of the time is spent on relevance calculation and
> sorting. I propose to remove the ORDER BY clause from the test cases to see the
> scan speed more clearly.
Sure, I can do that. Or maybe one set of queries with ORDER BY, the other
one without it.
> I've got a dump of postgresql.org search queries from Magnus. We can add them to
> our test case.
You mean search queries from the mailing list archives search? Sure,
we can add that.
Tomas
On 3 Únor 2014, 17:08, Alexander Korotkov wrote:
> On Mon, Feb 3, 2014 at 7:24 PM, Tomas Vondra wrote:
>
>> On 3 Únor 2014, 15:31, Alexander Korotkov wrote:
>> >
>> > I found my patch "0005-Ternary-consistent-implementation.patch" to be
>> > c
On 3 Únor 2014, 19:18, Alexander Korotkov wrote:
> On Mon, Feb 3, 2014 at 8:19 PM, Tomas Vondra wrote:
>
>> > > Sometimes test cases are not what we expect. For example:
>> >> >
>> >> > =# explain SELECT id FROM messages WHERE body_tsvector @@
On 3.2.2014 07:53, Oleg Bartunov wrote:
> Tomas, it'd be nice if you used real data in your testing.
>
> One very good application of gin fast-scan is dramatic performance
> improvement of hstore/jsonb @> operator, see slides 57, 58
> http://www.sai.msu.su/~megera/postgres/talks/hstore-dublin-201
On 9.2.2014 22:51, Erik Rijkers wrote:
> On Sun, February 9, 2014 22:35, Tomas Vondra wrote:
>> On 3.2.2014 07:53, Oleg Bartunov wrote:
>>> PS. I used data delicious-rss-1250k.gz from
>>> http://randomwalker.info/data/delicious/
>>
>> I'm working on ex
hing record? (yes -> stop)
2) exists .pgpass.wallet?
2.a) ask for encryption key
2.b) read .pgpass using the decryption key
2.c) is there a matching record? (yes -> stop)
3) ask for connection info directly
BTW yes, I know what kerberos is, but many of us are dealing with
companies t
On 22.2.2014 00:02, Josh Berkus wrote:
> On 02/21/2014 09:11 AM, Tomas Vondra wrote:
>> What I think might be useful and safe at the same time is encrypted
>> .pgpass with tools asking for the encryption key. Think of it as a simple
>> password wallet - not really useful if y
ing pg_stat_tmp pointed to tmpfs, this is basically what happens when
rebooting the machine (and why I haven't noticed this on our production
systems so far).
regards
Tomas
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 305d126..7c075ef 100644
--- a/src/ba
in the v10 version so far is slight
difference in naming - while we have json_to_hstore/hstore_to_json, we
have jsonb2hstore/hstore2jsonb. I propose to change this to
jsonb_to_hstore/hstore_to_jsonb.
May not be needed if the implicit casts go through.
regards
Tomas
e's often
a lot of people discussing it.
Do we have a list of patches that didn't get a proper review yet / badly
need another one?
What about improving the commitfest page by displaying a number of
related e-mail messages / number of people involved? Shouldn't be
difficult to get this
.
Granted, most won't have a GIN and a GiST index on the same column at the
same time, but a bad cost estimate may cause other issues. Maybe I could
achieve this by tweaking the various cost GUCs, but ISTM that tweaking
the cost estimation would be appropriate.
regards
Tomas
alue, and it will fail ...
With btree indexes on text I would probably create an index on
substr(column,0,1000) or something like that, but doing that with JSON
seems a bit strange.
I assume we need to store the actual values in the GIN index (so a hash
is not sufficient), right?
GIST indexes wor
> pg_stat already being empty.
>
> I think there are three lines to change, as in the attached patch.
>
> Am I wrong?
>
> ...Robert
I think the patch is OK.
Tomas
" but do that the old way (check if the
table exists in a separate command)
Opinions? Objections?
regards
Tomas
f suppressing
> application_name when connecting to old servers.
Hmmm, I'm getting this in the log (when connecting to 8.0.0):
FATAL: unrecognized configuration parameter "application_name"
but it seems to be working. I'm wondering if FATAL is appropriate here.
Tomas
--
ace, adding
records etc. The simplest way I could think of was adding a simple
"index" - a mapping of OID to position in the stat file.
I.e. a simple array of (oid, offset) pairs, stored in oid.stat.index or
something like that. This would make it quite simple to access existing
record
far.
> > However, if the statistics is overhauled, PostgreSQL would be able
> > to host a much larger number of customers
>
>
> Ah, now I get it. Thanks a ton for the detailed explanation.
>
> Yes, a BTree will sufficiently isolate per
On 4.9.2013 20:59, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>
>> My idea was to keep the per-database stats, but allow some sort of
>> "random" access - updating / deleting the records in place, adding
>> records etc. The simplest way I could think of was addi
oordinating
> access. We wouldn't want to end up introducing additional contention
> points by doing this..
Yes, this is definitely an important thing to keep in mind. Maintaining
the "index" (no matter what kind of index will be used) will cause some
contention, but I believe it w
ex' allowing fast access to particular records etc. At least that
was my plan. But feel free to implement something more advanced (e.g. a
BTree storage) and we can compare the results.
Tomas
sessions
accessing objects, i.e. updating many "rows" in the stats tables.
Now, the stats table is likely to bloat thanks to the MVCC
copy-on-update. Not a good thing, and it might easily happen that the price
for maintaining the table will be much higher than what we saved.
There are proba
stats per
database.
But AFAIK there are operating systems where creating a filesystem in RAM
is not that simple - e.g. Windows. In such cases even a moderate number
of objects may be a significant issue I/O-wise. But then again, I can't
really think of a reasonable system creating that many objects in a
single database (except for e.g. a shared database using schemas instead
of databases).
Tomas
On 9 Září 2013, 18:50, Robert Haas wrote:
> On Sat, Sep 7, 2013 at 6:57 PM, Tomas Vondra wrote:
>> Don't forget the stats are written only by the postmaster; all the
>> regular backends only read them (and eventually send updates back).
>
> The postmaster, or the stats col
context!" or something
like that.
8) The records in pg_aggregate.c are using either 0 (for fixed-length)
or 128. This seems slightly excessive to me. What is the reasoning
behind this? Is that because of the two NumericVar fields?
regards
Tomas
On 23 Září 2013, 18:18, Pavel Stehule wrote:
> Hello
>
>
> 2013/9/22 Tomas Vondra
>
>> Hi,
>>
>> I've reviewed the v6 of the "numeric optimize" patch
>> (http://www.postgresql.org/message-id/CAFj8pRDQhG7Pqmf8XqXY0PnHfakkP
>> QLPH
ory allocated by the
HashAggregate itself, and although I'm not sure how much that should be,
I would expect a "small amount" compared to the 1.6GB.
So I'm wondering what uses those 3.5GB? Is this the overhead of
HashAggregate, or am I missing something (e.g. a memory leak in
On 6.10.2013 20:37, Tomáš Janoušek wrote:
> Hi,
>
> On Sat, Oct 05, 2013 at 08:22:54PM +0200, Tomas Vondra wrote:
>> I'm on 64-bit architecture and the example works with int32, which means
>> the sizes should be about this:
>>
>> hash_element_t =>
On 7.10.2013 14:50, k...@rice.edu wrote:
> On Mon, Oct 07, 2013 at 12:41:58AM +0200, Tomas Vondra wrote:
>>> 2. Consider using a simpler/faster hash function, like FNV[1] or Jenkins[2].
> >>> For fun, try not hashing those ints at all and see how that performs
>>>
ner way of collision resolution, IMHO.
What do you mean by "growth in the main table"?
Tomas
On 8 Říjen 2013, 6:23, Atri Sharma wrote:
> Hi Tomas,
>
>
>>> Consider the aspects associated with open addressing. Open addressing
>>> can quickly lead to growth in the main table. Also, chaining is a much
>>> cleaner way of collision resolution, IMHO.
>>
but with a custom aggregate. But feel free to
tweak the built-in hash table ;-)
Tomas
On 8 Říjen 2013, 13:52, Atri Sharma wrote:
> On Tue, Oct 8, 2013 at 4:15 PM, Tomas Vondra wrote:
>> On 8 Říjen 2013, 11:42, Atri Sharma wrote:
>>>>
>>>> I've made some significant improvements in the chaining version (in
>>>> the
>>>>
On 8.10.2013 21:59, Heikki Linnakangas wrote:
> On 08.10.2013 17:47, Alexander Korotkov wrote:
>> Hi, Tomas!
>>
>> On Sun, Oct 6, 2013 at 3:58 AM, Tomas Vondra wrote:
>>
>>> I've attempted to rerun the benchmarks tests I did a few weeks ago, but
>>&
On 10.10.2013 13:57, Heikki Linnakangas wrote:
> On 09.10.2013 02:04, Tomas Vondra wrote:
>> On 8.10.2013 21:59, Heikki Linnakangas wrote:
>>> On 08.10.2013 17:47, Alexander Korotkov wrote:
>>>> Hi, Tomas!
>>>>
>>>> On Sun, Oct 6, 2013 at 3:58
On 12.10.2013 12:11, Alexander Korotkov wrote:
> On Sat, Oct 12, 2013 at 1:55 AM, Tomas Vondra <t...@fuzzy.cz> wrote:
>
> Yup, this version fixed the issues. I haven't been able to do any
> benchmarks yet, all I have is some basic stats
>
d exercise. There is no
> "pigeon hole" issue. A simple multiplication by a 32-bit prime would
> fare well enough and result in zero collisions.
Agreed. I'll revisit this if/when I'll need to support larger data types
in this aggregate.
Tomas
On 24.9.2013 17:57, Pavel Stehule wrote:
>
>
>
> 2013/9/24 Robert Haas <robertmh...@gmail.com>
>
> On Mon, Sep 23, 2013 at 4:15 PM, Tomas Vondra <t...@fuzzy.cz> wrote:
>> Seems "ready for committer" to me. I'll wait a fe
typos
in the trigger name with exactly the same result (trigger not dropped
without any kind of error message).
I see no reason to support DROP TRIGGER IF EXISTS but restrict the IF
EXISTS clause only to the trigger on the grounds of typos.
So +1 from me, both to the patch and graceful handling of missing table.
kind regards
Tomas
mean, the FDW API is not going to get any simpler, and for me
this was a significant help.
regards
Tomas
fdw.dia
Description: application/dia-diagram
On 17 Říjen 2013, 5:32, Stephen Frost wrote:
> Alvaro,
>
> * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
>> Tomas Vondra wrote:
>> > Attached is the set of flow charts, showing the sequence of callbacks
>> > for all the supported commands (i.e. SELECT, INS
something I missed (e.g. future
plans)?
regards
Tomas
On 18 Říjen 2013, 17:52, Tom Lane wrote:
> "Tomas Vondra" writes:
>> 2) Is there any particular reason why
>> PlanForeignModify/BeginForeignModify
>> require the fdw_private to be a List*, and not a generic pointer?
>
> That data has to be copiable by copyObj
On 18.10.2013 23:35, Stephen Frost wrote:
> Tomas,
>
> * Tomas Vondra (t...@fuzzy.cz) wrote:
>> My impression from that thread was that one of the requirements is
>> reasonable versioning / diff support, and AFAIK that's not a good
>> match for any GUI-based pr
On 18.10.2013 23:52, Peter Eisentraut wrote:
> On 10/18/13 5:35 PM, Stephen Frost wrote:
>> I can't see it being a major effort to get it from the wiki into
>> the docs, though perhaps I'm being a bit over-optimistic wrt that.
>
> Hah!
>
> Consider that an image would have to work with the follow
--
(1 row)
So, what am I doing wrong? Seems like a bug to me ...
regards
Tomas
On 21.10.2013 02:38, Tomas Vondra wrote:
> Hi,
>
> I ran into some pretty strange behavior of C-language function and
> default parameter values, both on 9.2 and 9.4devel. Consider for example
> this trivial C function:
>
> Datum
> show_bug(PG_FUNCTION_ARGS
faster.
I am changing this to Waiting on Author.
I will be on vacation without Internet access for the next 15 days,
so hopefully someone else can have a look when a new version is
posted. If it's still open I'll have a look when I get back.
Thanks for the feedback!
regards
te, so we don't add the
clause. I'm not particularly familiar with this part of the code, so I
wonder where's the best place to fix this, though.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training &
On 09/14/2015 10:23 AM, Shulgin, Oleksandr wrote:
On Sat, Sep 12, 2015 at 11:50 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
...
- Attempts to get plan for simple insert queries like this
INSERT INTO x SELECT * FROM x;
end with a segfault, b
a costing issue,
because while with idx1 (first plan) it was correctly detected we don't
need to evaluate the condition on the partial index, on idx2 that's not
true and we'll waste time doing that. So we probably can't just tweak
the costing a bit - this probably needs to
On 09/14/2015 01:15 PM, Shulgin, Oleksandr wrote:
On Mon, Sep 14, 2015 at 11:53 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/14/2015 10:23 AM, Shulgin, Oleksandr wrote:
On Sat, Sep 12, 2015 at 11:50 AM, Tomas Vondra
mailto:tom
) should pretty
much make the original patch irrelevant, because check_index_only can
simply walk over the new list.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 09/23/2015 11:25 PM, Tom Lane wrote:
Tomas Vondra writes:
On 09/11/2015 07:16 PM, Robert Haas wrote:
On Fri, Sep 11, 2015 at 1:12 PM, Tomas Vondra
wrote:
I'm arguing for fixing the existing bug, and then addressing the case of
over-estimation separately, with proper ana
s),
it's game over.
I think the restriction is unnecessary, because when estimating joins,
we effectively take cardinality of a cartesian product of all the base
relations, and then apply selectivities for all the join quals (in a
pairwise manner).
So for the three tables we take
ca
On 09/24/2015 01:51 PM, Robert Haas wrote:
On Thu, Sep 24, 2015 at 5:50 AM, Tomas Vondra
wrote:
I however quite dislike the dismissal of the possible impact. It should be
the responsibility of the person introducing the change to show that no such
impact actually exists, not just waving it
On 09/24/2015 05:09 PM, Robert Haas wrote:
On Thu, Sep 24, 2015 at 9:49 AM, Tomas Vondra
wrote:
So while it does not introduce behavior change in this particular
case (because it fails, as you point out), it introduces a behavior
change in general - it simply triggers behavior that does not
nbatches=1
2) run until exhausting work_mem
3) start batching, with the initially estimated number of batches
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 09/24/2015 06:43 PM, Josh Berkus wrote:
Tomas,
attached is v7 of the multivariate stats patch. The main improvement is
major refactoring of the clausesel.c portion - splitting the awfully
long spaghetti-style functions into smaller pieces, making it much more
understandable etc.
So
On 09/24/2015 07:04 PM, Tom Lane wrote:
Tomas Vondra writes:
But what about computing the number of expected batches, but always
start executing assuming no batching? And only if we actually fill
work_mem, we start batching and use the expected number of batches?
Hmm. You would likely be
On 09/24/2015 07:42 PM, Robert Haas wrote:
On Thu, Sep 24, 2015 at 12:40 PM, Tomas Vondra
wrote:
There are two machines - one with 32GB of RAM and work_mem=2GB, the other
one with 256GB of RAM and work_mem=16GB. The machines are hosting about the
same data, just scaled accordingly (~8x more
On 09/25/2015 02:54 AM, Robert Haas wrote:
On Thu, Sep 24, 2015 at 1:58 PM, Tomas Vondra
wrote:
Meh, you're right - I got the math wrong. It's 1.3% in both cases.
However the question still stands - why should we handle the
over-estimate in one case and not the other? We'
Hi,
On 09/25/2015 03:39 AM, David Rowley wrote:
On 24 September 2015 at 23:57, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
2) find_best_match_foreign_key
--
I think the comment before the function needs rephrasing (seems a
bit
Hi,
On 09/18/2015 03:46 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Thu, 17 Sep 2015 17:40:27 +0200, Tomas Vondra wrote
in <55fadeeb.4000...@2ndquadrant.com>
Yes, this seems sane. I've been poking at this a bit too, and I came
to the same plan in general, except that I think it's
Hi,
On 09/26/2015 01:28 PM, Tomas Vondra wrote:
The patch does not change the check_index_only implementation - it
still needs to check the clauses, just like in v1 of the patch. To
make this re-check unnecessary, we'd have to stick the remaining
clauses somewhere, so that check_index_onl
in with f.
So it seems like a clear improvement, both compared to master and the
previous versions of the patch.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hello,
On 09/29/2015 12:27 PM, Kyotaro HORIGUCHI wrote:
Hi,
At Sat, 26 Sep 2015 18:00:33 +0200, Tomas Vondra wrote
in <5606c121.10...@2ndquadrant.com>
Hi,
On 09/26/2015 01:28 PM, Tomas Vondra wrote:
The patch does not change the check_index_only implementation - it
still needs to
On 09/29/2015 04:57 PM, Tomas Vondra wrote:
Hello,
On 09/29/2015 12:27 PM, Kyotaro HORIGUCHI wrote:
...
cost_index() seems to need to be fixed. It would count excluded
clauses in estimate.
Hmm, good point. The problem is that extract_nonindex_conditions uses
baserel->baserestricti
predicates may use columns not
* used in the index itself.
Anyway some description on rclauses should be added in the
comment for match_clause_to_index, instead of the comments
currently added *within* the function.
OK, will do.
kind regards
--
Tomas Vondra http://www.2ndQu
On 09/30/2015 12:55 PM, Tomas Vondra wrote:
Hello!
On 09/30/2015 10:29 AM, Kyotaro HORIGUCHI wrote:
By the way your comment for indexrinfos is as following,
* 'indexrinfos' is a list of RestrictInfo nodes from the query's WHERE
* or JOIN conditions, excluding those impli
s that use the index in multiple places?
Imagine for example a table joined to itself, where both sides use the
index with different conditions.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
s index only scans unusable.
I think we effectively need a separate list of "not implied" clauses per
index-baserel combination. Maybe IndexClauseSet is not the right place,
but I don't see how IndexOptInfo could work.
regards
--
Tomas Vondra http://www.2ndQuadra
.de
- Same error message, but I don't see any conclusion except for
"cannot happen" from Greg.
2) http://www.postgresql.org/message-id/48331f9f.9030...@demabg.com
- Essentially talks about failed RAID5 array, but that does not seem
to be the case here (no RAID fa
Hi,
On 10/18/2015 08:37 PM, Andres Freund wrote:
Hi,
On 2015-10-18 20:25:29 +0200, Tomas Vondra wrote:
I've been doing a lot of filesystem testing / benchmarking recently, and
today I've ran into a really strange issue with ext4 on two SSD devices in a
RAID-0 configuration (Linu
Hello Kyotaro-san,
On 09/11/2015 06:58 PM, Tomas Vondra wrote:
>
Maybe the best solution is to abandon the ALTER TABLE approach
entirely, and instead invent a new set of commands
CREATE STATISTICS
DROP STATISTICS
(ALTER STATISTICS seems a bit excessive at this point).
Another thing
Hi,
On 10/20/2015 01:14 PM, Simon Riggs wrote:
On 19 October 2015 at 20:16, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
Hello Kyotaro-san,
On 09/11/2015 06:58 PM, Tomas Vondra wrote:
>
Maybe the best solution is to abandon the ALTER TABLE
On 10/20/2015 05:59 PM, Tom Lane wrote:
Robert Haas writes:
On Tue, Oct 20, 2015 at 10:51 AM, Tomas Vondra
wrote:
ISTM that we could use COLLECT STATISTICS instead of ADD STATISTICS, and
use REMOVE STATISTICS instead of DROP STATISTICS. That way we can use
ALTER TABLE rather than inventing
Hi,
On 10/20/2015 09:11 PM, Tom Lane wrote:
I wrote:
Tomas Vondra writes:
The problem is that the actual syntax is ADD [COLUMN], so we
can't simply use ADD STATISTICS as that would mean a conflict in
the grammar. Resolving it means either making COLUMN
non-optional, or adding STATISTI
On 10/20/2015 11:28 PM, Simon Riggs wrote:
On 20 October 2015 at 16:48, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
>> On 10/20/2015 09:11 PM, Tom Lane wrote:
Can't we just insist that there be only one statistics declaration
for a given column list? >&g
nces about the 67% and 95%.
The minimum threshold is 2.0 for performance of the Box-Muller
transform.
Does it make sense to explicitly mention the implementation detail
(Box-Muller transform) here?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 2
On 10/25/2015 08:11 PM, Fabien COELHO wrote:
Hello Tomas,
I've been looking at the checkpoint patches (sorting, flush and FPW
compensation) and realized we got gaussian/exponential distributions
in pgbench which is useful for simulating simple non-uniform workloads.
Indeed.
But I
hat there is no reason to have to go to
the source code to check that. But I would not provide further details.
So I'm fine with the current status.
There are alternative methods for almost every non-trivial piece of
code, and we generally don't mention that in user docs. Why sh
ator, but that's not quite manageable I guess. But why couldn't we
provide a generic infrastructure that would allow filtering "safe"
expressions and validating them on an index tuple?
kind regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Dev
Hi,
On 11/05/2015 06:51 PM, Robert Haas wrote:
On Wed, Nov 4, 2015 at 9:15 PM, Tomas Vondra
wrote:
I certainly understand there are cases that require care - like the
leakproof thing pointed out by Robert for example. I don't immediately see
why evaluation against dead rows would be a pr