On Tue, Oct 31, 2017 at 8:06 PM Julien Rouhaud wrote:
> On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes
> wrote:
> > I will be very happy with a tool (or a stats table) that shows the most
> > searched values from a table (since a statistics reset).
>
As a vendor, I normally stay silent
On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes
wrote:
> I will be very happy with a tool (or a stats table) that shows the most
> searched values from a table (since a statistics reset). i.e.:
>
> table foo (id int, year int)
>
> top 3 searched values for the year field: 2017 (500x), 2016 (300x)
I have not used it yet, but from the presentation it looks very promising:
https://medium.com/@ankane/introducing-dexter-the-automatic-indexer-for-postgres-5f8fa8b28f27
https://github.com/ankane/dexter
--
https://yves.zioup.com
gpg: 4096R/32B0F416
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
I will be very happy with a tool (or a stats table) that shows the most
searched values from a table (since a statistics reset). i.e.:
table foo (id int, year int)
top 3 searched values for the year field: 2017 (500x), 2016 (300x), 2015 (55x)
With this info we can create partial indexes or do a table part
Thanks for the reply, Antony.
But from what I've read, HypoPG only allows you to create hypothetical
indexes, so the DBA can analyze whether they bring benefits.
What I would like is a tool that, given a SQL query, indicates which
indexes would be recommended to decrease the response time.
Best Regards
Neto
Hi Neto, maybe HypoPG can help you:
https://github.com/dalibo/hypopg
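For anyone curious, here is a minimal HypoPG session, sketched from its README (the foo/year names are just the example from earlier in this thread, not a real schema):

```sql
CREATE EXTENSION hypopg;

-- Register a hypothetical index; nothing is built on disk:
SELECT * FROM hypopg_create_index('CREATE INDEX ON foo (year)');

-- Plain EXPLAIN (without ANALYZE) will consider the hypothetical index:
EXPLAIN SELECT * FROM foo WHERE year = 2017;
```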
On Oct 31, 2017 at 2:13 PM, "Neto pr" wrote:
>
> Hello All, I'm researching Index-Advisor tools to be applied to SQL
> queries. At first I found this: - EnterpriseDB -
> https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_
Hello All, I'm researching Index-Advisor tools to be applied to SQL
queries. At first I found this: - EnterpriseDB -
https://www.enterprisedb.com/docs/en/9.5/asguide/EDB_Postgres_Advanced_Server_Guide.1.56.html
Would someone know of other tools for this purpose? I'd appreciate it if
you can help.
On Fri, Jul 7, 2017 at 4:41 PM, Justin Pryzby wrote:
> The second change averages separate correlation values of small lists of 300
> consecutive TIDs, rather than the coarse-granularity/aggregate correlation of
> a small sample of pages across the entire index. Postgres' existing sampling is
Months ago I reported an issue with very slow index scan of tables with high
"correlation" of its indexed column, due to (we concluded at the time)
duplicate/repeated values of that column causing many lseek()s. References:
https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsa
On Thu, Jun 8, 2017 at 11:05 AM, Jeremy Finzel wrote:
> On Thu, Jun 8, 2017 at 9:58 AM, Tom Lane wrote:
>> Merlin Moncure writes:
>> > Having said that, what I typically do in such
>> > cases (this comes up a lot in database-driven work queues) is something like
>> > this:
>> > CREATE INDEX ON table (
Normally, I find that in these situations, it makes sense to index the
primary key of the table WHERE col is not null, because it will usually
cover the largest number of cases, and is much better than a two-value
boolean index, for example.
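A sketch of that pattern, with hypothetical table and column names (the thread doesn't give a full schema):

```sql
-- Work-queue pattern: a small partial index over the primary key,
-- covering only the rows still waiting to be processed:
CREATE INDEX ON t (id) WHERE col IS NOT NULL;
```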
On Thu, Jun 8, 2017 at 9:58 AM, Tom Lane wrote:
> Merl
Merlin Moncure writes:
> On Mon, May 22, 2017 at 10:17 AM, Ariel wrote:
>> Should I do:
>>
>> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL
>>
>> or:
>>
>> CREATE INDEX ON table (col) WHERE col IS NOT NULL
>>
>> I'm thinking the first index will make a smaller, simpler, inde
On Mon, May 22, 2017 at 10:17 AM, Ariel wrote:
>
> I need to be able to quickly find rows where a column is not null (only a
> small percent of the rows will have that column not null).
>
> Should I do:
>
> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL
>
> or:
>
> CREATE INDEX ON
I need to be able to quickly find rows where a column is not null (only a
small percent of the rows will have that column not null).
Should I do:
CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL
or:
CREATE INDEX ON table (col) WHERE col IS NOT NULL
I'm thinking the first ind
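For what it's worth, the second form is usually the more flexible of the two. A sketch with a hypothetical table t (integer col): the planner can prove that `col = 42` implies `col IS NOT NULL`, so the partial index serves equality lookups as well as the IS NOT NULL scan:

```sql
CREATE INDEX ON t (col) WHERE col IS NOT NULL;

-- Both of these can use the partial index:
SELECT * FROM t WHERE col IS NOT NULL;
SELECT * FROM t WHERE col = 42;  -- col = 42 implies col IS NOT NULL
```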
Thanks for the fast reply and explanation, Tom. Overall, I have been
pleasantly surprised with the leniency of indexes on range types.
On Sat, May 20, 2017 at 5:00 PM, Tom Lane wrote:
> Zac Goldstein writes:
> > This uses the index:
> > ...
> > But this doesn't:
>
> > EXPLAIN (ANALYZE, BUF
Zac Goldstein writes:
> This uses the index:
> ...
> But this doesn't:
> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot
> WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id,
> numrange(5, 10))::matchsecond_type;
Well, yeah. After inlining the SQL functions, what you have is
>
Hello,
The index on my composite type seems to be working most of the time, but
there's a query I need to run where it's not working even with
enable_seqscan=false. The composite type uses int and numrange subcolumns,
and is designed to operate primarily as a range type.
It will probably be easi
On Sat, Aug 13, 2016 at 3:54 PM, Justin Pryzby wrote:
> On Sun, Jun 05, 2016 at 12:28:47PM -0700, Jeff Janes wrote:
>> On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane wrote:
>> > Claudio Freire writes:
>> >> So correlated index scans look extra favourable vs bitmap index scans
>> >> because bitmap heap
Regarding this earlier thread:
https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#20160524173914.ga11...@telsasoft.com
On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> Summary: Non-unique btree indices are returning CTIDs for rows with same
> value of indexe
> Or, better, persuade the app to label the value "public.push_guid" since that
> is the column's type...a type you haven't defined for us. If you get to add
> explicit casts this should be easy...but I'm not familiar with the framework
> you are using.
push_guid was a CHARACTER(36) colu
On Thu, Jun 16, 2016 at 11:05 AM, Tom Lane wrote:
> meike.talb...@women-at-work.org writes:
> > When I query this through pgsql, the queries are fast as expected.
> > select * from push_topic where guid =
> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'
> > Index Scan using push_topic_idx_topicguid on pu
meike.talb...@women-at-work.org writes:
> When I query this through pgsql, the queries are fast as expected.
> select * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'
> Index Scan using push_topic_idx_topicguid on push_topic (cost=0.42..8.44
> rows=1 width=103) (actual time=0
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of
meike.talb...@women-at-work.org
Sent: Thursday, June 16, 2016 12:59 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Index not used
Hello,
I've a basic table with about 100K rows:
CREATE TABLE "public"."push_topic" (
Hello,
I've a basic table with about 100K rows:
CREATE TABLE "public"."push_topic" (
"id" Serial PRIMARY KEY,
"guid" public.push_guid NOT NULL,
"authenticatorsending" Varchar(32) NOT NULL,
"authenticatorsubscription" Varchar(32) NOT NULL,
"countpushed" Integer NOT NULL,
"datecreated" ti
On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane wrote:
> Claudio Freire writes:
>> So correlated index scans look extra favourable vs bitmap index scans
>> because bitmap heap scans consider random page costs sans correlation
>> effects (even though correlation applies to bitmap heap scans as
>> well).
Claudio Freire writes:
> So correlated index scans look extra favourable vs bitmap index scans
> because bitmap heap scans consider random page costs sans correlation
> effects (even though correlation applies to bitmap heap scans as
> well).
Really? How? The index ordering has nothing to do wi
On Fri, Jun 3, 2016 at 6:54 PM, Justin Pryzby wrote:
> max_wal_size| 4GB | configuration file
> min_wal_size| 6GB | configuration file
Just a minor digression -- it generally doesn't make sense to
configure the minimum for somethi
On Fri, Jun 3, 2016 at 8:54 PM, Justin Pryzby wrote:
> As a test, I did SET effective_cache_size='1MB', before running explain, and
> still does:
>
> |-> Index Scan using
> cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx on
> cdrs_huawei_pgwrecord_2016_05_29 (cost=0.44..1526689.
On Fri, Jun 03, 2016 at 06:26:33PM -0300, Claudio Freire wrote:
> On Wed, May 25, 2016 at 11:00 AM, Justin Pryzby wrote:
> >> > First, I found I was able to get 30-50min query results on full week's
> >> > table by
> >> > prefering a seq scan to an index scan. The row estimates seemed fine,
> >
On Wed, May 25, 2016 at 11:00 AM, Justin Pryzby wrote:
>> > First, I found I was able to get 30-50min query results on full week's
>> > table by
>> > prefering a seq scan to an index scan. The row estimates seemed fine, and
>> > the
>> > only condition is the timestamp, so the planner's use of
On Tue, May 24, 2016 at 11:23:48PM -0700, Jeff Janes wrote:
> > But note the non-uniqueness of the index column:
> > ts=# SELECT recordopeningtime, COUNT(1) FROM cdrs_huawei_pgwrecord WHERE
> > recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22' GROUP BY
> > 1 ORDER BY 2 DESC;
> >
On Tue, May 24, 2016 at 09:16:20PM -0700, Peter Geoghegan wrote:
> On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> > Postgres seems to assume that the high degree of correlation of the table
> > column seen in pg_stats is how it will get data from the index scan, which
> > assumption seem
On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> Summary: Non-unique btree indices are returning CTIDs for rows with same
> value of indexed column not in logical order, imposing a high performance
> penalty.
>
> Running PG 9.5.3 now, we have time-based partitions of append-only tables
>
On Tue, May 24, 2016 at 9:43 PM, Tom Lane wrote:
> Yeah. I wonder what would happen if we used the same rule for index
> insertions. It would likely make insertions more expensive, but maybe
> not by much. The existing "randomization" rule for where to insert new
> items in a run of identical i
Peter Geoghegan writes:
> The basic problem is that the B-Tree code doesn't maintain this
> property. However, B-Tree index builds will create an index that
> initially has this property, because the tuplesort.c code happens to
> sort index tuples with a CTID tie-breaker.
Yeah. I wonder what wou
* Peter Geoghegan (p...@bowt.ie) wrote:
> On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> > I was able to see great improvement without planner parameters by REINDEX
> > the
> > timestamp index. My theory is that the index/planner doesn't handle well
> > the
> > case of many tuples wit
On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby wrote:
> I was able to see great improvement without planner parameters by REINDEX the
> timestamp index. My theory is that the index/planner doesn't handle well the
> case of many tuples with same column value, and returns pages out of logical
> or
Summary: Non-unique btree indices are returning CTIDs for rows with same
value of indexed column not in logical order, imposing a high performance
penalty.
Running PG 9.5.3 now, we have time-based partitions of append-only tables
with data loaded from other sources. The tables are partitioned b
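The workaround reported elsewhere in this thread was simply to rebuild the affected index, since a fresh build sorts equal keys in heap (CTID) order:

```sql
-- Index name as given in the EXPLAIN output quoted in this thread:
REINDEX INDEX cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx;
```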
> From: Jim Nasby
>To: Jeff Janes ; Glyn Astill
>Cc: Pgsql-performance
>Sent: Wednesday, 2 December 2015, 22:32
>Subject: Re: [PERFORM] Index scan cost calculation
>
>
>On 11/30/15 5:03 PM, Jeff Janes wrote:
>> It thinks the combination of (show, type, best, bl
On 11/30/15 5:03 PM, Jeff Janes wrote:
It thinks the combination of (show, type, best, block) is enough to
get down to a single row. One index adds "flag" to that (which is not
useful to the query) and the other adds "row" to that, which is useful
but the planner doesn't think it is because onc
>
>Clauses that can't be used in an "indexable" way are excluded from the
>index selectivity, but not from the total query selectivity.
>
>> Or is it just likely that the selection of the new index is just by chance?
>
>Bingo.
>
Got it, thanks! Very much appreciated.
Glyn
On Mon, Nov 30, 2015 at 6:03 AM, Glyn Astill wrote:
>
>
>
>
> If I create the index show+best+block+row+seat then the planner appears to
> favour that, and all is well, despite the startup cost estimate being the
> same and the total cost being 0.01 higher. This is something I fail to
> understa
> From: Jeff Janes
> To: Glyn Astill
> Cc: Pgsql-performance
> Sent: Saturday, 28 November 2015, 19:25
> Subject: Re: [PERFORM] Index scan cost calculation
>
>
> Why does the index seats_index02 exist in the first place? It looks
> like an index designed for th
On Thu, Nov 26, 2015 at 8:11 AM, Glyn Astill wrote:
> Hi All,
>
> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and
> trying to figure out why a particular index is being chosen over another for
> updates/deletes.
>
> From what I can see the reason is that plans using e
Glyn Astill writes:
>> From: Tom Lane
>> The problem will probably go away by itself as your table grows, but
>> if you don't want to wait, you might want to reflect on which of the index
>> columns might be (partially?) functionally dependent on the other columns,
>> and whether you could redesi
- Original Message -
> From: Tom Lane
> To: Glyn Astill
> Cc: Pgsql-performance
> Sent: Thursday, 26 November 2015, 16:44
> Subject: Re: [PERFORM] Index scan cost calculation
>
> Glyn Astill writes:
>> Using pg 9.4.5 I'm looking at a table set up b
Glyn Astill writes:
> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and
> trying to figure out why a particular index is being chosen over another for
> updates/deletes.
> From what I can see the reason is that plans using either index have exactly
> the same
- Original Message -
> From: Glyn Astill
> To: Pgsql-performance
> Sent: Thursday, 26 November 2015, 16:11
> Subject: [PERFORM] Index scan cost calculation
>
> Hi All,
>
> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and
Hi All,
Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and
trying to figure out why a particular index is being chosen over another for
updates/deletes.
From what I can see the reason is that plans using either index have exactly
the same cost. So rather I'
On Wed, Aug 26, 2015 at 3:36 PM, Tomas Vondra
wrote:
>> But I guess the answer is, no real way to tell what the box is doing
>> when it's creating an index. Yes there was a lock, no I could not find a
>> way to see how it's progressing so there was no way for me to gauge when
>> it would be done.
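In 9.3 there is indeed no progress view for CREATE INDEX; about the only sanity check from another session is pg_stat_activity, which at least shows the build is still running and whether it is blocked (a sketch, not a progress gauge):

```sql
SELECT pid, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE query LIKE 'CREATE INDEX%';
```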
On 08/26/2015 10:26 PM, Tory M Blue wrote:
The table is 90GB without indexes, 285GB with indexes and bloat. The row
count is not actually completing... 125 million rows over 13 months; this
table is probably close to 600 million rows.
You don't need to do SELECT COUNT(*) if you only need an
Hi,
On 08/26/2015 11:53 PM, Tory M Blue wrote:
On Wed, Aug 26, 2015 at 2:45 PM, Qingqing Zhou wrote:
On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue wrote:
>
> Right now the 100% cpu process which is this index is only us
On Wed, Aug 26, 2015 at 2:45 PM, Qingqing Zhou
wrote:
> On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue wrote:
> >
> > Right now the 100% cpu process which is this index is only using 3.5GB
> > and has been for the last 15 hours
> >
>
> If 100% cpu, you can do 'sudo perf top' to see what the CPU is
On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue wrote:
>
> Right now the 100% cpu process which is this index is only using 3.5GB
> and has been for the last 15 hours
>
If 100% cpu, you can do 'sudo perf top' to see what the CPU is busy about.
Regards,
Qingqing
On Wed, Aug 26, 2015 at 12:36 PM, Igor Neyman
wrote:
>
>
>
>
> *From:* Tory M Blue [mailto:tmb...@gmail.com]
> *Sent:* Wednesday, August 26, 2015 3:26 PM
> *To:* Igor Neyman
> *Cc:* pgsql-performance
> *Subject:* Re: [PERFORM] Index creation running now for 14 hours
From: Tory M Blue [mailto:tmb...@gmail.com]
Sent: Wednesday, August 26, 2015 3:26 PM
To: Igor Neyman
Cc: pgsql-performance
Subject: Re: [PERFORM] Index creation running now for 14 hours
On Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman wrote:
From:
On Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman
wrote:
>
>
>
>
> *From:* pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] *On Behalf Of *Tory M Blue
> *Sent:* Wednesday, August 26, 2015 3:14 PM
> *To:* pgsql-performance
>
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tory M Blue
Sent: Wednesday, August 26, 2015 3:14 PM
To: pgsql-performance
Subject: [PERFORM] Index creation running now for 14 hours
I'm running 9.3.4 with slon 2.2.3, I did a dro
I'm running 9.3.4 with slon 2.2.3, I did a drop add last night at 9pm, it
started this particular tables index creation at 10:16pm and it's still
running. 1 single core is at 100% (32 core box) and there is almost zero
I/O activity.
CentOS 6.6
16398 | clsdb | 25765 | 10 | postgres | slon.
On 01.05.2015 13:06, David Osborne wrote:
Simple... that did it... thanks!
dev=> create index on table(code,row_id);
CREATE INDEX
Time: 38088.482 ms
dev=> explain (analyse,buffers) select row_id as last_row_id from table
where code='XX' order by row_id desc limit 1;
Just out of curiosity: Is
Simple... that did it... thanks!
dev=> create index on table(code,row_id);
CREATE INDEX
Time: 38088.482 ms
dev=> explain (analyse,buffers) select row_id as last_row_id from table
where code='XX' order by row_id desc limit 1;
QUERY PLAN
> On 01 May 2015, at 13:54, David Osborne wrote:
>
> Hi,
>
> We have a query which finds the latest row_id for a particular code.
>
> We've found a backwards index scan is much slower than a forward one, to the
> extent that disabling indexscan altogether actually improves the query time.
>
Hi,
We have a query which finds the latest row_id for a particular code.
We've found a backwards index scan is much slower than a forward one, to
the extent that disabling indexscan altogether actually improves the query
time.
Can anyone suggest why this might be, and what's best to do to improv
On 26.3.2015 17:35, Jeff Janes wrote:
> On Thu, Mar 26, 2015 at 5:44 AM, Tomas Vondra wrote:
>
>> That might work IMO, but maybe we should increase the coefficient a
>> bit (say, from 1.25 to 2), not to produce needlessly long MCV
>> lists.
>
> That wouldn'
On Thu, Mar 26, 2015 at 5:44 AM, Tomas Vondra
wrote:
> On 26.3.2015 08:48, Jeff Janes wrote:
> >
> > OK, this is starting to look like a long-standing bug to me.
> >
> > If it only sees 3 distinct values, and all three are present at least
> > twice, it throws all of them into the MCV list. But i
On 26.3.2015 08:48, Jeff Janes wrote:
>
> OK, this is starting to look like a long-standing bug to me.
>
> If it only sees 3 distinct values, and all three are present at least
> twice, it throws all of them into the MCV list. But if one of those 3
> were present just once, then it tests them to s
Sorry, didn't respond to all your questions:
> What version of PostgreSQL are running? 'select version();'
PostgreSQL 9.3.4 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real
(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
> What do you get when to do "analyze verbose print_list"?
# analyze verbose
On 25 March 2015 at 22:45, Jeff Janes wrote:
> How can the avg_width be 4 when the vast majority of entries are 7
> characters long?
The datatype is an enum; as I understand it, an enum type always
occupies 4 bytes.
On Wed, Mar 25, 2015 at 1:00 PM, Feike Steenbergen <
feikesteenber...@gmail.com> wrote:
> On 25 March 2015 at 19:07, Jeff Janes wrote:
>
> > Also, I doubt that that is the problem in the first place. If you
> collect a
> > sample of 30,000 (which the default target size of 100 does), and the
> >
On 25 March 2015 at 19:07, Jeff Janes wrote:
> Also, I doubt that that is the problem in the first place. If you collect a
> sample of 30,000 (which the default target size of 100 does), and the
> frequency of the second most common is really 0.00307333 at the time you
> sampled it, you would ex
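The arithmetic behind Jeff's point, using the numbers quoted above: a 30,000-row sample at a true frequency of 0.00307333 should contain the value about 92 times.

```sql
-- Expected number of sampled rows carrying the second-most-common value:
SELECT round(30000 * 0.00307333);  -- about 92
```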
On Wed, Mar 25, 2015 at 9:07 AM, Feike Steenbergen <
feikesteenber...@gmail.com> wrote:
> I'm posting this as I am trying to understand what has happened.
> TLDR: The problem seems to be fixed now.
>
> By bumping the statistics_target we see that most_common_vals is
> having its contents filled mo
Feike Steenbergen writes:
> On 25 March 2015 at 13:45, Tomas Vondra wrote:
>>> We can also increase the 'Stats target' for this table, which will
>>> cause the statistics to contain information about 'NOT_YET_PRINTED'
>>> more often, but even then, it may not find any of these records, as
>>> the
I'm posting this as I am trying to understand what has happened.
TLDR: The problem seems to be fixed now.
By bumping the statistics_target we see that most_common_vals is
having its contents filled more often, causing way better estimates:
attname| status
inherited
Hi, thanks for having a look and thinking with us
On 25 March 2015 at 13:45, Tomas Vondra wrote:
> Can you post results for this query?
>
> SELECT status, COUNT(*) FROM print_list GROUP BY 1
 status   | count
----------+-------
 ERROR    |   159
 PREPARED | 10162
 PRINT
On 25.3.2015 13:04, Feike Steenbergen wrote:
...
> When analyzing pg_stats we sometimes have the following: (Note:
> 'NOT_YET_PRINTED' has not been found during this analyze, these are
> real values)
>
> attname| status
> inherited | f
> null_frac
Hi,
Situation:
We have a table with 3,500,000+ rows, which contain items that need to
be printed or have been printed previously.
Most of these records have a status of 'PRINTED', we have a partial
index on this table WHERE status <> 'PRINTED'.
During normal operation there will be < 10 records
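The fix discussed in this thread was to bump the statistics target so ANALYZE samples enough rows to notice the rare statuses; a sketch using the table and column names from the thread (1000 is an arbitrary illustrative value):

```sql
ALTER TABLE print_list ALTER COLUMN status SET STATISTICS 1000;
ANALYZE print_list;
```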
On 11/7/14, 5:14 AM, Artūras Lapinskas wrote:
thanks for your time and answer. Not treating IS NULL as equality operator
definitely helps me to make more sense out of previous explains.
You can also try creating a partial index WHERE b IS NULL. WHERE b IS NOT NULL
can also sometimes be useful
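Sketching Jim's suggestion with the thread's column name b and a hypothetical table and primary key (index whatever columns the NULL-hunting query actually reads):

```sql
-- Tiny partial index that only tracks the rows where b is still NULL:
CREATE INDEX ON t (id) WHERE b IS NULL;
```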
Hi,
thanks for your time and answer. Not treating IS NULL as equality
operator definitely helps me to make more sense out of previous
explains.
--
Best Regard,
Artūras Lapinskas
On Thu, Nov 06, 2014 at 12:23:12PM -0500, Tom Lane wrote:
Artūras Lapinskas writes:
After some
Artūras Lapinskas writes:
> After some more investigation my wild guess would be that when nulls are
> involved in a query postgresql wants to double-check whether they are
> really nulls in the actual relation (maybe because of dead tuples).
No, it's much simpler than that: IS NULL
After some more investigation my wild guess would be that when nulls are
involved in a query postgresql wants to double-check whether they are
really nulls in the actual relation (maybe because of dead tuples). To do
that it has to go and fetch pages from disk and the best way to do that
is to use b
Hello,
I am having some hard time understanding how postgresql handles null
values. As much I understand null values are stored in b-tree as simple
values (put as last or first depending on index). But it seems that
there is something really specific about them as postgresql deliberately
igno
On 13.08.2013 23:47, Daniel Cristian Cruz wrote:
Hello,
I'm trying to simplify a schema, where I had many ranges floating around.
My idea is to put them all in an array field and query like this:
SELECT
event.*
FROM event
JOIN participant_details
USING (participant_id)
WHERE
tsrange(event
I guess this is not a performance question... What kind of question would
it be? Admin, General or SQL?
2013/8/13 Daniel Cristian Cruz
> Hello,
>
> I'm trying to simplify a schema, where I had many ranges floating around.
> My idea is to put them all in an array field and query like this:
>
> S
Hello,
I'm trying to simplify a schema, where I had many ranges floating around.
My idea is to put them all in an array field and query like this:
SELECT
event.*
FROM event
JOIN participant_details
USING (participant_id)
WHERE
tsrange(event.start, event.end) && ANY (participant_details.periods
On Mon, Apr 8, 2013 at 10:02 AM, Mark Davidson wrote:
> Been trying to progress with this today. Decided to setup the database on
> my local machine to try a few things and I'm getting much more sensible
> results and a totally different query plan http://explain.depesz.com/s/KGd.
> In this case t
On Sun, Apr 7, 2013 at 3:22 PM, Mark Davidson wrote:
> Takes a little longer with the INNER join unfortunately. Takes about ~3.5
> minutes, here is the query plan http://explain.depesz.com/s/EgBl.
>
> With the JOIN there might not be a match if the data does not fall within
> one of the areas tha
On Fri, Apr 5, 2013 at 8:51 AM, Mark Davidson wrote:
> Hi All,
>
> Hoping someone can help me out with some performance issues I'm having
> with the INDEX on my database. I've got a database that has a data table
> containing ~55,000,000 rows which have point data and an area table
> containing ~
Hi Jeff,
I've tried this test using the -S flag: './pgbench -c 4 -j 2 -T 600 -S
pgbench'
Desktop gives me
./pgbench -c 4 -j 2 -T 600 -S pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 2
duration: 600 s
numb
Sorry Vasilis, I missed you asking that. I just did './pgbench -i pgbench' and
didn't set any specific values. I can try some specific ones if you can
suggest any.
On 8 April 2013 21:28, Vasilis Ventirozos wrote:
>
>
>
> On Mon, Apr 8, 2013 at 11:18 PM, Mark Davidson wrote:
>
>> Wow my results are
On Mon, Apr 8, 2013 at 12:31 PM, Mark Davidson wrote:
> Thanks for your response Vasillis. I've run pgbench on both machines
> `./pgbench -c 10 -t 1 pgbench` getting 99.800650 tps on my local
> machine and 23.825332 tps on the server so quite a significant difference.
>
These results are alm
On Mon, Apr 8, 2013 at 11:18 PM, Mark Davidson wrote:
> Wow my results are absolutely appalling compared to both of those which is
> really interesting. Are you running postgres 9.2.4 on both instances? Any
> specific configuration changes?
> Thinking there must be something up with my setup to b
Wow my results are absolutely appalling compared to both of those which is
really interesting. Are you running postgres 9.2.4 on both instances? Any
specific configuration changes?
Thinking there must be something up with my setup to be getting such a low
tps compared with you.
On 8 April 2013 21:
-c 10 means 10 clients so that should take advantage of all your cores (see
below)
%Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st
%Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si,
Thanks for your response Vasillis. I've run pgbench on both machines
`./pgbench -c 10 -t 1 pgbench` getting 99.800650 tps on my local
machine and 23.825332 tps on the server so quite a significant difference.
Could this purely be down to the CPU clock speed or is it likely something
else causin
Hello Mark,
PostgreSQL currently doesn't support parallel query, so a faster CPU, even if
it has fewer cores, would be faster for a single query. For benchmarking you
can try pgbench, which you will find in contrib. The execution plan may be
different because of different statistics; have you analy
Been trying to progress with this today. Decided to setup the database on
my local machine to try a few things and I'm getting much more sensible
results and a totally different query plan
http://explain.depesz.com/s/KGd. In this case the query took about a
minute but does sometimes take around 80 se
Takes a little longer with the INNER join unfortunately. Takes about ~3.5
minutes, here is the query plan http://explain.depesz.com/s/EgBl.
With the JOIN there might not be a match if the data does not fall within
one of the areas that is selected in the IN query.
So if we have data id (10) that
Greg Williamson wrote:
>> Thanks for your response. I tried doing what you suggested so
>> that table now has a primary key of
>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '
>> and I've added the INDEX of
>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_i