On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
I know you
On Thu, Dec 19, 2013 at 4:40 PM, Steve Crawford
wrote:
> On 12/18/2013 12:12 PM, prashant Pandey wrote:
>>
>> Could you tell me each and every hardware parameter and OS parameter that
>> performance depends on?
>> I need the complete list of all the required parameters and how to extract
>> them
I just ran into an interesting issue on Postgres 8.4. I have a database
with about 3 months of data, and when I run the following query:
SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY
time_t;
EXPLAIN shows that it's doing a sort and then a GroupAggregate. There will
only be ~90
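A minimal way to test whether the planner will switch to a HashAggregate for this query (a sketch, not from the thread; the work_mem value is an arbitrary placeholder):
-- With only ~90 distinct days, a HashAggregate is usually cheaper than
-- Sort + GroupAggregate, but it is only chosen when the planner believes
-- the hash table fits in work_mem.
SET work_mem = '64MB';   -- session-local; pick a value your RAM allows
EXPLAIN ANALYZE
SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*)
FROM mytable
GROUP BY time_t;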
On 12/18/2013 12:12 PM, prashant Pandey wrote:
Could you tell me each and every hardware parameter and OS parameter that
performance depends on?
I need the complete list of all the required parameters and how to
extract them on Linux through system calls and files.
Please it will be highly gre
On Thu, Dec 19, 2013 at 10:44 AM, Shaun Thomas wrote:
> On 12/19/2013 11:00 AM, Dave Johansen wrote:
>
> When I run pgbench in "SELECT only" after doing "-i -s 2000" I get what
>> appears to be good performance (60k-70k tps) but if I initialize a new
>> database with "-i -s 4000" the tps drops to
On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given
>>> entry_date are appended to this table in an over
Marc Cousin wrote:
> Then we insert missing paths. This is one of the plans that fail
> insert into path (path)
> select path from batch
> where not exists
> (select 1 from path where path.path=batch.path)
> group by path;
I know you said you wanted to focus on a different q
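For comparison only (not a claim about what the thread recommends), the same anti-join can be written with a LEFT JOIN, with DISTINCT playing the role of the GROUP BY:
-- Equivalent formulation of the insert above, shown only as a comparison point.
INSERT INTO path (path)
SELECT DISTINCT b.path
FROM batch b
LEFT JOIN path p ON p.path = b.path
WHERE p.path IS NULL;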
On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
[...]
> Table rt_h_nbbo contains several hundred million rows. All rows for a given
> entry_date are appended to this table in an overnight process every night -
> on the order of several million rows per day.
[...]
> I perceive an ineffici
On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
> Hello,
>
> I've got a very simple table with a very simple SELECT query, but it takes
> longer on the initial run than I'd like, so I want to see if there is a
> strategy to optimize this.
>
> Table rt_h_nbbo contains several hundred millio
Sev Zaslavsky wrote:
I want to agree with everything Shaun said and add a tiny bit.
> Does loading 24Gb of data in 21 sec seem "about right"?
It's a little on the slow side. You said 1634 page reads. At 9 ms
per read that would be 14.7 seconds. But I'm basing the 9 ms per
page read on my Lin
On 19/12/2013 19:33, Jeff Janes wrote:
> QUERY PLAN
> ----------------------------------------------------------------------
> Nested Loop (cost=0.56..4001768.1
>
> QUERY PLAN
> ----------------------------------------------------------------------
> Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual
> time=2.303..15371.
On 12/12/2013 11:30 AM, Sev Zaslavsky wrote:
_First question_ is: Does loading 24Gb of data in 21 sec seem "about
right" (hardware specs at bottom of email)?
That's actually pretty good. 24GB is a lot of data to process.
_Second question_: Is it possible to tell postgres to physically store
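One way to influence physical row order is CLUSTER; a sketch only (the index name is hypothetical, and this is not necessarily what the thread went on to recommend):
-- CLUSTER rewrites rt_h_nbbo in index order, so rows sharing an entry_date
-- become physically adjacent. It takes an exclusive lock and is a one-time
-- rewrite: newly appended rows are not kept in order automatically.
CLUSTER rt_h_nbbo USING rt_h_nbbo_entry_date_idx;  -- hypothetical index name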
On 12/17/2013 08:48 PM, Kai Sellgren wrote:
This is your select:
SELECT *
FROM "Log"
LEFT JOIN "NewsArticle" ON "NewsArticle".id = "Log"."targetId" AND
"Log"."targetType" = 'NewsArticle'
ORDER BY "Log"."createdAt" DESC
LIMIT 10
This is your index:
CREATE INDEX "Log_targetId_targetType_idx"
On 12/19/2013 11:00 AM, Dave Johansen wrote:
When I run pgbench in "SELECT only" after doing "-i -s 2000" I get what
appears to be good performance (60k-70k tps) but if I initialize a new
database with "-i -s 4000" the tps drops to 4k-7k. Is this order of
magnitude drop expected? Or is there som
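A rough way to check whether the scale-4000 database still fits in memory (a sketch; pgbench data is very roughly 15 MB per scale unit, so about 30 GB at -s 2000 and 60 GB at -s 4000):
-- If the database no longer fits in RAM / shared_buffers, a SELECT-only
-- pgbench run becomes I/O bound and tps can drop by an order of magnitude.
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;
SHOW shared_buffers;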
2013/12/18 kolsze...@gmail.com
> Thanks for your answer.
>
> My example is trivial because I want to show strange (to me) Postgres
> behavior when dealing with primary keys (an extreme example); in a real
> situation
> the user enters a search condition, e.g. "Panas", and this generates a query
> ...
> where gd.ot
On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen wrote:
>>
> On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe
> wrote:
>> I'll add that you can use asymmetric partitioning if you tend to do a
>> lot of more fine-grained queries on recent data and bigger roll-ups
>> on older ones. I.e. partition by mo
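A minimal sketch of inheritance-based partitioning, which was the standard approach at the time (table and column names are invented for illustration):
-- Parent table plus one child per month; constraint_exclusion lets queries
-- that filter on ts skip the children whose CHECK constraint cannot match.
CREATE TABLE measurements (ts timestamptz NOT NULL, value numeric);

CREATE TABLE measurements_2013_12 (
    CHECK (ts >= '2013-12-01' AND ts < '2014-01-01')
) INHERITS (measurements);

CREATE TABLE measurements_2014_01 (
    CHECK (ts >= '2014-01-01' AND ts < '2014-02-01')
) INHERITS (measurements);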
I'm working on setting up a large database (or at least what I consider to
be a large one with several tables having 10-20 million records inserted
per day), and I've been using pgbench to verify that the hardware and
database are configured in an optimal manner.
When I run pgbench in "SELECT only
Sorry for the delayed response. We had some hardware/configuration issues
that appear to be solved now, so now we're starting to actually play with
modifying the database.
On Sat, Dec 7, 2013 at 1:29 PM, Jeff Janes wrote:
> On Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen wrote:
>
>> I'm managing a d
Thanks for your answer.
My example is trivial because I want to show strange (to me) Postgres
behavior when dealing with primary keys (an extreme example); in a real situation
the user enters a search condition, e.g. "Panas", and this generates a query
...
where gd.other_code like 'Panas%' OR g.code like 'Panas%'
.
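A sketch of indexes that an anchored pattern like 'Panas%' can use (gd and g are only aliases in the fragment above, so the table and index names here are placeholders; use varchar_pattern_ops for varchar columns):
-- A plain btree only serves LIKE 'Panas%' under the C locale; with any other
-- locale the index must be created with the pattern_ops operator class.
CREATE INDEX goods_detail_other_code_pattern_idx
    ON goods_detail (other_code text_pattern_ops);  -- placeholder table name
CREATE INDEX goods_code_pattern_idx
    ON goods (code text_pattern_ops);               -- placeholder table name
Because the OR spans two joined tables, the planner often cannot use both indexes in one plan; rewriting the query as a UNION of the two single-condition cases is the usual workaround.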
Hi,
I have a long query that returns an extremely large result set. In my
application, I would like to report the results as they come in, so I am
creating a cursor and fetching 1000 rows at a time. After I declare the cursor
(declare C cursor for), I call "fetch 1000 from C" over and over. Usu
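The pattern being described, spelled out (table and column names are placeholders):
BEGIN;
DECLARE C CURSOR FOR
    SELECT * FROM big_table ORDER BY some_col;  -- the long-running query
FETCH 1000 FROM C;   -- repeat until it returns fewer than 1000 rows
FETCH 1000 FROM C;
CLOSE C;
COMMIT;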
Could you tell me each and every hardware parameter and OS parameter that
performance depends on?
I need the complete list of all the required parameters and how to extract
them on Linux through system calls and files.
I would be very grateful if you could do so.
Thank you and regards.
Hi,
I'm new to PostgreSQL and trying to run this query:
SELECT *
FROM "Log"
LEFT JOIN "NewsArticle" ON "NewsArticle".id = "Log"."targetId" AND
"Log"."targetType" = 'NewsArticle'
ORDER BY "Log"."createdAt" DESC
LIMIT 10
Basically I'm finding the last 10 log entries, which point (targetType) to
ne
Hello,
I've got a very simple table with a very simple SELECT query, but it
takes longer on the initial run than I'd like, so I want to see if there
is a strategy to optimize this.
Table rt_h_nbbo contains several hundred million rows. All rows for a
given entry_date are appended to this
Hello,
I've got a very simple table with a very simple SELECT query, but it
takes longer on the initial run than I'd like, so I want to see if there
is a strategy to optimize this.
Table rt_h_nbbo contains several hundred million rows. All rows for a
given entry_date are appended to this
> There's quite a lot of stuff that a PG process will cache in local memory
> once it's acquired the info, for example:
> - relcache (relation descriptors)
> - catcache (system catalog entries)
> - compiled trees for plpgsql functions
>
> 300mb worth of that stuff seems on the high side, but perhap
Hi,
I'm having something I feel is a bit of a limitation of the optimizer (or
something I don't understand :) ).
Sorry, this is a rather long mail.
I have a workaround for the problem below, but I don't like cheating the
optimizer for no good reason.
First a little bit of context, because the
Johann Spies writes:
> I would appreciate some help optimising the following query:
It's a mistake to imagine that indexes are going to help much with
a join of this size. Hash or merge join is going to be a lot better
than nestloop. What you need to do is make sure those will perform
as well a
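Two settings commonly used to see whether a hash or merge join plan is available and how it performs (a diagnostic sketch only, not production advice; the work_mem value is arbitrary):
SET enable_nestloop = off;  -- diagnosis only: steer the planner toward hash/merge joins
SET work_mem = '256MB';     -- session-local; lets hash tables and sorts stay in memory
-- then re-run EXPLAIN ANALYZE on the query below and compare the plans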
I would appreciate some help optimising the following query:
with
subject_journals as (
select A.sq
from isi.rissue A,
isi.rsc_joern_link C
WHERE
C.sc_id in
('d0963875-e438-4923-b3fa-f462e8975221',