I would appreciate some help optimising the following query:
with
subject_journals as (
select A.sq
from isi.rissue A,
     isi.rsc_joern_link C
WHERE
C.sc_id in
('d0963875-e438-4923-b3fa-f462e8975221',
Johann Spies johann.sp...@gmail.com writes:
I would appreciate some help optimising the following query:
It's a mistake to imagine that indexes are going to help much with
a join of this size. Hash or merge join is going to be a lot better
than nestloop. What you need to do is make sure those
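A quick way to test that advice is to see whether giving the hash join more room changes the plan. A minimal sketch, session-local; the join condition and the work_mem value are illustrative assumptions, not from the original thread:

```sql
-- Does more work_mem flip the nested loop to a hash or merge join?
SET work_mem = '256MB';          -- illustrative value; size to your RAM

EXPLAIN (ANALYZE)
SELECT a.sq
FROM   isi.rissue a
JOIN   isi.rsc_joern_link c ON c.issue_id = a.id   -- join column assumed
WHERE  c.sc_id = 'd0963875-e438-4923-b3fa-f462e8975221';

RESET work_mem;
```

If the hash join plan wins, the fix is usually a larger work_mem (and fresh statistics via ANALYZE), not more indexes.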
Hi,
I'm running into what I feel is a bit of a limitation of the optimizer (or
something I don't understand :) ).
Sorry, this is a rather long mail.
I have a workaround for the problem below, but I don't like cheating the
optimizer for no good reason.
First a little bit of context, because
There's quite a lot of stuff that a PG process will cache in local memory
once it's acquired the info, for example:
- relcache (relation descriptors)
- catcache (system catalog entries)
- compiled trees for plpgsql functions
300MB worth of that stuff seems on the high side, but perhaps you
Hello,
I've got a very simple table with a very simple SELECT query, but it
takes longer on the initial run than I'd like, so I want to see if there
is a strategy to optimize this.
Table rt_h_nbbo contains several hundred million rows. All rows for a
given entry_date are appended to this
Hi,
I'm new to PostgreSQL and trying to run this query:
SELECT *
FROM Log
LEFT JOIN NewsArticle ON NewsArticle.id = Log.targetId AND
Log.targetType = 'NewsArticle'
ORDER BY Log.createdAt DESC
LIMIT 10
Basically I'm finding the last 10 log entries, which point (targetType) to
news articles.
The
Could you tell me each and every hardware parameter and OS parameter the
performance depends on?
I need the complete list of all the required parameters and how to extract
them on Linux through system calls and files.
It would be greatly appreciated if you could do so.
Thank you and regards.
Hi,
I have a long query that returns an extremely large result set. In my
application, I would like to report the results as they come in, so I am
creating a cursor and fetching 1000 rows at a time. After I declare the cursor
(declare C cursor for), I call fetch 1000 from C over and over.
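The declare-and-fetch loop described above looks roughly like this (the cursor name C is the poster's; the query is a placeholder):

```sql
BEGIN;

DECLARE C CURSOR FOR
    SELECT * FROM big_table ORDER BY id;   -- placeholder query

FETCH 1000 FROM C;   -- repeat until a fetch returns fewer than 1000 rows
FETCH 1000 FROM C;

CLOSE C;
COMMIT;
```

One relevant planner detail: for a cursor, PostgreSQL optimizes for fast startup (governed by cursor_tuple_fraction), so the plan can differ from running the same query without a cursor.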
Thanks for your answer.
My example is trivial because I want to show Postgres's (to me) strange
behavior when dealing with primary keys (an extreme example). In a real situation
the user enters a search condition, e.g. "Panas", and this generates a query
...
where gd.other_code like 'Panas%' OR g.code like 'Panas%'
..
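An OR spanning columns of two different tables often defeats index use; a common rewrite is to split it into a UNION so each arm can use its own index. A sketch only, with table names guessed from the aliases gd and g in the snippet:

```sql
-- Each arm can use its own btree index (text_pattern_ops or C collation
-- is needed for LIKE 'Panas%' prefix matching on the indexed column).
SELECT g.id
FROM   goods g
JOIN   goods_detail gd ON gd.goods_id = g.id
WHERE  gd.other_code LIKE 'Panas%'
UNION
SELECT g.id
FROM   goods g
WHERE  g.code LIKE 'Panas%';
```

UNION (rather than UNION ALL) also removes duplicates that the original OR would have collapsed.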
Sorry for the delayed response. We had some hardware/configuration issues
that appear to be solved now, so now we're starting to actually play with
modifying the database.
On Sat, Dec 7, 2013 at 1:29 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Thu, Dec 5, 2013 at 7:36 AM, Dave Johansen
I'm working on setting up a large database (or at least what I consider to
be a large one with several tables having 10-20 million records inserted
per day), and I've been using pgbench to verify that the hardware and
database are configured in an optimal manner.
When I run pgbench in SELECT only
On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen davejohan...@gmail.com wrote:
On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
I'll add that you can use asymmetric partitioning if you tend to do a
lot of fine-grained queries on recent data and more big roll up
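Asymmetric partitioning here means fine-grained partitions for recent data and coarse ones for history. On current PostgreSQL (10+) with declarative partitioning this might look like the sketch below; the 2013-era equivalent used table inheritance with CHECK constraints. All names are illustrative:

```sql
CREATE TABLE events (
    event_time timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (event_time);

-- Daily partitions for recent, frequently queried data...
CREATE TABLE events_2013_12_19 PARTITION OF events
    FOR VALUES FROM ('2013-12-19') TO ('2013-12-20');

-- ...and one coarse monthly partition for older, roll-up-only data.
CREATE TABLE events_2013_11 PARTITION OF events
    FOR VALUES FROM ('2013-11-01') TO ('2013-12-01');
```

Queries on recent days then touch one small partition, while monthly roll-ups scan one large one instead of thirty small ones.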
2013/12/18 kolsze...@gmail.com kolsze...@gmail.com
Thanks for your answer.
My example is trivial because I want to show Postgres's (to me) strange
behavior when dealing with primary keys (an extreme example). In a real
situation
the user enters a search condition, e.g. "Panas", and this generates a query
...
On 12/19/2013 11:00 AM, Dave Johansen wrote:
When I run pgbench in SELECT only after doing -i -s 2000 I get what
appears to be good performance (60k-70k tps) but if I initialize a new
database with -i -s 4000 the tps drops to 4k-7k. Is this order of
magnitude drop expected? Or is there
On 12/17/2013 08:48 PM, Kai Sellgren wrote:
This is your select:
SELECT *
FROM Log
LEFT JOIN NewsArticle ON NewsArticle.id = Log.targetId AND
Log.targetType = 'NewsArticle'
ORDER BY Log.createdAt DESC
LIMIT 10
This is your index:
CREATE INDEX Log_targetId_targetType_idx
ON Log
USING
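The quoted index covers (targetId, targetType), but the query sorts by createdAt and stops after 10 rows, so an index on the sort column is what usually helps with this shape of query. A hedged suggestion, not from the original (truncated) reply; the quoted identifiers assume the camelCase names were created with quotes:

```sql
-- Lets the planner walk Log newest-first and stop after 10 rows,
-- instead of sorting the whole table.
CREATE INDEX log_createdat_idx ON "Log" ("createdAt" DESC);
```

With a LEFT JOIN the 10-row limit applies to Log alone, so once the sort is index-supported the join cost becomes 10 index probes into NewsArticle.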
On 12/12/2013 11:30 AM, Sev Zaslavsky wrote:
_First question_ is: Does loading 24GB of data in 21 sec seem about
right (hardware specs at bottom of email)?
That's actually pretty good. 24GB is a lot of data to process.
_Second question_: Is it possible to tell postgres to physically store
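The second question is cut off, but if the goal is to store each entry_date's rows physically together, the standard (if heavyweight) tool is CLUSTER, which rewrites the table in index order. A sketch, assuming an index on entry_date exists; the index name is invented:

```sql
-- One-time rewrite in index order; takes an ACCESS EXCLUSIVE lock,
-- and the ordering is not maintained for later inserts.
CLUSTER rt_h_nbbo USING rt_h_nbbo_entry_date_idx;
```

Since rows are appended per entry_date anyway, the table may already be close to this order; pg_stats.correlation for entry_date will tell you before paying for the rewrite.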
QUERY PLAN
--
Nested Loop (cost=0.56..4001768.10 rows=479020 width=26) (actual
time=2.303..15371.237
On 19/12/2013 19:33, Jeff Janes wrote:
QUERY PLAN
--
Nested Loop (cost=0.56..4001768.10
Marc Cousin cousinm...@gmail.com wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
I know you said you wanted to focus
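The quoted insert can also be written with explicit aliases and DISTINCT in place of GROUP BY (equivalent for a single column, and arguably easier to read as an anti-join). A rewrite sketch, not the poster's fix:

```sql
-- Insert paths present in batch but missing from path, deduplicated.
INSERT INTO path (path)
SELECT DISTINCT b.path
FROM   batch b
WHERE  NOT EXISTS (
    SELECT 1 FROM path p WHERE p.path = b.path
);
```

The planner will typically turn the NOT EXISTS into a hash anti-join when path's set of keys fits in work_mem; the failing plans in the thread are worth comparing against that shape.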
On Thu, Dec 19, 2013 at 10:44 AM, Shaun Thomas stho...@optionshouse.comwrote:
On 12/19/2013 11:00 AM, Dave Johansen wrote:
When I run pgbench in SELECT only after doing -i -s 2000 I get what
appears to be good performance (60k-70k tps) but if I initialize a new
database with -i -s 4000 the
On 12/18/2013 12:12 PM, prashant Pandey wrote:
Could you tell me each and every hardware parameter and OS parameter
the performance depends on?
I need the complete list of all the required parameters and how to
extract them on Linux through system calls and files.
It would be greatly
I just ran into an interesting issue on Postgres 8.4. I have a database
with about 3 months of data, and when I do the following query:
SELECT DATE_TRUNC('day', time) AS time_t, COUNT(*) FROM mytable GROUP BY
time_t;
EXPLAIN shows that it's doing a sort and then a GroupAggregate. There will
only be
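The planner will use a HashAggregate instead of sort-plus-GroupAggregate when it expects the hash table of groups to fit in work_mem; with only ~90 distinct days a misestimate of the expression's distinct-count is the usual suspect. A quick session-local experiment (the value is illustrative):

```sql
SET work_mem = '128MB';   -- illustrative; ~90 day-groups need far less

EXPLAIN
SELECT date_trunc('day', time) AS time_t, count(*)
FROM   mytable
GROUP  BY time_t;

RESET work_mem;
```

If the plan still sorts, the estimated number of groups (visible in the EXPLAIN row count on the aggregate node) is likely wildly high, since 8.4 has no statistics on the date_trunc expression itself.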
On Thu, Dec 19, 2013 at 4:40 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 12/18/2013 12:12 PM, prashant Pandey wrote:
Could you tell me each and every hardware parameter and OS parameter the
performance depends on?
I need the complete list of all the required parameters and
On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin cousinm...@gmail.com wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group