someCondition
$BODY$
LANGUAGE 'sql';
Thank you in advance,
Mario
--
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]
I can do it quick, I can do it cheap, I can do it well. Pick any two.
---(end of broadcast)---
TIP 1: if posting
for the exact lock I developed, or whether there is
something more I could do to speed things up?
Mario
On Tue, 2006-04-18 at 11:33 -0400, Tom Lane wrote:
Mario Splivalo [EMAIL PROTECTED] writes:
If there is concurrent locking,
you're also running a big risk of deadlock because two processes might
try to lock the same rows in different orders.
I think there is no risk of a deadlock, since
would be why that query is much, much slower when
run from within a function? Is there a way to see the execution plan for
the query inside the function?
Mike
be most happy if someone would scream something
like "four joins, smells like a poor design", because the design is poor, but
the system is in production, and I have to bear with it.
Mario
On Wed, 2006-05-03 at 10:20 -0500, Dave Dutcher wrote:
->  Nested Loop  (cost=0.00..176144.30 rows=57925 width=26)
    (actual time=68.322..529472.026 rows=57925 loops=1)
      ->  Seq Scan on ticketing_codes_played
          (cost=0.00..863.25 rows=57925 width=8) (actual
On Wed, 2006-05-03 at 13:58 -0400, Tom Lane wrote:
Mario Splivalo [EMAIL PROTECTED] writes:
I have a quite large query that takes over a minute to run on my laptop.
The EXPLAIN output you provided doesn't seem to agree with the stated
query. Where'd the service_id = 1102 condition come from?
On Wed, 2006-05-10 at 17:10 -0500, Jim C. Nasby wrote:
On Thu, May 04, 2006 at 04:45:57PM +0200, Mario Splivalo wrote:
Well, here's the problem...
->  Nested Loop  (cost=0.00..176144.30 rows=57925 width=26)
    (actual time=1074.984..992536.243 rows=57925 loops=1)
      (cost=1.00..100011145.43
      rows=747843 width=8) (actual time=0.023..4386.769 rows=747884 loops=1)
Total runtime: 56536.774 ms
(13 rows)
Thank you all in advance,
Mario
and user_subscription_credits_taken
into one table so you don't need the UNION ALL.
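A minimal sketch of that idea, assuming the second table is a matching "credits given" table (its name is cut off in the quote above); one table with a signed amount removes the need for the UNION ALL:

```sql
-- Hypothetical merged table; positive rows are credits given,
-- negative rows are credits taken.
CREATE TABLE user_subscription_credits (
    user_id   integer   NOT NULL,
    amount    integer   NOT NULL,  -- sign encodes given vs. taken
    change_ts timestamp NOT NULL DEFAULT now()
);

-- Balance per user, with no UNION ALL over two tables:
SELECT user_id, sum(amount) AS balance
FROM user_subscription_credits
GROUP BY user_id;
```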
See, that's an idea! :) Thnx, I'll try that.
Is it inappropriate to ask for a rough estimate of when 8.2 will be
available? :)
Mario
On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:
On Sat, Jun 03, 2006 at 10:31:03AM +0100, [EMAIL PROTECTED] wrote:
I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)
One beast will be apache, and the other will be postgres.
I'm using httperf/autobench for
Tom Lane wrote:
Hardly surprising --- a search on the index's lowest-order column would
require scanning practically all of the index. (If you think about the
ordering of the index entries you'll see why.) If this is a typical
query then you need a separate index on transaction_time_commit.
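Tom's suggestion, spelled out as a sketch (the index name here is invented):

```sql
-- A separate single-column index lets range predicates on
-- transaction_time_commit avoid scanning the whole composite index.
CREATE INDEX transactions_time_commit_idx
    ON transactions (transaction_time_commit);
```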
Robert Haas wrote:
On Fri, Feb 6, 2009 at 12:41 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
What's weird about this example is that when he sets enable_seqscan to
off, the bitmap index scan plan is actually substantially faster, even
though it
Mario Splivalo wrote:
Robert Haas wrote:
jura=# set enable_seqscan to false;
SET
jura=# explain analyze select * from transactions where
transaction_time_commit between '2008-01-01' and '2008-01-31 23:59:59';
QUERY PLAN
I have a function, looking like this:
CREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)
RETURNS integer AS
$BODY$
SELECT
COUNT(*)::integer
FROM
v_messages_memo
LEFT JOIN messages_memo_displayed
ON id = message_id
WHERE
Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?
Usually the reason for this is that the planner chooses a different plan
when it has knowledge of the particular value you are searching for than
when
Guillaume Cottenceau wrote:
Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?
Usually the reason for this is that the planner chooses a different plan
when it has knowledge of the particular value you are searching for than
when it does not.
Yes, and since Mario is
Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
Is this difference normal?
It's hard to tell, because you aren't comparing apples to apples.
Try a prepared statement, like
[...cut...]
which should produce results similar to the function. You could
then use explain analyze
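The prepared-statement comparison Tom describes would look roughly like this; the WHERE clause is an assumption, since the function body quoted earlier is cut off:

```sql
-- Parameterized the same way the SQL function sees it, so the planner
-- must choose a plan without knowing the actual value being searched for.
PREPARE memo_queue(integer) AS
    SELECT count(*)::integer
    FROM v_messages_memo
    LEFT JOIN messages_memo_displayed ON id = message_id
    WHERE service_id = $1;            -- assumed predicate

EXPLAIN ANALYZE EXECUTE memo_queue(1);
```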
I have two tables, like this:
Big table:
CREATE TABLE photo_info_data
(
photo_id integer NOT NULL,
field_name character varying NOT NULL,
field_value character varying,
CONSTRAINT photo_info_data_pk PRIMARY KEY (photo_id, field_name)
)
WITH (OIDS=FALSE);
CREATE INDEX
Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
->  Bitmap Heap Scan on photo_info_data u  (cost=39134.84..63740.08
    rows=109024 width=50) (actual time=270.464..270.469 rows=3 loops=2)
      Recheck Cond: ((u.field_name)::text = (t.key)::text)
      ->  Bitmap Index
Scott Marlowe wrote:
It's not really solved, it's just a happy coincidence that the current
plan runs well. In order to keep the query planner making good
choices you need to increase stats target for the field in the index
above. The easiest way to do so is to do this:
alter database mydb
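The command above is cut off; it presumably continues along these lines (a sketch, not Scott's exact statement):

```sql
-- Raise the default statistics target for the whole database...
ALTER DATABASE mydb SET default_statistics_target = 100;
-- ...or, more surgically, for the one column driving the plan choice:
ALTER TABLE photo_info_data ALTER COLUMN field_name SET STATISTICS 1000;
ANALYZE photo_info_data;
```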
Scott Marlowe wrote:
On Mon, Apr 6, 2009 at 6:20 AM, Mario Splivalo
mario.spliv...@megafon.hr wrote:
Scott Marlowe wrote:
It's not really solved, it's just a happy coincidence that the current
plan runs well. In order to keep the query planner making good
choices you need to increase stats
Scott Marlowe wrote:
CREATE INDEX photo_info_data_ix_field_value
ON user_info_data USING btree (field_value);
So, there is an index on (user_id, field_name). Postgres is using the index
for user_id (...WHERE user_id = 12345) but not for field_name (...WHERE
field_name = 'f-spot'). When I add extra
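A btree on (user_id, field_name) can only be used when its leading column is constrained; a sketch of an extra index that would serve the field_name-only predicate (the index name is invented here):

```sql
-- Serves WHERE field_name = 'f-spot' without a user_id qualifier.
CREATE INDEX user_info_data_ix_field_name
    ON user_info_data (field_name);
```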
I have simple database schema, containing just three tables:
samples, drones, drones_history.
Now, those tables hold data for the drones for a simulation. Each
simulation dataset will grow to around 10 GB in around 6 months.
Since the data is not related in any way I was thinking in
The database for monitoring certain drone statuses is quite simple:
CREATE TABLE samples (
sample_id integer not null primary key,
sample_timestamp timestamp not null default now()
);
CREATE TABLE drones (
drone_id integer not null primary key,
drone_log_notice
On 11/28/2010 07:56 PM, Pierre C wrote:
When I remove foreign constraints (drones_history_fk__samples and
drones_history_fk__drones) (I leave the primary key on drones_history)
than that INSERT, even for 50k rows, takes no more than a second.
So, my question is - is there anything I can do to
On 11/28/2010 10:50 PM, Pierre C wrote:
I pasted the DDL at the beginning of my post.
Ah, sorry, didn't see it ;)
The only indexes tables have are the ones created because of PK
constraints. Table drones has around 100k rows. Table drones_history
has around 30M rows. I'm not sure what
On 11/29/2010 08:11 AM, Mark Kirkwood wrote:
On 29/11/10 00:46, Mario Splivalo wrote:
This is the slow part:
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
drone_temperature, drone_pressure)
SELECT * FROM tmpUpdate;
For 100 rows this takes around 2 seconds. For 1000 rows
On 11/30/2010 12:45 PM, Dimitri Fontaine wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
I have simple database schema, containing just three tables:
samples, drones, drones_history.
Now, those tables hold data for the drones for a simulation. Each simulation
dataset will grow to
On 11/29/2010 05:47 PM, Pierre C wrote:
realm_51=# vacuum analyze verbose drones;
INFO: vacuuming public.drones
INFO: scanned index drones_pk to remove 242235 row versions
DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.
INFO: drones: removed 242235 row versions in 1952 pages
DETAIL: CPU
On 11/29/2010 05:53 PM, Pierre C wrote:
Yes, since (sample_id, drone_id) is primary key, postgres created
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id,drone_id) covers sample_id but if you make searches on
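The point being made: a composite index helps only when its leading column is constrained. A sketch of the additional index for drone_id-only searches (name invented):

```sql
-- The PK index on (sample_id, drone_id) cannot serve
-- WHERE drone_id = ... alone; a dedicated index can.
CREATE INDEX drones_history_ix_drone_id
    ON drones_history (drone_id);
```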
On 11/30/2010 05:26 PM, Mladen Gogala wrote:
At the beginning of the load, you should defer all of the deferrable
constraints, setting constraints deferred and issuing the copy statement
within a transaction block, like this:
scott=# begin;
BEGIN
Time: 0.203 ms
scott=# set constraints all
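The pattern being demonstrated, reconstructed as a sketch (the session above is cut off; the file path here is hypothetical):

```sql
BEGIN;
-- Only works for constraints declared DEFERRABLE:
SET CONSTRAINTS ALL DEFERRED;
COPY drones_history FROM '/tmp/drones_history.csv' WITH CSV;
COMMIT;  -- FK checks run once, here, instead of per-row
```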
On 12/01/2010 01:51 AM, Pierre C wrote:
Now I tried removing the constraints from the history table (including
the PK) and the inserts were fast. After a few 'rounds' of inserts I
added the constraints back, and several rounds after that were fast again.
But then it was all the same again. Insert of some 11k
On 12/01/2010 02:47 AM, Joshua D. Drake wrote:
On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:
The database for monitoring certain drone statuses is quite simple:
This is the slow part:
INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,
drone_temperature
On 12/01/2010 05:34 PM, Mladen Gogala wrote:
Mario Splivalo wrote:
Yes, as Mladen Gogala had advised. No noticeable change in performance -
it's still slow :)
Declaring constraints as deferrable doesn't do anything as such, you
have to actually set the constraints deferred to have
On 12/01/2010 10:43 PM, Pierre C wrote:
On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Mladen Gogala mladen.gog...@vmsinfo.com wrote:
There is an operating system which comes with a very decent extent-based
file system and a defragmentation tool,
Hi, all.
I have a query, looking like this:
SELECT
pub_date
FROM
tubesite_object
INNER JOIN tubesite_image
ON tubesite_image.object_ptr_id = tubesite_object.id
WHERE
tubesite_object.site_id = 8
AND tubesite_object.pub_date E'2011-07-12
On 07/12/2011 10:04 PM, Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
Limit (cost=0.00..415.91 rows=21 width=8) (actual
time=11263.089..11263.089 rows=0 loops=1)
->  Nested Loop  (cost=0.00..186249.55 rows=9404 width=8) (actual
time=11263.087..11263.087 rows=0
On 07/13/2011 12:39 AM, Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
On 07/12/2011 10:04 PM, Tom Lane wrote:
What you need to look into is why the estimated join size is 9400 rows
when the actual join size is zero. Are both tables ANALYZEd? Are you
intentionally selecting
On 07/13/2011 02:53 AM, Mario Splivalo wrote:
On 07/13/2011 12:39 AM, Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
On 07/12/2011 10:04 PM, Tom Lane wrote:
What you need to look into is why the estimated join size is 9400 rows
when the actual join size is zero. Are both
I have a fairly simple query:
SELECT some columns
FROM tubesite_image
INNER JOIN tubesite_object
ON (tubesite_image.object_ptr_id = tubesite_object.id)
WHERE
tubesite_object.site_id = 8
ORDER BY
tubesite_object.pub_date ASC LIMIT 21;
That query is having a bad query plan
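A common remedy for this shape of query (equality filter plus ORDER BY ... LIMIT), not something stated in the excerpt itself, is a composite index matching both the filter and the sort:

```sql
-- Lets the planner walk the index in pub_date order for site_id = 8
-- and stop after 21 rows, instead of sorting or probing a poor index.
CREATE INDEX tubesite_object_site_pubdate_ix
    ON tubesite_object (site_id, pub_date);
```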
On 12/06/2011 09:00 PM, Tom Lane wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
I have 8.4.8 on production and 8.4.9 on test, could that explain the
difference in plans chosen?
I'd wonder first if you have the same statistics settings on both.
The big problem here
On 12/06/2011 09:17 PM, Kevin Grittner wrote:
The hash join path must look more expensive on the first machine,
for some reason.
Mario, could you post the result of running this query from both
servers?:
http://wiki.postgresql.org/wiki/Server_Configuration
Sure. Here is from the
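The wiki page Kevin links collects the server's non-default settings; its query is along these lines (paraphrased from memory, details may differ):

```sql
SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');
```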
On 12/06/2011 09:29 PM, Kevin Grittner wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
But both servers develop that estimate for the join size.
[sigh] Those *were* both from the production server. Please show
us the EXPLAIN ANALYZE from the other server.
Huh, right... missed
On 12/07/2011 09:23 AM, Havasvölgyi Ottó wrote:
Thanks, Josh.
The only reason I tried 8.4 first is that it was available for Debian as
a compiled package, so it was simpler for me to install. Anyway, I am going
to test 9.1 too. I will post the results.
If you're using squeeze, you can get