current_date - interval
'1 month';
As the number of rows grows, this query takes longer and longer to execute.
What should I do to improve its performance?
Thank you very much
--
Arnau
---(end of broadcast)---
TIP 3: if posting
..638.00 rows=9289 width=35) (actual
time=0.41..688.34 rows=27867 loops=1)
Total runtime: 730.82 msec
That query is not using the index. Does anybody know what I'm doing wrong?
Thank you very much
--
Arnau
Hi all,
I have the following table:
espsm_asme=# \d statistics_sasme
Table "public.statistics_sasme"
Column | Type | Modifiers
Hi all,
COPY FROM a file with all the ID's to delete, into a temporary
table, and do a joined delete to your main table (thus, only one query).
I already did this, but I have no idea how to write that join.
Could you give me a hint ;-)?
Thank you very much
--
Arnau
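A minimal sketch of the COPY-plus-joined-delete approach suggested above (the table and column names here are hypothetical, and `DELETE ... USING` needs PostgreSQL 8.1 or later):

```sql
-- Load the IDs to delete into a temporary table (one ID per line in the file).
CREATE TEMP TABLE ids_to_delete (id integer);
COPY ids_to_delete FROM '/tmp/ids_to_delete.txt';

-- Joined delete (PostgreSQL 8.1+):
DELETE FROM main_table
USING ids_to_delete
WHERE main_table.id = ids_to_delete.id;

-- On 7.4, an equivalent form:
-- DELETE FROM main_table WHERE id IN (SELECT id FROM ids_to_delete);
```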
Hi all,
Which is the best way to import data into tables? I have to import
9 rows into a column, and doing it as INSERTs takes ages. Would it be
faster with COPY? Is there any other alternative to INSERT/COPY?
Cheers!
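For reference, a sketch of the COPY form (table name is made up; the server-side variant reads a file on the database server and requires superuser, while psql's `\copy` reads a client-side file as a normal user):

```sql
-- Server-side bulk load: one command instead of thousands of INSERTs.
COPY my_table FROM '/tmp/data.txt';

-- Client-side equivalent from psql:
-- \copy my_table from 'data.txt'
```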
in one go. This minimizes the number
of round trips between the client and the server.
Thanks Teemu! Could you paste an example of one of those functions? ;-)
An example of those SELECTs would also be great; I'm not sure I have
completely understood what you mean.
--
Arnau
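Not from Teemu, but a minimal sketch of both ideas (all table, column, and function names here are invented for illustration; the quoted function body is the pre-8.0 style):

```sql
-- One server-side function instead of several client round trips:
CREATE OR REPLACE FUNCTION add_user_to_group(p_user integer, p_group integer)
RETURNS void AS '
BEGIN
    INSERT INTO agenda_users_groups (user_id, group_id)
    VALUES (p_user, p_group);
    RETURN;
END;
' LANGUAGE plpgsql;

-- One SELECT fetching everything at once instead of one query per user:
SELECT u.user_id, g.group_id
FROM agenda_users_groups g
JOIN users u ON u.user_id = g.user_id
WHERE g.group_id IN (1, 2, 3);
```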
a sequential scan and doesn't use the index, and I don't
understand why. Any idea? I have the same in PostgreSQL 8.1 and there it
uses the index :-|
Thanks
--
Arnau
Chris Smith wrote:
On 4/25/06, Arnau [EMAIL PROTECTED] wrote:
Hi all,
I have the following running on postgresql version 7.4.2:
CREATE SEQUENCE agenda_user_group_id_seq
MINVALUE 1
MAXVALUE 9223372036854775807
CYCLE
INCREMENT 1
START 1;
CREATE TABLE AGENDA_USERS_GROUPS
Tom Lane wrote:
Arnau [EMAIL PROTECTED] writes:
Seq Scan on agenda_users_groups (cost=0.00..53108.45 rows=339675
width=8) (actual time=916.903..5763.830 rows=367026 loops=1)
Filter: (group_id = 9::numeric)
Total runtime: 7259.861 ms
(3 rows)
espsm_moviltelevision=# select count
time=151.298..151.298 rows=367026 loops=1)
Index Cond: (group_id = 9::numeric)
Total runtime: 1527.039 ms
(5 rows)
Thanks
--
Arnau
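One thing worth trying here (a sketch, assuming group_id really is declared NUMERIC): NUMERIC keys are larger and slower to compare than integer types, so matching the literal's type explicitly, or moving the column to an integer type, can help:

```sql
-- Compare against an explicitly typed literal so the planner can match
-- the index's operator class:
SELECT count(*) FROM agenda_users_groups WHERE group_id = 9::numeric;

-- Longer term, an integer key is smaller and faster than NUMERIC:
ALTER TABLE agenda_users_groups
    ALTER COLUMN group_id TYPE bigint;  -- 8.0+; on 7.4, recreate the table
```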
rows=1 width=78) (actual time=2.262..2.264 rows=1
loops=150)
Index Cond: (outer.user_id = u.user_id)
Total runtime: 76853.504 ms
(16 rows)
Do you think I could do anything to speed it up?
Cheers!!
--
Arnau
many rows have been deleted per date. I was thinking of
creating a function; any recommendations?
Thank you very much
--
Arnau
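A sketch of such a function (the table and column names are assumed, so adjust them to your schema); in PL/pgSQL, `GET DIAGNOSTICS ... ROW_COUNT` reports how many rows the last statement affected:

```sql
CREATE OR REPLACE FUNCTION purge_old_rows(p_before timestamp)
RETURNS integer AS '
DECLARE
    v_deleted integer;
BEGIN
    DELETE FROM statistics_sasme WHERE timestamp_in < p_before;
    GET DIAGNOSTICS v_deleted = ROW_COUNT;
    RETURN v_deleted;
END;
' LANGUAGE plpgsql;

-- Returns the number of rows it deleted:
SELECT purge_old_rows(current_date - interval '1 month');
```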
How can I know how much work_mem a query needs?
Regards
--
Arnau
Hi all,
In a previous post, Ron Peacetree suggested checking how much work_mem
a query needs. How can that be done?
Thanks all
--
Arnau
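One practical way (a sketch, using an invented query): run the query under EXPLAIN ANALYZE while varying the setting for the current session only, and watch when the runtime stops improving; note that the value applies per sort/hash step, not per query, and that on 7.4 the setting is still called sort_mem:

```sql
SET work_mem = 32768;  -- kilobytes; called sort_mem on PostgreSQL 7.4
EXPLAIN ANALYZE
SELECT user_id, count(*) FROM agenda_users_groups GROUP BY user_id;
RESET work_mem;
```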
elements (tables, rules, ...), or is there another approach that doesn't
require external tools like cron, using only PostgreSQL?
--
Arnau
definition? I have checked the PostgreSQL documentation but I haven't
been able to find anything about it.
Thanks
--
Arnau
this?
--
Arnau
Hi Bill,
In response to Arnau [EMAIL PROTECTED]:
I have PostgreSQL 7.4.2 running on Debian, and I'm seeing the oddest
behaviour I've ever seen.
I do the following queries:
espsm_asme=# select customer_app_config_id, customer_app_config_name
from customer_app_config where
and minimum numbers of rows
Is there anything similar in PostgreSQL? The idea behind this is: how
can I have tables in PostgreSQL that I can query very often, something
like every few seconds, and get results very fast without overloading
the postmaster?
Thank you very much
--
Arnau
Hi Josh,
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.
If you're only
Hi Ansgar ,
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is how
I can do in PostgreSQL to have tables where I can query on them very
often something like every few seconds and get results very fast
without overloading
Hi Thor,
Thor-Michael Støre wrote:
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this
is how I can do in PostgreSQL to have tables where I can query
on them very often something like every few seconds and get
results very fast
)
As you can see the time differences are very big:
Timestamp: 318.328 ms
int8 index: 120.804 ms
double precision: 57.065 ms
Is this normal? Am I doing anything wrong?
As a rule of thumb, is it better to store epochs than timestamps?
Thank you very much
--
Arnau
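For reference, a sketch of how the three representations can be compared on the same data (the table is hypothetical); `extract(epoch FROM ...)` yields a double precision epoch that can also be cast to int8:

```sql
CREATE TABLE events (
    ts       timestamp,
    epoch_i8 bigint,
    epoch_dp double precision
);
INSERT INTO events (ts) VALUES (now());
-- Derive both epoch columns from the timestamp:
UPDATE events SET epoch_i8 = extract(epoch FROM ts)::bigint,
                  epoch_dp = extract(epoch FROM ts);
-- One index per representation, so each can be benchmarked:
CREATE INDEX events_ts_idx ON events (ts);
CREATE INDEX events_i8_idx ON events (epoch_i8);
CREATE INDEX events_dp_idx ON events (epoch_dp);
```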
be worse.
I have been following the list, and one piece of advice that appears
most often is to keep your DB in memory. So if I have just one instance
instead of hundreds, will the performance be better?
Thank you very much
--
Arnau
Hi Tom,
Arnau [EMAIL PROTECTED] writes:
I have an application that works with multiple customers. Thinking about
scalability, we are considering the following approaches:
- Create a separate database instance for each customer.
- We think that each customer's DB will be quite small
Tom Lane wrote:
Arnau [EMAIL PROTECTED] writes:
Can you instead run things with one postmaster per machine and one
database per customer within that instance? From a performance
perspective this is likely to work much better.
What I meant is to have just one postmaster per server
if this will be the maximum memory used by PostgreSQL, or if it will
take more memory in addition to this. Because if shared_buffers is the
maximum, I could raise that value even more.
Cheers!
--
Arnau
is fired from the user interface of the application.
Do you have any idea how I could improve the performance of this?
Thanks all
--
Arnau
Hi Michael,
Michael Glaesemann wrote:
On Jul 6, 2007, at 9:42 , Arnau wrote:
I have the following scenario: I have users and groups, where a user
can belong to n groups, and a group can have n users. A user must
belong to at least one group. So when I delete a group I must check
all
--
Arnau
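A sketch of the n:m schema described above, with a check for users who would be left without a group (all names are hypothetical; the foreign keys with ON DELETE CASCADE clean up the join table automatically when a group is deleted):

```sql
CREATE TABLE users  (user_id  integer PRIMARY KEY);
CREATE TABLE groups (group_id integer PRIMARY KEY);
CREATE TABLE users_groups (
    user_id  integer REFERENCES users  ON DELETE CASCADE,
    group_id integer REFERENCES groups ON DELETE CASCADE,
    PRIMARY KEY (user_id, group_id)
);

-- Users who belong only to group 9 (i.e. would be orphaned by deleting it):
SELECT u.user_id
FROM users u
WHERE NOT EXISTS (SELECT 1 FROM users_groups ug
                  WHERE ug.user_id = u.user_id AND ug.group_id <> 9);
```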
ON t.transaction_id = s.transaction_id
WHERE
t.timestamp_in = to_timestamp('20070101', 'YYYYMMDD')
GROUP BY date, t.type_id;
I think this could be sped up if the index idx_putrnsctns_tstampin
(the index over the timestamp) could be used, but I haven't been able to
make that happen. Any suggestion?
Thanks all
--
Arnau
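If the intent is to select a date range rather than one exact instant, comparing the raw timestamp_in column against constant bounds usually lets the planner use the plain index on it. A sketch, with the table names assumed:

```sql
-- Half-open range: everything on 2007-01-01, index-friendly.
SELECT date_trunc('day', t.timestamp_in) AS date, t.type_id, count(*)
FROM transactions t
JOIN statistics s ON t.transaction_id = s.transaction_id
WHERE t.timestamp_in >= to_timestamp('20070101', 'YYYYMMDD')
  AND t.timestamp_in <  to_timestamp('20070102', 'YYYYMMDD')
GROUP BY date, t.type_id;
```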
/timestamps changes. I mean, if
instead of being stored as YYYYMMDD it is stored as DDMMYYYY, would we
have to change all the queries? I thought the
to_char/to_date/to_timestamp functions were intended for these purposes.
--
Arnau
than 60%.
I know it's a problem with a very big scope, but could you give me a
hint about where I should look?
Thank you very much
--
Arnau