Hello
what is best for a char field with fewer than 1000 characters:
a text field or a varchar(1000)?
thanks
On Mon, Jan 09, 2006 at 11:58:19AM +0100, TNO wrote:
what is best for a char field with fewer than 1000 characters:
a text field or a varchar(1000)?
They will be equivalent. text and varchar are the same type internally -- the
only differences are that varchar can have a length (but does not
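A minimal sketch of the equivalence (hypothetical table name):

```sql
-- text and varchar share the same internal representation; varchar(n)
-- merely adds a length check on input.
CREATE TABLE t (
    a text,             -- unlimited length
    b varchar(1000)     -- rejects values longer than 1000 characters
);
INSERT INTO t VALUES ('hello', 'hello');          -- both accepted
-- INSERT INTO t (b) VALUES (repeat('x', 1001));  -- would fail: value too long
```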
Hello gentlemen,
Although this is my first post on the list, I am a fairly experienced PostgreSQL
programmer. I am writing an ERP application suite using PostgreSQL as the
preferred DBMS. Let me state that the SQL DDL is automatically generated by a
CASE tool from an ER model. The generated
On 1/8/06, Ron [EMAIL PROTECTED] wrote:
snip
Among the other tricks having lots of RAM allows: If some of your tables are Read Only or VERY rarely written to, you can preload them at boot time and make them RAM resident using the /etc/tmpfs trick.
What is the /etc/tmpfs trick?
-K
Hi Alessandro,
Nested Loop (cost=0.00..1017.15 rows=1 width=1146) (actual
time=258.648..258.648 rows=0 loops=1)
->  Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536)
(actual time=0.065..51.617 rows=12036 loops=1)
Filter: ((id_ente = 'dmd'::text) AND (allarme IS NULL))
On 1/9/06, Alessandro Baretta [EMAIL PROTECTED] wrote:
Hello gentlemen,
Although this is my first post on the list, I am a fairly experienced
PostgreSQL
programmer. I am writing an ERP application suite using PostgreSQL as the
preferred DBMS. Let me state that the SQL DDL is automatically
On 1/9/06, Kelly Burkhart [EMAIL PROTECTED] wrote:
On 1/8/06, Ron [EMAIL PROTECTED] wrote:
snip
Among the other tricks having lots of RAM allows:
If some of your tables are Read Only or VERY rarely written to, you
can preload them at boot time and make them RAM resident using the
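For what it's worth, the trick usually refers to something like the following sketch (mount point and size are hypothetical; anything in tmpfs vanishes on reboot, so it only suits genuinely read-only tables reloaded at boot):

```
# mount a RAM-backed filesystem sized to hold the tables
mount -t tmpfs -o size=2g tmpfs /mnt/pgram
# then, in psql (8.0+):  CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
# and place the read-only tables in that tablespace at boot time.
```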
Dear Virag,
AFAIK aggregates can't use indexes in postgres (at least not before 8.1,
which can use an index for min and max, IIRC).
Also, I don't think you need to exactly determine the trace_id. Try this one
(OTOH; might be wrong):
select DISTINCT ON (a.trace_id, a.seq_no) -- See below
b.gc_minor
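On pre-8.1 releases, the standard workaround for an index-assisted min/max is the ORDER BY ... LIMIT rewrite (sketch; table and column names are only illustrative):

```sql
-- instead of: SELECT max(seq_no) FROM trace;   (full scan before 8.1)
SELECT seq_no FROM trace ORDER BY seq_no DESC LIMIT 1;  -- can walk the index
```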
Matteo Beccati wrote:
Hi Alessandro,
Nested Loop (cost=0.00..1017.15 rows=1 width=1146) (actual
time=258.648..258.648 rows=0 loops=1)
->  Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536)
(actual time=0.065..51.617 rows=12036 loops=1)
Filter: ((id_ente = 'dmd'::text)
On Jan 8, 2006, at 1:42 PM, Luke Lonergan wrote:
Have you tested the underlying filesystem for its performance?
Run this:
time bash -c 'dd if=/dev/zero of=/my_file_system/bigfile bs=8k count=<your_memory_size_in_GB * 25> && sync'
This is a 2-disk RAID0
[EMAIL PROTECTED] /opt/alt-2]#
Alessandro Baretta [EMAIL PROTECTED] writes:
Matteo Beccati wrote:
Are you sure that you recently ANALYZED the table ubicazione? If so,
try to increase statistics for the id_ente column.
No, this is not the problem. I increased the amount of statistics with ALTER
TABLE ... SET STATISTICS
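For reference, the full incantation looks roughly like this (the target of 1000 is just an example value):

```sql
ALTER TABLE ubicazione ALTER COLUMN id_ente SET STATISTICS 1000;
ANALYZE ubicazione;  -- re-analyze so the new target actually takes effect
```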
Tom Lane wrote:
Alessandro Baretta [EMAIL PROTECTED] writes:
Matteo Beccati wrote:
Are you sure that you recently ANALYZED the table ubicazione? If so,
try to increase statistics for the id_ente column.
No, this is not the problem. I increased the amount of statistics with ALTER
TABLE
Question,
How exactly do Postgres and Linux use the memory?
I have several databases that have multi-GB indexes on very large
tables. On our current servers, the indexes can fit into memory
but not the data (servers have 8 - 12 GB). However, my boss is
wanting to get new servers for me but does
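In broad terms, Postgres keeps pages in shared_buffers and relies on the Linux page cache for everything else; effective_cache_size merely tells the planner how much OS cache to expect. A rough 8.x-era postgresql.conf sketch (values illustrative, not tuning advice):

```
shared_buffers = 50000          # 8kB pages; ~400MB dedicated to Postgres
effective_cache_size = 786432   # 8kB pages; planner assumes ~6GB of OS cache
```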
Alessandro Baretta [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm curious to see how many rows the planner thinks this will produce,
and whether it will use the index or not.
dmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE
articolo.xdbs_modified > '2006-01-08 18:25:00+01';
Peter,
On 1/9/06 9:23 AM, peter royal [EMAIL PROTECTED] wrote:
This is a 2-disk RAID0
Your 2-disk results look fine - what about your 8-disk results?
Given that you want to run in production with RAID10, the most you should
expect is 2x the 2-disk results using all 8 of your disks. If you
On Mon, Jan 09, 2006 at 01:54:48PM -0500, Chris Hoover wrote:
Question, if I have a 4GB+ index for a table on a server with 4GB ram, and I
submit a query that does an index scan, does Postgres read the entire index,
or just read the index until it finds the matching value (our extra large
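For what it's worth, a btree index scan does not read the whole index: it descends from the root to the first matching leaf page and then walks only the leaf pages that satisfy the condition, e.g. (hypothetical table):

```sql
EXPLAIN ANALYZE SELECT * FROM big_table WHERE id = 42;
-- Index Scan using big_table_pkey on big_table ...
-- touches a handful of index pages, not the full multi-GB structure
```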
On Jan 9, 2006, at 2:01 PM, Luke Lonergan wrote:
Peter,
On 1/9/06 9:23 AM, peter royal [EMAIL PROTECTED] wrote:
This is a 2-disk RAID0
Your 2-disk results look fine - what about your 8-disk results?
after some further research the 2-disk RAID0 numbers are not bad.
I have a single drive
Peter,
On 1/9/06 12:59 PM, peter royal [EMAIL PROTECTED] wrote:
Overall, I got a 50% boost in the overall speed of my test suite by
using XFS and the 16k read-ahead.
Yes, it all looks pretty good for your config, though it looks like you
might be adapter limited with the Areca - you should
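The 16k read-ahead mentioned above is typically set per block device with blockdev (sketch; the device name is an assumption, and blockdev counts 512-byte sectors):

```
blockdev --setra 16384 /dev/sda    # read-ahead of 16384 sectors (8MB)
blockdev --getra /dev/sda          # verify the current setting
```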
See the FAQ.
Steinar H. Gunderson wrote:
On Mon, Jan 09, 2006 at 11:58:19AM +0100, TNO wrote:
what is best for a char field with fewer than 1000 characters:
a text field or a varchar(1000)?
They will be
Hello,
I've a performance problem with the planner algorithm chosen in a website.
See the difference between this:
http://klive.cpushare.com/?scheduler=cooperative
and this:
http://klive.cpushare.com/?scheduler=preemptive
(note, there's much less data to show with preemptive,
Andrea Arcangeli [EMAIL PROTECTED] writes:
It just makes no sense to me that the planner takes a different
decision based on a NOT.
Why in the world would you think that? In general a NOT will change the
selectivity of the WHERE condition tremendously. If the planner weren't
sensitive to
UNLIKELY string LIKE '%% PREEMPT %%'
or:
LIKELY string NOT LIKE '%% PREEMPT %%'
You should be using contrib/tsearch2 for an un-anchored text search perhaps?
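A tsearch2 setup for this kind of word search might look like the following sketch (8.1-era contrib API; table and column names are taken loosely from the thread, so treat them as hypothetical):

```sql
ALTER TABLE klive ADD COLUMN string_fti tsvector;
UPDATE klive SET string_fti = to_tsvector('default', string);
CREATE INDEX klive_fti_idx ON klive USING gist (string_fti);
-- word search that can use the index, unlike an unanchored LIKE:
SELECT count(*) FROM klive WHERE string_fti @@ to_tsquery('preempt');
```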
On Tue, Jan 10, 2006 at 10:29:05AM +0800, Christopher Kings-Lynne wrote:
UNLIKELY string LIKE '%% PREEMPT %%'
or:
LIKELY string NOT LIKE '%% PREEMPT %%'
You should be using contrib/tsearch2 for an un-anchored text search perhaps?
If I wanted to get the fastest speed possible,
Andrea Arcangeli [EMAIL PROTECTED] writes:
If you don't know the data, I think it's a bug that LIKE is assumed to
have a selectivity above 50%.
Extrapolating from the observation that the heuristics don't work well
on your data to the conclusion that they don't work for anybody is not
good
On Tue, 10 Jan 2006, Andrea Arcangeli wrote:
I see. I can certainly fix it by stopping using LIKE. But IMHO this
remains a bug: as long as the number of matching
rows isn't estimated well, you should not make assumptions on LIKE/NOT
LIKE. I think you can change the code
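One era-typical way to stop depending on LIKE selectivity entirely is to precompute the predicate as a real column (sketch; names hypothetical):

```sql
ALTER TABLE klive ADD COLUMN preempt boolean;
UPDATE klive SET preempt = (string LIKE '% PREEMPT %');
CREATE INDEX klive_preempt_idx ON klive (preempt);
-- ANALYZE then gives the planner real statistics for the boolean column
```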
On Mon, Jan 09, 2006 at 09:54:44PM -0500, Tom Lane wrote:
Extrapolating from the observation that the heuristics don't work well
on your data to the conclusion that they don't work for anybody is not
good logic. Replacing that code with a flat 50% is not going to happen
(or if it does, I'll
Hey folks,
I'm working with a query to get more info out with a join. The base query
works great speed wise because of index usage. When the join is tossed in, the
index is no longer used, so the query performance tanks.
Can anyone advise on how to get the index usage back?
weather=#
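Without the full plans it is hard to say, but the usual suspects are a missing index on the join key or mismatched key types (sketch; all names hypothetical):

```sql
CREATE INDEX obs_station_idx ON observations (station_id);
-- join keys should have identical types; a cross-type comparison
-- (e.g. int4 vs int8) can defeat index use on older planners.
EXPLAIN ANALYZE
SELECT o.* FROM observations o
JOIN stations s ON s.id = o.station_id
WHERE s.name = 'KDEN';
```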
On Mon, Jan 09, 2006 at 09:23:38PM -0700, Robert Creager wrote:
I'm working with a query to get more info out with a join. The base
query works great speed wise because of index usage. When the join is
tossed in, the index is no longer used, so the query performance tanks.
The first query
Tom Lane wrote:
Alessandro Baretta [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm curious to see how many rows the planner thinks this will produce,
and whether it will use the index or not.
dmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE
articolo.xdbs_modified > '2006-01-08