Dan,
On 9/1/05 4:02 PM, "Dan Harris" <[EMAIL PROTECTED]> wrote:
> Do you have any sources for that information? I am running dual
> SmartArray 6402's in my DL585 and haven't noticed anything poor about
> their performance.
I've previously posted comprehensive results using the 5i and 6xxx series controllers.
Matthew Sackman wrote:
I need to get to the stage where I can run queries such as:
select street, locality_1, locality_2, city from address
where (city = 'Nottingham' or locality_2 = 'Nottingham'
       or locality_1 = 'Nottingham')
and upper(substring(street from 1 for 1)) = 'A'
group by street, locality_1, locality_2, city;
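
An expression index matching that last predicate can let the planner avoid a full scan (a sketch only; the index name is invented):

    -- Illustrative: an expression index usable for
    -- upper(substring(street from 1 for 1)) = 'A'
    CREATE INDEX address_street_initial_idx
        ON address (upper(substring(street from 1 for 1)));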
Hi All,
I have an ODBC application (using a Postgres database) which has three
different operations. Each operation is a combination of SELECTs and
UPDATEs.
For example:
Operation A: 6 Fetch + 1 Update
Operation B: 9 Fetch
Operation C: 5 Fetch + 3 Update
It would be good to see EXPLAIN ANALYZE output for the three queries
below (the real vs. estimated row counts being of interest).
The number of pages in your address table might be interesting to know too.
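
For instance (illustrative commands; only the table and query come from the thread):

    EXPLAIN ANALYZE
    SELECT street, locality_1, locality_2, city FROM address
    WHERE city = 'Nottingham' OR locality_2 = 'Nottingham'
       OR locality_1 = 'Nottingham';

    SELECT relpages, reltuples FROM pg_class WHERE relname = 'address';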
regards
Mark
Matthew Sackman wrote (with a fair bit of snippage):
explain select local
Hi all.
In a cluster, is there any way to use the main memory of the other nodes instead of swap? If I have a query with many sub-queries and a lot of data, I can easily fill all the memory in a node. The point is: is there any way to continue using the main memory from other nodes in the same cluster?
On Thu, Sep 01, 2005 at 06:42:31PM +0100, Matthew Sackman wrote:
> flat_extra | character varying(100) | not null
> number | character varying(100) | not null
> street | character varying(100) | not null
> locality_1 | character varying(100) | not null
Just to dig up an old thread from last month:
In case anyone was wondering we finally got a free day to put in the new
version of the software, and it's greatly improved the performance. The
solutions we employed were as follows:
- recompile everything with ecpg -t for auto-commit
- vacuum run b
At 06:22 PM 9/1/2005, Matthew Sackman wrote:
On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:
>
> Since I assume you are not going to run anything with the string
> "unstable" in its name in production (?!), why not try a decent
> production-ready distro like SUSE 9.x and see how pg 8.0.3 runs?
Hi.
I have an interesting problem with the JDBC drivers. When I use a
select like this:
"SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz,
t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz
like ?::varchar(256) ESCAPE '|'" withBindings: 1:"53111"(plz)>
the existing index on plz is not used.
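
For comparison, with a left-anchored literal pattern the planner can consider an index on plz (illustrative query; whether it actually does also depends on locale/operator class):

    SELECT t0.id, t0.nachname
    FROM public.dga_dienstleister t0
    WHERE t0.plz LIKE '53111%';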
Do you have any sources for that information? I am running dual
SmartArray 6402's in my DL585 and haven't noticed anything poor about
their performance.
On Sep 1, 2005, at 2:24 PM, Luke Lonergan wrote:
Are you using the built-in HP SmartArray RAID/SCSI controllers? If
so, that could be your problem; they are known to have terrible and
variable performance with Linux.
On Thu, Sep 01, 2005 at 06:42:31PM +0100, Matthew Sackman wrote:
>
> "address_pc_top_index" btree (postcode_top)
> "address_pc_top_middle_bottom_index" btree (postcode_top,
> postcode_middle, postcode_bottom)
> "address_pc_top_middle_index" btree (postcode_t
On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:
> > Selection from the database is, hence the indexes.
>
> A DB _without_ indexes that fits into RAM during ordinary operation
> may actually be faster than a DB _with_ indexes that does
> not. Fitting the entire DB into RAM during ordinary operation
On Thu, Sep 01, 2005 at 11:52:45PM +0200, Steinar H. Gunderson wrote:
> On Thu, Sep 01, 2005 at 10:13:59PM +0100, Matthew Sackman wrote:
> > Well that's the thing - on the queries where it decides to use the index
> > it only reads at around 3MB/s and the CPU is maxed out, whereas when it
> > doesn't use the index, the disk is being read at 60MB/s.
At 05:06 PM 9/1/2005, Matthew Sackman wrote:
On Thu, Sep 01, 2005 at 10:09:30PM +0200, Steinar H. Gunderson wrote:
> > "address_city_index" btree (city)
> > "address_county_index" btree (county)
> > "address_locality_1_index" btree (locality_1)
> > "address_locality_2_index" btree
> -----Original Message-----
> From: Alvaro Herrera [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 01, 2005 3:34 PM
> To: Merlin Moncure
> Cc: Matthew Sackman; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Massive performance issues
>
> On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:
On Thu, Sep 01, 2005 at 02:26:47PM -0700, Jeff Frost wrote:
> >Well I've got 1GB of RAM, but from analysis of its use, a fair amount
> >isn't being used. About 50% is actually in use by applications and about
> >half of the rest is cache and the rest isn't being used. Has this to do
> >with the max_fsm_pages and max_fsm_relations settings?
On Thu, Sep 01, 2005 at 10:13:59PM +0100, Matthew Sackman wrote:
> Well that's the thing - on the queries where it decides to use the index
> it only reads at around 3MB/s and the CPU is maxed out, whereas when it
> doesn't use the index, the disk is being read at 60MB/s. So when it
> decides to use the index
Well I've got 1GB of RAM, but from analysis of its use, a fair amount
isn't being used. About 50% is actually in use by applications and about
half of the rest is cache and the rest isn't being used. Has this to do
with the max_fsm_pages and max_fsm_relations settings? I've pretty much
not touched them.
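
(Those two settings live in postgresql.conf; the 8.0-era defaults below are shown purely for illustration, not as recommendations:)

    max_fsm_pages = 20000       # free-space map: pages tracked across all tables
    max_fsm_relations = 1000    # free-space map: number of relations tracked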
On Thu, Sep 01, 2005 at 10:54:45PM +0200, Arjen van der Meijden wrote:
> On 1-9-2005 19:42, Matthew Sackman wrote:
> >Obviously, to me, this is a problem, I need these queries to be under a
> >second to complete. Is this unreasonable? What can I do to make this "go
> >faster"? I've considered normalising the table but I can't work out
On Thu, Sep 01, 2005 at 10:09:30PM +0200, Steinar H. Gunderson wrote:
> > "address_city_index" btree (city)
> > "address_county_index" btree (county)
> > "address_locality_1_index" btree (locality_1)
> > "address_locality_2_index" btree (locality_2)
> > "address_pc_bottom_index"
At 04:25 PM 9/1/2005, Tom Lane wrote:
Ron <[EMAIL PROTECTED]> writes:
> ... Your target is to have each row take <= 512B.
Ron, are you assuming that the varchar fields are blank-padded or
something? I think it's highly unlikely that he's got more than a
couple hundred bytes per row right now --- at least if the data is
what it sounds like.
This should be able to run _very_ fast.
At 01:42 PM 9/1/2005, Matthew Sackman wrote:
Hi,
I'm having performance issues with a table consisting of 2,043,133 rows. The
schema is:
\d address
Table "public.address"
     Column      |          Type          | Modifiers
-----------------+------------------------+-----------
Ron <[EMAIL PROTECTED]> writes:
> ... Your target is to have each row take <= 512B.
Ron, are you assuming that the varchar fields are blank-padded or
something? I think it's highly unlikely that he's got more than a
couple hundred bytes per row right now --- at least if the data is
what it sounds like.
On 1-9-2005 19:42, Matthew Sackman wrote:
Obviously, to me, this is a problem, I need these queries to be under a
second to complete. Is this unreasonable? What can I do to make this "go
faster"? I've considered normalising the table but I can't work out
whether the slowness is in dereferencing t
Are you using the built-in HP SmartArray RAID/SCSI controllers? If so, that
could be your problem; they are known to have terrible and variable
performance with Linux.
The only good fix is to add a simple SCSI controller to your system (HP
sells them) and stay away from hardware RAID.
- Luke
On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:
> > Table "public.address"
> >      Column      |          Type          | Modifiers
> > -----------------+------------------------+-----------
> >  postcode_top    | character varying(2)   | not null
>
Matthew Sackman wrote:
>Hi,
>
>I'm having performance issues with a table consisting of 2,043,133 rows. The
>schema is:
>
>\d address
> Table "public.address"
>     Column      |          Type          | Modifiers
> ----------------+------------------------+-----------
On Thu, Sep 01, 2005 at 03:51:35PM -0400, Merlin Moncure wrote:
> > Huh, hang on -- AFAIK there's no saving at all by doing that. Quite
> > the opposite really, because with char(x) you store the padding
> > blanks, which are omitted with varchar(x), so less I/O (not
> > necessarily a measurable
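
A quick way to see the padding difference being discussed (illustrative):

    -- char(n) is stored blank-padded to n characters; varchar(n) is not
    SELECT octet_length('abc'::char(10));     -- 10
    SELECT octet_length('abc'::varchar(10));  -- 3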
Any chance it's a vacuum thing?
Or configuration (out of the box it needs adjusting)?
Joel Fradkin
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Thursday, September 01, 2005 2:11 PM
To: Matthew Sackman
Cc: pgsql-performance@postgresql.org
Matthew Sackman <[EMAIL PROTECTED]> writes:
> Obviously, to me, this is a problem, I need these queries to be under a
> second to complete. Is this unreasonable?
Yes. Pulling twenty thousand rows at random from a table isn't free.
You were pretty vague about your disk hardware, which makes me think
On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:
> > Any help most gratefully received (even if it's to say that I should
> be
> > posting to a different mailing list!).
>
> this is correct list. did you run vacuum/analyze, etc?
> Please post vacuum analyze times.
2005-09-01 19:47
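
(For reference, the command being asked about, with client-side timing enabled in psql; only the table name comes from the thread:)

    \timing
    VACUUM VERBOSE ANALYZE address;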
On Thu, Sep 01, 2005 at 02:47:06PM -0400, Tom Lane wrote:
> Matthew Sackman <[EMAIL PROTECTED]> writes:
> > Obviously, to me, this is a problem, I need these queries to be under a
> > second to complete. Is this unreasonable?
>
> Yes. Pulling twenty thousand rows at random from a table isn't free.
> Table "public.address"
>      Column      |          Type          | Modifiers
> -----------------+------------------------+-----------
>  postcode_top    | character varying(2)   | not null
>  postcode_middle | character varying(4)   | not null
>  postcode_bottom
> I'm having performance issues with a table consisting of 2,043,133 rows.
> The schema is:
> locality_1 has 16650 distinct values and locality_2 has 1156 distinct
> values.
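
(The kind of query that produces those distinct counts, for anyone reproducing this; illustrative only:)

    SELECT count(DISTINCT locality_1) AS distinct_locality_1,
           count(DISTINCT locality_2) AS distinct_locality_2
    FROM address;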
Just so you know I have a 2GHz p4 workstation with similar size (2M
rows), several keys, and can find and fetch 2k rows b
Hi,
I'm having performance issues with a table consisting of 2,043,133 rows. The
schema is:
\d address
Table "public.address"
     Column      |          Type          | Modifiers
-----------------+------------------------+-----------
 postcode_top    | character varying(2)   | not null
Ulrich,
On 9/1/05 6:25 AM, "Ulrich Wisser" <[EMAIL PROTECTED]> wrote:
> My application basically imports Apache log files into a Postgres
> database. Every row in the log file gets imported in one of three (raw
> data) tables. My columns are exactly as in the log file. The import is
> run approx.
> Hi Merlin,
> > Just a thought: have you considered having apache logs write to a
> > process that immediately makes insert query(s) to postgresql?
>
> Yes we have considered that, but dismissed the idea very soon. We need
> Apache to be as responsive as possible. It's a two server setup with
> l
Hi Merlin,
schemas would be helpful.
right now I would like to know if my approach to the problem makes
sense. Or if I should rework the whole procedure of import and aggregate.
Just a thought: have you considered having apache logs write to a
process that immediately makes insert query(s) to postgresql?
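
(If individual INSERTs prove too slow for that, batching through COPY is the usual alternative; the table and file names below are invented:)

    -- Hypothetical: bulk-load pre-parsed log lines in one statement
    COPY raw_log_import FROM '/tmp/access_log.csv' WITH CSV;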
Ulrich wrote:
> Hi again,
>
> first I want to say ***THANK YOU*** to everyone who kindly shared their
> thoughts on my hardware problems. I really appreciate it. I started to
> look for a new server and I am quite sure we'll get a serious hardware
> "update". As suggested by some people I would like now to look closer
Your HD raw IO rate seems fine, so the problem is not likely to be
with the HDs.
That consistent ~10x increase in how long it takes to do an import or
a select is noteworthy.
This "smells" like an interconnect problem. Was the Celeron locally
connected to the HDs while the new Xeons are net
Hi again,
first I want to say ***THANK YOU*** to everyone who kindly shared their
thoughts on my hardware problems. I really appreciate it. I started to
look for a new server and I am quite sure we'll get a serious hardware
"update". As suggested by some people I would like now to look closer
Hi!
I've set up a Package Cluster (Fail-Over Cluster) on our two HP DL380
G4s with MSA Storage G2 (Xeon 3.4GHz, 6GB RAM, 2x [EMAIL PROTECTED] Raid1).
The system is running under SUSE Linux Enterprise Server.
My problem is that the performance is very low. On our old server
(Celeron 2GHz with
Morgan Kita wrote:
Hi,
I am currently trying to speed up the insertion of bulk loads to my
database. I have fiddled with all of the parameters that I have seen
suggested (aka checkpoint_segments, checkpoint_timeout,
maintenance_work_mem, and shared_buffers) with no success. I even
turned off fsync
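
(For reference, those knobs live in postgresql.conf; 8.0-era names, values purely illustrative, not recommendations:)

    checkpoint_segments = 32        # more WAL between checkpoints during the load
    maintenance_work_mem = 262144   # in kB; helps index builds after the load
    fsync = off                     # unsafe: only for a load you can redo from scratch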