a Solaris 10 machine (V440, 2 processors, 4 Ultra320 drives, 8 GB RAM), and
here are some stats:
shared_buffers = 30
work_mem = 102400
maintenance_work_mem = 1024000
bgwriter_lru_maxpages=0
bgwriter_lru_percent=0
fsync = off
wal_buffers = 128
checkpoint_segments = 64
Thank you!
Steve Conley
effective_cache_size = 679006
I really don't remember how I came up with that effective_cache_size
number
Anyway... any advice would be appreciated :)
Steve
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
' is
a nice way to do that. This way you can find out where the bottleneck is
(memory, I/O, etc.).
That's basically all I can think of right now.
Thanks for the tips :)
Steve
not
really sure what the heck to do :) It's all guessing for me right now.
Steve
if
on those deadlines it takes forever, because that's understandable.
However, I will look into this and see if I can figure out this
average value. This may be a valid idea, and I'll look some more at it.
Thanks!
Steve
SO ... our goal here is to make this load process take less time. It
seems
.
6 gigs currently. :)
If you could post the schema including the indexes, people might have more
ideas...
I'll have to ask first, but I'll see if I can :)
Talk to you later, and thanks for the info!
Steve
for querying and sorting, and this method was a performance godsend
when we implemented it (with a C .so library, not using SQL in our
opclasses or anything like that).
Steve
on.
Thanks!
Steve
, but I'd be back to square one on learning performance
stuff :)
Anyway -- I'll listen to what people have to say, and keep this in mind.
It would be an interesting test to take parts of the process and compare
them, at least, if not convert the whole thing.
talk to you later,
Steve
On Wed, 17 Jan
. Wagner's idea to make a much more efficient system overall. It's
going to be a pretty big programming task, but I've a feeling this
summarizer thing may just need to be re-written with a smarter system
like this to get something faster.
Thanks!
Steve
Steve wrote:
IF strlen(source.corrected_procedure_code)
THEN:
summary.procedure_code=source.corrected_procedure_code
summary.wrong_procedure_code=source.procedure_code
ELSE:
summary.procedure_code=source.procedure_code
summary.wrong_procedure_code=NULL
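That branching logic maps directly onto a SQL CASE expression, which would let
the database do it in one pass. A hedged sketch (the source/summary table names
come from the pseudocode above; the join key is an assumption for illustration):

```sql
-- Sketch only: table names from the pseudocode; the join column
-- (encounter_id) is assumed, not confirmed by the original post.
UPDATE summary
SET procedure_code = CASE
        WHEN length(source.corrected_procedure_code) > 0
        THEN source.corrected_procedure_code
        ELSE source.procedure_code
    END,
    wrong_procedure_code = CASE
        WHEN length(source.corrected_procedure_code) > 0
        THEN source.procedure_code
        ELSE NULL
    END
FROM source
WHERE summary.encounter_id = source.encounter_id;
```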
Um, so you test
the DB complain, but I'm not sure what's
# reasonable.
checkpoint_segments = 128
random_page_cost = 1.5
cpu_tuple_cost = 0.001
cpu_index_tuple_cost = 0.0005
cpu_operator_cost = 0.00025
effective_cache_size = 8GB
default_statistics_target = 100
Thanks for all your help!
Steve
Steve [EMAIL PROTECTED] writes:
- What is temp_buffers used for exactly?
Temporary tables. Pages of temp tables belonging to your own backend
don't ever get loaded into the main shared-buffers arena, they are read
into backend-local memory. temp_buffers is the max amount (per backend
encounter_id and receipt date are indexed columns.
I've vacuumed and analyzed the table. I tried making a combined index of
encounter_id and receipt and it hasn't worked out any better.
Thanks!
Steve
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that before.
Also, I assume the restriction on receipt date is very nonselective?
It doesn't seem to have changed the estimated
Oy vey ... I hope this is a read-mostly table, because having that many
indexes has got to be killing your insert/update performance.
Hahaha yeah these are read-only tables. Nightly inserts/updates.
Takes a few hours, depending on how many records (between 4 and 10
usually). But during
Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
Filter: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY
indexes, that's the reason...
The indexes have all worked, though I'll make the change anyway.
Documentation on how to code these things is pretty sketchy and I believe
I followed an example on the site if I remember right. :/
Thanks for the info though :)
Steve
way. :)
Steve
make it any faster really :/
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
So there's a misjudgment of the number of rows returned by a factor of
about 88. That's pretty big. Since you had the same number without the
receipt date (I think...) then it's
baffled as well,
Talk to you later,
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
Here's my planner parameters:
I copied all these, and my 8.2.x still likes the bitmap scan a lot
better than the seqscan. Furthermore, I double-checked the CVS history
(however it doesn't really
appear to be any faster using this plan).
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
... even if I force it to use the indexes
(enable_seqscan=off) it doesn't make it any faster really :/
Does that change the plan, or do you still
,8813282,8813283,8813284,8815534}'::integer[])))
Total runtime: 121306.233 ms
Your other question is answered in the other mail along with the
non-analyze'd query plan :D
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
... even if I force it to use the indexes
our nightly load times I can't foul up the other
queries :)
Thank you very much for all your help on this issue, too!
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
With enable_seqscan=off I get:
- Bitmap Index Scan on detail_summary_receipt_encounter_idx
to convince the customer to drop their
absurdly complicated sorts if I can come back with serious results like
what we've worked out here.
And thanks again -- have a good dinner! :)
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
Either way, it runs perfectly fast
detail_summary_receipt_encounter_idx
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve [EMAIL PROTECTED] writes:
Just dropping that index had no effect, but there's a LOT of indexes that
refer to receipt. So on a hunch I tried dropping all indexes that refer
to receipt date and that worked -- so it's the indexes
for -most- of the day and only read/write at night it's acceptable
risk for us anyway. But good to know that's a reasonable value.
Steve
or not if that
makes any sense :) Seems there's no silver bullet to the shared_memory
question. Or if there is, nobody can agree on it ;)
Anyway, talk to you later!
Steve
On Sun, 2003-07-13 at 14:50, Steve Wampler wrote:
I've got a simple nested query:
select * from attributes where id in (select id from
attributes where (name='obsid') and (value='oid00066'));
that performs abysmally. I've heard this described as the
'classic WHERE IN' problem
.
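On the 7.x planners, the standard workaround for the classic WHERE IN problem
was to rewrite the subquery as a correlated EXISTS, which those versions
planned much better. A sketch against the query above:

```sql
SELECT a.*
FROM attributes a
WHERE EXISTS (
    SELECT 1
    FROM attributes b
    WHERE b.id = a.id
      AND b.name = 'obsid'
      AND b.value = 'oid00066'
);
```

(From 7.4 on, IN (subquery) is planned well enough that this rewrite is
usually unnecessary.)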
Cheers,
Steve
On Wed, Nov 26, 2003 at 09:12:30PM -0800, Steve Atkins wrote:
On Thu, Nov 27, 2003 at 12:41:59PM +0800, Christopher Kings-Lynne wrote:
Does anyone have any metrics on how fast tsearch2 actually is?
I tried it on a synthetic dataset of a million documents of a hundred
words each and while
without an
index than with it. Broken in the same way.
I have 7.2.4 running on a Sun box, so I tried that too, with similar
results. tsearch just doesn't seem to work very well on this dataset
(or any other large dataset I've tried).
Cheers,
Steve
internals/configuration.
Thanks!
Steve
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
)?
The table's entry is:
NOTICE: --Relation attributes_table--
NOTICE: Pages 639: Changed 0, Empty 0; Tup 52846: Vac 0, Keep 0, UnUsed 48.
Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
Thanks!
Steve
/contrib/pgstattuple.sql:4: ERROR: stat failed
on file '$libdir/pgstattuple': No such file or directory
I don't need this right now (a reindex seems to have fixed
our problem for now...), but it sounds like it would be useful
in the future.
Thanks!
Steve
On Sun, 2003-12-07 at 09:52, Tom Lane wrote:
Steve Wampler [EMAIL PROTECTED] writes:
Hmmm, I have a feeling that's not as obvious as I thought... I can't
identify the index (named 'id_index') in the output of vacuum verbose.
In 7.2, the index reports look like
Index %s: Pages %u
as a CIDR block is often far, far larger than
the actual range involved - consider 63.255.255.255/32 and 64.0.0.0/32.
That seemed to break the indexing algorithms. I'd like to be proven
wrong on that, but would still find ipr a more useful datatype than
inet for my applications.
Cheers,
Steve
that would let you do it, but
I should add some casting code anyway. Shouldn't be too painful to do -
I'll try and get that, and some minimal documentation out today.
Cheers,
Steve
On Tue, Feb 24, 2004 at 09:14:42AM -0800, Steve Atkins wrote:
On Tue, Feb 24, 2004 at 01:07:10PM +0100, Eric Jain wrote:
http://word-to-the-wise.com/ipr.tgz is a datatype that contains
a range of IPv4 addresses, and which has the various operators to
make it GIST indexable.
Great
a temp table with all the values in it and using in() on
the temp table may be a win:
begin;
create temp table t_ids(id int);
insert into t_ids(id) values (123); -- repeat a few hundred times
select * from maintable where id in (select id from t_ids);
...
Cheers,
Steve
, which kernel? It seems fdatasync is broken and
syncs the inode, too.
This may be relevant.
From the man page for fdatasync on a moderately recent RedHat installation:
BUGS
Currently (Linux 2.2) fdatasync is equivalent to fsync.
Cheers,
Steve
can be relaxed?
Cheers,
Steve
is implemented to only use a file?
Is there something else I can do? Ultimately, this will end
up on a machine running 1+0 RAID, so I expect that will give
me some performance boost as well, but I'd like to push it
up as best I can with my current hardware setup.
Thanks for any advice!
-Steve
--
Steve
On Mon, 2004-06-07 at 02:26, Kris Jurka wrote:
On Sat, 5 Jun 2004, Steve Wampler wrote:
[I want to use copy from JDBC]
I made a patch to the driver to support COPY as a PG extension.
...
http://archives.postgresql.org/pgsql-jdbc/2003-12/msg00186.php
Thanks Kris - that patch worked
On Mon, 2004-06-07 at 10:40, Steve Wampler wrote:
Thanks Kris - that patch worked beautifully and bumped the
insert rate from ~1000 entries/second to ~9000 e/s in my
test code.
As a followup - that 9000 e/s becomes ~21,000 e/s if I don't
have the java code also dump the message to standard
,fdatasync
Just wondering.
-Steve
If IPv6 doesn't work, shouldn't it fall back to IPv4, or check IPv4
first, or something? Just wondering.
-Steve Bergman
://marc.theaimsgroup.com/?l=reiserfsm=109363302000856
-Steve Bergman
from an (empty) parent table.
Adding a new partition is just a create table tablefoo () inherits(bigtable),
and removing a partition is just a drop table tablefoo.
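A minimal sketch of that inheritance scheme (the table and column names are
made up for illustration):

```sql
-- Empty parent; the child tables hold the actual rows.
CREATE TABLE bigtable (id int, created date);

-- Adding a partition:
CREATE TABLE tablefoo () INHERITS (bigtable);

-- Removing a partition:
DROP TABLE tablefoo;
```

Queries against bigtable automatically see the rows in all child tables.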
Cheers,
Steve
On Wed, Sep 15, 2004 at 11:16:44AM +0200, Markus Schaber wrote:
Hi,
On Tue, 14 Sep 2004 22:10:04 -0700
Steve Atkins [EMAIL PROTECTED] wrote:
Is there by any chance a set of functions to manage adding and removing
partitions? Certainly this can be done by hand, but having a set
to x86
systems - more so than the delta between Oracle on the two platforms.
Just a gut impression, but it might mean that comparing the two
databases on SPARC may not be that useful a comparison if you're
planning to move to x86.
Cheers,
Steve
,
Steve
any performance hints
or hardware suggestions?
Cheers,
Steve
On Mon, Oct 04, 2004 at 10:38:14AM -0700, Josh Berkus wrote:
Steve,
I'm used to performance tuning on a select-heavy database, but this
will have a very different impact on the system. Does anyone have any
experience with an update heavy system, and have any performance hints
using it in the future, when it's been improved
at both the OS and the silicon levels.
Cheers,
Steve
, so who knows
how reliable a chap he is... ;-)
:)
Cheers,
Steve
emulated
by a big NTFS file?
Even if your application needs to run under Linux, can you run the
database directly on XP (8.0RC2 hot off the presses...) and connect to
it from the Linux VM?
Cheers,
Steve
? We just upgraded our servers to dual 2.8GHz Xeon CPUs from dual 1.8GHz
Xeons, which unfortunately have HT built in. We also upgraded our database
from version 7.3.4 to 7.4.2.
Thanks.
Steve Poe
the data partitioned, the question becomes one of
how to enhance performance/scalability. Have you considered RAIDb?
database server isn't written to be
distributed across hosts regardless of the distribution of the
data across filesystems.
database backed system again.
Cheers,
Steve
feedback/suggestions are greatly appreciated.
Thanks.
Steve Poe
be
minimized here by using PL/pgSQL?
Server Info:
Centos 3.3 (RHEL 3.x equivalent)
4GB RAM
Adaptec 2100S RAID
Qlogic QLA2100 Fibre
CPU?
Dual Xeon 2.8 CPUs with HT turned off.
Thanks again.
Steve Poe
going
to do.
Dual Xeon 2.8 CPUs with HT turned off.
Yeah, thought it was a Xeon.
If we went with a single CPU, like an Athlon or Opteron, would the CS
storming go away?
Thanks.
Steve Poe
, we can test it out
if this is possible.
Any likelihood this CS storm will be understood in the next couple of months?
Thanks.
Steve Poe
is truncated or dropped, I guess.
Cheers,
Steve
.
Since I am a DBA novice, I did not physically build this server, nor did
I write the application the hospital runs on, but since I have the opportunity
to make it better, I thought I should seek some advice from those who
have been down this road before. Suggestions/ideas, anyone?
Thanks.
Steve Poe
of approval, then the vendor will tell us what they support.
Thanks.
Steve Poe
Tom Lane wrote:
Steve Poe [EMAIL PROTECTED] writes:
Situation: A 24/7 animal hospital (100 employees) runs their business
on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)
[ itch... ] Surely
it is CPU bound. At our busiest hour, the CPU is about 70% idle
on average, down to 30% idle at its heaviest. Context switching
averages about 4-5K per hour, with momentary peaks of 25-30K for a
minute. Overall disk performance is poor (35 MB per sec).
Thanks for your input.
Steve Poe
Steve, can we clarify that you are not currently having any performance
issues, you're just worried about failure? Recommendations should be based
on whether improving application speed is a requirement ...
Josh,
The priorities are: 1) improve safety/failure prevention, 2) improve
performance
The Chenbros are nice, but kinda pricey ($800) if Steve doesn't need the
machine to be rackable.
If your primary goal is redundancy, you may wish to consider the possibility
of building a brand-new machine for $7k (you can do a lot of machine for
$7000 if it doesn't have to be rackable
.
Steve Poe
vacuum_mem = 65536
effective_cache_size = 65536
Steve Poe
Mohan, Ross wrote:
VOIP over BitTorrent?
Now *that* I want to see. Ought to be at least as interesting
as the TCP/IP over carrier pigeon experiment - and more
challenging to boot!
. Then there are the problems of
different accents, dialects, and languages ;)
.
This method might have been safer (and it works great with Apaches):
http://eagle.auc.ca/~dreid/
Aha - VOIPOBD as well as VOIPOBT! What more can one want?
VOIPOCP, I suppose...
?
These two drive arrays' main purpose is for our database. For those who have
messed with drive arrays before, how would you slice up the drive array?
Will database performance be affected by how our RAID10 is configured? Any
suggestions?
Thanks.
Steve Poe
war, I am just trying to learn
here. It would seem that the more drives you could place in a RAID
configuration, the better the performance would be.
Steve Poe
.
But comparing Bulkload to INSERT is a bit apples-orangish.
for more disk thrash testing.
I am new to this; maybe someone else may be able to speak from more
experience.
Regards.
Steve Poe
. I used OSDB
since it is simple to implement and use, although OSDL's OLTP testing
will be closer to reality.
Steve Poe
,
if memory serves correctly, will occupy around 800-900M of disc space in
pg_xlog.
Steve Poe
Nurlan Mukhanov (AL/EKZ) wrote:
Hello.
I'm trying to restore my database from a dump in several parallel processes, but
the restore process works too slowly.
Number of rows about 100 000 000,
RAM: 8192M
CPU
, but some
clarity could help.
Steve Poe
the 10-15%
baseline, and 3) find out what the mean and standard deviation are across
all your results.
If your results are within that range, this may be normal. I'll follow up
with you later on what I do.
Steve Poe
Tom,
Just a quick thought: after each run/sample of pgbench, I drop the
database and recreate it. When I don't my results become more skewed.
Steve Poe
Thomas F.O'Connell wrote:
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll go back and do
?
Steve Poe
Thomas F.O'Connell wrote:
Considering the default vacuuming behavior, why would this be?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
Joshua,
This article was from July 2002, so is there an update to this information?
When will a new ODBC driver be available for testing? Is there a release
of the ODBC driver with better performance than 7.0.3.0200 for a 7.4.x
database?
Steve Poe
We have mentioned it on the list.
http
* it - it
gives outstanding performance.) However, it hasn't made into an
official release yet. I don't know why, perhaps there's
a problem yet to be solved with it ('works for me', though)?
Is this still on the board? I won't upgrade past 7.4 without it.
24/7 operations, then the company should
be able to put some money behind what they want to put their business
on. Your mileage may vary.
Steve
%.
Is that 50% just for the Dell PERC4 RAID on RH AS 3.0? Sounds like
severe context switching.
Steve Poe
. Is there a better way? Combining
them all into a transaction or something?
Thanks,
Steve Bergman
;'
LANGUAGE 'plpgsql' IMMUTABLE;
Someone will no doubt suggest using tsearch2, and you might want to
take a look at it if you actually need full-text search, but my
experience has been that it's too slow to be useful in production, and
it's not needed for the simple leading wildcard case.
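For the record, one common trick for the leading-wildcard LIKE case (not
necessarily what the truncated function above did) is to index the reversed
string, turning a suffix match into an indexable prefix match. A sketch with
assumed table/column names; note that reverse() is only built in from
PostgreSQL 9.1, so older versions need a small SQL or PL/pgSQL helper:

```sql
-- Assumed table "items" with a text column "name".
CREATE INDEX items_name_rev_idx
    ON items (reverse(name) text_pattern_ops);

-- A leading-wildcard search such as name LIKE '%foo'
-- becomes a prefix search on the reversed value:
SELECT * FROM items
WHERE reverse(name) LIKE reverse('%foo');  -- i.e. LIKE 'oof%'
```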
Cheers,
Steve
your experience?
I don't foresee more than 10-15 concurrent sessions running for their OLTP
application.
Thanks.
Steve Poe
SwapTotal: 1534056 kB
SwapFree: 1526460 kB
This is a real doozy for me; please provide any advice possible.
Steve
As a follow-up to this, I've installed 7.3.4 on another test Red Hat 8
machine and the slow inserts are present; however, on another machine with
ES3 the same 15,000 inserts are about 20 times faster. Does anyone know of
a change that would affect this, kernel or Red Hat release?
Steve
increased the speed on my Red Hat 8 servers by 20X!
Steve
-Original Message-
From: Steve Pollard
Sent: Thursday, 9 June 2005 1:27 PM
To: Steve Pollard; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Importing from pg_dump slow, low Disk IO
As a follow up to this ive installed
.
explain analyze select * from test where productlistid=3 and typeid=9
order by partnumber limit 15;
Create an index on (productlistid, typeid, partnumber) then
select * from test where productlistid=3 and typeid=9
order by productlistid, typeid, partnumber LIMIT 15;
?
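Spelled out, the suggestion is (the index name is arbitrary):

```sql
CREATE INDEX test_plist_type_part_idx
    ON test (productlistid, typeid, partnumber);
```

Listing all three columns in the ORDER BY matches the index order exactly,
so the planner can walk the index and stop after the first 15 matching rows
instead of sorting.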
Cheers,
Steve
increase the estimated
cost of handling a row relative to a sequential page fetch, which
sure sounds like it'll push plans in the right direction, but it
doesn't sound like the right knob to twiddle.
What do you have random_page_cost set to?
Cheers,
Steve
?
Have you made changes to the postgresql.conf? kernel.vm settings? IO
scheduler?
If you're not doing so already, you may consider running sar (iostat) to
monitor when the hanging occurs and whether there is a memory/IO bottleneck
somewhere.
Good luck.
Steve Poe
On Tue, 2005-08-09 at 12:04 -0600, Dan
is not the best
choice for the database. RAID10 would be a better solution (using 8 of
your disks); then take the remaining disks and mirror them for your pg_xlog
if possible.
Best of luck,
Steve Poe
On Thu, 2005-08-11 at 13:23 +0100, Paul Johnson wrote:
Hi all, we're running PG8 on a Sun V250 with 8GB