Sincerely,
Joshua D. Drake
A
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL
There is nothing else on Linux that comes close to that. Plus XFS has been
proven in a 64 bit environment (Irix).
I had lots of happy experiences with XFS when administering IRIX
boxes[1], but I don't know what differences the Linux port entailed.
Do you have details on that?
is doing
with Linux.
I would (and do) trust XFS currently over ANY other journalled option on
Linux.
Sincerely,
Joshua D. Drake
other hand, I wouldn't be surprised if it were no worse than the
other options.
A
of a
downside.
What is official?
Sincerely,
Joshua D. Drake
I would (and do) trust XFS currently over ANY other journalled
option on Linux.
I'm getting less and less inclined to trust ext3 or JFS, which floats
upwards any other boats that are lingering around...
Octavio Alvarez wrote:
Please tell me if this timing makes sense to you for a Celeron 433 w/
RAM=256MB dedicated testing server. I expected some slowness, but not this
high.
Well, delete is generally slow. If you want to delete the entire table
(and you're really sure)
use truncate.
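To make that concrete, a minimal sketch (the table name big_table is hypothetical):

```sql
-- DELETE removes rows one at a time and leaves dead tuples for VACUUM:
DELETE FROM big_table;

-- TRUNCATE reclaims the table's storage outright and is near-instant,
-- but it takes an exclusive lock and cannot be qualified with WHERE:
TRUNCATE TABLE big_table;
```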
J
You can do snapshots in FreeBSD 5.x with UFS2 as well, but neither that
nor XFS snapshots will let you back up with the database server
running. Just because you will get the file exactly as it was at
a particular instant does not mean that the postmaster did not
still have some data that was not
Mark Kirkwood wrote:
They seem pretty clean (have patched vanilla kernels + xfs for
Mandrake 9.2/9.0).
And yes, I would recommend xfs - noticeably faster than ext3, and no
sign of any mysterious hangs under load.
The hangs you are having are due to several issues... one of them is the
way
Hello,
With the new preload option is there any benefit/drawback to using
pl/Python versus
pl/pgSQL? And no... I don't care that pl/Python is now considered untrusted.
Sincerely,
Joshua D. Drake
Not an option I'm afraid. PostgreSQL just jams and stops logging after
the first rotation...
Are you using a copy-truncate method to rotate the logs? On Red Hat, add
the keyword copytruncate to your /etc/logrotate.d/syslog file.
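A hedged sketch of a logrotate stanza using copytruncate (the log path and rotation settings are illustrative):

```
/var/log/postgresql/postgresql.log {
    weekly
    rotate 4
    compress
    copytruncate
}
```

copytruncate copies the log aside and then truncates the original in place, so the postmaster can keep writing to its open file handle instead of jamming after the first rotation.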
Sincerely,
Joshua D. Drake
I know some people use
Pablo Marrero wrote:
Hello people!
I have a question, I am going to begin a project for the University in
the area of Data Warehousing and I want to use postgres.
Do you have some recommendation to me?
Regarding what? Do you have any specific questions?
Sincerely,
Joshua D. Drake
Thanks
defaults to Reiser but also allows XFS. I would suggest XFS.
Sincerely,
Joshua D. Drake
Hello,
I found that if your SHMALL value was less than your SHMMAX value,
the value wouldn't take.
J
Tom Lane wrote:
Qing Zhao [EMAIL PROTECTED] writes:
My suspicion is that the change I made in /etc/rc does not take
effect. Is there a way to check it?
sysctl has an option to show the values
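For example, on Linux you can check the live values and make them persistent; the numbers below are illustrative only (SHMMAX is in bytes, SHMALL in pages):

```
# check the live values:
#   sysctl kernel.shmmax kernel.shmall

# /etc/sysctl.conf -- applied at boot, or immediately with `sysctl -p`
kernel.shmmax = 134217728    # 128MB, in bytes
kernel.shmall = 32768        # 128MB, in 4096-byte pages
```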
my $pgGuru = "Tom Lane"; my @morepgGurus; my $howmany = 10;
while ($howmany--) { push @morepgGurus, $pgGuru; }
This is just wrong...
-With the db size
being as big as, say, 30+GB, how do I move
it onto the new logical drive? (stop postgresql, and simply move it over
somehow
and make a link?)
I would stop the database, move the data directory to the new volume
using rsync then start up postgresql pointed at the new
Hello,
I would have to double check BUT I believe this is fixed in later 2.4.x
kernels as well. If you don't want to go through the hassle of 2.6
(although it really is a nice kernel) then upgrade to 2.4.26.
Sincerely,
Joshua D. Drake
Scott Marlowe wrote:
On Fri, 2004-06-18 at 09:11, Tom Lane
Hello,
It sounds to me like you are IO bound. Two 120GB hard drives just aren't
going to cut it with that many connections (as a general rule). Are you
swapping?
Sincerely,
Joshua D. Drake
Martin Foster wrote:
I run a Perl/CGI driven website that makes extensive use of PostgreSQL
(7.4.3
Slony does. Replicator is
also live replication.
Sincerely,
Joshua D. Drake
So... wanted to put this out to the experts. Has anyone got any
recommendations or had experiences with real-time database replication
solutions that don't rely on RAID? The reason why I don't want to
rely on a hardware
in production. We
have already dealt with the 1.0 blues as they say.
I hope you understand that I, in no way have ever suggested (purposely)
anything negative about Slony. Only that I believe they serve different
technical solutions.
Sincerely,
Joshua D. Drake
Jan
Christopher Browne wrote:
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] ("Joshua D. Drake") would write:
I hope you understand that I, in no way have ever suggested
(purposely) anything negative about Slony. Only that I believe they
serve different technical
100?
Sincerely,
Joshua D. Drake
On Tue, 14 Sep 2004 21:27:55 +0200, Pierre-Frédéric Caillaud
[EMAIL PROTECTED] wrote:
I have a table with ~8 million rows and I am executing a query which
should return about ~800,000 rows. The problem is that as soon as I
execute the query it absolutely kills my
Sorry, I meant 30,000 with 300 connections - not 3,000. The 300
connections
/ second is realistic, if not underestimated. As is the nature of
our site
(realtime information about online gaming), there's a huge fan base
and as a
big upset happens, we'll do 50,000 page views in a span
In this case:
CREATE OR REPLACE FUNCTION sub_text(text) returns text AS '
SELECT SUBSTR($1, 10);
' LANGUAGE 'SQL' IMMUTABLE;
CREATE INDEX sub_text_idx ON foo(sub_text(doc_urn));
This works on 7.3.6???
Sincerely,
Joshua D. Drake
regards, tom lane
.
Are you saying it should be taking 10 seconds because of the type of
plan? 10 seconds seems like an awfully long time for this.
Sincerely,
Joshua D. Drake
That's my feeling as well, I thought the index was to blame because it
will be quite large, possibly large enough to not fit in memory nor
is in general going to be much smaller than a gist
index.
The smaller the index the faster it is searched.
Sincerely,
Joshua D. Drake
Regards,
Hello,
Have you tried increasing the statistics target for orderdate and
rerunning analyze?
Sincerely,
Joshua D. Drake
David Brown wrote:
I'm doing some performance profiling with a simple two-table query:
SELECT L.ProductID, sum(L.Amount)
FROM drinv H
JOIN drinvln L ON L.OrderNo = H.OrderNo
Josh Berkus wrote:
in March there was an interesting discussion on the list with the
subject postgres eating CPU on HP9000.
http://archives.postgresql.org/pgsql-performance/2004-03/msg00380.php
Hello,
What is your statistics target?
What is your effective_cache_size?
Have you tried running the query as a cursor?
Sincerely,
Joshua D. Drake
Andrew Janian wrote:
I have run ANALYZE right before running this query.
I will run EXPLAIN ANALYZE when I can. I started running the query when I
or quote the
where clauses and thus PostgreSQL would never use the indexes.
It was also missing several indexes on appropriate columns.
We offered some advice and we know that some of it was taken but
we don't know which.
Sincerely,
Joshua D. Drake
experience
with Compaq Proliant (now HP) servers. I have also heard good things
about IBM.
IBM actually sells a reasonably priced Opteron server as well.
Sincerely,
Joshua D. Drake
--
Command Prompt, Inc., home of PostgreSQL Replication, and plPHP.
Postgresql support, programming shared hosting
seems a lot more honest
to me, and reasonable. The IBM gear doesn't seem that much better.
It is my experience that IBM will get within 5% of Dell if you
provide IBM with a written quote from Dell.
Sincerely,
Joshua D. Drake
And while I have concerns about some of the Dell
hardware, none
?
Mammoth PostgreSQL Replicator will automatically upgrade you to 7.3.8
which you should be running anyway.
I believe Slony will work with 7.3.2.
Sincerely,
Joshua D. Drake
Thanks again,
Saranya
Do you Yahoo!?
Read only
since Sun is looking to break into the market.
Really? I am not being sarcastic, but I found their prices pretty sad.
Did you go direct or web purchase? I have thought about using them
several times but
Sincerely,
Joshua D. Drake
On Wed, 2004-12-01 at 14:24 -0800, Josh Berkus wrote
)
An Opteron, properly tuned with PostgreSQL will always beat a Xeon
in terms of raw cpu.
RAID 10 will typically always outperform RAID 5 with the same HD config.
Fibre channel in general will always beat a normal (especially an LSI) raid.
Dell's suck for PostgreSQL.
Sincerely,
Joshua D. Drake
I'm sure any
.
Sincerely,
Joshua D. Drake
or _?_
This is a religious question :)
I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk performance.
And many, many disks -- yes.
Sincerely,
Joshua D. Drake
Any hardware-comparison benchmarks out there showing the results for
different PostgreSQL setups?
Thanks
RAID controllers tend to use i960 or StrongARM CPUs that run at speeds
that _aren't_ all that impressive. With software RAID, you can take
advantage of the _enormous_ increases in the speed of the main CPU.
I don't know so much about FreeBSD's handling of this, but on Linux,
there's pretty
Steinar H. Gunderson wrote:
On Mon, Jan 10, 2005 at 08:31:22PM -0800, Joshua D. Drake wrote:
Unless something has changed though, you can't run raid 10
with linux software raid
Hm, why not? What stops you from making two RAID-0 devices and mirroring
those? (Or the other way round, I can
used to manage also public web services with
10/15 millions records and up to 8 millions pages view by month.
Depending on your needs either:
Slony: www.slony.info
or
Replicator: www.commandprompt.com
Will both do what you want. Replicator is easier to setup but
Slony is free.
Sincerely,
Joshua D
Stephen Frost wrote:
* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a écrit :
* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
Is there any solution with PostgreSQL matching these needs ... ?
You might look into pg_pool. Another
to build it for you.
Sincerely,
Joshua D. Drake
Regards,
So what we would like to get is a pool of small servers able to make one
virtual server ... for that is called a Cluster ... no ?
I know they are not using PostgreSQL ... but how does a company like Google
manage a database that size with such quick access?
You could use dblink with
.
There is absolutely zero PostgreSQL solution...
I just replied the same thing, but then I was thinking: couldn't he use
multiple databases
over multiple servers with dblink?
It is not exactly how I would want to do it, but it would provide what
he needs, I think.
Sincerely,
Joshua D. Drake
You
there is not this kind of
functionality ... it seems to be a real need for big applications, no?
Because it is really, really hard to do correctly and hard
equals expensive.
Sincerely,
Joshua D. Drake
Thanks all for your answers ...
Christopher Kings-Lynne wrote:
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in 250,000,000 tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some
one
coming along.(around 350Gb)
Why not run two raid systems. A RAID 1 for your OS and a RAID 10 for
your database? Push all of your extra drives into the RAID 10.
Sincerely,
Joshua D. Drake
I was kind of hoping that the new PGSQL tablespaces would allow me to create
a storage container spanning
balanced query
results might differ among servers.
If there's enough demand, I could add such enhancements to pgpool.
Well I know that Replicator could also use this functionality.
Sincerely,
Joshua D. Drake
--
Tatsuo Ishii
Is there any other solution than a Cluster for our problem
with a BEGIN or START Transaction. Thus yes pgPool
can be made
to do this.
Sincerely,
Joshua D. Drake
Tatsuo Ishii wrote:
Can I ask a question?
Suppose table A gets updated on the master at time 00:00. Until 00:03
pgpool needs to send all queries regarding A to the master only. My
question is, how can
Alex Turner wrote:
I get the following output from explain analyze on a certain subset of
a large query I'm doing.
Try increasing the statistics target on the listprice column with
ALTER TABLE and then re-run ANALYZE:
alter table foo alter column listprice set statistics n
Sincerely,
Joshua D. Drake
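Concretely, a hedged sketch using the names from the thread (the target value 100 is illustrative):

```sql
ALTER TABLE foo ALTER COLUMN listprice SET STATISTICS 100;
ANALYZE foo;
```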
From
updating script, I feel.
I added this to the TODO section for autovacuum:
o Do VACUUM FULL if table is nearly empty?
We should never automatically launch a vacuum full. That seems like a
really bad idea.
Sincerely,
Joshua D. Drake
I don't think autovacuum is ever going
to accomplish the goal.
You could build a dual opteron with 4 GB of ram, 12 10k raptor SATA
drives with a battery backed cache for about 7k or less.
Or if they are not CPU bound just IO bound you could easily just
add an external 12 drive array (even if scsi) for less than 7k.
Sincerely,
Joshua D. Drake
.
Sincerely,
Joshua D. Drake
Here are some of my settings. I can provide more as needed:
cat /proc/sys/kernel/shmmax
175013888
max_connections = 100
#---
# RESOURCE USAGE (except WAL
to explicitly turn it on within the controller
bios.
They also have optional battery backed cache.
Sincerely,
Joshua D. Drake
--
Command Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564
Custom programming, 24x7 support, managed services, and hosting
Open Source Authors: plPHP
What seems to happen is it slams into a wall of some sort, the
system goes into disk write frenzy (wait=90% CPU), and eventually
recovers and starts running for a while at a more normal speed. What
I need though, is to not have that wall happen. It is easier for me
to accept a constant
with replicator you are going to take a pretty big hit initially
during the full
sync but then you could use batch replication and only replicate every
2-3 hours.
I am pretty sure Slony has similar capabilities.
Sincerely,
Joshua D. Drake
---(end of broadcast
Matthew Nuzum wrote:
I'm eager to hear your thoughts and experiences,
Thanks, I'm looking
with a battery backup option as well.
Oh and 3ware has BBU for certain models as well.
Sincerely,
Joshua D. Drake
running databases on SATA without issue. Would
I put it on a database that is expecting to have 500 connections at all
times? No.
Then again, if you have an application with that requirement, you have
the money
to buy a big fat SCSI array.
Sincerely,
Joshua D. Drake
Postgres apps, but you needed
bad blocks. It doesn't always mean you have to
replace the drive but it does mean you need to maintain it and usually
at least backup, low level (if scsi) and mark bad blocks. Then restore.
Sincerely,
Joshua D. Drake
/me remembers trying to cram an old donated 5MB (yes M) disk into an old
8088
The problem I have with plpgsql is
that the quoting is really a pain.
plpgsql but I believe that will change in a short period of time.
Sincerely,
Joshua D. Drake
Alex Turner wrote:
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
Uhmm I was under the impression that 1+0 was RAID 10 and that 0+1 is NOT
RAID 10.
Ref: http://www.acnc.com/raid.html
Sincerely,
Joshua D. Drake
and then re-enable them after your done with the import.
Sincerely,
Joshua D. Drake
I am up I can try to learn more about it, I am so
glad there are so many folks here willing to take time to educate us newb's.
Sincerely,
Joshua D. Drake
Command Prompt, Inc.
and
that is Command Prompt. If there are others, I would like to hear about
it, because I would rather work with someone than against them.
Sincerely,
Joshua D. Drake
--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.800.492.2240, programming, and consulting
Home
Dave Page wrote:
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent: 27 April 2005 17:46
To: Dave Page
Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform
Subject: Re: [PERFORM] Final decision
It is? No-one told the developers...
We have mentioned it on the list
D. Drake
Neil Conway wrote:
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first clustering
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
-Neil
Hello,
It always depends on the dataset but you should try an explain analyze
on each query. It will tell you which one is more efficient for your
particular data.
Sincerely,
Joshua D. Drake
Here's the join:
# explain select child_pid from ssv_product_children, nv_products where
, it is like Oracle or DB2 and comes with a
comparable feature set. Only you can decide if that is what you need.
Sincerely,
Joshua D. Drake
Command Prompt, Inc.
--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7
slower (which
under normal use I don't find to be the case) wouldn't the reliability
of PostgreSQL make up for say the 10% net difference in performance?
Sincerely,
Joshua D. Drake
Thanks,
Amit
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent: Tuesday
anyway).
At Command Prompt we have also had some great success with the LSI
cards. The only thing we didn't like is the obscure way you have to
configure RAID 10.
Sincerely,
Joshua D. Drake
J. Andrew Rogers
box?
What have been your experience?
I would not run RAID + LVM in a software scenario. Software RAID is fine
however.
Sincerely,
Joshua D. Drake
I don't foresee more than 10-15 concurrent sessions running for their OLTP
application.
Thanks.
Steve Poe
Well for Opteron you should also gain from the very high memory
bandwidth and the fact that it has I believe 3 FP units per CPU.
Sincerely,
Joshua D. Drake
Three options:
9500-4LP with Raptor drives 10k rpm, raid 1 + raid 1
9500-8LP with Raptor drives 10k rpm, raid 10 + raid 1
Go for SCSI (LSI Megaraid or ICP Vortex) and take 10k drives
If you are going with Raptor drives use the LSI 150-6 SATA RAID
with the BBU.
Sincerely,
Joshua D. Drake
=3 and typeid=9);
Sincerely,
Joshua D. Drake
     Table "test"
    Column     |         Type
---------------+------------------------
 id            | integer
 partnumber    | character varying(32)
 productlistid | integer
 typeid        | integer
Indexes:
    test_productlistid btree (productlistid)
    test_typeid btree (typeid)
(productlistid=3 and
Hello,
Also what happens if you:
set enable_seqscan = false;
explain analyze query
Sincerely,
Joshua D. Drake
typeid=9);
QUERY PLAN
--------------------------------------------------
 Seq Scan on test
   Filter: ((productlistid = 3) AND (typeid = 9))
 Total runtime: 36847.754 ms
(3 rows)
Time: 36850.719 ms
On Fri, 10 Jun 2005, Joshua D. Drake wrote:
Clark Slater wrote:
hmm, i'm baffled. i simplified the query
and it is still taking forever...
What happens if you:
alter table test alter column productlistid set
Clark Slater wrote:
Query should return 132,528 rows.
O.k. then the planner is doing fine it looks like. The problem is you
are pulling 132,528 rows. I would suggest moving to a cursor which will
allow you to fetch in smaller chunks much quicker.
Sincerely,
Joshua D. Drake
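A minimal sketch of the cursor approach (query simplified from the thread; the chunk size is illustrative):

```sql
BEGIN;
DECLARE c CURSOR FOR
    SELECT * FROM test WHERE productlistid = 3 AND typeid = 9;
FETCH 1000 FROM c;   -- repeat until a fetch returns no rows
CLOSE c;
COMMIT;
```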
vbp=# set
can bump it up on
my todo list.
Um, can't we just get that from pg_settings?
Anyway, I'll be deriving settings from the .conf file, since most of the
time the Configurator will be run on a new installation.
Aren't most of the settings all kept in the SHOW variables anyway?
Sincerely,
Joshua D
.
Sincerely,
Joshua D. Drake
JohnM
-
table definitions
-
-
db= \d contacts
Table db.contacts
Column|Type | Modifiers
--+-+---
id
,
Joshua D. Drake
I'd also be interested in knowing if this is dependant on whether I am
running 7.4, 8.0 or 8.1.
with encrypted password '';
But as I look at pg_shadow there is still a hash...
You could do:
update pg_shadow set passwd = '' where usename = 'foo';
Sincerely,
Joshua D. Drake
Larry Bailey
Sr. Oracle DBA
First American Real Estate Solution
(714) 701-3347
[EMAIL PROTECTED
Bailey, Larry wrote:
Thanks but it is still prompting for a password.
Does your pg_hba.conf require a password?
Sincerely,
Joshua D. Drake
Larry Bailey
Sr. Oracle DBA
First American Real Estate Solution
(714) 701-3347
[EMAIL PROTECTED]
-Original Message-
From: Joshua D. Drake
I would be curious as to what options were passed to jfs and xfs.
Sincerely,
Joshua D. Drake
BTW, it'd be interesting to see how UFS on FreeBSD compared.
Ron Wills wrote:
Hello all
I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and
an 3Ware SATA raid.
2 drives?
4 drives?
8 drives?
RAID 1? 0? 10? 5?
Currently the database is only 16G with about 2
tables with 50+ row, one table 20+ row and a few small
tables. The
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
What queries?
What is your structure?
Have you tried explain analyze?
How many
MB in 2.00 seconds = 908.00 MB/sec
Timing buffered disk reads: 26 MB in 3.11 seconds = 8.36 MB/sec
[EMAIL PROTECTED] root]#
Which is just horrible.
Sincerely,
Joshua D. Drake
Patrick Welche wrote:
On Thu, Jul 21, 2005 at 09:19:04PM -0700, Luke Lonergan wrote:
Joshua,
On 7/21/05
I just want to know , for immediate data mirroring , what is the best
way for PostgreSQL . PostgreSQL is offering many mirror tools , but
which one is the best ?. Is there any other way to accomplish the task ?
You want to take a look at Slony-I or Mammoth Replicator.
http://www.slony.info/
.
Sincerely,
Joshua D. Drake
?
Is the query using indexes?
Is the query modifying a lot of rows?
Of course there is also the RTFM question: are you analyzing and vacuuming?
Sincerely,
Joshua D. Drake
I'm running 8.0.1 on kernel 2.6.12-3 on 64-bit Opterons if that matters..
-Dan
Also, I am using select ... group by ... order by .. limit 1 to get
the min/max since I have already been bit by the issue of min() max()
being slower.
This specific instance is fixed in 8.1
Sincerely,
Joshua D. Drake
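The workaround being discussed can be sketched like this (table and column names are hypothetical; assumes an index on ts):

```sql
-- Before 8.1, min()/max() could not use an index; these forms can:
SELECT ts FROM t ORDER BY ts ASC  LIMIT 1;  -- min
SELECT ts FROM t ORDER BY ts DESC LIMIT 1;  -- max
```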
-Dan
I've been asked this a couple of times and I don't know the answer: what
happens if you give XLog a single drive (unmirrored single spindle), and
that drive dies? So the question really is, should you be giving two
disks to XLog?
If that drive dies, you're restoring from backup. You would need
.
the postgresql.conf of both machines is here:
max_connections = 50
shared_buffers = 1000 # min 16, at least max_connections*2,
8KB each
You should look at the annotated conf:
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
Sincerely,
Joshua D. Drake
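As a hedged illustration of the kind of change that guide suggests (values are illustrative only, not recommendations for this box):

```
# postgresql.conf (8.0-era) -- illustrative values only
shared_buffers = 10000           # ~80MB at 8KB per buffer
effective_cache_size = 65536     # ~512MB; reflects the OS file cache
```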
without sacrificing the performance and reliability of the database itself.
Sincerely,
Joshua D. Drake
HD's and RAM are cheap enough that you should be able to upgrade in
more ways, but do at least that upgrade!
Beyond that, the best ways to spend you limited $ are highly dependent
everything on one RAID 10. YMMV.
Really? That's interesting. My experience is different, I assume SCSI?
Software/Hardware Raid?
Sincerely,
Joshua D. Drake
Ron Peacetree
constructions used by PostgreSQL.
Does anyone know why this method was choosen? Are there any papers or
researches about it?
You may want to pass this question over to pgsql-hackers.
Sincerely,
Joshua D. Drake
Thank's a lot,
Pryscila.
a difference.
From my experience software raid works very, very well. However I have
never put
software raid on anything that is very heavily loaded.
I would still use hardware raid if it is very heavily loaded.
Sincerely,
Joshua D. Drake
.
Sincerely,
Joshua D. Drake
that increments? E.g. serial?
select * from table order by date limit 25 offset 0
You could use a cursor.
Sincerely,
Joshua D. Drake
Tables seems properly indexed, with vacuum and analyze ran regularly.
Still this very basic SQLs takes up to a minute run.
I read some recent messages
Christian Paul B. Cosinas wrote:
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it give me the following error:
-bash: VACUUM: command not found
That needs to be run from psql ...