[EMAIL PROTECTED] wrote:
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
I understand most of these issues, and expected this kind of reply. Please
allow me to insist that we reason about this problem and try to find a
solution. My reason for doing so is that the future
Alessandro Baretta schrieb:
[EMAIL PROTECTED] wrote:
...
It looks like this is the only possible solution at present--and in the
future, too--but it has a tremendous performance impact on queries
returning thousands of rows.
Well actually one of the better solutions would be persistent
Josh Berkus wrote:
People:
To follow up further, what Alessandro is talking about is known as a
keyset cursor. Sybase and SQL Server used to support them; I believe
that they were strictly read-only and had weird issues with record
visibility.
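As a sketch of what a keyset-style approach looks like in plain SQL (the table and column names here are hypothetical, purely for illustration):

```sql
-- Paging with OFFSET forces the server to scan and discard all
-- skipped rows on every request:
SELECT * FROM articles ORDER BY id LIMIT 50 OFFSET 5000;

-- A keyset-style query instead remembers the last key the client saw
-- and seeks to it, so an index on (id) finds the page start directly:
SELECT * FROM articles WHERE id > 12345 ORDER BY id LIMIT 50;
```

The second form stays fast no matter how deep into the result set the client has paged.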
I would like to thank everyone for sharing
Hi,
I have a simple question about the performance of two approaches.
Which has the best performance?
lower( col1 ) LIKE lower( 'myquestion%' )
OR
col1 ILIKE 'myquestion%'
Thanks.
I have the answer I've been looking for and I'd like to share it with all.
After help from you guys, it appeared that the real issue was using an index
for my ORDER BY X DESC clauses. For some reason that doesn't make much
sense to me, Postgres doesn't support this automatically, when it arguably should.
There's a number of sites that have lots of info on postgresql.conf
tuning. Google for 'postgresql.conf tuning' or 'annotated
postgresql.conf'.
I know some of these sites, but how should I know if the information on
those pages is correct? The information on those pages should be
published as
On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:
[EMAIL PROTECTED] wrote:
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
I understand most of these issues, and expected this kind of reply.
Please, allow me to insist that we reason on this problem and
As far as I know the index is only used when you do a prefix search, for
example
col like 'xyz%'
I think that if you are looking for expressions such as 'A%B', you could
rephrase them like this:
col like 'A%' AND col like 'A%B'
So the database could use the index to narrow down the result
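A rough sketch of this in SQL (the table and index names are made up; note that under a non-C locale a plain btree index does not support LIKE prefix matching, which is what the text_pattern_ops operator class is for):

```sql
CREATE INDEX t_col_idx ON t (col text_pattern_ops);

-- The planner can use the index for the col LIKE 'A%' range condition
-- and apply the full pattern 'A%B' as a filter on the matching rows:
EXPLAIN SELECT * FROM t WHERE col LIKE 'A%' AND col LIKE 'A%B';
```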
On Wed, 18 Jan 2006 [EMAIL PROTECTED] wrote:
I have the answer I've been looking for and I'd like to share it with all.
After help from you guys, it appeared that the real issue was using an index
for my ORDER BY X DESC clauses. For some reason that doesn't make much
sense to me, Postgres doesn't
[EMAIL PROTECTED] wrote:
On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:
If there is to be a change to PostgreSQL to optimize for this case, I
suggest it involve the caching of query plans, executor plans, query
results (materialized views?), LIMIT and OFFSET. If we had all
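For what it's worth, per-session plan caching already exists via PREPARE; this doesn't cache results across queries as suggested above, but it does avoid re-planning (the table name is hypothetical):

```sql
-- Planned once per session:
PREPARE get_page (integer, integer) AS
    SELECT * FROM articles ORDER BY id LIMIT $1 OFFSET $2;

-- Executed repeatedly with different parameters, reusing the plan:
EXECUTE get_page(50, 0);
EXECUTE get_page(50, 50);

DEALLOCATE get_page;
```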
Your experiment made far too many assumptions and the data does not
stand up to scrutiny.
On 1/18/06, Alessandro Baretta [EMAIL PROTECTED] wrote:
Results: I'll omit the numerical data, which anyone can easily obtain in
a few minutes by repeating the experiment. I used several query
Even a database-wide vacuum does not take locks on more than one table.
The table locks are acquired and released one by one, as the operation
proceeds.
Has that changed recently? I have always seen vacuumdb or SQL
VACUUM (without table specifications) running as one long
On Wed, Jan 18, 2006 at 03:41:57PM +, Harry Jackson wrote:
There are various reasons why Google might want to limit the search
results returned, e.g. to encourage people to narrow their search, or to
prevent screen scrapers from hitting them really hard. Perhaps less
than 0.0001% of
Mindaugas wrote:
Even a database-wide vacuum does not take locks on more than one table.
The table locks are acquired and released one by one, as the operation
proceeds.
Has that changed recently? I have always seen vacuumdb or SQL
VACUUM (without table specifications) running
[EMAIL PROTECTED] (Mindaugas) writes:
Even a database-wide vacuum does not take locks on more than one
table. The table locks are acquired and released one by one, as
the operation proceeds.
Has that changed recently? I have always seen vacuumdb or SQL
VACUUM (without table
Hi,
I am currently doing
large weekly updates with fsync=off. My updates involve SELECT, UPDATE,
DELETE, etc. Setting fsync=off works for me since I take a complete
backup before the weekly update and run a "sync" and "CHECKPOINT" after each
weekly update has completed to ensure the
Obviously, I have done this to improve write performance for the update
each week. My question is if I install a 3ware or similar card to
replace my current software RAID 1 configuration, am I going to see a
very large improvement? If so, what would be a ball park figure?
Well that
My original plan was to buy a 3WARE card and put a 1GB of
memory on it to improve writes but I am not sure if that is actually going to
help the issue if fsync=off.
Benjamin Arai
[EMAIL PROTECTED]
http://www.benjaminarai.com
On Sat, Jan 14, 2006 at 09:37:01PM -0500, Charles Sprickman wrote:
Following up to myself again...
On Wed, 14 Dec 2005, Charles Sprickman wrote:
Hello all,
Supermicro 1U w/SCA backplane and 4 bays
2x2.8 GHz Xeons
Adaptec 2015S zero channel RAID card
I don't want to throw away the
http://www.3ware.com/products/serial_ata2-9000.asp
Check their data sheet - the cards are BBU ready - all you have to do
is order a BBU
which you can from here:
http://www.newegg.com/Product/Product.asp?Item=N82E16815999601
Alex.
On 1/18/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
On Wednesday 18 January 2006 08:54 am, Chris Browne wrote:
To the contrary, there is a whole section on what functionality to
*ADD* to VACUUM.
Near but not quite off the topic of VACUUM and new features...
I've been thinking about parsing the vacuum output and storing it in
Postgresql. All
A 3ware card will re-order your writes to put them more in disk order,
which will probably improve performance a bit, but just going from a
software RAID 1 to a hardware RAID 1, I would not imagine that you
will see much of a performance boost. Really to get better
performance you will need to
On Wed, 2006-01-18 at 10:26 -0800, Benjamin Arai wrote:
My original plan was to buy a 3WARE card and put a 1GB of memory on it
to improve writes but I am not sure if that is actually going to help
the issue if fsync=off.
My experience with a 3Ware 9500S-8 card is rather disappointing,
Here's some C to use to create the operator classes; seems to work OK.
---
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include "utils/date.h"

/* For date sorts: compare two dates in reverse order */
PG_FUNCTION_INFO_V1(ddd_date_revcmp);

Datum
ddd_date_revcmp(PG_FUNCTION_ARGS)
{
    DateADT arg1 = PG_GETARG_DATEADT(0);
    DateADT arg2 = PG_GETARG_DATEADT(1);

    PG_RETURN_INT32(arg2 - arg1);
}
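The C comparator above still has to be registered on the SQL side, roughly like this, following the reverse-sort operator-class recipe in the PostgreSQL manual (the library name 'ddd' and the opclass name are hypothetical):

```sql
CREATE FUNCTION ddd_date_revcmp(date, date) RETURNS integer
    AS 'ddd', 'ddd_date_revcmp' LANGUAGE C STRICT;

-- A btree opclass with the operators swapped end-for-end, so an
-- index built with it stores dates in descending order:
CREATE OPERATOR CLASS date_revcmp_ops
    FOR TYPE date USING btree AS
        OPERATOR 1  > ,
        OPERATOR 2  >= ,
        OPERATOR 3  = ,
        OPERATOR 4  <= ,
        OPERATOR 5  < ,
        FUNCTION 1  ddd_date_revcmp(date, date);

-- Then: CREATE INDEX t_d_desc ON t (d date_revcmp_ops);
```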
On Tue, Jan 17, 2006 at 09:04:53AM +, Marcos wrote:
I already read the documentation for to use the SPI_PREPARE and
SPI_EXEC... but sincerely I don't understand how I will use this
resource in my statements.
What statements? What problem are you trying to solve?
--
Michael Fuhr
Benjamin Arai wrote:
Obviously, I have done this to improve write performance for the update
each week. My question is if I install a 3ware or similar card to
replace my current software RAID 1 configuration, am I going to see a
very large improvement? If so, what would be a ball park
Steinar H. Gunderson wrote:
On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:
The key is getting a card with the ability to upgrade the onboard ram.
Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10,
fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives
On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:
The key is getting a card with the ability to upgrade the onboard ram.
Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10,
fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives
(split into 3 8-drive
On Wed, Jan 18, 2006 at 03:09:42PM +0100, Michael Riess wrote:
There's a number of sites that have lots of info on postgresql.conf
tuning. Google for 'postgresql.conf tuning' or 'annotated
postgresql.conf'.
I know some of these sites, but how should I know if the information on
those pages
On Wed, Jan 18, 2006 at 11:15:51AM -0800, Michael Crozier wrote:
I've been thinking about parsing the vacuum output and storing it in
Postgresql. All the tuple, page, cpu time, etc... information would be
inserted into a reasonably flat set of tables.
snip
Assuming this isn't a
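One possible shape for such tables, purely as a sketch (all names here are made up):

```sql
CREATE TABLE vacuum_log (
    logged_at       timestamptz NOT NULL DEFAULT now(),
    table_name      text        NOT NULL,
    pages_removed   integer,
    tuples_removed  integer,
    cpu_seconds     numeric,
    elapsed_seconds numeric
);
```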
On Wed, Jan 18, 2006 at 09:10:30AM +, Marcos wrote:
Hi,
I have a simple question about the performance of two approaches.
Which has the best performance?
lower( col1 ) LIKE lower( 'myquestion%' )
OR
col1 ILIKE 'myquestion%'
If you index lower( col1 ), then the former would
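Concretely, assuming a table t (the names are hypothetical):

```sql
-- Expression index on the lowercased column; text_pattern_ops is
-- needed for LIKE prefix matching under non-C locales:
CREATE INDEX t_col1_lower_idx ON t (lower(col1) text_pattern_ops);

-- This form can use the index:
SELECT * FROM t WHERE lower(col1) LIKE lower('myquestion%');

-- ILIKE on col1 cannot use a plain btree index, so without a matching
-- index it falls back to a sequential scan.
```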
[EMAIL PROTECTED] (Michael Crozier) writes:
On Wednesday 18 January 2006 08:54 am, Chris Browne wrote:
To the contrary, there is a whole section on what functionality to
*ADD* to VACUUM.
Near but not quite off the topic of VACUUM and new features...
I've been thinking about parsing the
On Wed, Jan 18, 2006 at 10:24:17AM -0800, Joshua D. Drake wrote:
Because 3ware does not make a midrange card that has a battery backed
cache :). That is the only reason. 3ware makes good stuff.
Why worry about battery-backup if he's running with fsync off?
--
Jim C. Nasby, Sr. Engineering
On Wednesday 18 January 2006 14:52, Jim C. Nasby wrote:
I think it's a good idea, but you should take a look at the recently
added functionality that allows you to investigate the contents of the
FSM via a user function (this is either in 8.1 or in HEAD; I can't
remember which).
I will
We had someone working on that for a while; I don't think it got to
the point of being something ready to unleash on the world.
Interesting. I will dig around the mailing list archives to see how they
went about it... for my own curiosity if nothing else. If you happen to
know offhand,
On Wed, Jan 18, 2006 at 03:36:04PM -0800, Michael Crozier wrote:
On Wednesday 18 January 2006 14:52, Jim C. Nasby wrote:
I think it's a good idea, but you should take a look at the recently
added functionality that allows you to investigate the contents of the
FSM via a user function