The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes
sense.
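The implied row count follows directly from those two numbers; a quick sanity check (the row count is inferred from the reported sizes, not stated in the thread):

```python
# Sanity check of the buffer size arithmetic: a 140 MB buffer at
# 14 bytes per row implies roughly 10 million rows.
buffer_bytes = 140 * 1000 * 1000
bytes_per_row = 14
rows = buffer_bytes // bytes_per_row
print(rows)  # 10000000
```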
I have done another test.
If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the
PostgreSQL server, reported in its log.
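For context, the pattern under discussion looks roughly like this (a sketch only; table and cursor names are placeholders, not from the thread):

```sql
BEGIN;
DECLARE huge_cur CURSOR FOR SELECT * FROM big_table;  -- placeholder names
FETCH ALL FROM huge_cur;  -- reportedly spills the full result to a disk buffer
COMMIT;
```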
If I execute `FETCH 500 FROM cursor` (exactly half the rows), I see a
Thanks, Tom.
Wouldn't this mean that cursors are noticeably non-optimal even for normal
data sizes, since the entire result set is always duplicated into another
buffer before being streamed?
> if you want the whole query result at once, why are you bothering with a
> cursor?
My experience with cursors in PostgreSQL with Java has been to stay away from
them. We support two databases with our product, PostgreSQL (default) and SQL
Server. While re-encrypting data in a database, the application used cursors
with a fetch size of 1000.
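The fetch-size approach can be sketched generically. This is a minimal illustration of draining a row source in fixed-size batches, mimicking repeated `FETCH <n>` calls against a cursor; it is not the actual JDBC driver code, and `fetch_in_batches` is a hypothetical helper:

```python
from itertools import islice

def fetch_in_batches(rows, batch_size=1000):
    """Yield lists of at most batch_size rows, mimicking a cursor
    drained with repeated FETCH <n> calls."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: 2500 rows drained in batches of 1000, 1000, 500.
sizes = [len(b) for b in fetch_in_batches(range(2500))]
print(sizes)  # [1000, 1000, 500]
```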
Worked perfectly on SQL Server and on
On Tue, Feb 21, 2017 at 7:49 AM, Pietro Pugni
wrote:
> Hi there,
> I configured an IBM X3650 M4 for development and testing purposes. It’s
> composed by:
> - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT)
> - 96GB RAM DDR3 1333MHz (12 x 8GB)
> - 2 x 146GB
Mike Beaton writes:
> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
> on `FETCH ALL FROM CursorToHuge`.
I poked into this and determined that it's happening because pquery.c
executes FETCH statements the same as it does with any other
Hi there,
I configured an IBM X3650 M4 for development and testing purposes. It’s
composed of:
- 2 x Intel Xeon E5-2690 @ 2.90GHz (2 x 8 physical cores + HT)
- 96GB RAM DDR3 1333MHz (12 x 8GB)
- 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm)
- 6 x 525GB SATA SSDs (over-provisioned
Suggestion #1 is to turn off any write caching on the RAID controller.
Using LSI MegaRAID we went from 3k–5k tps to 18k just by turning off write
caching. Basically it just got in the way.
> Disclaimer: I’ve done extensive testing (FIO and postgres) with a few
> different RAID controllers and HW RAID vs mdadm. We (Micron) are Crucial, but
> I don’t personally work with the consumer drives.
>
> Verify whether you have your disk write cache enabled or disabled. If it’s
> disabled,
> Suggestion #1 is to turn off any write caching on the RAID controller. Using
> LSI MegaRAID we went from 3k–5k tps to 18k just by turning off write caching.
> Basically it just got in the way.
Write caching is disabled because I removed the expansion card of the RAID
controller. It didn’t
> I'm curious what the entry point is for Micron models that are capacitor enabled...
The 5100 is the entry SATA drive with full power loss protection.
http://www.anandtech.com/show/10886/micron-announces-5100-series-enterprise-sata-ssds-with-3d-tlc-nand
Fun Fact: 3D TLC can give better endurance
On Tue, Feb 21, 2017 at 1:40 PM, Wes Vaske (wvaske)
wrote:
> - HW RAID can give better performance if your drives do not have
> a capacitor backed cache (like the MX300) AND the controller has a battery
> backed cache. **Consumer drives can often get better
Disclaimer: I’ve done extensive testing (FIO and postgres) with a few different
RAID controllers and HW RAID vs mdadm. We (Micron) are Crucial, but I don’t
personally work with the consumer drives.
Verify whether you have your disk write cache enabled or disabled. If it’s
disabled, that will
Thank you very much for your suggestions.
> Random points/suggestions:
> *) mdadm is the way to go. I think you'll be bandwidth constrained on most
> modern HBAs unless they are really crappy. On reasonably modern hardware
> storage is rarely the bottleneck anymore (which is a great place to