Hi Wes
1. The first interesting thing is that prior to my mentioning this problem to
C_ a year or two back, the power loss protection was advertised everywhere
as simply that, without qualifiers about 'not inflight data'. Check out the
marketing of the M500 for the first year or so and
Mike Broers mbro...@gmail.com writes:
I had a query that was filtering with a wildcard search of a text field for
%SUCCESS%. The query took about 5 seconds and was running often so I wanted
to improve it. I suggested that the engineers include a new boolean column
for successful status. They
After a plug-pull during the create and a reboot, here is the verify:
root@Dbms2:/var/tmp # ./diskchecker.pl -s newfs verify /test/biteme
verifying: 0.00%
verifying: 3.81%
verifying: 10.91%
verifying: 18.71%
verifying: 26.46%
verifying: 33.95%
verifying: 41.20%
verifying: 49.48%
verifying:
Why would you think that you don't need RAID for ZFS?
Reason I'm asking is because we are moving to ZFS on FreeBSD for our future
projects.
Because you have zraid. :-)
https://blogs.oracle.com/bonwick/entry/raid_z
General points:
1. It's my understanding that ZFS is designed to talk to
I had a query that was filtering with a wildcard search of a text field for
%SUCCESS%. The query took about 5 seconds and was running often so I wanted
to improve it. I suggested that the engineers include a new boolean column
for successful status. They implemented the requested field, but the
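The optimisation being suggested here can be sketched end-to-end (using stdlib sqlite3 as a self-contained stand-in for postgres; the table and column names are invented for illustration): precompute a boolean flag once, so queries stop scanning text with a leading-wildcard pattern that no ordinary btree index can serve.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, detail TEXT, is_success BOOLEAN)"
)
rows = [("job 1 ended: SUCCESS",), ("job 2 ended: FAILURE",), ("job 3 ended: SUCCESS",)]
conn.executemany("INSERT INTO events (detail) VALUES (?)", rows)

# The slow pattern: a leading wildcard defeats ordinary btree indexes,
# forcing a full scan of the text column on every execution.
slow = conn.execute("SELECT id FROM events WHERE detail LIKE '%SUCCESS%'").fetchall()

# The fix from the thread: populate a boolean column once, index it,
# and filter on that instead.
conn.execute("UPDATE events SET is_success = (detail LIKE '%SUCCESS%')")
conn.execute("CREATE INDEX idx_events_success ON events (is_success)")
fast = conn.execute("SELECT id FROM events WHERE is_success").fetchall()

assert slow == fast  # same rows, but the second form is indexable
```

The trade-off is that the flag must be kept in sync on writes (a trigger or application-side update), which is cheap compared with repeated full scans.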
On 07 Jul 2015, at 16:59, Heikki Linnakangas hlinn...@iki.fi wrote:
So it lies about fsync()... The next question is, does it nevertheless
enforce the correct ordering of persisting fsync'd data? If you write to
file A and fsync it, then write to another file B and fsync it too, is it
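The ordering being asked about can be written down concretely. A minimal sketch (file names invented) of the write-A/fsync-A, then write-B/fsync-B sequence whose durability ordering a cache-lying drive may violate:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()

def durable_write(path, data):
    """Write data and fsync before returning, per the pattern in the thread."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # should not return until the drive reports the data stable
    finally:
        os.close(fd)

# Write to file A and fsync it, then write to file B and fsync it.
# On honest hardware, B can never be durable while A is not, because the
# first fsync() must block until A has reached stable storage.
durable_write(os.path.join(tmpdir, "A"), b"first")
durable_write(os.path.join(tmpdir, "B"), b"second")

# A drive that acks FLUSH CACHE without draining its write cache can end up
# persisting B but not A after power loss, even though both fsyncs "succeeded".
```

That inversion is exactly what tools like diskchecker.pl try to catch with plug-pull testing.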
RAID controllers are completely unnecessary for SSD as they currently
exist.
Agreed. The best solution is not to buy cheap disks and not to buy RAID
controllers now, imho.
In my own situation, I had a tight budget, high performance demand and a newish
machine with RAID controller and HDDs
Hi.
How would a BBU cache help you if the drive lies about fsync? I suppose any RAID
controller removes data from its BBU cache after it was fsynced by the drive.
As far as I know, there is no other magic command for the drive to tell the
controller that the data is safe now and can be removed from the BBU cache.
Tue, 7 Jul.
After bumping up work_mem from 12MB to 25MB that last materialize is indeed
hashing and this cut the query time by about 60%. Thanks, this was very
helpful and gives me something else to look for when troubleshooting
explains.
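The tuning step described above, sketched as a psql session (the thread does not show the actual query, so the SELECT is a placeholder; work_mem is the real postgres setting being changed):

```sql
-- Per-session bump; 12MB was too small for this plan's materialize step.
SET work_mem = '25MB';

-- Re-check the plan: the materialize node should now hash in memory
-- instead of spilling.
EXPLAIN ANALYZE
SELECT ...;  -- placeholder: the thread's actual query is not shown
```

Setting it per-session first is the cautious route, since work_mem is allocated per sort/hash operation and raising it globally multiplies memory use across all backends.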
On Tue, Jul 7, 2015 at 11:10 AM, Mike Broers mbro...@gmail.com
The comment on HDDs is true and gave me another thought.
These new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on
tracks that overlap your data, any time you change the data. Result: disks
8-20x slower during writes, after they fill up.
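The slowdown has a simple worst-case model: once the bands are full, a small overwrite forces a read-modify-write of every overlapping track. A toy calculation (the band size is a hypothetical figure; real SMR band sizes vary by drive):

```python
# Hypothetical shingled band of 256 MiB: a random 4 KiB overwrite inside a
# full band forces the whole band to be rewritten.
band_bytes = 256 * 1024 * 1024
write_bytes = 4 * 1024

worst_case_amplification = band_bytes // write_bytes
print(worst_case_amplification)  # 65536

# The observed 8-20x figure quoted above is far milder than this worst case
# because drives stage small writes in a conventional (non-shingled) area and
# rewrite bands lazily -- the full penalty appears once that staging area fills.
```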
Do they have power loss protection
Yikes. I would not be able to sleep tonight if it were not for the BBU cache in
front of these disks...
diskchecker.pl consistently reported several examples of corruption
post-power-loss (usually 10-30) on unprotected M500s/M550s, so I think it's
pretty much open to debate what types of
Note that if you still have the settings you showed in your original
post you're just moving the goal posts a few feet further back. Any
heavy load can still trigger this kind of behaviour.
On Tue, Jul 7, 2015 at 5:29 AM, eudald_v reaven.galaeind...@gmail.com wrote:
Hello guys!
I finally got
Thanks, very informative! I'll experiment with work_mem settings and report
back.
On Tue, Jul 7, 2015 at 11:02 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Mike Broers mbro...@gmail.com writes:
I had a query that was filtering with a wildcard search of a text field
for
%SUCCESS%. The query took
Hi Graeme,
Why would you think that you don't need RAID for ZFS?
Reason I'm asking is because we are moving to ZFS on FreeBSD for our future
projects.
Regards,
Wei Shan
On 8 July 2015 at 00:46, Graeme B. Bell graeme.b...@nibio.no wrote:
RAID controllers are completely unnecessary for SSD
On Tue, Jul 7, 2015 at 11:43 AM, Graeme B. Bell graeme.b...@nibio.no wrote:
The comment on HDDs is true and gave me another thought.
These new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on
tracks that overlap your data, any time you change the data. Result: disks
8-20x
On 07 Jul 2015, at 19:47, Scott Marlowe scott.marl...@gmail.com wrote:
[I know that using a shingled disk sounds crazy (it sounds crazy to me) but
you can bet there are people that just want to max out the disk bays in
their server... ]
Let's just say no online backup companies are using
On Tue, Jul 7, 2015 at 11:46 AM, Graeme B. Bell graeme.b...@nibio.no wrote:
RAID controllers are completely unnecessary for SSD as they currently
exist.
Agreed. The best solution is not to buy cheap disks and not to buy RAID
controllers now, imho.
In my own situation, I had a tight
On 07/07/2015 10:59 PM, Graeme B. Bell wrote:
Cache flushing isn't an atomic operation though. Even if the ordering
is right, you are likely to have a partial fsync on the disk when the
lights go out - isn't your FS still corrupt?
If the filesystem is worth its salt, no. Journaling filesystems
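The reason a half-flushed cache need not corrupt the filesystem can be sketched in miniature: a journaling filesystem (like postgres's own WAL) records each change in a journal with a commit marker before touching the main structures, so a torn journal tail is simply discarded at replay. This is a toy model, not any particular filesystem's on-disk format:

```python
# Toy journal: a transaction counts only if its commit marker reached
# stable storage before the crash.
journal = [
    ("begin", 1), ("write", 1, "A", "new"), ("commit", 1),
    ("begin", 2), ("write", 2, "B", "new"),  # crash: commit 2 never flushed
]

def replay(journal):
    """Apply only transactions whose commit record survived the crash."""
    committed = {rec[1] for rec in journal if rec[0] == "commit"}
    state = {}
    for rec in journal:
        if rec[0] == "write" and rec[1] in committed:
            _, _, key, value = rec
            state[key] = value
    return state

state = replay(journal)
# Transaction 1 is applied; the torn transaction 2 is ignored, so the
# structures stay consistent even though the last flush was partial.
```

The scheme still depends on the drive honouring flush ordering between the journal and the main structures, which is why the lying-about-fsync question matters.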
Cache flushing isn't an atomic operation though. Even if the ordering is right,
you are likely to have a partial fsync on the disk when the lights go out -
isn't your FS still corrupt?
On 07 Jul 2015, at 21:53, Heikki Linnakangas hlinn...@iki.fi wrote:
On 07/07/2015 09:01 PM, Wes Vaske
Regarding:
“lie about their fsync status.”
This is mostly semantics but it might help google searches on the issue.
A drive doesn’t support fsync(); that’s a filesystem/kernel operation. A drive
will do a FLUSH CACHE. Before kernels in the low 2.6.x range, the fsync() call
wouldn’t send any ATA or SCSI
On 07/07/2015 09:01 PM, Wes Vaske (wvaske) wrote:
Regarding:
“lie about their fsync status.”
This is mostly semantics but it might help google searches on the issue.
A drive doesn’t support fsync(), that’s a filesystem/kernel process. A drive will do
a FLUSH CACHE. Before kernels 2.6.low
That is a very good question, which I have raised elsewhere on the postgresql
lists previously.
In practice: I have *never* managed to make diskchecker fail with the BBU
enabled in front of the drives and I spent days trying with plug pulls till I
reached the point where as a statistical
This raises another interesting question. Does anyone here have a document
explaining how their BBU cache works EXACTLY (at cache / sata level) on their
server? Because I haven't been able to find any for mine (Dell PERC
H710/H710P). Can anyone tell me with godlike authority and
On Tue, Jul 7, 2015 at 10:59 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:
The M500/M550/M600 are consumer class drives that don't have power
protection for all inflight data.* (like the Samsung 8x0 series and
the Intel 3x0 5x0 series).
Hi Merlin,
Long story short - thanks for the reply, but you're not measuring anything
about the parallelism of code running in a pl/pgsql environment here. You're
just measuring whether postgres can parallelise entering that environment and
get back out. Don't get me wrong - it's great that
On Tue, Jul 7, 2015 at 3:33 PM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi Merlin,
Long story short - thanks for the reply, but you're not measuring anything
about the parallelism of code running in a pl/pgsql environment here. You're
just measuring whether postgres can parallelise
1. Does the sammy nvme have *complete* power loss protection though, for all
fsync'd data?
I am very badly burned by my experiences with Crucial SSDs and their 'power
loss protection' which doesn't actually ensure all fsync'd data gets into flash.
It certainly looks pretty with all those
Hello guys!
I finally got rid of it.
It looks like, in the end, it was all due to transparent_hugepages values.
I disabled them and the CPU spikes disappeared. I am sorry, because it's
something I usually disable on postgresql servers, but I forgot to do so on
this one and never thought about it.
Thanks
Hi Karl,
Great post, thanks.
Though I don't think it's against conventional wisdom to aggregate writes into
larger blocks rather than rely on 4k performance on ssds :-)
128kb blocks + compression certainly makes sense. But it might make less sense
I suppose if you had some incredibly high
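The point about aggregating into larger blocks can be demonstrated with stdlib zlib: compressing in larger blocks amortises per-stream overhead and gives the compressor a larger window to find matches in. The data here is synthetic log-like filler, so the ratios are illustrative only:

```python
import zlib

# Synthetic, repetitive log-like data: 128 KiB total.
record = b"ts=1436283558 level=INFO status=SUCCESS job=batch-42\n"
data = (record * ((128 * 1024) // len(record) + 1))[:128 * 1024]

# Compress as one 128 KiB block vs. thirty-two independent 4 KiB blocks.
one_big_block = len(zlib.compress(data))
four_k_blocks = sum(
    len(zlib.compress(data[i:i + 4096])) for i in range(0, len(data), 4096)
)

# Each independent stream restarts its dictionary and pays its own header
# overhead, so the chunked total comes out strictly larger.
assert one_big_block < four_k_blocks
```

The same trade-off is why ZFS recordsize tuning (128 KB records plus lz4-style compression) tends to pay off for scan-heavy workloads, at some cost to small random reads.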
On 7/7/2015 06:52, Graeme B. Bell wrote:
Hi Karl,
Great post, thanks.
Though I don't think it's against conventional wisdom to aggregate writes
into larger blocks rather than rely on 4k performance on ssds :-)
128kb blocks + compression certainly makes sense. But it might make less
Thanks for the Info.
So if RAID controllers are not an option, what should one use to build
big databases? LVM with XFS? Btrfs? ZFS?
Tigran.
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Steve Crawford scrawf...@pinpointresearch.com
Cc: Wes Vaske (wvaske)
I am unsure about the performance side, but ZFS is generally very attractive to
me.
Key advantages:
1) Checksumming and automatic fixing-of-broken-things on every file (not just
postgres pages, but your scripts, O/S, program files).
2) Built-in lightweight compression (doesn't help with
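Point 1, checksumming plus automatic repair, can be sketched in miniature (hashlib stands in for ZFS's on-disk checksums, and the two "copies" play the role of mirrored/redundant vdevs; none of this is ZFS's actual format):

```python
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

# A toy redundant block store: every block is kept twice, with a checksum.
block = b"some postgres page, script, or program file"
store = {"copy0": block, "copy1": block, "sum": checksum(block)}

# Silent corruption (bit rot) hits one copy.
store["copy0"] = b"bit-rotted garbage"

def read_block(store):
    """Return verified data, repairing any copy whose checksum fails."""
    for name in ("copy0", "copy1"):
        if checksum(store[name]) == store["sum"]:
            good = store[name]
            break
    else:
        raise IOError("all copies corrupt")
    # Self-healing: rewrite any copy that failed verification.
    for name in ("copy0", "copy1"):
        if checksum(store[name]) != store["sum"]:
            store[name] = good
    return good

data = read_block(store)  # correct data back, and the bad copy repaired
```

A plain RAID mirror without checksums cannot do this: it can tell the copies differ, but not which one is right.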
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
Cc: Graeme B. Bell graeme.b...@nibio.no, Steve Crawford
scrawf...@pinpointresearch.com, Wes Vaske (wvaske)
wva...@micron.com, pgsql-performance
Completely agree with Steve.
1. Intel NVMe looks like the best bet if you have modern enough hardware for
NVMe. Otherwise e.g. S3700 mentioned elsewhere.
2. RAID controllers.
We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.
This might give people an idea about
On 7/7/2015 05:56, Mkrtchyan, Tigran wrote:
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
Cc: Graeme B. Bell graeme.b...@nibio.no, Steve Crawford
scrawf...@pinpointresearch.com, Wes Vaske (wvaske)
wva...@micron.com,
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts in
postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem I've seen in other postgres projects involving
parallelisation in the last 12 months.
Basically:
- I
On Thu, Jul 2, 2015 at 9:15 AM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts
in postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem I've seen in other postgres
The M500/M550/M600 are consumer class drives that don't have power protection
for all inflight data.* (like the Samsung 8x0 series and the Intel 3x0 5x0
series).
The M500DC has full power protection for inflight data and is an
enterprise-class drive (like the Samsung 845DC or Intel S3500
Thanks, this is very useful to know about the 730. When you say 'tested it with
plug-pulls', you were using diskchecker.pl, right?
Graeme.
On 07 Jul 2015, at 14:39, Karl Denninger k...@denninger.net wrote:
Incidentally while there are people who have questioned the 730 series power
loss
On Thu, Jul 2, 2015 at 1:00 PM, Wes Vaske (wvaske) wva...@micron.com
wrote:
Storage Review has a pretty good process and reviewed the M500DC when it
released last year.
http://www.storagereview.com/micron_m500dc_enterprise_ssd_review
The only database-specific info we have available are
As I have warned elsewhere,
The M500/M550 from $SOME_COMPANY is NOT SUITABLE for postgres unless you have a
RAID controller with BBU to protect yourself.
The M500/M550 are NOT plug-pull safe despite the 'power loss protection'
claimed on the packaging. Not all fsync'd data is preserved in the
On 07/07/2015 08:05 PM, Craig James wrote:
No ideas, but I ran into the same thing. I have a set of C/C++ functions
that put some chemistry calculations into Postgres as extensions (things
like, calculate the molecular weight of this molecule). As SQL
functions, the whole thing bogged down,
On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:
The M500/M550/M600 are consumer class drives that don't have power
protection for all inflight data.* (like the Samsung 8x0 series and
the Intel 3x0 5x0 series).
The M500DC has full power protection for inflight data and is an
enterprise-class