40k IOPS sounds like best-case, you'll-never-see-it-in-the-real-world marketing to me. There are a few benchmarks if you Google, and they all seem to indicate the performance is probably +/- 10% of an Intel X25-E. I would personally trust Intel over one of these drives.
Don wrote:
With that in mind- Is anyone using the new OCZ Vertex 2 SSD's as a ZIL?
They're claiming 50k IOPS (4k Write- Aligned), 2 million hour MTBF, TRIM
support, etc. That's more write IOPS than the ZEUS (40k IOPS, $) but at
half the price of an Intel X25-E (3.3k IOPS, $400).
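For reference, hooking a drive like that up as a dedicated log device is a one-liner; "tank" and the cXtYd0 names below are placeholders, not anything from this thread:

  zpool add tank log c2t0d0
  # or mirrored, so losing a single slog doesn't cost you recent sync writes:
  zpool add tank log mirror c2t0d0 c3t0d0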
On Tue, May 18, 2010 at 4:28 PM, Don d...@blacksun.org wrote:
With that in mind- Is anyone using the new OCZ Vertex 2 SSD's as a ZIL?
The current SandForce drives don't have an ultra-capacitor on
them, so they could lose data if the system crashed. There are
supposed to be enterprise-class
On 2010-05-19 08.32, sensille wrote:
Don wrote:
With that in mind- Is anyone using the new OCZ Vertex 2 SSD's as a ZIL?
They're claiming 50k IOPS (4k Write- Aligned), 2 million hour MTBF, TRIM
support, etc. That's more write IOPS than the ZEUS (40k IOPS, $) but at
half the price of an
Willard Korfhage wrote:
This afternoon, messages like the following started appearing in
/var/adm/messages:
May 18 13:46:37 fs8 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,2...@1/pci15d9,a...@0 (mpt0):
May 18 13:46:37 fs8 Log info 0x3108 received for target 5.
May 18 13:46:37 fs8
mm.. Service times of sd3..sd5 are way too high to be good working disks. 21 writes shouldn't take 1.3 seconds.
Some of your disks are not feeling well, possibly doing block reallocation like mad all the time, or block recovery of some form. Service times should be closer to what sd1 and
How full is your filesystem? Give us the output of
zfs list
You might be having a hardware problem, or maybe it's extremely full.
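For anyone following along, the numbers being discussed come from something like the following, run while the workload is active (device and dataset names will obviously differ):

  iostat -x 5     # per-disk service times, sdN instance names as quoted above
  iostat -xn 5    # same data with cXtYdZ device names
  zfs list        # per-filesystem usage, as requested
  zpool list      # overall pool capacity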
Hi Edward,
The _db filesystems have a recordsize of 16K (the others have the default 128K):
NAME USED AVAIL REFER MOUNTPOINT
On 05/19/10 09:34 PM, Philippe wrote:
Hi!
It is strange because I've checked the SMART data of the 4 disks, and everything seems really OK! (on another hardware/controller, because I needed Windows to check it). Maybe it's a problem with the SAS/SATA controller?!
One question: if I halt
it looks like your 'sd5' disk is performing horribly, and except for the horrible performance of 'sd5' (which bottlenecks the I/O), 'sd4' would look just as bad. Regardless, the first step would be to investigate 'sd5'.
Hi Bob !
I've already tried the pool without the sd5 disk (so
If I create a file in a file system, then snapshot the file system, then delete the file:
Is it guaranteed that while the snapshot exists no new file will be created
with the same inode number as the deleted file?
--chris
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
If I create a file in a file system and then snapshot the file system.
Then delete the file.
Is it guaranteed that while the snapshot exists no new file will be
created with the same inode number as the deleted file?
I
I am currently doing research on how much memory ZFS should have for a storage
server.
I came across this blog
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
It recommends that for every TB of storage you have you want 1GB of RAM just
My work has bought a bunch of IBM servers recently as ESX hosts. They all come
with LSI SAS1068E controllers as standard, which we remove and upgrade to a
RAID 5 controller.
So I had a bunch of them lying around. We've bought a 16x SAS hotswap case and
I've put in an AMD X4 955 BE with an ASUS
Well, 40k IOPS is the current claim from ZEUS, and they're the benchmark. They
used to be 17k IOPS. How real any of these numbers are from any manufacturer is
a guess.
Given Intel's refusal to honor a cache flush, and their performance
problems with the cache disabled, I don't trust them
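In case anyone wants to reproduce the cache-disabled setup being referred to: the usual route on Solaris is format's expert mode, roughly as below. Treat this as a sketch; the exact menu entries depend on the drive and driver.

  format -e
  (select the SSD from the disk list)
  format> cache
  cache> write_cache
  write_cache> disable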
The reason for wanting to know is to try and find versions of a file.
If a file is renamed then the only way to know that the renamed file was the
same as a file in a snapshot would be if the inode numbers matched. However for
that to be reliable it would require the i-nodes are not reused.
As for the Vertex drives: if they are within +/- 10% of the Intel they're still
doing it for half of what the Intel drive costs, so it's an option. Not a great
option, but still an option.
Yes, but Intel is SLC. Much more endurance.
On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote:
The whole point of a log device is to accelerate sync writes, by providing
nonvolatile storage which is faster than the primary storage. You're not
going to get this if any part of the log device is at the other side of a
WAN. So either
http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/
Review: SuperMicro’s SC847 (SC847A) 4U chassis with 36 drive bays
[Or my quest for
On Wed, May 19, 2010 02:09, thomas wrote:
Is it even possible to buy a Zeus IOPS anywhere? I haven't been able to
find one. I get the impression they mostly sell to other vendors like Sun?
I'd be curious what the price on a 9GB Zeus IOPS is these days.
Correct, their Zeus products are only
On Tue, 18 May 2010, Edward Ned Harvey wrote:
Either I'm crazy, or I completely miss what you're asking. You want to have
one side of a mirror attached locally, and the other side of the mirror
attached ... via iscsi or something ... across the WAN? Even if you have a
really fast WAN (1Gb or
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John Hoogerdijk
I'm building a campus cluster with identical storage in two locations with ZFS mirrors spanning both storage frames. Data will be mirrored using zfs. I'm looking
On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote:
The whole point of a log device is to accelerate sync writes, by providing nonvolatile storage which is faster than the primary storage. You're not going to get this if any part of the log device is at the other side of a WAN. So
comment below...
On May 19, 2010, at 7:50 AM, John Hoogerdijk wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John Hoogerdijk
I'm building a campus cluster with identical storage in two locations with ZFS mirrors spanning both
On Wed, 19 May 2010, Deon Cui wrote:
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
It recommends that for every TB of storage you have you want 1GB of
RAM just for the metadata.
Interesting conclusion.
Is this really the case that
Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now. If I try and mount vol2 it works, but if
I try and mount -a or mount vol2/vm2 it instantly kernel panics and
reboots. Is it possible to recover from this? I don't care if I lose
the file listed
Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010, John Andrunas wrote:
Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now. If I try and mount vol2 it works, but if
I try and mount -a or mount vol2/vm2 it instantly
Not to my knowledge, how would I go about getting one? (CC'ing discuss)
On Wed, May 19, 2010 at 8:46 AM, Mark J Musante mark.musa...@oracle.com wrote:
Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010, John Andrunas wrote:
Running ZFS on a Nexenta box, I had a
On 19.05.10 17:53, John Andrunas wrote:
Not to my knowledge, how would I go about getting one? (CC'ing discuss)
man savecore and dumpadm.
Michael
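For the archives, a minimal sketch of that setup, assuming a stock rpool layout (see dumpadm(1M) and savecore(1M) for the details):

  dumpadm                      # show the current crash-dump configuration
  dumpadm -d /dev/zvol/dsk/rpool/dump -s /var/crash/`hostname` -y
                               # dump to the rpool dump zvol, run savecore at boot
  savecore                     # after the next panic and reboot, writes unix.N / vmcore.N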
On Wed, May 19, 2010 at 8:46 AM, Mark J Musante mark.musa...@oracle.com wrote:
Do you have a coredump? Or a stack trace of the panic?
On
On Wed, May 19, 2010 at 05:33:05AM -0700, Chris Gerhard wrote:
The reason for wanting to know is to try and find versions of a file.
No, there's no such guarantee. The same inode and generation number
pair is extremely unlikely to be re-used, but the inode number itself is
likely to be re-used.
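A rough illustration of the difference (the pool, filesystem, and snapshot names here are made up):

  ls -i /tank/home/foo.c
  ls -i /tank/home/.zfs/snapshot/monday/foo.c

Matching inode numbers only tell you the two names point at the same object slot; once the file is deleted that slot can be handed to an unrelated new file, which is why the generation number has to match as well. Something like 'zdb -dddd tank/home <object#>' should show the object's gen if you want to check the pair, though the exact output varies by build.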
On Wed, May 19, 2010 at 07:50:13AM -0700, John Hoogerdijk wrote:
Think about the potential problems if I don't mirror the log devices
across the WAN.
If you don't mirror the log devices then your disaster recovery
semantics will be that you'll miss any transactions that hadn't been
committed to
- Deon Cui deon@gmail.com wrote:
I am currently doing research on how much memory ZFS should have for a
storage server.
I came across this blog
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
It recommends that for every
Bob Friesenhahn wrote:
On Wed, 19 May 2010, Deon Cui wrote:
http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
It recommends that for every TB of storage you have you want 1GB of
RAM just for the metadata.
Interesting conclusion.
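To put the rule of thumb in concrete terms: a 16 TB pool would call for roughly 16 GB of RAM just to keep the metadata cached, before counting anything you want the ARC to hold of the data itself.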
Hmmm... no coredump, even though I configured it.
Here is the trace, though; I will see what I can do about the coredump.
r...@cluster:/export/home/admin# zfs mount vol2/vm2
panic[cpu3]/thread=ff001f45ec60: BAD TRAP: type=e (#pf Page fault)
rp=ff001f45e950 addr=30 occurred in module zfs
et == Erik Trimble erik.trim...@oracle.com writes:
et frequently-accessed files from multiple VMs are in fact
et identical, and thus with dedup, you'd only need to store one
et copy in the cache.
although counterintuitive I thought this wasn't part of the initial
release. Maybe I'm
Miles Nordin wrote:
et == Erik Trimble erik.trim...@oracle.com writes:
et frequently-accessed files from multiple VMs are in fact
et identical, and thus with dedup, you'd only need to store one
et copy in the cache.
although counterintuitive I thought this wasn't part
OK, I got a core dump, what do I do with it now?
It is 1.2G in size.
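If you want a first look yourself before (or instead of) uploading it, something like this usually works, assuming savecore left unix.0/vmcore.0 in the default crash directory:

  cd /var/crash/`hostname`
  mdb -k unix.0 vmcore.0
  > ::status     # panic string and dump summary
  > ::stack      # stack of the panicking thread
  > ::msgbuf     # console messages leading up to the panic
  > $q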
On Wed, May 19, 2010 at 10:54 AM, John Andrunas j...@andrunas.net wrote:
Hmmm... no coredump even though I configured it.
Here is the trace though I will see what I can do about the coredump
Hello and good day,
I will have two OpenSolaris snv_134 storage servers, both connected to a
SAS chassis with SAS disks used to store zpool data. One storage server
will be the active storage server and the other will be the passive
failover storage server. Both servers will be able to access the
Well, the larger size of the Vertex, coupled with their smaller claimed write
amplification, should result in sufficient service life for my needs. Their
claimed MTBF also matches the Intel X25-E's.
Since it ignores the Cache Flush command and it doesn't have any persistent buffer
storage, disabling the write cache is the best you can do.
This actually brings up another question I had: What is the risk, beyond a few
seconds of lost writes, if I lose power, there is no capacitor and the cache
On May 19, 2010, at 2:29 PM, Don wrote:
Since it ignores Cache Flush command and it doesn't have any persistant
buffer storage, disabling the write cache is the best you can do.
This actually brings up another question I had: What is the risk, beyond a
few seconds of lost writes, if I
On Wed, May 19, 2010 at 02:29:24PM -0700, Don wrote:
Since it ignores Cache Flush command and it doesn't have any
persistant buffer storage, disabling the write cache is the best you
can do.
This actually brings up another question I had: What is the risk,
beyond a few seconds of lost
You can lose all writes from the last committed transaction (i.e., the
one before the currently open transaction).
And I don't think that bothers me. As long as the array itself doesn't go belly
up, a few seconds of lost transactions are largely irrelevant; all of the
QA virtual machines
First, I suggest you open a bug at https://defect.opensolaris.org/bz
and get a bug number.
Then, name your core dump something like bug.bugnumber and upload it
using the instructions here:
http://supportfiles.sun.com/upload
Update the bug once you've uploaded the core and supply the
You can lose all writes from the last committed transaction (i.e., the
one before the currently open transaction).
I'll pick one: performance :)
Honestly, I wish I had a better grasp on the real-world performance of these
drives. 50k IOPS is nice, and considering the incredible likelihood of
A recent post on StorageMojo has some interesting numbers on how
vibrations can affect disks, especially consumer drives:
http://storagemojo.com/2010/05/19/shock-vibe-and-awe/
He mentions a 2005 study that I wasn't aware of. In its conclusion it
states:
Based on the results of
I'm not having any luck hotswapping a drive attached to my Intel SASUC8I
(LSI-based) controller. The commands which work for the AMD AHCI ports don't
work for the LSI. Here's what cfgadm -a reports with all drives installed and
operational:
Ap_Id Type
Deon Cui deon.cui at gmail.com writes:
So I had a bunch of them lying around. We've bought a 16x SAS hotswap
case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
the mobo.
In the two 16x PCI-E slots I've put in the 1068E controllers I had
lying around. Everything is still