2012, at 03:06, Ragnar Sundblad wrote:
On 1 feb 2012, at 02:43, Edmund White wrote:
You will definitely want to have a Smart Array card (p411 or p811) on hand
to update the firmware on the enclosure. Make sure you're on firmware
version 0131. You may also want to update the disk firmware
I guess many of you on this list are using zfs send and receive to move
data from one machine to another, or between pools on the same machine,
for redundancy or for other purposes, and perhaps over ssh or other
channels.
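For anyone new to this, the usual pattern is a snapshot followed by a send piped over ssh; a minimal sketch (the pool, dataset, snapshot and host names here are made up for illustration):

```shell
# Snapshot the source dataset, then stream it to a pool on another host.
# "tank/data", "backup/data" and "otherhost" are hypothetical names.
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh otherhost zfs receive backup/data

# Later, send only the changes between two snapshots (incremental send):
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh otherhost zfs receive backup/data
```

The incremental form is what makes this usable for recurring replication, since only the blocks changed between the two snapshots cross the wire.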
Is there any standard way of doing this that people use, or has everyone
Just to follow up on this, in case there are others interested:
The D2700s seem to work quite OK for us. We have four issues with them,
all of which we will ignore for now:
- They hang when I insert an Intel SATA (!) SSD (I wanted to test it
both as a log device and as a cache device, and I had
On 1 feb 2012, at 02:38, Hung-Sheng Tsao (laoTsao) wrote:
what is the server you attach to D2700?
They are different Sun/Oracle X4NN0s, x86-64 boxes.
The HP spec for the D2700 did not include Solaris, so I'm not sure how you get
support from HP :-(
We don't. :-(
/ragge
On 1 feb 2012, at 02:43, Edmund White wrote:
You will definitely want to have a Smart Array card (p411 or p811) on hand
to update the firmware on the enclosure. Make sure you're on firmware
version 0131. You may also want to update the disk firmware at the same
time.
I have multipath and
Hello Rocky!
On 1 feb 2012, at 03:07, Rocky Shek wrote:
Ragnar,
Which Intel SSD do you use? We use 320 and 710. We have bad experience with
510 in the past
I tried with an Intel X25-M 160 and 80 GB and an X25-E 64 GB (only because that
was what I had in my drawer). I am not sure which one of
Hello James!
On 1 feb 2012, at 02:43, James C. McPherson wrote:
The supported way to enable MPxIO is to run
# /usr/sbin/stmsboot -e
You shouldn't need to do this for mpt_sas HBAs such as
your 9205 controllers; we enable MPxIO by default on them.
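For controllers where MPxIO is not on by default, the sequence James describes looks roughly like this (sketch only; the exact device listing depends on your HBAs):

```shell
# Enable MPxIO (STMS) multipathing; stmsboot updates the config
# and offers to reboot, which is required for the change to take.
/usr/sbin/stmsboot -e

# After the reboot, list the mapping from non-STMS to STMS device names:
/usr/sbin/stmsboot -L

# mpathadm shows the multipathed logical units and their path counts:
mpathadm list lu
```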
If you _do_ edit scsi_vhci.conf, you
I am sorry if these are dumb questions. If there are explanations
available somewhere for those questions that I just haven't found, please
let me know! :-)
1. It has been said that when the DDT entries, some 376 bytes or so, are
rolled out on L2ARC, there still are some 170 bytes in the ARC to
Thanks for your answers!
On 2 dec 2011, at 02:54, Erik Trimble wrote:
On 12/1/2011 4:59 PM, Ragnar Sundblad wrote:
I am sorry if these are dumb questions. If there are explanations
available somewhere for those questions that I just haven't found, please
let me know! :-)
1. It has been
based), which
are 3 Gb/s. Would those (probably) work OK even if we should consider
switching to 6 Gb/s HBAs?
What 6 Gb/s HBA is currently recommended (LSI 920[05]s?).
Thanks for any advice and/or thoughts!
Ragnar Sundblad
Royal Institute of Technology
Stockholm, Sweden
On 30 nov 2011, at 14:40, Edmund White wrote:
Absolutely.
I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
running NexentaStor.
On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
internal disks (boot, a handful of large disks, Pliant SSDs for
On 8 jul 2010, at 17.23, Garrett D'Amore wrote:
You want the write cache enabled, for sure, with ZFS. ZFS will do the
right thing about ensuring write cache is flushed when needed.
That is not for sure at all; it all depends on what the right thing
is, which depends on the application and/or
On 12 apr 2010, at 22.32, Carson Gaspar wrote:
Carson Gaspar wrote:
Miles Nordin wrote:
re == Richard Elling richard.ell...@gmail.com writes:
How do you handle the case when a hotplug SATA drive is powered off
unexpectedly with data in its write cache? Do you replay the writes, or do
On 30 jun 2010, at 22.46, Garrett D'Amore wrote:
On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote:
To be safe, the protocol needs to be able to discover that the devices
(host or disk) have been disconnected and reconnected or have been reset,
and that either part's assumptions about
On 17 jun 2010, at 18.17, Richard Jahnel wrote:
The EX specs page does list the supercap
The pro specs page does not.
They do for both on the Specifications tab on the web page:
On 30 maj 2010, at 01.53, morris hooten wrote:
I have 6 ZFS pools, and after rebooting (init 6) the vpath device path names
have changed for some unknown reason. But I can't detach, remove and reattach
to the new device names. ANY HELP, please!
pjde43m01 - - - -
On 24 maj 2010, at 02.44, Erik Trimble wrote:
On 5/23/2010 5:00 PM, Andreas Iannou wrote:
Is it safe or possible to do a zpool replace for multiple drives at once? I
think I have one of the troublesome WD Green drives, as replacing it has
taken 39 hrs and only resilvered 58 GB; I have another
On 24 maj 2010, at 10.26, Brandon High wrote:
On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad ra...@csc.kth.se wrote:
Is that really true if you use the zpool replace command with both
the old and the new drive online?
Yes.
(Don't you mean no then? :-)
zpool replace [-f] pool
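With both the old and the new drive online, each replace is a one-liner, and several can be issued back to back (pool and device names below are hypothetical):

```shell
# Replace old device c1t2d0 with new device c2t2d0 in pool "tank".
# With the old disk still online, ZFS can resilver from it directly
# rather than reconstructing everything from redundancy.
zpool replace tank c1t2d0 c2t2d0

# Watch resilver progress:
zpool status tank
```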
On 22 maj 2010, at 07.40, Don wrote:
The SATA power connector supplies 3.3, 5 and 12v. A complete
solution will have all three. Most drives use just the 5v, so you can
probably ignore 3.3v and 12v.
I'm not interested in building something that's going to work for every
possible drive
On 20 maj 2010, at 00.20, Don wrote:
You can lose all writes from the last committed transaction (i.e., the
one before the currently open transaction).
And I don't think that bothers me. As long as the array itself doesn't go
belly up- then a few seconds of lost transactions are largely
On 20 maj 2010, at 20.35, David Magda wrote:
On Thu, May 20, 2010 14:12, Travis Tabbal wrote:
On May 19, 2010, at 2:29 PM, Don wrote:
The data risk is a few moments of data loss. However, if the order of the
uberblock updates is not preserved (which is why the caches are flushed) then
On 21 maj 2010, at 00.53, Ross Walker wrote:
On May 20, 2010, at 6:25 PM, Travis Tabbal tra...@tabbal.net wrote:
use a slog at all if it's not durable? You should disable the ZIL instead.
This is basically where I was going. There only seems to be one SSD that is
considered
On 2010-05-19 08.32, sensille wrote:
Don wrote:
With that in mind, is anyone using the new OCZ Vertex 2 SSDs as a ZIL?
They're claiming 50k IOPS (4K write-aligned), 2 million hour MTBF, TRIM
support, etc. That's more write IOPS than the ZEUS (40k IOPS, $) but at
half the price of an
On 12 maj 2010, at 22.39, Miles Nordin wrote:
bh == Brandon High bh...@freaks.com writes:
bh If you boot from usb and move your rpool from one port to
bh another, you can't boot. If you plug your boot sata drive into
bh a different port on the motherboard, you can't
bh boot.
On 10 maj 2010, at 20.04, Miles Nordin wrote:
bh == Brandon High bh...@freaks.com writes:
bh The drive should be on the same USB port because the device
bh path is saved in the zpool.cache. If you removed the
bh zpool.cache, it wouldn't matter where the drive was plugged
bh
On 12 maj 2010, at 05.31, Brandon High wrote:
On Tue, May 11, 2010 at 8:17 PM, Richard Elling
richard.ell...@gmail.com wrote:
boot single user and mv it (just like we've done for fstab/vfstab for
the past 30+ years :-)
It would be nice to have a grub menu item that ignores the cache, so
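The single-user workaround amounts to moving the cache file aside so the next boot re-enumerates devices instead of trusting stale paths; a sketch (the cache path is the standard Solaris location):

```shell
# Boot into single-user mode first (e.g. "boot -s" at the boot prompt),
# then move the stale cache out of the way and reboot:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
reboot
```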
On 6 maj 2010, at 08.17, Pasi Kärkkäinen wrote:
On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
if you can disable ZIL and compare the performance to
On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Look up the inode number of README. (for example, ls -i README)
(suppose it’s inode 12345)
find
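The truncated command above is a find(1) by inode number; a self-contained sketch using a scratch directory (all paths here are made up, and the inode value is whatever the filesystem assigns):

```shell
# Create a scratch tree, look up the inode of README, then find every
# name (hard link) in the tree that shares that inode.
dir=$(mktemp -d)
mkdir -p "$dir/docs"
touch "$dir/docs/README"
ln "$dir/docs/README" "$dir/README.link"   # a second hard link, same inode
inum=$(ls -i "$dir/docs/README" | awk '{print $1}')
find "$dir" -inum "$inum"                  # prints both names
```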
On 25 apr 2010, at 20.12, Richard Elling wrote:
On Apr 25, 2010, at 5:45 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, April 24, 2010 7:42 PM
Next,
mv /a/e /a/E
ls -l a/e/.snapshot/snaptime
ENOENT?
ls -l
On 24 apr 2010, at 16.43, Richard Elling wrote:
I do not recall reaching that conclusion. I think the definition of the
problem
is what you continue to miss.
Me too then, I think. Can you please enlighten us about the
definition of the problem?
The .snapshot directories do precisely what
On 18 apr 2010, at 06.43, Richard Elling wrote:
On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Vrona
1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be
On 17 apr 2010, at 20.51, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave Vrona
1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be
mirrored ?
...
Personally, I recommend the latest build
On 18 apr 2010, at 00.52, Dave Vrona wrote:
Ok, so originally I presented the X-25E as a reasonable approach. After
reading the follow-ups, I'm second guessing my statement.
Any decent alternatives at a reasonable price?
How much is reasonable? :-)
I guess there are STEC drives that
On 16 apr 2010, at 17.05, Bob Friesenhahn wrote:
On Fri, 16 Apr 2010, Kyle McDonald wrote:
But doesn't the TRIM command help here? If, as the OS goes along, it marks
sectors as unused, then the SSD will have a lighter-weight lift, only
needing to read, for example, 1 out of 8 (assuming sectors
On 12 apr 2010, at 19.10, Kyle McDonald wrote:
On 4/12/2010 9:10 AM, Willard Korfhage wrote:
I upgraded to the latest firmware. When I rebooted the machine, the pool was
back, with no errors. I was surprised.
I will work with it more, and see if it stays good. I've done a scrub, so
now
On 9 apr 2010, at 10.58, Andreas Höschler wrote:
Hi all,
I need to replace a disk in a zfs pool on a production server (X4240 running
Solaris 10) today and won't have access to my documentation there. That's why
I would like to have a good plan on paper before driving to that location.
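A paper plan for a hot-swap bay might look roughly like the following; the controller/target numbers and attachment points are of course examples, not the real ones on that X4240:

```shell
# 1. Identify the failing disk and note the pool state:
zpool status tank

# 2. Take the disk offline so ZFS stops using it:
zpool offline tank c1t5d0

# 3. Unconfigure the drive so it can be pulled safely, swap it,
#    then configure the replacement (attachment point is an example):
cfgadm -c unconfigure c1::dsk/c1t5d0
# ...physically replace the drive...
cfgadm -c configure c1::dsk/c1t5d0

# 4. Tell ZFS to resilver onto the new disk in the same slot:
zpool replace tank c1t5d0
zpool status tank    # watch the resilver complete before leaving
```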
On 9 apr 2010, at 12.04, Andreas Höschler wrote:
Hi Ragnar,
I need to replace a disk in a zfs pool on a production server (X4240
running Solaris 10) today and won't have access to my documentation there.
That's why I would like to have a good plan on paper before driving to that
On 9 apr 2010, at 14.17, Edward Ned Harvey wrote:
...
I recently went through an exercise very similar to this on an x4275. I
also tried to configure the HBA via the ILOM but couldn't find any way to do
it.
...
Oh no, this is a BIOS system. The card is an autonomous entity
that lives a life
On 12 mar 2010, at 03.58, Damon Atkins wrote:
...
Unfortunately DNS spoofing exists, which means forward lookups can be poisoned.
And IP address spoofing, and...
The best (maybe only) way to make NFS secure is NFSv4 and Kerb5 used together.
Amen!
DNS is NOT an authentication system!
IP is NOT
On 8 apr 2010, at 23.21, Miles Nordin wrote:
rs == Ragnar Sundblad ra...@csc.kth.se writes:
rs use IPSEC to make IP address spoofing harder.
IPsec with channel binding is win, but not until SA's are offloaded to
the NIC and all NIC's can do IPsec AES at line rate. Until this
happens
On 7 apr 2010, at 14.28, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeroen Roodhart
If you're running solaris proper, you better mirror
your
ZIL log device.
...
I plan to get to test this as well, won't
On 7 apr 2010, at 18.13, Edward Ned Harvey wrote:
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... =19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
If you have a system crash, *and* a failed
On 5 apr 2010, at 04.35, Edward Ned Harvey wrote:
When running the card in copyback write cache mode, I got horrible
performance (with zfs), much worse than with copyback disabled
(which I believe should mean it does write-through), when tested
with filebench.
When I benchmark my disks, I
On 4 apr 2010, at 06.01, Richard Elling wrote:
Thank you for your reply! Just wanted to make sure.
Do not assume that power outages are the only cause of unclean shutdowns.
-- richard
Thanks, I have seen that mistake several times with other
(file)systems, and hope I'll never ever make it
On 1 apr 2010, at 06.15, Stuart Anderson wrote:
Assuming you are also using a PCI LSI HBA from Sun that is managed with
a utility called /opt/StorMan/arcconf and reports itself as the amazingly
informative model number Sun STK RAID INT, what worked for me was to run
arcconf delete (to delete
On 2 apr 2010, at 22.47, Neil Perrin wrote:
Suppose there is an application which sometimes does sync writes, and
sometimes async writes. In fact, to make it easier, suppose two processes
open two files, one of which always writes asynchronously, and one of which
always writes
Hello,
Maybe this question should be put on another list, but since there
are a lot of people here using all kinds of HBAs, this could be right
anyway;
I have a X4150 running snv_134. It was shipped with a STK RAID INT
adaptec/intel/storagetek/sun SAS HBA.
When running the card in copyback
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
unsafe (i.e. if my iSCSI client assumes all writes are synchronised to
nonvolatile storage,
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by
On 19 feb 2010, at 23.40, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus
On 19 feb 2010, at 23.20, Ross Walker wrote:
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our
On 19 feb 2010, at 23.22, Phil Harman wrote:
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
On 20 feb 2010, at 02.34, Rob Logan wrote:
A UPS plus disabling the ZIL, or disabling synchronization, could possibly
achieve the same result (or maybe better) IOPS-wise.
Even with the fastest slog, disabling the ZIL will always be faster...
(fewer bytes to move)
This would probably work given
On 15 feb 2010, at 23.33, Bob Beverage wrote:
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff
beimh...@hotmail.com wrote:
I've seen exactly the same thing. Basically, terrible transfer rates
with Windows and the server sitting there completely idle.
I am also seeing this behaviour.
On 28 jan 2010, at 12.11, Björn JACKE wrote:
On 2010-01-28 at 00:30 +0100 Ragnar Sundblad sent off:
Are there any plans to add unwritten extent support into ZFS or any reason
why
not?
I have no idea, but just out of curiosity - when do you want that?
when you have many data streams
On 27 jan 2010, at 10.44, Björn JACKE wrote:
On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off:
You are missing the point. Compression and dedup will make it so that
the blocks in the devices are not overwritten with zeroes. The goal
is to overwrite the blocks so that a back-end storage
On 19 jan 2010, at 20.11, Ian Collins wrote:
Julian Regel wrote:
Based on what I've seen in other comments, you might be right.
Unfortunately, I don't feel comfortable backing up ZFS filesystems because
the tools aren't there to do it (built into the operating system or using
On 20 jan 2010, at 17.22, Julian Regel wrote:
It is actually not that easy.
Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare
+ 2x OS disks.
The four raidz2 group form a single pool. This
On 21 jan 2010, at 00.20, Al Hopper wrote:
I remember for about 5 years ago (before LT0-4 days) that streaming
tape drives would go to great lengths to ensure that the drive kept
streaming - because it took so much time to stop, backup and stream
again. And one way the drive firmware
Eric D. Midama did a very good job answering this, and I don't have
much to add. Thanks Eric!
On 3 jan 2010, at 07.24, Erik Trimble wrote:
I think you're confusing erasing with writing.
I am now quite certain that it actually was you who were
confusing those. I hope this discussion has
On 1 jan 2010, at 17.44, Richard Elling wrote:
On Dec 31, 2009, at 12:59 PM, Ragnar Sundblad wrote:
Flash SSDs actually always remap new writes into an
only-append-to-new-pages style, pretty much as ZFS does itself.
So for an SSD there is no big difference between ZFS and
filesystems such as UFS
On 1 jan 2010, at 17.28, David Magda wrote:
On Jan 1, 2010, at 11:04, Ragnar Sundblad wrote:
But that would only move the hardware specific and dependent flash
chip handling code into the file system code, wouldn't it? What
is won with that? As long as the flash chips have larger pages
On 2 jan 2010, at 12.43, Joerg Schilling wrote:
Ragnar Sundblad ra...@csc.kth.se wrote:
I certainly agree, but there still isn't much they can do about
the WORM-like properties of flash chips, where reading is pretty
fast, writing is not too bad, but erasing is very slow and must be
done
On 2 jan 2010, at 13.10, Erik Trimble wrote:
Joerg Schilling wrote:
Ragnar Sundblad ra...@csc.kth.se wrote:
On 1 jan 2010, at 17.28, David Magda wrote:
Don't really see how things are either hardware specific or dependent.
The inner workings of a SSD flash drive
On 2 jan 2010, at 22.49, Erik Trimble wrote:
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD as to
which blocks are deleted/erased, so the SSD's internal free list can be
updated
On 3 jan 2010, at 04.19, Erik Trimble wrote:
Ragnar Sundblad wrote:
On 2 jan 2010, at 22.49, Erik Trimble wrote:
Ragnar Sundblad wrote:
On 2 jan 2010, at 13.10, Erik Trimble wrote
Joerg Schilling wrote:
the TRIM command is what is intended for an OS to notify the SSD
On 3 jan 2010, at 06.07, Ragnar Sundblad wrote:
(I don't think they typically merge pages; I believe they rather
just pick pages with some freed blocks, copy the active blocks
to the end of the disk, and erase the page.)
(And of course you implement wear leveling with the same
mechanism
On 31 dec 2009, at 22.53, David Magda wrote:
On Dec 31, 2009, at 13:44, Joerg Schilling wrote:
ZFS is COW, but does the SSD know which block is in use and which is not?
If the SSD did know whether a block is in use, it could erase unused blocks
in advance. But what is an unused block on
On 1 jan 2010, at 14.14, David Magda wrote:
On Jan 1, 2010, at 04:33, Ragnar Sundblad wrote:
I see the possible win that you could always use all the working
blocks on the disk, and when blocks goes bad your disk will shrink.
I am not sure that is really what people expect, though. Apart
On 31 dec 2009, at 06.01, Richard Elling wrote:
On Dec 30, 2009, at 2:24 PM, Ragnar Sundblad wrote:
On 30 dec 2009, at 22.45, Richard Elling wrote:
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, if it's worth it or not. I guess
On 31 dec 2009, at 00.31, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Mike Gerdts wrote:
Should the block size be a tunable, so that it can match the page size of
SSDs (typically 4K, right?) and upcoming hard disks that sport a sector size
larger than 512 bytes?
Enterprise SSDs are still in their infancy. The
On 31 dec 2009, at 17.18, Bob Friesenhahn wrote:
On Thu, 31 Dec 2009, Ragnar Sundblad wrote:
Also, currently, when the SSDs for some very strange reason are
constructed from flash chips designed for firmware and slowly
changing configuration data and can only erase in very large chunks
On 31 dec 2009, at 19.26, Richard Elling wrote:
[I TRIMmed the thread a bit ;-)]
On Dec 31, 2009, at 1:43 AM, Ragnar Sundblad wrote:
On 31 dec 2009, at 06.01, Richard Elling wrote:
In a world with copy-on-write and without snapshots, it is obvious that
there will be a lot of blocks
On 30 dec 2009, at 22.45, Richard Elling wrote:
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, if it's worth it or not. I guess the
question is always who are the targets for ZFS (I assume everyone, though in
reality priorities have to be set
On 7 dec 2009, at 18.40, Bob Friesenhahn wrote:
On Mon, 7 Dec 2009, Richard Bruce wrote:
I started copying over all the data from my existing workstation. When
copying files (mostly multi-gigabyte DV video files), network throughput
drops to zero for ~1/2 second every 8-15 seconds. This
is and what isn't
supported of the following, which we currently use or plan to use:
- UFS in a ZFS volume, mounted locally?
- a ZFS volume, iSCSI exported (soon to be COMSTAR), locally imported
again, and with a ZFS in it locally mounted/imported?
Thanks!
/ragge
On 12/03/09 15:26, Ragnar
Thank you Cindy for your reply!
On 3 dec 2009, at 18.35, Cindy Swearingen wrote:
A bug might exist but you are building a pool based on the ZFS
volumes that are created in another pool. This configuration
is not supported and possible deadlocks can occur.
I had absolutely no idea that ZFS
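For the record, the unsupported construction under discussion is building one pool out of zvols carved from another, e.g. (names hypothetical):

```shell
# Create a volume (zvol) inside the existing pool "rpool"...
zfs create -V 10g rpool/testvol

# ...and then build a new pool on top of that volume's block device.
# This nesting is the configuration described above as unsupported;
# it can deadlock under memory pressure.
zpool create testpool /dev/zvol/dsk/rpool/testvol
```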
It seems that device names aren't always updated when importing
pools if devices have moved. I am not sure if this is only a
cosmetic issue or if it could actually be a real problem -
could it lead to the device not being found at a later import?
/ragge
(This is on snv_127.)
I ran the
better. It would probably also
give us even a little more security, since hot spares and data disks
aren't paired.
But which setup would give us better performance?
Are there other issues we should consider when choosing?
Thank you in advance for advice and hints!
Ragnar Sundblad