On 4/6/2011 11:08 AM, Erik Trimble wrote:
Traditionally, the reason for a separate /var was one of two major items:
(a) /var was writable, and / wasn't - this was typical of diskless or
minimal local-disk configurations. Modern packaging systems are making
this kind of configuration
On 2/25/2011 4:15 PM, Torrey McMahon wrote:
On 2/25/2011 3:49 PM, Tomas Ögren wrote:
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at about 90% for me.. all of a sudden, the mail
in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that
mpathadm is the storage multipath admin tool.)
If scsi_vhci is loaded in the kernel you have storage multipathing
enabled. (Check with modinfo.)
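Both checks mentioned above can be combined into a small probe. This is a sketch only; on non-Solaris systems, where modinfo takes different arguments, it simply falls through to "not detected" instead of erroring out.

```shell
# Sketch: check whether the scsi_vhci module is loaded, as suggested
# above. On Solaris, plain "modinfo" lists loaded kernel modules;
# elsewhere the pipeline produces no match and we report accordingly.
if modinfo 2>/dev/null | grep -q scsi_vhci; then
  MPXIO="enabled"
else
  MPXIO="not detected"
fi
echo "scsi_vhci multipathing: $MPXIO"
```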
On 2/15/2011 3:53 PM, Ray Van Dolson wrote:
I'm troubleshooting an existing
On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not
a ZIL-like thing), it should be relatively simple to swamp one with
write requests (most SANs have little more than 1GB of cache), at
which point, the SAN will be blocking on
On 1/30/2011 5:26 PM, Joerg Schilling wrote:
Richard Elling <richard.ell...@gmail.com> wrote:
ufsdump is the problem, not ufsrestore. If you ufsdump an active
file system, there is no guarantee you can ufsrestore it. The only way
to guarantee this is to keep the file system quiesced during the
On 1/25/2011 2:19 PM, Marion Hakanson wrote:
The only special tuning I had to do was turn off round-robin load-balancing
in the mpxio configuration. The Seagate drives were incredibly slow when
running in round-robin mode, very speedy without.
Interesting. Did you switch to the load-balance
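For reference, the round-robin policy described above is set in the scsi_vhci driver configuration. A hedged sketch of the relevant fragment follows; verify the property name and accepted values against your Solaris release before using it, and note a reboot (or driver reload) is needed for it to take effect.

```
# /kernel/drv/scsi_vhci.conf fragment (illustrative):
# switch the default path policy away from round-robin.
load-balance="none";
```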
On 1/18/2011 2:46 PM, Philip Brown wrote:
My specific question is, how easily does ZFS handle *temporary* SAN
disconnects, to one side of the mirror?
What if the outage is only 60 seconds?
3 minutes?
10 minutes?
an hour?
Depends on the multipath drivers and the failure mode. For example, if
Are those really your requirements? What is it that you're trying to
accomplish with the data? Make a copy and provide it to another host?
On 11/15/2010 5:11 AM, sridhar surampudi wrote:
Hi I am looking in similar lines,
my requirement is
1. create a zpool on one or many devices ( LUNs ) from
On 5/23/2010 11:49 AM, Richard Elling wrote:
FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life
(EOSL) in 2006. Personally, I hate them with a passion and would like to
extend an offer to use my tractor to bury the beast. :-)
I'm sure I can get some others to help. Can I
Not true. There are different ways that a storage array, and its
controllers, connect to the host-visible front-end ports, which might be
confusing the author, but I/O isn't duplicated as he suggests.
On 4/4/2010 9:55 PM, Brad wrote:
I had always thought that with mpxio, it load-balances IO
The author mentions multipathing software in the blog entry. Kind of
hard to mix that up with cache mirroring if you ask me.
On 4/5/2010 9:16 PM, Brad wrote:
I'm wondering if the author is talking about cache mirroring where the cache
is mirrored between both controllers. If that is the
This is a topic for indiana-discuss, not zfs-discuss. If you read
through the archives of that alias you should see some pointers.
On 1/31/2010 11:38 AM, Tom Bird wrote:
Afternoon,
I note to my dismay that I can't get the community edition any more
past snv_129, this version was closest to
On 1/8/2010 10:04 AM, James Carlson wrote:
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
Ah, ok. I didn't know that.
Does anyone know how that works? I can't find it in the docs, no one
Make sure you have the latest LU patches installed. There were a lot of fixes
put back in that area within the last six months or so.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 12/30/2009 2:40 PM, Richard Elling wrote:
There are a few minor bumps in the road. The ATA PASSTHROUGH
command, which allows TRIM to pass through the SATA drivers, was
just integrated into b130. This will be more important to small servers
than SANs, but the point is that all parts of the
Suggest you start with the man page
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
On 10/15/2009 4:19 PM, Javier Conde wrote:
Hello,
I've seen in the what's new of Solaris 10 update 8 just released
that ZFS now includes the primarycache and secondarycache properties.
Is this the
As some Sun folks pointed out:
1) No redundancy at the power or networking side
2) Getting 2TB drives in a x4540 would make the numbers closer
3) Performance isn't going to be that great with their design but...they
might not need it.
On 9/2/2009 2:13 PM, Michael Shadle wrote:
Yeah I wrote
Before I put one in ... anyone else seen one? Seems we support
compression on the root pool but there is no way to enable it at install
time outside of a custom script you run before the installer. I'm
thinking it should be a real install time option, have a jumpstart
keyword, etc. Same with
On 5/1/2009 2:01 PM, Miles Nordin wrote:
I've never heard of using multiple-LUN stripes for storage QoS before.
Have you actually measured some improvement in this configuration over
a single LUN? If so that's interesting.
Because of the way queuing works in the OS and in most array
On 4/20/2009 7:26 PM, Robert Milkowski wrote:
Well, you need to disable cache flushes on zfs side then (or make a
firmware change work) and it will make a difference.
If you're running recent OpenSolaris/Solaris/SX builds you shouldn't
have to disable cache flushing on the array. The
On 1/20/2009 1:14 PM, Richard Elling wrote:
Orvar Korvar wrote:
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
able to heal corrupted blocks? Then this is evidence against ZFS + HW raid,
and you should only use ZFS?
Cyril Payet wrote:
Hello there,
Hitachi USP-V (sold as 9990V by Sun) provides thin provisioning,
known as Hitachi Dynamic Provisioning (HDP).
This gives a way to make the OS believe that a huge lun is
available whilst its size is not physically allocated on the
DataSystem side.
A simple
On 12/29/2008 8:20 PM, Tim wrote:
On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon <tmcmah...@yahoo.com> wrote:
There are some mainframe filesystems that do such things. I think
there
was also an STK array - Iceberg[?] - that had similar functionality
On 12/29/2008 10:36 PM, Tim wrote:
On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon <tmcmah...@yahoo.com> wrote:
On 12/29/2008 8:20 PM, Tim wrote:
I run into the same thing, but once I say "I can add more space
without downtime" they tend to smarten up
about garbage!
z
- Original Message - From: Torrey McMahon
[EMAIL PROTECTED]
To: Richard Elling [EMAIL PROTECTED]
Cc: Joseph Zhou [EMAIL PROTECTED]; William D.
Hathaway [EMAIL PROTECTED];
[EMAIL PROTECTED]; zfs-discuss@opensolaris.org;
[EMAIL PROTECTED]
Sent: Sunday, December 07
Ian Collins wrote:
On Mon 08/12/08 08:14 , Torrey McMahon [EMAIL PROTECTED] sent:
I'm pretty sure I understand the importance of a snapshot API. (You take
the snap, then you do the backup or whatever) My point is that, at
least on my quick read, you can do most of the same things
Richard Elling wrote:
Joseph Zhou wrote:
Yeah?
http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
Snapshot is a big deal?
Snapshot is a big deal, but you will find most hardware RAID
implementations
are somewhat limited,
- From: Torrey McMahon [EMAIL PROTECTED]
To: Richard Elling [EMAIL PROTECTED]
Cc: Joseph Zhou [EMAIL PROTECTED]; William D. Hathaway
[EMAIL PROTECTED]; [EMAIL PROTECTED];
zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
Sent: Sunday, December 07, 2008 1:58 AM
Subject: Re: [zfs-discuss
You may want to ask your SAN vendor if they have a setting you can make
to no-op the cache flush. That way you don't have to worry about the
flush behavior if you change/add different arrays.
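The host-side alternative being avoided here is the system-wide ZFS tunable. A sketch of the /etc/system entry follows; note it applies to every pool on the host, which is exactly why the per-array no-op setting suggested above is preferable.

```
* /etc/system fragment (illustrative). Tells ZFS to skip cache-flush
* requests entirely; only safe when all pools sit on arrays whose
* write caches are non-volatile.
set zfs:zfs_nocacheflush = 1
```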
Adam N. Copeland wrote:
Thanks for the replies.
It appears the problem is that we are I/O bound. We
Richard Elling wrote:
Adam N. Copeland wrote:
Thanks for the replies.
It appears the problem is that we are I/O bound. We have our SAN guy
looking into possibly moving us to faster spindles. In the meantime, I
wanted to implement whatever was possible to give us breathing room.
Turning
Spencer Shepler wrote:
On Jul 10, 2008, at 7:05 AM, Ross wrote:
Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or
using nvram with RAID (which raid controllers have been doing for
years) would just be ridiculous. This is nothing more than cache,
and even with the
I'm doing some simple testing of ZFS block reuse and was wondering when
deferred frees kick in. Is it on some sort of timer to ensure data
consistency? Does another routine call it? Would something as simple as
sync(1M) get the free block list written out so future allocations could
use the
A Darren Dunham wrote:
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
However, some apps will probably be very unhappy if i/o takes 60 seconds
to complete.
It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang
Richard Elling wrote:
Tobias Exner wrote:
Hi John,
I've done some tests with a SUN X4500 with zfs and MAID using the
powerd of Solaris 10 to power down the disks which weren't access for
a configured time. It's working fine...
The only thing I run into was the problem that it took
rather than living on the controller entirely.
-Andy
From: [EMAIL PROTECTED] on behalf of Torrey McMahon
Sent: Mon 5/19/2008 1:59 PM
To: Bob Friesenhahn
Cc: zfs-discuss@opensolaris.org; Kenny
Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays
eric kustarz wrote:
So even with the above, if you add a vdev, slog, or l2arc later on,
that can be lost via the history being a ring buffer. There's a RFE
for essentially taking your current 'zpool status' output and
outputting a config (one that could be used to create a brand new
Tim wrote:
He wants to mount the ZFS filesystem (I'm assuming off of a backend
SAN storage array) to two heads, then round-robin NFS connections
between the heads to essentially *double* the throughput.
pNFS is the droid you are looking for.
Anyone have a pointer to a general ZFS health/monitoring module for
SunMC? There isn't one baked into SunMC proper which means I get to
write one myself if someone hasn't already done it.
Thanks.
I'm not an Oracle expert but I don't think Oracle checksumming can
correct data. If you have ZFS checksums enabled, and you're mirroring in
your zpools, then ZFS can self-correct as long as the checksum on the
other half of the mirror is good.
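The self-correction described above can be illustrated with a toy model, plain files and cksum standing in for mirror halves and ZFS checksums (nothing here is actual ZFS machinery): the copy whose checksum still matches the stored value rewrites the bad copy.

```shell
# Toy sketch of mirror self-healing: two copies of a block plus a
# stored checksum. After one copy is corrupted, the copy that still
# matches the checksum repairs the other. Illustrative only.
tmp=$(mktemp -d)
printf 'important data' > "$tmp/copy0"
cp "$tmp/copy0" "$tmp/copy1"
sum=$(cksum < "$tmp/copy0")                # checksum of the good data
printf 'bit rot!!!!!!!' > "$tmp/copy1"     # corrupt one mirror half
for c in "$tmp/copy0" "$tmp/copy1"; do
  if [ "$(cksum < "$c")" = "$sum" ]; then good="$c"; fi
done
cp "$good" "$tmp/copy1"                    # heal from the good half
HEALED=$(cat "$tmp/copy1")
rm -rf "$tmp"
echo "$HEALED"
```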
Mertol Ozyoney wrote:
Don't take my words as an
Kyle McDonald wrote:
Vincent Fox wrote:
So the point is, a JBOD with a flash drive in one (or two to mirror the
ZIL) of the slots would be a lot SIMPLER.
We've all spent the last decade or two offloading functions into specialized
hardware, that has turned into these massive
Lewis Thompson wrote:
Hello,
I'm planning to use VMware Server on Ubuntu to host multiple VMs, one
of which will be a Solaris instance for the purposes of ZFS
I would give the ZFS VM two physical disks for my zpool, e.g. /dev/sda
and /dev/sdb, in addition to the VMware virtual disk for the
Robert Milkowski wrote:
Hello Darren,
DJM BTW there isn't really any such thing as disk corruption there is
DJM data corruption :-)
Well, if you scratch it hard enough :)
http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Jim Dunham wrote:
This raises a key point that you should be aware of. ZFS does not
support shared access to the same ZFS filesystem.
unless you put NFS or something on top of it.
(I always forget that part myself.)
Peter Schuller wrote:
From what I read, one of the main things about ZFS is "Don't trust the
underlying hardware." If this is the case, could I run Solaris under
VirtualBox or under some other emulated environment and still get the
benefits of ZFS such as end to end data integrity?
Louwtjie Burger wrote:
On 12/19/07, David Magda [EMAIL PROTECTED] wrote:
On Dec 18, 2007, at 12:23, Mike Gerdts wrote:
2) Database files - I'll lump redo logs, etc. in with this. In Oracle
RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file
system. ZFS does
Nicolas Dorfsman wrote:
On Nov 27, 2007, at 16:17, Torrey McMahon wrote:
According to the array vendor the 99xx arrays no-op the cache flush
command. No need to set the /etc/system flag.
http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_arrays
Perfect!
Thanks Torrey
The profit stuff has been under NDA for a while, but we started telling the
street a while back and they seem to like the idea. :)
Selim Daoud wrote:
wasn't that an NDA info??
s-
On 10/18/07, Torrey McMahon [EMAIL PROTECTED] wrote:
MC wrote:
Sun's storage strategy:
1) Finish Indiana and distro constructor
2) (ship stuff using ZFS-Indiana)
3) Success
4) Profit :)
Jonathan Edwards wrote:
On Sep 21, 2007, at 14:57, eric kustarz wrote:
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some find it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
Did you upgrade your pools? zpool upgrade -a
John-Paul Drawneek wrote:
err, I installed the patch and am still on zfs 3?
solaris 10 u3 with kernel patch 120011-14
Mark wrote:
Hi All,
I'm just wondering (I figure you can do this but don't know what hardware and
stuff I would need) if I can set up a mirror of a raidz zpool across a
network.
Basically, the setup is a large volume of Hi-Def video is being streamed from
a camera, onto an editing
Has anyone thought about using snapshots and WORM devices. In theory,
you'd have to keep the WORM drive out of the pool, or as a special
device, and it would have to be a full snapshot even though we really
don't have those.
Any plans in this area? I could take a snapshot, clone it, then copy
Carisdad wrote:
Peter Tribble wrote:
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 []
Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
Darren Dunham wrote:
If it helps at all, we're having a similar problem. Any LUNs
configured with their default owner to be SP B don't get along with
ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
MPXIO seems to work well for most cases, but the SAN guys are not
Bill Sommerfeld wrote:
On Mon, 2007-07-16 at 18:19 -0700, Russ Petruzzelli wrote:
Or am I just getting myself into shark infested waters?
configurations that might be interesting to play with:
(emphasis here on play...)
1) use the T3's management CLI to reconfigure the T3 into
James C. McPherson wrote:
The T3B with fw v3.x (I think) and the T4 (aka 6020 tray) allow
more than two volumes, but you're still quite restricted in what
you can do with them.
You are limited to two raid groups with slices on top of those raid
groups presented as LUNs. I'd just stick
Peter Tribble wrote:
On 7/13/07, Alderman, Sean [EMAIL PROTECTED] wrote:
I wonder what kind of card Peter's using and if there is a potential
linkage there. We've got the Sun-branded Emulex cards in our sparcs. I
also wonder if Peter were able to allocate an additional LUN to his
system
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM:
Peter Tribble wrote:
I've not got that far. During an import, ZFS just pokes around - there
doesn't seem to be an explicit way to tell it which particular devices
or SAN paths to use.
You can't
I really don't want to bring this up but ...
Why do we still tell people to use swap volumes? Would we have the same
sort of issue with the dump device so we need to fix it anyway?
Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
PSARC 2007/171 will be available in b68. Any documentation anywhere on
how to take advantage of it?
Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be
The interesting collision is going to be file system level encryption
vs. de-duplication as the former makes the latter pretty difficult.
dave johnson wrote:
How other storage systems do it is by calculating a hash value for
said file (or block), storing that value in a db, then checking every
Gary Mills wrote:
On Wed, Jun 20, 2007 at 12:23:18PM -0400, Torrey McMahon wrote:
James C. McPherson wrote:
Roshan Perera wrote:
But Roshan, if your pool is not replicated from ZFS' point of view,
then all the multipathing and raid controller backup in the world will
not make
Victor Engle wrote:
On 6/20/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Also, how does replication at the ZFS level use more storage - I'm
assuming raw block - than at the array level?
Just to add to the previous comments. In the case where you
James C. McPherson wrote:
Roshan Perera wrote:
But Roshan, if your pool is not replicated from ZFS' point of view,
then all the multipathing and raid controller backup in the world will
not make a difference.
James, I agree from a ZFS point of view. However, from the EMC or the
customer point
This sounds familiar... like something about the powerpath device not
responding to the SCSI inquiry strings. Are you using the same version
of powerpath on both systems? Same type of array on both?
Dominik Saar wrote:
Hi there,
have a strange behavior if I'll create a zfs pool at an EMC
Graham Perrin wrote:
We have irc://irc.freenode.net/solaris
and irc://irc.freenode.net/opensolaris
and the other channels listed at
http://blogs.sun.com/jimgris/entry/opensolaris_on_irc
AND growing discussion of ZFS in Mac- 'FUSE- and Linux-oriented channels
BUT unless I'm missing something,
Toby Thain wrote:
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote:
What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote:
What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks as a configured RAID? Do you know how to
(re)configure the controller or restore
Albert Chin wrote:
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
I'm getting really poor write performance with ZFS on a RAID5 volume
(5 disks) from a storagetek 6140 array. I've searched the web and
these forums and it seems that this zfs_nocacheflush option is the
solution,
to end...
Nathan.
Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote:
What if your HW-RAID-controller dies? in say 2 years or more..
What will read your disks as a configured RAID? Do you know how
John-Paul Drawneek wrote:
Yes, i am also interested in this.
We can't afford two super-fast setups, so we are looking at having a huge pile
of SATA to act as a real-time backup for all our streams.
So what can AVS do, and what are its limitations?
Would a just using zfs send and receive do or does AVS
Jonathan Edwards wrote:
On May 15, 2007, at 13:13, Jürgen Keil wrote:
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data... and
Anantha N. Srirama wrote:
For whatever reason EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array
be warned about support issues.
They should have fixed that in their matrices. It should say something
Matthew Ahrens wrote:
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my
only two options to achieve this would be Sun Cluster or Availability
Suite. I thought that this functionality was in the works, but I haven't
heard anything lately.
You could put
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my only two
options to achieve this would be Sun Cluster or Availability Suite. I thought
that this functionality was in the works, but I haven't heard anything lately.
AVS is working today. (See Jim
Aaron Newcomb wrote:
Terry,
Yes. AVS is pretty expensive. If ZFS did this out of the box it would be a huge
differentiator. I know ZFS does snapshots today, but if we could extend this
functionality to work across distance then we would have something that could
compete with expensive
Brian Hechinger wrote:
On Fri, Apr 27, 2007 at 02:44:02PM -0700, Malachi de Ælfweald wrote:
2. ZFS mirroring can work without the metadb, but if you want the dump
mirrored too, you need the metadb (I don't know if it needs to be mirrored,
but I wanted both disks to be identical in case one
Mike Dotson wrote:
On Sat, 2007-04-28 at 17:48 +0100, Peter Tribble wrote:
On 4/26/07, Lori Alt [EMAIL PROTECTED] wrote:
Peter Tribble wrote:
snip
Why do administrators do 'df' commands? It's to find out how much space
is used or available in a single file
Dickon Hood wrote:
[snip]
I'm currently playing with ZFS on a T2000 with 24x500GB SATA discs in an
external array that presents as SCSI. After having much 'fun' with the
Solaris SCSI driver not handling LUNs > 2TB
That should work if you have the latest KJP and friends. (Actually, it
should
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
We have been combing the message boards and it looks like there was a lot of
talk about this interaction of zfs+nfs back in november and before but since
i have not seen much. It seems the only fix up to that date was to disable
zil, is that
Anton B. Rang wrote:
Second, VDBench is great for testing raw block i/o devices.
I think a tool that does file system testing will get you
better data.
OTOH, shouldn't a tool that measures raw device performance reasonably reflect
Oracle performance when configured for raw devices?
Frank Cusack wrote:
On April 16, 2007 10:24:04 AM +0200 Selim Daoud
[EMAIL PROTECTED] wrote:
hi all ,
when doing several zfs snapshot of a given fs, there are dependencies
between snapshots that complexify the management of snapshots
is there a plan to ease these dependencies, so we can reach
Tony Galway wrote:
I had previously undertaken a benchmark that pits “out of box”
performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
outstanding availability issues in ZFS. These have been taken care of,
and I am once again undertaking this challenge on behalf of my
Frank Cusack wrote:
On April 11, 2007 11:54:38 AM +0200 Constantin Gonzalez Schmitz
[EMAIL PROTECTED] wrote:
Hi Mark,
Mark J Musante wrote:
On Tue, 10 Apr 2007, Constantin Gonzalez wrote:
Has anybody tried it yet with a striped mirror? What if the pool is
composed out of two mirrors? Can I
If I create a symlink inside a zfs file system and point the link to a
file on a ufs file system on the same node how much space should I
expect to see taken in the pool as used? Has this changed in the last
few months? I know work is being done under 6516171 to make symlinks
dittoable but I
Robert Milkowski wrote:
2. MPxIO - it tries to failover disk to second SP but looks like it
tries it forever (or very very long). After some time it should
have generated disk IO failure...
Are there any other hosts connected to this storage array? It looks like
there might be an
Richard Elling wrote:
Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to suggest that the material referenced by HEADS UP
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR firing up the secondary.
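The cycle described above might look like the following transcript. Purely illustrative: the pool, dataset, snapshot, and host names are made up, so adapt before use.

```shell
# One DR cycle: snapshot, ship only the delta since the previous
# snapshot to the secondary, then retire the old snapshot.
zfs snapshot tank/data@t2
zfs send -i tank/data@t1 tank/data@t2 | ssh drhost zfs recv -F tank/data
zfs destroy tank/data@t1
```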
Matthew Ahrens wrote:
Torrey McMahon wrote:
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a
large data store where they will be taking snapshots every N minutes
or so, sending the difference of the snapshot and previous snapshot
with zfs send -i to a remote
Gino Ruopolo wrote:
Conclusion:
After a day of tests we are starting to think that ZFS
doesn't work well with MPxIO.
What kind of array is this? If it is not a Sun array
then how are you
configuring mpxio to recognize the array?
We are facing the same problems with a
Richard Elling wrote:
Akhilesh Mritunjai wrote:
I believe that the word would have gone around already, Google
engineers have published a paper on disk reliability. It might
supplement the ZFS FMA integration and well - all the numerous
debates on spares etc etc over here.
Good paper. They
Richard Elling wrote:
JS wrote:
I'm using ZFS on both EMC and Pillar arrays with PowerPath and MPxIO,
respectively. Both work fine - the only caveat is to drop your
sd_queue to around 20 or so, otherwise you can run into an ugly
display of bus resets.
This is sd_max_throttle or
Robert Milkowski wrote:
Hello dudekula,
Thursday, February 15, 2007, 11:08:26 AM, you wrote:
Hi all,
Please let me know the ZFS support for short writes?
And what are short writes?
http://www.pittstate.edu/wac/newwlassignments.html#ShortWrites :-P
Claus Guttesen wrote:
Our main storage is a HDS 9585V Thunder with vxfs and raid5 on 400 GB
sata disk handled by the storage system. If I would migrate to zfs
that would mean 390 jbod's.
How so?
Richard Elling wrote:
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, this 1:1 mapping is not required. I
Dale Ghent wrote:
Yeah sure it might eat into STK profits, but one will still have to
go there for redundant controllers.
Repeat after me: There is no STK. There is only Sun. 8-)
Richard Elling wrote:
Good question. If you consider that mechanical wear out is what
ultimately
causes many failure modes, then the argument can be made that a spun down
disk should last longer. The problem is that there are failure modes
which
are triggered by a spin up. I've never seen
Marion Hakanson wrote:
However, given the default behavior of ZFS (as of Solaris-10U3) is to
panic/halt when it encounters a corrupted block that it can't repair,
I'm re-thinking our options, weighing against the possibility of a
significant downtime caused by a single-block corruption.
Guess
Nicolas Williams wrote:
On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
Could the replication engine eventually be integrated more tightly
with ZFS? That would be slick alternative to send/recv.
But a continuous zfs send/recv would be cool too. In fact, I think