Hi Andras,
No problem writing directly. Answers are inline below. (If there are any
typos, it's because it's late and I have had a very long day ;))
andras spitzer wrote:
Scott,
Sorry for writing you directly, but most likely you have missed my
questions regarding your SW design, whenever you have
Date: Tue, 17 Feb 2009 21:36:38 -0800
From: Richard Elling richard.ell...@gmail.com
To: Toby Thain t...@telegraphics.com.au
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS on SAN?
Message-ID: 499b9e66.2010...@gmail.com
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
sl == Scott Lawson scott.law...@manukau.ac.nz writes:
sl Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than availability, meaning you don't want to
On Wed, 18 Feb 2009, Miles Nordin wrote:
I just don't like the idea people are building fancy space-age data
centers and then thinking they can safely run crappy storage software
that won't handle power outages because they're above having to worry
about all that little-guy nonsense. A big
Miles Nordin wrote:
sl == Scott Lawson scott.law...@manukau.ac.nz writes:
sl Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than
On Tue, February 17, 2009 01:50, Marion Hakanson wrote:
Note that the only available pool failure mode in the presence of a SAN
I/O error for these OSes has been to panic/reboot, but so far when the
systems have come back, data has been fine. We also do tape backups
of these pools, of
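For what it's worth, on releases new enough to have the failmode pool property
that panic-on-error behaviour is tunable. A rough sketch, assuming a pool
named tank:

  # see what the pool will do when all paths to a device are lost
  zpool get failmode tank

  # block and wait for the device to come back instead of panicking;
  # the other settings are 'continue' and 'panic'
  zpool set failmode=wait tank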
Hi All,
I have been watching this thread for a while and thought it was time I
chipped in my 2 cents worth. I have been an aggressive adopter of ZFS here
across all of our Solaris systems and have found the benefits have far
outweighed any small issues that have arisen.
Currently I have many
hj == Henrik Johansson henr...@henkis.net writes:
hj I have been operating quite large deployments of SVM/UFS and
hj VxFS/VxVM for some years and while you sometimes are forced to
hj do a filesystem check and some files might end up in
hj lost+found I have never lost a whole
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other
threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't buy the comments
on UPS unreliability.
Hi,
I remarked on it. FWIW,
Toby Thain wrote:
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't buy the comments
on UPS unreliability.
Hi,
I
On Feb 17, 2009, at 21:35, Scott Lawson wrote:
Everything we have has dual power supplies, fed from dual power
rails, fed from separate switchboards, through separate very large
UPSes, backed by generators, fed by two substations and then cloned
to another data center 3 km away. HA
On 17-Feb-09, at 9:35 PM, Scott Lawson wrote:
Toby Thain wrote:
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other
threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't
David Magda wrote:
On Feb 17, 2009, at 21:35, Scott Lawson wrote:
Everything we have has dual power supplies, fed from dual power
rails, fed from separate switchboards, through separate very large
UPSes, backed by generators, fed by two substations and then cloned
to another data
Toby Thain wrote:
Not at all. You've convinced me. Your servers will never, ever lose
power unexpectedly.
Methinks living in Auckland has something to do with that :-)
http://en.wikipedia.org/wiki/1998_Auckland_power_crisis
When services are reliable, then complacency brings risk.
My
Hello Bob,
Sunday, February 15, 2009, 9:42:25 PM, you wrote:
BF On Sun, 15 Feb 2009, Colin Raven wrote:
As a followup; is there any ongoing sensible way to defend against the
dreaded fragmentation? A [shudder] defrag routine of some kind perhaps?
Forgive the silly questions from the
On Mon, Feb 16, 2009 at 12:26 AM, Sanjeev sanjeev.bagew...@sun.com wrote:
Sriram,
On Mon, Feb 16, 2009 at 11:12:42AM +0530, Sriram Narayanan wrote:
On Mon, Feb 16, 2009 at 9:11 AM, Sanjeev sanjeev.bagew...@sun.com
wrote:
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras
t == Tim t...@tcsac.net writes:
t Uhhh, an S10 box that provides zfs-backed iSCSI is NOT fine. Cite
t the plethora of examples on this list of how the fault
t management stack takes so long to respond it's basically
t unusable as it stands today.
well...if we are talking about
Hi all,
Ok, this might stir some things up again, but I would like to
make this clearer.
I have been reading this and other threads regarding ZFS on SAN and
how well ZFS can recover from a serious error, such as a cached disk
array going down or the connection to the SAN being lost.
On Tue, 17 Feb 2009, Henrik Johansson wrote:
We are currently evaluating whether we should begin to implement ZFS in our SAN. I
can see great opportunities with ZFS, but if we have a higher risk of losing
entire pools, that is a serious issue. I am aware that the other filesystems
might not be in
bfrie...@simple.dallas.tx.us said:
A 12-disk pool that I built a year ago is still working fine with absolutely
no problems at all. Another two disk pool built using cheap large USB
drives has been running for maybe eight months, with no problems.
We have non-redundant ZFS pools on an HDS
Hello Bob,
Saturday, February 14, 2009, 6:16:54 PM, you wrote:
BF If you do use ZFS's redundancy features, it is important to consider
BF resilver time. Try to keep volume size small enough that it may be
BF resilvered in a reasonable amount of time.
Well, in most cases resilver in ZFS
On Sun, 15 Feb 2009, Robert Milkowski wrote:
Well, in most cases resilver in ZFS should be quicker than resilver in
a disk array because ZFS will resilver only blocks which are actually
in use while most disk arrays will blindly resilver full disk drives.
So assuming you still have plenty
On Sun, Feb 15, 2009 at 5:00 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 15 Feb 2009, Robert Milkowski wrote:
Well, in most cases resilver in ZFS should be quicker than resilver in
a disk array because ZFS will resilver only blocks which are actually
in use while most
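As a rough illustration of what that looks like on the host (pool and device
names here are invented), replacing a failed device only rewrites the blocks
that are actually allocated:

  # swap the failed LUN for a spare one
  zpool replace tank c3t2d0 c3t5d0

  # the resilver progress reported here is against allocated data,
  # not the raw size of the disk
  zpool status -v tank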
On Sun, 15 Feb 2009, Colin Raven wrote:
Pardon me for jumping into this discussion. I invariably lurk and keep mouth
firmly shut. In this case however, curiosity and a degree of alarm bade me
to jump in... could you elaborate on 'fragmentation', since the only context
I know this from is Windows. Now
On Sun, Feb 15, 2009 at 8:02 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 15 Feb 2009, Colin Raven wrote:
Pardon me for jumping into this discussion. I invariably lurk and keep
mouth
firmly shut. In this case however, curiosity and a degree of alarm bade me
to jump
On Sun, 15 Feb 2009, Colin Raven wrote:
As a followup; is there any ongoing sensible way to defend against the
dreaded fragmentation? A [shudder] defrag routine of some kind perhaps?
Forgive the silly questions from the sidelines... ignorance knows no
bounds apparently :)
There is no
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
Hi,
When I read the ZFS manual, it usually recommends configuring redundancy at
the ZFS layer, mainly because there are features that will work only with a
redundant configuration (like correction of corrupted data), also
Sriram,
On Mon, Feb 16, 2009 at 11:12:42AM +0530, Sriram Narayanan wrote:
On Mon, Feb 16, 2009 at 9:11 AM, Sanjeev sanjeev.bagew...@sun.com wrote:
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
Hi,
When I read the ZFS manual, it usually recommends to
On 14-Feb-09, at 2:40 AM, Andras Spitzer wrote:
Damon,
Yes, we can provide simple concat inside the array (even though
today we provide RAID5 or RAID1 as our standard, and use Veritas
with concat); the question is more whether it's worth switching
the redundancy from the array to the
as == Andras Spitzer wsen...@gmail.com writes:
as So, you're telling me that even if the SAN provides redundancy
as (HW RAID5 or RAID1), people still configure ZFS with either
as raidz or mirror?
There's some experience that, in the case where the storage device or
the FC mesh glitches
Hi,
When I read the ZFS manual, it usually recommends configuring redundancy at
the ZFS layer, mainly because there are features that will work only with a
redundant configuration (like correction of corrupted data), and it also implies that
the overall robustness will improve.
My question is simple,
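To make the self-healing point concrete: with ZFS-level redundancy a scrub can
both find and repair bad blocks, while on a single non-redundant LUN it can
only report them. A minimal sketch, assuming a pool named tank:

  # read and verify every checksum in the pool; on a mirror or raidz
  # vdev, bad copies are rewritten from a good replica
  zpool scrub tank

  # afterwards, list any files hit by unrecoverable errors
  zpool status -v tank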
Damon,
Yes, we can provide simple concat inside the array (even though today we
provide RAID5 or RAID1 as our standard, and use Veritas with concat); the
question is more whether it's worth switching the redundancy from the array to
the ZFS layer.
The RAID5/1 features of the high-end EMC
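If the redundancy does move up to the ZFS layer, the usual pattern is to export
plain (unprotected) LUNs from each array and let ZFS mirror them. A sketch with
invented device names, one LUN per array:

  # mirror a LUN from frame A against a LUN from frame B, so checksum
  # errors on one side can be repaired from the other
  zpool create tank mirror c4t0d0 c5t0d0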
Bob Friesenhahn wrote:
On Tue, 16 Dec 2008, Reed Gregory wrote:
8 hardware RAID-5 groups (5 drives each) and 2 SAN hot spares.
raidz of these 8 RAID groups, ~14 TB usable.
I did read in a FAQ that doing double redundancy is not recommended
since parity would have to be calculated twice.
On Tue, 16 Dec 2008, Reed Gregory wrote:
8 hardware RAID-5 groups (5 drives each) and 2 SAN hot spares.
raidz of these 8 RAID groups, ~14 TB usable.
I did read in a FAQ that doing double redundancy is not recommended
since parity would have to be calculated twice. I was wondering
what
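For reference, the layout described above, a single raidz vdev built from the
eight array-side RAID-5 LUNs, would be created roughly like this (LUN names
are invented):

  # ZFS parity on top of the array's RAID-5 parity is the double
  # redundancy that FAQ comment refers to
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                          c2t4d0 c2t5d0 c2t6d0 c2t7d0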
Note that it is expected that the cluster will force
import, so in a
I was talking about creation, not import.
You must be running an older version of Solaris. The
s10u4 + sc 3.2
Anyway, bug has now been accepted.
With cluster and SAN, ZFS does not _yet_ behave normally :)
Thanks for your
When moving pools, we of course use export/import or the sczbt Sun Cluster stuff.
Nevertheless, we don't want to use ZFS as a global FS with concurrent access, just
use it like SVM or VxVM to declare volumes usable by the cluster's nodes (and
used by only one node at a time).
so, it seems to me a bit
Christophe Rolland wrote:
When moving pools, we of course use export/import or the sczbt Sun Cluster stuff.
Nevertheless, we don't want to use ZFS as a global FS with concurrent access,
just use it like SVM or VxVM to declare volumes usable by the cluster's nodes
(and used by only one node at a time).
so,
This is probably because ZFS is not supported as a global filesystem. If you
move the
zpool between cluster nodes, you'll need to zpool export it on the first node,
and
zpool import it on the second node.
-Tim
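In other words, the hand-off looks roughly like this when done by hand (node
names invented; a cluster framework such as Sun Cluster would perform the same
steps for you):

  # on the first node: release the pool cleanly
  node1# zpool export x1

  # on the second node: take it over
  node2# zpool import x1

  # if node1 died without exporting, the import must be forced --
  # only do this once node1 is definitely down
  node2# zpool import -f x1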
Hi
I got a SAN disk visible on two nodes (global or zone).
On the first node, I can create a pool using zpool create x1 sandisk.
If I try to reuse this disk on the first node, I get a 'vdev in use' warning.
If I try to create a pool on the second node using the same disk, zpool create
x2 sandisk,
Christophe Rolland wrote:
Hi all
we are considering using ZFS for various storage uses (DB, etc). Most features are
great, especially the ease of use.
Nevertheless, a few questions:
- we are using SAN disks, so most JBOD recommendations don't apply, but I did
not find many experiences with zpools of a
Hi Robert,
thanks for the answer.
You are not the only one. It's somewhere on ZFS developers list...
yes, I checked this on the whole list.
so, let's wait for the feature.
Actually it should complain and using -f (force)
on the active node, yes.
but if we want to reuse the luns on the other
Hello Christophe,
Friday, February 1, 2008, 7:55:31 PM, you wrote:
CR Hi all
CR we consider using ZFS for various storages (DB, etc). Most
CR features are great, especially the ease of use.
CR Nevertheless, a few questions :
CR - we are using SAN disks, so most JBOD recommendations don't
CR
Hi all
we are considering using ZFS for various storage uses (DB, etc). Most features are great,
especially the ease of use.
Nevertheless, a few questions:
- we are using SAN disks, so most JBOD recommendations don't apply, but I did
not find many experiences with zpools of a few terabytes on LUNs... anybody
Todd Sawyers wrote:
I am planning to use ZFS with fibre-attached SAN disk from an EMC Symmetrix.
Based on a note in the admin guide, it appears that even though
the Symmetrix will handle the hardware RAID, it is still advisable to create
a ZFS mirror on the host side to take full advantage of
Hi,
I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage for the disks (like DMP in Veritas)?
Can someone point me to the correct doc and setup?
Thanks in Advance.
Rgds
Vikash Gupta
http://docs.sun.com/source/819-0139/index.html
On 2/17/07, Vikash Gupta [EMAIL PROTECTED] wrote:
Hi,
I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage for the disks (like DMP in Veritas)?
Can someone point me to the correct doc and setup?
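That doc covers the Solaris native multipathing (MPxIO), which plays the DMP
role here. Very roughly, and assuming supported fibre-channel HBAs:

  # enable MPxIO on the FC controllers (it will offer to reboot;
  # device names change to the scsi_vhci form afterwards)
  stmsboot -e

  # after the reboot, confirm each LUN shows more than one path
  mpathadm list lu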