It also wouldn't be a bad idea for ZFS to verify that drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for particular configurations, prior to actually being critically required
(as drives otherwise appearing to have equivalent capacity may not, it
+1
On Thu, Jan 22, 2009 at 11:12 PM, Paul Schlie sch...@comcast.net wrote:
It also wouldn't be a bad idea for ZFS to verify that drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for particular configurations, prior to actually being critically
Would this work? (to get rid of an EFI label).
dd if=/dev/zero of=/dev/dsk/thedisk bs=1024k count=1
Then use format. format might complain that the disk is not labeled. You
can then label the disk.
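In practice the relabel step looks roughly like the following (the prompts are from memory of format -e on OpenSolaris and the disk name is only an example, so verify on your own system):

format -e c4d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y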
Dale
Antonius wrote:
can you recommend a walk-through for this process, or
yes, that's exactly what I did. the issue is that I can't get the corrected
label to be written once I've zero'd the drive. I get an error from fdisk that
apparently still sees the backup label
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384
sectors) for the Solaris EFI label - so you need to zero out both of these.
Of course, since these drives are 1TB, I find it's easier to format
to SMI (vtoc) .. with format -e (choose SMI, label, save, validate -
then choose
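In concrete terms, zeroing both ends looks something like this (a rough sketch only: the device name is hypothetical, the sector count is just an example - use the total format reports for your disk - and the arithmetic needs bash/ksh; double-check everything before running a destructive dd):

DISK=/dev/rdsk/c4d0p0      # whole-disk device, example name only
SECTORS=976760063          # example value; substitute the sector count format reports
dd if=/dev/zero of=$DISK bs=512 count=32                                # first 16KB
dd if=/dev/zero of=$DISK bs=512 seek=$(($SECTORS - 16384)) count=16384  # last 8MB (16384 sectors)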
you mentioned one, so what do you recommend as a workaround?
I've tried re-initializing the disks on another system's HW RAID controller, but
still get the same error.
The user DEFINITELY isn't expecting 5*10^11 bytes, or what you meant to
say, 500,000,000,000 bytes; they're expecting 500GB. You know, 536,870,912,000
bytes. But even if the drive mfg's calculated it correctly, they wouldn't
even be getting that due to filesystem overhead.
Then you have a very
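For anyone keeping score, the two figures being argued about are just decimal versus binary units; quick shell arithmetic shows the roughly 7% gap:

echo $((500 * 1000 * 1000 * 1000))   # 500000000000 - the decimal 500GB the vendor ships
echo $((500 * 1024 * 1024 * 1024))   # 536870912000 - the binary 500GB many users expect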
so you're suggesting I buy 750s to replace the 500s. then if a 750 fails buy
another bigger drive again?
Have you filed a bug/rfe to fix this in ZFS in future?
Anyway, you only need to change the 750GB drives if:
- all 500GB drives are replaced by 750GB disks
- and they're all
I believe this is an fdisk issue. But I don't think any
of the fdisk engineers hang out on this forum.
You might try partitioning the disk on another OS.
-- richard
Antonius wrote:
I'll attach 2 files of output from 2 disks:
c4d0 is a current member of the zpool that is a sibling (as in a
Grab the AOE driver and pull aoelabinit out of the package. They wrote it
just for forcing EFI or Sun labels onto disks when the normal Solaris tools get
in the way. Coraid's website looks like it's broken at the moment, so you may
need to find it elsewhere on the web.
can you recommend a walk-through for this process, or a bit more of a
description? I'm not quite sure how I'd use that utility to repair the EFI label
On Mon, Jan 19, 2009 at 5:39 PM, Adam Leventhal a...@eng.sun.com wrote:
And again, I say take a look at the market today, figure out a percentage,
and call it done. I don't think you'll find a lot of users crying foul over
losing 1% of their drive space when they don't already cry foul
Ross wrote:
The problem is they might publish these numbers, but we really have no way
of controlling what number manufacturers will choose to use in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the current ones you're going to
mj == Moore, Joe joe.mo...@siemens.com writes:
mj For a ZFS pool, (until block pointer rewrite capability) this
mj would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do. no parameters.
You can invent
Miles Nordin wrote:
mj == Moore, Joe joe.mo...@siemens.com writes:
mj For a ZFS pool, (until block pointer rewrite capability) this
mj would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do. no
[I hate to keep dragging this thread forward, but...]
Moore, Joe wrote:
And there is no way to change this after the pool has been created,
since after that time, the disk size can't be changed. So whatever
policy is used by default, it is very important to get it right.
Today, vdev size can
jm == Moore, Joe joe.mo...@siemens.com writes:
jm Sysadmins should not be required to RTFS.
I never said they were. The comparison was between hardware RAID and
ZFS, not between two ZFS alternatives. The point: other systems'
behavior is entirely secret. Therefore, secret opaque
On Tue, Jan 20, 2009 at 2:26 PM, Moore, Joe joe.mo...@siemens.com wrote:
Other storage vendors have specific compatibility requirements for the
disks you are allowed to install in their chassis.
And again, the reason for those requirements is 99% about making money, not
a technical one. If
The user DEFINITELY isn't expecting 5*10^11 bytes, or what you meant to say,
500,000,000,000 bytes; they're expecting 500GB. You know, 536,870,912,000 bytes. But even if
the drive mfg's calculated it correctly, they wouldn't even be getting that due to filesystem
overhead.
I doubt there are
so you're suggesting I buy 750s to replace the 500s. then if a 750 fails buy
another bigger drive again?
the drives are RMA replacements for the other disks that faulted in the array
before. they are the same brand, model and model number, apparently not so
under the label though, but no way I
yes, it's the same make and model as most of the other disks in the zpool and
reports the same number of sectors
The problem is they might publish these numbers, but we really have no way of
controlling what number manufacturers will choose to use in the future.
If for some reason future 500GB drives all turn out to be slightly smaller than
the current ones you're going to be stuck. Reserving 1-2% of
I'm going waaay out on a limb here, as a non-programmer...but...
Since the source is open, maybe community members should organize and
work on some sort of sizing algorithm? I can certainly imagine Sun
deciding to do this in the future - I can also imagine that it's not
at the top of Sun's
Ross wrote:
The problem is they might publish these numbers, but we really have no way of
controlling what number manufacturers will choose to use in the future.
If for some reason future 500GB drives all turn out to be slightly smaller
than the current ones you're going to be stuck.
Richard,
Ross wrote:
The problem is they might publish these numbers, but we really have
no way of controlling what number manufacturers will choose to use
in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the current ones you're going to
Since it's done in software by HDS, NetApp, and EMC, that's complete
bullshit. Forcing people to spend 3x the money for a Sun drive that's
identical to the seagate OEM version is also bullshit and a piss-poor
answer.
I didn't know that HDS, NetApp, and EMC all allow users to replace their
Jim Dunham wrote:
Richard,
Ross wrote:
The problem is they might publish these numbers, but we really have
no way of controlling what number manufacturers will choose to use
in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the
edm == Eric D Mudama edmud...@bounceswoosh.org writes:
edm If, instead of having ZFS manage these differences, a user
edm simply created slices that were, say, 98%
if you're willing to manually create slices, you should be able to
manually enable the write cache, too, while you're in
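For what it's worth, doing that by hand looks roughly like this in format's expert mode (menu names from memory; whether the cache menu appears at all depends on the disk and driver, so treat this as something to verify rather than a given):

format -e c4d0             # example disk name
format> cache
cache> write_cache
write_cache> display       # show the current setting
write_cache> enable        # turn on the write cache for the sliced disk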
On Mon, Jan 19, 2009 at 11:05 AM, Adam Leventhal a...@eng.sun.com wrote:
Since it's done in software by HDS, NetApp, and EMC, that's complete
bullshit. Forcing people to spend 3x the money for a Sun drive that's
identical to the seagate OEM version is also bullshit and a piss-poor
Creating a slice, instead of using the whole disk, will cause ZFS to
not enable write-caching on the underlying device.
Correct. Engineering trade-off. Since most folks don't read the manual,
or the best practices guide, until after they've hit a problem, it is really
just a CYA entry :-(
Since it's done in software by HDS, NetApp, and EMC, that's complete
bullshit. Forcing people to spend 3x the money for a Sun drive that's
identical to the seagate OEM version is also bullshit and a piss-poor
answer.
I didn't know that HDS, NetApp, and EMC all allow users to
On Mon, 19 Jan 2009, Adam Leventhal wrote:
Are you telling me zfs is deficient to the point it can't handle basic
right-sizing like a 15$ sata raid adapter?
How do these $15 sata raid adapters solve the problem? The more details you
could provide the better, obviously.
It is really quite
On Mon, Jan 19, 2009 at 12:39 PM, Adam Leventhal a...@eng.sun.com wrote:
Sorry, I must have missed your point. I thought that you were saying that
HDS, NetApp, and EMC had a different model. Were you merely saying that the
software in those vendors' products operates differently than ZFS?
On Mon, Jan 19, 2009 at 1:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 19 Jan 2009, Adam Leventhal wrote:
Are you telling me zfs is deficient to the point it can't handle basic
right-sizing like a 15$ sata raid adapter?
How do these $15 sata raid adapters solve the
On Mon, Jan 19, 2009 at 01:35:22PM -0600, Tim wrote:
Are you telling me zfs is deficient to the point it can't handle basic
right-sizing like a 15$ sata raid adapter?
How do these $15 sata raid adapters solve the problem? The more details you
could provide the better, obviously.
They
Tim wrote:
On Mon, Jan 19, 2009 at 1:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 19 Jan 2009, Adam Leventhal wrote:
Are you telling me zfs is deficient to the point it can't
handle basic
On Mon, Jan 19, 2009 at 2:55 PM, Adam Leventhal a...@eng.sun.com wrote:
Drive vendors, it would seem, have an incentive to make their 500GB drives
as small as possible. Should ZFS then choose some amount of padding at the
end of each device and chop it off as insurance against a slightly
And again, I say take a look at the market today, figure out a percentage,
and call it done. I don't think you'll find a lot of users crying foul over
losing 1% of their drive space when they don't already cry foul over the
false advertising that is drive sizes today.
Perhaps it's quaint,
So the place we are arriving is to push the RFE for shrinkable pools?
Warning the user about the difference in actual drive size, then
offering to shrink the pool to allow a smaller device seems like a
nice solution to this problem.
The ability to shrink pools might be very useful in other
On Sat, 17 Jan 2009 23:18:35 PST
Antonius antoni...@gmail.com wrote:
Maybe the other disk has an EFI label?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv105 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)
So you're saying zfs does absolutely no right-sizing? That sounds like a
bad idea all around...
You can use a bigger disk; NOT a smaller disk.
Casper
If so what should I do to remedy that? just reformat it?
meh
- Original Message -
From: Antonius antoni...@gmail.com
To: zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 6:54 AM
Subject: Re: [zfs-discuss] replace same sized disk fails with too small
error
If so what should I do to remedy that? just reformat
On Sun, Jan 18, 2009 at 5:18 AM, casper@sun.com wrote:
So you're saying zfs does absolutely no right-sizing? That sounds like a
bad idea all around...
You can use a bigger disk; NOT a smaller disk.
Casper
Right, which is an absolutely piss poor design decision and why every major
Right, which is an absolutely piss poor design decision and why
every major storage vendor right-sizes drives. What happens if I
have an old maxtor drive in my pool whose 500g is just slightly
larger than every other mfg on the market? You know, the one who is
no longer making their
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the market? You know, the one who is no longer making their own drives
On Sun, 18 Jan 2009, Tim wrote:
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the market? You know, the one who is
On Sun, Jan 18, 2009 at 16:51, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
I appreciate that in these times of financial hardship that you can
not afford a 750GB drive to replace the oversized 500GB drive. Sorry
to hear about your situation.
That's easy to say, but what if there were
On Sun, Jan 18, 2009 at 10:17 AM, casper@sun.com wrote:
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the
On Sun, 18 Jan 2009, Will Murnane wrote:
That's easy to say, but what if there were no larger alternative?
Suppose I have a pool composed of those 1.5TB Seagate disks, and
Hitachi puts out some of the same capacity that are actually
slightly smaller. A drive fails in my array, I buy a Hitachi
On Sun, Jan 18, 2009 at 10:16 AM, Adam Leventhal a...@eng.sun.com wrote:
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
On Sun, Jan 18, 2009 at 12:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 18 Jan 2009, Will Murnane wrote:
That's easy to say, but what if there were no larger alternative?
Suppose I have a pool composed of those 1.5TB Seagate disks, and
Hitachi puts out some of the
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Sent: Sunday, January 18, 2009 1:19 PM
To: Will Murnane
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] replace same sized disk fails with too small
error
On Sun, 18 Jan 2009, Will Murnane
On Sun, Jan 18, 2009 at 18:19, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
What do you propose that OpenSolaris should do about this?
Take drive size, divide by 100, round down to two significant digits.
Floor to a multiple of that size. This method wastes no more than 1%
of the disk
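As a concrete illustration of that rule (bash/ksh arithmetic; the sector count is the one from the 500GB drive's format output quoted in this thread):

raw=976760063                    # total sectors of the drive
gran=$((raw / 100))              # 1% of the raw size
scale=1
while [ $((gran / scale)) -ge 100 ]; do scale=$((scale * 10)); done
gran=$((gran / scale * scale))   # rounded down to two significant digits: 9700000
usable=$((raw / gran * gran))    # 970000000 sectors kept, about 0.7% wasted
echo "granule=$gran usable=$usable wasted=$((raw - usable))"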
On Sun, 18 Jan 2009, Will Murnane wrote:
Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640GB, 1.0 TB, etc. I don't see this changing any time
particularly soon; unless someone starts selling a 1.25 TB drive or
something, two digits will suffice. Even then,
On Sun, Jan 18, 2009 at 1:30 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 18 Jan 2009, Will Murnane wrote:
Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640GB, 1.0 TB, etc. I don't see this changing any time
particularly soon; unless
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage off... If
it's a LUN and it's so far off it still can't be added with the
percentage that works across the board for EVERYTHING ELSE, you change the
size of the LUN at the storage array
...@gmail.com
Cc: zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 2:30 PM
Subject: Re: [zfs-discuss] replace same sized disk fails with too small
error
On Sun, 18 Jan 2009, Will Murnane wrote:
Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640GB, 1.0 TB
On Sun, Jan 18, 2009 at 1:56 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage off... If
it's a LUN and it's so far off it still can't be added with the
percentage that works across the
I ran into a bad label causing this once.
Usually the s2 slice is a good bet for your whole disk device, but if it's EFI
labeled, you need to use p0 (somebody correct me if I'm wrong).
I like to zero the first few megs of a drive before doing any of this stuff.
This will destroy
Subject: Re: [zfs-discuss] replace same sized disk fails with too small error
I ran into a bad label causing this once.
Usually the s2 slice is a good bet for your whole disk device, but if it's
EFI labeled, you need to use p0 (somebody correct me if I'm wrong).
I like to zero the first few megs
comment at the bottom...
Tim wrote:
On Sun, Jan 18, 2009 at 1:56 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage
off... If
On Sun, Jan 18, 2009 at 2:43 PM, Richard Elling richard.ell...@sun.com wrote:
comment at the bottom...
DIY. Personally, I'd be more upset if ZFS reserved any sectors
for some potential swap I might want to do later, but may never
need to do. If you want to reserve some space for swappage,
Tim wrote:
On Sun, Jan 18, 2009 at 2:43 PM, Richard Elling richard.ell...@sun.com wrote:
comment at the bottom...
DIY. Personally, I'd be more upset if ZFS reserved any sectors
for some potential swap I might want to do later, but may never
On Sun, Jan 18, 2009 at 3:39 PM, Richard Elling richard.ell...@sun.com wrote:
Tim wrote:
It is naive to think that different storage array vendors
would care about people trying to use another array vendor's
disks in their arrays. In fact, you should get a flat,
impersonal, not supported
On Sun, Jan 18 at 15:00, Tim wrote:
If you're so concerned with the storage *lying* or *hiding* space, I
assume you're leading the charge at Sun to properly advertise drive sizes,
right? Because the 1TB drive I can buy from Sun today is in no way,
shape, or form able to store 1TB of
I'm having an issue replacing a failed 500GB disk with another new one with the
error that the disk is too small. The problem is that it isn't. Is there any
help anyone can offer here?
I've tried adding it once set as a spare or separate from the pool and with
different formats and configs all
Volume name         =
ascii name          = SAMSUNG-S0VVJ1CP30539-0001-465.76GB
bytes/sector        = 512
sectors             = 976760063
accessible sectors  = 976760030

Part   Tag   Flag   First Sector   Size       Last Sector
  0    usr   wm     256            465.75GB   976743646
  1
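For context, the failing operation described just above looks something like the following; the pool and device names are made up for illustration, and the error text is quoted from memory of libzfs, so verify against your build:

zpool replace tank c4d1 c5d0
cannot replace c4d1 with c5d0: device is too small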