This is one of the greatest annoyances of ZFS. I don't really understand why a
zvol's space cannot be accurately enumerated from top to bottom of the tree
in 'df' output etc. Why does a zvol divorce the space used from the root of
the volume?
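For anyone hitting the same wall: the usage does exist, but only on the dataset side. A quick sketch of where to look (the pool/volume name is a placeholder, not from this thread):

```shell
# df(1) only reports mounted filesystems; a zvol is a block device,
# so its space shows up in the ZFS dataset tree instead:
zfs list -t volume -o name,volsize,used,refer
zfs get -r usedbydataset,usedbyrefreservation tank/myvol
```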
Gregg Wonderly
On Feb 6, 2013, at 5:26 PM
Have you tried importing the pool with that drive completely unplugged? Which
HBA are you using? How many of these disks are on the same or on separate HBAs?
Gregg Wonderly
On Jan 8, 2013, at 12:05 PM, John Giannandrea j...@meer.net wrote:
I seem to have managed to end up with a pool
Thank you,
Jerry
On 10/26/12 10:02 AM, Gregg Wonderly wrote:
I've been using this card
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
for my Solaris/Open Indiana installations because it has 8 ports. One of
the issues that this card seems to have
wouldn't have to figure
out how to do the reboot shuffle. Instead, you could just shuffle the symlinks.
Gregg Wonderly
On Nov 9, 2012, at 10:47 AM, Jim Klimov jimkli...@cos.ru wrote:
There are times when ZFS options cannot be applied at the moment,
i.e. changing desired mountpoints of active
I've been using this card
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
for my Solaris/Open Indiana installations because it has 8 ports. One of the
issues that this card seems to have is that certain failures can cause
secondary problems in other drives on the same
What is the error message you are seeing on the replace? This sounds like a
slice size/placement problem, but clearly, prtvtoc seems to think that
everything is the same. Are you certain that you did prtvtoc on the correct
drive, and not one of the active disks by mistake?
Gregg Wonderly
On Aug 28, 2012, at 6:01 AM, Murray Cullen themurma...@gmail.com wrote:
I've copied an old home directory from an install of OS 134 to the data pool
on my OI install. OpenSolaris apparently had wine installed, as I now have a
link to / in my data pool. I've tried everything I can think of to
a missing device.
The older OS and ZFS version may in fact misbehave because some error
condition is not handled correctly.
Gregg Wonderly
On Aug 2, 2012, at 4:49 PM, Richard Elling richard.ell...@gmail.com wrote:
On Aug 1, 2012, at 12:21 AM, Suresh Kumar wrote:
Dear ZFS-Users
be on the same disk. So it's not guaranteed to help if you
have a disk failure.
I thought I understood that copies would not be on the same disk; I guess I
need to go read up on this again.
Gregg Wonderly
___
zfs-discuss mailing list
zfs-discuss
That would make ZFS much nicer to use, so that admins could always take
action on multiple pools and devices without being burdened by failing
devices locking them out of system administration activities.
Gregg Wonderly
On Jul 28, 2012, at 6:45 AM, Antonio S. Cofiño
that be the win?
Gregg Wonderly
On Jul 11, 2012, at 5:56 AM, Sašo Kiselkov wrote:
On 07/11/2012 12:24 PM, Justin Stringfellow wrote:
Suppose you find a weakness in a specific hash algorithm; you use this
to create hash collisions and now imagine you store the hash collisions
in a zfs dataset with dedup
of the algorithms for a random number of bits
is just silly. Where's the real data that tells us what we need to know?
Gregg Wonderly
On Jul 11, 2012, at 9:02 AM, Sašo Kiselkov wrote:
On 07/11/2012 03:57 PM, Gregg Wonderly wrote:
Since there is a finite number of bit patterns per block, have you
into how to approach
the problem, and then some time to do the computations.
Huge space, but still finite…
Gregg Wonderly
On Jul 11, 2012, at 9:13 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Gregg Wonderly
seems ridiculous
to propose.
Gregg Wonderly
On Jul 11, 2012, at 9:22 AM, Bob Friesenhahn wrote:
On Wed, 11 Jul 2012, Sašo Kiselkov wrote:
the hash isn't used for security purposes. We only need something that's
fast and has a good pseudo-random output distribution. That's why I
looked toward
Yes, but from the other angle, the number of unique 128K blocks that you can
store on your ZFS pool is actually finitely small compared to the total
space. So the number of patterns you actually need to consider is bounded by
the physical limits of the universe.
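A back-of-the-envelope sketch of that comparison (the 1 EiB pool size is my illustrative choice, not a figure from the thread):

```python
import math

# A 128 KiB record is 1,048,576 bits, so there are 2**1048576
# possible distinct block contents -- unimaginably many.
block_bits = 128 * 1024 * 8
distinct_blocks_log2 = block_bits          # log2 of the count of possible blocks

# Even an (illustrative) 1 EiB pool can hold only this many 128 KiB blocks:
pool_bytes = 2 ** 60
pool_blocks = pool_bytes // (128 * 1024)   # 2**43 blocks
pool_blocks_log2 = math.log2(pool_blocks)

print(distinct_blocks_log2, pool_blocks_log2)  # 1048576 43.0
```

So a full pool exercises at most 2**43 of the 2**1048576 possible block values, which is the sense in which the occupied key space is vanishingly small.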
Gregg Wonderly
On Jul 11, 2012
of magnitude,
would you be okay with that? What assurances would you need to be content
using my ZFS pool?
Gregg Wonderly
On Jul 11, 2012, at 9:43 AM, Sašo Kiselkov wrote:
On 07/11/2012 04:30 PM, Gregg Wonderly wrote:
This is exactly the issue for me. It's vital to always have verify on. If
you
I'm just suggesting that the time frame in which 256 bits or 512 bits becomes
less safe is closing faster than one might actually think, because the social
elements of the internet allow a lot more effort to be focused on a single
problem than one might expect.
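To put numbers on the accidental-collision side of this (my arithmetic, not from the thread; it says nothing about deliberately engineered collisions, which is the attack being worried about here):

```python
import math

# Birthday bound: with n distinct blocks and a b-bit hash, the
# probability of any accidental collision is at most n**2 / 2**(b+1).
n = 2 ** 43      # ~1 EiB worth of unique 128 KiB blocks (illustrative)
b = 256          # e.g. a SHA-256 checksum

log2_p_upper = 2 * math.log2(n) - (b + 1)
print(log2_p_upper)  # -171.0, i.e. at most a 1-in-2**171 chance
```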
Gregg Wonderly
On Jul 11, 2012, at 9
You're entirely sure that there could never be two different blocks that can
hash to the same value and have different content?
Wow, can you just send me the cash now and we'll call it even?
Gregg
On Jul 11, 2012, at 9:59 AM, Sašo Kiselkov wrote:
On 07/11/2012 04:56 PM, Gregg Wonderly wrote:
On 07/11/2012 05:58 PM, Gregg Wonderly wrote:
You're entirely sure that there could never be two different blocks that can
hash to the same value and have different content?
Wow, can you just send me the cash now and we'll call it even?
You're the one making the positive claim and I'm
On Jul 11, 2012, at 12:06 PM, Sašo Kiselkov wrote:
I say, in fact, that the total number of unique patterns that can exist on
any pool is small compared to the total, illustrating that I understand how
the key space for the algorithm is small when looking at a ZFS pool, and
thus could
?
Gregg Wonderly
On 6/16/2012 2:02 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote:
when you say remove the device, I assume you mean simply make it unavailable
for import (I can't remove it from the vdev).
Yes, that's what I meant.
root@openindiana-01:/mnt
On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
Use 'dd' to replicate as much of lofi/2 as you can onto another device, and
then
cable that into place?
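The salvage idiom being suggested is roughly the following; conv=noerror,sync is the key part, since it keeps dd going past read errors and pads short reads so offsets stay aligned. Real device paths (e.g. /dev/lofi/2 and the replacement disk) would replace the throwaway files used here for demonstration:

```shell
# Stand-ins for the failing device and the replacement disk:
dd if=/dev/zero of=/tmp/failing.img bs=128k count=8 2>/dev/null

# The actual salvage command shape: keep going past read errors
# (noerror) and pad short reads out to the block size (sync).
dd if=/tmp/failing.img of=/tmp/replacement.img bs=128k conv=noerror,sync 2>/dev/null

ls -l /tmp/replacement.img
```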
It looks like you just need to put a functioning, working
On Jun 16, 2012, at 10:13 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 09:58:40AM -0500, Gregg Wonderly wrote:
On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
Use 'dd' to replicate as much of lofi/2 as you can onto
loss much more often.
Gregg Wonderly
On 1/24/2012 9:50 AM, Stefan Ring wrote:
After having read this mailing list for a little while, I get the
impression that there are at least some people who regularly
experience on-disk corruption that ZFS should be able to report and
handle. I’ve been
that I've been using for
root. So, I put in one of my 1.5TB spares for the moment, until I decide
whether or not to order a new small drive.
On Mon, Dec 19, 2011 at 3:55 PM, Gregg Wonderly gregg...@gmail.com
mailto:gregg...@gmail.com wrote:
That's why I'm asking. I think it should always
. The attached
mirror doesn't have to be the same size as the first component.
On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly gregg...@gmail.com
mailto:gregg...@gmail.com wrote:
Cindy, will it ever be possible to just have 'attach' mirror the surfaces,
including the partition tables
that it can't be corrected
by re-reading enough times.
It looks like you've started mirroring some of the drives. That's really what
you should be doing for the other non-mirror drives.
Gregg Wonderly
nervous that the other half is going to fall over.
I'm not trying to be hard-nosed about this, I'm just trying to share my angst
and frustration with the details that drove me in that direction.
Gregg Wonderly
On 12/16/2011 2:56 AM, Andrew Gabriel wrote:
On 12/16/11 07:27 AM, Gregg Wonderly
Cindy, will it ever be possible to just have 'attach' mirror the surfaces,
including the partition tables? I spent an hour today trying to get a new
mirror on my root pool. There was a 250GB disk that failed. I only had a
1.5TB handy as a replacement. prtvtoc ... | fmthard does not work in
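For reference, the label-copy idiom being described is roughly this (device names are placeholders and the disks are assumed to carry SMI labels; as the message notes, it breaks down when the replacement disk's geometry differs from the original's):

```shell
# Copy the slice table from the old disk to the new one, then attach:
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
zpool attach rpool c0t0d0s0 c0t1d0s0
```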
installed, but by now I just want to know how it might be done from a
shell prompt.
rm ./-c ./-O ./-k
And many versions of getopt support the use of -- as the end of options
indicator so that you can do
rm -- -c -O -k
to remove those as well.
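Both spellings can be demonstrated in a throwaway directory:

```shell
cd "$(mktemp -d)"

touch ./-c ./-O ./-k   # create the awkward names safely
rm ./-c ./-O ./-k      # a path prefix keeps rm from parsing them as options

touch ./-c ./-O ./-k
rm -- -c -O -k         # '--' marks the end of options for rm

ls -A                  # prints nothing: all three were removed both times
```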
Gregg Wonderly
On 11/10/2011 7:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T hard disks
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B
pools to have disk sets
removed. It would provide the basic mechanism needed: just moving data
around to eliminate the use of the particular part of the pool that you
wanted to remove.
Gregg Wonderly
I've been building a few 6-disk boxes for VirtualBox servers, and I am also
surveying how I will add more disks as these boxes need it. Looking around on
the HCL, I see the Lycom PE-103 is supported. That's just 2 more disks; I'm
typically going to want to add a raid-z w/spare to my zpools, so