On Thu, May 17, 2012 at 2:50 PM, Jim Klimov jimkli...@cos.ru wrote:
New question: if snv_117 does see the 3Tb disks well,
the matter of upgrading the OS becomes less urgent - we
might prefer to delay that until the next stable release
of OpenIndiana or so.
There were some pretty major
A small follow-up on my tests, just in case readers are
interested in some numbers: the UltraStar 3Tb disk got
filled up by a semi-random selection of data from our old
pool in 24 hours sharp, including large dump files and
small sourcedirs via rsync, and some recursive zfs sends
of VM storage
New question: if snv_117 does see the 3Tb disks well,
the matter of upgrading the OS becomes less urgent - we
might prefer to delay that until the next stable release
of OpenIndiana or so.
Now that I think of it, when was raidz3 introduced?..
I don't see it in the zpool manpage as of SXCE
2012-05-18 1:39, Jim Klimov wrote:
A small follow-up on my tests, just in case readers are
interested in some numbers: the UltraStar 3Tb disk got
filled up by a semi-random selection of data from our old
pool in 24 hours sharp
One more number: the smaller pool completed its scrub in 57
On Fri, 18 May 2012, Jim Klimov wrote:
Would there be substantial issues if we start out by making
and filling the new raidz3 8+3 pool on SXCE snv_129 (with
zpool v22) or snv_130, and later upgrade the big zpool
along with the major OS migration - issues that could be
avoided by a preemptive upgrade to
Jim Klimov jimkli...@cos.ru wrote:
We know that large redundancy is highly recommended for
big HDDs, so in-place autoexpansion of the raidz1 pool
onto 3Tb disks is out of the question.
Before I started to use my thumper, I reconfigured it to use RAID-Z2.
This allows me to just replace disks
2012-05-16 6:18, Bob Friesenhahn wrote:
You forgot IDEA #6 where you take advantage of the fact that zfs can be
told to use sparse files as partitions. This is rather like your IDEA #3
but does not require that disks be partitioned.
This is somewhat the method of making missing devices when
2012-05-16 13:30, Joerg Schilling wrote:
Jim Klimov jimkli...@cos.ru wrote:
We know that large redundancy is highly recommended for
big HDDs, so in-place autoexpansion of the raidz1 pool
onto 3Tb disks is out of the question.
Before I started to use my thumper, I reconfigured it to use
On Wed, May 16, 2012 at 1:45 PM, Jim Klimov jimkli...@cos.ru wrote:
Your idea actually evolved for me into another (#7?), which
is simple and apparent enough to be ingenious ;)
DO use the partitions, but split the 2.73Tb drives into a
roughly 2.5Tb partition followed by a 250Gb partition of
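Jim's split-partition layout (idea #7) might be sketched roughly as below; the slice numbers, device names, and pool names are invented for illustration, and the actual slicing would be done interactively with format(1M)/fmthard(1M), which is not shown:

```shell
# Hypothetical layout: each 2.73Tb drive carries a ~2.5Tb slice (s0)
# plus a ~250Gb slice (s1) sized to match the old pool's vdevs.
zpool create bigpool raidz3 c0t0d0s0 c1t0d0s0 c2t0d0s0 c3t0d0s0   # 2.5Tb slices
zpool create sidepool raidz1 c0t0d0s1 c1t0d0s1 c2t0d0s1           # 250Gb slices
```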
Hello fellow BOFH,
I also went by that title in a previous life ;)
2012-05-16 21:58, bofh wrote:
Err, why go to all that trouble? Replace one disk per pool. Wait for
resilver to finish. Replace next disk. Once all/enough disks have
been replaced, turn on autoexpand, and you're done.
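The swap-one-disk-at-a-time procedure bofh describes can be sketched as follows; the pool name `tank` and device names are invented for illustration:

```shell
# Replace a disk, wait for the resilver, repeat for each disk in the vdev.
zpool replace tank c0t1d0          # resilver onto the new 3Tb disk
zpool status tank                  # move on only after resilver completes
# ...repeat for the remaining disks of the vdev...
zpool set autoexpand=on tank       # let vdevs grow into the new capacity
zpool online -e tank c0t1d0        # or expand a device explicitly
```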
As
There's something going on then. I have 7x 3TB disk at home, in
raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
about 2.5 hours. I had done the resilvering as well, and that did not
take 15 hours/drive. Copying 3TBs onto 2.5 SATA drives did take more
than a day, but a 2.5
On Wed, 16 May 2012, Jim Klimov wrote:
Your idea actually evolved for me into another (#7?), which
is simple and apparent enough to be ingenious ;)
DO use the partitions, but split the 2.73Tb drives into a
roughly 2.5Tb partition followed by a 250Gb partition of
the same size as vdevs of the
bofh goodb...@gmail.com wrote:
There's something going on then. I have 7x 3TB disk at home, in
raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
about 2.5 hours. I had done the resilvering as well, and that did not
take 15 hours/drive. Copying 3TBs onto 2.5 SATA drives
2012-05-16 22:21, bofh wrote:
There's something going on then. I have 7x 3TB disk at home, in
raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
about 2.5 hours. I had done the resilvering as well, and that did not
take 15 hours/drive.
That is the critical moment ;)
The
2012-05-15 19:17, casper@oracle.com wrote:
Your old release of Solaris (nearly three years old) doesn't support
disks over 2TB, I would think.
(A 3TB disk is 3E12 bytes, the 2TB limit is 2^41 bytes, and the difference is around 800Gb)
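Casper's figures check out; a quick shell arithmetic sanity check (64-bit integer arithmetic assumed):

```shell
# A marketing "3TB" is 3*10^12 bytes; the legacy limit is 2^41 bytes.
disk=$((3 * 1000 * 1000 * 1000 * 1000))   # 3000000000000
limit=$((1 << 41))                        # 2199023255552
echo $((disk - limit))                    # prints 800976744448, i.e. ~800Gb
```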
While this was proven correct by my initial experiments,
it seems that
Hello all, I'd like some practical advice on migration of a
Sun Fire X4500 (Thumper) from aging data disks to a set of
newer disks. Some questions below are my own, others are
passed from the customer and I may consider not all of them
sane - but must ask anyway ;)
1) They hope to use 3Tb disks,
Urgent interrupt processed, I got back to my questions :)
Thanks to Casper for his suggestion - the box is scheduled
to reboot soon and I'll try a newer Solaris (oi_151a3 probably)
as well. UPDATE: Yes, oi_151a3 has seen all 2.73Tb of
the disk, so my old question is resolved: the original
Thumper (Sun
You forgot IDEA #6 where you take advantage of the fact that zfs can
be told to use sparse files as partitions. This is rather like your
IDEA #3 but does not require that disks be partitioned.
This opens up many possibilities. Whole vdevs can be virtualized to
files on (i.e. moved onto)
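Bob's sparse-file trick can be sketched like this; the paths and sizes are invented, and GNU `truncate -s` stands in for Solaris' `mkfile -n` (both create a file of the given size that consumes space only as data is written):

```shell
# Create sparse backing files to serve as vdevs.
truncate -s 1T /pool/vdev0 /pool/vdev1 /pool/vdev2    # mkfile -n 1t ... on Solaris
ls -ls /pool/vdev0                                    # allocated blocks stay near zero
# A temporary pool can then be built on the files (needs root and ZFS):
# zpool create staging raidz /pool/vdev0 /pool/vdev1 /pool/vdev2
# zpool replace staging /pool/vdev0 c0t1d0            # later: migrate onto real disks
```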