Just a note to pass on in case anyone runs into the same situation.
I have a Dell R510 that had been running just fine, up until the day I needed to
import a pool from a USB hard drive. I plug in the disk, check it with rmformat,
and try to import the zpool. And it just sits there for practically
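For anyone hitting the same thing, the sequence was roughly the following;
'usbpool' below is a placeholder for the actual pool name:

rmformat                  # list removable media and note the disk's c#t#d# name
zpool import              # scan attached devices for importable pools
zpool import usbpool      # this is the step that hangs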
'Edward Ned Harvey' wrote:
From: Henrik Johansen [mailto:hen...@scannet.dk]
The 10g models are stable - especially the R905's are real workhorses.
Would you generally consider all your machines stable now?
Can you easily pdsh to all those machines?
Yes - the only problem child has been 1
I'd like to see those docs as well.
All HW RAIDs are driven by software, of course - and software can be buggy.
I don't want to heat up the discussion about ZFS managed discs vs. HW raids,
but if RAID5/6 were really that bad, no one would use it anymore.
So… just post the link and I will take a
We got a R710 + 3 MD1000s running zfs, with intel 10GE network card.
There was a period of time when the R710 froze randomly, while we were running
the osol b12x release. I checked on Google and there were reports of freezes
caused by the new mpt driver in the b12x releases, which could be the cause. Changed
On 13 Oct 2010, at 18:37, Marty Scholes wrote:
The only thing that still stands out is that network operations (iSCSI and
NFS) to external drives are slow, correct?
Just for completeness, what happens if you scp a file to the three different
pools? If the results are the same as NFS
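A quick way to run that comparison; host and paths below are placeholders, and
I'm assuming one filesystem mounted per pool:

time scp /tmp/bigfile server:/pool1/
time scp /tmp/bigfile server:/pool2/
time scp /tmp/bigfile server:/pool3/

If all three crawl the same way NFS does, the pools are off the hook and it's
the network path.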
Sorry for the long post, but I know people trying to decide on hardware often
want to see details about what others are using.
I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am
starting to use.
I successfully transferred a deduped zpool with 1.x TB of files and 60 or so
a diff to list the file differences between snapshots
http://arc.opensolaris.org/caselog/PSARC/2010/105/mail
Dave
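For reference, the diff from that PSARC case is invoked like this; the dataset
and snapshot names are placeholders:

zfs diff tank/data@monday tank/data@tuesday

The output flags each path with M (modified), + (added), - (removed), or R (renamed).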
On 10/13/10 15:48, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of dirk schelfhout
Wanted to test
On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
I highly endorse mirrors for nearly all purposes.
Are you a member of BAARF?
http://www.miracleas.com/BAARF/BAARF2.html
:)
From: David Magda [mailto:dma...@ee.ryerson.ca]
On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
I highly endorse mirrors for nearly all purposes.
Are you a member of BAARF?
http://www.miracleas.com/BAARF/BAARF2.html
Never heard of it. I don't quite get it ... They want
On 14-Oct-10, at 3:27 AM, Stephan Budach wrote:
I'd like to see those docs as well.
All HW RAIDs are driven by software, of course - and software can be buggy.
It's not that the software 'can be buggy' - that's not the point here.
The point being made is that conventional RAID just
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 were really that bad, no one would use it
anymore.
It is. And there's no reason not
I had to upgrade zfs:
zfs upgrade -a
then
pfexec zfs set sharesmb=off data
pfexec zfs set sharesmb=on data
After this, zfs diff failed with the old snapshots, but it worked with newly
created snapshots.
Thanks Tim,
Dirk
I know that this is not necessarily the right forum, but the FreeBSD forum
hasn't been able to help me...
I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't
import my zpool. My computer says that no such pool exists, even though it can
be seen with the zpool status
On Thu, Oct 14, 2010 at 11:47 PM, Oskar oskars.ga...@gmail.com wrote:
I know that this is not necessarily the right forum, but the FreeBSD forum
hasn't been able to help me...
I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't
import my zpool. My computer says that
Sounding more and more like a networking issue - are the network cards set up
in an aggregate? I had some similar issues on GbE where there was a mismatch
between the aggregate settings on the switches and the LACP settings on the
server. Basically the network was wasting a ton of time
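On the OpenSolaris side, a quick sanity check of the aggregate (link names are
whatever dladm reports on your box):

dladm show-aggr       # aggregation, policy, and LACP mode
dladm show-aggr -L    # per-port LACP state; every port should show the same state
dladm show-link       # confirm the ports are actually in the aggr

and compare that against the port-channel/LACP config on the switch.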
I've had a few people sending emails directly suggesting it might have
something to do with the ZIL/SLOG. I guess I should have said that the issue
happens both ways, whether we copy TO or FROM the Nexenta box.
Our next test is to try with a different kind of HBA,
we have a Dell H800 lying around.
ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
issue disappeared. Now, I'd rather not use those controllers because they
don't have a JBOD mode. We have no choice but to make
rewar...@hotmail.com said:
ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
issue disappeared. Now, I'd rather not use those controllers because they
don't have a JBOD mode. We have no choice but to make individual RAID0
volumes for each disk, which means we need
Earlier you said you had eliminated the ZIL as an issue, but one difference
between the Dell H800 and the LSI HBA is that the H800 has an NV cache (if you
have the battery backup present). A very simple test would be, when things are
running slow, to try disabling the ZIL temporarily, to see
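For what it's worth, the usual way to run that test: on builds with the sync
property (roughly b140 and later) it's a per-dataset setting, on older builds
it's the zil_disable tunable. The dataset name below is a placeholder, and
neither should be left set in production:

zfs set sync=disabled tank/test    # newer builds; undo with: zfs set sync=standard tank/test
echo zil_disable/W0t1 | mdb -kw    # older builds; takes effect on the next mount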
I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
months). I recently added 6 new drives to one of my servers and I would like to
create a new RAIDZ2 pool called 'marketData'.
I figured the command to do this would be something like:
zpool create marketData raidz2
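The full form would be along these lines; the device names below are
placeholders for the whole-disk c#t#d# names (no s0 slice):

zpool create marketData raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool status marketData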
On Oct 14, 2010, at 5:08 PM, Derek G Nokes wrote:
I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
months). I recently added 6 new drives to one of my servers and I would like
to create a new RAIDZ2 pool called 'marketData'.
I figured the command to do this would
Derek,
I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
months). I recently added 6 new drives to one of my servers and I would like
to create a new RAIDZ2 pool called 'marketData'.
I figured the command to do this would be something like:
zpool create
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian D
ok... we're making progress. After swapping the LSI HBA for a Dell
H800 the issue disappeared. Now, I'd rather not use those controllers
because they don't have a JBOD mode. We have
On Thu, Oct 14, 2010 at 09:54:09PM -0400, Edward Ned Harvey wrote:
If you happen to find that MegaCLI is the right tool for your hardware, let
me know, and I'll paste a few commands here, which will simplify your life.
When I first started using it, I found it terribly
Thank you both. I did try without specifying the 's0' portion before posting
and got the following error:
r...@dnokes.homeip.net:~# zpool create marketData raidz2 c0t5000C5001A6B9C5Ed0
c0t5000C5001A81E100d0 c0t5000C500268C0576d0 c0t5000C500268C5414d0
c0t5000C500268CFA6Bd0
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Wilkinson, Alex
Can you paste them anyway?
Note: If you have more than one adapter, I believe you can specify -aALL in
the commands below, instead of -a0
I have 2 disks (slots 4 5) that
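Not necessarily Ed's exact list, but the MegaCLI incantations I'd expect here;
flags vary a bit between MegaCLI versions, so treat these as a sketch:

MegaCli -PDList -aALL          # list all physical disks (enclosure/slot, state)
MegaCli -LDInfo -Lall -aALL    # list the logical volumes already defined
MegaCli -CfgEachDskRaid0 WB RA Direct NoCachedBadBBU -a0   # one RAID0 volume per unconfigured disk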
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Derek G Nokes
r...@dnokes.homeip.net:~# zpool create marketData raidz2
c0t5000C5001A6B9C5Ed0 c0t5000C5001A81E100d0 c0t5000C500268C0576d0
c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0