If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B. However, I just
tried that experiment on a test pool and it didn't work. I will
investigate further
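For reference, the experiment was roughly this (device names are hypothetical):

zpool create testpool mirror c1t0d0 c1t1d0
zpool detach testpool c1t1d0
zpool import    # scan for importable pools -- the detached disk does not show up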
Hi Dominic,
I've built a home fileserver using ZFS and I'd be happy to help. I've written
up my experiences, from the search for suitable devices through researching
compatible hardware, and finally configuring it to share files.
I also built a second box for backups, again using ZFS, and used
Jeff, thank you very much for taking the time to look at this.
My entire pool consisted of a single mirror of two slices on different disks,
A and B. I attached a third slice on disk C, waited for the resilver, and then
detached it. Now disks A and B have burned and I have only disk C at hand.
bbr
This
Rick, I have the same motherboard on my backup machine and got 48MBytes/sec
sustained on a 650GB transfer (but that was using iSCSI), so I suggest two
things:
1. Make sure you are using the latest stable (i.e. not beta) BIOS update. You
can use a USB thumbdrive to install it, and can save
Urgh. This is going to be harder than I thought -- not impossible,
just hard.
When we detach a disk from a mirror, we write a new label to indicate
that the disk is no longer in use. As a side effect, this zeroes out
all the old uberblocks. That's the bad news -- you have no uberblocks.
The
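For anyone following along, you can see the label that detach writes by
dumping it with zdb (device name is hypothetical):

zdb -l /dev/dsk/c1t1d0s0
# Prints the four on-disk labels; after a detach they no longer tie the
# disk to the pool, and the uberblock slots behind them have been zeroed.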
Hello zfs-discuss,
S10U4+patches, SPARC
If I attach a disk to a vdev in a pool to get a mirrored configuration,
then during the resilver zpool iostat 1 reports only reads being done
from the pool and basically no writes. If I do zpool iostat -v 1,
then I can see it is writing to the new device
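For reference, the two views being compared:

zpool iostat 1       # pool-wide totals -- shows the resilver reads
zpool iostat -v 1    # per-vdev breakdown -- the writes to the new device show up here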
If I understand you correctly, the steps to follow are:
1. read each sector (is dd bs=512 count=1 skip=n enough?)
2. decompress it (are there any tools implementing the lzjb algorithm?)
3. size = 1024?
4. the structure might be objset_phys_t?
5. take the oldest birth time as the root block
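For step 1, a minimal sketch under those assumptions (device name is
hypothetical; 1024-byte blocks to match the size guess above):

n=12345    # hypothetical block number
dd if=/dev/dsk/c2t0d0s0 bs=1024 skip=$n count=1 2>/dev/null | od -A d -t x1 | head

For step 2, I know of no standalone lzjb tool; the reference implementation in
usr/src/uts/common/fs/zfs/lzjb.c is small, so wrapping lzjb_decompress() in a
little user-land program is probably the quickest route.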
Hi,
ZFS won't boot on my machine.
I discovered that the lu manpages are there, but not
the new binaries.
So I tried to set up ZFS boot manually:
zpool create -f Root c0t1d0s0
lucreate -n nv88_zfs -A 'nv88 finally on ZFS' -c nv88_ufs -p Root -x /zones
zpool set bootfs=Root/nv88_zfs
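One thing to check: zpool set takes the pool name as its final argument, so
the last command as shown would fail. Assuming the pool is Root, the full
form would be:

zpool set bootfs=Root/nv88_zfs Root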
Hey, hi Rick!
The obvious thing that is wrong is the network being recognised as 100Mbps and
not 1000. Hopefully, the read/write speeds will fix themselves once the network
problem is fixed.
As it's the same cable you had working previously at 1000Mbps on your other
computer and the same
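A quick way to confirm what the link actually negotiated (the interface name
nge0 is hypothetical):

dladm show-dev         # lists every NIC with its current state, speed and duplex
dladm show-dev nge0    # or query a single interface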
Is there anyone who has successfully put together a high-powered mini-itx ZFS
box that would be willing to post their system specs?
I'm eyeballing the KI690-AM2...
http://www.albatron.com.tw/english/product/mb/pro_detail.asp?rlink=Overview&no=239
...but am having a difficult time locating it and
Rick,
Glad it worked ;-)
Now if I were you, I would not upgrade the BIOS unless you really want/need to.
I look forward to seeing your revised speed test data for reads and writes with
the gigabit network speed working correctly. I think it should make a little
difference -- I'm guessing
Folks,
How can I find out a zpool's id without using zpool import? zpool list
and zpool status do not have an option for it as of Solaris 10U5. Any back door
to grab this property would be helpful.
Thank you
Ajay
On Tue, 29 Apr 2008, Krzys wrote:
I am not sure; the system was fine when I originally built it and when I first
started using ZFS, but now it is horribly slow. I do believe that the number
of snapshots I have is causing it.
This seems like a bold assumption without supportive
This is present as the 'guid' property in Solaris Nevada. If you're on
a previous release, you can do one of the following:
- 'zdb -l device in pool' and look for the 'pool_guid' property (if
you're using whole disks you'll still need the s0 slice).
- '::walk spa | ::print spa_t spa_name
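Putting those together, the options look roughly like this (pool and device
names are hypothetical):

zpool get guid tank                           # Nevada and later
zdb -l /dev/dsk/c0t0d0s0 | grep pool_guid     # older releases -- read it off the label
echo '::walk spa | ::print spa_t spa_name' | mdb -k   # kernel view of imported pools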
It seems to be a heck of a lot easier to just use zpool import without
the -a
For example, I am trying to copy a 1.4G file from my /var/mail to the /d/d1
directory, which is a ZFS file system on the mypool2 pool. It takes 25 minutes
to copy it, while copying it to the tmp directory takes only a few seconds.
What's wrong with this? Why does it take so long to copy that file to my zfs
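A couple of quick checks before blaming the pool itself (mypool2 is taken from
the post; the rest are generic diagnostics):

zfs list -t snapshot | wc -l    # how many snapshots actually exist
zpool status -v mypool2         # errors, or a scrub/resilver in progress?
iostat -xn 5                    # is one device pegged busy during the copy?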
I wonder how hard it would be to get Solaris running on the new ReadyNAS.
http://www.netgear.com/Products/Storage/ReadyNASPro.aspx
Wes Felter - [EMAIL PROTECTED]
I have a problem on one of my systems with zfs. I used to have a zpool created
with 3 LUNs on a SAN. I did not have to put any raid on it since it was
already RAID-protected on the SAN. Anyway, the server rebooted and I cannot
see my pools.
When I try to import it, it fails. I am using EMC
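A couple of things worth trying (pool name is hypothetical). With multipathed
SAN LUNs the device names can change across a reboot, so pointing zpool import
at the right device directory sometimes helps:

zpool import                     # list whatever the default /dev/dsk scan finds
zpool import -d /dev/dsk mypool  # or aim -d at the directory holding the EMC pseudo devices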
Hi Rick,
I have the same problem as you (sorry for my English).
I have installed the same OS on a Gigabyte motherboard. I wanted to build a
NAS with the nice ZFS.
First I tried the new SMB kernel implementation: file navigation (on Windows)
and streaming were too slow. File transfer was
FYI -
If you're doing anything with CIFS and performance, you'll want this
fix:
6686647 smbsrv scalability impacted by memory management issues
Which was putback into build 89 of Nevada.
- Eric
On Thu, Apr 24, 2008 at 09:46:04AM -0700, Rick wrote:
Recently I've installed SXCE nv86 for the
How do you ascertain the current zfs vdev cache size (e.g.
zfs_vdev_cache_size) via mdb or kstat or any other cmd?
Thanks in advance,
Brad
--
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr
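One way that should work on that vintage is to read the tunable straight out
of the running kernel; the related kstat reports hits and misses, not the size:

echo 'zfs_vdev_cache_size/D' | mdb -k    # /D assumes a 32-bit int; use /E if it's 64-bit on your build
kstat -m zfs -n vdev_cache_stats         # vdev cache hit/miss counters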
Thank you Eric. This is the second time someone has mentioned this to me. I
imagine it's a
Hi Rick,
So just to verify, you never managed to get more than 10 MBytes/sec across the
link due to the network only giving you a 100 Mbps connection?
Simon
Dominic Kay wrote:
Hi
Firstly, apologies for the spam if you got this email via multiple aliases.
I'm trying to document a number of common scenarios where ZFS is used
as part of the solution, such as an email server, home server, RDBMS and
so forth, but taken from real implementations where
So just to verify, you never managed to get more than
10 MBytes/sec across the link due to the network only
giving you a 100 Mbps connection?
Hi Simon,
I'll try to clear this up. Sorry for the confusion.
The server the Solaris M2N-E is replacing had 2 NICs. When I removed the
physical box,
Hi,
I have a pool /zfs01 with two sub file systems, /zfs01/rep1 and /zfs01/rep2. I
used zfs share to make all of these mountable over NFS, but clients have
to mount either rep1 or rep2 individually. If I try to mount /zfs01 it shows
directories for rep1 and rep2, but none of their
On Tue, 29 Apr 2008, Tim Wood wrote:
but that makes it sound like this issue was resolved by changing the
NFS client behavior in Solaris. Since my NFS client machines are
going to be Linux machines, that doesn't help me any.
Yes, Solaris 10 does nice helpful things that other OSs don't do.
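For Linux clients without that mirror-mount behavior, each ZFS filesystem is a
separate NFS export, so the workaround is to mount them one by one (server
name is hypothetical):

mount server:/zfs01/rep1 /mnt/rep1
mount server:/zfs01/rep2 /mnt/rep2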
I did a fresh install of Nevada. I have two zpools that contain the
devices c0t0d0s4 and c0t1d0s4. I couldn't find a way to attach the
missing device without the pool being imported. Any help would be
appreciated.
bash-3.2# zpool import
  pool: nfs-share
    id: 6871731259521181476
 state:
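As far as I know, zpool attach only operates on an imported pool, so the pool
has to come in first -- by name or by the id shown above; the attach line
below is just the general form:

zpool import 6871731259521181476
zpool attach nfs-share existing-device new-device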