If you're getting nobody:nobody on an NFS mount, you have an NFS version
mismatch (usually between v3 and v4). To work around this, use the following
mount options on the client:
hard,bg,intr,vers=3
e.g.:
mount -o hard,bg,intr,vers=3 server:/pool/zfs /mountpoint
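The workaround above can be scripted. The server path and mountpoint here are just the placeholders from the example, and the script only prints the command (an actual mount needs root and a live NFS server):

```shell
# Build the NFSv3 mount command from the workaround above.
# vers=3 sidesteps NFSv4 id mapping, which is what maps owners to
# nobody when the idmap domains on client and server disagree.
opts="hard,bg,intr,vers=3"
src="server:/pool/zfs"   # placeholder from the example
mnt="/mountpoint"        # placeholder from the example
echo "mount -o $opts $src $mnt"   # print instead of running
```

On a real system, drop the `echo` and run the command as root, then check ownership with `ls -n` on the mountpoint.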
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
On 08 April, 2010 - Cindy Swearingen sent me these 2,6K bytes:
On Thu, 8 Apr 2010, Erik Trimble wrote:
While that's great in theory, there's getting to be a consensus that 1TB
7200RPM 3.5" SATA drives are really going to be the last usable capacity.
Agreed. The 2.5 form factor is rapidly emerging. I see that
enterprise 6-Gb/s SAS drives are available
On Apr 8, 2010, at 8:52 AM, Bob Friesenhahn wrote:
I doubt that 1TB (or even 1.5TB) 3.5 disks are being
On 12 mar 2010, at 03.58, Damon Atkins wrote:
...
Unfortunately DNS spoofing exists, which means forward lookups can be poisoned.
And IP address spoofing, and...
The best (maybe only) way to make NFS secure is NFSv4 and Kerberos (krb5) used together.
Amen!
DNS is NOT an authentication system!
IP is NOT
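As a concrete sketch of the NFSv4 + Kerberos combination being advocated, the mount looks like the following (server and export names hypothetical; the command is only echoed here since it needs a configured Kerberos realm and nfs/ service principals):

```shell
# Kerberized NFSv4 mount (hypothetical server/export names).
# sec=krb5  -> Kerberos authentication only
# sec=krb5i -> adds per-request integrity checking
# sec=krb5p -> adds privacy (encryption on the wire)
cmd="mount -o vers=4,sec=krb5p server:/export /mnt"
echo "$cmd"   # would be run as root on a real client
```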
We're starting to grow our ZFS environment and really need to start
standardizing our monitoring procedures.
OS tools are great for spot troubleshooting and sar can be used for
some trending, but we'd really like to tie this into an SNMP-based
system that can generate graphs for us (via RRD or
Ray,
Here is my short list of Performance Metrics I track on 7410 Performance
Rigs via 7000 Analytics.
Cheers,
Joel.
m:analytics datasets> ls
Datasets:
DATASET      STATE   INCORE  ONDISK  NAME
dataset-000  active   1016K   75.9M  arc.accesses[hit/miss]
dataset-001  active    390K   37.9M
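For the SNMP/RRD route raised in the question, the glue usually ends up looking something like this: pull ZFS ARC counters from `kstat -p` and feed them to `rrdtool update`. Canned sample output is used below so the parsing can be shown without a Solaris box, and the rrd file name is made up:

```shell
# Parse `kstat -p` style output (statistic<TAB>value) into an rrdtool
# update line. On a live Solaris system, replace $sample with:
#   sample=$(kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses)
sample=$(printf 'zfs:0:arcstats:hits\t123456\nzfs:0:arcstats:misses\t7890\n')
hits=$(printf '%s\n' "$sample" | awk '$1 ~ /:hits$/ {print $2}')
misses=$(printf '%s\n' "$sample" | awk '$1 ~ /:misses$/ {print $2}')
echo "rrdtool update arc.rrd N:$hits:$misses"
```

Run from cron at the RRD's step interval, this is enough to start graphing ARC hit/miss trends.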
-20100408, although
it's very strange to do that so far.
It would also be possible to use ugly post-Unix directory layouts, e.g.
/pkg/marker/usr/bin and /pkg/marker/etc and
/pkg/marker/var/db/pkg, and then make /pkg/marker into a ZFS filesystem
that could be snapshotted and rolled back. It is odd in pkgsrc world
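A sketch of that layout using plain zfs commands (pool and marker names hypothetical); the commands are echoed rather than run, since they need a live pool:

```shell
# One dataset per package tree, so each can be snapshotted and
# rolled back independently of the rest of the system.
for cmd in \
    "zfs create -p tank/pkg/marker" \
    "zfs snapshot tank/pkg/marker@pre-upgrade" \
    "zfs rollback tank/pkg/marker@pre-upgrade"
do
    echo "$cmd"   # would be run for real on the pkgsrc host
done
```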
rs == Ragnar Sundblad ra...@csc.kth.se writes:
rs use IPSEC to make IP address spoofing harder.
IPsec with channel binding is a win, but not until SAs are offloaded to
the NIC and all NICs can do IPsec AES at line rate. Until this
happens you need to accept there will be some protocols
Hi Richard
Thanks for your comments. OK, ZFS is COW, I understand, but this also means
a waste of valuable space on my L2ARC SSD device; more than 60% of the space
is consumed by COW! I do not get it?
On Sat, Apr 3, 2010 at 11:35 PM, Richard Elling richard.ell...@gmail.com wrote:
On Apr 1,
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
Daniel Carosone wrote:
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones.
While that's great in theory, there's getting to be a consensus that 1TB
7200RPM 3.5" SATA drives are
On Fri, 2010-04-09 at 08:07 +1000, Daniel Carosone wrote:
On 08 April, 2010 - Abdullah Al-Dahlawi sent me these 12K bytes:
The rest can and
mingli liming...@gmail.com writes:
Thanks, Erik, and I will try it, but the new question is that the root
of the NFS server is mapped as nobody at the NFS client.
For this issue, I set up a new test NFS server and NFS client, and
with the same option, in this test environment, the file owner
On Apr 8, 2010, at 3:23 PM, Tomas Ögren wrote:
On 8 apr 2010, at 23.21, Miles Nordin wrote:
Do the following ZFS stats look ok?
::memstat
Page Summary      Pages    MB  %Tot
Kernel           106619   832   28%
ZFS File Data     79817   623   21%
Anon              28553   223    7%
Exec and libs      3055    23    1%
Page cache        18024   140    5%
Free (cachelist)   2880    22    1%
Free (freelist)  146309
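The MB column follows from the page count: these numbers are consistent with 8 KB pages (SPARC), i.e. MB = pages * 8 / 1024. A quick check against the ZFS File Data row:

```shell
# Verify one ::memstat row: 79817 pages of 8 KB each.
pages=79817
mb=$((pages * 8 / 1024))
echo "${mb} MB"
```

This agrees with the 623 MB shown for ZFS File Data; the same arithmetic works for the other rows.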
On 04/ 9/10 10:48 AM, Erik Trimble wrote:
Well
The problem is (and this isn't just a ZFS issue) that resilver and scrub
times /are/ very bad for 1TB disks. This goes directly to the problem
of redundancy - if you don't really care about resilver/scrub issues,
then you really shouldn't
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote:
Well
To be clear, I don't disagree with you; in fact for a specific part of
the market (at least) and a large part of your commentary, I agree. I
just think you're overstating the case for the rest.
The problem is (and this
Well, I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community Edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 x 7 RAIDZ2 vdevs in one
On Thu, 8 Apr 2010, Jason S wrote:
One thing I have noticed that seems a little different from my
previous hardware RAID controller (Areca) is that the data is not
constantly being written to the spindles. For example, I am copying
some large files to the array right now (approx 4 GB a file) and
On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
As for error rates, this is something zfs should not be afraid
of. Indeed, many of us would be happy to get drives with less internal
ECC overhead and complexity for greater capacity, and tolerate the
resultant higher error rates,
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
I thought I might chime in with my thoughts and experiences. For starters, I
am very new to both OpenSolaris and ZFS, so take anything I say with a grain of
salt. I have a home media server / backup server very similar to what the OP
is looking for. I am currently using 4 x 1TB and 4 x 2TB
On Apr 8, 2010, at 9:06 PM, Daniel Carosone wrote: