dudekula mastan wrote:
Hi ALL,
Is it possible to install Solaris 10 on an HP VISUALIZE XL-Class server?
The ZFS discussion alias is probably not the best place to ask this.
In general, the way to find out about Solaris support on a particular
hardware platform is to look at the
Anton B. Rang writes:
If your database performance is dominated by sequential reads, ZFS may
not be the best solution from a performance perspective. Because ZFS
uses a write-anywhere layout, any database table which is being
updated will quickly become scattered on the disk, so that
What happens is that /home/thomas/zfs gets mounted and then the
automounter starts. (Or /home/thomas is found missing and then
the zfs mount is not completed.)
Probably requires legacy mount point.
Casper
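A minimal sketch of Casper's suggestion, assuming the dataset is named tank/home/thomas/zfs (all names here are hypothetical):

```shell
# Give the dataset a legacy mount point so ZFS stops auto-mounting it;
# mounting is then controlled like UFS, via /etc/vfstab or by hand.
zfs set mountpoint=legacy tank/home/thomas/zfs

# Example /etc/vfstab entry (mounted after /home is in place):
#   tank/home/thomas/zfs  -  /home/thomas/zfs  zfs  -  yes  -

# Or mount it manually once the automounter has /home/thomas up:
mount -F zfs tank/home/thomas/zfs /home/thomas/zfs
```

This sidesteps the ordering race between the ZFS auto-mount and the automounter.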
I'm experiencing this same
Ok, so I'm planning on wiping my test pool that seems to have problems
with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
This message posted from opensolaris.org
on a blade 1500...
bash-3.00# zfs set sharenfs=rw pool
cannot set sharenfs for 'pool': out of space
bash-3.00# zpool iostat pool
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool
Jakob Praher wrote:
hi all,
I'd like to build a solid storage server using zfs and opensolaris. The
server more or less should have a NAS role, thus using nfsv4 to export
the data to other nodes.
...
what would be your reasonable advice?
First of all, figure out what you need in terms of
BTW, I'm also unable to export the pool -- same error.
Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello James,
Saturday, November 18, 2006, 11:34:52 AM, you wrote:
JM as far as I can see, your setup does not meet the minimum
JM redundancy requirements for a Raid-Z, which is 3 devices.
JM Since you only have 2 devices you are out on a limb.
Actually only two disks for raid-z is fine and you
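For reference, zpool does accept a two-device raid-z vdev, though its usable capacity is the same as a mirror's; a quick sketch (device names hypothetical):

```shell
# Create a raid-z pool from just two disks:
zpool create zmir raidz c0t0d0 c0t1d0

# Verify the layout:
zpool status zmir
```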
Nevermind:
# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error
Jim
You are likely hitting:
6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL
Which was fixed in build 46 of Nevada. In the meantime, you can remove
/etc/zfs/zpool.cache manually and reboot, which will remove all your
pools (which you can then re-import on an individual
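A hedged sketch of that workaround, using the pool name from earlier in the thread:

```shell
# On builds before snv_46, clear the cached pool list and reboot.
# WARNING: this removes *all* pools from the system's view.
rm /etc/zfs/zpool.cache
reboot

# After the reboot, see which pools are available for import:
zpool import

# Then re-import the healthy ones individually, e.g.:
zpool import zmir
```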
Robert Milkowski wrote:
Hello eric,
Saturday, December 9, 2006, 7:07:49 PM, you wrote:
ek Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind of storage are you
using (feel free to send privately if you need to)?
This worked.
I've restarted my testing but I've been fdisking each drive before I
add it to the pool, and so far the system is behaving as expected
when I spin a drive down, i.e., the hot spare gets automatically used.
This makes me wonder if it's possible to ensure that the forced addition of
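The working procedure described above, sketched with hypothetical device names:

```shell
# Wipe the disk's label before reuse (x86):
fdisk -B /dev/rdsk/c0t2d0p0

# Add the disk to the pool as a hot spare:
zpool add zmir spare c0t2d0

# The spare shows as AVAIL until a device faults, at which point
# it should be pulled in automatically:
zpool status zmir
```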
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just not do that :-)
Ko,
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
So you had a pool and were sharing filesystems over NFS, NFS clients had
active mounts, you removed
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just not do that :-)
Don't do that. The same should happen if you umount
Gino Ruopolo wrote:
Hi All,
we have some ZFS pools in production with 100s of filesystems and
1000s of snapshots on them. Now we do backups with zfs send/receive
with some scripting, but I'm searching for a way to mirror each zpool
to another one for backup purposes (so including all
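One possible approach, on builds that support recursive send (-R) — a sketch only, with hypothetical pool names:

```shell
# Take a recursive snapshot of every filesystem in the pool:
zfs snapshot -r tank@backup1

# Replicate the whole pool, snapshots included, to the backup pool:
zfs send -R tank@backup1 | zfs receive -F -d backuppool

# Subsequent runs only need the changes since the last snapshot:
zfs snapshot -r tank@backup2
zfs send -R -i tank@backup1 tank@backup2 | zfs receive -F -d backuppool
```

On builds without -R this has to be scripted per filesystem, much as you are doing now.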
A while back we had a Sun engineer come to our office and talk about the
benefits of ZFS. I asked him the question "Can the uber block become corrupt,
and what happens if it does?", to which he did not have the answer but swore
to me that he would get it to me. I still haven't gotten that answer.
IANA ZFS guru, but I have read explanations like this:
When ZFS reads in the uberblock, it computes the uberblock's checksum and compares
it against the stored checksum for that block. If they don't match, it uses
another copy of the uberblock.
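If I remember the zdb flags correctly, the currently active uberblock (with its transaction group and timestamp) can be inspected like this — treat the exact flag as an assumption:

```shell
# Dump the active uberblock of the pool (read-only inspection):
zdb -u zmir
```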
Ross Hosman wrote:
A while back we had a Sun
A while back we had a Sun engineer come to our office and talk about
the benefits of ZFS. I asked him the question "Can the uber block
become corrupt, and what happens if it does?", to which he did not
have the answer but swore to me that he would get it to me. I still
haven't gotten that
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE Dale Ghent wrote:
Similar to UFS's onerror mount option, I take it?
RE Actually, it would be interesting to see how many customers change the
RE onerror setting. We have some data, just need more days in the hour.
Sometimes
Hello Ben,
Monday, December 11, 2006, 9:34:18 PM, you wrote:
BR Robert Milkowski wrote:
Hello eric,
Saturday, December 9, 2006, 7:07:49 PM, you wrote:
ek Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind
Hello Darren,
Tuesday, December 12, 2006, 2:10:30 AM, you wrote:
A while back we had a Sun engineer come to our office and talk about
the benefits of ZFS. I asked him the question "Can the uber block
become corrupt, and what happens if it does?", to which he did not
have the answer but swore
DD To reduce the chance of it affecting the integrity of the filesystem,
DD there are multiple copies of the UB written, each with a checksum and a
DD generation number. When starting up a pool, the newest generation copy
DD that checks properly will be used. If the import can't find any
BR Yes, absolutely. Set var in /etc/system, reboot, system come up. That
BR happened almost 2 months ago, long before this lock insanity problem
BR popped up.
For the archives, a high level of lock activity can always be a problem.
The worst cases I've experienced were with record locking over
Hi Everybody,
I have some problems with a Solaris 10 installation.
After installing from the first CD, I removed the CD from the CD-ROM drive,
but after that the machine keeps rebooting again and again. It does not ask
for the second CD to install.
If you have any idea, please tell me.
Thanks
Robert Milkowski wrote:
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE Dale Ghent wrote:
Similar to UFS's onerror mount option, I take it?
RE Actually, it would be interesting to see how many customers change the
RE onerror setting. We have some data, just need more
On 12/12/2006, at 8:48 AM, Richard Elling wrote:
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. Should I file this as a bug, or
should I just not do that :-)