Since there is no answer yet, here's a simpler(?) question:
Why does zpool think that I have two c2d0 devices?
Even if all disks are offline, zpool still lists c2d0 twice instead of c2d0 and
c3d0.
It seems that a logical name is being confused with a physical one, or something...
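For what it's worth, one way to check whether those two entries really are the same device is to compare what the logical names resolve to and what the on-disk labels claim. This is only a sketch; the device names come from the report above and the s0 slice is a guess:

  # /dev/dsk names are symlinks into the physical /devices tree, so two
  # genuinely different disks must resolve to two different physical paths.
  ls -l /dev/dsk/c2d0s0 /dev/dsk/c3d0s0

  # What the pool currently believes it contains:
  zpool status

  # Each vdev label records the device path and devid it was last seen
  # under, which is where a stale or duplicated name would show up.
  zdb -l /dev/dsk/c2d0s0
  zdb -l /dev/dsk/c3d0s0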
Does copy-on-write happen every time any data block of ZFS is modified?
Yes. (Data block or meta-data block, with the sole exception of the set of
überblocks.)
Hi,

Also, where exactly is COWed data written?

I'm not quite sure what you're asking here. Data, whether newly written or
copy-on-write, goes to a newly allocated block, which may reside on any
vdev, and will be spread across devices if using RAID.
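If it helps to see that happen, here is a rough way to watch it on a scratch dataset. The pool and dataset names are invented, it needs root, and zdb's output is not a stable interface, so treat this as a sketch rather than a recipe:

  # Create a throwaway filesystem and write one 128K block to a test file.
  # "tank" and "tank/cowtest" are made-up names; run as root.
  zfs create tank/cowtest
  dd if=/dev/urandom of=/tank/cowtest/f bs=128k count=1
  sync

  # Dump the file's on-disk layout; the object number zdb wants is the
  # file's inode number. Note the data block address in the output.
  zdb -ddddd tank/cowtest $(ls -i /tank/cowtest/f | awk '{print $1}')

  # Overwrite the same offset in place (no truncate) and dump it again:
  # the block address changes, because the rewrite went to a new allocation.
  dd if=/dev/urandom of=/tank/cowtest/f bs=128k count=1 conv=notrunc
  sync
  zdb -ddddd tank/cowtest $(ls -i /tank/cowtest/f | awk '{print $1}')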
sudarshan sridhar wrote:
> I'm not quite sure what you're asking here. Data, whether newly written or
> copy-on-write, goes to a newly allocated block, which may reside on any
> vdev, and will be spread across devices if using RAID.

My exact doubt is, if COW is the default behavior of ZFS, then does
On Sun, 6 Jan 2008, James C. McPherson wrote:
Al Hopper wrote:
...
It's not recommended practice to modify the zone config files directly (bad
boy James!).
Bad boy Al for making an unwarranted assumption about what
I have or have not done!
Whoops!
While configuring the zone you can do
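Presumably that refers to zonecfg. A minimal sketch of going through zonecfg rather than hand-editing the files under /etc/zones, assuming the usual case on this list of delegating a ZFS dataset; the zone and dataset names are made up:

  # Delegate a ZFS dataset to an existing zone, then reboot the zone so the
  # change takes effect. "myzone" and "tank/delegated" are example names.
  zonecfg -z myzone 'add dataset; set name=tank/delegated; end; verify; commit'
  zoneadm -z myzone reboot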
Hi,
I have a strange problem with a zfs filesystem.
zpool scrub stuff reports no errors.
[16:50]charon:...kaputt/Crossroads# pwd
/stuff/backups/kaputt/Crossroads
[16:51]charon:...kaputt/Crossroads# ls
01 - Introspection (Crossroads by Mind.In.A.Box).flac
[...]
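One note on a report like the one above: a clean scrub only says no checksum errors were found, while zpool status -v also lists any files the pool has recorded permanent errors against. The pool name "stuff" is taken from the paths shown:

  # Verbose pool status, including the list of files with known permanent
  # errors, if the pool has recorded any.
  zpool status -v stuff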
Peter Braam's talk, which has more information related to ZFS/Lustre FS
integration:
https://hpc.sun.com/blog/richbruecknersuncom/video-lustre-file-system-presented-sun-hpc-consortium-reno
ZFS/DMU benchmarks:
https://mail.clusterfs.com/pipermail/lustre-announce/2007-November/000147.html
Rayson
Hi,
Not sure if it's the case here. However, I've seen "Value too
large for defined data type" errors on systems which had the date (year)
set incorrectly.
On 1/7/08, Arne Schwabe [EMAIL PROTECTED] wrote:
> Hi,
> I have a strange problem with a zfs filesystem.
> zpool scrub stuff reports no errors.
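For reference, that error is errno EOVERFLOW: a 32-bit ls gets it back from stat() when some value on the file, such as its size or a timestamp, does not fit the 32-bit structure, which is why a bogus year can trigger it. A quick check, sketched against the file name from the listing above; the truss syscall names are the 32-bit Solaris ones:

  # Is the system clock sane?
  date

  # Solaris ls -E prints full timestamps, so a bogus year stands out.
  ls -E "01 - Introspection (Crossroads by Mind.In.A.Box).flac"

  # Trace the stat family of calls ls makes and look for EOVERFLOW returns.
  truss -t stat,stat64,lstat64 ls 2>&1 | grep EOVERFLOW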
We had that with NetApps, and added this to /etc/system:

  set nfs:nfs_allow_preepoch_time=1

But that might be entirely unrelated.
Lund
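If it helps, the current value of that tunable can be read from the live kernel; a sketch, assuming the nfs module is loaded so the symbol resolves:

  # Print the nfs module's nfs_allow_preepoch_time variable as a decimal.
  echo 'nfs`nfs_allow_preepoch_time/D' | mdb -k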
Sengor wrote:
> Hi,
> Not sure if it's the case here. However, I've seen "Value too
> large for defined data type" errors on systems which had the date (year)
> set