I am questioning these recommendations to increase my understanding.

--- opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

>From: Edward Ned Harvey <opensolarisisdeadlongliveopensola...@nedharvey.com>
>> From:  Yaverot

>> rpool remains 1% inuse. tank reports 100% full (with 1.44G free), 

>I recommend:
>When creating your new pool, use slices of the new disks, which are 99% of
>the size of the new disks instead of using the whole new disks.  Because
>this is a more reliable way of avoiding the problem "my new replacement disk
>for the failed disk is slightly smaller than the failed disk and therefore I
>can't replace."

1. While performance isn't my top priority, doesn't using slices instead of whole disks make a significant performance difference?
2. Doesn't the snv_134 I'm running already account for small variances between these nominally-identical disks?
3. The market refuses to sell disks under $50, so by the time I need a replacement I won't be able to buy drives of 'matching' capacity anyway.
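
For reference, this is what I understand the slice suggestion to mean in practice (the device names c1t2d0/c1t3d0 and the mirror layout are just placeholders I'm making up):

    # In format(1M), select each new disk, use partition -> modify to make
    # slice 0 roughly 99% of the disk, and write the label.
    format

    # Then build the pool from the slices rather than the whole disks:
    zpool create newtank mirror c1t2d0s0 c1t3d0s0

Is that right, or is there a less manual way to hold back that last 1%?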

>I also recommend:
>In every pool, create some space reservation.  So when and if you ever hit
>100% usage again and start to hit the system crash scenario, you can do a
>zfs destroy (snapshot) and delete the space reservation, in order to avoid
>the system crash scenario you just witnessed.  Hopefully.

1. Why would tank being practically full affect management of other pools and 
start the crash scenario I encountered? rpool & rpool/swap remained at 1% use; 
the apparent trigger was doing a "zpool destroy others", which is neither the 
rpool the system runs from, nor tank.

2. How can a zfs destroy ($snapshot) complete when both "zpool destroy" and 
"zfs list" fail to complete? 

3. Assuming I want to do such an allocation, is this done with quota & 
reservation? Or is it snapshots as you suggest?
If it is snapshots, is this the process (sketched as commands below):
  create snapshot @normal-pre-reservation
  write (reservation size) of random data to the pool
  create snapshot @reserved_chunk
  delete the random data
  create snapshot @normal_post_reservation

Now the only unique data of any significance in @reserved_chunk is that random 
data. @reserved_chunk should be excluded from backups, and the @normal snapshots 
can be deleted per whatever standard snapshot policy I have.
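
Concretely, I imagine the snapshot approach looks something like this (assuming 
tank is mounted at /tank, a ~2G reservation, and a made-up file name):

    zfs snapshot tank@normal-pre-reservation
    dd if=/dev/urandom of=/tank/reserved_chunk.bin bs=1024k count=2048
    zfs snapshot tank@reserved_chunk
    rm /tank/reserved_chunk.bin
    zfs snapshot tank@normal_post_reservation

If I understand copy-on-write correctly, the rm by itself frees nothing; the ~2G 
only comes back when tank@reserved_chunk is finally destroyed, which is 
presumably the point.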

Would it make more sense to make another filesystem in the pool, fill it enough 
and keep it handy to delete? Or is there some advantage to zfs destroy 
(snapshot) over zfs destroy (filesystem)? Now, while I am thinking about the 
system and have extra drives on hand, is the time to make plans for the next 
"system is full" event.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss