Hi William.

On 06/05/09 04:26, William Schumann wrote:
> Jack,
>
> Jack Schwartz wrote:
>> Hi William.
>>
>> William Schumann wrote:
>>> Jack,
>>> I've modified the proposal to include some protection for vdevs. If 
>>> a vdev is a disk, it must have the attribute "use_entire_disk" if 
>>> the disk is labeled or formatted. Suggestions on a better name for 
>>> this attribute gladly accepted - I don't think that 
>>> "use_entire_disk" makes it adequately clear that the all formatting 
>>> will be destroyed by zfs/zpool create.
>> wipe_disk or wipe_entire_disk?
>> erase_disk or erase_entire_disk?
> Well, it doesn't erase the disk - erasure to me means a low-level 
> format or something that wipes out all the bits.  Same with wipe.
> I thought about 'allow_device_reformat', but that sounds too 
> technical. Maybe 'force_reformatting' - the term "force" is used by 
> zpool create.
IMO, from an end user perspective, the data is gone, so it's as good as 
erased.
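
Whatever name we settle on, the override would read the same way on the 
vdev.  For instance, with the 'force_reformatting' candidate it might 
look like this (purely illustrative, since the name isn't settled):

    <vdev force_reformatting="true">mirrordev</vdev>
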
>>>
>>> Modified example: install on the boot disk, use a selected disk as a 
>>> raid2 mirror, and use another selected disk of over 30GB for zfs pool 
>>> newpool, mounted at /export1
>>> <ai_target_device>
>>>   <target_device_select_boot_disk/> <!-- use the existing boot device -->
>>>   <mirror> <!-- mirror onto the selected disk named "mirrordev" -->
>>>     <vdev use_entire_disk="true">mirrordev</vdev>
>>>   </mirror>
>>>   <mirror_type>raid2</mirror_type>
>>> </ai_target_device>
>> OK.  The above uses the boot disk and sets up a raid2 mirror using 
>> "mirrordev".  This makes a forward reference as discussed a few days 
>> ago.
>>> <ai_target_device> <!-- find a disk for a new pool - bigger than 
>>> 30GB -->
>>>   <reference_name>newpooldisk</reference_name>
>>>   <target_select_min_size>30</target_select_min_size> <!-- in GB -->
>>> </ai_target_device>
>> If I understand this correctly, an arbitrary disk is being picked by 
>> the system here to be called "newpooldisk".  We need protection for 
>> newpooldisk too.
>>> <ai_target_device> <!-- just grab another disk for use as a mirror -->
>>>   <reference_name>mirrordev</reference_name>
>>>   <!-- just assume that the disk is appropriate for a raid2 mirror -->
>>> </ai_target_device>
>> Another arbitrary disk gets recycled here to become "mirrordev".  
>> Protection is needed.
>>> <ai_zfs_pool>
>>>   <zpool_create name="newpool"> <!-- describe the new pool -->
>>>     <zpool_options>-m /export1</zpool_options> <!-- specify mount point -->
>>>     <vdev use_entire_disk="true">
>>>       newpooldisk <!-- use selected disk named "newpooldisk", 
>>>       overwriting any formatting -->
>>>     </vdev>
>> Ah, here's newpooldisk's protection.  I suggest that the protection 
>> be a part of the declaration of newpooldisk, not part of the 
>> declaration of the zpool which will use it.  After all, it is the 
>> disk being protected, not the zpool.
> Firstly, newpooldisk is protected by default.  Here, the protection is 
> overridden.
> Also, consider the case where a device name is used.  Disk selection 
> criteria could be absent completely if the vdev is a device name, so 
> we should at least be able to override the protection on the vdev.
Yes, that's true...
>
>
> You do make a good point that the place where the disk is selected is 
> a logical place for overriding the protection, and I suspect that if 
> we have the override only on the vdev, someone will wonder why.  My 
> opinion would be to have the protection override on the vdev only, as 
> the simpler option.
Another, perhaps more salient, point is that it would be less confusing 
to the user than having two places in the manifest where the same 
protection can be specified.
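
To make that concrete, here's a rough sketch of the device-name case you 
mention, where no ai_target_device section selects the disk and the vdev 
is therefore the only place the override could live (the device name 
c0t1d0 is purely illustrative):

    <ai_zfs_pool>
      <zpool_create name="newpool">
        <!-- no selection criteria exist for this disk, so the protection 
             override can only be expressed on the vdev itself -->
        <vdev use_entire_disk="true">c0t1d0</vdev>
      </zpool_create>
    </ai_zfs_pool>
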
>>>   </zpool_create>
>>> </ai_zfs_pool>
>>>
>>> Slices and partitions could also be similarly protected.  Should they 
>>> be?
>> IMO, if the system can arbitrarily pick a slice or partition, 
>> protection is needed.  If a user has to explicitly ask for the slice 
>> or partition, then protection for it isn't as critical but I would 
>> still do it.  
> That would be the behavior in this design - if the disk is selected 
> and used in a vdev, any data on the disk is protected by default - the 
> protection must be overridden in the vdev.
What if one slice is to be preserved, and another is available to be 
picked by the installer for erasure/reuse?  It is important to be 
explicit about which slices are at risk for recycling and which ones 
aren't, since one's precious data may be at stake.
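
As a purely hypothetical sketch of what explicit per-slice intent could 
look like (the slice element, its attributes, and the volname "datadisk" 
are invented for illustration and are not in the current schema):

    <ai_target_device>
      <target_device_select_volume_name>datadisk</target_device_select_volume_name>
      <!-- s0 holds precious data and must never be touched -->
      <slice name="s0" preserve="true"/>
      <!-- s3 is explicitly offered up for recycling -->
      <slice name="s3" use_entire_slice="true"/>
    </ai_target_device>
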
>> (Don't newfs, format, and other destructive commands print "last 
>> mounted as blah"?)
> I suppose the last mount name could be dug out of the superblock for 
> whatever file system mounted it, but I would rather not get into 
> multiple file system discovery at this point.  Criteria to use at this 
> point might include the disk label, the slice table for SPARC, and the 
> partitions defined for x86.
On a system with many disks and slices, we'll need to somehow identify 
the slices so the user can know which ones are at risk for overwrite.

I also think that the slice needs to be interrogated somehow to know 
whether or not it was previously mounted.  How else would the installer 
know?  I'm not sure if there are system calls to do this, or if the 
superblock has to be read and interpreted.
>>> A slice could be checked for existing files, but it would have to be 
>>> mounted to do this. Perhaps we can just assume that the user knows 
>>> what he/she is doing if slices and partitions are specified.
>>>
>>> Jack, FYI, there is a new disk selection element: 
>>> target_device_select_unformatted_disk, which can be used to make the 
>>> selection process safer.
>> This is OK, but protection for previously-used disks is what we 
>> really need.
>>
>> This was probably addressed earlier, but... another concern I have is 
>> that disk cwtxdysz names may move around (or at least they used to) 
>> upon reinstall.  This will cause nasty problems if different slices 
>> than those intended are overwritten.  How do we ensure that an 
>> intended slice is really the one which is being used when cwtxdysz 
>> names are used?
> The format(1M) volname (volume name) can be selected with 
> target_device_select_volume_name, and the device path with 
> target_device_select_device_path (the path under /devices, also 
> referred to as the PCI path and phys_path).  There was also discussion 
> about target_device_select_devid_contains, which searches for a 
> substring of the long devid, which is supposedly unique, but there is 
> a libdiskmgt bug here (libdiskmgt appears to be assigning devids to 
> slices).
OK.
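
For example, pinning the mirror disk to a stable identity instead of a 
cwtxdysz name might look like this (the volname "mirror0" is 
illustrative, and the exact syntax should be checked against the schema):

    <ai_target_device>
      <reference_name>mirrordev</reference_name>
      <!-- match on the format(1M) volname, which, unlike a cwtxdysz 
           name, does not move around across reinstalls -->
      <target_device_select_volume_name>mirror0</target_device_select_volume_name>
    </ai_target_device>
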

    Thanks,
    Jack
>>>
>>> Updated schema: 
>>> http://www.opensolaris.org/os/project/caiman/auto_install/ai_manifest_schema.xml
>>>
>> Since I'm suggesting other changes, I'll review this once the rest of 
>> the dust settles.
> OK, thanks,
> William
>>
>>    Thanks,
>>    Jack
>>>
>>> Thank you,
>>> William
>>>
>>> William Schumann wrote:
>>>> Jack,
>>>> Good point. Read on.
>>>>
>>>> The use case here could be deployment of a computer that is slated 
>>>> for complete reinitialization and removal of any existing data.
>>>>
>>>> In principle, the security for the disk is not provided by the 
>>>> selection criteria, but by the partition, slice, and zfs pool 
>>>> creation steps.  The design attempts to preserve data in all cases, 
>>>> unless specified otherwise.
>>>>
>>>> However, you have identified a case where the mirror is created 
>>>> without regard to what might be on the disk. I would propose that 
>>>> the default behavior should be to prevent the creation of zpools and 
>>>> mirrors on "disks that have data", unless we offer an element to 
>>>> override that protection. "Disks that have data" must be more 
>>>> clearly described.
>>>>
>>>> Thanks for pointing this out,
>>>> William
>>>>
>>>> Jack Schwartz wrote:
>>>>> Hi William.
>>>>>
>>>>> On 05/26/09 07:01, William Schumann wrote:
>>>>>> (snip)
>>>>>>
>>>>>> Example: install on the boot disk, use a selected disk as a raid2 
>>>>>> mirror, and use another selected disk of over 30GB for zfs pool 
>>>>>> newpool, mounted at /export1
>>>>> Sounds dangerous to have the system pick an arbitrary disk based 
>>>>> on size.  If we do this, we should check the disk label to verify 
>>>>> that the disk was not used, to prevent accidental erasure.
>>>>>
>>>>> Thanks,
>>>>> Jack
>>>> _______________________________________________
>>>> caiman-discuss mailing list
>>>> caiman-discuss at opensolaris.org
>>>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
>>

