Re: RFC: Project geom-events

2011-10-10 Thread perryh
Lev Serebryakov l...@freebsd.org wrote: GPT must have a backup copy in the last sector by the standard ... In that case, shouldn't it refuse to install on any provider that is not in fact a disk, so as not to create configurations that cannot work properly? MBR doesn't have any additional metadata. How

Re: RFC: Project geom-events

2011-10-10 Thread perryh
Lev Serebryakov l...@freebsd.org wrote: GPT _must_ be placed twice -- in the first and last sectors (really, more than one sector). By the standard. The secondary copy must be at the end of the disk. Period. Then, by the standard, GPT cannot coexist with GLABEL. Such a setup should be disallowed, or at least a big
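The placement rule quoted above can be sketched numerically. The constants follow the UEFI GPT layout (protective MBR at LBA 0, primary header at LBA 1, backup header in the last LBA); this is an illustration, not FreeBSD code:

```python
# Sketch of where the GPT standard puts its two header copies.
def gpt_header_lbas(total_sectors):
    """Return (primary, backup) header LBAs for a disk of total_sectors."""
    primary = 1                 # LBA 0 holds the protective MBR
    backup = total_sectors - 1  # the backup header must occupy the last LBA
    return primary, backup

# A 100 MB device with 512-byte sectors has 204800 sectors:
print(gpt_header_lbas(204800))  # -> (1, 204799)
```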

Re: RFC: Project geom-events

2011-10-10 Thread perryh
John j...@freebsd.org wrote: ... gpart should show a warning message if the user is trying to put GPT on non-real disk devices. ... This also seems to prevent something useful like: # camcontrol inquiry da0 pass2: HP EH0146FAWJB HPDD Fixed Direct Access SCSI-5 device pass2: Serial Number

Re: RFC: Project geom-events

2011-10-09 Thread Miroslav Lachman
Lev Serebryakov wrote: Hello, Miroslav. You wrote 6 October 2011, 16:59:19: [...] The current state is simply wrong, because the user can do something that cannot work and is not documented anywhere. It is OK in the UNIX way, in general. You should be able to shoot yourself in the leg, it is good :)

Re: RFC: Project geom-events

2011-10-09 Thread John
- Miroslav Lachman's Original Message - Lev Serebryakov wrote: Hello, Miroslav. You wrote 6 October 2011, 16:59:19: [...] The current state is simply wrong, because the user can do something that cannot work and is not documented anywhere. It is OK in the UNIX way, in

Re: RFC: Project geom-events

2011-10-08 Thread Lev Serebryakov
Hello, Daniel. You wrote 8 October 2011, 0:13:54: GPT (and MBR) metadata placement is dictated by the outside world, where there is no GEOM and no geom_label. They are INTENDED to be used on DISKS. BIOSes should be able to find it :) Certainly GPT and MBR must place an instance of the partition table

Re: RFC: Project geom-events

2011-10-08 Thread Lev Serebryakov
Hello, Lev. You wrote 8 October 2011, 13:52:21: GPT must have a backup copy in the last sector by the standard ... In that case, shouldn't it refuse to install on any provider that is not in fact a disk, so as not to create configurations that cannot work properly? Installation of FreeBSD on

Re: RFC: Project geom-events

2011-10-08 Thread Lev Serebryakov
Hello, Ivan. You wrote 8 October 2011, 0:23:14: If you think this should be explicitly handled, please file a PR which requests the modification of gpart so that it detects that a GPT is being created on anything other than a raw drive, and warns the user. It should be mentioned in

Re: RFC: Project geom-events

2011-10-08 Thread Daniel Kalchev
On Oct 8, 2011, at 12:05 , Lev Serebryakov wrote: Hello, Ivan. You wrote 8 October 2011, 0:23:14: If you think this should be explicitly handled, please file a PR which requests the modification of gpart so that it detects that a GPT is being created on anything other than a raw

Re: RFC: Project geom-events

2011-10-07 Thread Lev Serebryakov
Hello, Perryh. You wrote 7 October 2011, 18:06:38: GPT (and MBR) metadata placement is dictated by the outside world, where there is no GEOM and no geom_label. They are INTENDED to be used on DISKS. BIOSes should be able to find it :) Certainly GPT and MBR must place an instance of the partition table

Re: RFC: Project geom-events

2011-10-07 Thread perryh
Lev Serebryakov l...@freebsd.org wrote: GPT (and MBR) metadata placement is dictated by the outside world, where there is no GEOM and no geom_label. They are INTENDED to be used on DISKS. BIOSes should be able to find it :) Certainly GPT and MBR must place an instance of the partition table where the BIOS

Re: RFC: Project geom-events

2011-10-07 Thread Lev Serebryakov
Hello, Perryh. You wrote 7 October 2011, 18:06:38: GPT (and MBR) metadata placement is dictated by the outside world, where there is no GEOM and no geom_label. They are INTENDED to be used on DISKS. BIOSes should be able to find it :) Certainly GPT and MBR must place an instance of the partition table

Re: RFC: Project geom-events

2011-10-07 Thread Daniel Kalchev
On 07.10.11 22:44, Lev Serebryakov wrote: Hello, Perryh. You wrote 7 October 2011, 18:06:38: GPT (and MBR) metadata placement is dictated by the outside world, where there is no GEOM and no geom_label. They are INTENDED to be used on DISKS. BIOSes should be able to find it :) Certainly GPT and MBR

Re: RFC: Project geom-events

2011-10-07 Thread Ivan Voras
2011/10/7 Daniel Kalchev dan...@digsys.bg: Then, by the standard, GPT cannot coexist with GLABEL. Such a setup should be disallowed, or at least a big nasty message that you have just shot yourself in the leg should be output. (period) GPT cannot coexist with ANY GEOM CLASS which writes metadata to

Re: RFC: Project geom-events

2011-10-06 Thread Lev Serebryakov
Hello, John-Mark. You wrote 6 October 2011, 2:53:53:
  gmirror0
    gstripe0
      ada0 ada1
    gstripe1
      ada2 ada3
and the administrator kills gstripe0, for example: geom_mirror will send an event, because from its point of view it is not administrative
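The nesting in that example can be modeled in a few lines. This is a toy illustration of the point being made (the parent class only sees its consumer vanish, so it cannot tell an administrative removal from a failure), not code from the patch:

```python
# Toy model of the mirror-over-stripes example from the thread. When the
# administrator destroys gstripe0, geom_mirror merely sees its consumer
# disappear; whether that was administrative is known only at the layer
# where the command was issued.
topology = {
    "gmirror0": ["gstripe0", "gstripe1"],
    "gstripe0": ["ada0", "ada1"],
    "gstripe1": ["ada2", "ada3"],
}

def parents_notified(destroyed, topo):
    """Every class consuming the destroyed provider observes a DISCONNECT,
    regardless of why the provider went away."""
    return [(g, "DISCONNECT") for g, kids in topo.items() if destroyed in kids]

print(parents_notified("gstripe0", topology))  # -> [('gmirror0', 'DISCONNECT')]
```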

Re: RFC: Project geom-events

2011-10-06 Thread Lev Serebryakov
Hello, Alexander. You wrote 6 October 2011, 1:34:33: That works perfectly for the case when a class (geom_raid) is known to work on raw devices. Other RAID classes can be used over partitions, so some care should be taken to avoid false positives. Oh, yes... I see... I'm not sure here. In

Re: RFC: Project geom-events

2011-10-06 Thread Ivan Voras
On 06/10/2011 00:12, Miroslav Lachman wrote: Scot Hetzel wrote: 2011/10/5 Miroslav Lachman 000.f...@quip.cz: I have been waiting for years for the moment when these GEOM problems will be fixed, so I am really glad to see your interest! It will be a move in the right direction even if the changes will not be

Re: RFC: Project geom-events

2011-10-06 Thread Daniel Kalchev
On 06.10.11 14:07, Ivan Voras wrote: Um, you do realize this is a physical problem with metadata location and cannot be solved in any meaningful way? Geom_label stores its label in the last sector of the device, and GPT stores the secondary / backup table also at the end of the device. The
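A minimal sketch of the collision being described, assuming the documented layouts (glabel metadata in the device's last sector, GPT backup header in the last LBA); the sector numbers are derived from those layouts, not from the FreeBSD sources:

```python
# Both glabel metadata and the GPT backup header are defined to live in
# the final sector of the same device -- hence the conflict.
def glabel_metadata_lba(device_sectors):
    """glabel keeps its metadata in the very last sector of the device."""
    return device_sectors - 1

def gpt_backup_header_lba(device_sectors):
    """The GPT backup header must occupy the last addressable LBA."""
    return device_sectors - 1

dev = 204800                      # 100 MB device, 512-byte sectors
assert glabel_metadata_lba(dev) == gpt_backup_header_lba(dev)  # collision

# If GPT is instead created on the glabel *provider* (one sector smaller),
# its backup header lands one sector short of the raw device's end, so the
# GPT seen when tasting the raw device is, strictly speaking, invalid:
provider = dev - 1
assert gpt_backup_header_lba(provider) == dev - 2
```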

Re: RFC: Project geom-events

2011-10-06 Thread Ivan Voras
On 06/10/2011 13:29, Daniel Kalchev wrote: On 06.10.11 14:07, Ivan Voras wrote: Um, you do realize this is a physical problem with metadata location and cannot be solved in any meaningful way? Geom_label stores its label in the last sector of the device, and GPT stores the secondary /

Re: RFC: Project geom-events

2011-10-06 Thread Miroslav Lachman
Ivan Voras wrote: The point was that glabel on a disk device is successful, gpartitioning on a glabeled device is successful, but metadata handling / device tasting is wrong after reboot, and this should be fixed, not worked around. Otherwise, thank you for the example with GPT labels; it can be

Re: RFC: Project geom-events

2011-10-06 Thread Daniel Kalchev
On 06.10.11 15:36, Ivan Voras wrote: On 06/10/2011 13:29, Daniel Kalchev wrote: On 06.10.11 14:07, Ivan Voras wrote: Um, you do realize this is a physical problem with metadata location and cannot be solved in any meaningful way? Geom_label stores its label in the last sector of the device,

Re: RFC: Project geom-events

2011-10-06 Thread Ivan Voras
I am not a GEOM expert, but isn't it a wrong concept that glabel writes its metadata and publishes the original device size? It does not:
# diskinfo -v /dev/md0
/dev/md0
        512         # sectorsize
        104857600   # mediasize in bytes (100M)
        204800      # mediasize in

Re: RFC: Project geom-events

2011-10-06 Thread Lev Serebryakov
Hello, Daniel. You wrote 6 October 2011, 15:29:58: The proper way for this is to have these things store their metadata in the first/last sector of the provider, not the underlying device. This means that, if you have GPT within GLABEL, for example -- you will only see the GPT label if

Re: RFC: Project geom-events

2011-10-06 Thread Lev Serebryakov
Hello, Miroslav. You wrote 6 October 2011, 16:59:19: I am not a GEOM expert, but isn't it a wrong concept that glabel writes its metadata and publishes the original device size? If some GEOM writes metadata at the last sector (or first), then it should shrink the published size (or offset). Or is the

Re: RFC: Project geom-events

2011-10-06 Thread Daniel Kalchev
On 06.10.11 17:04, Pieter de Goeje wrote: The layering *is* correct and you *can* create a GPT inside a glabel label, but then 1) you get device names like /dev/label/somethingp1, /dev/label/somethingp2, etc. ... and you overwrite the last sector of the device, not of the provider. This is

Re: RFC: Project geom-events

2011-10-06 Thread Pieter de Goeje
On Thursday, October 06, 2011 02:43:03 PM Daniel Kalchev wrote: On 06.10.11 15:36, Ivan Voras wrote: On 06/10/2011 13:29, Daniel Kalchev wrote: On 06.10.11 14:07, Ivan Voras wrote: Um, you do realize this is a physical problem with metadata location and cannot be solved in any meaningful

Re: RFC: Project geom-events

2011-10-06 Thread Andrey V. Elsukov
On 06.10.2011 16:36, Ivan Voras wrote: 2) this makes the device unbootable as the GPT partition is by definition not valid. It still stores the primary partition table in the first sector (and the following sectors...), but its secondary table is stored one sector short of the device's last

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Miroslav. You wrote 5 October 2011, 1:27:03: I am still missing one thing - a dropped provider is not marked as a failed RAID provider and is accessible to anything like a normal disk device. So in some edge cases, the system can boot from a failed RAID component instead of the degraded RAID.

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Andrey. You wrote 5 October 2011, 9:07:16: It seems that you could change only geom_dev.c to get most of what you want. Actually, the part of your changes related to the DISCONNECT events, and maybe the DESTROY events, could be implemented in geom_dev. Does geom_dev know all needed

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Andrey. You wrote 5 October 2011, 10:27:10: It seems that you could change only geom_dev.c to get most of what you want. Actually, the part of your changes related to the DISCONNECT events, and maybe the DESTROY events, could be implemented in geom_dev. Does geom_dev know all

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Stephane. You wrote 5 October 2011, 10:25:51: On 10/05/2011 03:19 PM, Lev Serebryakov wrote: A bit unrelated, but are there plans to integrate hardware RAID (mps/mfi/mpt/amr) failure notification in the same way as this would be done for GEOM? As in, one framework and way to manage

Re: RFC: Project geom-events

2011-10-05 Thread Andrey V. Elsukov
On 05.10.2011 10:39, Lev Serebryakov wrote:
(1) Class and name of the GEOM which is affected.
(2) Name of the provider which is affected.
(3) Name of the underlying provider which is lost (the consumer, from the reporting GEOM's point of view).
(4) Resulting state of the affected provider (fixable,
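The four fields proposed in that message can be pictured as a record like the following. The key names and example values are hypothetical illustrations; the actual geom-events patch may encode the notification differently:

```python
# Hypothetical DISCONNECT notification carrying the four fields listed
# above (field names are illustrative, not from the patch).
disconnect_event = {
    "class": "MIRROR",         # (1) class of the affected GEOM ...
    "geom": "gm0",             # ... and its name
    "provider": "mirror/gm0",  # (2) provider which is affected
    "lost": "ada1",            # (3) underlying provider (consumer) lost
    "state": "fixable",        # (4) resulting state of the provider
}

print(sorted(disconnect_event))  # the five keys, alphabetically
```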

Re: RFC: Project geom-events

2011-10-05 Thread Miroslav Lachman
Lev Serebryakov wrote: Hello, Miroslav. You wrote 5 October 2011, 1:27:03: I am still missing one thing - a dropped provider is not marked as a failed RAID provider and is accessible to anything like a normal disk device. So in some edge cases, the system can boot from a failed RAID component

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Andrey. You wrote 5 October 2011, 11:51:36: On 05.10.2011 10:39, Lev Serebryakov wrote: (1) Class and name of the GEOM which is affected. (2) Name of the provider which is affected. (3) Name of the underlying provider which is lost (the consumer, from the reporting GEOM's point of

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Miroslav. You wrote 5 October 2011, 12:24:06: What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid? Something else? I am mostly using geom_mirror. [SKIPPED] Oh, I see. Unfortunately, there is no GEOM metadata infrastructure; GEOMs are too generic for this. I

Re: RFC: Project geom-events

2011-10-05 Thread Alexander Motin
On 05.10.2011 11:58, Lev Serebryakov wrote: Hello, Miroslav. You wrote 5 October 2011, 12:24:06: What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid? Something else? I am mostly using geom_mirror. [SKIPPED] Oh, I see. Unfortunately, there is no GEOM metadata

Re: RFC: Project geom-events

2011-10-05 Thread Lev Serebryakov
Hello, Alexander. You wrote 5 October 2011, 13:18:34: geom_raid addresses this problem in its own way. Since RAID BIOSes expect RAIDs to be built on raw physical devices and the probe order is not discussed, geom_raid exclusively opens the underlying providers immediately after detecting
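The exclusive-open trick described there can be sketched as a toy model. This is an illustration of the idea (an exclusive open on a member refuses later tasting by other classes), not GEOM's actual access-counting interface:

```python
# Toy model: geom_raid grabs a member provider exclusively as soon as it
# recognizes RAID metadata, so later attempts by other classes to open
# (taste) the raw member are refused.
class Provider:
    def __init__(self, name):
        self.name = name
        self.excl_holder = None   # which class holds the exclusive open

    def open_exclusive(self, who):
        if self.excl_holder is not None:
            raise PermissionError(
                "%s is held exclusively by %s" % (self.name, self.excl_holder))
        self.excl_holder = who

ada0 = Provider("ada0")
ada0.open_exclusive("geom_raid")    # raid claims the raw member first

try:
    ada0.open_exclusive("geom_part")  # a later taste is refused
    tasted_by_part = True
except PermissionError:
    tasted_by_part = False

print(tasted_by_part)  # -> False
```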

Re: RFC: Project geom-events

2011-10-05 Thread Miroslav Lachman
Lev Serebryakov wrote: Hello, Miroslav. You wrote 5 October 2011, 12:24:06: What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid? Something else? I am mostly using geom_mirror. [SKIPPED] Oh, I see. Unfortunately, there is no GEOM metadata infrastructure, GEOMs are

Re: RFC: Project geom-events

2011-10-05 Thread Scot Hetzel
2011/10/5 Miroslav Lachman 000.f...@quip.cz: I have been waiting for years for the moment when these GEOM problems will be fixed, so I am really glad to see your interest! It will be a move in the right direction even if the changes will not be backward compatible. The current state is too fragile to be used in

Re: RFC: Project geom-events

2011-10-05 Thread Alexander Motin
On 05.10.2011 12:29, Lev Serebryakov wrote: You wrote 5 October 2011, 13:18:34: geom_raid addresses this problem in its own way. Since RAID BIOSes expect RAIDs to be built on raw physical devices and the probe order is not discussed, geom_raid exclusively opens the underlying providers immediately

Re: RFC: Project geom-events

2011-10-05 Thread Miroslav Lachman
Scot Hetzel wrote: 2011/10/5 Miroslav Lachman 000.f...@quip.cz: I have been waiting for years for the moment when these GEOM problems will be fixed, so I am really glad to see your interest! It will be a move in the right direction even if the changes will not be backward compatible. The current state is too

Re: RFC: Project geom-events

2011-10-05 Thread John-Mark Gurney
Lev Serebryakov wrote this message on Wed, Oct 05, 2011 at 12:51 +0400: Hello, Andrey. You wrote 5 October 2011, 11:51:36: On 05.10.2011 10:39, Lev Serebryakov wrote: (1) Class and name of the GEOM which is affected. (2) Name of the provider which is affected. (3) Name of

RFC: Project geom-events

2011-10-04 Thread Lev Serebryakov
Hello, Freebsd-geom. I've just committed (a branch with) the project I have worked on for the last month (and imagined for the last two years). It is the GEOM Events subsystem. What is it? We now have a pretty impressive set of GEOM modules which cover many areas: infrastructure support (like
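Later messages in the thread mention a "geom-events script" invoked on state changes. A userland consumer for such notifications might look like the sketch below; the delivery format, field names, and the DISCONNECT handling are assumptions for illustration only, not the interface of the actual patch:

```python
# Hypothetical userland consumer for geom-events notifications.
import shlex

def parse_event(line):
    """Parse a hypothetical 'key=value key=value ...' notification line."""
    return dict(pair.split("=", 1) for pair in shlex.split(line))

def handle(event):
    """Dispatch on event type: e.g. react to a lost consumer, else just log."""
    if event.get("type") == "DISCONNECT":
        return "activate spare for %s" % event.get("geom", "?")
    return "log only"

evt = parse_event("type=DISCONNECT class=MIRROR geom=gm0 consumer=ada1")
print(handle(evt))  # -> activate spare for gm0
```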

Re: RFC: Project geom-events

2011-10-04 Thread Lev Serebryakov
Hello, Lev. You wrote 4 October 2011, 22:05:07: Patch against CURRENT is attached. Oh, sorry, it seems that the patch is too big for the list. http://lev.serebryakov.spb.ru/download/geom-events-1.0.head.patch.gz -- // Black Lion AKA Lev Serebryakov l...@freebsd.org

Re: RFC: Project geom-events

2011-10-04 Thread Alexander Motin
On 04.10.2011 21:12, Freddie Cash wrote: 2011/10/4 Lev Serebryakov l...@freebsd.org mailto:l...@freebsd.org One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare drives for metadata formats which support them,

Re: RFC: Project geom-events

2011-10-04 Thread Freddie Cash
2011/10/4 Lev Serebryakov l...@freebsd.org One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare drives for metadata formats which support them, but it is not a universal solution). Sounds impressive! Will be very useful for

Re: RFC: Project geom-events

2011-10-04 Thread Garrett Cooper
On Oct 4, 2011, at 11:12 AM, Freddie Cash wrote: 2011/10/4 Lev Serebryakov l...@freebsd.org One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare drives for metadata formats which support them, but it is not a universal

Re: RFC: Project geom-events

2011-10-04 Thread Lev Serebryakov
Hello, Freddie. You wrote 4 October 2011, 22:12:32: Sounds impressive! Will be very useful for those using GEOM-based RAID (gmirror, gstripe, graid3, graid5, etc.). Just curious: would the geom-events framework, and in particular the geom-events script, be useful for ZFS setups, for

Re: RFC: Project geom-events

2011-10-04 Thread Freddie Cash
On Tue, Oct 4, 2011 at 12:15 PM, Garrett Cooper yaneg...@gmail.com wrote: On Oct 4, 2011, at 11:12 AM, Freddie Cash wrote: 2011/10/4 Lev Serebryakov l...@freebsd.org One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare

Re: RFC: Project geom-events

2011-10-04 Thread Alexander Motin
Miroslav Lachman wrote: Lev Serebryakov wrote: One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare drives for metadata formats which support them, but it is not a universal solution). I am still missing one thing - a dropped

Re: RFC: Project geom-events

2011-10-04 Thread Miroslav Lachman
Lev Serebryakov wrote: [...] One thing missing from software RAIDs is spare drives and state monitoring (yes, I know that geom_raid supports spare drives for metadata formats which support them, but it is not a universal solution). I am still missing one thing - a dropped provider is not

Re: RFC: Project geom-events

2011-10-04 Thread Andrey V. Elsukov
On 04.10.2011 22:05, Lev Serebryakov wrote: So, here it is: GEOM Events. The project consists of several parts (all are ready and committed to the project branch!): Hi, Lev (5) Changes in all geom classes to post these events. It seems that you could change only geom_dev.c to get most of