John wrote:
> > ... gpart should show a warning message if the user is trying to put
> > GPT on non-real disk devices.
...
>This also seems to prevent something useful like:
>
> # camcontrol inquiry da0
> pass2: Fixed Direct Access SCSI-5 device
> pass2: Serial Number 3TB1BKGX9036W9EN
> pass2:
Lev Serebryakov wrote:
> >> GPT _must_ be placed twice -- at the first and last sectors
> >> (really, more than one sector). By the standard. The secondary
> >> copy must be at the end of the disk. Period.
> > Then, "by standard" GPT cannot coexist with GLABEL. Such a setup
> > should be disallowed, or at least a big nasty message that you have
> > just shot yourself in the foot should be output.
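The placement rule quoted above can be sketched numerically. This is an illustration only (not gpart's code): per the UEFI layout Lev is citing, LBA 0 holds the protective MBR, LBA 1 the primary GPT header, and the backup header occupies the very last LBA of the disk.

```python
def gpt_header_lbas(total_sectors: int) -> tuple[int, int]:
    """Return (primary, backup) GPT header LBAs for a disk of the
    given size in sectors: primary at LBA 1, backup in the last sector."""
    return 1, total_sectors - 1

# Example: a 100 MB disk with 512-byte sectors has 204800 sectors.
print(gpt_header_lbas(104857600 // 512))  # (1, 204799)
```

This is why the backup copy's location is fixed by the size of the underlying medium, not by anything GEOM-layer software can negotiate.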
Lev Serebryakov wrote:
> GPT must have a backup copy in the last sector, by the standard ...
In that case, shouldn't it refuse to install on any provider that is
not in fact a disk, so as not to create configurations that cannot
work properly?
> MBR doesn't have any additional metadata. How adding one wil
- Miroslav Lachman's Original Message -
> Lev Serebryakov wrote:
> >Hello, Miroslav.
> >You wrote on 6 October 2011, 16:59:19:
>
> [...]
>
> >>The current state is simply wrong, because the user can do something that
> >>cannot work and is not documented anywhere.
> > It is OK in the UNIX way, in general.
Lev Serebryakov wrote:
Hello, Miroslav.
You wrote on 6 October 2011, 16:59:19:
[...]
The current state is simply wrong, because the user can do something that
cannot work and is not documented anywhere.
It is OK in the UNIX way, in general. You should be able to shoot
yourself in the foot, it is good :)
On Oct 8, 2011, at 12:05 , Lev Serebryakov wrote:
> Hello, Ivan.
> You wrote on 8 October 2011, 0:23:14:
>
>> If you think this should be explicitly handled, please file a PR
>> which requests the modification of gpart so that it detects that a GPT
>> is being created in anything other than a raw drive, and warns the user.
Hello, Ivan.
You wrote on 8 October 2011, 0:23:14:
> If you think this should be explicitly handled, please file a PR
> which requests the modification of gpart so that it detects that a GPT
> is being created in anything other than a raw drive, and warns the
> user.
It should be mentioned in d
Hello, Lev.
You wrote on 8 October 2011, 13:52:21:
>> GPT must have a backup copy in the last sector, by the standard ...
> In that case, shouldn't it refuse to install on any provider that is
> not in fact a disk, so as not to create configurations that cannot
> work properly?
Installation of FreeBSD on s
Hello, Daniel.
You wrote on 8 October 2011, 0:13:54:
GPT (and MBR) metadata placement is dictated by the outside world,
where there is no GEOM and no geom_label. They are INTENDED to be used
on DISKS. BIOSes should be able to find it :)
>>> Certainly GPT and MBR must place an instance of the pa
2011/10/7 Daniel Kalchev :
> Then, "by standard" GPT cannot coexist with GLABEL. Such a setup should be
> disallowed, or at least a big nasty message that you have just shot yourself
> in the foot should be output. (period)
GPT cannot coexist with ANY GEOM CLASS which writes metadata to the last sector
On 07.10.11 22:44, Lev Serebryakov wrote:
Hello, Perryh.
You wrote on 7 October 2011, 18:06:38:
GPT (and MBR) metadata placement is dictated by the outside world,
where there is no GEOM and no geom_label. They are INTENDED to be used
on DISKS. BIOSes should be able to find it :)
Certainly GPT and MBR must place an instance of the partition table
Hello, Perryh.
You wrote on 7 October 2011, 18:06:38:
>> GPT (and MBR) metadata placement is dictated by the outside world,
>> where there is no GEOM and no geom_label. They are INTENDED to be used
>> on DISKS. BIOSes should be able to find it :)
> Certainly GPT and MBR must place an instance of the partition table
Lev Serebryakov wrote:
> GPT (and MBR) metadata placement is dictated by the outside world,
> where there is no GEOM and no geom_label. They are INTENDED to be used
> on DISKS. BIOSes should be able to find it :)
Certainly GPT and MBR must place an instance of the partition table
where the BIOS expects it, b
Hello, Perryh.
You wrote on 7 October 2011, 18:06:38:
>> GPT (and MBR) metadata placement is dictated by the outside world,
>> where there is no GEOM and no geom_label. They are INTENDED to be used
>> on DISKS. BIOSes should be able to find it :)
> Certainly GPT and MBR must place an instance of the partition table
On 06.10.2011 16:36, Ivan Voras wrote:
> 2) this makes the device unbootable as the GPT partition table is by
> definition not valid. It still stores the primary partition table on the
> first sector (and the following sectors...), but its secondary table is
> stored one sector short of the device's last sector.
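A toy model of the collision Ivan describes (illustration only; names are invented for the sketch): glabel keeps its label in the device's last sector and publishes a provider one sector smaller, so a GPT created on that provider writes its backup header one sector short of where a taster of the raw device expects to find it.

```python
def gpt_in_glabel(device_sectors: int) -> dict:
    """Model where the metadata lands when a GPT is created on a
    glabel provider that sits on a device of device_sectors sectors."""
    provider_sectors = device_sectors - 1  # glabel shrinks the size by one
    return {
        "glabel_meta": device_sectors - 1,      # device's last LBA
        "gpt_backup": provider_sectors - 1,     # provider's last LBA
        "expected_backup": device_sectors - 1,  # where the raw disk is probed
    }

m = gpt_in_glabel(204800)
print(m["gpt_backup"], m["expected_backup"])  # 204798 204799 (mismatch)
```

The mismatch is exactly one sector: the GPT is consistent when tasted through the glabel provider, but invalid when the BIOS or kernel reads the raw disk.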
On Thursday, October 06, 2011 02:43:03 PM Daniel Kalchev wrote:
> On 06.10.11 15:36, Ivan Voras wrote:
> > On 06/10/2011 13:29, Daniel Kalchev wrote:
> >> On 06.10.11 14:07, Ivan Voras wrote:
> >>> Um, you do realize this is a "physical" problem with metadata location
> >>> and cannot be solved in any meaningful way?
On 06.10.11 17:04, Pieter de Goeje wrote:
The layering *is* correct and you *can* create a GPT inside a glabel
label, but then:
1) you get device names like /dev/label/somethingp1,
/dev/label/somethingp2, etc.
... and you overwrite the last sector of the device, not of the
provider. This is in
Hello, Miroslav.
You wrote on 6 October 2011, 16:59:19:
> I am not a GEOM expert, but isn't it a wrong concept that glabel writes
> its metadata and publishes the original device size? If some GEOM writes
> metadata at the last sector (or first), then it should shrink the published
> size (or offset). Or is
Hello, Daniel.
You wrote on 6 October 2011, 15:29:58:
> The proper way for this is to have these things store their metadata in
> the first/last sector of the provider, not the underlying device.
> This means that, if you have GPT within GLABEL, for example, you will
> only see the GPT label i
> I am not a GEOM expert, but isn't it a wrong concept that glabel writes
> its metadata and publishes the original device size?
It does not.
# diskinfo -v /dev/md0
/dev/md0
512 # sectorsize
104857600 # mediasize in bytes (100M)
204800 # mediasize in sectors (512)
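Cross-checking the diskinfo(8) numbers above: 104857600 bytes at a 512-byte sector size is exactly 204800 sectors, i.e. 100 MB with no partial sector left over.

```python
# Verify the diskinfo arithmetic: mediasize in bytes divided by the
# sector size must equal the sector count, with no remainder.
sectorsize, mediasize = 512, 104857600
sectors = mediasize // sectorsize
print(sectors)                            # 204800
print(mediasize == sectors * sectorsize)  # True
```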
On 06.10.11 15:36, Ivan Voras wrote:
On 06/10/2011 13:29, Daniel Kalchev wrote:
On 06.10.11 14:07, Ivan Voras wrote:
Um, you do realize this is a "physical" problem with metadata location
and cannot be solved in any meaningful way? Geom_label stores its label
in the last sector of the device
Ivan Voras wrote:
> The point was that glabel on a disk device is successful, gpartitioning on
> the glabeled device is successful, but metadata handling / device tasting is
> wrong after reboot, and this should be fixed, not worked around.
>
> Otherwise thank you for the example with GPT labels, it can be
On 06/10/2011 13:29, Daniel Kalchev wrote:
>
>
> On 06.10.11 14:07, Ivan Voras wrote:
>>
>> Um, you do realize this is a "physical" problem with metadata location
>> and cannot be solved in any meaningful way? Geom_label stores its label
>> in the last sector of the device, and GPT stores the "secondary" /
>> backup table also at the end of the device.
On 06.10.11 14:07, Ivan Voras wrote:
Um, you do realize this is a "physical" problem with metadata location
and cannot be solved in any meaningful way? Geom_label stores its label
in the last sector of the device, and GPT stores the "secondary" /
backup table also at the end of the device. The
On 06/10/2011 00:12, Miroslav Lachman wrote:
> Scot Hetzel wrote:
>> 2011/10/5 Miroslav Lachman<000.f...@quip.cz>:
>>> I have been waiting for years for the moment when these GEOM problems
>>> will be fixed, so I am really glad to see your interest!
>>> It will be a move in the right direction even if the changes
Hello, Alexander.
You wrote on 6 October 2011, 1:34:33:
> That works perfectly for the case when the class (geom_raid) is known to
> work on a raw device. Other RAID classes can be used over partitions, so
> some care should be taken to avoid false positives.
Oh, yes... I see...
>> I'm not sure here.
> I
Hello, John-Mark.
You wrote on 6 October 2011, 2:53:53:
>> gmirror0
>>   gstripe0
>>     ada0
>>     ada1
>>   gstripe1
>>     ada2
>>     ada3
>>
>> and administrator kills gstripe0, for example, geom_mirror will send
>> event, because from its point of view it is not adm
Lev Serebryakov wrote this message on Wed, Oct 05, 2011 at 12:51 +0400:
> Hello, Andrey.
> You wrote on 5 October 2011, 11:51:36:
>
> > On 05.10.2011 10:39, Lev Serebryakov wrote:
> >>> (1) Class and name of GEOM which is affected.
> >>> (2) Name of provider which is affected.
> >>> (3) N
Scot Hetzel wrote:
2011/10/5 Miroslav Lachman<000.f...@quip.cz>:
I have been waiting for years for the moment when these GEOM problems will
be fixed, so I am really glad to see your interest!
It will be a move in the right direction even if the changes will not be
backward compatible.
The current state is too fragi
On 05.10.2011 12:29, Lev Serebryakov wrote:
> You wrote on 5 October 2011, 13:18:34:
>> geom_raid addresses this problem in its own way. Since RAID BIOSes
>> expect RAIDs to be built on raw physical devices and probe order is not
>> discussed, geom_raid exclusively opens underlying providers immed
2011/10/5 Miroslav Lachman <000.f...@quip.cz>:
> I am waiting years for the moment, when these GEOM problems will be fixed,
> so I am really glad to see your interest!
> It will be move to right direction even if changes will not be backward
> compatible.
> The current state is too fragile to be us
Lev Serebryakov wrote:
Hello, Miroslav.
You wrote on 5 October 2011, 12:24:06:
What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid?
Something else?
I am mostly using geom_mirror.
[SKIPPED]
Oh, I see. Unfortunately, there is no GEOM metadata infrastructure,
GEOMs are t
Hello, Alexander.
You wrote on 5 October 2011, 13:18:34:
> geom_raid addresses this problem in its own way. Since RAID BIOSes
> expect RAIDs to be built on raw physical devices and probe order is not
> discussed, geom_raid exclusively opens underlying providers immediately
> after detecting suppo
On 05.10.2011 11:58, Lev Serebryakov wrote:
> Hello, Miroslav.
> You wrote on 5 October 2011, 12:24:06:
>
>>>What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid?
>>> Something else?
>> I am mostly using geom_mirror.
> [SKIPPED]
> Oh, I see. Unfortunately, there is no GEOM me
Hello, Miroslav.
You wrote on 5 October 2011, 12:24:06:
>>What RAID do you mean exactly? geom_stripe? geom_mirror? geom_raid?
>> Something else?
> I am mostly using geom_mirror.
[SKIPPED]
Oh, I see. Unfortunately, there is no GEOM metadata infrastructure,
GEOMs are too generic for this. I
Hello, Andrey.
You wrote on 5 October 2011, 11:51:36:
> On 05.10.2011 10:39, Lev Serebryakov wrote:
>>>(1) Class and name of GEOM which is affected.
>>>(2) Name of provider which is affected.
>>>(3) Name of underlying provider which is lost (consumer from
>>>reporting GEOM's po
Lev Serebryakov wrote:
Hello, Miroslav.
You wrote on 5 October 2011, 1:27:03:
I am still missing one thing: a dropped provider is not marked as a failed
RAID provider and is accessible like a normal disk device. So
in some edge cases, the system can boot from a failed RAID component
inste
On 05.10.2011 10:39, Lev Serebryakov wrote:
>> (1) Class and name of GEOM which is affected.
>> (2) Name of provider which is affected.
>> (3) Name of underlying provider which is lost (consumer from
>>     reporting GEOM's point of view).
>> (4) Resulting state of affected provider
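A hypothetical sketch only: the fields listed above, packed into a devd(8)-style key=value line. The real wire format of the GEOM Events patch may differ, and all field names here are invented for illustration.

```python
def parse_geom_event(line: str) -> dict:
    """Split a 'k1=v1 k2=v2 ...' event line into a dict."""
    return dict(field.split("=", 1) for field in line.split())

ev = parse_geom_event(
    "class=MIRROR geom=gm0 provider=mirror/gm0 lost=ada1 state=DEGRADED")
print(ev["class"], ev["lost"], ev["state"])  # MIRROR ada1 DEGRADED
```

A userland helper script subscribed to such events could then match on the class and state fields to trigger rebuilds or notifications.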
Hello, Stephane.
You wrote on 5 October 2011, 10:25:51:
> On 10/05/2011 03:19 PM, Lev Serebryakov wrote:
> A bit unrelated, but are there plans to integrate hardware RAID
> (mps/mfi/mpt/amr) failure notification in the same way as this would be
> done for GEOM ? As in, "one framework and way to ma
Hello, Andrey.
You wrote on 5 October 2011, 10:27:10:
>> It seems that you could change only geom_dev.c to get most of what you want.
>> Actually, the part of your changes related to the DISCONNECT events, and
>> maybe DESTROY events, could be implemented in geom_dev.
> Does geom_dev know al
Hello, Andrey.
You wrote on 5 October 2011, 9:07:16:
> It seems that you could change only geom_dev.c to get most of what you want.
> Actually, the part of your changes related to the DISCONNECT events, and
> maybe DESTROY events, could be implemented in geom_dev.
Does geom_dev know all need
Hello, Miroslav.
You wrote on 5 October 2011, 1:27:03:
> I am still missing one thing: a dropped provider is not marked as a failed
> RAID provider and is accessible like a normal disk device. So
> in some edge cases, the system can boot from a failed RAID component
> instead of degraded RA
On 04.10.2011 22:05, Lev Serebryakov wrote:
> So, here it is. GEOM Events.
>
> Project consists of several parts (all are ready and commited to
> project branch!):
>
Hi, Lev
> (5) Changes in all geom classes to post these events.
It seems that you could change only geom_dev.c to get mos
Lev Serebryakov wrote:
[...]
One thing missing from software RAIDs is spare drives and state
monitoring (yes, I know that geom_raid supports spare drives for
metadata formats which support them, but it is not a universal solution).
I am still missing one thing: a dropped provider is not mar
Miroslav Lachman wrote:
> Lev Serebryakov wrote:
> >>One thing missing from software RAIDs is spare drives and state
> >> monitoring (yes, I know that geom_raid supports spare drives for
> >> metadata formats which support them, but it is not a universal solution).
>
> I am still missing one thing -
On Tue, Oct 4, 2011 at 12:15 PM, Garrett Cooper wrote:
> On Oct 4, 2011, at 11:12 AM, Freddie Cash wrote:
>
> > 2011/10/4 Lev Serebryakov
> >
> >> One thing missing from software RAIDs is spare drives and state
> >> monitoring (yes, I know that geom_raid supports spare drives for
> >> metada
Hello, Freddie.
You wrote on 4 October 2011, 22:12:32:
> Sounds impressive! Will be very useful for those using GEOM-based
> RAID (gmirror, gstripe, graid3, graid5, etc).
> Just curious: would the geom-events framework, and in particular
> the geom-events script, be useful for ZFS setups, for i
On Oct 4, 2011, at 11:12 AM, Freddie Cash wrote:
> 2011/10/4 Lev Serebryakov
>
>> One thing missing from software RAIDs is spare drives and state
>> monitoring (yes, I know that geom_raid supports spare drives for
>> metadata formats which support them, but it is not a universal solution).
>>
>
2011/10/4 Lev Serebryakov
> One thing missing from software RAIDs is spare drives and state
> monitoring (yes, I know that geom_raid supports spare drives for
> metadata formats which support them, but it is not a universal solution).
>
Sounds impressive! Will be very useful for those using GE
On 04.10.2011 21:12, Freddie Cash wrote:
> 2011/10/4 Lev Serebryakov <l...@freebsd.org>:
>
> One thing missing from software RAIDs is spare drives and state
> monitoring (yes, I know that geom_raid supports spare drives for
> metadata formats which support them, but it is not
Hello, Lev.
You wrote on 4 October 2011, 22:05:07:
>Patch against CURRENT is attached.
Oh, sorry, it seems that the patch is too big for the list.
http://lev.serebryakov.spb.ru/download/geom-events-1.0.head.patch.gz
--
// Black Lion AKA Lev Serebryakov
Hello, Freebsd-geom.
I've just committed (a branch with) the project I have worked on for
the last month (and imagined for the last two years).
It is the GEOM Events subsystem.
What is it?
We now have a pretty impressive set of GEOM modules, which covers many
areas: infrastructure support (like geom_part