On Sun, 14 Jun 2015 17:19:31 -0500, Goldwyn Rodrigues wrote:
> On 06/12/2015 01:46 PM, David Teigland wrote:
> > When a node fails, its dirty areas get special treatment from other nodes
> > using the area_resyncing() function. Should the suspend_list be created
> > before any reads or writes from the file system are processed by md? It
> > seems to me that gfs journal recovery could read/write to dirty regions
> > (from
On Wed, 10 Jun 2015 16:07:44 -0500, David Teigland wrote:
> On Thu, Jun 11, 2015 at 06:31:31AM +1000, Neil Brown wrote:
> > What is your interest in this? I'm always happy for open discussion and
> > varied input, but it would help to know to what extent you are a stake
> > holder?
>
> Using the dlm correctly is non-trivial and should be reviewed.
> If the dlm is

On Wed, Jun 10, 2015 at 04:07:44PM -0500, David Teigland wrote:
> > Also a slightly less adversarial tone would make me feel more
> > comfortable, though maybe I'm misreading your intent.
>
> You're probably misreading "concerned".
>
> The initial responses to my inquiry were severely lacking in any
On Wed, 10 Jun 2015 10:01:51 -0500
David Teigland wrote:
> Isn't this process what staging is for?
No it isn't.
Staging is useful for code drops, i.e. multiple other developers want to
collaborate to improve some code that the maintainer doesn't want to accept.
So it goes into staging, "the
On Wed, Jun 10, 2015 at 12:05:33PM -0500, David Teigland wrote:
> Separate bitmaps for each node sounds like a much better design than the
> cmirror design which used a single shared bitmap (I argued for using a
> single bitmap when cmirror was being designed.)
Sorry, I misspoke; I argued for one
On Wed, Jun 10, 2015 at 11:23:25AM -0500, Goldwyn Rodrigues wrote:
> To start with, the goal of (basic) MD RAID1 is to keep the two mirrored
> devices consistent _all_ of the time. In case of a device failure, it
> should degrade the array, pointing to the failed device so it can be
> (hot)removed/replaced. Now, take the same concepts to multiple nodes
> using the same
On 06/10/2015 10:01 AM, David Teigland wrote:
> On Tue, Jun 09, 2015 at 10:33:08PM -0500, Goldwyn Rodrigues wrote:
> > > some real world utility to warrant the potential maintenance effort.
> >
> > We do have a valid real world utility. It is to provide
> > high-availability of RAID1 storage over the cluster.
On Wed, Jun 10, 2015 at 10:27:27AM -0500, Goldwyn Rodrigues wrote:
> I thought I answered that:
> To use a software RAID1 across multiple nodes of a cluster. Let me
> explain in more words...
>
> In a cluster with multiple nodes and shared storage, such as a
> SAN, the shared device becomes a
On 06/10/2015 03:00 AM, Richard Weinberger wrote:
> On Wed, Jun 10, 2015 at 5:33 AM, Goldwyn Rodrigues wrote:
> > David Lang has already responded: The idea is to use a RAID device
> > (currently only level 1 mirroring is supported) with multiple nodes of the
> > cluster.
> >
> > Here is a description on how to use it:
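The description itself is cut off by the archive's snippet length. As a rough sketch only (device paths and the node setup are placeholder assumptions; consult mdadm(8) and the md-cluster documentation for the real procedure), creating and sharing such an array looks along these lines:

```shell
# On one node, with corosync and the dlm already running, create a RAID1
# array with a clustered write-intent bitmap (device paths are examples):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=clustered /dev/sda /dev/sdb

# On each of the other cluster nodes, assemble the same shared devices:
mdadm --assemble /dev/md0 /dev/sda /dev/sdb
```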
On 06/09/2015 01:22 PM, David Teigland wrote:
> I've just noticed the existence of clustered MD for the first time.
> It is a major new user of the dlm, and I have some doubts about it.
> When did this appear on the mailing list for review?

It first appeared in December, 2014 on the RAID mailing list.
http://marc.info/?l=linux-raid&m=141891941330336&w=2
On Tue, 9 Jun 2015, David Teigland wrote:
We do have a valid real world utility. It is to provide
high-availability of RAID1 storage over the cluster. The
distributed locking is required only during cases of error and
superblock updates and is not required during normal operations,
which makes
Hi David,

On 06/09/2015 02:45 PM, David Teigland wrote:
> On Tue, Jun 09, 2015 at 02:26:25PM -0500, Goldwyn Rodrigues wrote:
> > On 06/09/2015 01:22 PM, David Teigland wrote:
> > > I've just noticed the existence of clustered MD for the first time.
> > > It is a major new user of the dlm, and I have some
I've just noticed the existence of clustered MD for the first time.
It is a major new user of the dlm, and I have some doubts about it.
When did this appear on the mailing list for review?
Dave
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of
46 matches