Understood, thanks!
On Sun, 6 Nov 2022, 21:33 Jeremy McMillan wrote:
Think of each AZ as being a massive piece of server hardware running VMs or
workloads for you. When hardware (or infrastructure maintenance process)
fails, assume everything on one AZ is lost at the same time.
On Sun, Nov 6, 2022, 09:58 Surinder Mehra wrote:
That's partially true. The whole exercise of configuring the AZ as a backup
filter is because we want to handle AZ-level failure.
Anyway, thanks for the input. I will figure out the further steps.
On Sun, 6 Nov 2022, 20:55 Jeremy McMillan wrote:
Don't configure 2 backups when you only have two failure domains.
You're worried about node-level failure, but you're telling Ignite to worry
about AZ-level failure.
On Sat, Nov 5, 2022, 21:57 Surinder Mehra wrote:
Yeah, I think there is a misunderstanding. Although I figured out my answers
from our discussion, I will make one final attempt to clarify my point about
2X space for node 3.
Node setup:
Node 1 and node 2 placed in AZ1
Node 3 placed in AZ2
Since I am using the AZ as a backup filter, as I mentioned in my first …
On Tue, Nov 1, 2022 at 10:02 AM Surinder Mehra wrote:
I think additional backup copies in the same AZ are superfluous if we start
with the assumption …
Thanks for the suggestions. Will try to set up the infrastructure as
suggested, and will explore the topology validator to see if it can be used.
On Tue, 1 Nov 2022, 21:51 Jeremy McMillan wrote:
Can you tell two stories which both start with all nodes in the intended
cluster configuration down, one story resulting in a successful cluster
startup, but the other detecting an invalid configuration and refusing to
start?
I can anticipate problems understanding what to do when the first node at …
You can use a Topology Validator to define when a cache is valid.
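As a rough illustration of what such a validator could check, here is a minimal plain-Java sketch with no Ignite dependency (the class and method names are hypothetical). It mirrors the kind of test an Ignite `TopologyValidator.validate(Collection<ClusterNode>)` implementation might perform: the cache is treated as valid only if the surviving nodes span at least two distinct availability zones.

```java
import java.util.*;

public class AzTopologyCheck {
    // Hypothetical stand-in for a TopologyValidator check. In a real
    // implementation, each zone string would come from something like
    // node.attribute("AVAILABILITY_ZONE") on the surviving cluster nodes.
    static boolean validate(List<String> nodeZones, int minZones) {
        // Valid only if the surviving nodes cover at least minZones failure domains.
        Set<String> distinct = new HashSet<>(nodeZones);
        return distinct.size() >= minZones;
    }

    public static void main(String[] args) {
        // Node 1 and node 2 in AZ1, node 3 in AZ2: two failure domains, valid.
        System.out.println(validate(Arrays.asList("az1", "az1", "az2"), 2)); // true
        // AZ2 lost entirely: only one failure domain left, cache marked invalid.
        System.out.println(validate(Arrays.asList("az1", "az1"), 2)); // false
    }
}
```

With a check like this wired into the cache configuration, losing all nodes of one AZ would make the cache invalid instead of silently running on a single copy.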
On 1 Nov 2022, at 15:02, Surinder Mehra wrote:
Even if we have 2 copies of data, the primary and backup copies would be
stored in different AZs. My question remains valid in this case as well.
Do we have to ensure nodes in two AZs are always present, or does Ignite
have a way to indicate it couldn't create backups? Silently killing backups
is not …
Thanks for your reply. Let me try to answer your 2 questions below.
1. I understand that it sacrifices the backups in case it can't place them
appropriately. The question is: is it possible to fail the deployment
rather than risking a single copy of data being present in the cluster? If
this only copy goes down, …
This question is a design question.
What kinds of fault states do you expect to tolerate? What is your failure
budget?
Why are you trying to make more than 2 copies of the data distributed across
only two failure domains?
Also, "fail fast" means discover your implementation defects faster than
your …
Using the AWS tutorial will get you a backup filter using this
implementation: ClusterNodeAttributeAffinityBackupFilter
There is logic to prevent a cascade of backup data onto survivor nodes in
the case of multiple concurrent failures; see the documentation:
https://ignite.apache.org/releases/
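The skipped-backup behavior discussed in this thread can be seen in a small stand-alone sketch (plain Java, no Ignite dependency; the greedy loop below is an illustrative simplification, not Ignite's actual rendezvous placement). In the spirit of ClusterNodeAttributeAffinityBackupFilter, a candidate node is accepted as a copy holder only if its AZ differs from every copy already placed, so with two AZs and backups=2 the third copy simply cannot be placed.

```java
import java.util.*;

public class BackupFilterSketch {
    // Greedy placement: accept a node as a copy holder only if its zone
    // differs from the zone of every copy placed so far. This mimics an
    // attribute-based backup filter comparing "AVAILABILITY_ZONE" values.
    static int placedCopies(List<String> nodeZones, int backups) {
        List<String> placedZones = new ArrayList<>();
        for (String zone : nodeZones) {
            if (!placedZones.contains(zone))
                placedZones.add(zone);
            if (placedZones.size() == backups + 1)
                break; // primary + all requested backups placed
        }
        return placedZones.size();
    }

    public static void main(String[] args) {
        // Two AZs (az1, az1, az2), backups=2 requested (3 copies total):
        // only 2 copies can be placed; the second backup is skipped.
        System.out.println(placedCopies(Arrays.asList("az1", "az1", "az2"), 2)); // 2
        // Three AZs, backups=2: all 3 copies placed.
        System.out.println(placedCopies(Arrays.asList("az1", "az2", "az3"), 2)); // 3
    }
}
```

This is the pigeonhole at the heart of the thread: with the AZ as the failure domain, you can never hold more distinct copies than you have AZs, regardless of the node count.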
Gentle reminder.
One additional question: we have observed that if the available AZs are
fewer than the backups count, Ignite skips creating backups. Is this a
correct understanding? If yes, how can we fail fast if backups cannot be
placed due to the AZ limitation?
On Mon, Oct 31, 2022 at 6:30 PM Surinder Mehra wrote:
Hi,
As per the link attached, to ensure primary and backup partitions are not
stored on the same node, we used the AWS AZ as a backup filter. Now I can
see that if I start two Ignite nodes on the same machine, primary partitions
are evenly distributed but backups are always zero, which is expected.
https://www.gri
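For reference, the AZ-as-backup-filter setup discussed in this thread boils down to a cache affinity configuration along these lines. This is a sketch based on the Ignite documentation; the cache name is arbitrary, and the AVAILABILITY_ZONE attribute must match a user attribute set on each node (e.g. via IgniteConfiguration.setUserAttributes):

```xml
<property name="cacheConfiguration">
  <bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- One backup: with two AZs, this is the most that can be spread across
         failure domains (see the discussion above). -->
    <property name="backups" value="1"/>
    <property name="affinity">
      <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
        <property name="affinityBackupFilter">
          <bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
            <!-- Backups are placed only on nodes whose AVAILABILITY_ZONE
                 attribute differs from the primary's. -->
            <constructor-arg>
              <array value-type="java.lang.String">
                <value>AVAILABILITY_ZONE</value>
              </array>
            </constructor-arg>
          </bean>
        </property>
      </bean>
    </property>
  </bean>
</property>
```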