Bill Bogstad wrote:
easier
if you only have to do this on a per-server basis rather than per session.
Which is one place where VM portability shines. State information is
maintained within the VM container. It's still not perfect. A VM's
checkpoint information cannot be updated in real time w
On Tue, Apr 1, 2014 at 3:08 PM, Richard Pieri wrote:
> Derek Martin wrote:
>>
>> It really depends, but mostly it isn't. One of my earliest gigs was
>> managing just such an environment, which mostly included a few custom
>> applications which were not designed to be clustered. With the right
>>
Derek Martin wrote:
It really depends, but mostly it isn't. One of my earliest gigs was
managing just such an environment, which mostly included a few custom
applications which were not designed to be clustered. With the right
hardware, it's fairly trivial. But the right hardware is expensive.
On Tue, Apr 01, 2014 at 12:21:20PM -0400, Richard Pieri wrote:
> Like I wrote yesterday, the hard part is clustering applications.
It really depends, but mostly it isn't. One of my earliest gigs was
managing just such an environment, which mostly included a few custom
applications which were not
Tom Metro wrote:
Isn't this just a natural indicator of immaturity in this market? As the
tech matures, the core needs will be figured out and productized.
HA not mature? Not a product? It's been a mature product in various
forms from various vendors for something like 4 decades.
Like I wrot
> On Mar 31, 2014, at 10:56 PM, John Abreau wrote:
> Christoph's talk a couple weeks ago on LXC and Docker, and Federico's talk
> last year on OpenStack, look to me like the early stages of establishing
> such a taxonomy and building an infrastructure to make the common cases
> easier to develop
Rich Braun wrote:
> Jabr observed:
>> adding HA to a legacy application after the fact is a lot like adding
>> security to an application after it's been developed, instead of
>> addressing security as part of the application development process.
>
> Very astute.
Isn't this just a natural indicat
Jabr observed:
> adding HA to a legacy application after the fact is a lot like adding
> security to an application after it's been developed, instead of
> addressing security as part of the application development process.
Very astute. Wouldn't it be nice if there were some OpenStack-like HA
fram
As I see it, the problem is that we still treat each and every HA cluster
as a unique snowflake and build the whole thing from scratch.
But while there are many aspects to HA that are dependent on the particular
set of services and applications being run, there are enough commonalities
that it shou
Tom Metro wrote:
I think much of the rest ends up being carefully developed in-house
configurations that haven't been shared back with the community.
That's because an HA configuration is unique to the services and
applications it's wrapped around. I can share how I implemented a given
HA clu
Rich Braun wrote:
> My goal is simple: mash the power button or yank the network cable from
> either of these machines, and have all the apps still running. Then plug the
> machine back in and have all state restored to full redundancy without having
> to type any commands.
>
> For me, the use-c
On Mon, Mar 31, 2014 at 4:06 PM, Richard Pieri wrote:
> How hard could it be? Really hard. Designing and building reliable HA
> clusters from scratch is one of the hardest things a sysadmin can be called
> upon to do.
Yup. Very tough for legacy apps not designed for anything fancier than
reboot
Bill Ricker wrote:
(Split-brain is why I've avoided remote auto-restart. If you need
distributed HA, you need to architect for hot-hot distributed
load-balancing -- not easily retrofitted to monolithic legacy apps!)
This is a lot of why there's no such thing as a turnkey HA cluster
installatio
On Mon, Mar 31, 2014 at 11:03 AM, Richard Pieri wrote:
> Bill Ricker wrote:
>
>> I've seen a big-name commercial block-replication solution duplicate
>> trashed data to the cold spare ... wasn't pretty !
>>
>
> Another great example of how replication is not backup.
Exactly.
Extra copies of blo
On 03/31/2014 03:00 PM, Rich Braun wrote:
For me, the use-case for HA technology really isn't just about
designing around failure. The more-important thing for me as a
weekend-hobbyist user is being able to take something down, mess with
it/upgrade it/overhaul it for a few hours or days, and pu
Kent Borg wrote regarding HA:
> ... But that just saves you from losing
> your primary hardware in a flood, fire, theft, etc.
>
> There are more ways for things to go wrong. The software maybe has a
> bug that messes up your data, or a human maybe fat-fingers a command ...
For me, the use-case for
> ma...@mohawksoft.com wrote:
>> OK, that's a pretty stupid thing to do. Who would do that? That's the
>
> DRBD does precisely this.
That will teach me to come in mid-thread. Yes, I have looked at that
before. That isn't a backup, per se; that's an HA fail-over mechanism.
In the case of "A" being
ma...@mohawksoft.com wrote:
OK, that's a pretty stupid thing to do. Who would do that? That's the
worst of both worlds. Not only are you backing up EVERY block, you aren't
even preserving old data. Hell, you aren't even excluding uninitialized
disk blocks. So, even if
John Abreau wrote:
I believe you missed Rich's point. He's not talking about advances in
solving the problem, he's talking about advances in making it easier and
less expensive to deploy the solution.
We reached the point of least cost and least effort several decades ago.
--
Rich P.
On Mon, Mar 31, 2014 at 12:04 PM, Richard Pieri wrote:
> You don't see advancement in HA clustering because we reached the pinnacle
> over 30 years ago. It's a well-understood problem with literally decades of
> history backing up a handful of best practices. Anything new is just a
> specific imp
Daniel Feenberg wrote:
would be pretty straightforward. I don't know why client-side HA
features have never shown up in standards since DNS was defined, but
they haven't.
Because at the time DNS became a standard, a typical "client" was
something like a VT-100 or DECserver wired to the highly a
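One client-side mechanism that did later make it into the standards is the SRV record (RFC 2782), which carries a priority and weight so clients can fail over on their own. A rough Python sketch of the selection logic, assuming the records have already been fetched (the record list and hostnames below are made up for illustration):

```python
import random

def pick_srv_target(records):
    """Pick a target from SRV-style (priority, weight, target) tuples.

    Lower priority wins; within a priority group, targets are chosen
    in proportion to their weights, RFC 2782 style.
    """
    if not records:
        raise ValueError("no records")
    best = min(r[0] for r in records)
    group = [r for r in records if r[0] == best]
    total = sum(w for _, w, _ in group)
    if total == 0:
        return random.choice(group)[2]
    roll = random.uniform(0, total)
    for _, weight, target in group:
        roll -= weight
        if roll <= 0:
            return target
    return group[-1][2]

def resolve_with_failover(records, try_connect):
    """Walk the candidates, dropping a target when a connect attempt fails."""
    remaining = list(records)
    while remaining:
        target = pick_srv_target(remaining)
        if try_connect(target):
            return target
        remaining = [r for r in remaining if r[2] != target]
    raise ConnectionError("all targets failed")
```

This is only the client half of the problem, of course; the servers still have to keep the data consistent between them.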
> ma...@mohawksoft.com wrote:
>> I currently work at a fairly high end deduplicated backup/recovery
>> system
>> company. In a deduplicated system, a "new" backup should not ever be
>> able
>> to trash an old backup. Period. Only "new" data is added to a
>> deduplicated
>> pool and old references a
On Mon, 31 Mar 2014, Rich Braun wrote:
Edward Ned Harvey wrote:
Hehhehe - No. The goal is mash the power button, with the results described
above, while using only 2 servers and free software. ;-)
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to
Rich Braun wrote:
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to pay the electric bill running a third server.
You mean like Red Hat Cluster Suite and Pacemaker?
You don't see advancement in HA clustering because we reached the
pinnacle over 30 yea
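For reference, a minimal Pacemaker setup along those lines might look like the following pcs sketch. This is not a tested configuration: the node names, addresses, and fence-agent parameters are placeholders, and pcs syntax varies between versions (older releases use `pcs cluster setup --name`).

```shell
# Hypothetical two-node cluster; node1/node2 and all addresses are placeholders.
pcs cluster setup --name webcluster node1 node2
pcs cluster start --all

# Fencing first: a cluster without working STONITH will split-brain eventually.
# The agent and its parameters depend on your hardware.
pcs stonith create fence-node1 fence_ipmilan ipaddr=10.0.0.101 \
    login=admin passwd=secret pcmk_host_list=node1

# A floating IP that follows the service between nodes.
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.1.50 \
    cidr_netmask=24 op monitor interval=30s

# Keep the web server wherever the VIP is, and start it after the VIP.
pcs resource create web systemd:httpd op monitor interval=60s
pcs constraint colocation add web with vip INFINITY
pcs constraint order vip then web
```

The hard part, as noted elsewhere in the thread, isn't these commands; it's making the application itself safe to yank out from under a client.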
ma...@mohawksoft.com wrote:
I currently work at a fairly high end deduplicated backup/recovery system
company. In a deduplicated system, a "new" backup should not ever be able
to trash an old backup. Period. Only "new" data is added to a deduplicated
pool and old references are untouched. Old dat
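The append-only property being described can be illustrated with a toy content-addressed store in Python (block size, names, and structure are simplified for illustration; real dedup systems add indexing, garbage collection, and integrity checks on top):

```python
import hashlib

class DedupStore:
    """Toy content-addressed pool: each block is stored once, keyed by hash.

    A backup is just a list of block hashes, so writing a new backup can
    only add blocks to the pool -- it never rewrites anything an older
    backup references.
    """
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.pool = {}      # hash -> block bytes
        self.backups = {}   # name -> list of hashes

    def backup(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.pool.setdefault(h, block)  # only "new" data enters the pool
            hashes.append(h)
        self.backups[name] = hashes

    def restore(self, name):
        return b"".join(self.pool[h] for h in self.backups[name])
```

Even if a later backup writes garbage, the earlier backup's hash list still points at the untouched original blocks.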
> Bill Ricker wrote:
>> I've seen a big-name commercial block-replication solution duplicate
>> trashed data to the cold spare ... wasn't pretty !
>
> Another great example of how replication is not backup.
I call FUD! That is more of an example of how a bad program can corrupt data.
I currently
On 03/31/2014 11:03 AM, Richard Pieri wrote:
Bill Ricker wrote:
I've seen a big-name commercial block-replication solution duplicate
trashed data to the cold spare ... wasn't pretty !
Another great example of how replication is not backup.
Or, another way of looking at it: a demonstration t
Edward Ned Harvey wrote:
> Hehhehe - No. The goal is mash the power button, with the results described
> above, while using only 2 servers and free software. ;-)
Well, if the free or low-cost software existed to make it work well, I'd
eagerly pony up to pay the electric bill running a third ser
Bill Ricker wrote:
I've seen a big-name commercial block-replication solution duplicate
trashed data to the cold spare ... wasn't pretty !
Another great example of how replication is not backup.
--
Rich P.
Discuss mailing list
Discuss@blu.org
> From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss-
> bounces+blu=nedharvey@blu.org] On Behalf Of Rich Braun
>
> My goal is simple: mash the power button or yank the network cable from
> either of these machines, and have all the apps still running. Then plug the
> machine bac
On Sun, Mar 30, 2014 at 7:31 PM, Richard Pieri wrote:
> Just be sure to do your backups because DRBD will happily replicate
> trashed data to the cold node.
I've seen a big-name commercial block-replication solution duplicate
trashed data to the cold spare ... wasn't pretty !
--
Bill
@n1vux
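The distinction the two of them are making can be shown with a toy model: synchronous replication faithfully mirrors every write, good or bad, while a point-in-time backup survives the bad one. Class and variable names here are illustrative, not any real product's API:

```python
class ReplicatedVolume:
    """Toy synchronous block replication: every write lands on both nodes."""
    def __init__(self):
        self.primary = {}    # block number -> data
        self.secondary = {}

    def write(self, block_no, data):
        self.primary[block_no] = data
        self.secondary[block_no] = data  # replication copies garbage too

    def snapshot(self):
        return dict(self.primary)  # a backup is a point-in-time copy

vol = ReplicatedVolume()
vol.write(0, b"good data")
backup = vol.snapshot()           # taken before the corruption
vol.write(0, b"\x00garbage")      # application bug trashes the block
```

After the bad write, both replicas hold garbage; only the snapshot still has the good data.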
Kent Borg wrote:
There is a non-free DRBD that handles more than two nodes. There might
be something useful down that path.
The point of a quorum disk is the race condition. When the cluster
splits the nodes race to fence the quorum disk and write their
signatures. Fencing ensures that only o
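That race can be sketched in a few lines of Python. This is a toy model: a mutex stands in for the atomic reservation a real quorum device provides (e.g. SCSI-3 persistent reservations), and the names are illustrative:

```python
import threading

class QuorumDisk:
    """Toy quorum device: when the cluster splits, each side races to
    claim it, and only the winner may keep running services."""
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None

    def try_claim(self, node):
        # A real quorum disk makes this claim atomic at the storage layer;
        # the mutex stands in for that atomicity here.
        with self._lock:
            if self.owner is None:
                self.owner = node
                return True
            return self.owner == node

def on_partition(node, qdisk):
    """Run by each side of a split-brain: the winner serves, the loser
    fences itself rather than risk divergent writes."""
    if qdisk.try_claim(node):
        return "run services"
    return "self-fence"
```

The point is that exactly one side can win the claim, so at most one side keeps writing.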
On 03/30/2014 05:51 PM, Richard Pieri wrote:
A more detailed plan for the basic Xen + checkpoint + DRBD
configuration I described.
I played with Xen back when, but I didn't like the too-tight coupling
between the host and guest OSs. I forget what burned me but I upgraded
something (the host?)
Kent Borg wrote:
Not a high performance model but a high availability model that doesn't
care much about what happens inside the VMs. A given VM that isn't
otherwise interested in rebooting might be run for years in such a rig.
A more detailed plan for the basic Xen + checkpoint + DRBD configur
I am sure I have blathered about this before...
Something I looked at in a previous job (they didn't bite) was replacing
a hodgepodge of physically dying servers with a pair of modern servers,
each big enough to carry the load, but set up rather redundantly. More
recently I was thinking I might
Rich Braun wrote:
It's 2014 and I figured that maybe the state of the art in RAIS (true
clustering of servers vs. disks) might have gotten somewhere since the last
time I looked at the idea in about 2011.
There's no such thing. The only major implementation of paired
(mirrored) processing that
Bill wrote:
> ...The nicest HA solutions available today do
> require apps be "cloud" enabled, which is to say fully virtualized;
Quick response, thanks! Yes, I do virtualize things here (using VirtualBox)
but that doesn't solve much of the problem for a home user. I've actually
been de-virtuali
Hi Rich !
Commercial practice varies. The nicest HA solutions available today do
require apps be "cloud" enabled, which is to say fully virtualized; you can
then in-house them by building your own mini-cloud.
Choice 1 is whether storage is replicated or shared. Shared can be a
cluster FS or a bac
It's 2014 and I figured that maybe the state of the art in RAIS (true
clustering of servers vs. disks) might have gotten somewhere since the last
time I looked at the idea in about 2011.
I have two home servers (down from 3, the electric bills are punitive where I
now live) with a dozen services r