Cameron Smith wrote:
Still a little fuzzy about where my data can go in relation to drbd.
My lower level is:
/dev/sda2 /data
my drbd device is:
/dev/drbd1
The docs say not to access the lower level device after drbd is running
so where do I put my files and data?
If it's actually on the drbd1 device how do I access
A-Ha once again! :)
I was trying to mount the secondary /dev/drbd1. DUH.
Can't do that!!
On the primary:
I created a mount point in root called drbd1:
mkdir /drbd1
I made a filesystem on the drbd device:
mkfs.ext3 /dev/drbd1
I mounted the device:
mount -t ext3 /dev/drbd1 /drbd1
Voila!!! :)
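Collected into one sequence, the steps above look like this. A minimal sketch, assuming the resource is named r0 (the resource name is not in the original post) and that this node is the one being promoted:

```shell
# Promote this node to primary for resource r0 (assumed name);
# only one node may be primary unless dual-primary mode is enabled.
drbdadm primary r0

# Make the filesystem on the DRBD device, never on the backing /dev/sda2.
mkfs.ext3 /dev/drbd1

# Mount the DRBD device on the primary only; a secondary cannot mount it.
mkdir -p /drbd1
mount -t ext3 /dev/drbd1 /drbd1
```

These commands require root and a configured DRBD resource, so they are shown as a sketch rather than something to paste blindly.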
So today, I've started a new setup from scratch and it's doing what I want.
It's NFS + DRBD + Heartbeat on Debian 5.
When I reboot or shut down a machine, the failover works great.
But I need some automatic split-brain fixing. I know it's a big no-no, but in our
situation it's just needed be
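DRBD does offer automatic split-brain recovery through the after-sb-* policies in the net section. A minimal sketch, with an assumed resource name r0 and policy choices that are only examples; which victim-selection policies are tolerable depends entirely on how much data loss is acceptable:

```
resource r0 {
  net {
    # no node is primary: discard the node that made no changes
    after-sb-0pri discard-zero-changes;
    # one node is still primary: discard the secondary's changes
    after-sb-1pri discard-secondary;
    # both nodes are primary: drop the connection, manual intervention
    after-sb-2pri disconnect;
  }
}
```

Any automatic policy here silently throws away one node's writes, which is exactly why the docs call it a big no-no.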
On 2010-03-15 15:47, Olivier LAMBERT wrote:
:D
That's what I'm using right now, but it's for Xen on top.
I think this is the only good reason to do that.
Is this to try and distribute the load, or?
Otherwise, why not just use a floating IP address?
--
Best regards,
Christian Iversen
You need to use a cluster-aware filesystem such as OCFS2 or GFS to be
able to mount a DRBD device on both boxes at the same time. It looks
like you are mounting the underlying /dev/sd* block device, not /dev/drbd1.
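Mounting on both nodes also requires dual-primary mode on the DRBD resource itself, in addition to the cluster filesystem on top. A sketch, with r0 as an assumed resource name:

```
resource r0 {
  net {
    # permit Primary/Primary; only safe with a cluster-aware
    # filesystem (OCFS2, GFS) on top of the device
    allow-two-primaries;
  }
}
```

With that in place, `drbdadm primary r0` can be run on both nodes and the cluster filesystem created once on /dev/drbd1.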
On 3/17/2010 11:40 AM, Cameron Smith wrote:
I am stuck.
I did an initial install of DRBD and was able to achieve a sync from
primary to secondary!
Maybe.
My partition that is source on primary and destination on secondary is:
/dev/sda2 /data
When I create files in the /data directory I can see that DRBD is
recognizing that things are
Hello,
we use two Citrix XenServer 5.5 hosts with DRBD and DRBD-Proxy, and we have slow HDD
performance.
We use an extra NIC for DRBD on both servers, and the connection for DRBD is a
symmetric 2 MBit SDSL line.
For dom0, dom0_mem is configured with 2048 MB.
Here is our /etc/drbd.conf:
global { usage
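With only 2 MBit/s between the sites, synchronous replication will throttle every write to link speed, which is why DRBD-Proxy deployments normally run asynchronously. A sketch of the relevant fragment, with r0 as an assumed resource name and the rate only an example sized under the link's roughly 250 KB/s capacity:

```
resource r0 {
  # asynchronous: the local write completes before the peer acknowledges,
  # so application latency is decoupled from the slow WAN link
  protocol A;
  syncer {
    rate 200K;   # keep resync traffic below the 2 MBit/s SDSL capacity
  }
}
```

If the HDDs are slow even for purely local writes, the bottleneck may instead be the RAID controller cache settings rather than the link.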
On Monday, 15 March 2010 at 19:28:34, dbarker wrote:
> Frank Hoffmann wrote:
> > (primary/primary, protocol C) and one resource r0.
> > r0 is mounted (ext4) on both nodes (rw, sync).
>
> As far as I know, ext4 is not a clustering file system (aware that it may
> be shared) and will not reread block
On Wed, Mar 17, 2010 at 04:07:25AM -0700, dbarker wrote:
> Lars Ellenberg wrote:
> > In case you were asking something, I must have missed the question,
> > sorry :-]
>
> Sorry I was a bit obscure. The question is "What happens if you adjust
> verify-alg while connected Primary/Primary and then do a verify?"
> I had already started the verify when I fou
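Since verify-alg is one of the parameters that cannot be changed while the connection is up, the usual approach is to take the connection down around the change. A sketch, with r0 as an assumed resource name; all of these are standard drbdadm subcommands:

```shell
drbdadm disconnect r0   # drop the replication link first
# ... edit drbd.conf to set verify-alg (e.g. sha1) ...
drbdadm adjust r0       # apply the changed configuration
drbdadm connect r0      # re-establish the connection
drbdadm verify r0       # now start the online verify
```

Run on one node while the peer stays configured; the peer reconnects with the newly negotiated algorithm.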
On Wed, Mar 17, 2010 at 11:48:27AM +0100, Jordi Espasa Clofent wrote:
> Ok, after some test work I've realized that the real problem is:
>
> - If you perform a very aggressive shutdown (such as powering off
> directly) of the active node, the passive node doesn't have all the
> records and the mysql slave loses replication.
Ok, after some test work I've realized that the real problem is:
- If you perform a very aggressive shutdown (such as powering off directly)
of the active node, the passive node doesn't have all the records and the
mysql slave loses replication.
- If you perform a regular shutdown of the active node, the passive
On Tue, Mar 16, 2010 at 07:18:20AM -0700, dbarker wrote:
>
> In a thread last October (split-brain after trying to verify?) Lars Ellenberg
> said:
> "You adjusted network parameters (verify-alg), which we still cannot do
> while keeping the connection.
> So your first "adjust" to add the verify-al
On Wed, Mar 17, 2010 at 07:08:20AM +, Henning Bitsch wrote:
> Hi,
> I have a problem running drbd 8.3.7-1 on Debian Lenny (2.6.26-AMD64-Xen).
> I have six drbd devices with a total of 3 TB. Both nodes are Supermicro AMD
> Opteron boxes (one 12 core, one 4 core) with a dedicated 1 G
Hi,
I have a problem running drbd 8.3.7-1 on Debian Lenny (2.6.26-AMD64-Xen).
I have six drbd devices with a total of 3 TB. Both nodes are Supermicro AMD
Opteron boxes (one 12 core, one 4 core) with a dedicated 1 GBit connection for
DRBD and Adaptec 5800 Raid controllers. One side is a NVID