I'm trying to wrap my head around storage setups that might work for
virtualization and I wonder if people here have experience with creating a
drbd setup for this purpose.
What I am currently planning to implement is this:
Two 8-bay storage nodes with 8GB RAM and a dual-core Xeon processor.
Each
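For reference, a minimal two-node DRBD resource definition for a setup like
this might look as follows (hostnames, devices and addresses are placeholders,
not taken from the original post):

resource r0 {
  protocol C;                      # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;           # backing RAID device
    address   192.168.10.1:7788;   # dedicated replication link
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7788;
    meta-disk internal;
  }
}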
On 02/15/2011 04:07 AM, Mike Lovell wrote:
On 02/14/2011 12:59 PM, Dennis Jacobfeuerborn wrote:
I'm trying to wrap my head around storage setups that might work for
virtualization and I wonder if people here have experience with creating
a drbd setup for this purpose.
What I am currently planning to implement is this:
On 02/15/2011 07:15 AM, Mike Lovell wrote:
On 02/14/2011 08:29 PM, yvette hirth wrote:
Mike Lovell wrote:
On 02/14/2011 12:59 PM, Dennis Jacobfeuerborn wrote:
I'm particularly worried about the networking side being a bottleneck
for the setup. I was looking into 10gbit and Infiniband
On 02/15/2011 11:15 AM, James Masson wrote:
I wouldn't use Raid5/6
take a look at these NFS stats - my VM hosting workload is 90% write.
For my everyday VM workloads, the only time there are significant reads from
the VM shared storage
is at VM boot time, and even then, the VM host and storage
Hi,
I'm trying to set up a redundant iSCSI server using DRBD but it seems I'm
unable to get the OS to recognize the partitioning of the DRBD device:
[root@storage2 ~]# fdisk -l /dev/drbd1
Disk /dev/drbd1: 1073 MB, 1073672192 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
On 03/08/2011 04:25 PM, Felix Frank wrote:
On 03/08/2011 04:23 PM, Dennis Jacobfeuerborn wrote:
Hi,
I'm trying to set up a redundant iSCSI server using DRBD but it seems I'm
unable to get the OS to recognize the partitioning of the DRBD device:
[root@storage2 ~]# fdisk -l /dev/drbd1
On 03/08/2011 04:51 PM, Felix Frank wrote:
While kpartx works, I'm wondering why it is necessary. I would expect
/dev/drbd1 to behave like a regular block device and the partitions to
show up, if not after writing the new table, then at least after a reboot.
Eventually I will probably go for an LV
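For anyone hitting the same problem: kpartx can map the partitions inside the
DRBD device explicitly. A short sketch (only the device name is taken from the
thread; the exact mapping names depend on the system):

kpartx -av /dev/drbd1    # read the partition table and create mappings;
                         # partitions then appear under /dev/mapper/,
                         # e.g. /dev/mapper/drbd1p1
kpartx -dv /dev/drbd1    # remove the mappings again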
On 03/09/2011 09:18 AM, Felix Frank wrote:
while partitioning a partition is possible and rather straightforward,
it sure isn't standard practice.
I wasn't actually suggesting creating "partitions in partitions"; I just
wasn't aware that the drbd device nodes are partitions and not basic
block devices.
On 04/24/2011 10:05 PM, Digimer wrote:
Comments in-line.
On 04/24/2011 11:34 AM, Whit Blauvelt wrote:
Digimer,
All useful stuff. Thanks. I hadn't considered three rather than two
networks. That's a good case for it.
Here's what I'm trying to scope out, and from your comments it looks to be
te
On 06/27/2011 10:20 AM, Felix Frank wrote:
On 06/26/2011 05:30 PM, Andreas Hofmeister wrote:
With 'dd' on a system with similar performance (8x15k SAS disks, LSI,
10GBE), I get 1GByte/s into the local block device but only 600MByte/s
into the DRBD device. Using another benchmark ("fio" with async
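A dd test along the lines described above might look like this (paths and
sizes are illustrative, not from the original mail):

dd if=/dev/zero of=/dev/vg0/lv_test bs=1M count=8192 oflag=direct  # raw backing device
dd if=/dev/zero of=/dev/drbd0 bs=1M count=8192 oflag=direct        # DRBD device on top

oflag=direct bypasses the page cache so both runs measure the device rather
than memory.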
On 08/05/2011 09:53 AM, Bart Coninckx wrote:
On 08/05/11 08:29, Caspar Smit wrote:
Hi Matt,
1000M means 1000 MByte/s, NOT 1000 Mbit/s. To reach 1000M you should have at
least one (probably two) 10gbit interface(s). Since you have two 1gbit
interfaces (bonded with balance-rr?) a value between 100M and
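In the 8.3-era configuration this is set in the syncer section, and the value
is in bytes per second. A sketch for a bonded pair of gigabit links (the exact
number is a judgment call, kept below the roughly 200MByte/s such a bond can
carry):

syncer {
  rate 150M;   # ~150 MByte/s, leaving headroom for application I/O
}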
NOTE: 3ware does not play well with anything, has nothing to do with drbd
Can people please stop making these sweeping statements without providing
any context or hard data at all? You say 3ware sucks and Areca rules yet
others say Areca sucks and 3ware rules.
Without hard data this is just sc
On 05/01/2012 08:01 AM, Elias Chatzigeorgiou wrote:
> Hello,
>
> I am looking for some consultancy regarding an openfiler HA cluster.
>
> The setup is mainly based on the well known howtoforge article
>
> (http://www.howtoforge.com/openfiler-2.99-active-passive-with-corosync-pacemaker-and-drbd)
On 05/01/2012 03:13 PM, Elias Chatzigeorgiou wrote:
> Hi Dennis,
> what makes you think openfiler is dead? I believe there are many shops
> that rely on OF for a quick storage solution.
There is no active development taking place. If you go to the community
page and follow the "coding" link you
On 06/06/2012 08:16 AM, Torsten Rosenberger wrote:
> Am 06.06.2012 05:50, schrieb Yount, William D:
>> I understand what heartbeat does in the general sense. Actually configuring
>> it correctly and making it work the way it is supposed to is the problem.
>>
>> I have read the official DRBD/Heartbeat
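For what it's worth, the legacy v1-style Heartbeat setup for DRBD needs only
two files. A minimal sketch, with hostnames, devices and the service as
placeholders:

# /etc/ha.d/ha.cf
keepalive 2
deadtime 30
bcast eth1              # heartbeat over the dedicated link
auto_failback off
node node1 node2

# /etc/ha.d/haresources
node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4 IPaddr::10.0.0.100/24 mysqld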
On 06/12/2012 04:04 PM, Eduardo Diaz - Gmail wrote:
> Don't use crossover cables. In my experience, crossover cables in a two-
> node cluster only cause problems... use a simple switch.
Why would a setup with 2 cables and a switch be more reliable than just a
single cable? That doesn't make sense.
On 06/13/2012 05:56 PM, William Seligman wrote:
> On 6/13/12 11:45 AM, Arnold Krille wrote:
>> On Wednesday 13 June 2012 09:26:45 Felix Frank wrote:
>>> On 06/12/2012 08:23 PM, Dennis Jacobfeuerborn wrote:
>>>>> Don't use crossover cables.. In my experie
Hi,
I just set up a traditional drbd primary/secondary system between two hosts
that are connected with gigabit ports. The problem is that the resync rate
is only 5M even though in the config file I set it to 40M.
I also tried:
drbdadm disk-options --resync-rate=40M drbd1
which doesn't result in
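For reference, the permanent 8.4-style setting lives in the disk section. One
detail worth checking (an assumption based on the symptoms, not stated in the
thread): 8.4 enables a dynamic resync controller by default, and the fixed
resync-rate only applies when that controller is disabled:

disk {
  c-plan-ahead 0;     # disable the dynamic controller so the fixed rate applies
  resync-rate  40M;   # bytes per second, as everywhere in drbd.conf
}

The effective speed can be watched with: watch cat /proc/drbd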
On 08/03/2012 09:37 AM, Felix Frank wrote:
> Hi,
>
> On 08/03/2012 03:27 AM, Dennis Jacobfeuerborn wrote:
>> The problem is that the resync rate
>> is only 5M even though in the config file I set it to 40M.
>
> what's telling you this 5M figure? What does "
On 08/03/2012 03:27 AM, Dennis Jacobfeuerborn wrote:
> Hi,
> I just set up a traditional drbd primary/secondary system between two hosts
> that are connected with gigabit ports. The problem is that the resync rate
> is only 5M even though in the config file I set it to 40M.
>
On 08/14/2012 02:43 AM, Dennis Jacobfeuerborn wrote:
> On 08/03/2012 03:27 AM, Dennis Jacobfeuerborn wrote:
>> Hi,
>> I just set up a traditional drbd primary/secondary system between two hosts
>> that are connected with gigabit ports. The problem is that the resync rate
>
Hi,
Now that I got the re-sync issue sorted by moving to 8.3 instead of 8.4, I'm
investigating why I'm seeing an I/O load of 20% even when I run a fio
sequential write test throttled to 5M/s. Once I disconnect the secondary
node, the I/O wait time decreases significantly, so it seems the connection
to
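The throttled test referred to above could look roughly like this with fio
(the device path is from the thread, everything else is illustrative):

fio --name=seqwrite --filename=/dev/drbd1 --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=4 \
    --rate=5m --runtime=60 --time_based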
On 08/15/2012 03:54 AM, Phil Frost wrote:
> On 08/14/2012 08:26 PM, Dennis Jacobfeuerborn wrote:
>> Hi,
>> now that I got the re-sync issue sorted by moving to 8.3 instead of 8.4 I'm
>> investigating why I'm seeing an i/o load of 20% even when I run a fio
>> se
On 08/16/2012 08:38 AM, Mia Lueng wrote:
> Hi All:
> I set up a drbd device on a storage array. Its write performance can reach
> 300MB/s when tested with dd. But when I set up a drbd device on it and use dd
> to test its write performance (peer node not connected), the test
> result is only 40MB/s. drbd ve
On 08/14/2012 03:51 PM, Nik Martin wrote:
> On 08/13/2012 07:43 PM, Dennis Jacobfeuerborn wrote:
>> On 08/03/2012 03:27 AM, Dennis Jacobfeuerborn wrote:
>>> Hi,
>>> I just set up a traditional drbd primary/secondary system between two hosts
>>> that are connecte
On 13.06.2013 00:19, Robinson, Eric wrote:
Hi Dirk – Thanks for the feedback. I do need some clarification, though.
DRBD replicates disk block changes to a standby volume. If the primary
node suddenly fails, the cluster manager promotes the standby node to
primary and starts the MySQL service. Lo
On 23.07.2014 23:42, Robinson, Eric wrote:
>> Sounds like you want to run Raid1 (drbd) over Raid0 (ssd+md?). This is more
>> fragile than Raid0
>> over Raid1, so you might consider implementing Raid0 using DM over multiple
>> drbd devices,
>> each mirroring a single ssd.
>
>> Andreas
>
>
> Our
On 24.07.2014 03:38, Robinson, Eric wrote:
> On 23.07.2014 23:42, Robinson, Eric wrote:
>>> Sounds like you want to run Raid1 (drbd) over Raid0 (ssd+md?). This
>>> is more fragile than Raid0 over Raid1, so you might consider
>>> implementing Raid0 using DM over multiple drbd devices, each mirroring a single ssd.
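To make the suggested layout concrete: each DRBD device mirrors one local SSD
to the peer, and the replicated devices are striped together on the primary.
The quoted mail suggests DM for the striping; md raid0 is shown here as one
possible way to do it (device names are placeholders):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/drbd1 /dev/drbd2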
Hi,
I'm running a small-ish pacemaker+drbd cluster of two virtual machines,
and when I try to resize it, the resizing of the block device seems to
work fine, but when I finally try to also resize the ext4 filesystem on
/dev/drbd1 I get this:
[root@nfs2 ~]# resize2fs -p /dev/drbd1
resize2fs 1.41.12 (17-May-2010)
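For context, the usual online grow sequence for a DRBD-backed ext4 filesystem
is roughly the following (LV and resource names are assumptions):

lvextend -L +10G /dev/vg0/lv_drbd   # grow the backing device on BOTH nodes
drbdadm resize r0                   # on the primary, with both nodes connected
resize2fs /dev/drbd1                # finally grow the filesystem on the primary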
What specifically don't you get about this? The features DRBD provides
are no different when running in a virtual machine, so I'm not sure what
you think makes this fundamentally problematic.
Regards,
Dennis
On 18.02.2015 17:49, Prater, James K. wrote:
> I continue to see people run DRBD cluster
Hi,
is it possible to add a volume to an existing resource that already has
a volume 0 defined? If so, how would the metadata be initialized for
that new volume, since the meta-disk parameter is per volume but the
create-md command only specifies the complete resource?
Regards,
Dennis
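For illustration, a resource with a second volume would look like this in 8.4
syntax (names and paths are hypothetical). If I remember correctly, drbdadm
also accepts a resource/volume notation, so metadata could be created for just
the new volume; treat that as an assumption to verify against the man page:

resource r0 {
  volume 0 {
    device    /dev/drbd1;
    disk      /dev/vg0/lv0;
    meta-disk internal;
  }
  volume 1 {
    device    /dev/drbd2;
    disk      /dev/vg0/lv1;
    meta-disk internal;
  }
  on nfs1 { address 10.0.0.1:7789; }
  on nfs2 { address 10.0.0.2:7789; }
}

drbdadm create-md r0/1   # possibly: initialize metadata for volume 1 only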
On 08.06.2015 17:35, Lars Ellenberg wrote:
> On Mon, Jun 08, 2015 at 04:36:22AM +0200, Dennis Jacobfeuerborn wrote:
>> Hi,
>> is it possible to add a volume to an existing resource that already has
>> a volume 0 defined? If so how would the meta-data be initialized for
>>
On 26.06.2015 12:42, JA E wrote:
> Hi,
>
> I am a newbie to clustering. I managed to set up a cluster with
> pacemaker, corosync, drbd and pcs on two nodes. But after a test restart of
> both nodes it seems drbd can't mount the desired folder when controlled by
> pacemaker; manually it's fine.
>
>
>> *[
Hi,
I'm trying to set up drbd on two CentOS 7 systems, and while the manual
init of drbd went fine, once I try to start drbd using pacemaker I get an
error in the logs:
Aug 3 04:08:48 nfs1 kernel: drbd: initialized. Version: 8.4.6
(api:1/proto:86-101)
Aug 3 04:08:48 nfs1 kernel: drbd: GIT-hash:
833
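For reference, the pacemaker side on CentOS 7 is typically set up along these
lines with pcs (resource and clone names are hypothetical; the DRBD agent
wants separate monitor operations per role):

pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs resource master ms_drbd_r0 drbd_r0 \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true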
On 15.10.2016 12:56, Gandalf Corvotempesta wrote:
> Hi to all,
> I would like to spin up a new shared storage.
> Should I use drbd 8 or 9?
>
> Additionally, and more importantly: are there any ways to totally avoid
> split-brains? Obviously, the network used for sync is fully redundant (at
> least 2 or
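On the split-brain question: with drbd 8 the usual belt-and-braces approach is
to hook fencing into the cluster manager, e.g. with the crm scripts shipped
with drbd (a sketch, to be adapted to the cluster at hand); drbd 9 additionally
offers a quorum option when three or more nodes are available:

disk {
  fencing resource-and-stonith;
}
handlers {
  fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
  after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
}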
Hi,
today I set up a few nodes to experiment with linstor/drbd9, and while
initially things looked good, I have now run into a problem.
I set up 1 controller and 3 satellite nodes. The satellite nodes each
have 100G storage in an LVM volume group. Satellite and Controller are
deployed using docker and ar
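For comparison, the basic linstor flow for a setup like that is roughly the
following (node name, pool name and size are made up; the VG name follows the
description above):

linstor node create satellite1 192.168.0.11
linstor storage-pool create lvm satellite1 pool0 vg_storage
linstor resource-definition create res0
linstor volume-definition create res0 50G
linstor resource create satellite1 res0 --storage-pool pool0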