Hi All,
I have a problem and I hope that someone can help me.
I have a 3-node cluster with Proxmox 4.4 and DRBD 9.06.1 with
drbdmanage 0.99.4, working in production since 2016, and for the moment
I'd rather not update to the newer version.
Now Proxmox is still working but
Hi,
In /var/lib/drbd.d on my DRBD9 nodes there are not only .res files,
like drbdmanage_vm-106-disk-1.res, but also .res.q files, like
drbdmanage_vm-105-disk-1.res.q.
What are they, and what are they for? Can they be safely removed manually?
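(Not an authoritative answer, but before deleting anything it may help to compare each .res.q file with its .res counterpart; if the .res.q file is just a saved earlier version of the resource file, the diff will show what changed. The paths and resource names below are the ones from this message; this is only a sketch of how to inspect them.)

```
# Compare a resource file with its .res.q counterpart
diff -u /var/lib/drbd.d/drbdmanage_vm-105-disk-1.res \
    /var/lib/drbd.d/drbdmanage_vm-105-disk-1.res.q

# Check what configuration DRBD currently has for that resource
drbdadm dump vm-105-disk-1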
Thanks,
Michele
To DRBD's guys:
After removing and restoring one VM (vm106) I found a strange
situation on my 3-node PVE cluster with DRBD9
on the Primary node:
root@mpve2:~# drbdadm status
.drbdctrl role:Primary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
mpve1 role:Secondary
volume:0
On 11/04/2017 16:41, Michele Rossetti wrote:
Thanks Yannis,
unfortunately neither "drbdmanage resume-all" nor "drbdmanage restart" had any effect.
I'm afraid I'll have to delete and recreate the resources...
Why? Do you have any issue with your resources, apart from the "pending actions"?
Do you think that "pending actions" is not an issue?
So I'll try to move the VMs between nodes, just to be sure, and if
it works I'll leave things as they are...
Michele
At 16:59 +0200 11.04.2017, Roberto Resoli wrote:
On 11/04/2017 16:41, Michele Rossetti wrote:
, Michele Rossetti <rosse...@sardi.it> wrote:
Any help or suggestion about my previous post?
Michele
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
I had the same problem some days ago, creating a KVM on a
non-primary node: the creation failed but, after some attempts, I found with
"drbdmanage a" six disks created, and I had to remove them with
"drbdmanage remove-resource -f".
It seems to be a problem related to VM creation on a non-primary node.
So why, when I used "drbdmanage remove-resource -f
vm-101-disk-6" and others like it, were the resources
removed and no longer present on the disks?
The resources not only disappear from the list
produced by "drbdmanage a" but are also no longer found on the disks
with "locate".
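(For anyone following along, the cleanup sequence discussed above would typically look like the sketch below; the resource name is the one from this thread, and the commands assume drbdmanage 0.99.x on a running cluster.)

```
# List resources and their per-node assignments to spot leftovers
drbdmanage list-resources
drbdmanage list-assignments

# Force-remove a leftover resource on all nodes
drbdmanage remove-resource -f vm-101-disk-6

# Confirm it is gone from the DRBD state
drbdsetup status
```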
Robert, please let us know more.
Hi,
on a 3-node Proxmox 4.4 system with DRBD9, trying to clone one VM
with the PVE GUI (probably wrongly: we tried to clone from a resource
in Secondary state instead of Primary; executed from the Primary it
worked), we now see, in the PVE GUI under Storage > drbd1 > Content, seven disks
belonging to the same
In a 3-node Proxmox 4.4 cluster with DRBD9, with vm100, vm104 and vm105
starting at boot in HA, at the moment used for testing.
After some updates (first to drbdmanage 0.98, now drbdmanage 0.99.2) I
have some problems.
Starting the system, vm100 starts on mpve3 (node 3) while vm104 and
vm105 do not
Any help with my previous message?
Michele
Hi All,
In a 3-node Proxmox 4.4 cluster with DRBD9, with vm100, vm104 and vm105
starting at boot in HA.
After some updates (now drbdmanage 0.99.2) I have some problems.
Starting the system, vm100 starts on mpve3 (node 3) while vm104 and
vm105 do not start.
Here are the outputs of drbd-overview and
I'm freaking out!
In my previous message I sent the syslog of the drbdmanaged startup at
boot time, hoping for help.
Moreover, today, starting the servers, not only did I have to run
"drbdmanage restart", but after doing so, checking "drbdsetup status", I found a
problem on vm-104:
On PVE1:
You can find the syslog of yesterday's drbdmanaged startup at boot here:
http://www.sardi.it/log_drbdmanaged.txt
I'm not able to understand the problem.
Thanks in advance for any further help.
Michele
On 13/01/2017 21:01, Michele Rossetti wrote:
Stupid me, now everything is working; maybe the resources went down during the upgrade?
Thanks a lot,
Michele
On 13/01/2017 19:58, Michele Rossetti wrote:
root@mpve1:~# drbdsetup status
.drbdctrl role:Secondary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
mpve2 role:Secondary
volume:0
Any help or suggestions?
Thanks,
Michele
On 10/01/2017 10:14, Roberto Resoli wrote:
On 09/01/2017 19:20, Michele Rossetti wrote:
was created by an update to 4.4 or a DRBD update.
Michele
On 09/01/2017 19:20, Michele Rossetti wrote:
Does this mean that in a PVE cluster of 3 servers with updated DRBD9 it isn't
possible to restore KVM virtual machines?
Are other people on the list seeing the same problem, or is it only in your configuration?
Just asking before I update ;-)
Thanks,
Michele
On 28/12/2016 16:00, Roberto Resoli wrote:
Thanks Roland,
the resources seem the same on all servers:
root@mpve1:~# drbdmanage list-assignments
WARNING:root:Could not read configuration file '/etc/drbdmanaged.cfg'
+--+
| Node | Resource | Vol ID | State |
Thanks, Rob.
Do I understand correctly that you use PVE and DRBD only for testing and not in
production?
On 02/01/2017 14:53, Michele Rossetti wrote:
One piece of advice from those who use DRBD9 on PVE in production, please: we have a
cluster of 3 PVE 4.3 servers with the latest DRBD version available
from the PVE repository.
I read on the list that there are several problems with the new DRBD
version, which is no longer included in the PVE repository, but I don't understand
Hi All,
I don't know if I'm in a split-brain situation on my new
3-node Proxmox 4 cluster with DRBD9; this is the output of drbd-overview:
root@mpve2:~# drbd-overview
0:.drbdctrl/0 Connected(3*) Secondary(3*)
UpTo(mpve2)/UpTo(mpve3,mpve1)
1:.drbdctrl/1
Communication between the nodes seems to be OK; I can reach each
node from the others with ssh.
Michele
, as the
nodes already exist?).
Regards,
Michele
Hi all,
I've set up a cluster of 3 servers with Proxmox 4.2 and DRBD9: Proxmox
on 2 HDDs in RAID1 and DRBD9 on 4 HDDs in RAID10.
DRBD9 uses a full-mesh network configuration, with 2 dedicated
Ethernet interfaces per node, connected with crossover cables (no
switch).
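(As an aside: a full mesh of 3 nodes means each node has one direct link to each of its two peers. A minimal /etc/network/interfaces sketch for one node is shown below; the interface names and addresses are made-up examples, not taken from this setup.)

```
# Hypothetical direct links from mpve1 to its two peers
auto eth2
iface eth2 inet static
    address 10.10.12.1      # link mpve1 <-> mpve2
    netmask 255.255.255.0

auto eth3
iface eth3 inet static
    address 10.10.13.1      # link mpve1 <-> mpve3
    netmask 255.255.255.0
```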
Here below the