[DRBD-user] drbdmanage error message: "IndexError: list index out of range"

2016-12-27 Thread T.J. Yang
The following error message came from deleting the last node (alpha) in a
3-node cluster.

[root@alpha drbd.d]# drbdmanage n
Traceback (most recent call last):
  File "/usr/bin/drbdmanage", line 19, in <module>
    drbdmanage_client.main()
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 4109, in main
    client.run()
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 1372, in run
    self.parse(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 1235, in parse
    args.func(args)
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 2175, in cmd_list_nodes
    node_filter=node_filter_arg)
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 2157, in _get_nodes
    dbus.Array([], signature="s"))
  File "/usr/lib/python2.7/site-packages/drbdmanage_client.py", line 153, in dsc
    if not isinstance(chk[0], dbus.Struct):
IndexError: list index out of range
[root@alpha drbd.d]# drbdmanage --version
drbdmanage 0.98.2; GIT-hash: a736db03f8037f5ad7d5c0ccfd679b8184130d98
[root@alpha drbd.d]#
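The traceback shows the client indexing an empty D-Bus reply: once the last
node is gone, the server returns an empty list and "chk[0]" in dsc() raises
IndexError. A minimal sketch of a defensive guard, assuming dsc() only needs
to validate the shape of the reply (an illustration, not the upstream fix):

    import dbus

    def dsc(chk):
        # After the last node is deleted the server reply is empty, so
        # chk[0] raises IndexError; check the length before indexing.
        if len(chk) == 0 or not isinstance(chk[0], dbus.Struct):
            return []
        return chk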


-- 
T.J. Yang


[DRBD-user] Testing DRBD9*.rpms: Not able to join the second node

2016-12-27 Thread T.J. Yang
Hi

I am testing out the latest DRBD9 RPMs, hoping to create a 3-node cluster of
bravo, alpha, and charlie. I need a pointer to where I went wrong.

1. RPMs created and installed on CentOS 7.3

[root@bravo log]# ls /root/drbd9-rpms/
drbd-bash-completion-8.9.10-1.el7.centos.x86_64.rpm
drbd-debuginfo-8.9.10-1.el7.centos.x86_64.rpm
drbd-heartbeat-8.9.10-1.el7.centos.x86_64.rpm
drbd-kernel-debuginfo-9.0.6-1.el7.centos.x86_64.rpm
drbdmanage-0.98.2-1.noarch.rpm
drbdmanage-0.98.2-1.src.rpm
drbd-pacemaker-8.9.10-1.el7.centos.x86_64.rpm
drbd-udev-8.9.10-1.el7.centos.x86_64.rpm
drbd-utils-8.9.10-1.el7.centos.x86_64.rpm
drbd-xen-8.9.10-1.el7.centos.x86_64.rpm
kmod-drbd-9.0.6_3.10.0_514.2.2-1.el7.centos.x86_64.rpm
[root@bravo log]#

2. Trying to add the 2nd node (alpha) from the bravo node.

drbdmanage add-node alpha  192.168.174.136

The command failed after waiting on the server (which server?).

I logged into alpha to run the command manually.

[root@alpha drbd9-rpms]# /bin/python /usr/bin/drbdmanage join -p 6999
192.168.174.136 1 bravo 192.168.174.141 0 AMcP2OIcqCwO+iTer5Bx
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
Waiting for server: ...
Error: Server currently not ready, please retry later
[root@alpha drbd9-rpms]#

3. messages from bravo:dmesg

[ 8924.782800] drbd .drbdctrl/0 drbd0: my node_id: 0
[ 8924.783326] drbd .drbdctrl/0 drbd0: drbd_bm_resize called with capacity
== 8112
[ 8924.783893] drbd .drbdctrl/0 drbd0: resync bitmap: bits=1014 words=496
pages=1
[ 8924.784785] drbd .drbdctrl/0 drbd0: recounting of set bits took
additional 0ms
[ 8924.785320] drbd .drbdctrl/0 drbd0: Suspended AL updates
[ 8924.785909] drbd .drbdctrl/0 drbd0: disk( Attaching -> Inconsistent )
[ 8924.786426] drbd .drbdctrl/0 drbd0: attached to current UUID:
0004
[ 8924.793622] drbd .drbdctrl/1 drbd1: disk( Diskless -> Attaching )
[ 8924.794266] drbd .drbdctrl/1 drbd1: Maximum number of peer devices = 31
[ 8924.794863] drbd .drbdctrl/1 drbd1: my node_id: 0
[ 8924.795341] drbd .drbdctrl/1 drbd1: Adjusting my ra_pages to backing
device's (32 -> 1024)
[ 8924.795869] drbd .drbdctrl/1 drbd1: my node_id: 0
[ 8924.796384] drbd .drbdctrl/1 drbd1: drbd_bm_resize called with capacity
== 8112
[ 8924.796943] drbd .drbdctrl/1 drbd1: resync bitmap: bits=1014 words=496
pages=1
[ 8924.800141] drbd .drbdctrl/1 drbd1: recounting of set bits took
additional 0ms
[ 8924.800703] drbd .drbdctrl/1 drbd1: Suspended AL updates
[ 8924.801227] drbd .drbdctrl/1 drbd1: disk( Attaching -> Inconsistent )
[ 8924.801735] drbd .drbdctrl/1 drbd1: attached to current UUID:
0004
[ 8924.805723] drbd .drbdctrl: Preparing cluster-wide state change
2748733687 (0->-1 7683/4609)
[ 8924.805754] drbd .drbdctrl: Committing cluster-wide state change
2748733687 (0ms)
[ 8924.806785] drbd .drbdctrl: role( Secondary -> Primary )
[ 8924.807294] drbd .drbdctrl/0 drbd0: disk( Inconsistent -> UpToDate )
[ 8924.807834] drbd .drbdctrl/1 drbd1: disk( Inconsistent -> UpToDate )
[ 8924.808402] drbd .drbdctrl/0 drbd0: size = 4056 KB (4056 KB)
[ 8924.809295] drbd .drbdctrl/1 drbd1: size = 4056 KB (4056 KB)
[ 8924.810123] drbd .drbdctrl: Forced to consider local data as UpToDate!
[ 8924.810637] drbd .drbdctrl/0 drbd0: new current UUID: 2691EE64A4980DDF
weak: FFFE
[ 8924.811408] drbd .drbdctrl/1 drbd1: new current UUID: CFE588BC8E2BF993
weak: FFFE
[ 8924.817828] drbd .drbdctrl: role( Primary -> Secondary )
[ 8925.540669] drbd .drbdctrl: Preparing cluster-wide state change
4197545272 (0->-1 3/1)
[ 8925.540788] drbd .drbdctrl: Committing cluster-wide state change
4197545272 (1ms)
[ 8925.541798] drbd .drbdctrl: role( Secondary -> Primary )
[ 8925.970160] drbd .drbdctrl/1 drbd1: new current UUID: 8A126F160C84B47D
weak: FFFE
[ 9031.800065] drbd .drbdctrl alpha: Starting sender thread (from drbdsetup
[42363])
[ 9031.802265] drbd .drbdctrl alpha: conn( StandAlone -> Unconnected )
[ 9031.804923] drbd .drbdctrl/0 drbd0: new current UUID: E095B253DA29FF13
weak: FFFE
[ 9031.805838] drbd .drbdctrl alpha: Starting receiver thread (from
drbd_w_.drbdctr [42225])
[ 9031.806840] drbd .drbdctrl alpha: conn( Unconnected -> Connecting )
[ 9031.807425] drbd .drbdctrl tcp:alpha: bind before listen failed, err =
-99
[ 9031.808002] drbd .drbdctrl alpha: Failed to initiate connection, err=-99
[ 9031.808551] drbd .drbdctrl alpha: conn( Connecting -> Disconnecting )
[ 9031.811144] drbd .drbdctrl alpha: Connection closed
[ 9031.813713] drbd .drbdctrl alpha: conn( Disconnecting -> StandAlone )
[ 9031.814300] drbd .drbdctrl alpha: Terminating receiver thread
[ 9166.872047] drbd .drbdctrl alpha: Terminating sender thread
[ 9173.689001] drbd .drbdctrl alpha: Starting sender thread (from drbdsetup
[42510])
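The failing step in the log above is "bind before listen failed, err = -99".
Kernel error -99 is -EADDRNOTAVAIL, which can be decoded with the Python
standard library (a small sketch; the numeric values are Linux-specific):

    import errno
    import os

    # err = -99 in the dmesg output above is -EADDRNOTAVAIL on Linux.
    print(errno.errorcode[99])  # 'EADDRNOTAVAIL'
    print(os.strerror(99))      # 'Cannot assign requested address'

EADDRNOTAVAIL usually means the node tried to bind an IP address that is not
configured on any local interface, which would fit a mixed-up local/peer
address in the add-node/join parameters.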

Re: [DRBD-user] Testing new DRBD9 dedicated repo for PVE

2016-12-27 Thread Jean-Daniel Tissot
On 27/12/2016 12:08, Roland Kammerer wrote:
> On Tue, Dec 27, 2016 at 11:26:45AM +0100, Jean-Daniel Tissot wrote:
>> On 27/12/2016 10:19, Roberto Resoli wrote:
> Please stop sending html-only mail.
Sorry about that; I did not see that my mail was HTML-only.
>
>> I used DRBD 9 knowing it is not recommended for production.
> Hm, there might be some over-interpretation, right? IMO all that was
> said is that if you don't need the additional features of DRBD9 (more
> nodes, auto-promote, RDMA, resource management with drbdmanage), then
> stay with drbd 8.4. IMO drbd9 has improved a lot since .0; it's simply
> your decision.
In https://pve.proxmox.com/wiki/DRBD9 they say "DRBD9 integration is
introduced in Proxmox VE 4.x as technology preview".
I don't know what they mean by preview.
I don't have any commercial support for Proxmox or DRBD; our financial
resources are limited.

>> Since we are on a production cluster, is it dangerous to switch to
>> Linbit repository ?
> Switching the repo isn't. Proxmox drbdmanage had very minimal
> modifications, basically using a different storage plugin as default. We
> even ship that modification in our repo. The real thing is that the
> drbdmanage *versions* shipped are very different. Get familiar with the
> new one, especially the changed startup, and try it in some VMs. Know what
> the implications are if nodes are missing and you try to start the
> cluster. There are many posts about that already; I won't repeat them
> here.
>
>> When DRBD 9 will be stable and really usable in production?
> That's not a serious question, right? When will Linux be stable?
For sure, a lot of my servers are on Debian Jessie, which they call stable.
That does not mean there are no bugs, but Jessie can be used in production.
What I mean is: "Is DRBD 9 still in testing? And can we use it in production?"

Sorry for my bad English.

Regards, Jean-Daniel


Re: [DRBD-user] Testing new DRBD9 dedicated repo for PVE

2016-12-27 Thread Roland Kammerer
On Tue, Dec 27, 2016 at 11:26:45AM +0100, Jean-Daniel Tissot wrote:
> On 27/12/2016 10:19, Roberto Resoli wrote:

Please stop sending html-only mail.

> I used DRBD 9 knowing it is not recommended for production.

Hm, there might be some over-interpretation, right? IMO all that was
said is that if you don't need the additional features of DRBD9 (more
nodes, auto-promote, RDMA, resource management with drbdmanage), then
stay with drbd 8.4. IMO drbd9 has improved a lot since .0; it's simply
your decision.

> Since we are on a production cluster, is it dangerous to switch to
> Linbit repository ?

Switching the repo isn't. Proxmox drbdmanage had very minimal
modifications, basically using a different storage plugin as default. We
even ship that modification in our repo. The real thing is that the
drbdmanage *versions* shipped are very different. Get familiar with the
new one, especially the changed startup, and try it in some VMs. Know what
the implications are if nodes are missing and you try to start the
cluster. There are many posts about that already; I won't repeat them
here.

> When DRBD 9 will be stable and really usable in production?

That's not a serious question, right? When will Linux be stable?

Regards, rck


Re: [DRBD-user] Testing new DRBD9 dedicated repo for PVE

2016-12-27 Thread Roberto Resoli
On 27/12/2016 11:26, Jean-Daniel Tissot wrote:
> On 27/12/2016 10:19, Roberto Resoli wrote:
>> I have successfully done the transition to linbit repo drbdmanage.
> Hi,
> Well, I have done a three nodes PVE cluster. I can't use CEPH or others
> storage technologies.
> I used DRBD 9 knowing it is not recommended for production.
> I had some problems at the beginning, but now it's works well.
> HA is working. Live migration is working well.
> DRBD sync take quite a long time sometimes but if it's take too long,
> rebooting the node correct this problem.
> For now, I use Proxmox repository and I don't see DRBD Manage is no more
> present on it.

It will be removed:

https://forum.proxmox.com/threads/drbdmanage-license-change.30404/

On the same forum, Philipp Reisner clarifies:

https://forum.proxmox.com/threads/drbdmanage-license-change.30404/#post-152680

> Since we are on a production cluster, is it dangerous to switch to
> Linbit repository ?

I don't think so, but it is a major switch, so do it with careful
planning, and make sure to have a backup of every VM before starting.

bye,
rob


Re: [DRBD-user] Testing new DRBD9 dedicated repo for PVE

2016-12-27 Thread Jean-Daniel Tissot

  
  
On 27/12/2016 10:19, Roberto Resoli wrote:

> I have successfully done the transition to linbit repo drbdmanage.

Hi,
Well, I have built a three-node PVE cluster. I can't use CEPH or other
storage technologies.
I used DRBD 9 knowing it is not recommended for production.
I had some problems at the beginning, but now it works well.
HA is working. Live migration is working well.
DRBD sync sometimes takes quite a long time, but if it takes too long,
rebooting the node corrects the problem.
For now, I use the Proxmox repository, and I don't see that DRBD Manage
has been removed from it.

Since we are on a production cluster, is it dangerous to switch to the
Linbit repository?
When will DRBD 9 be stable and really usable in production?
Thanks in advance.
Best regards, Jean-Daniel

-- 
Best regards, Jean-Daniel TISSOT
Systems and Network Administrator
Tel: +33 3 81 666 440 Fax: +33 3 81 666 568
Laboratoire Chrono-environnement
16, Route de Gray
25030 BESANÇON Cédex


Re: [DRBD-user] Testing new DRBD9 dedicated repo for PVE

2016-12-27 Thread Roberto Resoli
On 23/12/2016 12:22, Roberto Resoli wrote:
>> So, this is essentially a stop-the-world and then upgrade scenario.
> I agree. Will try directly that.

I have successfully done the transition to linbit repo drbdmanage.

I stopped all VMs (but one) and upgraded to the new components as in the [1]
note in
http://lists.linbit.com/pipermail/drbd-user/2016-December/023418.html.

There are still some problems in the shutdown/startup of drbdmanage, in
particular at node startup. It is something I am still investigating.

Given that I have a rather particular network setup, with full-mesh
networking and no dedicated switch [1], I suspect this is playing a role.

All is up and running nicely, in any case.

rob

[1] http://lists.linbit.com/pipermail/drbd-user/2016-August/023187.html



Re: [DRBD-user] drbd-9.0.6 and drbd-utils-8.9.10

2016-12-27 Thread Roland Kammerer
On Fri, Dec 23, 2016 at 02:55:18PM +0100, Philipp Reisner wrote:
> http://www.drbd.org/download/drbd/utils/drbd-utils-8.9.10.tar.gz

There was a minor flaw in the packaged "./configure" which made building
drbdmon impossible without regenerating the script. No additional code
changes.

186a59a714084026c074ce7d8f2a9d11  drbd-utils-8.9.10.tar.gz
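
To verify a downloaded tarball against the MD5 sum above, a small sketch
(assumes the tarball sits in the current directory):

    import hashlib

    EXPECTED = "186a59a714084026c074ce7d8f2a9d11"
    with open("drbd-utils-8.9.10.tar.gz", "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    print("OK" if digest == EXPECTED else "MISMATCH: " + digest)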

Regards, rck

