Markus,

Indeed, your logs say that SSH to port 22 failed ... try to investigate further (tcpdump during the migration?).
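One detail worth checking in the logs quoted below: the failed migration targets 10.162.195.3, while the successful ping/ssh tests used 10.162.192.3. A quick shell sanity check (addresses copied from the logs in this thread; assuming plain /24 networks) shows they sit in different subnets:

```shell
# Addresses taken from the logs in this thread.
migration_target=10.162.195.3   # from the "No route to host" error
tested_host=10.162.192.3        # from the successful ping/ssh tests

# Compare the first three octets (i.e. assume a /24 netmask):
if [ "${migration_target%.*}" = "${tested_host%.*}" ]; then
    echo "same /24 network"
else
    echo "different /24 networks"
fi
# prints "different /24 networks"
```

If the cluster really lives on 10.162.192.0/24, the 10.162.195.3 address the migration uses would explain "No route to host" even though manual ssh works.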
Regards,
Luca

On Wed, Jun 5, 2013 at 10:56 AM, Arns, Markus <[email protected]> wrote:
> Hi Luca,
>
> thanks for your fast reply.
> The IP address and the subnet should be fine. I've checked them several times.
>
> I'm also able to ping from Node 2 -> Node 3:
>
> root@node2:~# ping 10.162.192.3
> PING 10.162.192.3 (10.162.192.3) 56(84) bytes of data.
> 64 bytes from 10.162.192.3: icmp_req=1 ttl=64 time=0.220 ms
> 64 bytes from 10.162.192.3: icmp_req=2 ttl=64 time=0.234 ms
> 64 bytes from 10.162.192.3: icmp_req=3 ttl=64 time=0.233 ms
>
> And the SSH connection works as well:
>
> root@node2:~# ssh [email protected]
> Linux node3 2.6.32-20-pve #1 SMP Wed May 15 08:23:27 CEST 2013 x86_64
>
> The programs included with the Debian GNU/Linux system are free software;
> the exact distribution terms for each program are described in the
> individual files in /usr/share/doc/*/copyright.
>
> Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
> permitted by applicable law.
> Last login: Tue Jun 4 15:46:18 2013 from node1.sp.tum.de
> root@node3:~#
>
> So neither of these should work if there were no route.
>
> Regards
> Markus
>
> From: [email protected] [mailto:[email protected]] On behalf of Luca Fornasari
> Sent: Wednesday, June 5, 2013 10:48
> To: [email protected]
> Subject: Re: [PVE-User] migration aborted Can't connect to destination address using public key
>
> Hi Markus,
>
> Let's talk about your first problem (Node 2 -> Node 3) first:
> The error is pretty obvious ;) "No route to host"
> Please check the IP addresses and subnet masks.
>
> Cheers,
> Luca
>
> On Wed, Jun 5, 2013 at 10:35 AM, Arns, Markus <[email protected]> wrote:
>
> Hi all,
>
> I've taken over a Proxmox (not live) cluster with 3 nodes. I'm also really new to this family, so please be patient. ;-)
>
> Proxmox version of...
> - Node 1: 3.0
> - Node 2: 3.0
> - Node 3: 3.0
>
> root@node1:~# pvecm n
> Node  Sts   Inc   Joined               Name
>    0   M      0   2013-06-04 12:40:42  /dev/block/8:49
>    1   M    768   2013-06-04 12:40:10  node1
>    2   M    780   2013-06-04 13:26:56  node2
>    3   M    772   2013-06-04 12:40:10  node3
>
> root@node2:~# pvecm n
> Node  Sts   Inc   Joined               Name
>    1   M    788   2013-06-05 10:27:00  node1
>    2   M    784   2013-06-05 10:27:00  node2
>    3   M    788   2013-06-05 10:27:00  node3
>
> root@node3:~# pvecm n
> Node  Sts   Inc   Joined               Name
>    1   M    772   2013-06-04 12:40:11  node1
>    2   M    780   2013-06-04 13:26:56  node2
>    3   M    760   2013-06-04 12:34:25  node3
>
> If I try to migrate a CT from Node 2 -> Node 3:
>
> Jun 05 10:15:22 # /usr/bin/ssh -o 'BatchMode=yes' [email protected] /bin/true
> Jun 05 10:15:22 ssh: connect to host 10.162.195.3 port 22: No route to host
> Jun 05 10:15:22 ERROR: migration aborted (duration 00:00:03): Can't connect to destination address using public key
> TASK ERROR: migration aborted
>
> I'm able to ssh between all those nodes, so I don't know where the problem is.
>
> Another problem I have:
> If I migrate a CT from Node 2 -> Node 1, I receive the following log. The migration works, but the CT stays offline.
>
> Jun 05 10:12:18 starting migration of CT 107 to node 'node1' (10.162.192.1)
> Jun 05 10:12:18 container is running - using online migration
> Jun 05 10:12:18 container data is on shared storage 'local'
> Jun 05 10:12:18 start live migration - suspending container
> Jun 05 10:12:18 dump container state
> Jun 05 10:12:19 dump 2nd level quota
> Jun 05 10:12:20 initialize container on remote node 'node1'
> Jun 05 10:12:21 initializing remote quota
> Jun 05 10:12:21 turn on remote quota
> Jun 05 10:12:21 load 2nd level quota
> Jun 05 10:12:21 # /usr/bin/ssh -o 'BatchMode=yes' [email protected] '(vzdqload 107 -U -G -T < /var/lib/vz/dump/quotadump.107 && vzquota reload2 107)'
> Jun 05 10:12:21 ERROR: online migrate failure - Failed to load 2nd level quota: bash: /var/lib/vz/dump/quotadump.107: No such file or directory
> Jun 05 10:12:21 start final cleanup
> Jun 05 10:12:21 ERROR: migration finished with problems (duration 00:00:04)
> TASK ERROR: migration problems
>
> If I try to start that CT on Node 1, I get the failure below (on Node 2 it still starts fine):
>
> Starting container ...
> vzquota : (warning) Quota is running for id 107 already
> Container is mounted
> Adding IP address(es): 10.162.192.19
> Setting CPU units: 1000
> Setting CPUs: 8
> Unable to start init, probably incorrect template
> Container start failed
> Killing container ...
> Container was stopped
> Container is unmounted
> TASK ERROR: command 'vzctl start 107' failed: exit code 47
>
> Any help is much appreciated.
>
> Regards
> Markus
>
> _______________________________________________
> pve-user mailing list
> [email protected]
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
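For the second problem, the failing step is the remote vzdqload reading /var/lib/vz/dump/quotadump.107, which the remote bash reports as missing. A minimal pre-check on the source node (a sketch; the CT ID and path are taken from the migration log above) would confirm whether the "dump 2nd level quota" step actually produced the file before the migration hands it to the target:

```shell
# CT ID and dump path as they appear in the migration log above.
CTID=107
DUMP="/var/lib/vz/dump/quotadump.$CTID"

# The online migration pipes this file into vzdqload on the target node;
# if the "dump 2nd level quota" step never wrote it, the remote
# "No such file or directory" error above is the expected result.
if [ -f "$DUMP" ]; then
    echo "quota dump present: $DUMP"
else
    echo "quota dump missing: $DUMP"
fi
```

Note that the command in the log redirects the dump on the *remote* shell, so the file must exist on the node where the ssh command runs, not only on the source of the migration.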
