Re: Default route is not configured on Redundant VPC VR (tier2)

2015-12-06 Thread Remi Bergsma
Satoru Nakaya, thanks for reporting the issue. Will discuss it with Wilder.

Daan, I assume he means the 2nd tier on the router(s), which comes back without an IP 
after the reboots and failovers described. (This is the default gateway for 
the VMs in this tier, and the VIP on the router.)

Let's try to reproduce and write a Marvin test for it. 

Regards, Remi 

Sent from my iPhone

> On 06 Dec 2015, at 09:20, Daan Hoogland  wrote:
> 
> Satoru san,
> 
> Knowing your reports in the past I assume this is reproducible and a
> genuine bug. This leaves me to wonder, does it matter which tier is
> rebooted first?
> 
> It is certainly not supposed to happen.
> 
> On Sun, Dec 6, 2015 at 2:32 AM, giraffeg forestg 
> wrote:
> 
>> Hi all.
>> 
>> My environment:
>> 
>> CloudStack 4.6.1 , CentOS7
>> http://packages.shapeblue.com/cloudstack/upstream/centos7/4.6/
>> 
>> Hypervisor CentOS7 , KVM
>> 
>> SystemVM
>> 
>> http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2
>> 
>> 
>> Steps to reproduce:
>> 
>> 1)Create VPC (Redundant VPC offering)
>> 
>> 2)Create tier1 & tier2
>> 
>> 3)Create VM Instance on tier1 & tier2
>> 
>> 4)Check VPC VR IP Address (no problem)
>> 
>> root@r-9-VM:~# ip a
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>inet 127.0.0.1/8 scope host lo
>> 2: eth0:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 0e:00:a9:fe:00:9a brd ff:ff:ff:ff:ff:ff
>>inet 169.254.0.154/16 brd 169.254.255.255 scope global eth0
>> 3: eth1:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 06:ac:80:00:00:21 brd ff:ff:ff:ff:ff:ff
>>inet 10.0.1.102/24 brd 10.0.1.255 scope global eth1
>>inet 10.0.1.103/24 brd 10.0.1.255 scope global secondary eth1
>> 4: eth2:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:7f:b8:00:05 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.0.67/24 brd 172.16.0.255 scope global eth2
>>inet 172.16.0.1/24 brd 172.16.0.255 scope global secondary eth2
>> 5: eth3:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:03:56:00:04 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.1.25/24 brd 172.16.1.255 scope global eth3
>>inet 172.16.1.1/24 brd 172.16.1.255 scope global secondary eth3
>> root@r-9-VM:~#
>> 
>> root@r-10-VM:~# ip a
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>inet 127.0.0.1/8 scope host lo
>> 2: eth0:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 0e:00:a9:fe:00:49 brd ff:ff:ff:ff:ff:ff
>>inet 169.254.0.73/16 brd 169.254.255.255 scope global eth0
>> 3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
>>link/ether 06:ac:80:00:00:21 brd ff:ff:ff:ff:ff:ff
>>inet 10.0.1.102/24 brd 10.0.1.255 scope global eth1
>>inet 10.0.1.103/24 brd 10.0.1.255 scope global secondary eth1
>> 4: eth2:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:19:11:00:06 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.0.233/24 brd 172.16.0.255 scope global eth2
>> 5: eth3:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:20:19:00:05 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.1.231/24 brd 172.16.1.255 scope global eth3
>> root@r-10-VM:~#
>> 
>> 
>> 5)Reboot VPC VR r-9-VM
>> 6)Check VPC VR IP Address (no problem)
>> 
>> root@r-9-VM:~# ip a
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>inet 127.0.0.1/8 scope host lo
>> 2: eth0:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 0e:00:a9:fe:01:28 brd ff:ff:ff:ff:ff:ff
>>inet 169.254.1.40/16 brd 169.254.255.255 scope global eth0
>> 3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
>>link/ether 06:ac:80:00:00:21 brd ff:ff:ff:ff:ff:ff
>>inet 10.0.1.103/24 brd 10.0.1.255 scope global eth1
>>inet 10.0.1.102/24 brd 10.0.1.255 scope global secondary eth1
>> 4: eth2:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:7f:b8:00:05 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.0.67/24 brd 172.16.0.255 scope global eth2
>> 5: eth3:  mtu 1500 qdisc pfifo_fast state
>> UP qlen 1000
>>link/ether 02:00:03:56:00:04 brd ff:ff:ff:ff:ff:ff
>>inet 172.16.1.25/24 brd 172.16.1.255 scope global eth3
>> root@r-9-VM:~#
>> 
>> root@r-10-VM:~# ip a
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>inet 127.0.0.1/8 scope 
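Remi's suggestion of a Marvin test could start from a check like the following. This is only a sketch of my own (the function name and approach are hypothetical, not part of Marvin or CloudStack): it parses saved `ip a` output and lists the non-loopback interfaces that carry no IPv4 address, which is the symptom reported on the tier2 nic.

```python
import re

def interfaces_missing_ipv4(ip_a_output):
    """Parse `ip a` output; return non-loopback interfaces with no inet address."""
    has_inet = {}
    current = None
    for line in ip_a_output.splitlines():
        # Interface header lines look like "3: eth1: <...> mtu 1500 ..."
        m = re.match(r"^\d+:\s+([^:@\s]+)", line)
        if m:
            current = m.group(1)
            has_inet[current] = False
        # Address lines are indented and start with "inet "
        elif current is not None and line.strip().startswith("inet "):
            has_inet[current] = True
    return [name for name, ok in has_inet.items() if not ok and name != "lo"]
```

Run against `ip a` output captured over SSH from each router; an empty result means every tier kept its address.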

Re: cloudstack 4.6 Unable to add the host

2015-12-06 Thread 李学强

On 2015/11/29 19:39, Erik Weber wrote:

yum update nss



Yes, after I ran "yum update" on the system, the host could be added to CloudStack.

Thank you all !


Re: Default route is not configured on Redundant VPC VR (tier2)

2015-12-06 Thread Daan Hoogland
Satoru san,

Knowing your reports in the past I assume this is reproducible and a
genuine bug. This leaves me to wonder, does it matter which tier is
rebooted first?

It is certainly not supposed to happen.

On Sun, Dec 6, 2015 at 2:32 AM, giraffeg forestg 
wrote:

> [full bug report snipped; quoted in the first message above]

Re: Default route is not configured on Redundant VPC VR (tier2)

2015-12-06 Thread Remi Bergsma
Ok, check. You mean the order of the reboot of the routers. Let's make some 
scenarios and try them. 

> On 06 Dec 2015, at 12:06, Daan Hoogland  wrote:
> 
> I know what his example shows, Remi. I'm just wondering whether the order
> of rebooting might be significant.
> 
> On Sun, Dec 6, 2015 at 11:57 AM, Remi Bergsma 
> wrote:
> 
>> Satoru Nakaya, thanks for reporting the issue. Will discuss it with Wilder.
>> 
>> Daan, I assume he means the 2nd tier on the router(s) which returns
>> without ip after the reboots and failovers that are described. (this is the
>> default gw for vms in this tier and the vip in the router)
>> 
>> Let's try to reproduce and write a Marvin test for it.
>> 
>> Regards, Remi
>> 
>> Sent from my iPhone
>> 
>>> On 06 Dec 2015, at 09:20, Daan Hoogland  wrote:
>>> 
>>> Satoru san,
>>> 
>>> Knowing your reports in the past I assume this is reproducible and a
>>> genuine bug. This leaves me to wonder, does it matter which tier is
>>> rebooted first?
>>> 
>>> It is certainly not supposed to happen.
>>> 
>>> On Sun, Dec 6, 2015 at 2:32 AM, giraffeg forestg <
>> giraffefore...@gmail.com>
>>> wrote:
>>> 
>>>> [full bug report snipped; quoted in the first message above]

Re: confirm unsubscribe from users@cloudstack.apache.org

2015-12-06 Thread norbert . klein


Zitat von users-h...@cloudstack.apache.org:


Hi! This is the ezmlm program. I'm managing the
users@cloudstack.apache.org mailing list.

I'm working for my owner, who can be reached
at users-ow...@cloudstack.apache.org.

To confirm that you would like

   norbert.kl...@infosecprojects.net

removed from the users mailing list, please send a short reply
to this address:


users-uc.1449404542.lafjmigbnljoppialcnc-norbert.klein=infosecprojects@cloudstack.apache.org


Usually, this happens when you just hit the "reply" button.
If this does not work, simply copy the address and paste it into
the "To:" field of a new message.

or click here:

mailto:users-uc.1449404542.lafjmigbnljoppialcnc-norbert.klein=infosecprojects@cloudstack.apache.org

Re: Default route is not configured on Redundant VPC VR (tier2)

2015-12-06 Thread Daan Hoogland
I know what his example shows, Remi. I'm just wondering whether the order
of rebooting might be significant.

On Sun, Dec 6, 2015 at 11:57 AM, Remi Bergsma 
wrote:

> Satoru Nakaya, thanks for reporting the issue. Will discuss it with Wilder.
>
> Daan, I assume he means the 2nd tier on the router(s) which returns
> without ip after the reboots and failovers that are described. (this is the
> default gw for vms in this tier and the vip in the router)
>
> Let's try to reproduce and write a Marvin test for it.
>
> Regards, Remi
>
> Sent from my iPhone
>
> > On 06 Dec 2015, at 09:20, Daan Hoogland  wrote:
> >
> > Satoru san,
> >
> > Knowing your reports in the past I assume this is reproducible and a
> > genuine bug. This leaves me to wonder, does it matter which tier is
> > rebooted first?
> >
> > It is certainly not supposed to happen.
> >
> > On Sun, Dec 6, 2015 at 2:32 AM, giraffeg forestg <
> giraffefore...@gmail.com>
> > wrote:
> >
> >> [full bug report snipped; quoted in the first message above]
Re: cloudstack 4.6 compute node cannot be added

2015-12-06 Thread 李学强

On 2015/11/29 15:36, Wei ZHOU wrote:

Have you configured libvirt properly?

2015-11-29 4:21 GMT+01:00 李学强 :


Hi all,
 I set up CloudStack 4.6 with CentOS 6.5 host A (management server, NFS) and CentOS 6.5
host B as the KVM compute node, but the compute node cannot be added. Please help.

  I followed this document: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.6/qig.html

The log /var/log/cloudstack/agent/agent.log contains errors like this:

2015-11-29 05:29:50,617 ERROR [utils.nio.NioConnection] (main:null) Unable
to initialize the threads.
java.io.IOException: Connection closed with -1 on reading size.
 at com.cloud.utils.nio.Link.doHandshake(Link.java:513)
 at com.cloud.utils.nio.NioClient.init(NioClient.java:80)
 at com.cloud.utils.nio.NioConnection.start(NioConnection.java:88)
 at com.cloud.agent.Agent.start(Agent.java:227)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:399)
 at
com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:367)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:351)
 at com.cloud.agent.AgentShell.start(AgentShell.java:461)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)
2015-11-29 05:29:50,620 INFO  [utils.exception.CSExceptionErrorCode]
(main:null) Could not find exception:
com.cloud.utils.exception.NioConnectionException in error code list for
exceptions
2015-11-29 05:29:50,620 ERROR [cloud.agent.AgentShell] (main:null) Unable
to start agent:
com.cloud.utils.exception.CloudRuntimeException: Unable to start the
connection!
 at com.cloud.agent.Agent.start(Agent.java:229)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:399)
 at
com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:367)
 at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:351)
 at com.cloud.agent.AgentShell.start(AgentShell.java:461)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)
Caused by: com.cloud.utils.exception.NioConnectionException: Connection
closed with -1 on reading size.
 at com.cloud.utils.nio.NioConnection.start(NioConnection.java:94)
 at com.cloud.agent.Agent.start(Agent.java:227)
 ... 9 more
Caused by: java.io.IOException: Connection closed with -1 on reading size.
 at com.cloud.utils.nio.Link.doHandshake(Link.java:513)
 at com.cloud.utils.nio.NioClient.init(NioClient.java:80)
 at com.cloud.utils.nio.NioConnection.start(NioConnection.java:88)
 ... 10 more
2015-11-29 05:29:50,625 INFO  [cloud.agent.Agent]
(AgentShutdownThread:null) Stopping the agent: Reason = sig.kill
2015-11-29 05:34:50,195 INFO  [cloud.agent.AgentShell] (main:null) Agent
started
2015-11-29 05:34:50,196 INFO  [cloud.agent.AgentShell] (main:null)
Implementation Version is 4.6.0
2015-11-29 05:34:50,198 INFO  [cloud.agent.AgentShell] (main:null)
agent.properties found at /etc/cloudstack/agent/agent.properties
2015-11-29 05:34:50,202 INFO  [cloud.agent.AgentShell] (main:null)
Defaulting to using properties file for storage
2015-11-29 05:34:50,208 INFO  [cloud.agent.AgentShell] (main:null)
Defaulting to the constant time backoff algorithm
2015-11-29 05:34:50,223 INFO  [cloud.utils.LogUtils] (main:null) log4j
configuration found at /etc/cloudstack/agent/log4j-cloud.xml
2015-11-29 05:34:50,236 INFO  [cloud.agent.AgentShell] (main:null)
Preferring IPv4 address family for agent connection
2015-11-29 05:34:50,288 INFO  [cloud.agent.Agent] (main:null) id is
2015-11-29 05:34:50,318 INFO  [kvm.resource.LibvirtConnection] (main:null)
No existing libvirtd connection found. Opening a new one


2015-11-29 05:34:50,633 INFO  [org.reflections.Reflections] (main:null)
Reflections took 70 ms to scan 1 urls, producing 7 keys and 10 values
2015-11-29 05:34:50,749 INFO [kvm.resource.LibvirtComputingResource]
(main:null) No libvirt.vif.driver specified. Defaults to BridgeVifDriver.
2015-11-29 05:34:50,770 INFO  [cloud.agent.Agent] (main:null) Agent [id =
new : type = LibvirtComputingResource : zone = 1 : pod = 1 : workers = 5 :
host = 123.1.177.65 : port = 8250
2015-11-29 05:34:50,773 INFO  [utils.nio.NioClient] (main:null) Connecting
to 123.1.177.65:8250
2015-11-29 05:35:50,902 ERROR [utils.nio.NioConnection] (main:null) Unable
to initialize the threads.
java.io.IOException: Connection closed with -1 on reading size.
 at com.cloud.utils.nio.Link.doHandshake(Link.java:513)
 at 
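The repeated "Connection closed with -1 on reading size" generally means the management server dropped the connection during the handshake (which is what the "yum update nss" fix in the other thread addresses). Before digging into TLS, it is worth confirming that port 8250 on the management server is reachable at all from the agent host. A minimal sketch (the helper is my own, not CloudStack code; host and port are taken from the agent log above):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the endpoint shown in the agent log:
# port_reachable("123.1.177.65", 8250)
```

If this returns False, check firewall rules and the `host=` setting in /etc/cloudstack/agent/agent.properties before suspecting the TLS layer.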

Re: cloudstack-management start failed (CloudStack 4.6.1 + CentOS7)

2015-12-06 Thread giraffeg forestg
Good morning,

Thank you.
It worked fine.


[root@acs ~]# cloudstack-setup-management --tomcat7
Starting to configure CloudStack Management Server:
Configure Firewall ...[OK]
Configure CloudStack Management Server ...[OK]
CloudStack Management Server setup is Done!
[root@acs ~]# ls -la /etc/cloudstack/management
total 136
drwxr-xr-x. 3 root root   4096 Dec  6 16:54 .
drwxr-xr-x. 4 root root   4096 Dec  5 19:36 ..
drwxrwx---. 3 root cloud  4096 Dec  5 18:52 Catalina
-rw-r--r--. 1 root root   8945 Dec  1 20:22 catalina.policy
-rw-r--r--. 1 root root   3794 Dec  1 20:22 catalina.properties
-rw-r--r--. 1 root root   1653 Dec  1 20:22 classpath.conf
-rw-r--r--. 1 root root   2211 Dec  6 10:13 cloudmanagementserver.keystore
-rw-r--r--. 1 root root   1357 Dec  1 20:22 commons-logging.properties
-rw-r-----. 1 root cloud  3137 Dec  5 18:56 db.properties
-rw-r--r--. 1 root root    979 Dec  1 20:22 environment.properties
-rw-r--r--. 1 root root    927 Dec  1 20:22 java.security.ciphers
-rw-r--r--. 1 root root  8 Dec  5 18:54 key
-rw-r--r--. 1 root root   7020 Dec  1 20:22 log4j-cloud.xml
-rw-r--r--. 1 root root   6722 Dec  1 20:22 server7-nonssl.xml
-rw-r--r--. 1 root root   7251 Dec  1 20:22 server7-ssl.xml
lrwxrwxrwx. 1 root root 45 Dec  6 16:54 server.xml ->
/etc/cloudstack/management/server7-nonssl.xml
-rw-r--r--. 1 root root   1383 Dec  1 20:22 tomcat-users.xml
-rw-r--r--. 1 root root  50475 Dec  1 20:22 web.xml
[root@acs ~]#
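The listing above shows server.xml symlinked to the Tomcat 7 config, which is exactly what --tomcat7 sets up. A sketch of checking that symlink programmatically (the helper is my own; it is demonstrated on a scratch directory so it is safe to run, the real path being /etc/cloudstack/management):

```python
import os
import tempfile

def server_xml_target(confdir):
    """Return the symlink target of server.xml inside confdir."""
    return os.readlink(os.path.join(confdir, "server.xml"))

# Demonstration on a scratch directory mirroring the layout above.
confdir = tempfile.mkdtemp()
nonssl = os.path.join(confdir, "server7-nonssl.xml")
open(nonssl, "w").close()
os.symlink(nonssl, os.path.join(confdir, "server.xml"))
print(server_xml_target(confdir).endswith("server7-nonssl.xml"))  # True
```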

[root@acs ~]# systemctl status cloudstack-management.service
cloudstack-management.service - CloudStack Management Server
   Loaded: loaded (/usr/lib/systemd/system/cloudstack-management.service;
enabled)
   Active: inactive (dead) since Sun 2015-12-06 16:51:51 JST; 3min 6s ago
 Main PID: 3164 (code=exited, status=143)
   CGroup: /system.slice/cloudstack-management.service

Dec 06 16:51:51 acs.dom.local server[3164]: Dec 06, 2015 4:51:51 PM
org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
Dec 06 16:51:51 acs.dom.local server[3164]: SEVERE: The web application
[/client] appears to have started a thread named [FileWatchdog] but has
failed...ory leak.
Dec 06 16:51:51 acs.dom.local server[3164]: Dec 06, 2015 4:51:51 PM
org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
Dec 06 16:51:51 acs.dom.local server[3164]: SEVERE: The web application
[/client] appears to have started a thread named [AgentManager-Handler-1]
but ...ory leak.
Dec 06 16:51:51 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Dec 06 16:53:00 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Dec 06 16:53:14 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Dec 06 16:53:36 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Dec 06 16:54:13 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Dec 06 16:54:35 acs.dom.local systemd[1]: Stopped CloudStack Management
Server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@acs ~]#

[root@acs ~]# systemctl start cloudstack-management.service
[root@acs ~]#

[root@acs ~]# systemctl status cloudstack-management.service
cloudstack-management.service - CloudStack Management Server
   Loaded: loaded (/usr/lib/systemd/system/cloudstack-management.service;
enabled)
   Active: active (running) since Sun 2015-12-06 16:55:42 JST; 2s ago
 Main PID: 11377 (java)
   CGroup: /system.slice/cloudstack-management.service
           └─11377 java -Djava.awt.headless=true
-Dcom.sun.management.jmxremote=false -Xmx2g -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/cloudsta...

Dec 06 16:55:43 acs.dom.local server[11377]: Dec 06, 2015 4:55:43 PM
org.apache.coyote.AbstractProtocol init
Dec 06 16:55:43 acs.dom.local server[11377]: INFO: Initializing
ProtocolHandler ["ajp-bio-20400"]
Dec 06 16:55:43 acs.dom.local server[11377]: Dec 06, 2015 4:55:43 PM
org.apache.catalina.startup.Catalina load
Dec 06 16:55:43 acs.dom.local server[11377]: INFO: Initialization processed
in 1092 ms
Dec 06 16:55:43 acs.dom.local server[11377]: Dec 06, 2015 4:55:43 PM
org.apache.catalina.core.StandardService startInternal
Dec 06 16:55:43 acs.dom.local server[11377]: INFO: Starting service Catalina
Dec 06 16:55:43 acs.dom.local server[11377]: Dec 06, 2015 4:55:43 PM
org.apache.catalina.core.StandardEngine startInternal
Dec 06 16:55:43 acs.dom.local server[11377]: INFO: Starting Servlet Engine:
Apache Tomcat/7.0.54
Dec 06 16:55:43 acs.dom.local server[11377]: Dec 06, 2015 4:55:43 PM
org.apache.catalina.startup.HostConfig deployDirectory
Dec 06 16:55:43 acs.dom.local server[11377]: INFO: Deploying web
application directory /usr/share/cloudstack-management/webapps/client
[root@acs ~]#



Best regards.

---
Satoru Nakaya (Japan CloudStack User Group)


2015-12-06 15:47 GMT+09:00 Remi Bergsma :

> Good morning,
>
> On centos7 try running this:
>
> cloudstack-setup-management --tomcat7
>
> When I try that, it works fine for me. This is due to tomcat 6/7