Using CentOS-6.x on KVM-hosts - what are the threats?

2017-02-17 Thread Vladimir Melnik
Dear colleagues,

I've just realized that my KVM hosts are running CentOS 6, whereas CentOS 7
is recommended for the new versions of ACS. Everything seems to be fine
(some of these hosts have been running for a few years) and things are
great, but I'd like to ask a couple of questions. Here they are.

(1) How high is the chance that the next version of ACS (4.10 or 4.11)
will be incompatible with CentOS 6? Should I worry about that and
consider upgrading to CentOS 7 immediately?

(2) What ACS features am I missing because of that? I suppose that I'll
be disappointed if I try to limit a VM's IO consumption, simply because
good old QEMU 0.9 won't support it. Am I right? Are there other things
that make upgrading to CentOS 7 worthwhile?
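As far as I understand, the IO limits that ACS disk offerings request end up
as libvirt block-IO-throttling calls on the host, something like this (the
domain and device names here are just examples):

virsh blkdeviotune i-2-42-VM vda --total-iops-sec 500 --total-bytes-sec 10485760

Upstream QEMU gained this throttling in 1.1, so whether it works on a
CentOS-6 host presumably depends on what was backported into its qemu-kvm
package.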

Thank you very much in advance for your replies!

-- 
V.Melnik


Re: Ubuntu 16.04, Openvswitch, cloudstack-agent and secondary storage IP

2017-02-17 Thread Engelmann Florian
Hi there,

it looks like the IP mismatch is irrelevant in this case:

10.1.2.238:/exports/zh-ep-z00/template/tmpl/2/296 53685002240 362133504 53322868736 1% /mnt/4276a235-9ab3-3f5f-b6e3-a19142e7f524
root@ewcstack-vh023-test:~# ip r g 10.1.2.238
10.1.2.238 dev secstore0  src 10.1.2.233 
cache 

Secondary storage is working fine anyway.

All the best,
Flo



From: Engelmann Florian 
Sent: Friday, February 17, 2017 1:26 PM
To: users@cloudstack.apache.org
Subject: Ubuntu 16.04, Openvswitch, cloudstack-agent and secondary storage IP

Hi,

we are currently building a new test setup: ACS + Ubuntu 16.04 KVM + OVS

I am stuck with the KVM host reporting the wrong secondary storage IP:


[...]
"publicIpAddress":"10.1.0.233","publicNetmask":"255.255.255.240","publicMacAddress":"24:8a:07:6c:75:30","privateIpAddress":"10.1.0.233","privateMacAddress":"24:8a:07:6c:75:30","privateNetmask":"255.255.255.240","storageIpAddress":"10.1.0.233","storageNetmask":"255.255.255.240","storageMacAddress":"24:8a:07:6c:75:30","resourceName":"LibvirtComputingResource","gatewayIpAddress":"10.1.0.225","wait":0}},
[...]

The secondary storage IP should be 10.1.2.233

I added the secondary storage traffic label:

KVM traffic label   "secstore0"

and the OVS configuration looks like:

# ovs-vsctl show
c736c85f-badb-4fec-9fd9-fdc94ceed776
    Bridge "cloudbr0"
        Port "bond0"
            Interface "enp136s0"
            Interface "enp136s0d1"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
        Port secstore0
            tag: 2007
            Interface secstore0
                type: internal
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    ovs_version: "2.5.0"

And the IP configuration looks like:

8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 76:50:ad:b6:37:90 brd ff:ff:ff:ff:ff:ff
18: cloud0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether ae:bf:4c:94:c6:42 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
       valid_lft forever preferred_lft forever
    inet6 fe80::acbf:4cff:fe94:c642/64 scope link
       valid_lft forever preferred_lft forever
19: cloudbr0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 24:8a:07:6c:75:30 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.233/28 brd 10.1.0.239 scope global cloudbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::268a:7ff:fe6c:7530/64 scope link
       valid_lft forever preferred_lft forever
20: bond0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether ce:13:53:5a:d8:e1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cc13:53ff:fe5a:d8e1/64 scope link
       valid_lft forever preferred_lft forever
21: secstore0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 8a:eb:53:70:69:40 brd ff:ff:ff:ff:ff:ff
    inet 10.1.2.233/28 brd 10.1.2.239 scope global secstore0
       valid_lft forever preferred_lft forever
    inet6 fe80::88eb:53ff:fe70:6940/64 scope link
       valid_lft forever preferred_lft forever


Any idea why ACS is not able to fetch the correct IP used to access secondary 
storage?

All the best,
Flo




EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch




Re: Ubuntu 16.04, Openvswitch networking issue

2017-02-17 Thread Tim Mackey
In order to use OVS, you need both the virtual switch and the control
plane. Those messages largely boil down to "We can't find an appropriate
control plane for your chosen network topology and hypervisor." This
raises the questions of which control plane you are attempting to use,
and on which hypervisor. Here's a deck[1] I prepared for CloudStack 4.5
which outlines the impact of hypervisor choices. Slide 15 is most
relevant to your problem. If you have a supported control plane, then
it's likely misconfigured and CloudStack doesn't know how to talk to it.
Newer CloudStack versions will vary of course, but the principles are
the same, plus you didn't mention your version ;).

[1]
https://www.slideshare.net/TimMackey/selecting-the-correct-hypervisor-for-cloudstack-45
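
As a quick first check, something like this (CloudMonkey syntax; the zone
UUID is a placeholder) will show which isolation methods your physical
network actually advertises - each guru in your log refuses isolation types
it doesn't handle:

cloudmonkey list physicalnetworks zoneid=<zone-uuid>
cloudmonkey list networkisolationmethods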

-tim

On Fri, Feb 17, 2017 at 9:36 AM, Engelmann Florian <
florian.engelm...@everyware.ch> wrote:

> Hi,
>
> another error I am able to solve:
>
> 2017-02-17 15:24:36,097 DEBUG [c.c.a.ApiServlet] 
> (catalina-exec-26:ctx-30020483)
> (logid:d303f8ef) ===START===  192.168.252.76 -- GET  command=createNetwork&
> response=json=e683eeaa-92c9-4651-91b9-165939f9000c&
> name=net-kvm008=net-kvm008
> 2017-02-17 15:24:36,135 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network, the physical isolation type is not BCF_SEGMENT
> 2017-02-17 15:24:36,136 DEBUG [o.a.c.n.c.m.ContrailGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,137 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-144:ctx-b2cdad73) (logid:eb129204) Seq
> 179-6955246674520311671: Response Received:
> 2017-02-17 15:24:36,137 DEBUG [c.c.a.t.Request] 
> (StatsCollector-5:ctx-4298a591)
> (logid:eb129204) Seq 179-6955246674520311671: Received:  { Ans: , MgmtId:
> 345049101620, via: 179(ewcstack-vh003-test), Ver: v1, Flags: 10, {
> GetStorageStatsAnswer } }
> 2017-02-17 15:24:36,137 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) design called
> 2017-02-17 15:24:36,138 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-5:ctx-4298a591) (logid:eb129204)
> getCommandHostDelegation: class com.cloud.agent.api.GetStorageStatsCommand
> 2017-02-17 15:24:36,138 DEBUG [c.c.h.XenServerGuru] 
> (StatsCollector-5:ctx-4298a591)
> (logid:eb129204) getCommandHostDelegation: class com.cloud.agent.api.
> GetStorageStatsCommand
> 2017-02-17 15:24:36,139 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network, the physical isolation type is not MIDO
> 2017-02-17 15:24:36,139 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-72:ctx-656a03ae) (logid:dd7ada9e) Seq 217-8596245788743434945:
> Executing request
> 2017-02-17 15:24:36,141 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,142 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,144 DEBUG [c.c.n.g.OvsGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,163 DEBUG [o.a.c.n.g.SspGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) SSP not
> configured to be active
> 2017-02-17 15:24:36,164 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,165 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design network using network offering 54 on physical network 200
> 2017-02-17 15:24:36,166 DEBUG [o.a.c.e.o.NetworkOrchestrator]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Releasing
> lock for Acct[3426fb73-70ad-47d9-9c5d-355f34891438-fen]
> 2017-02-17 15:24:36,188 DEBUG [c.c.a.ApiServlet]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) ===END===
> 192.168.252.76 -- GET  command=createNetwork&
> response=json=e683eeaa-92c9-4651-91b9-165939f9000c&
> name=net-kvm008=net-kvm00
>
>
> We do not use BigSwitch or anything like this, just plain Openvswitch with
> Ubuntu 16.04. Any idea what's going on?
>
> All the best,
> Florian
>
> EveryWare AG
> Florian Engelmann
> Systems Engineer
> Zurlindenstrasse 52a
> CH-8003 Zürich
>
> T  +41 44 466 60 00
> F  +41 44 466 60 10
>
> florian.engelm...@everyware.ch
> www.everyware.ch


Re: Ubuntu 16.04, Openvswitch networking issue

2017-02-17 Thread Rafael Weingärtner
I think we may need more information. ACS version, network deployment type,
and hypervisors?

On Fri, Feb 17, 2017 at 10:02 AM, Engelmann Florian <
florian.engelm...@everyware.ch> wrote:

> Hi,
>
> sorry, I meant "I am NOT able to solve"
>
> 
> From: Engelmann Florian 
> Sent: Friday, February 17, 2017 3:36 PM
> To: users@cloudstack.apache.org
> Subject: Ubuntu 16.04, Openvswitch networking issue
>
> Hi,
>
> another error I am able to solve:
>
> 2017-02-17 15:24:36,097 DEBUG [c.c.a.ApiServlet] (catalina-exec-26:ctx-
> 30020483) (logid:d303f8ef) ===START===  192.168.252.76 -- GET
> command=createNetwork=json=e683eeaa-
> 92c9-4651-91b9-165939f9000c=net-kvm008=
> net-kvm008
> 2017-02-17 15:24:36,135 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network, the physical isolation type is not BCF_SEGMENT
> 2017-02-17 15:24:36,136 DEBUG [o.a.c.n.c.m.ContrailGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,137 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-144:ctx-b2cdad73) (logid:eb129204) Seq
> 179-6955246674520311671: Response Received:
> 2017-02-17 15:24:36,137 DEBUG [c.c.a.t.Request] 
> (StatsCollector-5:ctx-4298a591)
> (logid:eb129204) Seq 179-6955246674520311671: Received:  { Ans: , MgmtId:
> 345049101620, via: 179(ewcstack-vh003-test), Ver: v1, Flags: 10, {
> GetStorageStatsAnswer } }
> 2017-02-17 15:24:36,137 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) design called
> 2017-02-17 15:24:36,138 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> (StatsCollector-5:ctx-4298a591) (logid:eb129204)
> getCommandHostDelegation: class com.cloud.agent.api.GetStorageStatsCommand
> 2017-02-17 15:24:36,138 DEBUG [c.c.h.XenServerGuru] 
> (StatsCollector-5:ctx-4298a591)
> (logid:eb129204) getCommandHostDelegation: class com.cloud.agent.api.
> GetStorageStatsCommand
> 2017-02-17 15:24:36,139 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network, the physical isolation type is not MIDO
> 2017-02-17 15:24:36,139 DEBUG [c.c.a.m.DirectAgentAttache]
> (DirectAgent-72:ctx-656a03ae) (logid:dd7ada9e) Seq 217-8596245788743434945:
> Executing request
> 2017-02-17 15:24:36,141 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,142 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,144 DEBUG [c.c.n.g.OvsGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,163 DEBUG [o.a.c.n.g.SspGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) SSP not
> configured to be active
> 2017-02-17 15:24:36,164 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design this network
> 2017-02-17 15:24:36,165 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to
> design network using network offering 54 on physical network 200
> 2017-02-17 15:24:36,166 DEBUG [o.a.c.e.o.NetworkOrchestrator]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Releasing
> lock for Acct[3426fb73-70ad-47d9-9c5d-355f34891438-fen]
> 2017-02-17 15:24:36,188 DEBUG [c.c.a.ApiServlet]
> (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) ===END===
> 192.168.252.76 -- GET  command=createNetwork&
> response=json=e683eeaa-92c9-4651-91b9-165939f9000c&
> name=net-kvm008=net-kvm00
>
>
> We do not use BigSwitch or anything like this, just plain Openvswitch with
> Ubuntu 16.04. Any idea what's going on?
>
> All the best,
> Florian
>
> EveryWare AG
> Florian Engelmann
> Systems Engineer
> Zurlindenstrasse 52a
> CH-8003 Zürich
>
> T  +41 44 466 60 00
> F  +41 44 466 60 10
>
> florian.engelm...@everyware.ch
> www.everyware.ch
>



-- 
Rafael Weingärtner


Re: Ubuntu 16.04, Openvswitch networking issue

2017-02-17 Thread Engelmann Florian
Hi,

sorry, I meant "I am NOT able to solve"


From: Engelmann Florian 
Sent: Friday, February 17, 2017 3:36 PM
To: users@cloudstack.apache.org
Subject: Ubuntu 16.04, Openvswitch networking issue

Hi,

another error I am able to solve:

2017-02-17 15:24:36,097 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-26:ctx-30020483) (logid:d303f8ef) ===START===  192.168.252.76 -- 
GET  
command=createNetwork=json=e683eeaa-92c9-4651-91b9-165939f9000c=net-kvm008=net-kvm008
2017-02-17 15:24:36,135 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network, the physical isolation type is not BCF_SEGMENT
2017-02-17 15:24:36,136 DEBUG [o.a.c.n.c.m.ContrailGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,137 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-144:ctx-b2cdad73) (logid:eb129204) Seq 179-6955246674520311671: 
Response Received:
2017-02-17 15:24:36,137 DEBUG [c.c.a.t.Request] (StatsCollector-5:ctx-4298a591) 
(logid:eb129204) Seq 179-6955246674520311671: Received:  { Ans: , MgmtId: 
345049101620, via: 179(ewcstack-vh003-test), Ver: v1, Flags: 10, { 
GetStorageStatsAnswer } }
2017-02-17 15:24:36,137 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) design called
2017-02-17 15:24:36,138 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru] 
(StatsCollector-5:ctx-4298a591) (logid:eb129204) getCommandHostDelegation: 
class com.cloud.agent.api.GetStorageStatsCommand
2017-02-17 15:24:36,138 DEBUG [c.c.h.XenServerGuru] 
(StatsCollector-5:ctx-4298a591) (logid:eb129204) getCommandHostDelegation: 
class com.cloud.agent.api.GetStorageStatsCommand
2017-02-17 15:24:36,139 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network, the physical isolation type is not MIDO
2017-02-17 15:24:36,139 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-72:ctx-656a03ae) (logid:dd7ada9e) Seq 217-8596245788743434945: 
Executing request
2017-02-17 15:24:36,141 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,142 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,144 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,163 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) SSP not 
configured to be active
2017-02-17 15:24:36,164 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,165 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design network using network offering 54 on physical network 200
2017-02-17 15:24:36,166 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Releasing lock 
for Acct[3426fb73-70ad-47d9-9c5d-355f34891438-fen]
2017-02-17 15:24:36,188 DEBUG [c.c.a.ApiServlet] (catalina-exec-26:ctx-30020483 
ctx-430b6ae1) (logid:d303f8ef) ===END===  192.168.252.76 -- GET  
command=createNetwork=json=e683eeaa-92c9-4651-91b9-165939f9000c=net-kvm008=net-kvm00


We do not use BigSwitch or anything like this, just plain Openvswitch with 
Ubuntu 16.04. Any idea what's going on?

All the best,
Florian

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch




Ubuntu 16.04, Openvswitch networking issue

2017-02-17 Thread Engelmann Florian
Hi,

another error I am able to solve:

2017-02-17 15:24:36,097 DEBUG [c.c.a.ApiServlet] 
(catalina-exec-26:ctx-30020483) (logid:d303f8ef) ===START===  192.168.252.76 -- 
GET  
command=createNetwork=json=e683eeaa-92c9-4651-91b9-165939f9000c=net-kvm008=net-kvm008
2017-02-17 15:24:36,135 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network, the physical isolation type is not BCF_SEGMENT
2017-02-17 15:24:36,136 DEBUG [o.a.c.n.c.m.ContrailGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,137 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-144:ctx-b2cdad73) (logid:eb129204) Seq 179-6955246674520311671: 
Response Received: 
2017-02-17 15:24:36,137 DEBUG [c.c.a.t.Request] (StatsCollector-5:ctx-4298a591) 
(logid:eb129204) Seq 179-6955246674520311671: Received:  { Ans: , MgmtId: 
345049101620, via: 179(ewcstack-vh003-test), Ver: v1, Flags: 10, { 
GetStorageStatsAnswer } }
2017-02-17 15:24:36,137 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) design called
2017-02-17 15:24:36,138 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru] 
(StatsCollector-5:ctx-4298a591) (logid:eb129204) getCommandHostDelegation: 
class com.cloud.agent.api.GetStorageStatsCommand
2017-02-17 15:24:36,138 DEBUG [c.c.h.XenServerGuru] 
(StatsCollector-5:ctx-4298a591) (logid:eb129204) getCommandHostDelegation: 
class com.cloud.agent.api.GetStorageStatsCommand
2017-02-17 15:24:36,139 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network, the physical isolation type is not MIDO
2017-02-17 15:24:36,139 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-72:ctx-656a03ae) (logid:dd7ada9e) Seq 217-8596245788743434945: 
Executing request
2017-02-17 15:24:36,141 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,142 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,144 DEBUG [c.c.n.g.OvsGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,163 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) SSP not 
configured to be active
2017-02-17 15:24:36,164 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design this network
2017-02-17 15:24:36,165 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing to 
design network using network offering 54 on physical network 200
2017-02-17 15:24:36,166 DEBUG [o.a.c.e.o.NetworkOrchestrator] 
(catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Releasing lock 
for Acct[3426fb73-70ad-47d9-9c5d-355f34891438-fen]
2017-02-17 15:24:36,188 DEBUG [c.c.a.ApiServlet] (catalina-exec-26:ctx-30020483 
ctx-430b6ae1) (logid:d303f8ef) ===END===  192.168.252.76 -- GET  
command=createNetwork=json=e683eeaa-92c9-4651-91b9-165939f9000c=net-kvm008=net-kvm00


We do not use BigSwitch or anything like this, just plain Openvswitch with 
Ubuntu 16.04. Any idea what's going on?

All the best,
Florian

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch



Ubuntu 16.04, Openvswitch, cloudstack-agent and secondary storage IP

2017-02-17 Thread Engelmann Florian
Hi,

we are currently building a new test setup: ACS + Ubuntu 16.04 KVM + OVS

I am stuck with the KVM host reporting the wrong secondary storage IP:


[...]
"publicIpAddress":"10.1.0.233","publicNetmask":"255.255.255.240","publicMacAddress":"24:8a:07:6c:75:30","privateIpAddress":"10.1.0.233","privateMacAddress":"24:8a:07:6c:75:30","privateNetmask":"255.255.255.240","storageIpAddress":"10.1.0.233","storageNetmask":"255.255.255.240","storageMacAddress":"24:8a:07:6c:75:30","resourceName":"LibvirtComputingResource","gatewayIpAddress":"10.1.0.225","wait":0}},
[...]

The secondary storage IP should be 10.1.2.233

I added the secondary storage traffic label:

KVM traffic label   "secstore0"
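
The label is set on the Storage traffic type of the zone's physical network;
as a sketch, via CloudMonkey (the traffic-type UUID is a placeholder) that
would be:

cloudmonkey update traffictype id=<traffictype-uuid> kvmnetworklabel=secstore0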

and the OVS configuration looks like:

# ovs-vsctl show
c736c85f-badb-4fec-9fd9-fdc94ceed776
    Bridge "cloudbr0"
        Port "bond0"
            Interface "enp136s0"
            Interface "enp136s0d1"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
        Port secstore0
            tag: 2007
            Interface secstore0
                type: internal
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    ovs_version: "2.5.0"

And the IP configuration looks like:

8: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1
    link/ether 76:50:ad:b6:37:90 brd ff:ff:ff:ff:ff:ff
18: cloud0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether ae:bf:4c:94:c6:42 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
       valid_lft forever preferred_lft forever
    inet6 fe80::acbf:4cff:fe94:c642/64 scope link
       valid_lft forever preferred_lft forever
19: cloudbr0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 24:8a:07:6c:75:30 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.233/28 brd 10.1.0.239 scope global cloudbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::268a:7ff:fe6c:7530/64 scope link
       valid_lft forever preferred_lft forever
20: bond0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether ce:13:53:5a:d8:e1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cc13:53ff:fe5a:d8e1/64 scope link
       valid_lft forever preferred_lft forever
21: secstore0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether 8a:eb:53:70:69:40 brd ff:ff:ff:ff:ff:ff
    inet 10.1.2.233/28 brd 10.1.2.239 scope global secstore0
       valid_lft forever preferred_lft forever
    inet6 fe80::88eb:53ff:fe70:6940/64 scope link
       valid_lft forever preferred_lft forever


Any idea why ACS is not able to fetch the correct IP used to access secondary 
storage?
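
For reference, these are the checks I ran on the host to confirm what the
kernel itself would use for the storage network (a sketch - adapt the NFS
server address and device names to your setup):

ip route get 10.1.2.238        # kernel chooses src 10.1.2.233 via secstore0
ip -4 addr show dev secstore0  # address actually bound to the labelled port
ovs-vsctl port-to-br secstore0 # the OVS bridge owning the port (cloudbr0)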

All the best,
Flo




EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch




Re: Router VM: patchviasocket.py timeout issue on 1 out of 4 networks

2017-02-17 Thread Jeff Hair
I have noticed that sometimes the arping stuff can slow down the router
drastically. At least in 4.7, arping will try to ping a hostname of "None"
(Python's equivalent of null). I'm wondering if this has been fixed in
newer versions.
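
To illustrate (a minimal sketch, not the actual VR code - the interface name
is made up): Python renders a null value as the literal string "None", so a
naively formatted command line hands arping a bogus hostname to resolve:

python -c 'host = None; print("arping -c 1 -I eth0 -A %s" % host)'
# prints: arping -c 1 -I eth0 -A None
# arping then stalls trying to resolve the literal hostname "None"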

*Jeff Hair*
Technical Lead and Software Developer

Tel: (+354) 415 0200
j...@greenqloud.com
www.greenqloud.com

On Fri, Feb 17, 2017 at 4:22 AM, Syahrul Sazli Shaharir 
wrote:

> Hi,
>
> For the benefit of others, I've worked around this issue by:-
> - SSH to the router VM via its link-local IP through the host running the VM
> - tail /var/log/cloud.log to pinpoint the source of the loop
> - Find the looping script in /opt/cloud, edit it and disable the looping part
> - If you do this quickly (before the timeout), the VM will resume booting
> successfully - if not, try again until it succeeds.
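>
> A sketch of that sequence on the KVM host (the 169.254.x.x address is
> whatever the router got on the cloud0 link-local network; the key path and
> port are the usual CloudStack system-VM defaults, and the script name is
> just an example):
>
> ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
> tail -f /var/log/cloud.log          # inside the VR: pinpoint the loop
> vi /opt/cloud/bin/update_config.py  # example: edit whichever script loops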
>
> Full notes for my particular case:
> https://pulasan.my/doku.php?id=cs_masalah_router_vm
>
> Thanks.
>
> On Tue, Dec 20, 2016 at 10:09 AM, Syahrul Sazli Shaharir
>  wrote:
> > On Mon, Dec 19, 2016 at 8:54 PM, Simon Weller  wrote:
> >> When you're in the console, can you ping the host ip?
> >
> > Yes - some (not all) of the IPs assigned on the host.
> >
> >> What are your ip tables rules on this host currently?
> >
> > Chain INPUT (policy ACCEPT)
> > target     prot opt source            destination
> > ACCEPT     udp  --  0.0.0.0/0         0.0.0.0/0         udp dpt:53
> > ACCEPT     tcp  --  0.0.0.0/0         0.0.0.0/0         tcp dpt:53
> > ACCEPT     udp  --  0.0.0.0/0         0.0.0.0/0         udp dpt:67
> > ACCEPT     tcp  --  0.0.0.0/0         0.0.0.0/0         tcp dpt:67
> >
> > Chain FORWARD (policy ACCEPT)
> > target     prot opt source            destination
> > ACCEPT     all  --  0.0.0.0/0         192.168.122.0/24  ctstate RELATED,ESTABLISHED
> > ACCEPT     all  --  192.168.122.0/24  0.0.0.0/0
> > ACCEPT     all  --  0.0.0.0/0         0.0.0.0/0
> > REJECT     all  --  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
> > REJECT     all  --  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
> >
> > Chain OUTPUT (policy ACCEPT)
> > target     prot opt source            destination
> > ACCEPT     udp  --  0.0.0.0/0         0.0.0.0/0         udp dpt:68
> >
> >> Can you dump the routing table as well?
> >
> > Kernel IP routing table
> > Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
> > 0.0.0.0         172.16.30.33    0.0.0.0          UG       0 0          0 cloudbr2.304
> > 10.1.30.0       0.0.0.0         255.255.255.0    U        0 0          0 bond1
> > 10.2.30.0       0.0.0.0         255.255.255.0    U        0 0          0 cloudbr2.352
> > 10.3.30.0       0.0.0.0         255.255.255.0    U        0 0          0 cloudbr2.353
> > 172.16.30.32    0.0.0.0         255.255.255.224  U        0 0          0 cloudbr2.304
> > 169.254.0.0     0.0.0.0         255.255.0.0      U        0 0          0 cloud0
> > 192.168.122.0   0.0.0.0         255.255.255.0    U        0 0          0 virbr0
> >
> >> Have you tried a restart of one of the working networks to see if it
> fails on restart?
> >
> > Yes, and it was able to restart OK. I logged on to each network's router
> > VM console from the patchviasocket.py stage onwards, and found the
> > following difference (both VMs were created and booted on the same
> > host):-
> >
> > 1. Working network router VM ( http://pastebin.com/Y6zpDa6M ) :-
> > .
> >
> > Dec 20 01:37:55 r-686-VM cloud: Boot up process done
> > Dec 20 01:37:55 r-686-VM cloud: VR config: configuation format version
> 1.0
> > Dec 20 01:37:55 r-686-VM cloud: VR config: creating file:
> > /var/cache/cloud/monitor_service.json
> > Dec 20 01:37:55 r-686-VM cloud: VR config: create file success
> > Dec 20 01:37:55 r-686-VM cloud: VR config: executing:
> > /opt/cloud/bin/update_config.py monitor_service.json
> > Dec 20 01:38:16 r-686-VM cloud: VR config: execution success
> > Dec 20 01:38:16 r-686-VM cloud: VR config: creating file:
> > /var/cache/cloud/vm_dhcp_entry.json
> > Dec 20 01:38:16 r-686-VM cloud: VR config: create file success
> > Dec 20 01:38:16 r-686-VM cloud: VR config: executing:
> > /opt/cloud/bin/update_config.py vm_dhcp_entry.json
> > Dec 20 01:38:38 r-686-VM cloud: VR config: execution success
> > Dec 20 01:38:38 r-686-VM cloud: VR config: creating file:
> > /var/cache/cloud/vm_dhcp_entry.json
> > Dec 20 01:38:38 r-686-VM cloud: VR config: create file success
> > Dec 20 01:38:38 r-686-VM cloud: VR config: executing:
> > /opt/cloud/bin/update_config.py vm_dhcp_entry.json
> > Dec 20 01:39:01 r-686-VM cloud: VR config: execution success
> > Dec 20 01:39:01 r-686-VM cloud: VR config: creating file:
> > /var/cache/cloud/vm_metadata.json
> > Dec 20 01:39:01 r-686-VM cloud: VR config: create file success
> > Dec 20 01:39:01 r-686-VM cloud: VR config: executing:
> > /opt/cloud/bin/update_config.py vm_metadata.json
> > Dec 20 01:39:21 

About French Doc Translation - Traduction Française

2017-02-17 Thread Antoine Le Morvan

Hi everybody.

My name is Antoine, and I have started translating the CloudStack admin
documentation into French.
My pull request has now been merged
(https://github.com/apache/cloudstack-docs-admin/pull/41).

What follows in this message is for French contributors.


Hello,

I have started translating the CloudStack administration documentation,
and I have reached 50%.

The first PR was approved last week.

Everyone is now free to build the documentation in French (see
https://github.com/apache/cloudstack-docs-admin) while waiting for the
work to be integrated into ReadTheDocs (which should not take long).


If any of you are interested in taking part in this translation, don't
hesitate. Contributing to documentation or translation is an excellent
way to start contributing to free-software projects.


For those who speak little or no English, that is not a problem. You can
help by proofreading the French that has already been translated (a bit
hastily), fixing grammar and spelling, and also making sure the text is
coherent as a whole.


So don't hesitate to contribute. I am available to help you get started.


Regards,

Thanks to all,


Antoine.