Re: Additional local storage as primary storage

2019-07-17 Thread Ivan Kudryavtsev
Shared mountpoint is ok
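
For reference, adding a shared mount point as cluster-wide primary storage via
CloudMonkey looks roughly like the sketch below. This is only a sketch: the IDs, the
path and especially the SharedMountPoint URL form are assumptions to be checked
against the installation guide for your version.

cloudmonkey create storagepool scope=cluster hypervisor=KVM \
    zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
    name=local-vol1 \
    url=SharedMountPoint:///mnt/vol1   # the same path must already be mounted on every host in the cluster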

Thu, Jul 18, 2019, 12:14 Fariborz Navidan :

> Hi
>
> I have already tried this approach. I feel a local NFS mount point adds another
> layer on top of local storage and can affect I/O speed and performance. What do
> you think? What about the SharedMountPoint option? Between it and local NFS,
> which one offers better performance?
>
> Thanks
>
> On Thu, Jul 18, 2019 at 5:38 AM Ivan Kudryavtsev  >
> wrote:
>
> > Hi,
> >
> > As of 4.11.2, there is no way to have multiple local storage pools configured
> > for a single host, and no simple way to work around it. The only workaround I
> > see is a pretty ugly one: a locally mounted NFS export, created as cluster-wide
> > storage in a cluster that contains only that single host...
> >
> > In short, it's not supported: only one local storage pool per host. It's a
> > good feature request, but I'm not sure many people use that topology.
> >
> > Thu, Jul 18, 2019, 4:04 Fariborz Navidan :
> >
> > > Hello,
> > >
> > > I have a few mount points that refer to different block devices on the local
> > > machine. I am trying to add them to CloudStack as additional local primary
> > > storage. Unfortunately, when adding primary storage there is no Filesystem
> > > option to choose. As a result, I modified the storage_pool table, setting the
> > > storage type to Filesystem, and the pool then shows its state as "Up".
> > > However, because the path is under / (such as /home) and / is on a different
> > > disk, it mistakenly reports the storage capacity of the root filesystem rather
> > > than the real size of the filesystem /home belongs to.
> > >
> > > Any idea how to fix this?
> > >
> > > Thanks
> > >
> >
>


Re: Additional local storage as primary storage

2019-07-17 Thread Fariborz Navidan
Hi

I have already tried this approach. I feel a local NFS mount point adds another
layer on top of local storage and can affect I/O speed and performance. What do
you think? What about the SharedMountPoint option? Between it and local NFS,
which one offers better performance?

Thanks

On Thu, Jul 18, 2019 at 5:38 AM Ivan Kudryavtsev 
wrote:

> Hi,
>
> As of 4.11.2, there is no way to have multiple local storage pools configured
> for a single host, and no simple way to work around it. The only workaround I
> see is a pretty ugly one: a locally mounted NFS export, created as cluster-wide
> storage in a cluster that contains only that single host...
>
> In short, it's not supported: only one local storage pool per host. It's a
> good feature request, but I'm not sure many people use that topology.
>
> Thu, Jul 18, 2019, 4:04 Fariborz Navidan :
>
> > Hello,
> >
> > I have a few mount points that refer to different block devices on the local
> > machine. I am trying to add them to CloudStack as additional local primary
> > storage. Unfortunately, when adding primary storage there is no Filesystem
> > option to choose. As a result, I modified the storage_pool table, setting the
> > storage type to Filesystem, and the pool then shows its state as "Up".
> > However, because the path is under / (such as /home) and / is on a different
> > disk, it mistakenly reports the storage capacity of the root filesystem rather
> > than the real size of the filesystem /home belongs to.
> >
> > Any idea how to fix this?
> >
> > Thanks
> >
>


Re: Agent LB for CloudStack failed

2019-07-17 Thread Nicolas Vazquez
Thanks,

I suspect the culprit is the background task trying to reconnect to the 
preferred host (which runs every 60 seconds).

I would suggest disabling the background task by setting the interval to 0. Since
you do not want to change your 'host' global configuration (which would propagate a
new list to the agents), you can do it directly on each agent this way:

- Add this line to agent.properties: host.lb.check.interval=0
- Restart the agent
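
On each KVM host that amounts to something like the following sketch (cloudstack-agent
is the usual service name; adjust if yours differs). The commented-out line shows the
equivalent global setting, which would instead propagate to all agents:

echo 'host.lb.check.interval=0' >> /etc/cloudstack/agent/agent.properties
systemctl restart cloudstack-agent
# global alternative, e.g. via CloudMonkey:
# cloudmonkey update configuration name=indirect.agent.lb.check.interval value=0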

Please let me know if this fixes your issue.


Regards,

Nicolas Vazquez


From: li jerry 
Sent: Thursday, July 18, 2019 12:00 AM
To: d...@cloudstack.apache.org ; 
users@cloudstack.apache.org 
Subject: Re: Agent LB for CloudStack failed

Hi Nicolas

test-ceph-node01

[root@test-ceph-node01 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:39:18 CST 2019
workers=5
guest.network.device=br0
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=88ca642a-e319-3369-b2c9-39c2b2bddc7c
public.network.device=br0
cluster=1
local.storage.uuid=ec28176f-a3db-4383-90c8-6dcdbc45c3e0
keystore.passphrase=O8VdcZqBwWMMxwk2
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=1
host=172.17.1.141,172.17.1.142@roundrobin

this is test-ceph-node02

[root@test-ceph-node02 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:58:23 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
guid=649cbe62-dcac-36ae-a62c-699f0e0b8af1
hypervisor.type=kvm
cluster=1
public.network.device=br0
local.storage.uuid=2fc2f796-0614-40cf-bfdf-37a9429520fb
domr.scripts.dir=scripts/network/domr/kvm
keystore.passphrase=vB48rgCk58vNJC6N
host=172.17.1.142,172.17.1.141@roundrobin
LibvirtComputingResource.id=4

test-ceph-node03

[root@test-ceph-node03 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:39:18 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=4d3742c4-8678-3f21-a841-c1ffa32d0a8d
public.network.device=br0
cluster=1
local.storage.uuid=31ee15cf-b3b2-4387-b081-7c47971b9e68
keystore.passphrase=ACgs24DnBgYkORvh
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=5
host=172.17.1.141,172.17.1.142@roundrobin

test-ceph-node04
[root@test-ceph-node04 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:58:22 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=bfd4b7ba-fd5f-365d-b4d8-a6e8e7c78c0c
public.network.device=br0
cluster=1
local.storage.uuid=2d5004ff-37b1-4f66-bff0-e71ac211f1da
keystore.passphrase=r3D4upcAOdWbwE9p
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=6
host=172.17.1.142,172.17.1.141@roundrobin

From: Nicolas Vazquez
Sent: July 18, 2019, 10:56
To: users@cloudstack.apache.org; 
d...@cloudstack.apache.org
Subject: Re: Agent LB for CloudStack failed

Hi Jerry,

I'd like to request some additional information. Can you provide the value stored
in agent.properties for the 'host' property on each KVM host? I suspect that the
global setting has not been propagated to the agents, since they are trying to
reconnect instead of connecting to the next management server once the preferred
one is down.


Regards,

Nicolas Vazquez


From: li jerry 
Sent: Monday, July 15, 2019 10:20 PM
To: users@cloudstack.apache.org ; 
d...@cloudstack.apache.org 
Subject: Agent LB for CloudStack failed

Hello everyone

My KVM agent LB on 4.11.2/4.11.3 failed. When the preferred management node is
forced to power off, the agent does not immediately connect to the second
management node. Only after 15 minutes does the agent report a "No route to host"
error and connect to the second management node.

management nodes:
acs-mn01, 172.17.1.141
acs-mn02, 172.17.1.142

MySQL DB node:
acs-db01

KVM agent nodes:
test-ceph-node01
test-ceph-node02
test-ceph-node03
test-ceph-node04


global settings

host=172.17.1.142,172.17.1.141
indirect.agent.lb.algorithm=roundrobin
indirect.agent.lb.check.interval=60


Partial agent logs:

2019-07-15 23:22:39,340 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) 
Sending ping: Seq 1-19: { Cmd , MgmtId: -1, via: 1, Ver : v1, Flags: 11, 
[{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType
 ":"Routing","hostId":1,"wait":0}}] }
2019-07-15 23:23:09,960 DEBUG [utils.nio.NioConnection] 
(Agent-NioConnectionHandler-1:null) (logid:) Location 1: Socket 
Socket[addr=/172.17.1.142,port=8250,localport= 34854] closed 

Re: "Command failed due to Internal Server Error" when stopping a VM

2019-07-17 Thread Nicolas Vazquez
Hi,

I have created this PR to fix the issue:
https://github.com/apache/cloudstack/pull/3501. Please test it.
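
For anyone who needs the manual workaround described in the report below (a VM stuck
in the "Stopping" state), it amounts to something like the following sketch against
the cloud database. The instance name is a placeholder, such direct edits are
unsupported, and a database backup first is strongly advised.

mysql -u cloud -p cloud
-- mark the stuck instance as stopped (use with care)
UPDATE vm_instance SET state = 'Stopped'
 WHERE instance_name = 'i-2-10-VM' AND state = 'Stopping';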


Regards,

Nicolas Vazquez


From: Jevgeni Zolotarjov 
Sent: Tuesday, July 16, 2019 6:45 PM
To: users@cloudstack.apache.org 
Subject: Re: "Command failed due to Internal Server Error" when stopping a VM

+1

I have experienced exactly the same problem. My host is CentOS 7.
I would be interested in the solution.

On Tue, 16 Jul 2019, 23:57 daniel bellido, 
wrote:

> Hello,
>
> I've done a fresh install of CloudStack 4.11.3 on two Ubuntu 18.04 servers
> (one hosts the management server, the other is a KVM host).
> I've configured the cloud using the wizard.
> Everything works fine except that I receive "internal server errors" when
> I stop VMs. In the UI the status stays at "Stopping", so the workaround is to
> go to the DB and set the status to "Stopped". Expunging the VM also results in
> an internal server error.
>
> Looking at the management server logs, I can see these errors :
>
>
> 2019-07-16 22:49:59,967 DEBUG [o.a.c.n.t.BasicNetworkTopology]
> (AgentManager-Handler-2:null) (logid:) REMOVING DHCP ENTRY RULE
> 2019-07-16 22:49:59,967 DEBUG [o.a.c.n.t.BasicNetworkTopology]
> (AgentManager-Handler-2:null) (logid:) Applying dhcp entry in network
> Ntwk[204|Guest|6]
> 2019-07-16 22:49:59,971 WARN  [c.c.a.m.AgentManagerImpl]
> (AgentManager-Handler-2:null) (logid:) Caught:
> java.lang.NullPointerException
> at
> org.apache.cloudstack.network.topology.BasicNetworkVisitor.visit(BasicNetworkVisitor.java:201)
> at com.cloud.network.rules.DhcpEntryRules.accept(DhcpEntryRules.java:64)
> at
> org.apache.cloudstack.network.topology.BasicNetworkTopology.applyRules(BasicNetworkTopology.java:390)
> at
> org.apache.cloudstack.network.topology.BasicNetworkTopology.removeDhcpEntry(BasicNetworkTopology.java:464)
> at
> com.cloud.network.element.VirtualRouterElement.removeDhcpEntry(VirtualRouterElement.java:972)
> at
> org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.cleanupNicDhcpDnsEntry(NetworkOrchestrator.java:2933)
> at com.cloud.vm.UserVmManagerImpl.finalizeStop(UserVmManagerImpl.java:4389)
> at
> com.cloud.vm.VirtualMachineManagerImpl.sendStop(VirtualMachineManagerImpl.java:1485)
> at
> com.cloud.vm.VirtualMachineManagerImpl.handlePowerOffReportWithNoPendingJobsOnVM(VirtualMachineManagerImpl.java:4186)
> at
> com.cloud.vm.VirtualMachineManagerImpl.scanStalledVMInTransitionStateOnUpHost(VirtualMachineManagerImpl.java:4238)
> at
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:3076)
> at
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:317)
> at
> com.cloud.agent.manager.AgentManagerImpl$AgentHandler.processRequest(AgentManagerImpl.java:1296)
> at
> com.cloud.agent.manager.AgentManagerImpl$AgentHandler.doTask(AgentManagerImpl.java:1383)
> at
> com.cloud.agent.manager.ClusteredAgentManagerImpl$ClusteredAgentHandler.doTask(ClusteredAgentManagerImpl.java:712)
> at com.cloud.utils.nio.Task.call(Task.java:83)
> at com.cloud.utils.nio.Task.call(Task.java:29)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> 2019-07-16 22:51:00,004 ERROR [o.a.c.f.m.MessageDispatcher]
> (AgentManager-Handler-5:null) (logid:) Unexpected exception when calling
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor153.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:147)
> at
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:68)
> at
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:3071)
> at
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:317)
> at
> com.cloud.agent.manager.AgentManagerImpl$AgentHandler.processRequest(AgentManagerImpl.java:1296)
> at
> com.cloud.agent.manager.AgentManagerImpl$AgentHandler.doTask(AgentManagerImpl.java:1383)
> at
> 

Re: Agent LB for CloudStack failed

2019-07-17 Thread li jerry
Hi Nicolas

test-ceph-node01

[root@test-ceph-node01 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:39:18 CST 2019
workers=5
guest.network.device=br0
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=88ca642a-e319-3369-b2c9-39c2b2bddc7c
public.network.device=br0
cluster=1
local.storage.uuid=ec28176f-a3db-4383-90c8-6dcdbc45c3e0
keystore.passphrase=O8VdcZqBwWMMxwk2
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=1
host=172.17.1.141,172.17.1.142@roundrobin

this is test-ceph-node02

[root@test-ceph-node02 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:58:23 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
guid=649cbe62-dcac-36ae-a62c-699f0e0b8af1
hypervisor.type=kvm
cluster=1
public.network.device=br0
local.storage.uuid=2fc2f796-0614-40cf-bfdf-37a9429520fb
domr.scripts.dir=scripts/network/domr/kvm
keystore.passphrase=vB48rgCk58vNJC6N
host=172.17.1.142,172.17.1.141@roundrobin
LibvirtComputingResource.id=4

test-ceph-node03

[root@test-ceph-node03 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:39:18 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=4d3742c4-8678-3f21-a841-c1ffa32d0a8d
public.network.device=br0
cluster=1
local.storage.uuid=31ee15cf-b3b2-4387-b081-7c47971b9e68
keystore.passphrase=ACgs24DnBgYkORvh
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=5
host=172.17.1.141,172.17.1.142@roundrobin

test-ceph-node04
[root@test-ceph-node04 ~]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Wed Jul 17 10:58:22 CST 2019
guest.network.device=br0
workers=5
private.network.device=br0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=bfd4b7ba-fd5f-365d-b4d8-a6e8e7c78c0c
public.network.device=br0
cluster=1
local.storage.uuid=2d5004ff-37b1-4f66-bff0-e71ac211f1da
keystore.passphrase=r3D4upcAOdWbwE9p
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=6
host=172.17.1.142,172.17.1.141@roundrobin

From: Nicolas Vazquez
Sent: July 18, 2019, 10:56
To: users@cloudstack.apache.org; 
d...@cloudstack.apache.org
Subject: Re: Agent LB for CloudStack failed

Hi Jerry,

I'd like to request some additional information. Can you provide the value stored
in agent.properties for the 'host' property on each KVM host? I suspect that the
global setting has not been propagated to the agents, since they are trying to
reconnect instead of connecting to the next management server once the preferred
one is down.


Regards,

Nicolas Vazquez


From: li jerry 
Sent: Monday, July 15, 2019 10:20 PM
To: users@cloudstack.apache.org ; 
d...@cloudstack.apache.org 
Subject: Agent LB for CloudStack failed

Hello everyone

My KVM agent LB on 4.11.2/4.11.3 failed. When the preferred management node is
forced to power off, the agent does not immediately connect to the second
management node. Only after 15 minutes does the agent report a "No route to host"
error and connect to the second management node.

management nodes:
acs-mn01, 172.17.1.141
acs-mn02, 172.17.1.142

MySQL DB node:
acs-db01

KVM agent nodes:
test-ceph-node01
test-ceph-node02
test-ceph-node03
test-ceph-node04


global settings

host=172.17.1.142,172.17.1.141
indirect.agent.lb.algorithm=roundrobin
indirect.agent.lb.check.interval=60


Partial agent logs:

2019-07-15 23:22:39,340 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) 
Sending ping: Seq 1-19: { Cmd , MgmtId: -1, via: 1, Ver : v1, Flags: 11, 
[{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType
 ":"Routing","hostId":1,"wait":0}}] }
2019-07-15 23:23:09,960 DEBUG [utils.nio.NioConnection] 
(Agent-NioConnectionHandler-1:null) (logid:) Location 1: Socket 
Socket[addr=/172.17.1.142,port=8250,localport= 34854] closed on read. Probably 
-1 returned: No route to host
2019-07-15 23:23:09,960 DEBUG [utils.nio.NioConnection] 
(Agent-NioConnectionHandler-1:null) (logid:) Closing socket 
Socket[addr=/172.17.1.142,port=8250,localport=34854]
2019-07-15 23:23:09,961 DEBUG [cloud.agent.Agent] (Agent-Handler-4:null) 
(logid:a4e4de49) Clearing watch list: 2
2019-07-15 23:23:09,962 INFO [cloud.agent.Agent] (Agent-Handler-4:null) 
(logid:a4e4de49) Lost connection to host: 172.17.1.142. Attempting reconnection 
while we still have 0 commands in Progress.
2019-07-15 23:23:09,963 INFO [utils.nio.NioClient] (Agent-Handler-4:null) 
(logid:a4e4de49) NioClient connection closed
2019-07-15 23:23:09,964 INFO 

Re: Agent LB for CloudStack failed

2019-07-17 Thread Nicolas Vazquez
Hi Jerry,

I'd like to request some additional information. Can you provide the value stored
in agent.properties for the 'host' property on each KVM host? I suspect that the
global setting has not been propagated to the agents, since they are trying to
reconnect instead of connecting to the next management server once the preferred
one is down.
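
For example, something like this quick check on each host would do (a sketch):

grep '^host=' /etc/cloudstack/agent/agent.properties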


Regards,

Nicolas Vazquez


From: li jerry 
Sent: Monday, July 15, 2019 10:20 PM
To: users@cloudstack.apache.org ; 
d...@cloudstack.apache.org 
Subject: Agent LB for CloudStack failed

Hello everyone

My KVM agent LB on 4.11.2/4.11.3 failed. When the preferred management node is
forced to power off, the agent does not immediately connect to the second
management node. Only after 15 minutes does the agent report a "No route to host"
error and connect to the second management node.

management nodes:
acs-mn01, 172.17.1.141
acs-mn02, 172.17.1.142

MySQL DB node:
acs-db01

KVM agent nodes:
test-ceph-node01
test-ceph-node02
test-ceph-node03
test-ceph-node04


global settings

host=172.17.1.142,172.17.1.141
indirect.agent.lb.algorithm=roundrobin
indirect.agent.lb.check.interval=60


Partial agent logs:

2019-07-15 23:22:39,340 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) 
Sending ping: Seq 1-19: { Cmd , MgmtId: -1, via: 1, Ver : v1, Flags: 11, 
[{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType
 ":"Routing","hostId":1,"wait":0}}] }
2019-07-15 23:23:09,960 DEBUG [utils.nio.NioConnection] 
(Agent-NioConnectionHandler-1:null) (logid:) Location 1: Socket 
Socket[addr=/172.17.1.142,port=8250,localport= 34854] closed on read. Probably 
-1 returned: No route to host
2019-07-15 23:23:09,960 DEBUG [utils.nio.NioConnection] 
(Agent-NioConnectionHandler-1:null) (logid:) Closing socket 
Socket[addr=/172.17.1.142,port=8250,localport=34854]
2019-07-15 23:23:09,961 DEBUG [cloud.agent.Agent] (Agent-Handler-4:null) 
(logid:a4e4de49) Clearing watch list: 2
2019-07-15 23:23:09,962 INFO [cloud.agent.Agent] (Agent-Handler-4:null) 
(logid:a4e4de49) Lost connection to host: 172.17.1.142. Attempting reconnection 
while we still have 0 commands in Progress.
2019-07-15 23:23:09,963 INFO [utils.nio.NioClient] (Agent-Handler-4:null) 
(logid:a4e4de49) NioClient connection closed
2019-07-15 23:23:09,964 INFO [cloud.agent.Agent] (Agent-Handler-4:null) 
(logid:a4e4de49) Reconnecting to host:172.17.1.142
2019-07-15 23:23:09,964 INFO [utils.nio.NioClient] (Agent-Handler-4:null) 
(logid:a4e4de49) Connecting to 172.17.1.142:8250
2019-07-15 23:23:12,972 ERROR [utils.nio.NioConnection] (Agent-Handler-4:null) 
(logid:a4e4de49) Unable to initialize the threads.
java.net.NoRouteToHostException: No route to host
 at sun.nio.ch.Net.connect0(Native Method)
 at sun.nio.ch.Net.connect(Net.java:454)
 at sun.nio.ch.Net.connect(Net.java:446)
 at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
 at com.cloud.utils.nio.NioClient.init(NioClient.java:56)
 at com.cloud.utils.nio.NioConnection.start(NioConnection.java:95)
 at com.cloud.agent.Agent.reconnect(Agent.java:517)
 at com.cloud.agent.Agent$ServerHandler.doTask(Agent.java:1091)
 at com.clo




Re: Additional local storage as primary storage

2019-07-17 Thread Ivan Kudryavtsev
Hi,

As of 4.11.2, there is no way to have multiple local storage pools configured for a
single host, and no simple way to work around it. The only workaround I see is a
pretty ugly one: a locally mounted NFS export, created as cluster-wide storage in a
cluster that contains only that single host...

In short, it's not supported: only one local storage pool per host. It's a good
feature request, but I'm not sure many people use that topology.

Thu, Jul 18, 2019, 4:04 Fariborz Navidan :

> Hello,
>
> I have a few mount points that refer to different block devices on the local
> machine. I am trying to add them to CloudStack as additional local primary
> storage. Unfortunately, when adding primary storage there is no Filesystem
> option to choose. As a result, I modified the storage_pool table, setting the
> storage type to Filesystem, and the pool then shows its state as "Up".
> However, because the path is under / (such as /home) and / is on a different
> disk, it mistakenly reports the storage capacity of the root filesystem rather
> than the real size of the filesystem /home belongs to.
>
> Any idea how to fix this?
>
> Thanks
>


Additional local storage as primary storage

2019-07-17 Thread Fariborz Navidan
Hello,

I have a few mount points that refer to different block devices on the local
machine. I am trying to add them to CloudStack as additional local primary storage.
Unfortunately, when adding primary storage there is no Filesystem option to choose.
As a result, I modified the storage_pool table, setting the storage type to
Filesystem, and the pool then shows its state as "Up". However, because the path is
under / (such as /home) and / is on a different disk, it mistakenly reports the
storage capacity of the root filesystem rather than the real size of the filesystem
/home belongs to.
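
For reference, the manual change described above boils down to something like the
following sketch against the cloud database (the pool id is a placeholder; such
direct edits are unsupported, and as noted the reported capacity still comes from
the root filesystem):

SELECT id, name, pool_type, path, capacity_bytes, used_bytes FROM cloud.storage_pool;
UPDATE cloud.storage_pool SET pool_type = 'Filesystem' WHERE id = <pool_id>;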

Any idea how to fix this?

Thanks


Additional local storage pool as primary storage

2019-07-17 Thread Fariborz Navidan
Hello,


Re: [ANNOUNCE] Andrija Panic has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Andrija!!


From: Paul Angus 
Sent: Saturday, July 13, 2019 10:02 AM
To: users@cloudstack.apache.org; d...@cloudstack.apache.org; 
priv...@cloudstack.apache.org
Subject: [ANNOUNCE] Andrija Panic has joined the PMC

Fellow CloudStackers,



It gives me great pleasure to say that Andrija has been invited to join the
PMC and has gracefully accepted.


Please join me in congratulating Andrija!




Kind regards,



Paul Angus

CloudStack PMC


Re: [ANNOUNCE] Sven Vogel has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Sven!


From: Boris Stoyanov 
Sent: Tuesday, July 16, 2019 2:08 AM
To: users@cloudstack.apache.org; priv...@cloudstack.apache.org; 
d...@cloudstack.apache.org
Subject: Re: [ANNOUNCE] Sven Vogel has joined the PMC

Congrats Sven!

On 13.07.19, 18:45, "Paul Angus"  wrote:

Fellow CloudStackers,



It gives me great pleasure to say that Sven has been invited to join the
PMC and has gracefully accepted.


Please join me in congratulating Sven!




Kind regards,



Paul Angus

CloudStack PMC







Re: [ANNOUNCE] Gabriel Beims Bräscher has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Gabriel!


From: Paul Angus 
Sent: Saturday, July 13, 2019 11:00 AM
To: users@cloudstack.apache.org; d...@cloudstack.apache.org; 
priv...@cloudstack.apache.org
Subject: [ANNOUNCE] Gabriel Beims Bräscher has joined the PMC

Fellow CloudStackers,


It's non-stop today!



It gives me great pleasure to say that Gabriel has been invited to join the
PMC and has gracefully accepted.


Please join me in congratulating Gabriel!




Kind regards,



Paul Angus

CloudStack PMC


Re: [ANNOUNCE] Bobby (Boris Stoyanov) has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Bobby!!



From: Paul Angus 
Sent: Tuesday, July 16, 2019 4:12 AM
To: priv...@cloudstack.apache.org; d...@cloudstack.apache.org; 
users@cloudstack.apache.org
Subject: [ANNOUNCE] Bobby (Boris Stoyanov) has joined the PMC

Fellow CloudStackers,



It gives me great pleasure to say that Bobby has been invited to join the
PMC and has gracefully accepted.



Please join me in congratulating  Bobby!





Kind regards,





Paul Angus

CloudStack PMC


Re: Using S3/Minio as the only secondary storage

2019-07-17 Thread Jean-Francois Nadeau
Thanks Will,

I remember having the discussion with Pierre-Luc about his use of Swift for
templates. I was curious about the differences between S3 and Swift for secondary
storage since, looking at the CloudStack UI when setting up an S3 image store, the
NFS staging is optional. And that makes sense to me: if your object storage is fast
and accessible locally, why the need for staging/caching? The documentation could
mention whether it is possible to use S3 secondary storage and nothing else,
starting with whether the SSVM templates can be uploaded to a bucket. I will
certainly ask Syed later today :)

best

Jfn

On Wed, Jul 17, 2019 at 6:59 AM Will Stevens  wrote:

> Hey JF,
> We use the Swift object store as the storage backend for secondary
> storage.  I have not tried the S3 integration, but the last time I looked
> at the code for this (admittedly, a long time ago) the Swift and S3 logic
> was more intertwined than I liked. The CloudOps/cloud.ca team had to do a
> lot of work to get the Swift integration to a reasonable working state. I
> believe all of our changes have been upstreamed quite some time ago. I
> don't know if anyone is doing this for the S3 implementation.
>
> I can't speak to the S3 implementation because I have not looked at it in a
> very long time, but the Swift implementation requires a "temporary NFS
> staging area" that essentially acts kind of like a buffer between the
> object store and primary storage when templates and such are used by the
> hosts.
>
> I think Pierre-Luc and Syed have a clearer picture of all the moving
> pieces, but that is a quick summary of what I know without digging in.
>
> Hope that helps.
>
> Cheers,
>
> Will
>
> On Tue, Jul 16, 2019, 10:24 PM Jean-Francois Nadeau <
> the.jfnad...@gmail.com>
> wrote:
>
> > Hello Everyone,
> >
> > I was wondering if it was common or even recommended to use an S3
> > compatible storage system as the only secondary storage provider ?
> >
> > The environment is 4.11.3.0 with KVM (Centos 7.6),  and our tier1 storage
> > solution also provides an S3 compatible object store (apparently Minio
> > under the hood).
> >
> > I have always used NFS to install the SSVM templates and the install
> script
> > (cloud-install-sys-tmplt) only takes a mount point.  How, if possible,
> > would I proceed with S3 only storage ?
> >
> > best,
> >
> > Jean-Francois
> >
>


Re: Unable to log in to the cloudstack management page (Web UI)

2019-07-17 Thread Riepl, Gregor (SWISS TXT)
Hi Eiji,

>   mysql> select * from cloud.account where id = 2;
>
>   id              : 2
>   account_name    : admin
>   uuid            : 483d0fcf-da63-11e3-8ea9-24be05a86042
>   type            : 1
>   domain_id       : 1
>   state           : enabled
>   removed         : NULL
>   cleanup_needed  : 0
>   network_domain  : NULL
>   default_zone_id : NULL
>   default         : 1
>
>   1 row in set (0.00 sec)
> 
> 
> "type", "state" and "removed" seem to be good.
> Should I check records in other tables?

Hmm... This looks good to me.

Checking further... Do you have a commands.properties file with the correct
mappings from roles to API calls?

This is the only other place I can find that could be relevant -
according to: 
https://github.com/apache/cloudstack/blob/4.2/plugins/acl/static-role-based/src/org/apache/cloudstack/acl/StaticRoleBasedAPIAccessChecker.java
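
For reference, entries in that file map an API command name to a role bitmask
(1 = admin, 2 = resource admin, 4 = domain admin, 8 = user). A short sketch; check
the values against the file shipped with your version:

listVirtualMachines=15
stopVirtualMachine=15
createAccount=7
updateConfiguration=1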

Regards,
Gregor


Re: Using S3/Minio as the only secondary storage

2019-07-17 Thread Riepl, Gregor (SWISS TXT)
Hi Jean-François

> I have always used NFS to install the SSVM templates and the install
> script (cloud-install-sys-tmplt) only takes a mount point.  How, if
> possible, would I proceed with S3 only storage ?

CloudStack doesn't support object storage as a backend for the
secondary storage. You'd have to use something like s3fs-fuse[1],
ObjectiveFS[2] or RioFS[3].

But I have no idea how well that will work... Take into consideration
that secondary storage will be mounted on at least the Management
Server and the SSVMs.

I think NFS is your only option for now.
What you can do, however, is mounting the S3 object store on one
machine and exporting it as an NFS share to the other hosts.
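
For example, a rough sketch of that NFS-gateway approach with s3fs-fuse. Everything
here is an assumption to adapt: the bucket name, mount point, credentials, the Minio
endpoint, and the system-template URL (check the release notes for the right one).

# mount the S3/Minio bucket locally with s3fs-fuse
echo 'ACCESSKEY:SECRETKEY' > /etc/passwd-s3fs && chmod 600 /etc/passwd-s3fs
mkdir -p /mnt/secondary
s3fs cloudstack-secondary /mnt/secondary \
    -o url=https://minio.example.com:9000 \
    -o use_path_request_style \
    -o passwd_file=/etc/passwd-s3fs

# seed the SSVM system template into the mounted store (KVM example)
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.3-kvm.qcow2.bz2 \
    -h kvm -F

# re-export the mount over NFS so the SSVM and hosts can reach it
echo '/mnt/secondary *(rw,async,no_root_squash,no_subtree_check)' >> /etc/exports
exportfs -a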

Regards,
Gregor

[1] https://github.com/s3fs-fuse/s3fs-fuse
[2] https://objectivefs.com/
[3] https://github.com/skoobe/riofs