Re: Oracle Vm 3.4

2018-06-07 Thread Prakash Sharma
+1 to this question.

On Thu, Jun 7, 2018 at 9:39 PM, Rodrigo Jorge  wrote:

> Hello there,
>
> Is Oracle VM 3.4 supported on CloudStack 4.11?
>
> Rodrigo
>



-- 
Thanks & Regards

Prakash Sharma


Re: Oracle Vm 3.4

2018-06-07 Thread Rohit Yadav
The current ovm3 hypervisor plugin may require some fixes to support the latest 
version.



From: Rodrigo Jorge 
Sent: Thursday, June 7, 2018 7:09:01 PM
To: users@cloudstack.apache.org
Subject: Oracle Vm 3.4

Hello there,

Is Oracle VM 3.4 supported on CloudStack 4.11?

Rodrigo




Re: vSphere/vCenter 6.7 Support

2018-06-07 Thread Suresh Kumar Anaparti
Hi Carlos,

Maybe future CloudStack release(s) will include support for 6.7.

-Suresh

On Thu, Jun 7, 2018 at 6:22 AM, Carlos Cesario 
wrote:

> Hi Suresh,
>
> Thanks for the info.
> Is it possible to add support for version 6.7, since this is the latest
> VMware version?
>
> Regards
> Carlos
>
> From: Suresh Kumar Anaparti
> Sent: Wednesday, 6 June 14:53
> Subject: Re: vSphere/vCenter 6.7 Support
> To: users@cloudstack.apache.org
>
>
> Hi Carlos,
>
> I think the current VMware API in CloudStack supports up to 6.5.
>
> Thanks,
> Suresh
>
> On Wed, Jun 6, 2018 at 11:15 PM, Carlos Cesario wrote:
>
> > Hi guys,
> >
> > Is vSphere 6.7 supported by CloudStack!?!
> >
> > regards
> > Carlos
>
>
>


Oracle Vm 3.4

2018-06-07 Thread Rodrigo Jorge
Hello there,

Is Oracle VM 3.4 supported on CloudStack 4.11?

Rodrigo


RE: Cloudstack 4.11.1 RC1. SystemVM Config Issue with Xenserver 7.1. Detected as xen-domU.

2018-06-07 Thread James Richards
Hi Paul

Thanks for your reply. It appears the SystemVMs are running as HVM, so those 
options are missing. 

Is that the intended deployment behaviour since this commit, though?
https://github.com/apache/cloudstack/pull/2465#issuecomment-370807868 
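
For anyone wanting to double-check, something along these lines on the XenServer 
host should show it (the VM name is only an example; substitute the actual 
system VM):

xe vm-list name-label=s-2-VM params=uuid
xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy   # non-empty means the VM boots as HVM
xe vm-param-get uuid=<vm-uuid> param-name=PV-args           # the 'OS boot parameters' Paul refers to below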

Cheers

James Richards
Senior Systems Engineer

D: +441183654816
M: 4480.00
T: 0800 970 9292

www.pulsant.com



-Original Message-
From: Paul Angus  
Sent: 04 June 2018 10:23
To: users@cloudstack.apache.org
Subject: RE: Cloudstack 4.11.1 RC1. SystemVM Config Issue with Xenserver 7.1. 
Detected as xen-domU.

Hi James,

This looks like a slightly different issue, as the system VM is not getting its 
config when booting. Can you look at the system VM through XenCenter and check 
the ‘Boot Options’ for that VM? Are there ‘OS boot parameters’ shown there, 
something like:


OS boot parameters:  -- quiet 
console=hvc0%template=domP%type=secstorage%host=10.2.3.192%port=8250%name=s-2-VM%zone=1%pod=1%guid=s-2-VM%workers=10%resource=com.cloud.storage.resource.PremiumSecondaryStorageResource%instance=SecStorage%sslcopy=false%role=templateProcessor%mtu=1500%eth2ip=10.1.35.2%eth2mask=255.255.224.0%gateway=10.1.63.254%public.network.device=eth2%eth0ip=169.254.1.107%eth0mask=255.255.0.0%eth1ip=10.2.6.11%eth1mask=255.255.0.0%mgmtcidr=10.2.0.0/16%localgw=10.2.254.254%private.network.device=eth1%internaldns1=8.8.8.8%internaldns2=8.8.4.4%dns1=8.8.8.8%dns2=8.8.4.4%nfsVersion=null

Kind regards,

Paul Angus

From: James Richards 
Sent: 01 June 2018 13:01
To: users@cloudstack.apache.org
Subject: Cloudstack 4.11.1 RC1. SystemVM Config Issue with Xenserver 7.1. 
Detected as xen-domU.

Hi

I’m attempting to integrate CloudStack on top of CentOS 7, with XenServer 7.1 
as the hypervisor.

Initially I came up against the problem described here: 
https://github.com/apache/cloudstack/issues/2561 . The SystemVMs did not 
complete configuration, with an empty ‘systemvm type=’ observed within cloud.log.

I noticed that the commit 
https://github.com/shapeblue/cloudstack/commit/8533def696dacb989b8fde17403bfca98e6139b0
is intended to solve the issue, so I rebuilt everything from scratch 
using ShapeBlue's 4.11.1 RC1 packages 
http://packages.shapeblue.com/testing/4111rc1/centos7/4.11/ with the SystemVM 
template from here: http://packages.shapeblue.com/systemvmtemplate/4.11.1-rc1/

Despite using the 4.11.1 RC1 packages,  I am still having the same issue.

So within the cloud.log I still see

Executing cloud-early-config
Detected that we are running inside xen-domU
Scripts checksum detected : oldmd5=***….
Patched scripts using media/cdrom/cloud-scripts.tgz
Patching cloud service
Configuring systemvm type=
Finished setting up systemvm

That was typed out manually as I don't have network access.
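
If it helps anyone, checking the kernel command line from the system VM console 
should show whether the parameters were passed through at all (the second path 
is from memory and may not exist on this template):

cat /proc/cmdline              # on a PV system VM the %-separated parameters appear here
cat /var/cache/cloud/cmdline   # where cloud-early-config normally caches them, if present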

Any ideas what could be causing it, and whether the problem is considered fixed 
in 4.11.1 RC1?

Any help would be greatly appreciated.

Regards

James Richards
Senior Systems Engineer


Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Jon Marshall
Yes, all basic. I read a ShapeBlue doc that recommended splitting traffic 
across multiple NICs even in basic networking mode, so that is what I am trying 
to do.


With a single NIC you do not get the NFS storage message.


I have the entire management server logs for both scenarios after I pulled the 
power to one of the compute nodes, but from the single-NIC setup these seem to 
be the relevant lines -


2018-06-04 10:17:10,972 DEBUG [c.c.n.NetworkUsageManagerImpl] 
(AgentTaskPool-3:ctx-8627b348) (logid:ef7b8230) Disconnected called on 4 with 
status Down
2018-06-04 10:17:10,972 DEBUG [c.c.h.Status] (AgentTaskPool-3:ctx-8627b348) 
(logid:ef7b8230) Transition:[Resource state = Enabled, Agent event = HostDown, 
Host id = 4, name = dcp-cscn2.local]
2018-06-04 10:17:10,981 WARN  [o.a.c.alerts] (AgentTaskPool-3:ctx-8627b348) 
(logid:ef7b8230) AlertType:: 7 | dataCenterId:: 1 | podId:: 1 | clusterId:: 
null | message:: Host is down, name: dcp-cscn2.local (id:4), availability zone: 
dcpz1, pod: dcp1
2018-06-04 10:17:11,000 DEBUG [c.c.h.CheckOnAgentInvestigator] 
(HA-Worker-1:ctx-f763f12f work-17) (logid:77c56778) Unable to reach the agent 
for VM[User|i-2-6-VM]: Resource [Host:4] is unreachable: Host 4: Host with 
specified id is not in the right state: Down
2018-06-04 10:17:11,006 DEBUG [c.c.h.KVMInvestigator] 
(AgentTaskPool-2:ctx-a6f6dbd1) (logid:774553ff) Neighbouring host:5 returned 
status:Down for the investigated host:4
2018-06-04 10:17:11,006 DEBUG [c.c.h.KVMInvestigator] 
(AgentTaskPool-2:ctx-a6f6dbd1) (logid:774553ff) HA: HOST is ineligible legacy 
state Down for host 4
2018-06-04 10:17:11,006 DEBUG [c.c.h.HighAvailabilityManagerImpl] 
(AgentTaskPool-2:ctx-a6f6dbd1) (logid:774553ff) KVMInvestigator was able to 
determine host 4 is in Down
2018-06-04 10:17:11,006 INFO  [c.c.a.m.AgentManagerImpl] 
(AgentTaskPool-2:ctx-a6f6dbd1) (logid:774553ff) The agent from host 4 state 
determined is Down
2018-06-04 10:17:11,006 ERROR [c.c.a.m.AgentManagerImpl] 
(AgentTaskPool-2:ctx-a6f6dbd1) (logid:774553ff) Host is down: 
4-dcp-cscn2.local. Starting HA on the VMs

At the moment I only need to assign public IPs direct to VMs rather than using 
NAT with the virtual router but would be happy to go with advanced networking 
if it would make things easier :)


From: Rafael Weingärtner 
Sent: 07 June 2018 10:35
To: users
Subject: Re: advanced networking with public IPs direct to VMs

Ah, so it is not an advanced setup even when you use multiple NICs.
Can you confirm that the message "Agent investigation was requested on
host, but host does not support investigation because it has no NFS
storage. Skipping investigation." does not appear when you use a single
NIC? Can you check for other log entries that might appear when the host is
marked as "down"?

On Thu, Jun 7, 2018 at 6:30 AM, Jon Marshall  wrote:

> It is all basic networking at the moment for all the setups.
>
>
> If you want me to I can setup a single NIC solution again and run any
> commands you need me to do.
>
>
> FYI, when I set up a single NIC I use the guided installation option in the UI
> rather than the manual setup which I do for the multiple NIC scenario.
>
>
> Happy to set it up if it helps.
>
>
>
>
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 10:23
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> Ok, so that explains the log message. This is looking like a bug to me. It
> seems that with zone-wide storage the host state (when disconnected) is not
> being properly identified because of this NFS check, and as a consequence it
> has a side effect on VM HA.
>
> We would need some input from people who have advanced networking
> deployments and zone-wide storage.
>
> I do not see how the all-in-one-NIC deployment scenario is working, though.
> This method "com.cloud.ha.KVMInvestigator.isAgentAlive(Host)" is dead
> simple: if there is no NFS in the cluster (no NFS storage pools found for
> the host's cluster), KVM hosts will be detected as "disconnected" and not
> down, with that warning message you noticed.
>
> When you say "all in one NIC", is it an advanced network deployment where
> you put all traffic in a single network, or is it basic networking that
> you are doing?
>
> On Thu, Jun 7, 2018 at 6:06 AM, Jon Marshall 
> wrote:
>
> > zone wide.
> >
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: 07 June 2018 10:04
> > To: users
> > Subject: Re: advanced networking with public IPs direct to VMs
> >
> > What type of storage are you using? Zone wide? Or cluster "wide" storage?
> >
> > On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall 
> > wrote:
> >
> > > Rafael
> > >
> > >
> > > Here is the output as requested -
> > >
> > >
> > >
> > > mysql> mysql> select * from cloud.storage_pool where removed is null;
> > > ++--+--+
> > > ---+--++++--
> > > 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Rafael Weingärtner
Ah, so it is not an advanced setup even when you use multiple NICs.
Can you confirm that the message "Agent investigation was requested on
host, but host does not support investigation because it has no NFS
storage. Skipping investigation." does not appear when you use a single
NIC? Can you check for other log entries that might appear when the host is
marked as "down"?

On Thu, Jun 7, 2018 at 6:30 AM, Jon Marshall  wrote:

> It is all basic networking at the moment for all the setups.
>
>
> If you want me to I can setup a single NIC solution again and run any
> commands you need me to do.
>
>
> FYI, when I set up a single NIC I use the guided installation option in the UI
> rather than the manual setup which I do for the multiple NIC scenario.
>
>
> Happy to set it up if it helps.
>
>
>
>
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 10:23
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> Ok, so that explains the log message. This is looking like a bug to me. It
> seems that with zone-wide storage the host state (when disconnected) is not
> being properly identified because of this NFS check, and as a consequence it
> has a side effect on VM HA.
>
> We would need some input from people who have advanced networking
> deployments and zone-wide storage.
>
> I do not see how the all-in-one-NIC deployment scenario is working, though.
> This method "com.cloud.ha.KVMInvestigator.isAgentAlive(Host)" is dead
> simple: if there is no NFS in the cluster (no NFS storage pools found for
> the host's cluster), KVM hosts will be detected as "disconnected" and not
> down, with that warning message you noticed.
>
> When you say "all in one NIC", is it an advanced network deployment where
> you put all traffic in a single network, or is it basic networking that
> you are doing?
>
> On Thu, Jun 7, 2018 at 6:06 AM, Jon Marshall 
> wrote:
>
> > zone wide.
> >
> >
> > 
> > From: Rafael Weingärtner 
> > Sent: 07 June 2018 10:04
> > To: users
> > Subject: Re: advanced networking with public IPs direct to VMs
> >
> > What type of storage are you using? Zone wide? Or cluster "wide" storage?
> >
> > On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall 
> > wrote:
> >
> > > Rafael
> > >
> > >
> > > Here is the output as requested -
> > >
> > >
> > >
> > > mysql> mysql> select * from cloud.storage_pool where removed is null;
> > > id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
> > > pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
> > > pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
> > > capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
> > > path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
> > > update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
> > > scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
> > > 1 row in set (0.00 sec)
> > >
> > > mysql>
> > >
> > > Do you think this problem is related to my NIC/bridge configuration or
> > the
> > > way I am configuring the zone ?
> > >
> > > Jon
> > > 
> > > From: Rafael Weingärtner 
> > > Sent: 07 June 2018 06:45
> > > To: users
> > > Subject: Re: advanced networking with public IPs direct to VMs
> > >
> > > Can you also post the result of:
> > > select * from cloud.storage_pool where removed is null
> > >
> > > On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo <
> dag.sonst...@shapeblue.com
> > >
> > > wrote:
> > >
> > > > Hi Jon,
> > > >
> > > > Still confused where your primary storage pools are – are you sure
> your
> > > > hosts are in cluster 1?
> > > >
> > > > Quick question just to make sure - assuming 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Jon Marshall
It is all basic networking at the moment for all the setups.


If you want me to I can setup a single NIC solution again and run any commands 
you need me to do.


FYI, when I set up a single NIC I use the guided installation option in the UI 
rather than the manual setup which I do for the multiple NIC scenario.


Happy to set it up if it helps.





From: Rafael Weingärtner 
Sent: 07 June 2018 10:23
To: users
Subject: Re: advanced networking with public IPs direct to VMs

Ok, so that explains the log message. This is looking like a bug to me. It
seems that with zone-wide storage the host state (when disconnected) is not
being properly identified because of this NFS check, and as a consequence it
has a side effect on VM HA.

We would need some input from people who have advanced networking
deployments and zone-wide storage.

I do not see how the all-in-one-NIC deployment scenario is working, though.
This method "com.cloud.ha.KVMInvestigator.isAgentAlive(Host)" is dead
simple: if there is no NFS in the cluster (no NFS storage pools found for
the host's cluster), KVM hosts will be detected as "disconnected" and not
down, with that warning message you noticed.

When you say "all in one NIC", is it an advanced network deployment where
you put all traffic in a single network, or is it basic networking that
you are doing?

On Thu, Jun 7, 2018 at 6:06 AM, Jon Marshall  wrote:

> zone wide.
>
>
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 10:04
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> What type of storage are you using? Zone wide? Or cluster "wide" storage?
>
> On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall 
> wrote:
>
> > Rafael
> >
> >
> > Here is the output as requested -
> >
> >
> >
> > mysql> mysql> select * from cloud.storage_pool where removed is null;
> > id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
> > pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
> > pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
> > capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
> > path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
> > update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
> > scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
> > 1 row in set (0.00 sec)
> >
> > mysql>
> >
> > Do you think this problem is related to my NIC/bridge configuration or
> the
> > way I am configuring the zone ?
> >
> > Jon
> > 
> > From: Rafael Weingärtner 
> > Sent: 07 June 2018 06:45
> > To: users
> > Subject: Re: advanced networking with public IPs direct to VMs
> >
> > Can you also post the result of:
> > select * from cloud.storage_pool where removed is null
> >
> > On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo  >
> > wrote:
> >
> > > Hi Jon,
> > >
> > > Still confused where your primary storage pools are – are you sure your
> > > hosts are in cluster 1?
> > >
> > > Quick question just to make sure - assuming management/storage is on
> the
> > > same NIC when I setup basic networking the physical network has the
> > > management and guest icons already there and I just edit the KVM
> labels.
> > If
> > > I am running storage over management do I need to drag the storage icon
> > to
> > > the physical network and use the same KVM label (cloudbr0) as the
> > > management or does CS automatically just use the management NIC ie. I
> > would
> > > only need to drag the storage icon across in basic setup if I wanted it
> > on
> > > a different NIC/IP subnet ?  (hope that makes sense !)
> > >
> > > >> I would do both – set up your 2/3 physical networks, name isn’t that
> > > important – but then drag the 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Rafael Weingärtner
Ok, so that explains the log message. This is looking like a bug to me. It
seems that with zone-wide storage the host state (when disconnected) is not
being properly identified because of this NFS check, and as a consequence it
has a side effect on VM HA.

We would need some input from people who have advanced networking
deployments and zone-wide storage.

I do not see how the all-in-one-NIC deployment scenario is working, though.
This method "com.cloud.ha.KVMInvestigator.isAgentAlive(Host)" is dead
simple: if there is no NFS in the cluster (no NFS storage pools found for
the host's cluster), KVM hosts will be detected as "disconnected" and not
down, with that warning message you noticed.

When you say "all in one NIC", is it an advanced network deployment where
you put all traffic in a single network, or is it basic networking that
you are doing?
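
As a rough illustration of what that check looks at, a query along these lines
against the cloud database (credentials and client invocation here are only an
example) lists the NFS pools and how they are bound to clusters:

# zone-wide pools carry cluster_id NULL, so a per-cluster lookup finds nothing
mysql -u cloud -p cloud -e "SELECT id, name, pool_type, scope, cluster_id
  FROM storage_pool WHERE removed IS NULL AND pool_type = 'NetworkFilesystem';"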

On Thu, Jun 7, 2018 at 6:06 AM, Jon Marshall  wrote:

> zone wide.
>
>
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 10:04
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> What type of storage are you using? Zone wide? Or cluster "wide" storage?
>
> On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall 
> wrote:
>
> > Rafael
> >
> >
> > Here is the output as requested -
> >
> >
> >
> > mysql> mysql> select * from cloud.storage_pool where removed is null;
> > id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
> > pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
> > pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
> > capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
> > path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
> > update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
> > scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
> > 1 row in set (0.00 sec)
> >
> > mysql>
> >
> > Do you think this problem is related to my NIC/bridge configuration or
> the
> > way I am configuring the zone ?
> >
> > Jon
> > 
> > From: Rafael Weingärtner 
> > Sent: 07 June 2018 06:45
> > To: users
> > Subject: Re: advanced networking with public IPs direct to VMs
> >
> > Can you also post the result of:
> > select * from cloud.storage_pool where removed is null
> >
> > On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo  >
> > wrote:
> >
> > > Hi Jon,
> > >
> > > Still confused where your primary storage pools are – are you sure your
> > > hosts are in cluster 1?
> > >
> > > Quick question just to make sure - assuming management/storage is on
> the
> > > same NIC when I setup basic networking the physical network has the
> > > management and guest icons already there and I just edit the KVM
> labels.
> > If
> > > I am running storage over management do I need to drag the storage icon
> > to
> > > the physical network and use the same KVM label (cloudbr0) as the
> > > management or does CS automatically just use the management NIC ie. I
> > would
> > > only need to drag the storage icon across in basic setup if I wanted it
> > on
> > > a different NIC/IP subnet ?  (hope that makes sense !)
> > >
> > > >> I would do both – set up your 2/3 physical networks, name isn’t that
> > > important – but then drag the traffic types to the correct one and make
> > > sure the labels are correct.
> > > Regards,
> > > Dag Sonstebo
> > > Cloud Architect
> > > ShapeBlue
> > >
> > > On 06/06/2018, 12:39, "Jon Marshall"  wrote:
> > >
> > > Dag
> > >
> > >
> > > Do you mean  check the pools with "Infrastructure -> Primary
> Storage"
> > > and "Infrastructure -> Secondary Storage" within the UI ?
> > >
> > >
> > > If so Primary Storage has a state of UP, secondary storage does not
> > > show a state as such so 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Jon Marshall
zone wide.



From: Rafael Weingärtner 
Sent: 07 June 2018 10:04
To: users
Subject: Re: advanced networking with public IPs direct to VMs

What type of storage are you using? Zone wide? Or cluster "wide" storage?

On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall  wrote:

> Rafael
>
>
> Here is the output as requested -
>
>
>
> mysql> mysql> select * from cloud.storage_pool where removed is null;
> id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
> pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
> pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
> capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
> path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
> update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
> scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
> 1 row in set (0.00 sec)
>
> mysql>
>
> Do you think this problem is related to my NIC/bridge configuration or the
> way I am configuring the zone ?
>
> Jon
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 06:45
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> Can you also post the result of:
> select * from cloud.storage_pool where removed is null
>
> On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo 
> wrote:
>
> > Hi Jon,
> >
> > Still confused where your primary storage pools are – are you sure your
> > hosts are in cluster 1?
> >
> > Quick question just to make sure - assuming management/storage is on the
> > same NIC when I setup basic networking the physical network has the
> > management and guest icons already there and I just edit the KVM labels.
> If
> > I am running storage over management do I need to drag the storage icon
> to
> > the physical network and use the same KVM label (cloudbr0) as the
> > management or does CS automatically just use the management NIC ie. I
> would
> > only need to drag the storage icon across in basic setup if I wanted it
> on
> > a different NIC/IP subnet ?  (hope that makes sense !)
> >
> > >> I would do both – set up your 2/3 physical networks, name isn’t that
> > important – but then drag the traffic types to the correct one and make
> > sure the labels are correct.
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 06/06/2018, 12:39, "Jon Marshall"  wrote:
> >
> > Dag
> >
> >
> > Do you mean  check the pools with "Infrastructure -> Primary Storage"
> > and "Infrastructure -> Secondary Storage" within the UI ?
> >
> >
> > If so Primary Storage has a state of UP, secondary storage does not
> > show a state as such so not sure where else to check it ?
> >
> >
> > Rerun of the command -
> >
> > mysql> select * from cloud.storage_pool where cluster_id = 1;
> > Empty set (0.00 sec)
> >
> > mysql>
> >
> > I think it is something to do with my zone creation rather than the
> > NIC, bridge setup although I can post those if needed.
> >
> > I may try to setup just the 2 NIC solution you mentioned although as
> I
> say I had the same issue with that, i.e. host goes to "Alert" state and
> same
> > error messages.  The only time I can get it to go to "Down" state is when
> > it is all on the single NIC.
> >
> > Quick question just to make sure - assuming management/storage is on
> > the same NIC when I setup basic networking the physical network has the
> > management and guest icons already there and I just edit the KVM labels.
> If
> > I am running storage over management do I need to drag the storage icon
> to
> > the physical network and use the same KVM label (cloudbr0) as the
> > management or does CS automatically just use the management NIC ie. I
> would
> > only need to drag the storage 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Rafael Weingärtner
What type of storage are you using? Zone wide? Or cluster "wide" storage?

On Thu, Jun 7, 2018 at 4:25 AM, Jon Marshall  wrote:

> Rafael
>
>
> Here is the output as requested -
>
>
>
> mysql> mysql> select * from cloud.storage_pool where removed is null;
> id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
> pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
> pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
> capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
> path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
> update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
> scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
> 1 row in set (0.00 sec)
>
> mysql>
>
> Do you think this problem is related to my NIC/bridge configuration or the
> way I am configuring the zone ?
>
> Jon
> 
> From: Rafael Weingärtner 
> Sent: 07 June 2018 06:45
> To: users
> Subject: Re: advanced networking with public IPs direct to VMs
>
> Can you also post the result of:
> select * from cloud.storage_pool where removed is null
>
> On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo 
> wrote:
>
> > Hi Jon,
> >
> > Still confused where your primary storage pools are – are you sure your
> > hosts are in cluster 1?
> >
> > Quick question just to make sure - assuming management/storage is on the
> > same NIC when I setup basic networking the physical network has the
> > management and guest icons already there and I just edit the KVM labels.
> If
> > I am running storage over management do I need to drag the storage icon
> to
> > the physical network and use the same KVM label (cloudbr0) as the
> > management or does CS automatically just use the management NIC ie. I
> would
> > only need to drag the storage icon across in basic setup if I wanted it
> on
> > a different NIC/IP subnet ?  (hope that makes sense !)
> >
> > >> I would do both – set up your 2/3 physical networks, name isn’t that
> > important – but then drag the traffic types to the correct one and make
> > sure the labels are correct.
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 06/06/2018, 12:39, "Jon Marshall"  wrote:
> >
> > Dag
> >
> >
> > Do you mean  check the pools with "Infrastructure -> Primary Storage"
> > and "Infrastructure -> Secondary Storage" within the UI ?
> >
> >
> > If so Primary Storage has a state of UP, secondary storage does not
> > show a state as such so not sure where else to check it ?
> >
> >
> > Rerun of the command -
> >
> > mysql> select * from cloud.storage_pool where cluster_id = 1;
> > Empty set (0.00 sec)
> >
> > mysql>
> >
> > I think it is something to do with my zone creation rather than the
> > NIC, bridge setup although I can post those if needed.
> >
> > I may try to setup just the 2 NIC solution you mentioned although as
> I
> say I had the same issue with that, i.e. host goes to "Alert" state and
> same
> > error messages.  The only time I can get it to go to "Down" state is when
> > it is all on the single NIC.
> >
> > Quick question just to make sure - assuming management/storage is on
> > the same NIC when I setup basic networking the physical network has the
> > management and guest icons already there and I just edit the KVM labels.
> If
> > I am running storage over management do I need to drag the storage icon
> to
> > the physical network and use the same KVM label (cloudbr0) as the
> > management or does CS automatically just use the management NIC ie. I
> would
> > only need to drag the storage icon across in basic setup if I wanted it
> on
> > a different NIC/IP subnet ?  (hope that makes sense !)
> >
> > On the plus side I have been at this for so long now 

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Jon Marshall
Rafael


Here is the output as requested -



mysql> mysql> select * from cloud.storage_pool where removed is null;
id: 1 | name: ds1 | uuid: a234224f-05fb-3f4c-9b0f-c51ebdf9a601
pool_type: NetworkFilesystem | port: 2049 | data_center_id: 1
pod_id: NULL | cluster_id: NULL | used_bytes: 6059720704
capacity_bytes: 79133933568 | host_address: 172.30.5.2 | user_info: NULL
path: /export/primary | created: 2018-06-05 13:45:01 | removed: NULL
update_time: NULL | status: Up | storage_provider_name: DefaultPrimary
scope: ZONE | hypervisor: KVM | managed: 0 | capacity_iops: NULL
1 row in set (0.00 sec)

mysql>

Do you think this problem is related to my NIC/bridge configuration or the way 
I am configuring the zone ?

Jon

From: Rafael Weingärtner 
Sent: 07 June 2018 06:45
To: users
Subject: Re: advanced networking with public IPs direct to VMs

Can you also post the result of:
select * from cloud.storage_pool where removed is null

On Wed, Jun 6, 2018 at 3:06 PM, Dag Sonstebo 
wrote:

> Hi Jon,
>
> Still confused where your primary storage pools are – are you sure your
> hosts are in cluster 1?
>
> Quick question just to make sure - assuming management/storage is on the
> same NIC when I setup basic networking the physical network has the
> management and guest icons already there and I just edit the KVM labels. If
> I am running storage over management do I need to drag the storage icon to
> the physical network and use the same KVM label (cloudbr0) as the
> management or does CS automatically just use the management NIC ie. I would
> only need to drag the storage icon across in basic setup if I wanted it on
> a different NIC/IP subnet ?  (hope that makes sense !)
>
> >> I would do both – set up your 2/3 physical networks, name isn’t that
> important – but then drag the traffic types to the correct one and make
> sure the labels are correct.
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 06/06/2018, 12:39, "Jon Marshall"  wrote:
>
> Dag
>
>
> Do you mean  check the pools with "Infrastructure -> Primary Storage"
> and "Infrastructure -> Secondary Storage" within the UI ?
>
>
> If so Primary Storage has a state of UP, secondary storage does not
> show a state as such so not sure where else to check it ?
>
>
> Rerun of the command -
>
> mysql> select * from cloud.storage_pool where cluster_id = 1;
> Empty set (0.00 sec)
>
> mysql>
>
> I think it is something to do with my zone creation rather than the
> NIC, bridge setup although I can post those if needed.
>
> I may try to setup just the 2 NIC solution you mentioned although as I
> say I had the same issue with that, i.e. host goes to "Alert" state and same
> error messages.  The only time I can get it to go to "Down" state is when
> it is all on the single NIC.
>
> Quick question just to make sure - assuming management/storage is on
> the same NIC when I setup basic networking the physical network has the
> management and guest icons already there and I just edit the KVM labels. If
> I am running storage over management do I need to drag the storage icon to
> the physical network and use the same KVM label (cloudbr0) as the
> management or does CS automatically just use the management NIC ie. I would
> only need to drag the storage icon across in basic setup if I wanted it on
> a different NIC/IP subnet ?  (hope that makes sense !)
>
> On the plus side I have been at this for so long now and done so many
> rebuilds I could do it in my sleep now 
>
>
> 
> From: Dag Sonstebo 
> Sent: 06 June 2018 12:28
> To: users@cloudstack.apache.org
> Subject: Re: advanced networking with public IPs direct to VMs
>
> Looks OK to me Jon.
>
> The one thing that throws me is your storage pools – can you rerun

Re: advanced networking with public IPs direct to VMs

2018-06-07 Thread Jon Marshall
Dag


I am not an SQL expert by any means, but does this not show the hosts are in 
cluster 1 -


mysql> select name, cluster_id from cloud.host;
+-++
| name| cluster_id |
+-++
| dcp-cscn1.local |  1 |
| v-2-VM  |   NULL |
| s-1-VM  |   NULL |
| dcp-cscn2.local |  1 |
| dcp-cscn3.local |  1 |
+-++
5 rows in set (0.00 sec)

mysql>

I only have one cluster and those are the hosts I am using.
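
For reference, a join along these lines (same cloud database; the client
invocation is only an example) cross-checks which cluster each host sits in and
whether any pool is actually bound to that cluster:

# hypervisor hosts and any cluster-scoped pools attached to their cluster
mysql -u cloud -p -e "SELECT h.name, h.cluster_id, sp.name AS pool, sp.scope
  FROM cloud.host h
  LEFT JOIN cloud.storage_pool sp
    ON sp.cluster_id = h.cluster_id AND sp.removed IS NULL
  WHERE h.removed IS NULL AND h.type = 'Routing';"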


Jon



From: Dag Sonstebo 
Sent: 06 June 2018 19:06
To: users@cloudstack.apache.org
Subject: Re: advanced networking with public IPs direct to VMs

Hi Jon,

Still confused where your primary storage pools are – are you sure your hosts 
are in cluster 1?

Quick question just to make sure - assuming management/storage is on the same 
NIC when I setup basic networking the physical network has the management and 
guest icons already there and I just edit the KVM labels. If I am running 
storage over management do I need to drag the storage icon to the physical 
network and use the same KVM label (cloudbr0) as the management or does CS 
automatically just use the management NIC ie. I would only need to drag the 
storage icon across in basic setup if I wanted it on a different NIC/IP subnet 
?  (hope that makes sense !)

>> I would do both – set up your 2/3 physical networks, name isn’t that 
>> important – but then drag the traffic types to the correct one and make sure 
>> the labels are correct.
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/06/2018, 12:39, "Jon Marshall"  wrote:

Dag


Do you mean  check the pools with "Infrastructure -> Primary Storage" and 
"Infrastructure -> Secondary Storage" within the UI ?


If so Primary Storage has a state of UP, secondary storage does not show a 
state as such so not sure where else to check it ?


Rerun of the command -

mysql> select * from cloud.storage_pool where cluster_id = 1;
Empty set (0.00 sec)

mysql>

I think it is something to do with my zone creation rather than the NIC, 
bridge setup although I can post those if needed.

I may try to setup just the 2 NIC solution you mentioned although as I say 
I had the same issue with that, i.e. host goes to "Alert" state and same error 
messages.  The only time I can get it to go to "Down" state is when it is all 
on the single NIC.

Quick question just to make sure - assuming management/storage is on the 
same NIC when I setup basic networking the physical network has the management 
and guest icons already there and I just edit the KVM labels. If I am running 
storage over management do I need to drag the storage icon to the physical 
network and use the same KVM label (cloudbr0) as the management or does CS 
automatically just use the management NIC ie. I would only need to drag the 
storage icon across in basic setup if I wanted it on a different NIC/IP subnet 
?  (hope that makes sense !)

On the plus side I have been at this for so long now and done so many 
rebuilds I could do it in my sleep now 



From: Dag Sonstebo 
Sent: 06 June 2018 12:28
To: users@cloudstack.apache.org
Subject: Re: advanced networking with public IPs direct to VMs

Looks OK to me Jon.

The one thing that throws me is your storage pools – can you rerun your 
query: select * from cloud.storage_pool where cluster_id = 1;

Do the pools show up as online in the CloudStack GUI?

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 06/06/2018, 12:08, "Jon Marshall"  wrote:

Don't know whether this helps or not but I logged into the SSVM and ran 
an ifconfig -


eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 169.254.3.35  netmask 255.255.0.0  broadcast 
169.254.255.255
ether 0e:00:a9:fe:03:23  txqueuelen 1000  (Ethernet)
RX packets 141  bytes 20249 (19.7 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 108  bytes 16287 (15.9 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.30.3.34  netmask 255.255.255.192  broadcast 172.30.3.63
ether 1e:00:3b:00:00:05  txqueuelen 1000  (Ethernet)
RX packets 56722  bytes 4953133 (4.7 MiB)
RX errors 0  dropped 44573  overruns 0  frame 0
TX packets 11224  bytes 1234932 (1.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.30.4.86  netmask 255.255.255.128  broadcast 
172.30.4.127
ether 1e:00:d9:00:00:53  txqueuelen 1000  (Ethernet)
RX packets 366191  bytes 435300557 (415.1 MiB)
RX errors 0  dropped 39456  overruns 0  frame 0
TX 

Re: How exactly does CloudStack stop a VM?

2018-06-07 Thread Suresh Kumar Anaparti
Hi Zhang,

CloudStack would usually trigger a hypervisor-level shutdown command for the
guest OS to stop the guest VM. In the case of XenServer, a XAPI shutdown
command is sent from CloudStack: a hard shutdown is attempted if the force
flag is set, otherwise a clean shutdown is attempted, and if the clean
shutdown fails the VM is hard shut down. The force flag is false by default
and set to true in some cases. In what case is CloudStack shutting down your
guest VMs?
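
For reference, on the XenServer side those two paths correspond roughly to the
following xe commands (run on a XenServer host; the UUID is a placeholder):

xe vm-shutdown uuid=<vm-uuid>              # clean shutdown, asks the guest to shut itself down
xe vm-shutdown uuid=<vm-uuid> force=true   # hard shutdown, used when force is set or the clean attempt fails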

-Suresh

On Thu, Jun 7, 2018 at 5:22 AM, Yiping Zhang  wrote:

> Our VM instances do have xentools installed, though still at version 6.2,
> whereas our hypervisors have been upgraded to XenServer 6.5 since the VM
> instances were created.
>
>
> On 6/6/18, 4:20 PM, "Jean-Francois Nadeau" 
> wrote:
>
> If the xentools are installed and running in the guest OS it should detect
> the shutdown sent via XAPI.
>
> On Wed, Jun 6, 2018 at 6:58 PM, Yiping Zhang 
> wrote:
>
> > We are using XenServers with our CloudStack instances.
> >
> > On 6/6/18, 3:11 PM, "Jean-Francois Nadeau" 
> > wrote:
> >
> > On KVM, AFAIK the shutdown is the equivalent of pressing the power
> > button.  To get the Linux OS to catch this and initiate a clean shutdown,
> > you need the ACPID service running in the guest OS.
> >
> > On Wed, Jun 6, 2018 at 6:01 PM, Yiping Zhang  >
> > wrote:
> >
> > > Hi, all:
> > >
> > > We have a few VM instances which will hang when we issue a Stop command
> > > from the CloudStack web UI or through API calls, because the app's own
> > > startup/stop script in the guest OS is not properly invoked.  The app's
> > > startup/stop script works properly if we issue a shutdown/reboot command
> > > in the guest OS directly.
> > >
> > > Hence here is my question: when CloudStack tries to stop a running VM
> > > instance, what is the exact command it sends to the VM to stop it, with
> > > or without the forced flag?  What are the interactions between
> > > CloudStack, the hypervisor and the guest VM?
> > >
> > > Yiping
> > >
> >
> >
> >
>
>
>