Re: Problems with KVM HA & STONITH

2018-04-04 Thread victor

Hello Rohit,

Has the Host HA provider started working with Ceph? The reason I am asking 
is because I am not able to create a VM with Ceph storage on a KVM host 
with HA enabled, and I am getting the following error while creating the VM.



com.cloud.exception.StorageUnavailableException: Resource [StoragePool:2] 
is unreachable: Unable to create 
Vol[9|vm=6|DATADISK]:com.cloud.utils.exception.CloudRuntimeException: 
org.libvirt.LibvirtException: unsupported configuration: only RAW 
volumes are supported by this storage pool
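
(For reference, one way to confirm this on the KVM host: libvirt RBD pools 
only accept raw-format volumes, so check the pool type and the disk format 
being requested. A quick sketch; the pool name and image file are placeholders:)

  virsh pool-list --all
  virsh pool-dumpxml <pool-name> | grep "pool type"
  # for a file-based source image, verify the format is raw, not qcow2:
  qemu-img info <image-file>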



Regards
Victor

On 11/04/2017 09:53 PM, Rohit Yadav wrote:

Hi James, (/cc Simon and others),


A new feature exists in upcoming ACS 4.11, Host HA:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA

You can read more about it here as well: 
http://www.shapeblue.com/host-ha-for-kvm-hosts-in-cloudstack/

This feature can use a custom HA provider, with the default HA provider implemented 
for KVM and NFS, and it uses IPMI-based fencing (STONITH) of the host. The current 
VM HA mechanism provides no such method of fencing (powering off) a host, and 
whether VM HA fails depends on the circumstances (environment issues, ACS 
version, etc.).
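
(For context, the IPMI fencing is plain out-of-band power control against the 
host's BMC - conceptually something like the following, where the address and 
credentials are placeholders:)

  ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> chassis power status
  ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> chassis power off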

As Simon mentioned, we expect to have a (host) HA provider that works with Ceph 
in the near future.

Regards.


From: Simon Weller 
Sent: Thursday, November 2, 2017 7:27:22 PM
To: users@cloudstack.apache.org
Subject: Re: Problems with KVM HA & STONITH

James,


Ceph is a great solution, and we run all of our ACS storage on Ceph. Note that it adds 
another layer of complexity to your installation, so you're going to need to develop some 
expertise with that platform to get comfortable with how it works. Typically you don't 
want to mix Ceph with your ACS hosts. We in fact deploy 3 separate Ceph monitors, and 
then scale OSDs as required on a per-cluster basis in order to add additional resiliency 
(so every KVM ACS cluster has its own Ceph "POD").  We also use Ceph for S3 
storage (on completely separate Ceph clusters) for some other services.


NFS is much simpler to maintain for smaller installations, in my opinion. If the 
IO load you're looking at isn't going to be insanely high, you could look at 
building a 2-node NFS cluster using Pacemaker and DRBD for data replication 
between nodes. That would reduce your storage requirement to 2 fairly low-power 
servers (NFS is not very CPU intensive). Currently, on a host failure when using 
storage other than NFS on KVM, you will not see HA occur until you take the 
failed host out of the ACS cluster. This is a historical limitation: ACS 
could not confirm the host had been fenced correctly, so to avoid potential 
data corruption (due to 2 hosts mounting the same storage), it does not do 
anything until the operator intervenes. As of ACS 4.10, IPMI-based fencing is 
now supported on NFS, and we're planning on developing similar support for Ceph.
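
(As a rough illustration of such a 2-node Pacemaker/DRBD NFS cluster - a sketch 
only, assuming a DRBD resource named r0 and the stock resource agents; a real 
build also needs fencing plus ordering/colocation constraints:)

  pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 promotable
  pcs resource create nfs_fs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/export fstype=ext4
  pcs resource create nfs_srv ocf:heartbeat:nfsserver nfs_shared_infodir=/export/nfsinfo
  pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=<floating-ip> cidr_netmask=24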


Since you're a school district, I'm more than happy to jump on the phone with 
you to talk you through these options if you'd like.


- Si



From: McClune, James 
Sent: Thursday, November 2, 2017 8:28 AM
To: users@cloudstack.apache.org
Subject: Re: Problems with KVM HA & STONITH

Hi Simon,

Thanks for getting back to me. I created one single NFS share and added it
as primary storage. I think I now better understand how the storage works
with ACS.

I was able to get HA working with one NFS storage, which is good. However,
is there a way to incorporate multiple NFS storage pools and still have the
HA functionality? I think something like GlusterFS or Ceph (like Ivan and
Dag described) will work better.

Thank you Simon, Ivan, and Dag for your assistance!
James

On Wed, Nov 1, 2017 at 10:10 AM, Simon Weller 
wrote:


James,


Try just configuring a single NFS server and see if your setup works. If
you have 3 NFS shares across all 3 hosts, I'm wondering whether ACS is
picking the one you rebooted as the storage for your VMs; when that
storage goes away (when you bounce the host), all storage for your VMs
vanishes and ACS tries to reboot your other hosts.


Normally in a simple ACS setup, you would have a separate storage server
that can serve NFS to all hosts. If a host dies, then a VM would be
brought up on a spare host, since all hosts have access to the same storage.

Your other option is to use local storage, but that won't provide HA.


- Si



From: McClune, James 
Sent: Monday, October 30, 2017 2:26 PM
To: users@cloudstack.apache.org
Subject: Re: Problems with KVM HA & STONITH

Hi Dag,

Thank you for responding back. I am currently running ACS 4.9 on an Ubuntu
14.04 VM. I have three nodes, each having about 1TB of primary storage
(NFS) and 1TB of secondary storage (NFS). I added each NFS share into ACS.
All nodes are in a cluster.


Re: Committee to Sort through CCC Presentation Submissions

2018-04-04 Thread Rafael Weingärtner
I think everybody that “raised their hands” here has already signed up to
review.

Mike, what if we only gathered the reviews from Apache's main review
system, and then used them to decide which presentations get into the
CloudStack tracks? That way we reduce the work on our side (and we also
remove bias…). I do believe that reviews from other peers in the Apache
community (even ones from outside our small community) will be fair and
technical (meaning, without passion or favoritism).

Having said that, I think we only need a small group of PMC members to
gather the results and, out of the best-ranked proposals, pick the ones
for our tracks.

What do you (Mike) and others think?


On Tue, Apr 3, 2018 at 5:07 PM, Tutkowski, Mike 
wrote:

> Hi Ron,
>
> I don’t actually have insight into how many people have currently signed
> up online to be CFP reviewers for ApacheCon. At present, I’m only aware of
> those who have responded to this e-mail chain.
>
> We should be able to find out more in the coming weeks. We’re still quite
> early in the process.
>
> Thanks for your feedback,
> Mike
>
> On 4/1/18, 9:18 AM, "Ron Wheeler"  wrote:
>
> How many people have signed up to be reviewers?
>
> I don't think that scheduling is part of the review process; that can
> be done by the person/team "organizing" ApacheCon on behalf of the PMC.
>
> To me review is looking at content for
> - relevance
> - quality of the presentations (suggest fixes to content, English,
> graphics, etc.)
> This should result in a consensus score
> - Perfect - ready for prime time
> - Needs minor changes as documented by the reviewers
> - Great topic but needs more work - perhaps a reviewer could volunteer
> to work with the presenter to get it ready if chosen
> - Not recommended for topic or content reasons
>
> The reviewers could also make non-binding recommendations about the
> balance between topics - marketing(why Cloudstack),
> Operations/implementation, Technical details, Roadmap, etc. based on
> what they have seen.
>
> This should be used by the organizers to make the choices and organize
> the program.
> The organizers have the final say on the choice of presentations and
> schedule
>
> Reviewers are there to help the process not control it.
>
> I would be worried that you do not have enough reviewers rather than
> too
> many.
> Then the work falls on the PMC and organizers.
>
> When planning meetings, I would recommend that you clearly separate the
> roles and only invite the reviewers to the meetings about review. Get
> the list of presentation to present to the reviewers and decide if
> there
> are any instructions that you want to give to reviewers.
> I would recommend that you keep the organizing group small. Membership
> should be set by the PMC and should be people that are committed to the
> ApacheCon project and have the time. The committee can request help for
> specific tasks from others in the community who are not on the
> committee.
>
> I would also recommend that organizers do not do reviews. They should
> read the finalists but if they do reviews, there may be a suggestion of
> favouring presentations that they reviewed. It also ensures that the
> organizers are not getting heat from rejected presenters - "it is the
> reviewers' fault you did not get selected".
>
> My advice is to get as many reviewers as you can so that no one is
> essential and each reviewer has a limited number of presentations to
> review but each presentation gets reviewed by multiple people. Also
> bear
> in mind that not all reviewers have the same ability to review each
> presentation.
> Reviews should be anonymous and only the summary comments given to the
> presenter. Reviewers of a presentation should be able to discuss the
> presentation during the review to make sure that reviewers do not feel
> isolated or get lost when they hit content that they don't understand
> fully.
>
>
>
> Ron
>
>
> On 01/04/2018 12:20 AM, Tutkowski, Mike wrote:
> > Thanks for the feedback, Will!
> >
> > I agree with the approach you outlined.
> >
> > Thanks for being so involved in the process! Let’s chat with Giles
> once he’s back to see if we can get your questions answered.
> >
> >> On Mar 31, 2018, at 10:14 PM, Will Stevens 
> wrote:
> >>
> >> In the past the committee was chosen as a relatively small group in
> order
> >> to make it easier to manage feedback.  In order to make it fair to
> everyone
> >> in the community, I would suggest that instead of doing it with a
> small
> >> group, we do it out in the open on a scheduled call.
> >>
> >> We will have to get a list of the talks that are CloudStack
> specific from
> >> 

RE: systemvm

2018-04-04 Thread Paul Angus
Hi Swastik,

Have you tried running:

sh /usr/local/cloud/systemvm/ssvm-check.sh

on the SSVM? It's (almost) impossible for the script not to be there.

Can you paste the contents of /var/log/cloud.log to pastebin or something?
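
(If you can't get at it via the console: on KVM you can usually SSH to the 
SSVM from the host it runs on, over the link-local NIC on port 3922 - a 
sketch, assuming the default key location; substitute your SSVM's 169.254.x.x 
address:)

  ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<link-local-ip>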

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


-Original Message-
From: Swastik Mittal  
Sent: 04 April 2018 18:40
To: users@cloudstack.apache.org
Subject: RE: systemvm

@Paul

Yes, I explicitly created it like normal nfs share, and gave the path of the 
exported directory in cloudstack.

I ain't getting any ssvm-check file in my ssvm. I did try changing the system 
vm template from 4.11 to 4.6 but the result was the same.

I was successfully able to launch a vm though with similar configurations but 
with ACS 4.6 and 4.4 .

On 4 Apr 2018 11:05 p.m., "Paul Angus"  wrote:

> Did you explicitly create it or just let cloudstack sort itself out?
>
> FYI, ssvm-check is at:
> /usr/local/cloud/systemvm/ssvm-check.sh
>
> In the ssvm itself
>
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message-
> From: Swastik Mittal 
> Sent: 04 April 2018 18:25
> To: users@cloudstack.apache.org
> Subject: RE: systemvm
>
> Hey Paul
>
> Yes, I have my management working as the storage as well.
>
> Regards
> Swastik
>
> On 4 Apr 2018 10:39 p.m., "Paul Angus"  wrote:
>
> > Have you configured a storage network on the same subnet as the
> > management network?  You have two interfaces on the same subnet.
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Swastik Mittal 
> > Sent: 04 April 2018 11:46
> > To: users@cloudstack.apache.org
> > Subject: Re: systemvm
> >
> > @Stephen
> >
> > "host" in global settings is set to 10.1.0.15 which is the ip address of
> > the management server.
> > Yes, I'll work on getting ssvm-check file.
> >
> > Thanks
> > Swastik
> >
> > On 4/4/18, Swastik Mittal  wrote:
> > > @Stephen
> > >
> > > Request to internal server mentioned in the global sec.storage.. after
> > > registering the iso successfully, get's stuck on HEAD request. As you
> > > mentioned there is an issue in route path from SSVM. Not able to
> > > figure out how do I find it.
> > >
> > > regards
> > > Swastik
> > >
> > > On 4/4/18, Swastik Mittal  wrote:
> > >> Hey @Stephen
> > >>
> > >> I am able to ping my management from ssvm. Also wget to internal
> > >> server works fine, it took some time to establish connection
> > >> initially.
> > >>
> > >> I don't have any ssvm-check.sh file. I forgot to mention it on this
> > >> thread.
> > >>
> > >> Outputs:
> > >>
> > >> root@s-1-VM:~# ip a s
> > >> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > >> inet 127.0.0.1/8 scope host lo
> > >> 2: eth0:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
> > >> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
> > >> 3: eth1:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
> > >> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
> > >> 4: eth2:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
> > >> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
> > >>
> > >>
> > >> root@s-1-VM:~# ip r s
> > >> default via 10.1.0.2 dev eth2
> > >> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
> > >> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
> > >> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
> > >>
> > >> Yes, my storage and management are the same.
> > >>
> > >> root@s-1-VM:~# route -n
> > >> Kernel IP routing table
> > >> Destination Gateway Genmask Flags  Metric Ref
> Use
> > >> Iface
> > >> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
> > >> 00eth2
> > >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> > >>   0eth1
> > >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> > >>   0eth2
> > >> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
> > >>  0eth0
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> On 4/4/18, Stephan Seitz  wrote:
> > >>> Hu!
> > >>>
> > >>> I'ld recommend to log in to your ssvm and check if everything 

RE: systemvm

2018-04-04 Thread Swastik Mittal
@Paul

Yes, I explicitly created it like a normal NFS share, and gave the path of
the exported directory in CloudStack.

I am not getting any ssvm-check file in my SSVM. I did try changing the
system VM template from 4.11 to 4.6, but the result was the same.

I was able to launch a VM successfully with a similar configuration,
though, on ACS 4.6 and 4.4.

On 4 Apr 2018 11:05 p.m., "Paul Angus"  wrote:

> Did you explicitly create it or just let cloudstack sort itself out?
>
> FYI, ssvm-check is at:
> /usr/local/cloud/systemvm/ssvm-check.sh
>
> In the ssvm itself
>
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Swastik Mittal 
> Sent: 04 April 2018 18:25
> To: users@cloudstack.apache.org
> Subject: RE: systemvm
>
> Hey Paul
>
> Yes, I have my management working as the storage as well.
>
> Regards
> Swastik
>
> On 4 Apr 2018 10:39 p.m., "Paul Angus"  wrote:
>
> > Have you configured a storage network on the same subnet as the
> > management network?  You have two interfaces on the same subnet.
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Swastik Mittal 
> > Sent: 04 April 2018 11:46
> > To: users@cloudstack.apache.org
> > Subject: Re: systemvm
> >
> > @Stephen
> >
> > "host" in global settings is set to 10.1.0.15 which is the ip address of
> > the management server.
> > Yes, I'll work on getting ssvm-check file.
> >
> > Thanks
> > Swastik
> >
> > On 4/4/18, Swastik Mittal  wrote:
> > > @Stephen
> > >
> > > Request to internal server mentioned in the global sec.storage.. after
> > > registering the iso successfully, get's stuck on HEAD request. As you
> > > mentioned there is an issue in route path from SSVM. Not able to
> > > figure out how do I find it.
> > >
> > > regards
> > > Swastik
> > >
> > > On 4/4/18, Swastik Mittal  wrote:
> > >> Hey @Stephen
> > >>
> > >> I am able to ping my management from ssvm. Also wget to internal
> > >> server works fine, it took some time to establish connection
> > >> initially.
> > >>
> > >> I don't have any ssvm-check.sh file. I forgot to mention it on this
> > >> thread.
> > >>
> > >> Outputs:
> > >>
> > >> root@s-1-VM:~# ip a s
> > >> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> > >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > >> inet 127.0.0.1/8 scope host lo
> > >> 2: eth0:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
> > >> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
> > >> 3: eth1:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
> > >> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
> > >> 4: eth2:  mtu 1500 qdisc pfifo_fast
> > >> state UP qlen 1000
> > >> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
> > >> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
> > >>
> > >>
> > >> root@s-1-VM:~# ip r s
> > >> default via 10.1.0.2 dev eth2
> > >> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
> > >> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
> > >> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
> > >>
> > >> Yes, my storage and management are the same.
> > >>
> > >> root@s-1-VM:~# route -n
> > >> Kernel IP routing table
> > >> Destination Gateway Genmask Flags  Metric Ref
> Use
> > >> Iface
> > >> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
> > >> 00eth2
> > >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> > >>   0eth1
> > >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> > >>   0eth2
> > >> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
> > >>  0eth0
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> On 4/4/18, Stephan Seitz  wrote:
> > >>> Hu!
> > >>>
> > >>> I'ld recommend to log in to your ssvm and check if everything is
> > >>> able to connect.
> > >>>
> > >>> I second dag's suggestion to double check your network setup.
> > >>>
> > >>> Inside your ssvm I'ld run
> > >>>
> > >>> /usr/local/cloud/systemvm/ssvm-check.sh
> > >>>
> > >>> also
> > >>>
> > >>> ip a s
> > >>> ip r s
> > >>>
> > >>>
> > >>> As an educated guess: did you setup your storage-network to the same
> > >>> cidr as your management-network?
> > >>>
> > >>> if yes, maybe the default route inside your ssvm is setup wrong (on
> > >>> the wrong NIC 

RE: systemvm

2018-04-04 Thread Paul Angus
Did you explicitly create it, or did you just let CloudStack sort itself out?

FYI, ssvm-check is at:
/usr/local/cloud/systemvm/ssvm-check.sh

in the SSVM itself.



paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


-Original Message-
From: Swastik Mittal  
Sent: 04 April 2018 18:25
To: users@cloudstack.apache.org
Subject: RE: systemvm

Hey Paul

Yes, I have my management working as the storage as well.

Regards
Swastik

On 4 Apr 2018 10:39 p.m., "Paul Angus"  wrote:

> Have you configured a storage network on the same subnet as the 
> management network?  You have two interfaces on the same subnet.
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message-
> From: Swastik Mittal 
> Sent: 04 April 2018 11:46
> To: users@cloudstack.apache.org
> Subject: Re: systemvm
>
> @Stephen
>
> "host" in global settings is set to 10.1.0.15 which is the ip address of
> the management server.
> Yes, I'll work on getting ssvm-check file.
>
> Thanks
> Swastik
>
> On 4/4/18, Swastik Mittal  wrote:
> > @Stephen
> >
> > Request to internal server mentioned in the global sec.storage.. after
> > registering the iso successfully, get's stuck on HEAD request. As you
> > mentioned there is an issue in route path from SSVM. Not able to
> > figure out how do I find it.
> >
> > regards
> > Swastik
> >
> > On 4/4/18, Swastik Mittal  wrote:
> >> Hey @Stephen
> >>
> >> I am able to ping my management from ssvm. Also wget to internal
> >> server works fine, it took some time to establish connection
> >> initially.
> >>
> >> I don't have any ssvm-check.sh file. I forgot to mention it on this
> >> thread.
> >>
> >> Outputs:
> >>
> >> root@s-1-VM:~# ip a s
> >> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >> inet 127.0.0.1/8 scope host lo
> >> 2: eth0:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
> >> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
> >> 3: eth1:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
> >> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
> >> 4: eth2:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
> >> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
> >>
> >>
> >> root@s-1-VM:~# ip r s
> >> default via 10.1.0.2 dev eth2
> >> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
> >> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
> >> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
> >>
> >> Yes, my storage and management are the same.
> >>
> >> root@s-1-VM:~# route -n
> >> Kernel IP routing table
> >> Destination Gateway Genmask Flags  Metric RefUse
> >> Iface
> >> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
> >> 00eth2
> >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> >>   0eth1
> >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> >>   0eth2
> >> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
> >>  0eth0
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> On 4/4/18, Stephan Seitz  wrote:
> >>> Hu!
> >>>
> >>> I'ld recommend to log in to your ssvm and check if everything is
> >>> able to connect.
> >>>
> >>> I second dag's suggestion to double check your network setup.
> >>>
> >>> Inside your ssvm I'ld run
> >>>
> >>> /usr/local/cloud/systemvm/ssvm-check.sh
> >>>
> >>> also
> >>>
> >>> ip a s
> >>> ip r s
> >>>
> >>>
> >>> As an educated guess: did you setup your storage-network to the same
> >>> cidr as your management-network?
> >>>
> >>> if yes, maybe the default route inside your ssvm is setup wrong (on
> >>> the wrong NIC or errenously set up twice on two NICs)
> >>>
> >>>
> >>> cheers,
> >>>
> >>> - Stephan
> >>>
> >>>
> >>>
> >>>
> >>> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
>  @Dag
> 
>  By legacy I meant one way ssl. I have set ca strictness for client
>  as false.
> 
>  I am using 1 nic common for all the network, that is one bridge
>  serving both public and private network.
> 
>  I am setting up a basic zone so I set my management within ip range
>  of 10 and guest within a range of 100, and my statement vms get ip
>  assigned within those ranges successfully.
> 
>  I used these similar configuration with ACL 4.6 and was able to run

RE: systemvm

2018-04-04 Thread Swastik Mittal
Hey Paul

Yes, I have my management network serving as the storage network as well.

Regards
Swastik

On 4 Apr 2018 10:39 p.m., "Paul Angus"  wrote:

> Have you configured a storage network on the same subnet as the management
> network?  You have two interfaces on the same subnet.
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Swastik Mittal 
> Sent: 04 April 2018 11:46
> To: users@cloudstack.apache.org
> Subject: Re: systemvm
>
> @Stephen
>
> "host" in global settings is set to 10.1.0.15 which is the ip address of
> the management server.
> Yes, I'll work on getting ssvm-check file.
>
> Thanks
> Swastik
>
> On 4/4/18, Swastik Mittal  wrote:
> > @Stephen
> >
> > Request to internal server mentioned in the global sec.storage.. after
> > registering the iso successfully, get's stuck on HEAD request. As you
> > mentioned there is an issue in route path from SSVM. Not able to
> > figure out how do I find it.
> >
> > regards
> > Swastik
> >
> > On 4/4/18, Swastik Mittal  wrote:
> >> Hey @Stephen
> >>
> >> I am able to ping my management from ssvm. Also wget to internal
> >> server works fine, it took some time to establish connection
> >> initially.
> >>
> >> I don't have any ssvm-check.sh file. I forgot to mention it on this
> >> thread.
> >>
> >> Outputs:
> >>
> >> root@s-1-VM:~# ip a s
> >> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >> inet 127.0.0.1/8 scope host lo
> >> 2: eth0:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
> >> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
> >> 3: eth1:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
> >> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
> >> 4: eth2:  mtu 1500 qdisc pfifo_fast
> >> state UP qlen 1000
> >> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
> >> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
> >>
> >>
> >> root@s-1-VM:~# ip r s
> >> default via 10.1.0.2 dev eth2
> >> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
> >> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
> >> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
> >>
> >> Yes, my storage and management are the same.
> >>
> >> root@s-1-VM:~# route -n
> >> Kernel IP routing table
> >> Destination Gateway Genmask Flags  Metric RefUse
> >> Iface
> >> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
> >> 00eth2
> >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> >>   0eth1
> >> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
> >>   0eth2
> >> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
> >>  0eth0
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> On 4/4/18, Stephan Seitz  wrote:
> >>> Hu!
> >>>
> >>> I'ld recommend to log in to your ssvm and check if everything is
> >>> able to connect.
> >>>
> >>> I second dag's suggestion to double check your network setup.
> >>>
> >>> Inside your ssvm I'ld run
> >>>
> >>> /usr/local/cloud/systemvm/ssvm-check.sh
> >>>
> >>> also
> >>>
> >>> ip a s
> >>> ip r s
> >>>
> >>>
> >>> As an educated guess: did you setup your storage-network to the same
> >>> cidr as your management-network?
> >>>
> >>> if yes, maybe the default route inside your ssvm is setup wrong (on
> >>> the wrong NIC or errenously set up twice on two NICs)
> >>>
> >>>
> >>> cheers,
> >>>
> >>> - Stephan
> >>>
> >>>
> >>>
> >>>
> >>> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
>  @Dag
> 
>  By legacy I meant one way ssl. I have set ca strictness for client
>  as false.
> 
>  I am using 1 nic common for all the network, that is one bridge
>  serving both public and private network.
> 
>  I am setting up a basic zone so I set my management within ip range
>  of 10 and guest within a range of 100, and my statement vms get ip
>  assigned within those ranges successfully.
> 
>  I used these similar configuration with ACL 4.6 and was able to run
>  vm's successfully.
> 
>  Regards
>  Swastik
> 
>  On 4 Apr 2018 1:44 p.m., "Dag Sonstebo"
>  
>  wrote:
> 
>  >
>  > Swastik,
>  >
>  > Your issue is most likely with your network configuration rather
>  > than anything to do with firewalls or system VM templates.
>  >
>  > First of all – what do you mean by legacy mode? Are you 

RE: systemvm

2018-04-04 Thread Paul Angus
Have you configured a storage network on the same subnet as the management
network? You have two interfaces on the same subnet.

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


-Original Message-
From: Swastik Mittal  
Sent: 04 April 2018 11:46
To: users@cloudstack.apache.org
Subject: Re: systemvm

@Stephen

"host" in global settings is set to 10.1.0.15 which is the ip address of the 
management server.
Yes, I'll work on getting ssvm-check file.

Thanks
Swastik

On 4/4/18, Swastik Mittal  wrote:
> @Stephen
>
> Request to internal server mentioned in the global sec.storage.. after 
> registering the iso successfully, get's stuck on HEAD request. As you 
> mentioned there is an issue in route path from SSVM. Not able to 
> figure out how do I find it.
>
> regards
> Swastik
>
> On 4/4/18, Swastik Mittal  wrote:
>> Hey @Stephen
>>
>> I am able to ping my management from ssvm. Also wget to internal 
>> server works fine, it took some time to establish connection 
>> initially.
>>
>> I don't have any ssvm-check.sh file. I forgot to mention it on this 
>> thread.
>>
>> Outputs:
>>
>> root@s-1-VM:~# ip a s
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> 2: eth0:  mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
>> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
>> 3: eth1:  mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
>> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
>> 4: eth2:  mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
>> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
>>
>>
>> root@s-1-VM:~# ip r s
>> default via 10.1.0.2 dev eth2
>> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
>> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
>> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
>>
>> Yes, my storage and management are the same.
>>
>> root@s-1-VM:~# route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags  Metric RefUse
>> Iface
>> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
>> 00eth2
>> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>>   0eth1
>> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>>   0eth2
>> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
>>  0eth0
>>
>>
>>
>>
>>
>>
>>
>> On 4/4/18, Stephan Seitz  wrote:
>>> Hu!
>>>
>>> I'ld recommend to log in to your ssvm and check if everything is 
>>> able to connect.
>>>
>>> I second dag's suggestion to double check your network setup.
>>>
>>> Inside your ssvm I'ld run
>>>
>>> /usr/local/cloud/systemvm/ssvm-check.sh
>>>
>>> also
>>>
>>> ip a s
>>> ip r s
>>>
>>>
>>> As an educated guess: did you setup your storage-network to the same 
>>> cidr as your management-network?
>>>
>>> if yes, maybe the default route inside your ssvm is setup wrong (on 
>>> the wrong NIC or errenously set up twice on two NICs)
>>>
>>>
>>> cheers,
>>>
>>> - Stephan
>>>
>>>
>>>
>>>
>>> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
 @Dag

 By legacy I meant one way ssl. I have set ca strictness for client 
 as false.

 I am using 1 nic common for all the network, that is one bridge 
 serving both public and private network.

 I am setting up a basic zone so I set my management within ip range 
 of 10 and guest within a range of 100, and my statement vms get ip 
 assigned within those ranges successfully.

 I used these similar configuration with ACL 4.6 and was able to run 
 vm's successfully.

 Regards
 Swastik

 On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
 
 wrote:

 >
 > Swastik,
 >
 > Your issue is most likely with your network configuration rather 
 > than anything to do with firewalls or system VM templates.
 >
 > First of all – what do you mean by legacy mode? Are you referring 
 > to advanced or basic zone?
 >
 > Secondly – can you tell us how you have configured your networking?
 >
 > - How many NICs you are using and how have you configured them
 > - What management vs public IP ranges you are using
 > - How you have mapped your networking in CloudStack against the 
 > underlying hardware NICs
 > - Can you also check what your “host” global setting is set to

Re: Upgrade CloudStack from 4.9.2.0 to 4.11.0

2018-04-04 Thread Dag Sonstebo
Hi Stephan,

Thanks for the summary – can you log these as new issues in the new issue 
tracker https://github.com/apache/cloudstack/issues please (note: not Jira).

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 04/04/2018, 10:39, "Stephan Seitz"  wrote:

Hi!

We're currently using XenServer instead of VMware, so I just don't know
if you really need to build your own packages. Afaik shapeblue's public
repository has been built with noredist.

Here's short list (sorry, we didn't report everything to the bugtracker
by now) of caveats:

* There's a more precise dashboard (XX.XX% instead of XX%)
-> Nice, but only works with locale set to EN or C or anything with
decimalpoints instead of commas :) ... consequently the default
language of the GUI will also be selected identical to your locale.

-> LDAP authentication doesn't work. You need to apply
https://github.com/apache/cloudstack/pull/2517 to get this working.

-> Adding a NIC to a running VM (only tested in Advanced Zone,
Xenserver, Shared Network to add) fails with an "duplicate MAC-address" 
error. See my post on the ML yesterday.

-> cloudstack-usage doesn't start since (at least Ubuntu, deb packages)
the update doesn't clean old libs from /usr/share/cloudstack-
usage/libs. For us cleanup and reinstall fixed that.

-> SSVM's java keystore doesn't contain Let's Encrypt Root-CA (maybe
others are also missing) so don't expect working downloads from
cdimage.debian.org via https :)

-> A few nasty popups occur (but can be ignored) in the GUI e.g.
selecting a project and viewing networks.

-> A minor documentation bug in the upgrade document: The apt-get.eu
Repository doesn't contain 4.11 right now. download.cloudstack.org
does.


To us none of that problems was a stopper, but your mileage may vary.

cheers,

- Stephan


On Wednesday, 04.04.2018 at 11:08 +0200, Marc Poll Garcia wrote:
> Hello,
> 
> My current infrastructure is Apache Cloudstack 4.9.2 with VMware
> hosts and
> the management server on CentOS.
> 
> 
> I'm planning to perform an upgrade from the current 4.9.2 version to
> the latest one.
> 
> I found this tutorial from Cloudstack website:
> 
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/e
> n/4.11.0.0/upgrade/upgrade-4.9.html
> 
> But I don't know if any of you have already done it and upgraded the
> system?
> I was wondering if anyone had any issues during the execution of the
> process.
> 
> And also if someone can send more info, or another guide to follow or
> best
> practice?
> 
> We've checked it out and found that we need to compile our own
> CloudStack software because we're using VMware hosts. Is that true?
> Any suggestions?
> 
> Thanks in advance.
> 
> Kind regards.
> 
> 
-- 

Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-44
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG: HRB 93818 B / Amtsgericht
Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin




dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue



RE: ACS 4.11 creates isolated Net only with IP Range 10.1.1.0/24

2018-04-04 Thread Paul Angus
Hi Melanie,

Yes, it looks like a UI bug.
I tried through the UI with the browser in developer mode to see what was sent. 
The guest netmask is missing:

http://192.168.56.11:8080/client/api?command=createNetwork&response=json&zoneid=6d0ebc4e-1f70-4a87-b29b-cb98f9ab9594&name=test_net&displaytext=test-sourcenat&networkofferingid=e6425385-e5d2-457a-b3df-4845b9e79edc&gateway=10.0.0.1&_=1522849857187
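
For comparison, a sketch of what the request should look like with the
netmask included (same values; the netmask value here is a placeholder):

http://192.168.56.11:8080/client/api?command=createNetwork&response=json&zoneid=6d0ebc4e-1f70-4a87-b29b-cb98f9ab9594&name=test_net&displaytext=test-sourcenat&networkofferingid=e6425385-e5d2-457a-b3df-4845b9e79edc&gateway=10.0.0.1&netmask=255.255.255.0&_=1522849857187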

Please could you file a GitHub issue for it at:

https://github.com/apache/cloudstack/issues

Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


-Original Message-
From: Melanie Desaive  
Sent: 04 April 2018 13:39
To: users@cloudstack.apache.org
Subject: ACS 4.11 creates isolated Net only with IP Range 10.1.1.0/24

Hi all,

after upgrading to 4.11 we have the issue that isolated networks created with the 
web UI are always created with the IP range 10.1.1.0/24, no matter what values 
are filled in the fields "Guest Gateway" and "Guest Netmask".

Creating an isolated network with CloudMonkey works perfectly using the
syntax:

create network displaytext=deleteme-cloudmonkey name=deleteme-cloudmonkey 
networkofferingid= zoneid= projectid= gateway=172.17.1.1 
netmask=255.255.252.0 networkdomain=deleteme-cloudmonkey.heinlein-intern.de

Could this be a bug with 4.11? Can someone reproduce this behaviour?

Greetings,

Melanie
--
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin



ACS 4.11 creates isolated Net only with IP Range 10.1.1.0/24

2018-04-04 Thread Melanie Desaive
Hi all,

after upgrading to 4.11 we have the issue that isolated networks created
with the web UI are always created with the IP range 10.1.1.0/24, no
matter what values are filled in the fields "Guest Gateway" and "Guest
Netmask".

Creating an isolated network with CloudMonkey works perfectly using the
syntax:

create network displaytext=deleteme-cloudmonkey
name=deleteme-cloudmonkey networkofferingid= zoneid=
projectid= gateway=172.17.1.1 netmask=255.255.252.0
networkdomain=deleteme-cloudmonkey.heinlein-intern.de

Could this be a bug with 4.11? Can someone reproduce this behaviour?

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin





Re: systemvm

2018-04-04 Thread Swastik Mittal
@Stephen

"host" in global settings is set to 10.1.0.15 which is the ip address
of the management server.
Yes, I'll work on getting ssvm-check file.

Thanks
Swastik

On 4/4/18, Swastik Mittal  wrote:
> @Stephen
>
> Request to internal server mentioned in the global sec.storage.. after
> registering the iso successfully, get's stuck on HEAD request. As you
> mentioned there is an issue in route path from SSVM. Not able to
> figure out how do I find it.
>
> regards
> Swastik
>
> On 4/4/18, Swastik Mittal  wrote:
>> Hey @Stephen
>>
>> I am able to ping my management from ssvm. Also wget to internal
>> server works fine, it took some time to establish connection
>> initially.
>>
>> I don't have any ssvm-check.sh file. I forgot to mention it on this
>> thread.
>>
>> Outputs:
>>
>> root@s-1-VM:~# ip a s
>> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> 2: eth0:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
>> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
>> 3: eth1:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
>> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
>> 4: eth2:  mtu 1500 qdisc pfifo_fast
>> state UP qlen 1000
>> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
>> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
>>
>>
>> root@s-1-VM:~# ip r s
>> default via 10.1.0.2 dev eth2
>> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
>> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
>> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
>>
>> Yes, my storage and management are the same.
>>
>> root@s-1-VM:~# route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags  Metric RefUse
>> Iface
>> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
>> 00eth2
>> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>>   0eth1
>> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>>   0eth2
>> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
>>  0eth0
>>
>>
>>
>>
>>
>>
>>
>> On 4/4/18, Stephan Seitz  wrote:
>>> Hu!
>>>
>>> I'ld recommend to log in to your ssvm and check if everything is able
>>> to connect.
>>>
>>> I second dag's suggestion to double check your network setup.
>>>
>>> Inside your ssvm I'ld run
>>>
>>> /usr/local/cloud/systemvm/ssvm-check.sh
>>>
>>> also
>>>
>>> ip a s
>>> ip r s
>>>
>>>
>>> As an educated guess: did you setup your storage-network to the same
>>> cidr as your management-network?
>>>
>>> if yes, maybe the default route inside your ssvm is setup wrong (on the
>>> wrong NIC or errenously set up twice on two NICs)
>>>
>>>
>>> cheers,
>>>
>>> - Stephan
>>>
>>>
>>>
>>>
>>> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
 @Dag

 By legacy I meant one way ssl. I have set ca strictness for client as
 false.

 I am using 1 nic common for all the network, that is one bridge
 serving
 both public and private network.

 I am setting up a basic zone so I set my management within ip range
 of 10
 and guest within a range of 100, and my statement vms get ip assigned
 within those ranges successfully.

 I used these similar configuration with ACL 4.6 and was able to run
 vm's
 successfully.

 Regards
 Swastik

 On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
 wrote:

 >
 > Swastik,
 >
 > Your issue is most likely with your network configuration rather
 > than
 > anything to do with firewalls or system VM templates.
 >
 > First of all – what do you mean by legacy mode? Are you referring
 > to
 > advanced or basic zone?
 >
 > Secondly – can you tell us how you have configured your networking?
 >
 > - How many NICs you are using and how have you configured them
 > - What management vs public IP ranges you are using
 > - How you have mapped your networking in CloudStack against the
 > underlying
 > hardware NICs
 > - Can you also check what your “host” global setting is set to
 >
 > Regards,
 > Dag Sonstebo
 > Cloud Architect
 > ShapeBlue
 >
 > On 04/04/2018, 09:07, "Swastik Mittal" 
 > wrote:
 >
 > @jagdish
 >
 > Yes I was using the same link.
 >
 >
 > dag.sonst...@shapeblue.com
 > www.shapeblue.com
 > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
 > 

Re: systemvm

2018-04-04 Thread Swastik Mittal
@Stephen

The request to the internal server mentioned in the global sec.storage.. settings,
after registering the ISO successfully, gets stuck on the HEAD request. As you
mentioned, there is an issue in the route path from the SSVM. I am not able to
figure out how to find it.
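
(To see which NIC and route the SSVM actually uses toward that server, a
quick check from inside the SSVM - 10.1.0.15 stands in for the internal
server address, and the URL is a placeholder:)

  ip route get 10.1.0.15
  # reproduce the stuck HEAD request by hand, printing response headers:
  wget -S --spider http://<internal-server>/<path>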

regards
Swastik

On 4/4/18, Swastik Mittal  wrote:
> Hey @Stephen
>
> I am able to ping my management from ssvm. Also wget to internal
> server works fine, it took some time to establish connection
> initially.
>
> I don't have any ssvm-check.sh file. I forgot to mention it on this thread.
>
> Outputs:
>
> root@s-1-VM:~# ip a s
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
> inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
> inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
> 4: eth2:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
> inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
>
>
> root@s-1-VM:~# ip r s
> default via 10.1.0.2 dev eth2
> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
>
> Yes, my storage and management are the same.
>
> root@s-1-VM:~# route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags  Metric RefUse
> Iface
> 0.0.0.0   10.1.0.2   0.0.0.0UG  0
> 00eth2
> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>   0eth1
> 10.1.0.0  0.0.0.0   255.255.255.0U   0  0
>   0eth2
> 169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
>  0eth0
>
>
>
>
>
>
>
> On 4/4/18, Stephan Seitz  wrote:
>> Hu!
>>
>> I'ld recommend to log in to your ssvm and check if everything is able
>> to connect.
>>
>> I second dag's suggestion to double check your network setup.
>>
>> Inside your ssvm I'ld run
>>
>> /usr/local/cloud/systemvm/ssvm-check.sh
>>
>> also
>>
>> ip a s
>> ip r s
>>
>>
>> As an educated guess: did you setup your storage-network to the same
>> cidr as your management-network?
>>
>> if yes, maybe the default route inside your ssvm is setup wrong (on the
>> wrong NIC or errenously set up twice on two NICs)
>>
>>
>> cheers,
>>
>> - Stephan
>>
>>
>>
>>
>> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
>>> @Dag
>>>
>>> By legacy I meant one way ssl. I have set ca strictness for client as
>>> false.
>>>
>>> I am using 1 nic common for all the network, that is one bridge
>>> serving
>>> both public and private network.
>>>
>>> I am setting up a basic zone so I set my management within ip range
>>> of 10
>>> and guest within a range of 100, and my statement vms get ip assigned
>>> within those ranges successfully.
>>>
>>> I used these similar configuration with ACL 4.6 and was able to run
>>> vm's
>>> successfully.
>>>
>>> Regards
>>> Swastik
>>>
>>> On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
>>> wrote:
>>>
>>> >
>>> > Swastik,
>>> >
>>> > Your issue is most likely with your network configuration rather
>>> > than
>>> > anything to do with firewalls or system VM templates.
>>> >
>>> > First of all – what do you mean by legacy mode? Are you referring
>>> > to
>>> > advanced or basic zone?
>>> >
>>> > Secondly – can you tell us how you have configured your networking?
>>> >
>>> > - How many NICs you are using and how have you configured them
>>> > - What management vs public IP ranges you are using
>>> > - How you have mapped your networking in CloudStack against the
>>> > underlying
>>> > hardware NICs
>>> > - Can you also check what your “host” global setting is set to
>>> >
>>> > Regards,
>>> > Dag Sonstebo
>>> > Cloud Architect
>>> > ShapeBlue
>>> >
>>> > On 04/04/2018, 09:07, "Swastik Mittal" 
>>> > wrote:
>>> >
>>> > @jagdish
>>> >
>>> > Yes I was using the same link.
>>> >
>>> >
>>> > dag.sonst...@shapeblue.com
>>> > www.shapeblue.com
>>> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>> > @shapeblue
>>> >
>>> >
>>> >
>>> > On 4 Apr 2018 1:07 p.m., "Jagdish Patil" >> > >
>>> > wrote:
>>> >
>>> > > Hey Swastik,
>>> > >
>>> > > download.cloudstack.org link doesn't look like an issue, but
>>> > which
>>> > version
>>> > > and which hypervisor are you using?
>>> > >
>>> > > For KVM, download this:
>>> > > 

Re: systemvm

2018-04-04 Thread Dag Sonstebo
Swastik,

What is your “host” global setting set to?

Also – the check script should be in /usr/local/cloud/systemvm/ssvm-check.sh – 
if this is missing you have other issues – possibly with the systemvm.iso.
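
(A quick way to check both ends - assuming a KVM host and the usual default
paths:)

  # on the KVM host: the ISO the system VMs are patched from
  ls -l /usr/share/cloudstack-common/vms/systemvm.iso
  # inside the SSVM: the unpacked payload the check script ships in
  ls /usr/local/cloud/systemvm/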

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 04/04/2018, 11:07, "Swastik Mittal"  wrote:

Hey @Stephen

I am able to ping my management from ssvm. Also wget to internal
server works fine, it took some time to establish connection
initially.

I don't have any ssvm-check.sh file. I forgot to mention it on this thread.

Outputs:

root@s-1-VM:~# ip a s
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
2: eth0:  mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
3: eth1:  mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
4: eth2:  mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2


root@s-1-VM:~# ip r s
default via 10.1.0.2 dev eth2
10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42

Yes, my storage and management are the same.

root@s-1-VM:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags  Metric RefUse 
Iface
0.0.0.0   10.1.0.2   0.0.0.0UG  0
00eth2
10.1.0.0  0.0.0.0   255.255.255.0U   0  0
  0eth1
10.1.0.0  0.0.0.0   255.255.255.0U   0  0
  0eth2
169.254.0.0 0.0.0.0   255.255.0.0   U   0  0
 0eth0







On 4/4/18, Stephan Seitz  wrote:
> Hu!
>
> I'ld recommend to log in to your ssvm and check if everything is able
> to connect.
>
> I second dag's suggestion to double check your network setup.
>
> Inside your ssvm I'ld run
>
> /usr/local/cloud/systemvm/ssvm-check.sh
>
> also
>
> ip a s
> ip r s
>
>
> As an educated guess: did you setup your storage-network to the same
> cidr as your management-network?
>
> if yes, maybe the default route inside your ssvm is setup wrong (on the
> wrong NIC or errenously set up twice on two NICs)
>
>
> cheers,
>
> - Stephan
>
>
>
>
> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
>> @Dag
>>
>> By legacy I meant one way ssl. I have set ca strictness for client as
>> false.
>>
>> I am using 1 nic common for all the network, that is one bridge
>> serving
>> both public and private network.
>>
>> I am setting up a basic zone so I set my management within ip range
>> of 10
>> and guest within a range of 100, and my statement vms get ip assigned
>> within those ranges successfully.
>>
>> I used these similar configuration with ACL 4.6 and was able to run
>> vm's
>> successfully.
>>
>> Regards
>> Swastik
>>
>> On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
>> wrote:
>>
>> >
>> > Swastik,
>> >
>> > Your issue is most likely with your network configuration rather
>> > than
>> > anything to do with firewalls or system VM templates.
>> >
>> > First of all – what do you mean by legacy mode? Are you referring
>> > to
>> > advanced or basic zone?
>> >
>> > Secondly – can you tell us how you have configured your networking?
>> >
>> > - How many NICs you are using and how have you configured them
>> > - What management vs public IP ranges you are using
>> > - How you have mapped your networking in CloudStack against the
>> > underlying
>> > hardware NICs
>> > - Can you also check what your “host” global setting is set to
>> >
>> > Regards,
>> > Dag Sonstebo
>> > Cloud Architect
>> > ShapeBlue
>> >
>> > On 04/04/2018, 09:07, "Swastik Mittal" 
>> > wrote:
>> >
>> > @jagdish
>> >
>> > Yes I was using the same link.
>> >
>> >
>> > dag.sonst...@shapeblue.com
>> > www.shapeblue.com
>> > 53 Chandos Place, Covent Garden, London  WC2N 

Re: systemvm

2018-04-04 Thread Swastik Mittal
Hey @Stephen

I am able to ping my management server from the SSVM. Also, wget to the
internal server works fine; it took some time to establish the connection
initially.

I don't have any ssvm-check.sh file. I forgot to mention that on this thread.

Outputs:

root@s-1-VM:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
    inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2


root@s-1-VM:~# ip r s
default via 10.1.0.2 dev eth2
10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42

Yes, my storage and management are the same.

root@s-1-VM:~# route -n
Kernel IP routing table
Destination   Gateway    Genmask        Flags  Metric  Ref  Use  Iface
0.0.0.0       10.1.0.2   0.0.0.0        UG     0       0    0    eth2
10.1.0.0      0.0.0.0    255.255.255.0  U      0       0    0    eth1
10.1.0.0      0.0.0.0    255.255.255.0  U      0       0    0    eth2
169.254.0.0   0.0.0.0    255.255.0.0    U      0       0    0    eth0







On 4/4/18, Stephan Seitz  wrote:
> Hu!
>
> I'ld recommend to log in to your ssvm and check if everything is able
> to connect.
>
> I second dag's suggestion to double check your network setup.
>
> Inside your ssvm I'ld run
>
> /usr/local/cloud/systemvm/ssvm-check.sh
>
> also
>
> ip a s
> ip r s
>
>
> As an educated guess: did you setup your storage-network to the same
> cidr as your management-network?
>
> if yes, maybe the default route inside your ssvm is setup wrong (on the
> wrong NIC or errenously set up twice on two NICs)
>
>
> cheers,
>
> - Stephan
>
>
>
>
> Am Mittwoch, den 04.04.2018, 13:53 +0530 schrieb Swastik Mittal:
>> @Dag
>>
>> By legacy I meant one way ssl. I have set ca strictness for client as
>> false.
>>
>> I am using 1 nic common for all the network, that is one bridge
>> serving
>> both public and private network.
>>
>> I am setting up a basic zone so I set my management within ip range
>> of 10
>> and guest within a range of 100, and my statement vms get ip assigned
>> within those ranges successfully.
>>
>> I used these similar configuration with ACL 4.6 and was able to run
>> vm's
>> successfully.
>>
>> Regards
>> Swastik
>>
>> On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
>> wrote:
>>
>> >
>> > Swastik,
>> >
>> > Your issue is most likely with your network configuration rather
>> > than
>> > anything to do with firewalls or system VM templates.
>> >
>> > First of all – what do you mean by legacy mode? Are you referring
>> > to
>> > advanced or basic zone?
>> >
>> > Secondly – can you tell us how you have configured your networking?
>> >
>> > - How many NICs you are using and how have you configured them
>> > - What management vs public IP ranges you are using
>> > - How you have mapped your networking in CloudStack against the
>> > underlying
>> > hardware NICs
>> > - Can you also check what your “host” global setting is set to
>> >
>> > Regards,
>> > Dag Sonstebo
>> > Cloud Architect
>> > ShapeBlue
>> >
>> > On 04/04/2018, 09:07, "Swastik Mittal" 
>> > wrote:
>> >
>> > @jagdish
>> >
>> > Yes I was using the same link.
>> >
>> >
>> > dag.sonst...@shapeblue.com
>> > www.shapeblue.com
>> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > @shapeblue
>> >
>> >
>> >
>> > On 4 Apr 2018 1:07 p.m., "Jagdish Patil" > > >
>> > wrote:
>> >
>> > > Hey Swastik,
>> > >
>> > > download.cloudstack.org link doesn't look like an issue, but
>> > which
>> > version
>> > > and which hypervisor are you using?
>> > >
>> > > For KVM, download this:
>> > > http://download.cloudstack.org/systemvm/4.11/systemvmtemplat
>> > e-4.11.0-kvm.
>> > > qcow2.bz2
>> > >
>> > > Regards,
>> > > Jagdish Patil
>> > >
>> > > On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal <
>> > mittal.swas...@gmail.com>
>> > > wrote:
>> > >
>> > > > Hey @jagdish
>> > > >
>> > > > I was using download.cloudstack.org to download systemVM.
>> > Is
>> > there any
>> > > > bug within the template uploaded here?
>> > > >
>> > > > @Soundar
>> > > >
>> > > > I did disable 

Re: Upgrade CloudStack from 4.9.2.0 to 4.11.0

2018-04-04 Thread Stephan Seitz
Hi!

We're currently using XenServer instead of VMware, so I don't know
whether you really need to build your own packages. AFAIK, ShapeBlue's
public repository has been built with noredist.

Here's a short list of caveats (sorry, we didn't report everything to
the bug tracker yet):

* There's a more precise dashboard (XX.XX% instead of XX%)
-> Nice, but it only works with the locale set to EN or C or anything
with decimal points instead of commas :) ... consequently, the default
language of the GUI will also be selected to match your locale.

-> LDAP authentication doesn't work. You need to apply
https://github.com/apache/cloudstack/pull/2517 to get this working.

-> Adding a NIC to a running VM (only tested with an Advanced Zone,
XenServer, and a Shared Network to add) fails with a "duplicate
MAC-address" error. See my post on the ML yesterday.

-> cloudstack-usage doesn't start, since (at least with the Ubuntu deb
packages) the update doesn't clean old libs from
/usr/share/cloudstack-usage/libs. For us, cleanup and reinstall fixed
that (see the sketch after this list).

-> The SSVM's Java keystore doesn't contain the Let's Encrypt root CA
(maybe others are also missing), so don't expect working downloads from
cdimage.debian.org via https :)

-> A few nasty popups occur (but can be ignored) in the GUI, e.g. when
selecting a project and viewing networks.

-> A minor documentation bug in the upgrade document: the apt-get.eu
repository doesn't contain 4.11 right now; download.cloudstack.org
does.
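
(For the cloudstack-usage item above, the cleanup amounted to roughly
this - paths as in the message; exact package steps may differ on your
setup:)

  rm -rf /usr/share/cloudstack-usage/libs
  apt-get install --reinstall cloudstack-usage
  systemctl restart cloudstack-usage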


To us, none of these problems was a stopper, but your mileage may vary.

cheers,

- Stephan


On Wednesday, 2018-04-04 at 11:08 +0200, Marc Poll Garcia wrote:
> Hello,
> 
> My current infrastructure is Apache Cloudstack 4.9.2 with VMware
> hosts and
> the management server on CentOS.
> 
> 
> I'm planning to perform an upgrade from the current 4.9.2 version to
> the latest one.
> 
> I found this tutorial on the CloudStack website:
> 
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html
> 
> But I don't know whether any of you have already done this upgrade.
> I was wondering if anyone had any issues during the process.
> 
> Also, could someone send more info, another guide to follow, or best
> practices?
> 
> We've checked it out and found that we need to compile our own
> CloudStack software because we're using VMware hosts. Is that true?
> Any suggestions?
> 
> Thanks in advance.
> 
> Kind regards.
> 
> 
-- 

Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-44
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht
Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: systemvm

2018-04-04 Thread Stephan Seitz
Hi!

I'd recommend logging in to your SSVM and checking whether everything
is able to connect.

I second Dag's suggestion to double-check your network setup.

Inside your SSVM I'd run

/usr/local/cloud/systemvm/ssvm-check.sh

also

ip a s
ip r s
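
In case it helps, this is how to get into the SSVM in the first place:
on KVM you can usually SSH in from the host it runs on via the SSVM's
link-local address (the 169.254.x.x below is a placeholder for whatever
your host/console view shows):

# from the KVM host the SSVM runs on
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x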


As an educated guess: did you set up your storage network with the same
CIDR as your management network?

If yes, maybe the default route inside your SSVM is set up wrong (on
the wrong NIC, or erroneously set up twice on two NICs).
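
To check for that, something like this inside the SSVM (the interface
name below is illustrative only):

# there should be exactly one default route, on the public-facing NIC
ip route show default
# if a stray duplicate sits on the wrong NIC, remove it, e.g.:
# ip route del default dev eth1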


cheers,

- Stephan




On Wednesday, 2018-04-04 at 13:53 +0530, Swastik Mittal wrote:
> @Dag
> 
> By legacy I meant one-way SSL. I have set CA strictness for the
> client to false.
> 
> I am using one NIC for all the networks, that is, one bridge serving
> both the public and private networks.
> 
> I am setting up a basic zone, so I set my management IPs within a
> range of 10 and guests within a range of 100, and my system VMs get
> IPs assigned within those ranges successfully.
> 
> I used a similar configuration with ACS 4.6 and was able to run VMs
> successfully.
> 
> Regards
> Swastik
> 
> On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" 
> wrote:
> 
> > 
> > Swastik,
> > 
> > Your issue is most likely with your network configuration rather
> > than
> > anything to do with firewalls or system VM templates.
> > 
> > First of all – what do you mean by legacy mode? Are you referring
> > to
> > advanced or basic zone?
> > 
> > Secondly – can you tell us how you have configured your networking?
> > 
> > - How many NICs you are using and how have you configured them
> > - What management vs public IP ranges you are using
> > - How you have mapped your networking in CloudStack against the
> > underlying
> > hardware NICs
> > - Can you also check what your “host” global setting is set to
> > 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 04/04/2018, 09:07, "Swastik Mittal" 
> > wrote:
> > 
> > @jagdish
> > 
> > Yes I was using the same link.
> > 
> > 
> > 
> > On 4 Apr 2018 1:07 p.m., "Jagdish Patil" wrote:
> > 
> > > Hey Swastik,
> > >
> > > download.cloudstack.org link doesn't look like an issue, but
> > > which version and which hypervisor are you using?
> > >
> > > For KVM, download this:
> > > http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
> > >
> > > Regards,
> > > Jagdish Patil
> > >
> > > On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal <
> > mittal.swas...@gmail.com>
> > > wrote:
> > >
> > > > Hey @jagdish
> > > >
> > > > I was using download.cloudstack.org to download systemVM. Is
> > > > there any bug within the template uploaded here?
> > > >
> > > > @Soundar
> > > >
> > > > I did disable firewall services but it didn't work. I'll check
> > > > it again though.
> > > >
> > > > On 4/4/18, soundar rajan  wrote:
> > > > > Disable the firewalld service on the host and check; you
> > > > > should be able to access it using the console window.
> > > > >
> > > > > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> > > > >
> > > > >> Hey,
> > > > >>
> > > > >> I am installing ACS 4.11 (legacy mode), with management and
> > > > >> host on the same server and out-of-band management disabled.
> > > > >> My host is enabled and up and the SSVM is running
> > > > >> successfully, though the agent state column shows only '-'.
> > > > >>
> > > > >> The CPVM is also running successfully, but when I open the
> > > > >> console window I get "unable to connect". Also, I didn't find
> > > > >> the check file in the SSVM (accessed through a terminal using
> > > > >> SSH).
> > > > >>
> > > > >> From the SSVM I can SSH into the management server, but a
> > > > >> wget to the management host isn't working (it gets stuck at
> > > > >> connecting and never connects).
> > > > >>
> > > > >> The agent log does not show any error, it just says "trying
> > > > >> to fetch storage pool from libvirt" all the time. I checked
> > > > >> my storage pool through "virsh pool-list" and it shows the
> > > > >> storage pool mentioned in local storage under
> > > > >> agent.properties.
> > > > >>
> > > > >> Any ideas?
> > > > >>
> > > > >> Regards
> > > > >> Swastik
> > > > >>
> > > > >
> > > >
> > >
> > 
> > 
> > 
-- 

Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-44
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht
Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

Upgrade CloudStack from 4.9.2.0 to 4.11.0

2018-04-04 Thread Marc Poll Garcia
Hello,

My current infrastructure is Apache Cloudstack 4.9.2 with VMware hosts and
the management server on CentOS.


I'm planning to perform an upgrade from the current 4.9.2 version to
the latest one.

I found this tutorial on the CloudStack website:

http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html

But I don't know whether any of you have already done this upgrade.
I was wondering if anyone had any issues during the process.

Also, could someone send more info, another guide to follow, or best
practices?

We've checked it out and found that we need to compile our own
CloudStack software because we're using VMware hosts. Is that true?
Any suggestions?

Thanks in advance.

Kind regards.


-- 
Marc Poll Garcia
Technology Infrastructure . Àrea de Serveis TIC
Phone: 93.405.43.57


--
This email may contain confidential or legally protected information
and is exclusively addressed to the named recipient. If you are not the
final addressee or the person responsible for receiving it, you are not
authorised to read, retain, modify, distribute or copy it, nor to
disclose its contents. If you have received this email in error, please
inform the sender and delete the message and any attached material from
your system.
Thank you for your cooperation.
--

*** Please, don't print me. I want to remain digital ***
--


Re: systemvm

2018-04-04 Thread Swastik Mittal
@Dag

By legacy I meant one-way SSL. I have set CA strictness for the client
to false.

I am using one NIC for all the networks, that is, one bridge serving
both the public and private networks.

I am setting up a basic zone, so I set my management IPs within a range
of 10 and guests within a range of 100, and my system VMs get IPs
assigned within those ranges successfully.

I used a similar configuration with ACS 4.6 and was able to run VMs
successfully.

Regards
Swastik

On 4 Apr 2018 1:44 p.m., "Dag Sonstebo"  wrote:

> Swastik,
>
> Your issue is most likely with your network configuration rather than
> anything to do with firewalls or system VM templates.
>
> First of all – what do you mean by legacy mode? Are you referring to
> advanced or basic zone?
>
> Secondly – can you tell us how you have configured your networking?
>
> - How many NICs you are using and how have you configured them
> - What management vs public IP ranges you are using
> - How you have mapped your networking in CloudStack against the underlying
> hardware NICs
> - Can you also check what your “host” global setting is set to
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 04/04/2018, 09:07, "Swastik Mittal"  wrote:
>
> @jagdish
>
> Yes I was using the same link.
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> On 4 Apr 2018 1:07 p.m., "Jagdish Patil" 
> wrote:
>
> > Hey Swastik,
> >
> > download.cloudstack.org link doesn't look like an issue, but which
> > version and which hypervisor are you using?
> >
> > For KVM, download this:
> > > http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
> >
> > Regards,
> > Jagdish Patil
> >
> > On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal <
> mittal.swas...@gmail.com>
> > wrote:
> >
> > > Hey @jagdish
> > >
> > > I was using download.cloudstack.org to download systemVM. Is
> > > there any bug within the template uploaded here?
> > >
> > > @Soundar
> > >
> > > I did disable firewall services but it didn't work. I'll check it
> > > again though.
> > >
> > > On 4/4/18, soundar rajan  wrote:
> > > > Disable the firewalld service on the host and check; you should
> > > > be able to access it using the console window.
> > > >
> > > > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> > > >
> > > >> Hey,
> > > >>
> > > >> I am installing ACS 4.11 (legacy mode), with management and
> > > >> host on the same server and out-of-band management disabled.
> > > >> My host is enabled and up and the SSVM is running successfully,
> > > >> though the agent state column shows only '-'.
> > > >>
> > > >> The CPVM is also running successfully, but when I open the
> > > >> console window I get "unable to connect". Also, I didn't find
> > > >> the check file in the SSVM (accessed through a terminal using
> > > >> SSH).
> > > >>
> > > >> From the SSVM I can SSH into the management server, but a wget
> > > >> to the management host isn't working (it gets stuck at
> > > >> connecting and never connects).
> > > >>
> > > >> The agent log does not show any error, it just says "trying to
> > > >> fetch storage pool from libvirt" all the time. I checked my
> > > >> storage pool through "virsh pool-list" and it shows the storage
> > > >> pool mentioned in local storage under agent.properties.
> > > >>
> > > >> Any ideas?
> > > >>
> > > >> Regards
> > > >> Swastik
> > > >
> > >
> >
>
>
>


Re: systemvm

2018-04-04 Thread Dag Sonstebo
Swastik,

Your issue is most likely with your network configuration rather than anything 
to do with firewalls or system VM templates. 

First of all – what do you mean by legacy mode? Are you referring to advanced 
or basic zone?

Secondly – can you tell us how you have configured your networking?

- How many NICs you are using and how have you configured them
- What management vs public IP ranges you are using
- How you have mapped your networking in CloudStack against the underlying 
hardware NICs
- Can you also check what your “host” global setting is set to
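
For that last item, a quick way to check it (this sketch assumes you
have cloudmonkey configured against your management server):

# shows the management server address handed out to agents/system VMs
cloudmonkey list configurations name=host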

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 04/04/2018, 09:07, "Swastik Mittal"  wrote:

@jagdish

Yes I was using the same link.


dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

On 4 Apr 2018 1:07 p.m., "Jagdish Patil"  wrote:

> Hey Swastik,
>
> download.cloudstack.org link doesn't look like an issue, but which version
> and which hypervisor are you using?
>
> For KVM, download this:
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
>
> Regards,
> Jagdish Patil
>
> On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal 
> wrote:
>
> > Hey @jagdish
> >
> > I was using download.cloudstack.org to download systemVM. Is there any
> > bug within the template uploaded here?
> >
> > @Soundar
> >
> > I did disable firewall services but it didn't work. I'll check it
> > again though.
> >
> > On 4/4/18, soundar rajan  wrote:
> > > Disable the firewalld service on the host and check; you should
> > > be able to access it using the console window.
> > >
> > > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> > >
> > >> Hey,
> > >>
> > >> I am installing ACS 4.11 (legacy mode), with management and host
> > >> on the same server and out-of-band management disabled. My host
> > >> is enabled and up and the SSVM is running successfully, though
> > >> the agent state column shows only '-'.
> > >>
> > >> The CPVM is also running successfully, but when I open the
> > >> console window I get "unable to connect". Also, I didn't find the
> > >> check file in the SSVM (accessed through a terminal using SSH).
> > >>
> > >> From the SSVM I can SSH into the management server, but a wget to
> > >> the management host isn't working (it gets stuck at connecting
> > >> and never connects).
> > >>
> > >> The agent log does not show any error, it just says "trying to
> > >> fetch storage pool from libvirt" all the time. I checked my
> > >> storage pool through "virsh pool-list" and it shows the storage
> > >> pool mentioned in local storage under agent.properties.
> > >>
> > >> Any ideas?
> > >>
> > >> Regards
> > >> Swastik
> > >>
> > >
> >
>




Re: systemvm

2018-04-04 Thread Swastik Mittal
@jagdish

Yes I was using the same link.

On 4 Apr 2018 1:07 p.m., "Jagdish Patil"  wrote:

> Hey Swastik,
>
> download.cloudstack.org link doesn't look like an issue, but which version
> and which hypervisor are you using?
>
> For KVM, download this:
> http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
>
> Regards,
> Jagdish Patil
>
> On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal 
> wrote:
>
> > Hey @jagdish
> >
> > I was using download.cloudstack.org to download systemVM. Is there any
> > bug within the template uploaded here?
> >
> > @Soundar
> >
> > I did disable firewall services but it didn't work. I'll check it
> > again though.
> >
> > On 4/4/18, soundar rajan  wrote:
> > > Disable the firewalld service on the host and check; you should
> > > be able to access it using the console window.
> > >
> > > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> > >
> > >> Hey,
> > >>
> > >> I am installing ACS 4.11 (legacy mode), with management and host
> > >> on the same server and out-of-band management disabled. My host
> > >> is enabled and up and the SSVM is running successfully, though
> > >> the agent state column shows only '-'.
> > >>
> > >> The CPVM is also running successfully, but when I open the
> > >> console window I get "unable to connect". Also, I didn't find the
> > >> check file in the SSVM (accessed through a terminal using SSH).
> > >>
> > >> From the SSVM I can SSH into the management server, but a wget to
> > >> the management host isn't working (it gets stuck at connecting
> > >> and never connects).
> > >>
> > >> The agent log does not show any error, it just says "trying to
> > >> fetch storage pool from libvirt" all the time. I checked my
> > >> storage pool through "virsh pool-list" and it shows the storage
> > >> pool mentioned in local storage under agent.properties.
> > >>
> > >> Any ideas?
> > >>
> > >> Regards
> > >> Swastik
> > >>
> > >
> >
>


Re: systemvm

2018-04-04 Thread Jagdish Patil
Hey Swastik,

download.cloudstack.org link doesn't look like an issue, but which version
and which hypervisor are you using?

For KVM, download this:
http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2

Regards,
Jagdish Patil

On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal 
wrote:

> Hey @jagdish
>
> I was using download.cloudstack.org to download systemVM. Is there any
> bug within the template uploaded here?
>
> @Soundar
>
> I did disable firewall services but it didn't work. I'll check it again though.
>
> On 4/4/18, soundar rajan  wrote:
> > Disable the firewalld service on the host and check; you should be
> > able to access it using the console window.
> >
> > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> >
> >> Hey,
> >>
> >> I am installing ACS 4.11 (legacy mode), with management and host on
> >> the same server and out-of-band management disabled. My host is
> >> enabled and up and the SSVM is running successfully, though the
> >> agent state column shows only '-'.
> >>
> >> The CPVM is also running successfully, but when I open the console
> >> window I get "unable to connect". Also, I didn't find the check
> >> file in the SSVM (accessed through a terminal using SSH).
> >>
> >> From the SSVM I can SSH into the management server, but a wget to
> >> the management host isn't working (it gets stuck at connecting and
> >> never connects).
> >>
> >> The agent log does not show any error, it just says "trying to
> >> fetch storage pool from libvirt" all the time. I checked my storage
> >> pool through "virsh pool-list" and it shows the storage pool
> >> mentioned in local storage under agent.properties.
> >>
> >> Any ideas?
> >>
> >> Regards
> >> Swastik
> >>
> >
>


Re: systemvm

2018-04-04 Thread Swastik Mittal
Hey @jagdish

I was using download.cloudstack.org to download systemVM. Is there any
bug within the template uploaded here?

@Soundar

I did disable firewall services but it didn't work. I'll check it again though.

On 4/4/18, soundar rajan  wrote:
> Disable the firewalld service on the host and check; you should be
> able to access it using the console window.
>
> On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal 
> wrote:
>
>> Hey,
>>
>> I am installing ACS 4.11 (legacy mode), with management and host on
>> the same server and out-of-band management disabled. My host is
>> enabled and up and the SSVM is running successfully, though the agent
>> state column shows only '-'.
>>
>> The CPVM is also running successfully, but when I open the console
>> window I get "unable to connect". Also, I didn't find the check file
>> in the SSVM (accessed through a terminal using SSH).
>>
>> From the SSVM I can SSH into the management server, but a wget to the
>> management host isn't working (it gets stuck at connecting and never
>> connects).
>>
>> The agent log does not show any error, it just says "trying to fetch
>> storage pool from libvirt" all the time. I checked my storage pool
>> through "virsh pool-list" and it shows the storage pool mentioned in
>> local storage under agent.properties.
>>
>> Any ideas?
>>
>> Regards
>> Swastik
>>
>


Re: systemvm

2018-04-04 Thread soundar rajan
Disable the firewalld service on the host and check; you should be able
to access it using the console window.
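
A minimal sketch of that test on a CentOS/RHEL-style host (assuming
firewalld is the active firewall; adapt if you use ufw or plain
iptables):

# stop firewalld for the test and keep it off across reboots
systemctl stop firewalld
systemctl disable firewalld
# confirm it is really off
systemctl status firewalld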

On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal 
wrote:

> Hey,
>
> I am installing ACS 4.11 (legacy mode), with management and host on
> the same server and out-of-band management disabled. My host is
> enabled and up and the SSVM is running successfully, though the agent
> state column shows only '-'.
>
> The CPVM is also running successfully, but when I open the console
> window I get "unable to connect". Also, I didn't find the check file
> in the SSVM (accessed through a terminal using SSH).
>
> From the SSVM I can SSH into the management server, but a wget to the
> management host isn't working (it gets stuck at connecting and never
> connects).
>
> The agent log does not show any error, it just says "trying to fetch
> storage pool from libvirt" all the time. I checked my storage pool
> through "virsh pool-list" and it shows the storage pool mentioned in
> local storage under agent.properties.
>
> Any ideas?
>
> Regards
> Swastik
>


Re: systemvm

2018-04-04 Thread Jagdish Patil
Hey Swastik,

Which SSVM are you using?

Install SSVM from this link:
http://cloudstack.apt-get.eu/systemvm/4.11/

Regards,
Jagdish Patil

On Wed, Apr 4, 2018 at 10:07 AM Swastik Mittal 
wrote:

> Hey,
>
> I am installing ACS 4.11 (legacy mode), with management and host on
> the same server and out-of-band management disabled. My host is
> enabled and up and the SSVM is running successfully, though the agent
> state column shows only '-'.
>
> The CPVM is also running successfully, but when I open the console
> window I get "unable to connect". Also, I didn't find the check file
> in the SSVM (accessed through a terminal using SSH).
>
> From the SSVM I can SSH into the management server, but a wget to the
> management host isn't working (it gets stuck at connecting and never
> connects).
>
> The agent log does not show any error, it just says "trying to fetch
> storage pool from libvirt" all the time. I checked my storage pool
> through "virsh pool-list" and it shows the storage pool mentioned in
> local storage under agent.properties.
>
> Any ideas?
>
> Regards
> Swastik
>