Have you configured a storage network on the same subnet as the management 
network?  You have two interfaces on the same subnet.
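
For example, a quick check from inside the SSVM (just a sketch, using the
subnet shown in your output below):

    ip route show 10.1.0.0/24   # two entries here means two NICs claim the same subnet
    ip route show default       # shows which NIC carries the default route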

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

-----Original Message-----
From: Swastik Mittal <mittal.swas...@gmail.com> 
Sent: 04 April 2018 11:46
To: users@cloudstack.apache.org
Subject: Re: systemvm

@Stephen

"host" in global settings is set to 10.1.0.15 which is the ip address of the 
management server.
Yes, I'll work on getting ssvm-check file.

Thanks
Swastik

On 4/4/18, Swastik Mittal <mittal.swas...@gmail.com> wrote:
> @Stephen
>
> The request to the internal server mentioned in the global sec.storage..
> setting, after registering the ISO successfully, gets stuck on the HEAD
> request. As you mentioned, there is an issue in the route path from the
> SSVM; I'm not able to figure out how to find it.
>
> regards
> Swastik
>
> On 4/4/18, Swastik Mittal <mittal.swas...@gmail.com> wrote:
>> Hey @Stephen
>>
>> I am able to ping my management server from the SSVM. Also, wget to the
>> internal server works fine, though it took some time to establish the
>> connection initially.
>>
>> I don't have any ssvm-check.sh file. I forgot to mention it on this 
>> thread.
>>
>> Outputs:
>>
>> root@s-1-VM:~# ip a s
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>     inet 127.0.0.1/8 scope host lo
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>>     link/ether 0e:00:a9:fe:02:2a brd ff:ff:ff:ff:ff:ff
>>     inet 169.254.2.42/16 brd 169.254.255.255 scope global eth0
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>>     link/ether 1e:00:ce:00:00:0e brd ff:ff:ff:ff:ff:ff
>>     inet 10.1.0.43/24 brd 10.1.0.255 scope global eth1
>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
>> state UP qlen 1000
>>     link/ether 1e:00:a1:00:00:a2 brd ff:ff:ff:ff:ff:ff
>>     inet 10.1.0.191/24 brd 10.1.0.255 scope global eth2
>>
>>
>> root@s-1-VM:~# ip r s
>> default via 10.1.0.2 dev eth2
>> 10.1.0.0/24 dev eth1  proto kernel  scope link  src 10.1.0.43
>> 10.1.0.0/24 dev eth2  proto kernel  scope link  src 10.1.0.191
>> 169.254.0.0/16 dev eth0  proto kernel  scope link  src 169.254.2.42
>>
>> Yes, my storage and management are the same.
>>
>> root@s-1-VM:~# route -n
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 0.0.0.0         10.1.0.2        0.0.0.0         UG    0      0        0 eth2
>> 10.1.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
>> 10.1.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth2
>> 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
>>
>> On 4/4/18, Stephan Seitz <s.se...@heinlein-support.de> wrote:
>>> Hi!
>>>
>>> I'd recommend logging in to your SSVM and checking whether everything is
>>> able to connect.
>>>
>>> I second Dag's suggestion to double-check your network setup.
>>>
>>> Inside your SSVM I'd run
>>>
>>> /usr/local/cloud/systemvm/ssvm-check.sh
>>>
>>> also
>>>
>>> ip a s
>>> ip r s
>>>
>>>
>>> As an educated guess: did you set up your storage network with the same
>>> CIDR as your management network?
>>>
>>> If yes, maybe the default route inside your SSVM is set up wrong (on
>>> the wrong NIC, or erroneously set up twice on two NICs).
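>>>
>>> A rough way to verify (only a sketch - substitute your management /
>>> secondary storage IP and the SSVM's own NIC addresses):
>>>
>>>     # which route and NIC the SSVM would actually pick for that host
>>>     ip route get <management-or-secstorage-ip>
>>>     # try the same HTTP fetch pinned to each candidate NIC
>>>     wget -S --spider --bind-address=<eth1-ip> http://<management-or-secstorage-ip>/
>>>     wget -S --spider --bind-address=<eth2-ip> http://<management-or-secstorage-ip>/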
>>>
>>>
>>> cheers,
>>>
>>> - Stephan
>>>
>>>
>>>
>>>
>>>> On Wednesday, 04.04.2018 at 13:53 +0530, Swastik Mittal wrote:
>>>> @Dag
>>>>
>>>> By legacy I meant one-way SSL. I have set CA strictness for the client
>>>> to false.
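>>>> (For reference, the setting I mean - if I recall its name correctly - is
>>>> ca.plugin.root.auth.strictness, e.g. via cloudmonkey:
>>>> update configuration name=ca.plugin.root.auth.strictness value=false)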
>>>>
>>>> I am using one NIC common to all the networks, that is, one bridge
>>>> serving both the public and private networks.
>>>>
>>>> I am setting up a basic zone, so I set my management IPs within a range
>>>> of 10 and my guest IPs within a range of 100, and my system VMs get IPs
>>>> assigned within those ranges successfully.
>>>>
>>>> I used a similar configuration with ACS 4.6 and was able to run
>>>> VMs successfully.
>>>>
>>>> Regards
>>>> Swastik
>>>>
>>>> On 4 Apr 2018 1:44 p.m., "Dag Sonstebo" <dag.sonst...@shapeblue.com> wrote:
>>>>
>>>> >
>>>> > Swastik,
>>>> >
>>>> > Your issue is most likely with your network configuration rather 
>>>> > than anything to do with firewalls or system VM templates.
>>>> >
>>>> > First of all – what do you mean by legacy mode? Are you referring 
>>>> > to advanced or basic zone?
>>>> >
>>>> > Secondly – can you tell us how you have configured your networking?
>>>> >
>>>> > - How many NICs you are using and how have you configured them
>>>> > - What management vs public IP ranges you are using
>>>> > - How you have mapped your networking in CloudStack against the 
>>>> > underlying hardware NICs
>>>> > - Can you also check what your “host” global setting is set to
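>>>> >
>>>> > For the last one, you can look it up under Global Settings in the UI,
>>>> > or (if you happen to have cloudmonkey configured - just a sketch) with:
>>>> >
>>>> >     list configurations name=host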
>>>> >
>>>> > Regards,
>>>> > Dag Sonstebo
>>>> > Cloud Architect
>>>> > ShapeBlue
>>>> >
>>>> > On 04/04/2018, 09:07, "Swastik Mittal" <mittal.swas...@gmail.com>
>>>> > wrote:
>>>> >
>>>> >     @jagdish
>>>> >
>>>> >     Yes I was using the same link.
>>>> >
>>>> > On 4 Apr 2018 1:07 p.m., "Jagdish Patil" <jagdishpatil...@gmail.com> wrote:
>>>> >
>>>> >     > Hey Swastik,
>>>> >     >
>>>> >     > download.cloudstack.org link doesn't look like an issue, but
>>>> >     > which version and which hypervisor are you using?
>>>> >     >
>>>> >     > For KVM, download this:
>>>> >     > http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.0-kvm.qcow2.bz2
>>>> >     >
>>>> >     > Regards,
>>>> >     > Jagdish Patil
>>>> >     >
>>>> >     > On Wed, Apr 4, 2018 at 1:00 PM Swastik Mittal <mittal.swas...@gmail.com> wrote:
>>>> >     >
>>>> >     > > Hey @jagdish
>>>> >     > >
>>>> >     > > I was using download.cloudstack.org to download the system VM
>>>> >     > > template. Is there any bug within the template uploaded here?
>>>> >     > >
>>>> >     > > @Soundar
>>>> >     > >
>>>> >     > > I did disable the firewall services but it didn't work. I'll
>>>> >     > > check it again though.
>>>> >     > >
>>>> >     > > On 4/4/18, soundar rajan <bsoundara...@gmail.com> wrote:
>>>> >     > > > Disable the firewalld service on the host and check; you
>>>> >     > > > should be able to access it using the console window.
>>>> >     > > >
>>>> >     > > > On Wed, Apr 4, 2018 at 10:07 AM, Swastik Mittal <mittal.swas...@gmail.com> wrote:
>>>> >     > > >
>>>> >     > > >> Hey,
>>>> >     > > >>
>>>> >     > > >> I am installing ACS 4.11 (legacy mode), with the management
>>>> >     > > >> server and host on the same machine and out-of-band management
>>>> >     > > >> disabled. My host is enabled and up, and the SSVM is running
>>>> >     > > >> successfully, though the agent state column shows only '-'.
>>>> >     > > >>
>>>> >     > > >> The CPVM is also running successfully, but when I open the
>>>> >     > > >> console window I get "unable to connect". Also, I didn't find
>>>> >     > > >> the check file in the SSVM (accessed through a terminal using
>>>> >     > > >> ssh).
>>>> >     > > >>
>>>> >     > > >> From the SSVM I can ssh into the management server, but a wget
>>>> >     > > >> to the management local host isn't working (it gets stuck at
>>>> >     > > >> connecting and never connects).
>>>> >     > > >>
>>>> >     > > >> The agent log does not show any error; it just mentions
>>>> >     > > >> "trying to fetch storage pool from libvirt" all the time. I
>>>> >     > > >> checked my storage pool through "virsh pool-list" and it shows
>>>> >     > > >> the storage pool mentioned under local storage in
>>>> >     > > >> agent.properties.
>>>> >     > > >>
>>>> >     > > >> Any ideas?
>>>> >     > > >>
>>>> >     > > >> Regards
>>>> >     > > >> Swastik
>>>> >     > > >>
>>>> >     > > >
>>>> >     > >
>>>> >     >
>>>> >
>>>> >
>>>> >
>>> --
>>>
>>> Heinlein Support GmbH
>>> Schwedter Str. 8/9b, 10119 Berlin
>>>
>>> http://www.heinlein-support.de
>>>
>>> Tel: 030 / 405051-44
>>> Fax: 030 / 405051-19
>>>
>>> Mandatory disclosures per §35a GmbHG: HRB 93818 B / Amtsgericht
>>> Berlin-Charlottenburg,
>>> Managing Director: Peer Heinlein -- Registered office: Berlin
>>>
>>>
>>
>
