Re: Secondary storage is not secondary properly

2017-08-14 Thread Asanka Gunasekara
Hi Guys, when a system VM is in the Starting state in the UI, it does not give
any option to delete, stop, start, etc. What if I change the DB entry of the
system VMs to the Running state? Will that give me the option to destroy? If
so, what would be the database relation?
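
For reference, the kind of edit being contemplated looks roughly like the
statement below. The system VM rows live in the cloud.vm_instance table
(state, type and removed columns); take a database backup first, since
hand-editing state can desync CloudStack's state machine:

-- sketch only: verify the affected rows with a SELECT before updating
UPDATE cloud.vm_instance
SET state = 'Running'
WHERE type IN ('SecondaryStorageVm', 'ConsoleProxy')
  AND removed IS NULL;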

Thanks and regards

Asanka

On 15 Aug 2017 6:42 am, "Asanka Gunasekara"  wrote:

> Hi Dag, I deleted both of the system VMs and in CloudStack it says
> starting, but I don't see the VMs being generated; it has now been more
> than 12 hours
>
> Thanks and Regards
>
> Asanka
>
> On 10 August 2017 at 21:47, Asanka Gunasekara  wrote:
>
>> Thank you Dag,
>>
>>
>> On 10 Aug 2017 3:09 am, "Dag Sonstebo" 
>> wrote:
>>
>> Sure, let us know how you get on. The fact that the previous ssvm check
>> showed up with 172.17.101.1 was probably down to the wrong “host” global
>> setting – since the SSVM didn’t know where to contact management I would
>> guess it used a default override.
>>
>> Regards,
>> Dag Sonstebo
>> Cloud Architect
>> ShapeBlue
>>  S: +44 20 3603 0540  | dag.sonst...@shapeblue.com |
>> http://www.shapeblue.com  | Twitter:@ShapeBlue
>> 
>>
>>
>> On 09/08/2017, 17:54, "Asanka Gunasekara"  wrote:
>>
>> Hi Dag, please give me a few days as I am on an implementation visit to a
>> remote site. But below are some of the tests I performed before.
>>
>> 1. Ping from ssvm to NFS is possible
>> 2. Manually mounting NFS to /tmp/secondary is possible without any
>> issue
>>
>> From the previous run of the ssvm check, it is looking for a server IP of
>> 172.17.101.1, whereas the registered NFS share is 172.17.101.253
>>
>> I will run the check again at the first chance I get
>>
>> Thank you and best regards
>>
>> Asanka
>>
>>
>> On 9 Aug 2017 12:55 pm, "Dag Sonstebo" 
>> wrote:
>>
>> OK, can you post the results of the ssvm check again?
>>
>> As suggested previously on this thread – can you try to
>> 1) Ping the NFS server from the SSVM (SSVM check does this as well) –
>> if
>> this doesn’t work then you have a networking issue.
>> 2) Depending on ping - manually mount the secondary NFS share on your
>> SSVM.
>> If this doesn’t work then you need to investigate the logs at the NFS
>> end
>> to see why the NFS handshake fails.
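>>
>> For example, from the SSVM (using the NFS server details quoted elsewhere
>> in this thread):
>>
>> ping 172.17.101.253
>> mkdir -p /tmp/secondary
>> mount -t nfs 172.17.101.253:/share_smb/export/secondary /tmp/secondary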
>>
>> Regards,
>> Dag Sonstebo
>> Cloud Architect
>> ShapeBlue
>>
>> On 08/08/2017, 19:14, "Asanka Gunasekara"  wrote:
>>
>> Hi Dag
>>
>> After changing localhost to the management server IP in the global
>> configuration, I no longer see the management server error. But the NFS
>> error still persists
>>
>> Thanks and Regards
>>
>> Asanka
>>
>>
>> dag.sonst...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>>
>> On 8 August 2017 at 23:33, Asanka Gunasekara  wrote:
>>
>> > Hi Dag, thanks for the reply
>> >
>> > I made the change and the VMs are being rebuilt
>> >
>> > NFS server configuration, taken from the installation guide:
>> >
>> > [root@share ~]# cat /etc/exports
>> > /share_smb/export/secondary *(rw,async,no_root_squash,no_subtree_check)
>> > /share_smb/export/primary *(rw,async,no_root_squash,no_subtree_check)
>> > [root@share ~]#
>> >
>> >
>> > On 8 August 2017 at 17:06, Dag Sonstebo <
>> dag.sonst...@shapeblue.com>
>> > wrote:
>> >
>> >> Hi Asanka,
>> >>
>> >> Can you change your “host” global setting to your management
>> server
>> IP
>> >> (it’s currently set to “localhost”), restart your management
>> service
>> and
>> >> then destroy your SSVM + let this recreate.
>> >>
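>> >> For example, via CloudMonkey (or the Global Settings page in the UI),
>> >> something like:
>> >>
>> >> update configuration name=host value=<management-server-ip>
>> >> service cloudstack-management restart
>> >>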
>> >> Once done run the check again and let us know the outcome.
>> >>
>> >> Can you also show us the configuration of your NFS share –
>> i.e. what
>> >> parameters are set etc.
>> >>
>> >> Regards,
>> >> Dag Sonstebo
>> >> Cloud Architect
>> >> ShapeBlue
>> >>
>> >> On 08/08/2017, 10:28, "Asanka Gunasekara" 
>> wrote:
>> >>
>> >> Hi Guys,
>> >>
>> >> ssvm-check.sh command output
>> >>
>> >> https://snag.gy/bzpE5n.jpg
>> >>
>> >> Details of my nfs share
>> >>
>> >> https://snag.gy/WgJxCY.jpg
>> >>
>> >> Thanks and Best Regards
>> >>
>> >> Asanka
>> >>
>> >>
>> >>
>> >>
>> >> 

Re: Secondary storage is not secondary properly

2017-08-14 Thread Asanka Gunasekara
Hi Dag, I deleted both of the system VMs and in CloudStack it says
starting, but I don't see the VMs being generated; it has now been more
than 12 hours

Thanks and Regards

Asanka

On 10 August 2017 at 21:47, Asanka Gunasekara  wrote:

> Thank you Dag,
>
>
> On 10 Aug 2017 3:09 am, "Dag Sonstebo"  wrote:
>
> Sure, let us know how you get on. The fact that the previous ssvm check
> showed up with 172.17.101.1 was probably down to the wrong “host” global
> setting – since the SSVM didn’t know where to contact management I would
> guess it used a default override.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>  S: +44 20 3603 0540  | dag.sonst...@shapeblue.com |
> http://www.shapeblue.com  | Twitter:@ShapeBlue
> 
>
>
> On 09/08/2017, 17:54, "Asanka Gunasekara"  wrote:
>
> Hi Dag, please give me a few days as I am on an implementation visit to a
> remote site. But below are some of the tests I performed before.
>
> 1. Ping from ssvm to NFS is possible
> 2. Manually mounting NFS to /tmp/secondary is possible without any issue
>
> From the previous run of the ssvm check, it is looking for a server IP of
> 172.17.101.1, whereas the registered NFS share is 172.17.101.253
>
> I will run the check again at the first chance I get
>
> Thank you and best regards
>
> Asanka
>
>
> On 9 Aug 2017 12:55 pm, "Dag Sonstebo" 
> wrote:
>
> OK, can you post the results of the ssvm check again?
>
> As suggested previously on this thread – can you try to
> 1) Ping the NFS server from the SSVM (SSVM check does this as well) –
> if
> this doesn’t work then you have a networking issue.
> 2) Depending on ping - manually mount the secondary NFS share on your
> SSVM.
> If this doesn’t work then you need to investigate the logs at the NFS
> end
> to see why the NFS handshake fails.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 08/08/2017, 19:14, "Asanka Gunasekara"  wrote:
>
> Hi Dag
>
> After changing localhost to the management server IP in the global
> configuration, I no longer see the management server error. But the NFS
> error still persists
>
> Thanks and Regards
>
> Asanka
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> On 8 August 2017 at 23:33, Asanka Gunasekara  wrote:
>
> > Hi Dag, thanks for the reply
> >
> > I made the change and the VMs are being rebuilt
> >
> > NFS server configuration, taken from the installation guide:
> >
> > [root@share ~]# cat /etc/exports
> > /share_smb/export/secondary *(rw,async,no_root_squash,no_subtree_check)
> > /share_smb/export/primary *(rw,async,no_root_squash,no_subtree_check)
> > [root@share ~]#
> >
> >
> > On 8 August 2017 at 17:06, Dag Sonstebo <
> dag.sonst...@shapeblue.com>
> > wrote:
> >
> >> Hi Asanka,
> >>
> >> Can you change your “host” global setting to your management
> server
> IP
> >> (it’s currently set to “localhost”), restart your management
> service
> and
> >> then destroy your SSVM + let this recreate.
> >>
> >> Once done run the check again and let us know the outcome.
> >>
> >> Can you also show us the configuration of your NFS share – i.e.
> what
> >> parameters are set etc.
> >>
> >> Regards,
> >> Dag Sonstebo
> >> Cloud Architect
> >> ShapeBlue
> >>
> >> On 08/08/2017, 10:28, "Asanka Gunasekara" 
> wrote:
> >>
> >> Hi Guys,
> >>
> >> ssvm-check.sh command output
> >>
> >> https://snag.gy/bzpE5n.jpg
> >>
> >> Details of my nfs share
> >>
> >> https://snag.gy/WgJxCY.jpg
> >>
> >> Thanks and Best Regards
> >>
> >> Asanka
> >>
> >>
> >>
> >>
> >> dag.sonst...@shapeblue.com
> >> www.shapeblue.com
> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >> @shapeblue
> >>
> >>
> >>
> >> On 8 August 2017 at 14:48, Asanka Gunasekara 
> wrote:
> >>
> >> > Thanks Makrand
> >> >
> >> > On 8 August 2017 at 14:42, Makrand <
> makrandsa...@gmail.com>
> wrote:
> >> >
> >> >> Asanka,
> >> >>
> >> >> The 

Re: CS VLAN configuration in a Cisco 3560 switch

2017-08-14 Thread Simon Weller
Luis,


So Cisco doesn't use tagged/untagged. You build the VLAN (or VLAN range) and then
apply it to a trunk interface.

The 'native' keyword in the 'switchport trunk native' interface stanza sets
the default untagged VLAN for that particular port.


Try something like this:

vlan 65
 name public
vlan 100-200
 name my-guest-vlans
exit

interface GigabitEthernet1/0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 65
switchport trunk allowed vlan 100-200
exit


Now be really careful with the number of vlans you allocate if you're running 
spanning tree, as spanning-tree will start to have problems with large numbers 
of vlans.


With Cloudstack in advanced mode, we find that running the management network 
as native is often a better design. You can then allocate a vlan for public and 
just tell CloudStack what the vlan is and it will use it. You can then just 
include that vlan in your vlan allowed statement: switchport trunk allowed vlan 
65,100-200
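
Putting that together, a trunk port carrying untagged management plus tagged
public and guest VLANs might look like the below (VLAN 10 for management is
just an assumed example):

interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,65,100-200
exit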


- Si


From: Luis 
Sent: Monday, August 14, 2017 12:42 PM
To: users@cloudstack.apache.org
Subject: CS VLAN configuration in a Cisco 3560 switch

Hi
I have a question: following the manual for advanced networking, I am trying
to configure VLANs on a Cisco 3560, but I am confused. Is this all I need?
Can somebody post a complete example based on their experience?
Thank you.

This is what I have:
untagged VLAN 65 for public traffic
tagged VLAN traffic for ranges 600-1000

for tagged traffic:
interface GigabitEthernet1/0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 100-900
exit


CS VLAN configuration in a Cisco 3560 switch

2017-08-14 Thread Luis
Hi
I have a question: following the manual for advanced networking, I am trying
to configure VLANs on a Cisco 3560, but I am confused. Is this all I need?
Can somebody post a complete example based on their experience?
Thank you.

This is what I have:
untagged VLAN 65 for public traffic
tagged VLAN traffic for ranges 600-1000

for tagged traffic:
interface GigabitEthernet1/0/1
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 100-900
exit


Fwd: Error on uploading a SSL Certificate with cloudmonkey

2017-08-14 Thread Dennis Meyer
Hi,

I want to implement SSL offloading on the VPC router load balancer,
referencing this ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-4821. It should be
implemented as of version 4.3. Following this document:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Termination+Support
there should be a GUI, which doesn't exist. The API documentation contains
several calls related to SSL offloading
(https://cloudstack.apache.org/api/apidocs-4.9/).

I tried to upload an SSL certificate with CloudMonkey, as in the attached file.
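
(The attachment is not preserved in the archive; going by the uploadSslCert
call in the 4.9 API docs, the CloudMonkey invocation has roughly this shape,
with placeholder values:)

upload sslcert certificate='<PEM certificate>' privatekey='<PEM private key>' certchain='<optional PEM chain>'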

I receive the following error after execution:

Error 530: DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@3e689c30:
INSERT INTO sslcerts (sslcerts.id, sslcerts.uuid, sslcerts.certificate,
sslcerts.chain, sslcerts.key, sslcerts.password, sslcerts.account_id,
sslcerts.domain_id, sslcerts.fingerprint) VALUES (null,
_binary'62bebc90-7cbe-4cfc-9ed0-d6cf71b5bf6a', _binary'-----BEGIN
CERTIFICATE-----\nMIID6TCCAtGgAwIBAgIJANNyfmHIdXU6MA0GCSqGSI
b3DQEBCwUAMIGKMQswCQYD\nVQQGEwJERTEWMBQGA1UECAwNVGVsZXR1YmJ5
bGFuZDEXMBUGA1UEBwwOU2NobHVt\ncGZoYXVzZW4xFjAUBgNVBAoMDU1hcm
lhY3JvbiBMdGQxETAPBgNVBAsMCElUIENy\nb3dkMR8wHQYDVQQDDBZhd2Vz
b21lLnNlcnZlci50ZXN0aW5nMB4XDTE3MDgxMDEx\nMDEzN1oXDTE4MDgxMD
ExMDEzN1owgYoxCzAJBgNVBAYTAkRFMRYwFAYDVQQIDA1U\nZWxldHViYnls
YW5kMRcwFQYDVQQHDA5TY2hsdW1wZmhhdXNlbjEWMBQGA1UECgwN\nTWFyaW
Fjcm9uIEx0ZDERMA8GA1UECwwISVQgQ3Jvd2QxHzAdBgNVBAMMFmF3ZXNv\n
bWUuc2VydmVyLnRlc3RpbmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIB\nAQCYHGJp+NieuEFoSLjmDAHxlX8pVGGXKYw68MpvoMzGkT3zHxpvZz
sI71FaoteX\nC5kyKv8o485KbTGsVTQlkYtuw9mzuOwhEfQ3DsgsX0OxrJeW
GCMwKWtq/O1P5Mk+\nXUitCNDCDg9j89KGe0PRQZ8XGMn3mOEpsBuTe0ST6k
ZTrAl4hA/0hkmp5HEkW9JO\nwUqzcHeCLM2It5o2cfoIOMjJEO30u6pFeQKc
OyGkjijtsUC2Oo6FfJJx9fSSj1dU\nn7a17/yLt98PIP6OMc22g5Q3plV+
9hLoPH+pHrpEqp35abeu8s32uRLLxuObiI7w\nzKgMSi5k/wT30p65v1ONA/
x9AgMBAAGjUDBOMB0GA1UdDgQWBBTfOZmo+l5/QkD1\nlpQR1dDYfkEbVjAf
BgNVHSMEGDAWgBTfOZmo+l5/QkD1lpQR1dDYfkEbVjAMBgNV\nHRMEBTADAQ
H/MA0GCSqGSIb3DQEBCwUAA4IBAQBg2Q5p6F/W+Ktsp28YL5UtOqz+\nqRRD
jLhzUvCdMNR9KChJEhbe3hz/g+OWq+WNhXCEbEpHH0D1b0Ie850DpJjScUmd
\ni8nm29EB+0HLKVBKKK/Y+iAQHIr/ujNqPdZWSfMJGs9vc2Rxq2NW1+FyUn
8gbUdc\n62GW+46spA0ESGO3bjp/LyPIEvtWnvIf9PUdaCwXDOD0KMcdX9rz
eCEaXxm0b5hy\nWvz0JGKaNWyNmmUFBdP/FtEvyDQjS4vWkOIVKoRGVHst9h
6ksMql1W5FtByGDivm\nt5FmzMxWHgPgeafQ66tv8UD+kLLNTMENSoxN8lEpa550wg4gGRfNnvVQFK3F\n-----END
CERTIFICATE-----', null, _binary'2KtHB01zqydgCbWU6FXun2
WsnhAYgSId3BNVCZYr4tYDIEqbKokxb+v/7eu+ZcbJbA/0mY5b9GdHLmU8Gc
d9VBEibYfCEQ8RXXDgVu11xEYo4eeKv6boVA3nluW1Uk4K53vLVKIWGs3Gxq
sc99k70GBy/BfaqC/bp4FndSpIA+qBn8I5DEll3z5MZFGnlhcmqxkH+zkqwb
oGoCDvO4RBuVdkCwkKO92mNsN8Wq9Es9RzMTIrYZ4INk0/B7jesU72KyAev+
R/EKmAuC7+mOqtJ2j/tneoLJHdDk8/+ufIsf/vNQk7SjaoBwY0nGFJH+MhMj
jpJ87XHZtRSyfvDsUORNce0On/RwfyG4LmiSvf3au7NSTNIrH9vQAJ9Jb/MY
vA0cFrZHEnIy5FvYzkjnigL1YdzLkyK6fWllkcRtuAUcOPfXCMNrCgMU5z3x
DfWYICpkJX0hkLRWAR2UpgO5XEBaG5wW3iQy/GNzeG3xG6GP8B8IJ/huVPKA
IcxqXJ0EdcK+npUanMfIoXbKm13gbfWHUNmMviLOXBVswcPxrIV3/NVfNsC9
ZcskvfzKKrFhKJT6+nym5H0w9USDuMvGBoSCeXxygH9NCgz7pEjsdl3YE/
uPW3L7TAxzL4V/8gkfHOY5O2s9/g6rXUJbAKMUOKlBQr3uO5cCeXWP6xOu/L
HEhcBJaJpiQlYVaxEaof4de235uPmUlEsUKV1KaI3X0PXN0V+Rdd9zQXXvle
NpXOSkK009MRwwiVbfcfvgugn9ISxGhX7TmWUBNhMZtmpn51zSmOHnOQfURL
vnkgxt63orjkNt0I8w4ggecQmpPjjWLILVkbhjZHNoP6JzfM0GTHx2XbaL/O
xhUbmDdRXhmVySNECZpzqFIKIsvqmY9YAM0lf5o4JNJd/LbiowLtL/3ruaLx
oXN8gxMeY16EACZw05GMGvOH6Tm9cIQ7dQIGp68w3mby4kyzID0i3qeesRiZ
PLDaQloJC3uOIdy8jiywqBKOt2W02SibYomdjBHPTkznZ3HzBWqsUr/
EghI0VyBQvs/wZgn0a1JMbAhXnQmpbf89KJiHdlrhVRdcCEauMJd+YE47IfH
HqUHurdgU3fnTnkGRuGGyq2GhxDhnxVsYlLH37VQocXYc7U/nfstAVmB5j7D
NLdDh2KdQo94KrBoMezgf+V/C8ey7CjxIoGTNxi7ETozEKe367TngjxeP6Ph
pWjaOk2LueVLwexjIpSvHZGUVGlC+W2+1XsSJ9/R78ztaEyK5YNQWfMXYOI
TB/+HxvJ/5xUtirL5SRUky35kz4/OlVmI3q1coYERv+KThSmMWx0sJLF9Yv2
DXWoZ33JfqaZoZTUyfzgPfyB4+knQuZPrjCmSNUoA+sPDYUUC8yGsH+i
MHQTskTOHaPEudOVEgb71jEOirKkTT/bkBfNc2pvkLNS5ATDWqDlPChy42rE
s5bIPLObulM+dL7FOUbV5TUfIbG92IJhXgoU5vGKtl0SSucImfdWvGRvBjcH
hzsJit0H7HJh2ENrKiF+aohiJYh9O55yrfkI1r3Cl82oMnP9O+66spwMhXv4
qfkTLqY8CgbSNGLuLpR2hkKmb5PARtM9QkEVuPTzw5/meO2CRVzBPSCVucCq
PYcbRCEDJTsQVmyIZbIH6JcMX0tOryEr17h+eSLGmKgOY14saEmx0+HG4nRs
sup5EIyyca4fka/PG2/q5kXBBLVo7xoJn9b9tE00x5zz5V2FIzwO4eLBQ+mw
BZmNxpRk9vNYGKtTn70JVOLt5s0Ces/eejsZeqnWnHG9H76gtx9yHB7zRgg0
jGbSXCxrbPZfRBtiYMME5RivM7rVgeRBs8CFt73QDs+FCQb+68USE8/J5Qee
Q6TfOqgexSmMcTYfn6ad3grDZoQ4qPjkGqVOck+9jtENGa/sT/v6wwQM+eDv
pSptMBvwbRZAqgnE55T88iIV/OzVke4pXvajXp+0mDka92EP6+cqDU3RuVBS
et2kY+KY3Uzjxbq2fDDHngvPbFohB81me8xRqXIqrl1SEjTjJLHLcrs2fWoi
tZxcHfihx8qhOrOhkCum1I3vRsTcfRvom5d7mMOwkQO7/ET7ESjRrYoZ2Pkh
wPfWMu3qnXSLkQNRSqF43+KxVicHpTYdFNZQIpXBUqSinFFqqweTbhrEhE7H
nxmdXngb0R2+agelXeWAe0uJRc/6DtVRAMxWSVTJ3A1AWWG8eWrvXrntSlFH
wm5WZUBTZcLwy21BiX6tR9H/3HlgP5QsmzeVwUFk8oKobN9pt935c=', null, 28, 1,
_binary'9F:70:21:B2:EE:0C:5D:22:DC:68:71:D5:57:98:82:A5:28:6
8:1C:3F:A4:BE:F3:DF:ED:55:3C:15:2A:8A:C3:5A')


I have the following versions installed:

cloudstack-common.x86_64  4.9.2.0-1.el7.centos
cloudstack-management.x86_64  4.9.2.0-1.el7.centos

Re: KVM evaluation

2017-08-14 Thread Luis
Hi
I used to run a basic setup of CS 4.6 with KVM and never had a problem running
different distributions of Ubuntu, CentOS, Windows XP, Vista or 2012, with CS
running on Ubuntu 14.04 and two hosts on CentOS 6.2. I also tried XenServer
6.5 and had a good experience. Now I am trying to do an advanced installation.

  From: S. Brüseke - proIO GmbH 
 To: users@cloudstack.apache.org 
 Sent: Monday, August 14, 2017 5:48 AM
 Subject: KVM evaluation
   
Hi,

we want to give KVM a chance in our CS installation. Can somebody share
his/her experience?
- Which version of KVM are you using?
- What host OS are you running KVM on?
- What do you tweak on your KVM installations?

Thank you very much for sharing this information!

Mit freundlichen Grüßen / With kind regards,

Swen



- proIO GmbH -
Managing Director: Swen Brüseke
Registered office: Frankfurt am Main

VAT ID no. DE 267 075 918
Commercial register: Frankfurt am Main - HRB 86239

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient (or have received this e-mail in error) 
please notify 
the sender immediately and destroy this e-mail.  
Any unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden. 



   

KVM evaluation

2017-08-14 Thread S . Brüseke - proIO GmbH
Hi,

we want to give KVM a chance in our CS installation. Can somebody share
his/her experience?
- Which version of KVM are you using?
- What host OS are you running KVM on?
- What do you tweak on your KVM installations?

Thank you very much for sharing this information!

Mit freundlichen Grüßen / With kind regards,

Swen



- proIO GmbH -
Managing Director: Swen Brüseke
Registered office: Frankfurt am Main

VAT ID no. DE 267 075 918
Commercial register: Frankfurt am Main - HRB 86239

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient (or have received this e-mail in error) 
please notify 
the sender immediately and destroy this e-mail.  
Any unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden. 




Re: Error on Terraform with creating many networks

2017-08-14 Thread Simon Weller

Dennis,


CloudStack is an orchestrator, so it works well with all of the major
hypervisors (KVM, XenServer, VMware et al.). Each hypervisor has its own
limitations, and all of them work slightly differently in terms of how they
support various features.


- Si



From: Dennis Meyer 
Sent: Monday, August 14, 2017 6:32 AM
To: users@cloudstack.apache.org
Subject: Re: Error on Terraform with creating many networks

Is this limitation only on XenServer? What Hypervisor works best with
CloudStack?

2017-08-14 13:27 GMT+02:00 Paul Angus :

> XenServer only supports 7 vNICs on a guest which obviously limits the
> number of tiers, and although vSphere supports 10, there seems to be a code
> bug which means guests can't start if they have more than 7 NICs attached
> (although you can attach more once the instance has started).
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rene Moser [mailto:m...@renemoser.net]
> Sent: 14 August 2017 11:12
> To: users@cloudstack.apache.org
> Subject: Re: Error on Terraform with creating many networks
>
> Hi Dennis
>
> On 08/14/2017 11:46 AM, Dennis Meyer wrote:
> > Hi,
> > I'm trying to create a VPC and some networks with Terraform.
>
> Not sure, but wasn't there an issue with more than (or equal to) 8 VPC networks?
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-
>
> However, I am not sure which versions of CloudStack are affected.
>
> René
>
>


Re: Error on Terraform with creating many networks

2017-08-14 Thread Dennis Meyer
Is this limitation only on XenServer? What Hypervisor works best with
CloudStack?

2017-08-14 13:27 GMT+02:00 Paul Angus :

> XenServer only supports 7 vNICs on a guest which obviously limits the
> number of tiers, and although vSphere supports 10, there seems to be a code
> bug which means guests can't start if they have more than 7 NICs attached
> (although you can attach more once the instance has started).
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Rene Moser [mailto:m...@renemoser.net]
> Sent: 14 August 2017 11:12
> To: users@cloudstack.apache.org
> Subject: Re: Error on Terraform with creating many networks
>
> Hi Dennis
>
> On 08/14/2017 11:46 AM, Dennis Meyer wrote:
> > Hi,
> > I'm trying to create a VPC and some networks with Terraform.
>
> Not sure, but wasn't there an issue with more than (or equal to) 8 VPC networks?
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-
>
> However, I am not sure which versions of CloudStack are affected.
>
> René
>
>


Re: Error on Terraform with creating many networks

2017-08-14 Thread Rene Moser
Hi Dennis

On 08/14/2017 11:46 AM, Dennis Meyer wrote:
> Hi,
> I'm trying to create a VPC and some networks with Terraform.

Not sure, but wasn't there an issue with more than (or equal to) 8 VPC networks?

https://issues.apache.org/jira/browse/CLOUDSTACK-

However, I am not sure which versions of CloudStack are affected.

René



Error on Terraform with creating many networks

2017-08-14 Thread Dennis Meyer
Hi,
I'm trying to create a VPC and some networks with Terraform.
That's my main.tf:

# creating the vpc
resource "cloudstack_vpc" "myvpc" {
  name  = "myvpc"
  cidr  = "10.230.0.0/16"
  vpc_offering  = "Default VPC offering"
  zone  = "${var.zone}"
  project   = "${var.project}"
}

# create our network inside the vpc
resource "cloudstack_network" "public" {
  name  = "public"
  cidr  = "10.230.200.0/24"
  network_offering  = "Default VPC Network with LB"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_vpc.myvpc"]
}

# create our network inside the vpc
resource "cloudstack_network" "network1" {
  name  = "network1"
  cidr  = "10.230.1.0/24"
  gateway   = "10.230.1.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.public"]
}

# create our network inside the vpc
resource "cloudstack_network" "network2" {
  name  = "network2"
  cidr  = "10.230.2.0/24"
  gateway   = "10.230.2.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.network1"]
}

# create our network inside the vpc
resource "cloudstack_network" "network3" {
  name  = "network3"
  cidr  = "10.230.3.0/24"
  gateway   = "10.230.3.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.network2"]
}
# create our network inside the vpc
resource "cloudstack_network" "network4" {
  name  = "network4"
  cidr  = "10.230.4.0/24"
  gateway   = "10.230.4.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.network3"]
}
# create our network inside the vpc
resource "cloudstack_network" "network5" {
  name  = "network5"
  cidr  = "10.230.5.0/24"
  gateway   = "10.230.5.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.network4"]
}
# create our network inside the vpc
resource "cloudstack_network" "network6" {
  name  = "network6"
  cidr  = "10.230.6.0/24"
  gateway   = "10.230.6.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone  = "${var.zone}"
  vpc_id= "${cloudstack_vpc.myvpc.id}"
  project   = "${var.project}"
  acl_id= "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default
static id of cloudstack
  depends_on= ["cloudstack_network.network5"]
}
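
As an aside, the six near-identical tiers could be collapsed with count, for
example (a sketch in the HCL syntax of that era; note it creates the networks
in parallel, losing the serialized depends_on chaining above):

# compact equivalent of network1..network6
resource "cloudstack_network" "tier" {
  count             = 6
  name              = "network${count.index + 1}"
  cidr              = "10.230.${count.index + 1}.0/24"
  gateway           = "10.230.${count.index + 1}.1"
  network_offering  = "${var.network_offering_internalwithrouter}"
  zone              = "${var.zone}"
  vpc_id            = "${cloudstack_vpc.myvpc.id}"
  project           = "${var.project}"
  acl_id            = "ebefcc96-75f5-11e7-adb3-e2bd27d4977e" # default static id of cloudstack
}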


And that's the Terraform error I get after applying this:
cloudstack_network.network6: Creating...
  acl_id:   "" => "ebefcc96-75f5-11e7-adb3-e2bd27d4977e"
  cidr: "" => "10.230.6.0/24"
  display_text: "" => ""
  endip:"" => ""
  gateway:  "" => "10.230.6.1"
  name: "" => "network6"
  network_domain:   "" => ""
  network_offering: "" => "VPC Default Network"
  project:  "" => "Meyer_TF"
  startip:  "" => ""
  tags.%:   "" => ""
  vpc_id:   "" => "5a418049-6e9a-49ef-af98-953f53a3262d"
  zone: "" => "Germany Nuernberg QSC"
Error applying plan:

1 error(s) occurred:

* cloudstack_network.network6: 1 error(s) occurred:

* cloudstack_network.network6: Error creating network network6: CloudStack
API error 530 (CSExceptionErrorCode: 4250): Internal error executing
command, please contact your system administrator

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated 

Re: CloudStack-UI 1.0.7 released on July, 25, 2017

2017-08-14 Thread Ivan Kudryavtsev
Hi, Rene. Thanks for the update. I forwarded it to the team. We'll try to fix
it ASAP and update the release.

2017-08-14 16:29 GMT+07:00 Rene Moser :

> Hi Ivan
>
> Thanks for your update! Appreciate the progress.
>
>
> Just a note after a quick test: it seems there is an error in the Docker
> image. I was not able to start a container with 1.0.7, but the very same
> command works with 1.0.6.
>
> $ docker logs cloudstack-ui
>
>
>
> 2017/08/14 09:27:12 [emerg] 13#13: host not found in upstream
> "cs-extensions" in /etc/nginx/conf.d/default.conf:22
> nginx: [emerg] host not found in upstream "cs-extensions" in
> /etc/nginx/conf.d/default.conf:22
>
> Regards
> René
>
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ 


Re: CloudStack-UI 1.0.7 released on July, 25, 2017

2017-08-14 Thread Rene Moser
Hi Ivan

Thanks for your update! Appreciate the progress.


Just a note after a quick test: it seems there is an error in the Docker
image. I was not able to start a container with 1.0.7, but the very same
command works with 1.0.6.

$ docker logs cloudstack-ui



2017/08/14 09:27:12 [emerg] 13#13: host not found in upstream
"cs-extensions" in /etc/nginx/conf.d/default.conf:22
nginx: [emerg] host not found in upstream "cs-extensions" in
/etc/nginx/conf.d/default.conf:22

Regards
René



CloudStack-UI 1.0.7 released on July, 25, 2017

2017-08-14 Thread Ivan Kudryavtsev
If you don't see a properly marked document and would like to see the same
press release with images, follow the link:
https://github.com/bwsw/cloudstack-ui/wiki/107-ReleaseNotes-En

Release 1.0.7 Overview

On August 8, 2017 we released CloudStack-UI 1.0.7. This is an experimental
release. We do not recommend using it in production environments.

During the work on the current release, the team's main efforts were
directed at implementing plugin functionality that is absent in the native
CloudStack API; namely, we created two new plugins: WebShell and Pulse.

Also, the release includes the support of several standard features:

   - Periodic volume snapshots;
   - Capability to specify the default compute offering for the zone.


WebShell plugin (experimental function)

WebShell is a CloudStack-UI extension designed to perform a clientless SSH
connection to a virtual machine. The extension is activated in the
CloudStack-UI configuration file and is supported by an additional Docker
container. In usage, the plugin is similar to the noVNC interface provided
by CloudStack; however, WebShell uses the SSH protocol and doesn't allow
emergency VM management.

The need for this extension is determined by the shortcomings of the noVNC
interface, which obstruct its use for everyday administrative purposes:

   - Low interactivity and slow throughput of the terminal interface;
   - No way to copy/paste text from the user's local machine;
   - No option to end the session on a timeout;
   - Access to the virtual machine in out-of-band mode, which allows a
   number of insecure operations to be performed.

WebShell plugin solves these problems:

   - Provides high interactivity, which is especially useful when working
   with information that contains large amounts of text;
   - Allows copying and pasting text from the workstation;
   - Enables configuration of the session completion timeout, thereby
   improving the security of the system;
   - Doesn't provide access to the VM in out-of-band mode.

In future releases this plugin will be extended with additional features,
such as integration with the VM access key store and a dashboard for
efficient work with many open SSH sessions.

This feature is not available in the basic CloudStack UI and API. Plugin
deployment and configuration instructions can be found on the plugin page.
Pulse plugin (experimental function)

The Pulse plugin is designed for visualization of virtual machine performance
statistics. Currently this CloudStack-UI extension is only compatible with
ACS clusters that use the KVM hypervisor. With the help of sensors that
collect virtual machine performance statistics via the libvirt API and store
them in an InfluxDB datastore, plus a RESTful statistics server, CloudStack-UI
is able to display CPU, RAM, disk I/O and network traffic utilization in the
form of convenient visual charts.

Pulse allows users of Apache CloudStack to monitor current and previous
operational states of virtual machines. The plugin supports various view
scales like minutes, hours, days and enables data overlays to monitor peak
and average values.

We consider this plugin very important for the CloudStack ecosystem as
currently there is no built-in functionality to track VM operational
states, although it is vital for system administrators to successfully
operate virtual servers.

Plugin deployment and configuration instructions can be found on the
plugin page.
Deployment Instructions

The release can be found at GitHub releases:
https://github.com/bwsw/cloudstack-ui/releases/tag/1.0.7

Prepared Docker image is available at Dockerhub:
https://hub.docker.com/r/bwsw/cloudstack-ui/

You can pull it with:

# docker pull bwsw/cloudstack-ui:1.0.7

The project changelog is here:
https://github.com/bwsw/cloudstack-ui/wiki/Changelog

Deployment guide and project info can be found at GitHub pages:
https://bwsw.github.io/cloudstack-ui/
Release 1.0.8 expectations

The main goal for the upcoming 1.0.8 release is to review the current code
and fix the errors found. Additional features and improvements will be added
to the release backlog as low-priority tasks.
Community Message

Dear community member, we will be thankful if you

   - try the project and provide us with feedback;
   - share the information about the project and the release in social

Different Time/Timezone on ACS node and hypervisors

2017-08-14 Thread Makrand
Hey All,

Is there any possible problem or impact if the CloudStack VM/machine has a
different time/time zone than the rest of the hypervisor hosts in the cloud?
Let's assume both scenarios:
a) One ACS VM/machine managing multiple zones (the ACS VM in one location
with its native time zone; the other hypervisor pools have native time set)
b) Each zone has its dedicated ACS VM (the ACS VM has its time set to GMT,
whereas the hypervisors have native time set)

FYI, we are facing some issues with snapshots (XenServer as hypervisor).
Just ruling out possible reasons.
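
For comparison's sake, one quick check is to run the same commands on the
management server and on each hypervisor host and compare the output
(assuming NTP is in use on the hosts):

date -u
ntpq -p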


--
Makrand