Re: Virtio vs IDE emulation, per OS - where is this info ?

2016-08-24 Thread Frank Louwers
to my knowledge, only the DB has that info!

Frank
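
For support staff who want to pull the list of OS types out of the database, the kind of query that implies is sketched below, assuming the standard "cloud" schema with its guest_os and guest_os_category tables (names can differ per version, and the virtio-vs-IDE decision itself may still be made in hypervisor-side code rather than stored in these tables):

# Sketch only: list the guest OS types ACS knows about, grouped by category.
# Assumes the standard "cloud" schema; adjust credentials and table names
# to your installation.
mysql -u cloud -p cloud -e "
  SELECT c.name AS category, o.id, o.display_name
  FROM guest_os o
  JOIN guest_os_category c ON c.id = o.category_id
  ORDER BY c.name, o.display_name;"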

> On 24 Aug 2016, at 16:39, Andrija Panic  wrote:
> 
> Hi all,
> 
> I'm trying to build a simple table for our support guys showing which OS types
> (as seen in ACS) are virtio-emulated and which use IDE/Intel emulation.
> 
> I don't seem to find this in the DB, except for the list of OS types, families etc.
> 
> Can someone give me some info on where I should look to find whether an OS is
> supposed to be virtio or IDE/Intel emulated?
> 
> Thanks,
> 
> -- 
> 
> Andrija Panić



Re: CloudStack Design: Ceph and local storage

2016-06-17 Thread Frank Louwers
Has anyone looked at StorPool? They seem to be doing an OnApp-storage like 
setup…

Regards,

Frank


> On 17 Jun 2016, at 16:24, Dustin Wright  
> wrote:
> 
> Thank you for the valuable feedback. I have been considering the same setup
> Jeroen proposed.
> 
> I think what most of us want is something like OnApp's integrated storage. At
> least that is what it sounds like to me.
> 
> It would be neat if ACS had a system VM baked in for running Ceph or some
> other type of distributed storage, either VM or object storage. IMO, running a
> big, expensive enterprise SAN for your block storage doesn't feel very
> cloud-like.
> 
> Dustin
> 
> On Fri, Jun 17, 2016 at 9:59 AM, Stephan Seitz <
> s.se...@secretresearchfacility.com> wrote:
> 
>> Hi!
>> 
>> Independently of CloudStack, I'd strongly recommend not running Ceph
>> and hypervisors on the very same machines. If you just want to build a
>> POC this is fine, but if you put load on it you'll see unpredictable
>> behaviour (at least on the Ceph side) due to heavy I/O demands.
>> Ceph recommends at least 1 core and 1 GB RAM as a rule of thumb for
>> each OSD.
>> BTW, I also wouldn't run a Ceph cluster with only two nodes. Your MONs
>> should be able to form a quorum, so you'd need at least three nodes.
>> 
>> If you run a cluster with fewer than about 6 or 8 nodes, I'd give
>> Gluster a try. I've never tried it myself, but I assume it should
>> be usable as "pre-setup" storage, at least with KVM hosts.
>> 
>> cheers,
>> 
>> - Stephan
>> 
>> 
>> 
>>> On Friday, 17.06.2016, at 13:36 +0200, Jeroen Keerrel wrote:
>>> Good afternoon from Hamburg, Germany!
>>> 
>>> Short question:
>>> Is it feasible to use CloudStack with Ceph on local storage? As in
>>> “hyperconverged”?
>>> 
>>> Before ramping up the infrastructure and buying new hardware, I'd like
>>> to be sure.
>>> 
>>> At the moment: 2 hosts, each with 2 six-core Xeon CPUs, 24 GB RAM and 6
>>> x 300 GB SAS drives.
>>> 
>>> The Ceph documentation advises bigger disks and separate storage
>>> "nodes".
>>> The CloudStack documentation says: smaller, high-RPM disks.
>>> 
>>> What would you advise? Buy separate "storage nodes" or ramp up the
>>> current nodes?
>>> 
>>> Cheers!
>>> Jeroen
>>> 
>>> 
>>> 
>>> Jeroen Keerl
>>> Keerl IT Services GmbH
>>> Birkenstraße 1b . 21521 Aumühle
>>> +49 177 6320 317
>>> www.keerl-it.com
>>> i...@keerl-it.com
>>> Managing director: Jacobus J. Keerl
>>> Registered at the district court of Lübeck, HRB no. 14511
>>> Our general terms and conditions can be found here.
>>> 
>>> 
>> 



Re: CloudStack Logging

2016-04-21 Thread Frank Louwers
Hi Paul,

I’d love to see improvement in that area! Especially for my operational admins, 
the biggest issue we face is this:

- Someone wants to deploy a new VM, and it fails.

- Currently, it’s very hard to figure out exactly why. The best they get is 
“Capacity Planner failure”.

- They come to me :)


I think these failures are quite an easy use case to improve logging on. If a 
VM can't be started because of a "capacity" issue, it would be good to log (in 
a non-debug, non-info log):

- Which hosts were considered for CPU/mem (based on zone, tags etc.), listed 
by name preferably, not as "host 24", which means nothing to them as that 
number isn't anywhere in the CS UI

- Which of those considered hosts have the capacity needed (and, for those that 
are excluded, whether they are excluded for RAM or CPU reasons)

- The exact same for storage pools

This would solve over 95% of my “Frank, I can’t find out why this VM won’t 
deploy” logging issues.
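
Until something like that exists, a rough way for the ops team to dig the planner's reasoning out of the management log is to grep for the failed job's context id. A sketch, assuming the default packaged log location; the class-name patterns are guesses and vary per version:

# Rough triage sketch for a failed deployment. The log path is the packaged
# default; the grep patterns are assumptions and will differ per version.
LOG=/var/log/cloudstack/management/management-server.log
CTX="$1"   # the job / ctx id shown in the UI error or async job response

grep "$CTX" "$LOG" | grep -iE 'allocat|planner|capacity|suitable' | less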

Regards,

Frank


> On 20 Apr 2016, at 22:05, Paul Angus  wrote:
> 
> Hi,
> 
> I don't think that it's a question of importance. INFO vs DEBUG should be 
> telling you different things. ERROR and WARN are also quite different.
> In general I'm targeting the cloud operational admins, the people who need to 
> know the health of their cloud and deal with issues as they're constantly 
> reading the log.
> 
> However, I'm not proposing to add or remove any messages, just revisit the 
> categorisation. If the 'operator' has their logging on debug they'd actually 
> see the same messages. The idea is to make turning the logging down to INFO 
> feasible.
> 
> You'd turn the logging back up to pick up code issues rather than just 
> operational issues.
> 
> Some VERY simplified definitions might be:
> 
> DEBUG:  inner workings of CloudStack that only a developer can 'understand'
> INFO:  actions that CloudStack is performing or information analysis (host 5 
> is full so not going to use it).
> WARN: this isn't good
> ERROR: something just didn't work.
> 
> 
> Grabbing a couple of examples
> 
> these should be debug - I can't 'do' anything with this information.
> 
> INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (AsyncJobMgr-Heartbeat-1:ctx-4119d5bc) Begin cleanup expired async-jobs
> INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (AsyncJobMgr-Heartbeat-1:ctx-4119d5bc) End cleanup expired async-jobs
> INFO  [c.c.h.v.r.VmwareResource] (DirectAgentCronJob-2:ctx-3209f65c) Scan 
> hung worker VM to recycle
> INFO  [c.c.h.v.r.VmwareResource] (DirectAgentCronJob-113:ctx-56d9f9f2) Scan 
> hung worker VM to recycle
> 
> Whereas this should be WARN. I wouldn't want to lose this by switching off 
> DEBUG
> 
> DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-91226be7) Detected 
> management node left, id:2, nodeIP:10.2.0.6
> DEBUG [c.c.c.ClusterManagerImpl] (Cluster-Heartbeat-1:ctx-0bccd078) Detected 
> management node left, id:2, nodeIP:10.2.0.6
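
Aside from the recategorisation itself, an operator who wants quieter logs today can already raise the appender threshold. A rough sketch, assuming the usual config location; the file name and element layout differ between versions, so review the diff before restarting:

# Sketch only: turn the management server's file logging down from DEBUG.
# /etc/cloudstack/management/log4j-cloud.xml is the usual default location,
# but verify the path and layout on your own version first.
CONF=/etc/cloudstack/management/log4j-cloud.xml
cp "$CONF" "$CONF.bak"
grep -n 'DEBUG' "$CONF"                     # see where DEBUG is set
sed -i 's/value="DEBUG"/value="INFO"/g' "$CONF"
diff -u "$CONF.bak" "$CONF"                 # review before restarting
# systemctl restart cloudstack-management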
> 
> 
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> Regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
> 
> -Original Message-
> From: Chaz PC [mailto:dreeems4e...@hotmail.com] 
> Sent: 20 April 2016 11:32
> To: users@cloudstack.apache.org
> Subject: RE: CloudStack Logging
> 
> Hello,
> I have a question: what criteria are you going to follow to decide a log 
> record's importance? Which user are you designing the logging process 
> for? Is it for finding issues with the installation, or for 
> security breaches and auditing?
> These questions are good to keep in mind when designing the logging process.
> 
> 
> Sent from my Samsung Galaxy smartphone. 
> -------- Original message --------
> From: Paul Angus
> Date: 20/04/2016 1:01 PM (GMT+04:00)
> To: d...@cloudstack.apache.org, users@cloudstack.apache.org
> Subject: RE: CloudStack Logging
> 
> Hi Simon,
> 
> My gut says that it's probably not worth going back before 4.3. There have 
> been large changes since then, but I think that it's better to get as much 
> data as possible; limiting to 4.6+ might not give a particularly large 
> install base.
> 
> 
> Kind regards,
> 
> Paul Angus
> 
> Regards,
> 
> Paul Angus
> 
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> 
> -Original Message-
> From: Simon Weller [mailto:swel...@ena.com]
> Sent: 19 April 2016 21:51
> To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> Subject: Re: CloudStack Logging
> 
> Paul,
> 
> Are you wanting to focus on logs from releases later than a particular version 
> (e.g. 4.6)?
> 
> - Si
> 
> From: Paul Angus 
> Sent: Tuesday, April 19, 2016 10:55 AM
> To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> Subject: RE: CloudStack Logging
> 
> No problem.
> 
> Good question - Any and all logs.  

Re: Self-fencing when storage not available

2016-02-03 Thread Frank Louwers

> On 03 Feb 2016, at 19:32, Nux!  wrote:
> 
> You can modify the script to not reboot, but until we find a better way to 
> deal with it this is correct behaviour. It sucks it reboots VMs on healthy 
> storage though.

Not only does it reboot on healthy nodes, it also only works with NFS, not with any other 
type of primary storage, and not with local storage etc...
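
To see which pools would actually be covered by that NFS-only heartbeat, a quick look at the storage_pool table helps; a sketch assuming the standard "cloud" schema (column names may vary slightly per version):

# Quick sketch: list primary storage pools and their types, to see which
# ones an NFS-only heartbeat check would actually cover.
mysql -u cloud -p cloud -e "
  SELECT id, name, pool_type, scope, host_address
  FROM storage_pool
  WHERE removed IS NULL;"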
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Vladislav Nazarenko" 
>> To: users@cloudstack.apache.org
>> Sent: Wednesday, 3 February, 2016 16:24:34
>> Subject: Re: Self-fencing when storage not available
> 
>> Hi Glenn,
>> 
>> we use KVM ... I also found the script:
>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh
>> 
>> Just wondering if we can avoid rebooting ...
>> 
>> Thanks
>> Vlad
>> 
>> On 03.02.2016 16:57, Glenn Wagner wrote:
>>> Hi Vlad,
>>> 
>>> Can I ask what hypervisor are you using?
>>> The default action is to reboot the hosts
>>> 
>>> This is done in the heartbeat file on XenServer:
>>> /opt/xensource/bin/xenheartbeat.sh
>>> 
>>> Thanks
>>> Glenn
>>> 
>>> 
>>> 
>>> Glenn Wagner
>>> Senior Consultant, ShapeBlue
>>> s: +27 21 527 0091 | m: +27 73 917 4111
>>> e: glenn.wag...@shapeblue.com | w: www.shapeblue.com
>>> a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
>>> 
>>> 
>>> 
>>> 
>>> -Original Message-
>>> From: Vladislav Nazarenko [mailto:vladislav.nazare...@gmail.com]
>>> Sent: Wednesday, 03 February 2016 5:08 PM
>>> To: users@cloudstack.apache.org
>>> Subject: Self-fencing when storage not available
>>> 
>>> Hi All,
>>> 
>>> I'm testing Cloudstack 4.6 now
>>> 
>>> While doing some tests with storage (NFS), I noticed that the
>>> hosts self-fence themselves by rebooting when a storage is not writable...
>>> 
>>> Even worse, my cluster had a working storage and I added one more,
>>> which was not writable due to missing user permissions... So
>>> effectively all the VMs' hard drives were located on the working
>>> storage, but the permission problem on the new one caused the
>>> entire cluster to reboot :(
>>> 
>>> Is this a bug or correct behavior?
>>> 
>>> Is it at least possible to avoid the rebooting?
>>> 
>>> Thank you in advance
>>> Vlad
>>> 
>>> 



Re: Summary: -1 LTS

2016-01-11 Thread Frank Louwers
All,

I am +1 on LTS: we have a custom branch of CloudStack, with a few custom 
patches. Some of them make sense for everyone, and we've committed them back 
or plan to do so, but most of them only work for our specific case, or "cut 
corners" by dropping features we don't need.

An LTS branch would allow us to keep our patches “good” against LTS. Our 
current tree is based on 4.5. I’d need to perform some manual patchwork to make 
them apply against 4.6, let alone 4.7. Having an LTS would mean I’d only have 
to do this every few years...
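
The workflow this implies is roughly the git sketch below; the branch and tag names are made up for illustration:

# Illustrative only: carrying private patches on top of an upstream release
# and replaying them onto a newer base. Branch/tag names are made up here.
git remote add upstream https://github.com/apache/cloudstack.git
git fetch upstream --tags

# our-patches-4.5 holds the private commits on top of the 4.5 line;
# replay those commits onto the newer base when moving releases.
git checkout -b our-patches-4.6 our-patches-4.5
git rebase --onto 4.6.0 4.5.2 our-patches-4.6
# resolve conflicts, rebuild packages, re-test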

I know this might sound selfish. But I assume I am not the only one in this 
case…

Regards,

Frank


> On 11 Jan 2016, at 16:18, Daan Hoogland  wrote:
> 
> (rant alert) I have been stating this in the discuss thread and I don't
> agree with your conclusion; with our new workflow any release is an LTS as
> long as we maintain the discipline of allowing only bugfixes on the release
> they first appeared in (or 4.6 as a starting point). If we maintain that
> discipline during review, any release henceforth is an LTS. Of course people
> can pay others to backport outside the Apache CloudStack project if they
> want, as well, but the notion that we don't have an LTS at the moment
> hurts. (end of rant)
> 
> On Mon, Jan 11, 2016 at 3:25 PM, Rene Moser  wrote:
> 
>> LTS by the community is not an option for now:
>> 
>> Most of the threads/users/devs had concerns or are skeptical about how it can
>> be done in practice.
>> 
>> As we recently changed the release process, it seems too "early" to
>> change it again or add new processes to it.
>> 
>> I still think CloudStack needs some kind of LTS to serve business needs,
>> but I am unsure if _we_ as a community should do it.
>> 
>> Thanks for participating.
>> 
>> Regards
>> René
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Daan



Re: What SDN are you using?

2015-11-30 Thread Frank Louwers
Hi Vadim,

Are you using VyOS in a Cloudstack environment?


> On 30 Nov 2015, at 11:20, Vadim Kimlaychuk  wrote:
> 
> I don't know if applicable, but I had very good impression from VyOS 
> (http://vyos.net/wiki/Main_Page).
> 
> Vadim.
> 
> On 2015-11-29 20:56, Nux! wrote:
> 
>> Hello,
>> So besides the folks using Nicira, can anyone recommend any other SDN thing 
>> or should I stay with v(x)lans?
>> I'm kind of removed from this side of things since my deployments tend to be 
>> with Security Groups in a single VLAN or a few VLANs; any suggestions welcome, 
>> preferably free/FOSS.
>> Lucian
>> --
>> Sent from the Delta quadrant using Borg technology!
>> Nux!
>> www.nux.ro [1]
> 
> 
> 
> Links:
> --
> [1] http://www.nux.ro



Re: KVM HA is broken, let's fix it

2015-10-12 Thread Frank Louwers

> On 10 Oct 2015, at 12:35, Remi Bergsma  wrote:
> 
> Can you please explain what the issue is with KVM HA? In my tests, HA starts 
> all VMs just fine without the hypervisor coming back. At least that is on 
> current 4.6. Assuming a cluster of multiple nodes of course. It will then do 
> a neighbor check from another host in the same cluster. 
> 
> Also, malfunctioning NFS leads to corruption and therefore we fence a box 
> when the shared storage is unreliable. Combining primary and secondary NFS is 
> not a good idea for production in my opinion. 

Well, it depends on how you look at it, and on what your situation is.

If you use one NFS export as primary storage (and only NFS), then yes, the 
system works as one would expect, and doesn't need to be fixed.

However, HA is "not functioning" in any of these scenarios:

- you don't use NFS as your only primary storage
- you use more than one NFS primary storage

Even worse: imagine you only use local storage as primary storage, but have one 
NFS primary storage configured (as the UI "wizard" forces you to configure one). 
You don't have any active VM on that NFS storage. You then perform maintenance 
on the NFS storage and take it offline…

All your hosts will then reboot, resulting in major downtime that's completely 
unnecessary. There's not even an option to disable this at this point… We've 
removed the reboot instructions from the HA script on all our instances…
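
For reference, the sort of change that means in practice is sketched below; the script's contents differ per release, so check your own copy first, and keep in mind that it removes the corruption protection the fencing exists to provide:

# Sketch only: locate and neuter the fencing step in the KVM heartbeat
# script. Contents differ per release, so inspect before changing anything.
HB=/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh
cp "$HB" "$HB.orig"
grep -nE 'sysrq-trigger|reboot' "$HB"   # find the reboot/fence step
# Replace that step by hand with a loud log line instead, e.g.:
#   /usr/bin/logger -t heartbeat "heartbeat failed - would have rebooted"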

Regards,

Frank

Re: CS 4.5.2: all hosts reboot after 3 days at production

2015-09-15 Thread Frank Louwers
Important correction: it monitors the health of the first primary NFS (or 
otherwise "distributed and mounted") filesystem. If you don't use NFS as your 
(main) primary storage, it's safe to disable that reboot. If you know your NFS 
has "issues" from time to time, and you have controls around that, then yes, you 
can disable it…

Frank



> On 15 Sep 2015, at 09:08, Abhinandan Prateek 
>  wrote:
> 
> The heartbeat script monitors the health of the primary storage by using a 
> timestamp that is written to each primary store.
> In case the primary storage is unreachable it reboots the XenServer in order 
> to protect the virtual machines from corruption.
> 
>> On 14-Sep-2015, at 8:48 pm, Vadim Kimlaychuk  wrote:
>> 
>> Remi,
>> 
>>   I will definitely enable HA once I find out what is rebooting the host. I know the 
>> circumstances in which it happens and I know that it is storage-related. 
>> Hardware health is monitored by SNMP and there were no problems with 
>> temperature, CPU, RAM or HDD ranges. In case of a HW failure I should 
>> theoretically have a kernel panic or crash dumps, but there are none. I will 
>> experiment a bit.
>> 
>>   Thank you,
>> 
>> Vadim.
>> 
>> On 2015-09-14 17:35, Remi Bergsma wrote:
>> 
>>> Hi Vadim,
>>> It can also be XenHA but I remember you already said it is off. Did you 
>>> check the hardware health?
>>> I'd recommend turning on XenHA as otherwise in case of a failure you will 
>>> not have an automatic recovery.
>>> Regards,
>>> Remi
>>> On 14/09/15 15:09, "Vadim Kimlaychuk"  wrote:
>>> Remi,
>>> I have analyzed the xenheartbeat.sh script and it seems to be useless,
>>> because it relies on the file /opt/cloud/bin/heartbeat, which has 0 length. It is
>>> not set up during installation and there is no such step in the
>>> documentation for setting it up. Logically the admin must run
>>> "setup_heartbeat_file.sh" to make the heartbeat work. If this file is 0
>>> length then the script checks nothing and logs this message every minute:
>>> Sep 14 04:43:53 xcp1 heartbeat: Problem with heartbeat, no iSCSI or NFS
>>> mount defined in /opt/cloud/bin/heartbeat!
>>> That means it can't reboot the host, because it doesn't check
>>> anything. Isn't that so?
>>> Is there any other script that may reboot the host when there is a
>>> problem with the storage?
>>> Vadim.
>>> On 2015-09-14 15:40, Remi Bergsma wrote:
>>> Hi Vadim,
>>> This does indeed reboot a box, once storage fails:
>>> echo b > /proc/sysrq-trigger
>>> Removing it doesn't make sense, as there are serious issues once you
>>> hit this code. I'd recommend making sure the storage is reliable.
>>> Regards, Remi
>>> On 14/09/15 08:13, "Vadim Kimlaychuk"  wrote:
>>> Remi,
>>> I have analyzed the situation and found that storage may cause the problem
>>> with the host reboots, as you wrote earlier in this thread. The reason:
>>> we do offline backups from the NFS server at the time the hosts fail.
>>> Basically we copy all files in primary and secondary storage offsite.
>>> This process starts precisely at 00:00, and somewhere around 00:10 -
>>> 00:40 the XenServer host starts to reboot.
>>> Reading old threads I have found that
>>> /opt/cloud/bin/xenheartbeat.sh may do this job. In particular, the last lines
>>> of my xenheartbeat.sh are:
>>> -
>>> /usr/bin/logger -t heartbeat "Problem with $hb: not reachable for
>>> $(($(date +%s) - $lastdate)) seconds, rebooting system!"
>>> echo b > /proc/sysrq-trigger
>>> -
>>> The only unclear point is that I don't have such a line in my logs.
>>> Could the command "echo b > /proc/sysrq-trigger" prevent the message from
>>> being written to the syslog file? The documentation says that it reboots
>>> immediately without synchronizing the FS. It seems there is no other place
>>> that may do it, but I am still not 100% sure.
>>> Vadim.
>>> On 2015-09-13 18:26, Vadim Kimlaychuk wrote:
>>> Remi,
>>> Thank you for the hint. At least one problem is identified:
>>> [root@xcp1 ~]# xe pool-list params=all | grep -E "ha-enabled|ha-config"
>>> ha-enabled ( RO): false
>>> ha-configuration ( RO):
>>> Where should I look for storage errors? Host? Management server? I have
>>> checked /var/log/messages and there were only regular messages, no
>>> "fence" or "reboot" commands.
>>> I have a dedicated NFS server that should be accessible all the time (at
>>> least the NIC interfaces are bonded in master-slave mode). The server is used
>>> for both primary and secondary storage.
>>> Thanks,
>>> Vadim.
>>> On 2015-09-13 14:38, Remi Bergsma wrote:
>>> Hi Vadim,
>>> Not sure what the problem is. Although I do know that when shared
>>> storage is used, both CloudStack and XenServer will fence (reboot) the
>>> box to prevent corruption in case access to the network or the storage
>>> is not possible. What storage do you use?
>>> What does this return on a XenServer?:
>>> xe pool-list params=all | grep -E "ha-enabled|ha-config"
>>> HA should be on, or else a hypervisor crash will 

RE: duplicate use of ips?

2015-08-24 Thread Frank Louwers
Somesh, it just happened again:

mysql> select * from nic_secondary_ips where ip4_address = '37.72.162.58';
+----+--------------------------------------+------+-------+--------------+-------------+------------+---------------------+------------+-----------+
| id | uuid                                 | vmId | nicId | ip4_address  | ip6_address | network_id | created             | account_id | domain_id |
+----+--------------------------------------+------+-------+--------------+-------------+------------+---------------------+------------+-----------+
| 14 | becda4a0-b912-4015-b69f-c9b889f74762 |   55 |    69 | 37.72.162.58 | NULL        |        204 | 2015-03-19 20:35:09 |         14 |         2 |
| 22 | b4b30ca2-f06c-4c1c-a048-f92678611eb2 | 5014 | 19177 | 37.72.162.58 | NULL        |        204 | 2015-08-24 09:03:45 |          2 |         1 |
+----+--------------------------------------+------+-------+--------------+-------------+------------+---------------------+------------+-----------+

mysql> select * from nics where id = 69\G
*************************** 1. row ***************************
            id: 69
          uuid: 23670e25-377a-40d1-ad29-0517127709b5
   instance_id: 55
   mac_address: 06:ba:92:00:00:46
   ip4_address: 37.72.162.105
       netmask: 255.255.254.0
       gateway: 37.72.162.1
       ip_type: Ip4
 broadcast_uri: vlan://untagged
    network_id: 204
          mode: Dhcp
         state: Reserved
      strategy: Start
 reserver_name: DirectPodBasedNetworkGuru
reservation_id: e9f76078-6d8f-4e46-bf4c-35e2814bb8e1
     device_id: 0
   update_time: 2015-04-01 13:02:35
 isolation_uri: NULL
   ip6_address: NULL
   default_nic: 1
       vm_type: User
       created: 2015-03-18 14:09:53
       removed: NULL
   ip6_gateway: NULL
      ip6_cidr: NULL
  secondary_ip: 1
   display_nic: 1

There is only one entry in user_ip_address, the entry for the "new" instance, 
but I am sure there was an entry for the "old" instance that got overridden.
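
A quick way to check whether there are more collisions like this one, using the same tables shown above (standard "cloud" schema assumed):

# Sketch: secondary IPs handed out more than once, and secondary IPs that
# collide with another VM's primary NIC address. Uses the tables shown above.
mysql -u cloud -p cloud -e "
  SELECT ip4_address, COUNT(*) AS uses, GROUP_CONCAT(vmId) AS vms
  FROM nic_secondary_ips
  GROUP BY ip4_address HAVING COUNT(*) > 1;

  SELECT s.ip4_address, s.vmId AS secondary_vm, n.instance_id AS primary_vm
  FROM nic_secondary_ips s
  JOIN nics n ON n.ip4_address = s.ip4_address AND n.removed IS NULL
  WHERE n.instance_id <> s.vmId;"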
-- 
Frank Louwers
Openminds bvba
T: 09/225 82 91
www.openminds.be

On 21 Aug 2015 at 19:08:59, Somesh Naidu (somesh.na...@citrix.com) wrote:

Frank,  

 That’s not really true: This is from the “nics” table in the Database:  
What about the removed column?  

Also, did you check the user_ip_address table for the IPs? Are there 
multiple entries for those? If not, then it should not be possible for the mgmt 
server to assign one IP to two VMs at the same time, unless the record is 
incorrectly not being marked as Allocated during assignment.  


Regards,  
Somesh  


-Original Message-  
From: Frank Louwers [mailto:fr...@openminds.be]  
Sent: Friday, August 21, 2015 3:30 AM  
To: users@cloudstack.apache.org  
Subject: Re: duplicate use of ips?  

On 21 Aug 2015 at 04:44:13, Abhinandan Prateek 
(abhinandan.prat...@shapeblue.com) wrote:  
If you are manually assigning the IPs, better to use IPs that are outside the 
CIDR that CloudStack manages.   
The IPs assigned by CloudStack's DHCP are as per CloudStack's assignments; anything 
that happens outside the VR's DHCP is unknown to CloudStack.   


That’s not really true: This is from the “nics” table in the Database:  



+-------+-------------+----------+--------------+------------+--------------+
| id    | instance_id | state    | ip4_address  | network_id | secondary_ip |
+-------+-------------+----------+--------------+------------+--------------+
|    72 |          58 | Reserved | x.y.x.92     |        204 |            1 |
| 19092 |        4929 | Reserved | x.y.z.92     |        204 |            0 |
+-------+-------------+----------+--------------+------------+--------------+







 On 20-Aug-2015, at 11:29 pm, Frank Louwers fr...@openminds.be wrote:   
   
 Hi,   
   
 In a zone with Basic Networking, I’ve assigned a certain netblock x.y.z.0/24 
 to the Guest network in CloudStack.   
   
 I have a VM-A that has primary ip address x.y.z.92 and secondary address on 
 the same nic x.y.z.52.   
   
 For various reasons, *both* ips were configured manually, so not using the 
 VR’s dhcp.   
   
 A while back, the x.y.z.92 was (manually) deconfigured on VM-A (so VM-A only 
 used x.y.x.52, but both ips are still configured to belong to VM-A in 
 Cloudstack).   
   
 A few days ago, a new instance (VM-B) was spun up, and CS assigned ip 
 x.y.z.92 to that VM. Why would it do that?   
 Today, a new instance was spun up (VM-C), but CS assigned x.y.z.52 to VM-C…   
   
 How and why does this happen?! Is this because VM-A does not use DHCP? That 
 might explain the .92 re-assignment, but certainly not the .52 re-assignment, 
 as secondary IPs don't use DHCP anyhow…   
 
 Can anyone tell me what's going on, and what can be done to prevent this? 
 Running CS 4.4.2 at the moment, considering upgrading to 4.5.1 (or 4.5.2)   
   
 Regards,   
 Frank   
   


Re: duplicate use of ips?

2015-08-21 Thread Frank Louwers
On 21 Aug 2015 at 04:44:13, Abhinandan Prateek 
(abhinandan.prat...@shapeblue.com) wrote:
If you are manually assigning the IPs, better to use IPs that are outside the 
CIDR that CloudStack manages. 
The IPs assigned by CloudStack's DHCP are as per CloudStack's assignments; anything 
that happens outside the VR's DHCP is unknown to CloudStack. 


That’s not really true: This is from the “nics” table in the Database:



+-------+-------------+----------+--------------+------------+--------------+
| id    | instance_id | state    | ip4_address  | network_id | secondary_ip |
+-------+-------------+----------+--------------+------------+--------------+
|    72 |          58 | Reserved | x.y.x.92     |        204 |            1 |
| 19092 |        4929 | Reserved | x.y.z.92     |        204 |            0 |
+-------+-------------+----------+--------------+------------+--------------+







 On 20-Aug-2015, at 11:29 pm, Frank Louwers fr...@openminds.be wrote: 
 
 Hi, 
 
 In a zone with Basic Networking, I’ve assigned a certain netblock x.y.z.0/24 
 to the Guest network in CloudStack. 
 
 I have a VM-A that has primary ip address x.y.z.92 and secondary address on 
 the same nic x.y.z.52. 
 
 For various reasons, *both* ips were configured manually, so not using the 
 VR’s dhcp. 
 
 A while back, the x.y.z.92 was (manually) deconfigured on VM-A (so VM-A only 
 used x.y.x.52, but both ips are still configured to belong to VM-A in 
 Cloudstack). 
 
 A few days ago, a new instance (VM-B) was spun up, and CS assigned ip 
 x.y.z.92 to that VM. Why would it do that? 
 Today, a new instance was spun up (VM-C), but CS assigned x.y.z.52 to VM-C… 
 
 How and why does this happen?! Is this because VM-A does not use DHCP? That 
 might explain the .92 re-assignment, but certainly not the .52 re-assignment, 
 as secondary IPs don't use DHCP anyhow… 
 
 Can anyone tell me what's going on, and what can be done to prevent this? 
 Running CS 4.4.2 at the moment, considering upgrading to 4.5.1 (or 4.5.2) 
 
 Regards, 
 Frank 
 



duplicate use of ips?

2015-08-20 Thread Frank Louwers
Hi,

In a zone with Basic Networking, I’ve assigned a certain netblock x.y.z.0/24 to 
the Guest network in CloudStack.

I have a VM-A that has primary ip address x.y.z.92 and secondary address on the 
same nic x.y.z.52.

For various reasons, *both* ips were configured manually, so not using the VR’s 
dhcp.

A while back, the x.y.z.92 was (manually) deconfigured on VM-A (so VM-A only 
used x.y.x.52, but both ips are still configured to belong to VM-A in 
Cloudstack).

A few days ago, a new instance (VM-B) was spun up, and CS assigned IP x.y.z.92 
to that VM. Why would it do that?
Today, a new instance was spun up (VM-C), but CS assigned x.y.z.52 to VM-C…

How and why does this happen?! Is this because VM-A does not use DHCP? That 
might explain the .92 re-assignment, but certainly not the .52 re-assignment, 
as secondary IPs don't use DHCP anyhow…

Can anyone tell me what's going on, and what can be done to prevent this? 
Running CS 4.4.2 at the moment, considering upgrading to 4.5.1 (or 4.5.2)

Regards,
Frank



CS Manager down: all hypervisors reboot

2015-08-19 Thread Frank Louwers
Hi all,

We had an interesting outage this morning. We took the Cloudstack Manager node 
down for hardware upgrades and kernel updates, and it seems all “non-dedicated” 
hosts rebooted.

We run KVM on CS 4.4.latest.

Is this “normal behaviour”, why does it do that, and how do I disable that?

The Manager is also a primary storage provider (NFS export), but all VMs use 
local storage (except 1).


Regards,

Frank

RE: CS Manager down: all hypervisors reboot

2015-08-19 Thread Frank Louwers
So migrating my primary storage to iSCSI would (as a side effect) disable the 
fencing/rebooting? 


On 19 Aug 2015 at 21:46:35, Somesh Naidu (somesh.na...@citrix.com) wrote:

 how would this work if primary storage were eg iSCSI? 
I believe we perform the heartbeat check and host fencing for NFS storage only. 

 Is there no way to disable that, except for modifying kvmheartbeat.sh? 
AFAIK, there isn't. 

RE: CS Manager down: all hypervisors reboot

2015-08-19 Thread Frank Louwers
On 19 Aug 2015 at 20:44:47, Somesh Naidu (somesh.na...@citrix.com) wrote:
Management server down would not result in host being rebooted. Primary storage 
down will. 

As you mentioned, you have hosted your primary storage (NFS) on the management 
server node. So yes, taking it down will cause all hosts connected to it to 
reboot. It doesn't matter how many VMs use that particular storage. 

I am not sure if there is a better way of doing this, but you could modify 
kvmheartbeat.sh to disable the reboot on losing the primary storage connection. 
HI Somesh,

Thanks for the explanation!

Am I right (after reading your mail and 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201508.mbox/%3ccalfpzo5cotx0qz+d_oxezjgytau+fa+mzxg_yqeuzswi_9g...@mail.gmail.com%3e
 by Marcus) that this will happen even if no VMs on the host are set to "HA"?

What would then be the procedure to perform maintenance on the (first?) primary 
NFS storage server and how would this work if primary storage were eg iSCSI?

That still would not explain why the "dedicated" hosts didn't reboot, but I 
assume I should take a look at kvmheartbeat.sh then. Is there no way to disable 
that, except for modifying kvmheartbeat.sh?



Regards,



Frank



Regards, 
Somesh 

-Original Message- 
From: Frank Louwers [mailto:fr...@openminds.be] 
Sent: Wednesday, August 19, 2015 12:19 PM 
To: users@cloudstack.apache.org 
Subject: CS Manager down: all hypervisors reboot 

Hi all, 

We had an interesting outage this morning. We took the Cloudstack Manager node 
down for hardware upgrades and kernel updates, and it seems all “non-dedicated” 
hosts rebooted. 

We run KVM on CS 4.4.latest. 

Is this “normal behaviour”, why does it do that, and how do I disable that? 

The Manager is also a primary storage provider (NFS export), but all VMs use 
local storage (except 1). 


Regards, 

Frank 


Difference between the main and the shapeblue cloudstack?

2015-05-20 Thread Frank Louwers
Hi all,

Is there a list of the patches (and their rationale) Shapeblue includes in 
“their” release of Cloudstack? 

Frank Louwers
Openminds bvba

Tel: +32 9 225 82 91





Re: root password for hosts: needed past setup?

2015-04-24 Thread Frank Louwers
Thanks for your reply Rohit.

We are on KVM indeed. So I can safely remove the password...

Frank
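
If you do lock the hosts down, a generic sketch (not CloudStack-specific) for disabling password logins while keeping key-based access follows. Note Rohit's caveat that the stored credentials can be used to restart a down agent, so test that recovery path still works for you first:

# Generic hardening sketch, not CloudStack-specific: allow key-based ssh
# only. Keep a root console open while testing, in case of lockout.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sshd -t                      # validate the config
systemctl reload sshd        # service name may be "ssh" on Debian/Ubuntu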


 On 24 Apr 2015, at 15:05, Rohit Yadav rohit.ya...@shapeblue.com wrote:
 
 
 It also depends on what kind of host you have (KVM/Xen/VMware etc.); for 
 example, once the agent on a KVM host is set up it may not need to use it, 
 unless the agent is down (and it needs to log into the host using the 
 credentials and try to start it again).
 
 Regards,
 Rohit Yadav
 Software Architect, ShapeBlue
 M. +91 88 262 30892 | rohit.ya...@shapeblue.com
 Blog: bhaisaab.org | Twitter: @_bhaisaab
 
 
 


root password for hosts: needed past setup?

2015-04-21 Thread Frank Louwers
Hi,

When adding a host, the webUI asks for a username and password to connect to 
the new host. Is this only used for the initial setup over ssh, or is this used 
later on as well?

We tend to disable passwords as much as possible, in favour of ssh keys, and I 
was wondering if I could safely remove the passwords once the Host has gone 
through the initial setup?


Regards,

Frank Louwers