The only problem with this is that if you are doing NFS, the VHDs are named
so cryptically that you can't tell which VM is which.
Regards,
Marty Godsey
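One way around the cryptic names is to map them back through the cloud
database. A hedged sketch, assuming CloudStack's usual schema (the `volumes`
and `vm_instance` tables and their column names may differ slightly in your
version, so verify before relying on it):

```sql
-- Hypothetical sketch: map the cryptic UUID volume paths on the NFS store
-- back to VM names. Table/column names assumed from the CloudStack schema.
SELECT vm.name AS vm_name, vm.instance_name, vol.name AS volume_name, vol.path
FROM cloud.volumes vol
JOIN cloud.vm_instance vm ON vol.instance_id = vm.id
WHERE vol.removed IS NULL;
```

The `path` column is the file name you see on the NFS export, so this gives
you the VM-to-VHD mapping the storage layer hides.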
-----Original Message-----
From: ilya [mailto:ilya.mailing.li...@gmail.com]
Sent: Monday, November 7, 2016 12:25 PM
To:
I believe you don’t need to do anything on the hypervisor side anymore. On VMware
it works as described below and should work the same for other hypervisors.
You can configure it per VM or per template by inserting the following
configuration into user_vm_details or vm_template_details.
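A sketch of what that insert might look like. The detail key
`cpu.corespersocket` and the column layout of `user_vm_details` are
assumptions from my reading of the CloudStack schema; check them against
your version before running anything:

```sql
-- Hedged sketch: give one VM 2 cores per socket by adding a detail row.
-- The vm_id (1234) is hypothetical; look up the real id in vm_instance.
INSERT INTO cloud.user_vm_details (vm_id, name, value, display)
VALUES (1234, 'cpu.corespersocket', '2', 1);
```

The same idea applies at the template level via `vm_template_details`, so
every VM deployed from that template inherits the topology.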
I have a CS environment set up and I'd like to know how to call CloudStack to
create a VM with a distinction between sockets and cores.
Right now it seems to create all VMs with X sockets and 1 core
per socket.
Hypervisor : XenServer
CS Version : 4.5.0
Any ideas?
Jermy
Consider using SAN/NAS level snapshots.
On 11/3/16 9:12 AM, a...@globalchangemusic.org wrote:
>
>
> How about KVM?
>
> On 2016-11-02 16:47, Sergey Levitskiy wrote:
>
>> Veeam works OK for VMware based implementations. You can tag VMs and based
>> on vsphere tag Veeam will automatically
Hello, I am going to set up basic CloudStack 4.9.0.1 on a CentOS 7 server. The
problem is that the host shows up but is in the Alert state. There are no errors
in either the management or agent logs. SSH is also running, but I don't know
whether it is running properly or not. Any suggested fixes? Thank you
Marty
If you mean disk names being cryptic - you are correct.
Speaking of NetApp NFS-level snapshot backups: the filer creates a .snapshot
directory with a structure of hourly, daily, weekly, monthly, etc. snapshots.
I have had many occasions where a user mistakenly deleted VMs and I had
to reverse the deletion.
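A minimal sketch of that kind of restore. The paths and snapshot names are
illustrative (on a real filer you would see e.g. /mnt/primary/.snapshot/daily.0/);
here the layout is simulated in a temp directory so the steps are reproducible:

```shell
# Simulate a primary-storage mount that still has the deleted VHD
# inside a read-only NetApp-style .snapshot directory.
store=$(mktemp -d)
mkdir -p "$store/.snapshot/daily.0"

# Pretend the daily snapshot still holds the VHD that was deleted live.
echo "vhd-data" > "$store/.snapshot/daily.0/4c9a1f2e.vhd"

# Restore: copy the file back out of the snapshot into the live store.
cp "$store/.snapshot/daily.0/4c9a1f2e.vhd" "$store/4c9a1f2e.vhd"

cat "$store/4c9a1f2e.vhd"
```

After the copy, rescanning the storage pool (or re-attaching the volume in
CloudStack) makes the VM's disk usable again; the snapshot itself is never
modified.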
Thanks Sergey, appreciate your reply. Points noted.
Cheers.
On Mon, Nov 7, 2016 at 10:44 AM, Sergey Levitskiy <
sergey.levits...@autodesk.com> wrote:
> What essentially happens here is that for each domain you have access to
> with your key, ACS executes 12 queries to get the domain limits. Even if
Hi,
After our upgrade from CloudStack 4.2 to 4.8.1.1, we noticed that some (but
not all) of our VMs are unable to get DHCP from the VR. This causes
problems when a VM is restarted and cannot come up because it is
unable to get an IP.
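One quick check on the VR is whether the affected VM's MAC has an entry in
dnsmasq's host file. A hedged sketch, simulated with a sample file; the
MAC/IP/hostname values are hypothetical, and on the real VR you would grep
the actual /etc/dhcphosts.txt:

```shell
# The VR's dnsmasq hands out leases from /etc/dhcphosts.txt, roughly one
# line per VM in the form MAC,IP,hostname,lease-time. A VM missing from
# this file will not get DHCP. Simulated here for reproducibility.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
02:00:4c:aa:00:01,10.1.1.27,webvm01,infinite
02:00:4c:aa:00:02,10.1.1.28,dbvm01,infinite
EOF

# Check whether a given (hypothetical) MAC has a DHCP entry.
mac="02:00:4c:aa:00:02"
grep -i "^$mac," "$hosts"
```

If the MAC is absent, restarting the VR (or the network) usually regenerates
the file from the database; if it is present, the problem is more likely on
the guest or bridge side.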
I logged in to the VR and found the messages below showing that