Has anyone been successful in adding a data disk to a VM running Server 2012 R2
on CS 4.4.0 with VMware as the hypervisor?
Seems like I may have found a bug. CS adds the data disk as a SCSI disk (SCSI
0:0), however it's using the LSILogicParallel controller. So I think the
problem is because
Hello Dan,
You don't need NAT for IPs in the same subnet to reach each other, but you will
need it if you want SSVM to reach the internet (and download templates etc).
Also, by the sound of it you are running a Basic network with security groups,
so make sure your security groups will allow
Hi,
I can help you get devcloud up. What issues did you have?
On 14 Nov 2014 04:40, Imesh Gunaratne im...@apache.org wrote:
Hi All,
We recently introduced CloudStack support in Apache Stratos and are trying to
figure out the best way to demonstrate this.
We would really appreciate it if you could guide us
Erik,
I just noticed this in the Citrix CloudPlatform 4.3.0.2 release notes (
http://support.citrix.com/servlet/KbServlet/download/38098-102-713723/CitrixCloudPlatform4.3.0.2ReleaseNotes.pdf),
but I can't find this issue in JIRA:
CS-24473
Problem: Restarting VR on an isolated network
with egress
Hi,
We configured CloudStack 4.4 with a management server and a compute
node (management server with NFS shares for primary and secondary). We
can manually mount the NFS shares (both primary and secondary), however,
the secondary storage is not being detected by the CloudStack setup. We
are
Seems like your SSVM can't mount the NFS - check this guide:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSVM,+templates,+Secondary+storage+troubleshooting
for troubleshooting access. Check the routing table inside SSVM...
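In case it helps, this is the usual way to get into the SSVM from its host for those checks (a sketch; the link-local IP below is a placeholder - look the real one up in the UI under Infrastructure -> System VMs):

```shell
# CloudStack system VMs listen for SSH on port 3922 and accept the
# management key from the host. The IP here is hypothetical.
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.1.42

# Inside the SSVM, the bundled health check tests DNS resolution,
# the NFS mounts and connectivity back to the management server:
/usr/local/cloud/systemvm/ssvm-check.sh
```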
On 14 November 2014 10:13, Alerts ale...@supportpro.com wrote:
Hi George,
Can you ping 192.168.254.6, and can you run rpcinfo -p 192.168.254.6
against it?
eric
On 11/14/2014 at 6:24 AM, Alerts wrote: Hi,
We configured CloudStack 4.4 with a management server and a compute
node (management server with NFS shares for primary and secondary). We
can manually mount
While waiting for help I was watching interface stats on my router, and even
when the deploy template failed and the download virtual disk failed I still
saw traffic going to secondary storage. The jobs are failing but continuing
to run. Has anyone else seen this problem?
Thank you all for the hints. I'm confused about the
/etc/network/interfaces configuration on my Ubuntu KVM hypervisor. If I
follow the official guide, the machine just loses its network connection and
can no longer be logged in to, whereas if I add an entry for eth0 itself (
auto eth0
iface eth0 inet
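For reference, a minimal sketch of a bridged /etc/network/interfaces for a KVM host - the bridge name cloudbr0 and all addresses below are assumptions, so adjust them to your own subnet:

```
# /etc/network/interfaces - hypothetical addresses, bridge named cloudbr0
auto lo
iface lo inet loopback

# The physical NIC carries no address itself; it is enslaved to the bridge.
auto eth0
iface eth0 inet manual

auto cloudbr0
iface cloudbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```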
Hi guys,
I'm wondering why there is a check
inside /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh?
I understand that the KVM host checks availability of Primary Storage, and
reboots itself if it can't write to storage.
But if we have, say, 3 NFS stores in a cluster, then lot
Thanks for the tip, Sanjeev. The snapshot is around 2.5 GB.
I found the "s3.singleupload.max.size" parameter and changed it to 0 so that
multi-part upload is always used, and restarted the CS management server. So
far, I am still getting the same error I pasted before:
Exception: Attempt to put
Hello,
Yes, we are able to ping 192.168.254.6. Please find the details below.
==
PING 192.168.254.6 (192.168.254.6) 56(84) bytes of data.
64 bytes from 192.168.254.6: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 192.168.254.6: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from
Did you try to troubleshoot the SSVM as indicated in the link I sent you?
On 14 November 2014 17:08, Alerts ale...@supportpro.com wrote:
Hello,
Yes, we are able to ping 192.168.254.6. Please find the details below.
==
PING 192.168.254.6 (192.168.254.6) 56(84) bytes of data.
64
It is there (I believe) because CloudStack is acting as a cluster manager
for KVM. It is using NFS to determine if it is 'alive' on the network, and
if it is not, it reboots itself to avoid having a split brain scenario
where VMs start coming up on other hosts when they are already running on
this
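To make that mechanism concrete, here is a minimal sketch of the idea - not the actual kvmheartbeat.sh; the directory is invented and it only echoes instead of rebooting. The host periodically writes a timestamp to a file on the primary-storage mount, and fences itself if the write fails:

```shell
# Sketch of the NFS heartbeat idea; /tmp stands in for the primary
# storage mount, and we echo rather than reboot the host.
HB_DIR=/tmp/hb-demo/KVMHA
mkdir -p "$HB_DIR"
if date +%s > "$HB_DIR/hb-$(hostname)" 2>/dev/null; then
    echo "heartbeat ok"
else
    echo "heartbeat write failed - would reboot to avoid split brain"
fi
```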
Hi Marcus, thanks for explaining.
Maybe a side question: "like storage/host tags to guarantee each host only
uses one NFS" - what do you mean by this? That is, how would you implement
this? I know of tags, but I only know how to make sure certain Compute/Disk
offerings use certain Compute/Storage
Small update: I was able to get past this error by editing
/etc/xapi.d/plugins/s3xen on the hypervisor, and adding this line to the s3
function:
filename = '%s.vhd' % filename.replace('/dev/VG_XenStorage-',
'/var/run/sr-mount/').replace('VHD-', '')
It just changes the filename to what it
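For anyone hitting the same thing, the rewrite that the patched line performs can be illustrated like this (the LVM device path below is made up):

```shell
# Hypothetical LVHD device path as XenServer exposes it:
f='/dev/VG_XenStorage-1234/VHD-abcd'
# Rewrite it to the VHD file path under the SR mount point:
echo "$f" | sed -e 's|/dev/VG_XenStorage-|/var/run/sr-mount/|' \
                -e 's|VHD-||' -e 's|$|.vhd|'
# prints /var/run/sr-mount/1234/abcd.vhd
```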
My temporary work-around is to setup a cron job to delete that bad
route when the server restarts, but I'm trying to find a primary fix.
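For what it's worth, the @reboot cron workaround looks something like this (the route being deleted is purely illustrative - substitute the bad one you are seeing):

```
# crontab entry; the route shown is a made-up example
@reboot /sbin/ip route del 169.254.0.0/16 dev eth0 2>/dev/null
```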
Management server public IP: xx.47.90.4
Management server private IP: 10.1.40.3
Currently, the management.network.cidr is set to xx.47.90.0/24.
Should that be
Hi Ian,
Thanks for your quick response. This is the issue I encountered when I
executed devcloud4 (binary-installation-advanced):
==> management: [2014-11-15T02:09:23+00:00] INFO: Running queued delayed
notifications before re-raising exception
==> management: [2014-11-15T02:09:23+00:00] ERROR: