ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM
Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type conversion.
The volume has format QCOW2 in the database.
On 20.04.2014 10:57, Salvatore Sciacco wrote:
2014-04-20 12:31 GMT+02:00 Nux! n...@li.nux.ro:
It looks like a bug, qemu-img convert should be used instead of cp -f,
among others.
I suppose that some code was added to do a simple copy when format is the
same, this wasn't the case with 4.1.1 version.
Do you mind opening an issue in
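To make the suspicion concrete, here is a minimal sketch of the selection logic the agent arguably should apply when the destination pool is CLVM. The `copy_cmd` helper and the pool-type strings are illustrative only, not CloudStack's actual code:

```shell
# Sketch only: CLVM pools hold raw logical volumes, so a QCOW2 source
# should go through qemu-img convert rather than a plain cp -f.
# copy_cmd is a made-up helper, not a CloudStack function.
copy_cmd() {  # $1 = source format, $2 = destination pool type
  if [ "$2" = "CLVM" ]; then
    echo "qemu-img convert -f $1 -O raw"
  else
    echo "cp -f"
  fi
}
copy_cmd qcow2 CLVM   # → qemu-img convert -f qcow2 -O raw
copy_cmd qcow2 NFS    # → cp -f
```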
On 20.04.2014 13:24, Salvatore Sciacco wrote:
I am attempting to troubleshoot this myself, but I can't quite work out
what I should do. I suspect this is exactly why I am getting a 404 error
for the CS UI:
https://www.dropbox.com/s/2xsrwj931hi4948/Screenshot%202014-04-20%2010.33.46.png
2014-04-20 10:31:25,965 WARN
Hi Michael,
I usually build on CentOS, but I've had a run at the Ubuntu build; it all
looks OK.
Have you got somewhere I can upload these debs to?
Regards
Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: CloudyAngus
paul.an...@shapeblue.com
-Original Message-
Thanks for the video. I have one more question:
so the CS management server needs to have access to the storage network? I have two
networks, one for storage and one for regular traffic. My hypervisor hosts
can connect to storage, but my management server doesn't have connectivity
to storage. I am
Ram,
The management server(s) need to have access to secondary storage, not primary.
If you have placed your pri and sec storage devices on a common network
(perfectly acceptable config) then you just need to ensure the management
servers have access to the sec storage devices. Best practice
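The access rule described above can be summarised in a small sketch. The `needs_access` helper is hypothetical, purely to encode the rule that management servers need secondary storage while hypervisor hosts need both tiers:

```shell
# Hypothetical helper encoding the access rule above; not CloudStack code.
needs_access() {  # $1 = role, $2 = storage tier
  case "$1:$2" in
    management:secondary) echo yes ;;
    hypervisor:primary|hypervisor:secondary) echo yes ;;
    *) echo no ;;
  esac
}
needs_access management secondary   # → yes
needs_access management primary     # → no
```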
Hello all,
There seems to be a bug after upgrading from 4.1.1 to 4.3.0.
KVM hypervisor.
Agent traffic label settings:
guest.network.device=cloudbr1
private.network.device=cloudbr1
public.network.device=cloudbr0
After the upgrade to 4.3, the SSVM and all VRs started with multiple [public]
interfaces.
I have some VRs
Geoff,
Thank you, that is what I wanted to have. I am planning to use NFS for
secondary and CLVM for primary storage, as CloudStack doesn't support direct SAN;
I'm not sure if there is any other solution to present my SAN LUNs to all the VM
hosts.
Ram
On Sun, Apr 20, 2014 at 3:07 PM, Geoff Higginbottom
Hello,
Question: I am following the directions found here:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/hypervisor_installation.html?highlight=network
I am wondering something about the interfaces. I do as it says; however, I am
unable to connect to my
Hi
After upgrading and restarting the system VMs,
all VRs started with some bad network configuration; egress rules stopped
working,
as did some static NAT rules.
Here is `ip addr show` from one of the VRs:
root@r-256-VM:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
No idea, but have you verified that the vm is running the new system
vm template? What happens if you destroy the router and let it
recreate?
On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko kernc...@gmail.com wrote:
No, it has nothing to do with ssh or libvirt daemon. It's the literal
unix socket that is created for virtio-serial communication when the
qemu process starts. The question is why the system is refusing access
to the socket. I assume this is being attempted as root.
On Sat, Apr 19, 2014 at 9:58
You may want to look in the qemu log of the vm to see if there's
something deeper going on, perhaps the qemu process is not fully
starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
something like that.
On Sun, Apr 20, 2014 at 11:22 PM, Marcus shadow...@gmail.com wrote:
Type `brctl show`
and check whether the public interface of your router is plugged into cloudbr0 or
cloudbr1. If it's plugged into cloudbr0, you need to detach it from cloudbr0,
attach that interface to cloudbr1, and re-apply all the iptables rules.
Take a backup of the iptables rules with
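Spelled out as commands, the steps above might look like the following dry run. The interface and bridge names are the ones from this thread, but verify with `brctl show` on your own router first; the echoes print the commands instead of executing them:

```shell
# Dry run of the bridge-move steps described above; echoes the commands
# instead of executing them. eth2/cloudbr0/cloudbr1 and the backup path
# are example names -- check 'brctl show' on your router first.
move_iface() {  # $1 = interface, $2 = current bridge, $3 = target bridge
  echo "iptables-save > /root/iptables.backup"
  echo "brctl delif $2 $1"
  echo "brctl addif $3 $1"
}
move_iface eth2 cloudbr0 cloudbr1
```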
Hi,
I have the same issue after upgrading from 4.1.1 to 4.3.0.
Take a look: in the CS 4.2 VR you have NICs eth0, eth1, eth2.
In the CS 4.3 VR you have 4 NICs, where eth2 and eth3 are the same.
How did CS 4.3 pass QA?
On Sat, Apr 12, 2014 at 12:16 AM, motty cruz motty.c...@gmail.com wrote:
I have a testing
Hi,
What does "In 4.3 traffic labels are not considering" mean?
Is it temporary, or are traffic labels deprecated now?
Does it mean that anyone with a KVM traffic-labels environment can't upgrade to
4.3.0?
On Thu, Apr 10, 2014 at 5:05 PM, Suresh Sadhu suresh.sa...@citrix.comwrote:
Did you use
Hi,
Yes, sure:
root@r-256-VM:~# cat /etc/cloudstack-release
Cloudstack Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 2014
I also tried to destroy the VR and re-create it; the VR came up with the same problem.
The cloudstack-sysvmadm script hasn't received a success answer from the VRs.
I have finished rolling
Sorry, actually I see the 'connection refused' is just your own test
after the fact. By that time the VM may be shut down, so connection
refused would make sense.
What happens if you do this:
'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' while it is running
stop the cloudstack agent
'virsh destroy
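For reference, the suggested procedure (up to where the message is cut off) as a dry run. The VM name v-1-VM comes from the thread; the agent stop command is an assumption, since the service name varies by distribution:

```shell
# Dry run: echo the debugging steps instead of executing them.
# The cloudstack-agent service name is an assumption (distro-dependent).
vr_debug_steps() {
  echo "virsh dumpxml v-1-VM > /tmp/v-1-VM.xml"
  echo "service cloudstack-agent stop"
  echo "virsh destroy v-1-VM"
}
vr_debug_steps
```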
It's temporary, and it's a regression bug caused by another last-minute commit;
due to this, traffic labels are not considered.
Regards
Sadhu
-Original Message-
From: Serg Senko [mailto:kernc...@gmail.com]
Sent: 21 April 2014 11:12
To: users@cloudstack.apache.org
Subject: Re: Cloudstack
Hi,
I am using CS 4.3 with ESXi vCenter 5.5; while creating system VMs it is
giving the error below,
with reference to this bug fix
https://issues.apache.org/jira/browse/CLOUDSTACK-4875. I thought it should
be in CS 4.3.
2014-04-21 11:09:25,158 DEBUG [c.c.a.m.DirectAgentAttache]
When using CloudStack, my main concern is storage. For secondary storage using NFS,
must a dedicated server be provided for the service? Is it acceptable to use one
machine both as the management node and as the NFS server for secondary storage?
For primary storage I would use CLVM; does CLVM count as local or shared storage?
Does "local" in CloudStack refer to local disks?
***
北京英孚泰克信息技术有限公司研发中心
北京市海淀区小南庄路怡秀园甲1号亿德大厦7层 (100089)
Fax:86-10-82561105