Compiling 4.11.2.0 from source with test failure

2019-02-12 Thread Yiping Zhang


Hi, all:



I am trying to compile CloudStack from source using the 4.11.2.0 branch. The 
build fails with one test failure in NioTest.java. How can I fix this error?





2019-02-12 11:07:47,541 INFO  [utils.testcase.NioTest] (main:) Clients stopped.

2019-02-12 11:07:47,541 INFO  [utils.testcase.NioTest] (main:) Server stopped.

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 60.106 sec <<< 
FAILURE! - in com.cloud.utils.testcase.NioTest

testConnection(com.cloud.utils.testcase.NioTest)  Time elapsed: 60.103 sec  <<< 
ERROR!

org.junit.runners.model.TestTimedOutException: test timed out after 6 
milliseconds

at java.lang.Thread.sleep(Native Method)

at com.cloud.utils.testcase.NioTest.testConnection(NioTest.java:145)





(skip lots of output here)





Results :



Tests in error:

  NioTest.testConnection:145 ? TestTimedOut test timed out after 6 
milliseco...



Tests run: 300, Failures: 0, Errors: 1, Skipped: 1


Thanks,

Yiping


Re: [DISCUSS] Release effort for 4.11.2.0

2018-09-28 Thread Yiping Zhang
Hi, Rafael:

I am glad to get the final word here, and much relieved that 4.11.1.0 was not 
affected after all!

Thanks,

Yiping

On 9/28/18, 11:54 AM, "Rafael Weingärtner"  wrote:

Hello Yiping Zhang,

It was a misunderstanding. CLOUDSTACK-10240 was never merged into 4.11. It
was only merged into master (4.12). Indeed we had a problem with managed
storage, but Mike and I already fixed it. However, it did not affect any
released version of ACS.

On Fri, Sep 28, 2018 at 12:56 PM Rohit Yadav 
wrote:

> Hi Yiping,
>
>
> Based on what I can understand, the fix most likely went into master but
> not the 4.11 branch. I've pinged Rafael on the PR:
> https://github.com/apache/cloudstack/pull/2761
>
>
> - Rohit
>
>
>
    >
> 
> From: Yiping Zhang 
> Sent: Friday, September 28, 2018 9:56:33 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Release effort for 4.11.2.0
>
> Hi, Rohit:
>
> Back in July, there was a thread discussion here about CLOUDSTACK-10240.
>
> The gist of it is that the PR for CLOUDSTACK-10240 was merged into 4.11.x
> branch and it introduced a regression.
>
> My question here is whether the fix for said regression is included in
> 4.11.2.0 RC1 and onwards?  We are waiting for this fix to start our ACS
> upgrade to 4.11.2.0, so this is quite important for us.
>
> Thanks,
>
> Yiping
>
> On 8/28/18, 3:33 AM, "Rohit Yadav"  wrote:
>
> All,
>
>
> We're about 4 weeks into the schedule, and we have the following items remaining
> towards the 4.11.2.0 milestone:
>
> https://github.com/apache/cloudstack/milestone/6
>
>
> In the next 1-2 weeks, we'll aim to test and stabilize the 4.11 branch,
> which will then lead to the cutting of RC1.
>
>
> Please share if there are any blockers/critical/major issues you've
> found in 4.11.0.0 or 4.11.1.0 releases that we should aim to fix in
> 4.11.2.0. Thanks.
>
>
> - Rohit
>
>
>
>
> 
>
>
>
>
> From: Rohit Yadav
> Sent: Thursday, August 2, 2018 2:27:25 PM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: [DISCUSS] Release effort for 4.11.2.0
>
>
> All,
>
>
> The recent CloudStack 4.11.1.0 release received a good reception, but
> this thread is to gather feedback, especially a list of bugs and issues,
> from the community that we should aim to fix towards the next minor LTS
> 4.11.2.0 release.
>
>
> Here is a rough timeline proposal for the same:
>
>
> 0-4 week: Get feedback from the community, gather and triage list of
> issues, start fixing/testing/reviewing them
>
> 4-6 week: Stabilize 4.11 branch towards 4.11.2.0, cut RC and start
> voting
>
> 6-8 week: Iterate over RCs/voting and release!
>
>
> To limit the scope for RM, blocker/critical issues will take priority.
> Paul will continue as RM for the 4.11.2.0 release, with assistance from
> Boris, Daan, and myself.
>
>
> For reference, this is the 4.11.2.0 milestone PR/issues list:
>
> https://github.com/apache/cloudstack/milestone/6
>
>
> Thoughts, issues you want to discuss, feedback? Thanks.
>
>
> - Rohit
>
>
>
>
>
>
>

-- 
Rafael Weingärtner




Re: [DISCUSS] Release effort for 4.11.2.0

2018-09-28 Thread Yiping Zhang
Hi, Rohit:

Back in July, there was a thread discussion here about CLOUDSTACK-10240.

The gist of it is that the PR for CLOUDSTACK-10240 was merged into 4.11.x 
branch and it introduced a regression.  

My question here is whether the fix for said regression is included in 4.11.2.0 
RC1 and onwards?  We are waiting for this fix to start our ACS upgrade to 
4.11.2.0, so this is quite important for us.

Thanks,

Yiping

On 8/28/18, 3:33 AM, "Rohit Yadav"  wrote:

All,


We're about 4 weeks into the schedule, and we have the following items remaining 
towards the 4.11.2.0 milestone:

https://github.com/apache/cloudstack/milestone/6


In the next 1-2 weeks, we'll aim to test and stabilize the 4.11 branch, which 
will then lead to the cutting of RC1.


Please share if there are any blockers/critical/major issues you've found 
in 4.11.0.0 or 4.11.1.0 releases that we should aim to fix in 4.11.2.0. Thanks.


- Rohit








From: Rohit Yadav
Sent: Thursday, August 2, 2018 2:27:25 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [DISCUSS] Release effort for 4.11.2.0


All,


The recent CloudStack 4.11.1.0 release received a good reception, but this 
thread is to gather feedback, especially a list of bugs and issues, from the 
community that we should aim to fix towards the next minor LTS 4.11.2.0 release.


Here is a rough timeline proposal for the same:


0-4 week: Get feedback from the community, gather and triage list of 
issues, start fixing/testing/reviewing them

4-6 week: Stabilize 4.11 branch towards 4.11.2.0, cut RC and start voting

6-8 week: Iterate over RCs/voting and release!


To limit the scope for RM, blocker/critical issues will take priority. Paul 
will continue as RM for the 4.11.2.0 release, with assistance from Boris, Daan, 
and myself.


For reference, this is the 4.11.2.0 milestone PR/issues list:

https://github.com/apache/cloudstack/milestone/6


Thoughts, issues you want to discuss, feedback? Thanks.


- Rohit




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Yiping Zhang
Hi, Mike, Rafael:

Thanks for clarifying what "managed storage" is and for working on fixing 
the broken bits.

Yiping

On 7/16/18, 8:28 PM, "Tutkowski, Mike"  wrote:

Another comment here: The part that is broken is if you try to let 
CloudStack pick the primary storage on the destination side. That code no 
longer exists in 4.11.1.

On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:

To follow up on this a bit: Yes, you should be able to migrate a VM and 
its storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  
wrote:

For a bit of info on what managed storage is, please take a look at 
this document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage 
(for XenServer, VMware, and KVM). However, there is no current zone-wide 
non-managed storage for XenServer.
    
    On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, 
either zone-wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary 
storage yet. Still, I can live migrate a VM with data disks between clusters 
with storage migration from the web GUI today.  So, your statement below does not 
reflect the current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?

[Mike] Correct














Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang
I assume by "managed storage", you guys mean primary storages, either 
zone-wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage yet. 
Still, I can live migrate a VM with data disks between clusters with storage 
migration from the web GUI today.  So, your statement below does not reflect 
the current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?

[Mike] Correct






Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang
Why is it listed as fixed in 4.11.1.0 in the release notes, if the code only 
exists in 4.11.2?



On 7/16/18, 12:43 PM, "Tutkowski, Mike"  wrote:

OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were 
run against 4.11.1. I thought we only allowed bug fixes when going to a new RC, 
but it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this issue. 
Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to the 
release notes here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storage and I am 
about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VMs between 
clusters a lot, this is going to be a blocker for us.  Could someone please confirm?

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  
wrote:

Hi,

While running managed-storage regression tests tonight, I 
noticed a problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration 
of a virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (supported at least on XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. 
It also seems that the managed-storage logic has been dropped for some reason 
in the new implementation.

Rafael – It seems that you worked on this feature. Would you be 
able to look into this and create a PR?

Thanks,
Mike










Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang

Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to the release notes 
here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storage and I am about to 
migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VMs between clusters a lot, 
this is going to be a blocker for us.  Could someone please confirm?

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (supported at least on XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike




FW: vCPU priority setting for Xen VM

2018-06-15 Thread Yiping Zhang
Cross-posting to the dev list, since I have not received any comments on the users list 
for about a week.

On 6/8/18, 2:04 PM, "Yiping Zhang"  wrote:

Hi, all:

I am trying to find out more info about VMs’ vCPU priority settings on 
XenServer.

I noticed that my VM instances have various vCPU weights associated with 
them, even for instances using the same service offering. I am wondering how 
CloudStack sets vCPU priority for VM instances.

Thanks,

Yiping





Re: Multiple Physical Networks in Basic Networking (KVM)

2018-06-11 Thread Yiping Zhang
It's been a long time; honestly, I have to take a long trip down memory lane to 
remember the circumstances where we had problems.

On 6/11/18, 2:01 AM, "Dag Sonstebo"  wrote:

Hi Yiping,

“In the course of the last three years, we found many features are NOT 
implemented for this deployment mode, or APIs not working properly.  So be 
warned!”

>> Since you have some time served on this setup it would be great if you 
can share those issues, and ideally log Github issues for them 
(https://github.com/apache/cloudstack/issues). 

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 10/06/2018, 23:00, "Yiping Zhang"  wrote:

We have been using "advanced networking with security groups" on 
XenServer clusters (using the Linux bridge network backend, instead of Open 
vSwitch) for over three years now in production.  AFAICT, this is not an 
officially supported/endorsed deployment scenario. We are a private 
enterprise deployment. We use our external routers as GW, and VLAN separation is 
done at the corporate network layer using real firewalls. 

In the course of the last three years, we found many features are NOT 
implemented for this deployment mode, or APIs not working properly.  So be 
warned!

Any improvements to this deployment scenario, or bringing it to fully 
supported status, will be warmly welcomed by this user.


On 6/9/18, 1:31 AM, "Wido den Hollander"  wrote:



On 06/08/2018 03:54 PM, Dag Sonstebo wrote:
> Ivan – not sure how you deal with per-network VM bandwidth (or 
what your use case is) so probably worth testing in the lab.
> 

Isn't that done by libvirt in the XML? In Basic Zone at least that
works. It is part of the service offering.

> Wido – agree, I don’t see why our current “basic zone” can’t be 
deprecated in the long run for “advanced zone with security groups” since they 
serve the same purpose and the latter gives more flexibility. There may be use 
cases where they don’t behave the same – but personally I’ve not come across 
any issues.
> 

I wouldn't know those cases. I'll test and see how it works out. 
Give me
some time and I'll get back to this topic.

Might even be possible to convert a Basic Zone to an Advanced Zone by
doing some database mutations.

Wido

> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> On 08/06/2018, 14:44, "Wido den Hollander"  wrote:
> 
> 
> 
> On 06/08/2018 03:32 PM, Dag Sonstebo wrote:
> > Hi Ivan,
> > 
> > Not quite – “advanced zone with security group” allows you 
to have multiple “basic” type networks isolated within their own VLANs and with 
security groups isolation between VMs / accounts. The VR only does DNS/DHCP, 
not GW/NAT.
> > 
> 
> Hmm, yes, that was actually what we/I is/are looking for. The 
main
> reason for Basic Networking is the shared services we offer 
on a public
> cloud.
> 
> A VR dies as soon as there is any flood, so that's why we 
have our
> physical routers do the work.
> 
> I thought that what you mentioned is "DirectAttached" 
networking.
> 
> But that brings me to the question why we still have Basic 
Networking
> :-) In earlier conversations I had with people I think that 
on the
> longer run Basic Networking can be dropped/merged in favor of 
Advanced
> Networking with Security Groups then, right?
> 
> Accounts/VMs are deployed Inside the same VLAN and isolation 
is done by
> Security Groups.
> 
> Sounds right, let me dig into that!
> 
> Wido
> 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 08/06/2018, 14:26, "Ivan Kudryavtsev" 
 wrote:
> > 
> > Hi, Dag. Not exactly. Advanced zone uses the VR as a GW with SNAT/DNAT, which is
> > not quite good for a public cloud in my case

Re: Multiple Physical Networks in Basic Networking (KVM)

2018-06-10 Thread Yiping Zhang
We have been using "advanced networking with security groups" on XenServer 
clusters (using the Linux bridge network backend, instead of Open vSwitch) for over 
three years now in production.  AFAICT, this is not an officially 
supported/endorsed deployment scenario. We are a private enterprise 
deployment. We use our external routers as GW, and VLAN separation is done at 
the corporate network layer using real firewalls. 

In the course of the last three years, we found many features are NOT implemented 
for this deployment mode, or APIs not working properly.  So be warned!

Any improvements to this deployment scenario, or bringing it to fully supported 
status, will be warmly welcomed by this user.


On 6/9/18, 1:31 AM, "Wido den Hollander"  wrote:



On 06/08/2018 03:54 PM, Dag Sonstebo wrote:
> Ivan – not sure how you deal with per-network VM bandwidth (or what your 
use case is) so probably worth testing in the lab.
> 

Isn't that done by libvirt in the XML? In Basic Zone at least that
works. It is part of the service offering.

> Wido – agree, I don’t see why our current “basic zone” can’t be 
deprecated in the long run for “advanced zone with security groups” since they 
serve the same purpose and the latter gives more flexibility. There may be use 
cases where they don’t behave the same – but personally I’ve not come across 
any issues.
> 

I wouldn't know those cases. I'll test and see how it works out. Give me
some time and I'll get back to this topic.

Might even be possible to convert a Basic Zone to an Advanced Zone by
doing some database mutations.

Wido

> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> On 08/06/2018, 14:44, "Wido den Hollander"  wrote:
> 
> 
> 
> On 06/08/2018 03:32 PM, Dag Sonstebo wrote:
> > Hi Ivan,
> > 
> > Not quite – “advanced zone with security group” allows you to have 
multiple “basic” type networks isolated within their own VLANs and with 
security groups isolation between VMs / accounts. The VR only does DNS/DHCP, 
not GW/NAT.
> > 
> 
> Hmm, yes, that was actually what we/I is/are looking for. The main
> reason for Basic Networking is the shared services we offer on a 
public
> cloud.
> 
> A VR dies as soon as there is any flood, so that's why we have our
> physical routers do the work.
> 
> I thought that what you mentioned is "DirectAttached" networking.
> 
> But that brings me to the question why we still have Basic Networking
> :-) In earlier conversations I had with people I think that on the
> longer run Basic Networking can be dropped/merged in favor of Advanced
> Networking with Security Groups then, right?
> 
> Accounts/VMs are deployed Inside the same VLAN and isolation is done 
by
> Security Groups.
> 
> Sounds right, let me dig into that!
> 
> Wido
> 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 08/06/2018, 14:26, "Ivan Kudryavtsev"  
wrote:
> > 
> > Hi, Dag. Not exactly. Advanced zone uses the VR as a GW with SNAT/DNAT, which is
> > not quite good for a public cloud in my case. Despite that, it really solves
> > the problem. But I would like to have it as simple as possible, without the VR
> > as a GW and xNAT.
> > 
> > Fri, 8 Jun 2018, 15:21 Dag Sonstebo 
:
> > 
> > > Wido / Ivan – I’m probably missing something – but is the 
feature you are
> > > looking for not the same functionality we currently have in 
“advanced zones
> > > with security groups”?
> > >
> > > Regards,
> > > Dag Sonstebo
> > > Cloud Architect
> > > ShapeBlue
> > >
> > > On 08/06/2018, 14:14, "Ivan Kudryavtsev" 
 wrote:
> > >
> > > Hi Wido, I am also very interested in a similar deployment, especially
> > > combined with the capability of setting different network bandwidth for
> > > different networks, like 10.0.0.0/8 intra-DC with 1G bandwidth per VM and
> > > white IPv4/IPv6 with regular bandwidth management. But it seems it takes a
> > > very big redesign of VM settings, and a VR redesign is also required.
> > >
> > > When I tried to investigate if it is possible with ACS basic network, I
> > > didn't succeed in finding any relevant information.
> > >
> > >
> > > Fri, 8 Jun 2018, 14:56 Wido den Hollander 
:
> 

[Feature Request] support for cluster wide VM operations

2018-06-01 Thread Yiping Zhang
Hi, all:

When making CloudStack API calls on VM instances, many APIs accept parameters 
to filter VM instances based on the domain, pod, or host to which the VM instances 
belong.  However, we can’t filter VM instances based on the cluster!  I am 
wondering if this is a deliberate design decision or just an unfortunate 
omission.

For example, when calling the listVirtualMachines API, I can list all instances 
belonging to a domain or running on a host, but I would also like to be able to 
list all instances running on a particular cluster; for the startVirtualMachine 
API, I’d like to be able to start an instance on any host belonging to a 
particular cluster (for affinity group concerns).
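
In the meantime, a cluster filter can be emulated client-side by walking the hosts of a cluster. A minimal sketch, assuming the third-party "cs" Python client (any signed API client would do; the endpoint and keys below are placeholders):

# Emulate a cluster-scoped listVirtualMachines: listHosts accepts a
# clusterid parameter, and listVirtualMachines accepts a hostid parameter.
from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

def list_vms_in_cluster(cluster_id):
    vms = []
    hosts = api.listHosts(clusterid=cluster_id, type='Routing').get('host', [])
    for host in hosts:
        result = api.listVirtualMachines(hostid=host['id'], listall=True)
        vms.extend(result.get('virtualmachine', []))
    return vms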

IMHO, supporting such cluster-wide operations, wherever it makes sense to do 
so, would be a great convenience improvement for cloud admins and operators 
alike.

Yiping


Re: [DISCUSS] CloudStack graceful shutdown

2018-04-10 Thread Yiping Zhang
As a cloud admin, I would love to have this feature.  

It so happens that I just accidentally restarted my ACS management server 
while two instances were migrating to another Xen cluster (via storage 
migration, not live migration).  As a result, both instances 
ended up with corrupted data disks which can't be reattached or migrated.

Any feature which prevents this from happening would be great.  A low-hanging 
fruit is simply checking 
whether there are any async jobs running, especially any kind of migration jobs or 
other known long-running types of 
jobs, and warning the operator so that he has a chance to abort the server shutdown.
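
Such a check can even be scripted against the existing API today. A rough sketch, again assuming the third-party "cs" Python client with placeholder endpoint and keys (in the async job response, jobstatus 0 means the job is still in progress):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# listAsyncJobs with listall=True covers all accounts (root admin only).
jobs = api.listAsyncJobs(listall=True).get('asyncjob', [])
running = [j for j in jobs if j.get('jobstatus') == 0]

if running:
    print('%d async job(s) still running - do NOT stop the server yet' % len(running))
    for j in running:
        print('  job %s cmd=%s' % (j['jobid'], j.get('cmd')))
else:
    print('No running async jobs; it should be safe to stop the management server.')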

Yiping

On 4/5/18, 3:13 PM, "ilya musayev"  wrote:

Andrija

This is a tough scenario.

As an admin, the way I would have handled this situation is to advertise
the upcoming outage and then take away specific API commands from a user a
day before - so he does not cause any long-running async jobs. Once
maintenance completes - enable the API commands back for the user. However -
I don't know who your user base is and if this would be an acceptable
solution.

Perhaps also investigate what can be done to speed up your long running
tasks...

As a side note, we will be working on a feature that would allow for a
graceful termination of the process/job, meaning if the agent notices a
disconnect or termination request - it will abort the command in flight. We
can also consider restarting these tasks again or what not - but it would
not be part of this enhancement.

Regards
ilya

On Thu, Apr 5, 2018 at 6:47 AM, Andrija Panic 
wrote:

> Hi Ilya,
>
> thanks for the feedback - but in the "real world", you need to "understand"
> that 60 min is a next-to-useless timeout for some jobs (if I understand this
> specific parameter correctly ?? - the job is really canceled, not only job
> monitoring is canceled ???) -
>
> My value for the "job.cancel.threshold.minutes" is 2880 minutes (2 days?)
>
> I can tell you when you have CEPH/NFS (CEPH is an even "worse" case, since
> the read is slower during the qemu-img convert process...) of 500GB, then
> imagine a snapshot job will take many hours. Should I mention 1TB volumes
> (yes, we had clients like that...)
> Then attaching a 1TB volume, that was uploaded to ACS (lives originally on
> Secondary Storage, and takes time to be copied over to NFS/CEPH) will take
> up to a few hours.
> Then migrating a 1TB volume from NFS to CEPH, or CEPH to NFS, also takes
> time... etc.
>
> I'm just giving you feedback as a "user", an admin of the cloud, with zero
> DEV skills here :), just to make sure you make practical decisions (and I
> admit I might be wrong with my stuff, but just giving you feedback from our
> public cloud setup)
>
>
> Cheers!
>
>
>
>
> On 5 April 2018 at 15:16, Tutkowski, Mike 
> wrote:
>
> > Wow, there’s been a lot of good details noted from several people on how
> > this process works today and how we’d like it to work in the near 
future.
> >
> > 1) Any chance this is already documented on the Wiki?
> >
> > 2) If not, any chance someone would be willing to do so (a flow diagram
> > would be particularly useful).
> >
> > > On Apr 5, 2018, at 3:37 AM, Marc-Aurèle Brothier 
> > wrote:
> > >
> > > Hi all,
> > >
> > > Good point ilya but as stated by Sergey there are more things to consider
> > > before being able to do a proper shutdown. I augmented the script I gave
> > > you originally and changed code in CS. What we're doing for our
> > > environment is as follows:
> > >
> > > 1. the MGMT looks for a change in the file /etc/lb-agent which contains
> > > keywords for HAproxy[2] (ready, maint) so that HAproxy can disable the
> > > mgmt on the keyword "maint" and the mgmt server stops a couple of
> > > threads[1] to stop processing async jobs in the queue
> > > 2. Look for the async jobs and wait until there are none, to ensure you
> > > can send the reconnect commands (if jobs are running, a reconnect will
> > > result in a failed job since the result will never reach the management
> > > server - the agent waits for the current job to be done before
> > > reconnecting, and discards the result... room for improvement here!)
> > > 3. Issue a reconnectHost command to all the hosts connected to the mgmt
> > > server so that they reconnect to another one; otherwise the mgmt must be
> > > up since it is used to forward commands to agents.
> > > 4. When all agents are reconnected, we can shut down the management
> > > server and perform the maintenance.
> > >
> > > One issue 

Re: storage affinity groups

2016-09-10 Thread Yiping Zhang
Yes, we are currently considering creating multiple clusters.  The downside of 
this approach is that we need many more hosts.

Yiping

On 9/9/16, 4:05 PM, "Simon Weller" <swel...@ena.com> wrote:

Why not just use different primary storage per cluster? You can then 
control your storage failure domains on a cluster basis.

Simon Weller/ENA
(615) 312-6068

-Original Message-
From: Will Stevens [wstev...@cloudops.com]
Received: Friday, 09 Sep 2016, 5:46PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: storage affinity groups

I have not really thought through this use case, but off the top of my
head, you MAY be able to do something like use host anti-affinity and then
use different primary storage per host affinity.  I know this is not the
ideal solution, but it will limit the primary storage failure domain to a
set of affinity hosts.  This pushes the responsibility of HA to the
application deployer, which I think you are expecting to be the case
anyway.  You still have a single point of failure with the load balancers
unless you implement GSLB.

This will likely complicate your capacity management, but it may be a short
term solution for your problem until a better solution is developed.
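
A concrete sketch of that combination (using the same third-party "cs" Python client as in the earlier sketches; all IDs and names here are placeholders): each web server gets an offering tagged to a different primary storage, and a host anti-affinity group keeps the VMs on separate hosts.

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# One anti-affinity group so the two web servers land on different hosts.
api.createAffinityGroup(name='web-anti', type='host anti-affinity')

# Two offerings, each carrying a storage tag that matches a different
# primary storage pool (the tags must already be set on the pools).
for name, offering_id in [('web-1', 'OFFERING_TAGGED_PRIMARY1'),
                          ('web-2', 'OFFERING_TAGGED_PRIMARY2')]:
    api.deployVirtualMachine(name=name,
                             serviceofferingid=offering_id,
                             templateid='TEMPLATE_ID',
                             zoneid='ZONE_ID',
                             affinitygroupnames='web-anti')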

If I think of other potential solutions I will post them, but that is what
I have for right now.

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 3:44 PM, Yiping Zhang <yzh...@marketo.com> wrote:

> Will described my use case perfectly.
>
> Ideally, the underlying storage technology used for the cloud should
> provide the reliability required.  But not every company has the money for
> the best storage technology on the market. So the next best thing is to
> provide some fault tolerance redundancy through the app and at the same
> time make it easy to use for end users and administrators alike.
>
> Regards,
>
> Yiping
>
> On 9/9/16, 11:49 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
>
> Yep, based on the recent e-mail Yiping sent, I would agree, Will.
>
> At the time being, you have two options: 1) storage tagging 2)
> fault-tolerant primary storage like a SAN.
> 
> From: williamstev...@gmail.com <williamstev...@gmail.com> on behalf
> of Will Stevens <wstev...@cloudops.com>
> Sent: Friday, September 9, 2016 12:44 PM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> My understanding is that he wants to do anti-affinity across primary
> storage endpoints.  So if he has two web servers, it would ensure that
> one
> of his web servers is on Primary1 and the other is on Primary2.  This
> means
> that if he loses a primary storage for some reason, he only loses one
> of
> his load balanced web servers.
>
> Does that sound about right?
>
> *Will STEVENS*
> Lead Developer
>
> *CloudOps* *| *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> > Hi Yiping,
> >
> > Reading your most recent e-mail, it seems like you are looking for a
> > feature that does more than simply makes sure virtual disks are
> roughly
> > allocated equally across the primary storages of a given cluster.
> >
> > At first, that is what I imagined your request to be.
> >
> > From this e-mail, though, it looks like this is something you'd like
> users
> > to be able to personally choose (ex. a user might want virtual disk
> 1 on
> > different storage than virtual disk 2).
> >
> > Is that a fair representation of your request?
> >
> > If so, I believe storage tagging (as was mentioned by Marty) is the
> only
> > way to do that at present. It does, as you indicated, lead to a
> > proliferation of offerings, however.
> >
> > As for how I personally solve this issue: I do not run a cloud. I
> work for
> > a storage vendor. In our situation, the clustered SAN that we
> dev

Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
I wanted first to see what other people think about this feature. That’s why I 
posted it on the dev list. If enough people consider it a useful feature for 
ACS, then I can make a formal feature request.

On 9/9/16, 1:25 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

With CloudStack as it currently stands, I believe you will need to resort 
to storage tagging for your use case then.
____
    From: Yiping Zhang <yzh...@marketo.com>
Sent: Friday, September 9, 2016 1:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

Will described my use case perfectly.

Ideally, the underlying storage technology used for the cloud should 
provide the reliability required.  But not every company has the money for the 
best storage technology on the market. So the next best thing is to provide 
some fault tolerance redundancy through the app and at the same time make it 
easy to use for end users and administrators alike.

Regards,

Yiping

On 9/9/16, 11:49 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Yep, based on the recent e-mail Yiping sent, I would agree, Will.

At the time being, you have two options: 1) storage tagging 2) 
fault-tolerant primary storage like a SAN.

From: williamstev...@gmail.com <williamstev...@gmail.com> on behalf of 
Will Stevens <wstev...@cloudops.com>
Sent: Friday, September 9, 2016 12:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure that 
one
of his web servers is on Primary1 and the other is on Primary2.  This 
means
that if he loses a primary storage for some reason, he only loses one of
his load balanced web servers.

Does that sound about right?

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike 
<mike.tutkow...@netapp.com>
wrote:

> Hi Yiping,
>
> Reading your most recent e-mail, it seems like you are looking for a
> feature that does more than simply makes sure virtual disks are 
roughly
> allocated equally across the primary storages of a given cluster.
>
> At first, that is what I imagined your request to be.
>
> From this e-mail, though, it looks like this is something you'd like 
users
> to be able to personally choose (ex. a user might want virtual disk 1 
on
> different storage than virtual disk 2).
>
> Is that a fair representation of your request?
>
> If so, I believe storage tagging (as was mentioned by Marty) is the 
only
> way to do that at present. It does, as you indicated, lead to a
> proliferation of offerings, however.
>
> As for how I personally solve this issue: I do not run a cloud. I 
work for
> a storage vendor. In our situation, the clustered SAN that we develop 
is
> highly fault tolerant. If the SAN is offline, then it probably means 
your
> entire datacenter is offline (ex. power loss of some sort).
    >
> Talk to you later,
> Mike
> 
> From: Yiping Zhang <yzh...@marketo.com>
> Sent: Friday, September 9, 2016 11:08 AM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> I am not a Java developer, so I am at a total loss on Mike’s approach. How
> would end users choose this new storage pool allocator from the UI when
> provisioning a new instance?
>
> My hope is that if the feature is added to ACS, end users can assign an
> anti-storage affinity group to VM instances, just as they assign anti-host
> affinity groups from the UI or API, either at VM creation time, or update
> assignments for existing instances (along with any necessary VM stop/start,
> storage migration actions, etc).
>
> Obviously, this feature is useful only when there is more than one
> primary storage device available for the same cluster or zone (in the case
> of zone-wide primary storage volumes).
>
> Just curious, how many primary storage volumes are available for your
> clusters/zones?
>

Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
Will described my use case perfectly.

Ideally, the underlying storage technology used for the cloud should provide 
the reliability required.  But not every company has the money for the best 
storage technology on the market. So the next best thing is to provide some 
fault tolerance redundancy through the app and at the same time make it easy to 
use for end users and administrators alike.

Regards,

Yiping

On 9/9/16, 11:49 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Yep, based on the recent e-mail Yiping sent, I would agree, Will.

At the time being, you have two options: 1) storage tagging 2) 
fault-tolerant primary storage like a SAN.

From: williamstev...@gmail.com <williamstev...@gmail.com> on behalf of Will 
Stevens <wstev...@cloudops.com>
Sent: Friday, September 9, 2016 12:44 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

My understanding is that he wants to do anti-affinity across primary
storage endpoints.  So if he has two web servers, it would ensure that one
of his web servers is on Primary1 and the other is on Primary2.  This means
that if he loses a primary storage for some reason, he only loses one of
his load balanced web servers.

Does that sound about right?

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
w cloudops.com *|* tw @CloudOps_

On Fri, Sep 9, 2016 at 2:40 PM, Tutkowski, Mike <mike.tutkow...@netapp.com>
wrote:

> Hi Yiping,
>
> Reading your most recent e-mail, it seems like you are looking for a
> feature that does more than simply makes sure virtual disks are roughly
> allocated equally across the primary storages of a given cluster.
>
> At first, that is what I imagined your request to be.
>
> From this e-mail, though, it looks like this is something you'd like users
> to be able to personally choose (ex. a user might want virtual disk 1 on
> different storage than virtual disk 2).
>
> Is that a fair representation of your request?
>
> If so, I believe storage tagging (as was mentioned by Marty) is the only
> way to do that at present. It does, as you indicated, lead to a
> proliferation of offerings, however.
>
> As for how I personally solve this issue: I do not run a cloud. I work for
> a storage vendor. In our situation, the clustered SAN that we develop is
> highly fault tolerant. If the SAN is offline, then it probably means your
> entire datacenter is offline (ex. power loss of some sort).
    >
> Talk to you later,
> Mike
> 
> From: Yiping Zhang <yzh...@marketo.com>
> Sent: Friday, September 9, 2016 11:08 AM
> To: dev@cloudstack.apache.org
> Subject: Re: storage affinity groups
>
> I am not a Java developer, so I am at a total loss on Mike’s approach. How
> would end users choose this new storage pool allocator from the UI when
> provisioning a new instance?
>
> My hope is that if the feature is added to ACS, end users can assign an
> anti-storage affinity group to VM instances, just as they assign anti-host
> affinity groups from the UI or API, either at VM creation time, or update
> assignments for existing instances (along with any necessary VM stop/start,
> storage migration actions, etc).
>
> Obviously, this feature is useful only when there is more than one
> primary storage device available for the same cluster or zone (in the case
> of zone-wide primary storage volumes).
>
> Just curious, how many primary storage volumes are available for your
> clusters/zones?
>
> Regards,
> Yiping
>
> On 9/8/16, 6:04 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
>
> Personally, I think the most flexible way is if you have a developer
> write a storage-pool allocator to customize the placement of virtual disks
> as you see fit.
>
> You extend the StoragePoolAllocator class, write your logic, and
> update a config file so that Spring is aware of the new allocator and
> creates an instance of it when the management server is started up.
>
> You might even want to extend ClusterScopeStoragePoolAllocator
> (instead of directly implementing StoragePoolAllocator) as it possibly
> provides some useful functionality for you already.
> 
> From: Marty Godsey <ma...@gonsource.com>
> Sent: Thursday, Se

Re: storage affinity groups

2016-09-09 Thread Yiping Zhang
I am not a Java developer, so I am at a total loss on Mike’s approach. How 
would end users choose this new storage pool allocator from the UI when 
provisioning a new instance?

My hope is that if the feature is added to ACS, end users can assign an 
anti-storage affinity group to VM instances, just as they assign anti-host affinity 
groups from the UI or API, either at VM creation time, or update assignments for 
existing instances (along with any necessary VM stop/start, storage migration 
actions, etc).

Obviously, this feature is useful only when there is more than one primary 
storage device available for the same cluster or zone (in the case of zone-wide 
primary storage volumes).

Just curious, how many primary storage volumes are available for your 
clusters/zones? 

Regards,
Yiping

On 9/8/16, 6:04 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

Personally, I think the most flexible way is if you have a developer write 
a storage-pool allocator to customize the placement of virtual disks as you see 
fit.

You extend the StoragePoolAllocator class, write your logic, and update a 
config file so that Spring is aware of the new allocator and creates an 
instance of it when the management server is started up.

You might even want to extend ClusterScopeStoragePoolAllocator (instead of 
directly implementing StoragePoolAllocator) as it possibly provides some useful 
functionality for you already.

From: Marty Godsey <ma...@gonsource.com>
Sent: Thursday, September 8, 2016 6:27 PM
To: dev@cloudstack.apache.org
Subject: RE: storage affinity groups

So what would be the best way to do it? I use templates to make it simple 
for my users so that the Xen tools are already installed as an example.

Regards,
Marty Godsey

-Original Message-
From: Yiping Zhang [mailto:yzh...@marketo.com]
Sent: Thursday, September 8, 2016 7:55 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

Well, using tags leads to a proliferation of templates or service offerings, 
etc. It is not very scalable and gets out of hand very quickly.

Yiping

On 9/8/16, 4:25 PM, "Marty Godsey" <ma...@gonsource.com> wrote:

I do this by using storage tags. As an example I have some templates 
that are either created on SSD or magnetic storage. The template has a storage 
tag associated with it and then I assigned the appropriate storage tag to the 
primary storage.

Regards,
Marty Godsey

-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com]
Sent: Thursday, September 8, 2016 7:16 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

If one doesn't already exist, you can write a custom storage allocator 
to handle this scenario.

    > On Sep 8, 2016, at 4:25 PM, Yiping Zhang <yzh...@marketo.com> wrote:
>
> Hi,  Devs:
>
> We all know how (anti)-host affinity group works in CloudStack,  I am 
wondering if there is a similar concept for (anti)-storage affinity group?
>
> The use case is this:  in a setup with just one (somewhat) unreliable
> primary storage, if the primary storage is offline, then all VM instances
> would be impacted. Now if we have two primary storage volumes for the
> cluster, then when one of them goes offline, only half of the VM instances
> would be impacted (assuming the VM instances are evenly distributed between
> the two primary storage volumes).  Thus, the (anti)-storage affinity groups
> would make sure that an instance's disk volumes are distributed among
> available primary storage volumes just like (anti)-host affinity groups
> distribute instances among hosts.
>
> Does anyone else see the benefits of anti-storage affinity groups?
>
> Yiping






Re: storage affinity groups

2016-09-08 Thread Yiping Zhang
Well, using tags leads to a proliferation of templates or service offerings, etc. 
It is not very scalable and gets out of hand very quickly.

Yiping

On 9/8/16, 4:25 PM, "Marty Godsey" <ma...@gonsource.com> wrote:

I do this by using storage tags. As an example I have some templates that 
are either created on SSD or magnetic storage. The template has a storage tag 
associated with it and then I assigned the appropriate storage tag to the 
primary storage.

Regards,
Marty Godsey

-Original Message-
From: Tutkowski, Mike [mailto:mike.tutkow...@netapp.com] 
Sent: Thursday, September 8, 2016 7:16 PM
To: dev@cloudstack.apache.org
Subject: Re: storage affinity groups

If one doesn't already exist, you can write a custom storage allocator to 
handle this scenario.

> On Sep 8, 2016, at 4:25 PM, Yiping Zhang <yzh...@marketo.com> wrote:
> 
> Hi,  Devs:
> 
> We all know how (anti)-host affinity group works in CloudStack,  I am 
wondering if there is a similar concept for (anti)-storage affinity group?
> 
> The use case is this:  in a setup with just one (somewhat) unreliable
> primary storage, if the primary storage is offline, then all VM instances
> would be impacted. Now if we have two primary storage volumes for the
> cluster, then when one of them goes offline, only half of the VM instances
> would be impacted (assuming the VM instances are evenly distributed between
> the two primary storage volumes).  Thus, the (anti)-storage affinity groups
> would make sure that an instance's disk volumes are distributed among
> available primary storage volumes just like (anti)-host affinity groups
> distribute instances among hosts.
> 
> Does anyone else see the benefits of anti-storage affinity groups?
> 
> Yiping




storage affinity groups

2016-09-08 Thread Yiping Zhang
Hi,  Devs:

We all know how (anti)-host affinity group works in CloudStack,  I am wondering 
if there is a similar concept for (anti)-storage affinity group?

The use case is this:  in a setup with just one (somewhat) unreliable 
primary storage, if the primary storage is offline, then all VM instances 
would be impacted. Now if we have two primary storage volumes for the cluster, 
then when one of them goes offline, only half of the VM instances would be impacted 
(assuming the VM instances are evenly distributed between the two primary 
storage volumes).  Thus, the (anti)-storage affinity groups would make sure 
that an instance’s disk volumes are distributed among available primary storage 
volumes just like (anti)-host affinity groups distribute instances among 
hosts.

Does anyone else see the benefits of anti-storage affinity groups?

Yiping


Re: [Proposal] Template for CloudStack API Reference Pages

2015-11-11 Thread Yiping Zhang
As a user who uses API a lot,  I would like to see following improvements in 
api reference pages:

1) In the brief description for the Title section, please specify if the referenced API 
is async or not.  Currently, this info is available only on the API listing 
pages, with “(A)” after the API name, but it is not available or obvious anywhere on 
the API reference page itself. 

2)  For each parameter, in addition to the name, description, and required 
attributes, it would be great to also provide the following:
type := integer | string | array | enumerate | boolean, etc.
default := true | false | null | 0, etc.

A Notes subsection for parameters: IMHO, there are several reasons that 
such a section would be useful:
* A list of values which have special meaning to the API and what 
their special meanings are, if any.  For example, for the listVirtualMachines API, 
projectid=-1 would return instances belonging to ALL projects.  Here the value “-1” 
is special (see the sketch after this list).
* Combinations of certain parameters are mutually exclusive, or are 
required together.  Some of this info is currently present in the parameter’s 
description field, but it is usually too brief, hard to read, and hard to 
understand.


3) Add a limitations section:
   This section describes scenarios that the referenced API does not apply to, 
is not implemented for yet, or is known to not work properly.  Many APIs have 
limitations, and the information is scattered all over the place in the documents, if 
it exists at all. So most often users can only find out by trial and error.
   
For example, the assignVirtualMachine API has the following limitations: 1) it does 
not work with VM instances belonging to a project, and 2) it is not implemented for 
advanced networking with security groups enabled.

4) Add an Authorization section, or just provide the info somewhere on the page: 
describe who can make this API call:  root admin, domain admin, or regular 
users.  Currently, this info is provided by listing the available APIs on 
different pages titled “Root admin API”, “Domain admin API” and “User API”.  
Personally, I prefer a separate section on each API’s reference page for this 
info so that it can’t be missed.
   
5)  Error responses:  I really like the idea of adding this section to the 
reference page.  Please list both the HTTP response code and the CloudStack 
internal error code and error messages.
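
To make item 2's point concrete, here is what the projectid=-1 special value looks like in practice (a sketch using the third-party "cs" Python client; the endpoint and keys are placeholders):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt.example.com:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# projectid=-1 is the special value: list instances across ALL projects,
# rather than the instances of one specific project.
vms = api.listVirtualMachines(projectid=-1, listall=True)
print(len(vms.get('virtualmachine', [])))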




Finally, please get someone to proofread all descriptions.  Some of the current 
API documents are really hard to understand!

BTW: which release is this proposal targeted for?

Just my $0.02.

Yiping


On 11/10/15, 9:10 PM, "Daejuan Jacobs"  wrote:

>I assume by "Format" you mean data type.
>
>But I think this looks good. It's simple, yet it manages to nail all the
>points you need when developing on a software's API.
>
>On Tue, Nov 10, 2015 at 8:33 AM Rajsekhar K 
>wrote:
>
>> Hi, All,
>>
>> This is the proposal for a new template for CloudStack API reference
>> pages. This template is based on the reference page templates for REST APIs.
>>
>> Please find attached the following documents for your review:
>>
>>- Template for normal and asynchronous CloudStack API references.
>>- Sample API reference page using the template for a CloudStack API
>>(listZones).
>>
>>
>> Please review this template and let me know your thoughts on this.
>>
>> Thanks,
>> Rajsekhar
>>


Re: Mentor

2015-10-28 Thread Yiping Zhang
Hi, David:

I am speaking as a CloudStack user/admin/operator here.

Here is an issue which really, really drives me crazy, but should be relatively 
easy for a Java developer to work on:  improving error log messages!

Here is a specific example: when deploying a VM instance fails, often the error 
message simply says “InsufficientServerCapacityException: Unable to create a 
deployment” along with a stack trace, but without any easily understandable 
information.  I have encountered this error for at least a dozen different 
reasons.  This message really can be improved to provide more context and 
human-understandable output to help CloudStack admins troubleshoot the real 
problem.

Good luck.

Yiping



On 10/27/15, 3:27 PM, "Erik Weber"  wrote:

>On Tue, Oct 27, 2015 at 10:34 PM, David Willard 
>wrote:
>
>> Hi B. Prakash, Daan, Erik,
>>
>> I have emailed many times requesting a mentor and still no responses. I
>> have been following CloudStack for a few months. I am entry-level in Java; I
>> completed a Java class as I earned my bachelor's in IT from Northeastern
>> University. I earned a 4.0 for the Java course. Even if I have to start with
>> troubleshooting code, I am a quick learner and in a few months will be able
>> to code. I am a member of the OSI and my ultimate goal is to obtain a job in
>> information security.
>>
>>
>Hi David,
>
>I am sorry that your efforts to get into the community haven't given the
>wanted results yet.
>
>Coding CloudStack is beyond my skill set, so I can't really offer any
>mentoring, but if you are looking for simple tasks to carry out to get more
>familiar with the project I am sure we could come up with some issues for
>you :-)
>
>Your first matter of business should be to get a CloudStack cloud up and
>running so that you can test any changes you do.
>This can be done on a single machine if needed.
>
>-- 
>Erik


Re: UI translation for 4.6

2015-10-23 Thread Yiping Zhang
Well, is version 2.2 still relevant at all?  Why waste time on it?




On 10/23/15, 2:09 PM, "Milamber" <milam...@apache.org> wrote:

>Hello,
>
>Thanks for the translation.
>
>I've just opened the 2.2 resource for acceptance of translated strings, 
>you can now grow to 100% for all versions of CloudStack.
>
>Milamber
>
>
>On 23/10/2015 21:58, Yiping Zhang wrote:
>> I took a look at Chinese, and finished the last two messages for 4.5/4.6.   All 
>> the remaining untranslated messages are for 2.2, but that one does not 
>> accept any more translations.  So there is no way to get it to reach 100% :(
>>
>> Yiping
>>
>>
>>
>> On 10/23/15, 12:05 PM, "Erik Weber" <terbol...@gmail.com> wrote:
>>
>>> On Fri, Oct 23, 2015 at 6:11 PM, Milamber <milam...@apache.org> wrote:
>>>
>>>> Hello,
>>>>
>>>> The new stats for the translations of Web UI 4.6 (languages over 50%):
>>>>
>>>> French (France) 100%
>>>> Portuguese (Brazil) 99%
>>>> Japanese (Japan)99%
>>>> Chinese (China) 99%
>>>> Norwegian Bokmål (Norway)   99%
>>>> Hungarian   98%
>>>> Dutch (Netherlands) 94%
>>>> German (Germany)76%
>>>> Russian (Russia)75%
>>>> Korean (Korea)  66%
>>>> Spanish 53%
>>>>
>>>> Thanks for the translators! (especially to the Norwegian 70%->99%, 655 new
>>>> translated strings!)
>>>>
>>>>
>>> I just finished the 2.2 strings that were missing and got us up to 100%.
>>>
>>> Thanks to Dag S. and Jan-Arve N. as well for their efforts!
>>>
>>> -- 
>>> Erik
>


Re: UI translation for 4.6

2015-10-23 Thread Yiping Zhang
I took a look at Chinese, and finished the last two messages for 4.5/4.6.   All the 
remaining untranslated messages are for 2.2, but that one does not accept any 
more translations.  So there is no way to get it to reach 100% :(

Yiping



On 10/23/15, 12:05 PM, "Erik Weber"  wrote:

>On Fri, Oct 23, 2015 at 6:11 PM, Milamber  wrote:
>
>> Hello,
>>
>> The new stats for the translations of Web UI 4.6 (languages over 50%):
>>
>> French (France) 100%
>> Portuguese (Brazil) 99%
>> Japanese (Japan)99%
>> Chinese (China) 99%
>> Norwegian Bokmål (Norway)   99%
>> Hungarian   98%
>> Dutch (Netherlands) 94%
>> German (Germany)76%
>> Russian (Russia)75%
>> Korean (Korea)  66%
>> Spanish 53%
>>
>> Thanks for the translators! (especially to the Norwegian 70%->99%, 655 new
>> translated strings!)
>>
>>
>I just finished the 2.2 strings that were missing and got us up to 100%.
>
>Thanks to Dag S. and Jan-Arve N. as well for their efforts!
>
>-- 
>Erik


Re: How does the parameter startdate/enddate of api listEvents() use new time format like 'yyyy-MM-dd HH:mm:ss' ?

2015-08-25 Thread Yiping Zhang
Tony:

Your date format seems to be OK. I just tried on my cs 4.5.1 with cloudmonkey 
5.3.1:

(local)   list events startdate='2015-08-25 15:20:00'
count = 1
event:
id = 983c0369-80f2-4431-801c-273bffd925e5
account = admin
created = 2015-08-25T15:29:54-0500
description = user has logged in from IP Address 10.0.248.86
domain = ROOT
domainid = 994ff03e-bb8f-11e4-b7d5-36d1d14da5e9
level = INFO
state = Completed
type = USER.LOGIN
username = admin
(local)   


So your problem is somewhere else.  Can you access other APIs with that 
apikey/secretkey pair? What client are you using to make API calls?  You gave a 
stack trace from the client side; what does the error message in the CloudStack logs say?

Good luck

Yiping
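
For reference, the usual culprit for a 401 with a date containing a space is the signature being computed over a differently encoded string than the one actually sent. A minimal signing sketch (Python 2, to match the traceback quoted below; the keys are placeholders):

import base64, hashlib, hmac, urllib

API = 'http://10.0.1.100:8080/client/api'
KEY, SECRET = 'API_KEY', 'SECRET_KEY'

def signed_url(command, **params):
    params.update(command=command, apiKey=KEY, response='json')
    # urlencode turns the space in the date into '+'; the signature is
    # computed over the sorted, lowercased string with '+' as '%20'.
    query = urllib.urlencode(sorted(params.items()))
    to_sign = query.lower().replace('+', '%20')
    digest = hmac.new(SECRET, to_sign, hashlib.sha1).digest()
    return '%s?%s&signature=%s' % (API, query,
                                   urllib.quote_plus(base64.b64encode(digest)))

print signed_url('listEvents', startdate='2015-08-25 15:20:00')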


On 8/25/15, 3:22 AM, Daan Hoogland daan.hoogl...@gmail.com wrote:

Tony/Cao Tong,


just as a guess, and as I see you are Chinese: is the character between the date
and the time a 32 (URI encoded %20)? Not sure if that is your problem,
but it might be.

Looking at the stack trace, you might also want to look at the version of
your client library, as it seems to refuse the format client side.


regards,
Daan

On Tue, Aug 25, 2015 at 12:16 PM, Abhinandan Prateek 
abhinandan.prat...@shapeblue.com wrote:

 Yes, as per
 https://cloudstack.apache.org/api/apidocs-4.2/root_admin/listEvents.html


  On 24-Aug-2015, at 2:08 pm, tony_caot...@163.com wrote:
 
  Hello Everyone.
 
  I know this is a very simple question to most of you, but it is really
 hard for me to continue my work.
 
  So could anyone spend three minutes to give me some advice? It will
 be very useful to me.
 
  my question is:
 
 How does the parameter startdate/enddate of api listEvents() use new
 time format like 'yyyy-MM-dd HH:mm:ss' ?
 
 
   Event({'listall':'True', 'startdate':'2015-08-24 00:00:00'})
 
 http://10.0.1.100:8080/client/api?apiKey=hjZ12EQ4JfFasIHO3RCXBLji-3RbBmdC973utGwCL5388WypVKwtaNsDso-JzVQIZXUVwfaT1vANdDUJs3Vkkg&command=listEvents&listall=True&response=json&startdate=2015-08-24+00%3A00%3A00&signature=z4LQCw7yzGmTK5B7TzAbzl1biXI%3D
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "SignedAPICall.py", line 67, in Event
      a = api.listEvents(request)
    File "SignedAPICall.py", line 49, in handlerFunction
      return self._make_request(name, args[0])
    File "SignedAPICall.py", line 61, in _make_request
      data = self._http_get(self.value)
    File "SignedAPICall.py", line 54, in _http_get
      response = urllib.urlopen(url)
    File "/usr/lib64/python2.7/urllib.py", line 87, in urlopen
      return opener.open(url)
    File "/usr/lib64/python2.7/urllib.py", line 208, in open
      return getattr(self, name)(url)
    File "/usr/lib64/python2.7/urllib.py", line 359, in open_http
      return self.http_error(url, fp, errcode, errmsg, headers)
    File "/usr/lib64/python2.7/urllib.py", line 372, in http_error
      result = method(url, fp, errcode, errmsg, headers)
    File "/usr/lib64/python2.7/urllib.py", line 683, in http_error_401
      errcode, errmsg, headers)
    File "/usr/lib64/python2.7/urllib.py", line 381, in http_error_default
      raise IOError, ('http error', errcode, errmsg, headers)
  IOError: ('http error', 401, 'Unauthorized', <httplib.HTTPMessage
  instance at 0x7f083da78998>)
 
  ---
  Cao Tong
 
  On 08/19/2015 10:32 AM, tony_caot...@163.com wrote:
 
  Hi All:
 
  Does anyone have any idea? Thanks.
 
  ---
  Cao Tong
 
  On 08/18/2015 06:04 PM, tony_caot...@163.com wrote:
 
  In ParamProcessWorker::setFieldValue() I found these lines:
 
case DATE:
    // This piece of code is for maintaining backward compatibility
    // and support both the date formats (Bug 9724)
 
  Is this related to my problem? Where can I find the description of
 Bug 9724?
 
  ---
  Cao Tong
 
  On 08/18/2015 05:54 PM, tony_caot...@163.com wrote:
 
  Hello,
 
  When I use the timestamp format startdate=2015-07-31, it works fine.
  When I use it like this: startdate=2015-07-31 13:00:00,
  it returns an error:
  IOError: ('http error', 401, 'Unauthorized', <httplib.HTTPMessage
 instance at 0x16dca70>)
 
  Could anyone tell me why?
 
  I have read the code in
 DefaultLoginAPIAuthenticatorCmd::authenticate(), but I still do not
 understand what is happening;
  it seems authentication failed, but why?
 
  ---
  Cao Tong
 
  On 07/31/2015 07:07 PM, tony_caot...@163.com wrote:
 
  Hi,
 
  Is this format enabled in ACS 4.5.1, i.e. 'yyyy-MM-dd HH:mm:ss'?
 
  I found it in the 4.5.0 API doc, but it seems not to be enabled.
 
  http://cloudstack.apache.org/api/apidocs-4.5/user/listEvents.html
 
  api.listEvents(startdate="2015-07-31 13:00:00")
  Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "call.py", line 48, in handlerFunction
     return self._make_request(name, kwargs)
   File "call.py", line 60, in _make_request
     data = self._http_get(self.value)
   File "call.py", line 53, in _http_get
     response = urllib.urlopen(url)
   File 

Using cloudstack RabbitMQ events

2015-07-27 Thread Yiping Zhang
Hi, list:

First, please pardon me for some rant.

<rant>
I have been using CloudStack RabbitMQ events to integrate with external apps
since 4.3.x. Recently we upgraded to CS 4.5.1 and noticed that CS 4.5 events
are quite different from 4.3 events, at least for the ones we are working with
(for VM.CREATE and VM.DESTROY events, both the routing key and the event
message are now different).

I have not found any documentation on such changes. If the event notification
framework is intended as an integration point with external apps, I would have
expected a stable and backward-compatible interface to CloudStack events
across upgrades, just like any other CloudStack API. How can such changes be
introduced and not documented in the release notes?
</rant>
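
As an aside for anyone else consuming these events: since the routing-key
layout is exactly what changed between releases, one defensive option is to
bind with broad wildcard patterns. A rough sketch using pika 1.x; the exchange
name, broker URL and key fragments are assumptions from my setup, so verify
them against your event bus configuration and CloudStack version:

import pika

conn = pika.BlockingConnection(
    pika.URLParameters('amqp://guest:guest@localhost:5672/%2F'))
channel = conn.channel()
# CloudStack publishes to a topic exchange; 'cloudstack' is the name used
# in my event bus configuration.
channel.exchange_declare(exchange='cloudstack', exchange_type='topic',
                         durable=True)

# Bind a server-named queue on the event-type fragment only, so changes in
# the other key segments do not silently drop messages.
queue = channel.queue_declare(queue='', exclusive=True).method.queue
for pattern in ('*.*.VM-CREATE.*.*', '*.*.VM-DESTROY.*.*'):
    channel.queue_bind(exchange='cloudstack', queue=queue,
                       routing_key=pattern)

def on_message(ch, method, properties, body):
    # Log the key alongside the body so format changes are visible early.
    print('%s -> %s' % (method.routing_key, body[:120]))

channel.basic_consume(queue=queue, on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()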


While working with VM.CREATE and VM.DESTROY events (for CS 4.5.1), I noticed
some weirdness in the messages when parsing them as JSON objects. Here is an
example:


{

  "cmdInfo": "{\"response\":\"json\",\"id\":\"b780c229-7064-47e5-97d0-a8b4590b36b8\",\"sessionkey\":\"WY6E5WuM8SbqMw4bCumnVgGsgEQ\\u003d\",\"ctxDetails\":\"{\\\"com.cloud.vm.VirtualMachine\\\":\\\"b780c229-7064-47e5-97d0-a8b4590b36b8\\\"}\",\"cmdEventType\":\"VM.DESTROY\",\"ctxUserId\":\"2\",\"httpmethod\":\"GET\",\"_\":\"1438027779033\",\"uuid\":\"b780c229-7064-47e5-97d0-a8b4590b36b8\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"6282\"}",

  "instanceType": "VirtualMachine",

  "instanceUuid": "b780c229-7064-47e5-97d0-a8b4590b36b8",

  "jobId": "61a62e5d-61ee-41eb-b947-0f8ef5d857c3",

  "status": "SUCCEEDED",

  "processStatus": "0",

  "commandEventType": "VM.DESTROY",

  "resultCode": "0",

  "command": "org.apache.cloudstack.api.command.admin.vm.DestroyVMCmdByAdmin",

  "jobResult": "org.apache.cloudstack.api.response.UserVmResponse/virtualmachine/{\"id\":\"b780c229-7064-47e5-97d0-a8b4590b36b8\",\"name\":\"yz-x1\",\"displayname\":\"yz-x1\",\"account\":\"admin\",\"domainid\":\"994ff03e-bb8f-11e4-b7d5-36d1d14da5e9\",\"domain\":\"ROOT\",\"created\":\"2015-07-27T12:01:12-0500\",\"state\":\"Destroyed\",\"haenable\":false,\"zoneid\":\"1b0b4859-7b8a-41dd-8522-4dbf24345509\",\"zonename\":\"sjlab\",\"templateid\":\"e6fa410f-4bf0-4b3c-9982-9d60e7ffc07e\",\"templatename\":\"Base\",\"templatedisplaytext\":\"Base with 32 GB root and cloud-init\",\"passwordenabled\":false,\"serviceofferingid\":\"11a5e901-bc78-45c6-8b81-a2a9e3530164\",\"serviceofferingname\":\"1CPU@1.0Ghz@1.5GB\",\"cpunumber\":1,\"cpuspeed\":1000,\"memory\":1536,\"cpuused\":\"0.56%\",\"networkkbsread\":0,\"networkkbswrite\":2,\"diskkbsread\":2670,\"diskkbswrite\":163,\"diskioread\":0,\"diskiowrite\":0,\"guestosid\":\"a0c75a5b-bb8f-11e4-b7d5-36d1d14da5e9\",\"rootdeviceid\":0,\"rootdevicetype\":\"ROOT\",\"securitygroup\":[{\"id\":\"ad13aa78-bb8f-11e4-b7d5-36d1d14da5e9\",\"name\":\"default\",\"description\":\"Default Security Group\",\"account\":\"admin\",\"ingressrule\":[],\"egressrule\":[],\"tags\":[]}],\"nic\":[{\"id\":\"1c87d7e1-f8c9-425e-809d-7edd1a30c3a6\",\"networkid\":\"abe603fe-1d8b-4b23-9aa2-0234f18de686\",\"networkname\":\"vlan106\",\"netmask\":\"255.255.255.0\",\"gateway\":\"10.0.106.1\",\"ipaddress\":\"10.0.106.170\",\"isolationuri\":\"vlan://106\",\"broadcasturi\":\"vlan://106\",\"traffictype\":\"Guest\",\"type\":\"Shared\",\"isdefault\":true,\"macaddress\":\"06:d9:2e:00:03:f6\"}],\"hypervisor\":\"XenServer\",\"instancename\":\"i-2-346-VM\",\"tags\":[],\"details\":{\"hypervisortoolsversion\":\"xenserver56\"},\"affinitygroup\":[],\"displayvm\":true,\"isdynamicallyscalable\":true,\"ostypeid\":206,\"jobid\":\"61a62e5d-61ee-41eb-b947-0f8ef5d857c3\",\"jobstatus\":0}",

  "account": "ad11bb05-bb8f-11e4-b7d5-36d1d14da5e9",

  "user": "ad129c91-bb8f-11e4-b7d5-36d1d14da5e9"

}

There are two problems with the above example (a workaround sketch follows
this list):

 1.  The nested data structures are not parsed properly. As you can see, the
values for both "cmdInfo" and "jobResult" are strings instead of nested
hashes. I have to make additional calls to the JSON parser on those values to
parse them.
 2.  The string
"org.apache.cloudstack.api.response.UserVmResponse/virtualmachine/" at the
beginning of the value for "jobResult" makes that value invalid as JSON, so it
cannot be parsed at all.
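
Until those are fixed, a two-pass parse works around both problems. A minimal
sketch, assuming the message body arrives as a JSON string; the helper name is
mine:

import json

def parse_event(raw):
    event = json.loads(raw)
    # Problem 1: nested structures arrive as JSON-encoded strings and need
    # a second pass through the parser.
    if 'cmdInfo' in event:
        event['cmdInfo'] = json.loads(event['cmdInfo'])
    # Problem 2: jobResult carries a Java class-name prefix before the JSON
    # payload begins, so cut everything off up to the first '{' first.
    job_result = event.get('jobResult', '')
    brace = job_result.find('{')
    if brace >= 0:
        event['jobResult'] = json.loads(job_result[brace:])
    return event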

Thanks for listening.

Yiping





Re: Using cloudstack RabbitMQ events

2015-07-27 Thread Yiping Zhang
Just filed a doc bug, CLOUDSTACK-8679, and a code bug, CLOUDSTACK-8680.

Yiping

On 7/27/15, 3:29 PM, Daan Hoogland daan.hoogl...@gmail.com wrote:

Yiping,


Good rant; you are absolutely right. Did you create a ticket (or more
tickets) for this?

On Mon, Jul 27, 2015 at 11:47 PM, Yiping Zhang yzh...@marketo.com wrote:
 Hi, list:

 (original message quoted in full; snipped here -- see the post above)






-- 
Daan



Re: connection of Agent to Management is continuously dropping

2015-06-18 Thread Yiping Zhang

Are you by any chance using IBM Java 1.7.x on your management server?

We had a similar problem where the connection from the cloud agent on the
CPVM/SSVM to the management server got disconnected with an SSL error, even
though we are not using SSL on the management server at all. After switching
back to java-1.7.0-openjdk, the problem went away without any other
configuration changes.


Yiping

On 6/14/15, 11:40 PM, Devender Singh dev1986en...@gmail.com wrote:

I have disabled SSL by setting secstorage.encrypt.copy to false and
changing consoleproxy.url.domain to empty,

but after restarting the console proxy VM I am still getting the error below.

netstat -na | grep 8250



tcp  867  0  x.x.x.x:37789  y.y.y.y:8250  CLOSE_WAIT

tcp  867  0  x.x.x.x:38275  y.y.y.y:8250  CLOSE_WAIT

tcp  867  0  x.x.x.x:37942  y.y.y.y:8250  CLOSE_WAIT





by Devender kumar Singh




On Fri, Jun 12, 2015 at 4:53 PM, Devender Singh dev1986en...@gmail.com
wrote:

 After upgrading CloudStack 4.2.1 to 4.4.2 we are facing a console proxy
 agent issue.

 The connection from the agent to the management server is continuously dropping.

 netstat -na | grep 8250



 tcp  867  0  x.x.x.x:37789 y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:38275 y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37942  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:38327  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37810  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37737  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37775  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37858  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:38039  y.y.y.y:8250  CLOSE_WAIT

 tcp  867  0  x.x.x.x:37960  y.y.y.y:8250  CLOSE_WAIT







Re: [SOLVED] No response received when trying to login

2015-04-16 Thread Yiping Zhang
I am using the RabbitMQ feature, and I did notice that the UI becomes
noticeably slower for some operations, such as VM create/destroy actions.
When the RabbitMQ server is down, the login process becomes unresponsive.
Yiping

On 4/16/15, 3:42 AM, Erik Weber terbol...@gmail.com wrote:

Since I don't really use the feature, but merely tested it, I removed the
rabbitmq bean I had previously set up.

I guess the proper solution is to figure out why rabbitmq barks, probably
due to something being full...

-- 
Erik

On Thu, Apr 16, 2015 at 12:40 PM, Nux! n...@li.nux.ro wrote:

 Erik,

 Can you share more details about how you solved this? What did you have
to
 do exactly?

 Just thinking it may come in handy to some poor soul in the future.

 --
 Sent from the Delta quadrant using Borg technology!

 Nux!
 www.nux.ro

 - Original Message -
  From: Erik Weber terbol...@gmail.com
  To: dev dev@cloudstack.apache.org, us...@cloudstack.apache.org
  Sent: Thursday, 16 April, 2015 10:27:16
  Subject: Re: [SOLVED] No response received when trying to login

  Thank you Rajani, it was a rabbitmq problem.
 
  All solved, and I can log in again :-)
 
  --
  Erik
 
  On Thu, Apr 16, 2015 at 10:54 AM, Rajani Karuturi raj...@apache.org
 wrote:
 
  If you configured RabbitMQ service, check logs on the Rabbitmq hosts.
 
  Check the size of the events table and see if an insert is taking time.
  You could try the login and then run "show full processlist" at the mysql
  prompt to see any slow queries.
 
 
  ~Rajani
 
  On Thu, Apr 16, 2015 at 1:57 PM, Erik Weber terbol...@gmail.com
 wrote:
 
   On Thu, Apr 16, 2015 at 10:21 AM, Rajani Karuturi
raj...@apache.org
   wrote:
  
 Can you check if it's blocked on raising the login event? Probably
 activemq is down or the events table is full...
   
   
    Thanks for the suggestion; how would I go about checking that?
  
   I've restarted cloudstack-management multiple times if it matters.
  
   --
   Erik
  




Re: Can System VMs be migrated?

2015-02-09 Thread Yiping Zhang
How do you migrate system VMs in the UI?

If you just put the host where the system VMs are running into
“maintenance” mode, then you may have hit this bug, which was fixed in
4.4.0: https://issues.apache.org/jira/browse/CLOUDSTACK-5660


Yiping
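
For completeness, the migration can also be driven from the API instead of the
UI. A minimal sketch reusing the signed_url helper from the listEvents thread
above; the UUIDs are placeholders:

import urllib

print urllib.urlopen(signed_url({
    'command': 'migrateSystemVm',
    'virtualmachineid': 'SYSTEM-VM-UUID',  # e.g. from listSystemVms
    'hostid': 'DESTINATION-HOST-UUID',     # e.g. from listHosts
})).read()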

On 2/9/15, 11:01 AM, Rafael Weingartner rafaelweingart...@gmail.com
wrote:

Update:
The UI may not be working because it uses the command migrateSystemVm
instead of migrateVirtualMachineWithVolume.
Should I open a bug report?

On Mon, Feb 9, 2015 at 3:21 PM, Rafael Weingartner 
rafaelweingart...@gmail.com wrote:

 I also tried and it did not work. I am using CS 4.3.0.
 I used the UI button. I got the error VM_REQUIRES_SR.

 On Mon, Feb 9, 2015 at 2:52 PM, Prashant Kumar Mishra 
 prashantkumar.mis...@citrix.com wrote:

 1. Yes, we can migrate system VMs; I have done it many times.


 2. Even if the VM is deployed with local storage, you can migrate it to
 another local storage. Check out the API migrateVirtualMachineWithVolume:

 http://cloudstack.apache.org/docs/api/apidocs-4.4/root_admin/migrateVirtualMachineWithVolume.html


 ~prashant




 On 2/9/15, 5:08 PM, Rafael Weingartner rafaelweingart...@gmail.com
 wrote:

That is the answer I wanted to hear. If we can migrate system VMs, why
 are the system VMs' VDIs allocated in the local SR on the Xen hypervisor?
 
 I thought it was not possible to migrate those system VMs, hence their
 use of a local SR.
 
 On Mon, Feb 9, 2015 at 3:01 AM, Sanjeev Neelarapu 
 sanjeev.neelar...@citrix.com wrote:
 
  Yes, we can.
 
  -Original Message-
  From: Rafael Weingartner [mailto:rafaelweingart...@gmail.com]
  Sent: Saturday, February 07, 2015 2:41 AM
  To: dev@cloudstack.apache.org
  Subject: Can System VMs be migrated?
 
  Hi folks,
 
  I was wondering: can we migrate system VMs from one host to another
  in the same cluster?
 
 
  --
  Rafael Weingärtner
 
 
 
 
 --
 Rafael Weingärtner




 --
 Rafael Weingärtner




-- 
Rafael Weingärtner



inconsistent listXXX API behaviors

2015-01-09 Thread Yiping Zhang
Hi, all

We have noticed some behavior differences among various listXXX API calls, as
shown below:

When using the listZones API with a “name=xxx” argument, the returned zone's
name must be an exact match for the given argument value. In this example, the
zone name must be exactly “xxx” for it to be returned by this call.

When using the listPods API with a “name=pod” argument, all pods whose names
end with “pod” will be returned by this call. For example, pods named
“my_pod”, “my_2nd_pod” and “new_pod” will all be returned. In other words, in
this API call the name match is a substring match, not an exact string match,
against the given argument value. The listClusters API behaves the same way
as listPods.
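
To reproduce the difference quickly, here is a sketch reusing the signed_url
helper from the listEvents thread above (the zone and pod names are examples):

import json
import urllib

# Exact match: returns a zone only if it is literally named "xxx".
zones = json.loads(urllib.urlopen(
    signed_url({'command': 'listZones', 'name': 'xxx'})).read())

# Substring match: returns every pod whose name contains "pod",
# e.g. "my_pod", "my_2nd_pod" and "new_pod".
pods = json.loads(urllib.urlopen(
    signed_url({'command': 'listPods', 'name': 'pod'})).read())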

Is the different behavior among these API calls intentional? If so, what is
the rationale for the differences? If the different behavior is due to bugs in
the implementations, then there are over 100 listXXX-type API calls whose
behavior needs to be reviewed to see how many of them show buggy behavior.

Thanks

Yiping


user credential for adding kvm hosts

2014-08-19 Thread Yiping Zhang
Hi, all

I have asked this question on the users list but got no answers, so I am
moving it to the dev list:

When adding a new (KVM) host to a cluster, the UI asks for a user name (the
doc says “usually the root”) and its password. It seems that the CS management
server will ssh into port 22 of the new host with this username/password to do
its magic (which requires root privilege!). I also noticed through experiments
that this credential is required again when bringing a host in or out of
maintenance mode, etc.

Because our corporate security policy does not allow direct root login with a
password, I am wondering whether there are any other mechanisms available to
allow the CS management server to manage (KVM) hypervisor hosts. Possible
solutions would be either public-key authentication for root, or a non-root
user with sudo privileges on the hypervisor hosts. I have not found any
documentation on this subject.

Thanks,

Yiping