Re: [VOTE] Apache CloudStack 4.1.0 (third round)

2013-05-26 Thread Wido den Hollander

Hi Chip,

I'm sorry, but I'm going to have to vote -1 on this one.

It's my own fault: I made a mistake on the Debian packaging side. See 
this commit: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commitdiff;h=dc822a83d77830281402175b4a57b25b7e3b180a


I just verified: cloudstack-setup-agent still contains unsubstituted 
variables like @AGENTSYSCONFDIR@, which render the tool useless.


I see this commit is already in the 4.1 branch, but it isn't in the 
commit you are voting on (873c19).


I also found a problem with the AWSAPI package, which is in master in 
commit 28f7a216d8bf4da29a45cb76e5c28ee568ae1984


I already cherry-picked that one to 4.1 since it's only touching 
packaging and not code.


Other than the packaging I'm happy with this code. I obviously wasn't 
able to do a full QA on my own, but the tests I've done all work. They 
include:

* Deploying instances
* Adding RBD storage
* Attaching RBD volumes

With the packaging resolved I'd vote +1, but for now it's -1 (binding).

Wido

On 05/24/2013 07:41 PM, Chip Childers wrote:

Hi All,

I've created a 4.1.0 release, with the following artifacts up for a
vote:

Git Branch and Commit SH:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
Commit: 873c1972c7bbe337cee2000a09451d14ebbcb728

List of changes:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/

PGP release keys (signed using A99A5D58):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Testing instructions are here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to
indicate (binding) with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)



Commits into 4.1 without patch request

2013-05-27 Thread Wido den Hollander

Hi Hiroaki,

I've noticed you pushed a couple of commits into both master and 4.1 
without going through a patch request on the mailing list.


For example in 4.1:
* 9bc43930b822a294c863e954bc5b39fc4b506f46
* 8faf845889b28157a2ecbd6e406d73a326b27756

When the 4.1 branch was frozen, the agreement was (and still is) that 
all commits for the 4.1 branch have to go through the mailing list so 
Chip Childers can pick them up and cherry-pick them into the branch.


I see you made some good commits, but the 4.1 branch is actually frozen 
at this point.


The Debian package fixes are very welcome though! But since we are in 
the process of voting for a 4.1 release we have to make sure nothing 
changes in the 4.1 branch.


Maybe you weren't aware of this agreement, so that's why I'm sending 
this to you.


Thanks!

Wido


Re: patchviasocket: Why in Perl and not native Java?

2013-05-27 Thread Wido den Hollander



On 05/26/2013 06:41 PM, Marcus Sorensen wrote:

Yes, you're welcome to rewrite it in Python.  You're spot on with why
it's not in Java.



Thanks for the clarification!


  As for why it's in Perl, it was simple for me to do and we already
have a dependency on it. As far as I've seen, the majority of what's
written for the agent to call is in Bash, and this task was fairly
difficult for Bash. The exceptions being security groups and the
installation helpers, which are in Python, presumably because they're
also difficult in Bash. The script this replaced was a Bash script.
It's probably because some people are really good at Bash, but don't
know Perl or Python, some are good at Python, but don't know Perl, and
vice versa.  Or maybe someone knows them all but has a preference on a
specific tool for a specific job, for compatibility or whatever
reason. I personally tend to shy away from Python if I want to write
something that I know will work everywhere, due to the 2.6/2.7
compatibility and spread between distros, but that shouldn't matter
for a simple script like patchviasocket.pl.



Ack, I might give rewriting it in Python a try.


Do we want to standardize and say that if it can't be in Java, and it
can't be in Bash, it has to be in Python? The other dependencies on
Perl are few, so we could probably wipe them out as well. If we have
to limit, I'd much rather it be Java, Perl, Python than Java, Bash,
Python, but it's too late for that I think. As a side note, Bash seems
to be highly preferred in the agent code, even though some of the
system vm scripts are fairly complex. It would be nice to see these
simplified by using a more powerful language as well.



I'd vote for:
* Java
* Bash
* Python

The reasoning behind Python is that:
A) We already have a lot of Python scripts (783 vs 1 Perl).
B) It seems to be the standard language for Linux distributions to 
develop their scripts in; Ubuntu is a good example.


Wido



On Sun, May 26, 2013 at 3:20 AM, Wido den Hollander w...@widodh.nl wrote:

Hi Marcus,

This is somewhat of a rhetorical question, but I wanted to confirm anyway.

The 4.2 SystemVMs use a VirtIO socket on KVM to get their boot arguments.
That is great, since there is no longer a need for patch disks which enables
them to run on RBD.

One of the things I dislike about the KVM agent is all the scripts it 
runs. I'd rather see them all disappear, since executing scripts and 
getting the correct exit statuses back is always difficult.

Anyway, the patchviasocket.pl script opens the VirtIO Unix Socket on the
hypervisor and writes some data to it.

I've been searching and with some third party libraries it is also possible
to do this natively from Java, for example:

* http://www.matthew.ath.cx/projects/java/
* http://code.google.com/p/junixsocket/

They require native libraries with JNI on the system though, which would 
make our packaging efforts harder. Was this the reasoning behind doing 
this in Perl?

If so, why Perl? Since most of the stuff in CloudStack is done in Java,
Python or Bash, why Perl? Couldn't we rewrite this in Python if we don't
want to do this in Java?

Wido


Re: [VOTE] Release Apache CloudStack 4.1.0 (fourth round)

2013-05-27 Thread Wido den Hollander

Hi Chip,

I'm sorry, this is a -1 again.

Hiroaki found a bug in the Deb packaging again; the fix is commit: 
c4d61897d93420093dfbb046502b3f5e0d31fb9f


The agent wasn't depending on nfs-common, which would cause problems for 
users installing and running with NFS.


I also found another bug in the Deb packaging which I caused yesterday: 
d9084df9a9e46d663b8ef11639af528caab91bb5


server.xml and tomcat6.conf are both symlinks, and when I listed the 
files in /var/lib/dpkg/info/cloudstack-management.list these symlinks 
didn't show up as part of the package.


Anyway, again, these are just packaging issues; code-wise it works for me.

Wido

On 05/26/2013 04:54 PM, Chip Childers wrote:

Hi All,

I've created a 4.1.0 release, with the following artifacts up for a
vote.

The changes from round 3 are two commits related to DEB packaging.

Git Branch and Commit SH:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
Commit:  db007da15290970c842c3229a11051c20b512a65

List of changes:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/

PGP release keys (signed using A99A5D58):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Testing instructions are here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to
indicate (binding) with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)



Re: [VOTE] Release Apache CloudStack 4.1.0 (fifth round)

2013-05-29 Thread Wido den Hollander

+1 (binding)

Since the changes in this case are only Deb related, I'm basing my vote 
on previous rounds.


I verified the Deb packages this time and both the Management server and 
AWS API install cleanly now.


I'm not an expert on the AWS API, but I see the Mgmt server listening on 
port 7080.


On 05/28/2013 03:47 PM, Chip Childers wrote:

Hi All,

I've created a 4.1.0 release, with the following artifacts up for a
vote.

The changes from round 4 are related to DEB packaging, some
translation strings, and a functional patch to make bridge type
optional during the agent setup (for backward compatibility).

Git Branch and Commit SH:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
Commit: a5214bee99f6c5582d755c9499f7d99fd7b5b701

List of changes:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/

PGP release keys (signed using A99A5D58):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

Testing instructions are here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to
indicate (binding) with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)





Re: [PROPOSAL] Pushback 4.2.0 Feature Freeze

2013-05-30 Thread Wido den Hollander



On 05/30/2013 07:43 PM, Chiradeep Vittal wrote:

I'm actually OK with delaying the release (as you pointed out, 4.1
impacted 4.2 in a big way). *I* like flexibility. But it behooves the
community to have a stable set of rules.

It is the cognitive dissonance that bothers me. Theoretically a time-based
release doesn't care about such impacts, but reality is that if someone
has been working on a feature for 4 months and it doesn't make it because
of the cut-off, they are going to feel aggrieved, especially if at some
point in the past the community agreed to make an exception.



I ack on this one. A lot of work went into the object store branch 
(since that's what the discussion seems to be pointing at) and it would 
be a nightmare for the developers to merge this into 4.3.


John had valid points on the merge of the branch, but imho those can be 
fixed after it's merged in.


It's a feature freeze, but that doesn't mean we can't do any squashing 
of bugs.


Other developers are also waiting to merge their stuff in after the 
freeze, so it will hit 4.3.


I wouldn't keep the feature window open longer, since that might bring 
more stuff into 4.2 which needs QA as well.


Wido


On 5/30/13 3:49 AM, John Burwell jburw...@basho.com wrote:


Chiradeep,

As I understood that conversation, it was about permanently changing
the length of release cycles.  I am proposing that we acknowledge the
impact of the longer than anticipated 4.1.0 release, and push out
4.2.0.  4.3.0 would still be a four month release cycle, it would just
start X weeks later.

I like Chip's compromise of 4 weeks.  I think it would be a great
benefit to the 4.2.0 release if the community had the opportunity to
completely focus on its development for some period of time.

Finally, to David's concern that other features might be added during
such an extension: I think that would be acceptable provided they
pass review.  The goal of my proposal is not to permit more features but
to give the community time to review and collaborate on changes coming
into the release.  If additional high quality feature implementations
happen to get merged in during that period then I consider that a
happy side effect.

Thanks,
-John


On May 30, 2013, at 1:51 AM, Chiradeep Vittal
chiradeep.vit...@citrix.com wrote:


This topic was already discussed here:
http://www.mail-archive.com/dev@cloudstack.apache.org/msg03235.html


The consensus then was revisit *after* 4.2. I won't rehash the pros
and
cons, please do familiarize yourself with that thread.


On 5/29/13 10:10 PM, Mike Tutkowski mike.tutkow...@solidfire.com
wrote:


+1 Four weeks extra would be ideal in this situation.


On Wed, May 29, 2013 at 10:48 PM, Sebastien Goasguen
run...@gmail.comwrote:




On 30 May 2013, at 06:34, Chip Childers chip.child...@sungard.com
wrote:


On May 29, 2013, at 7:59 PM, John Burwell jburw...@basho.com wrote:


All,

Since we have taken an eight (8) week delay completing the 4.1.0

release, I would like to propose that we re-evaluate the timelines for
the
4.2.0 release.  When the schedule was originally conceived, it was
intended
that the project would have eight (8) weeks to focus exclusively on
4.2.0
development.  Unfortunately, this delay has created an unfortunate
conflict
between squashing 4.1.0 bugs and completing 4.2.0 features.  I propose
that
we acknowledge this schedule impact, and push back the 4.2.0 feature
freeze
date by eight (8) weeks to 2 August 2013.  This delay will give the
project
time to properly review merges and address issues holistically, and,
hopefully, relieve a good bit of the stress incurred by the
simultaneous
4.1.0 and 4.2.0 activities.


Thanks,
-John


This is a reasonable idea IMO. I'd probably only extend by a month
personally, but your logic is sound.  I'd much rather have reasoned
discussions about code than argue procedural issues about timing any
day. This might help facilitate that on some of the features folks
are
scrambling to complete.

Others?


I am +1 on this, 4 weeks maybe ?





--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud
http://solidfire.com/solution/overview/?video=play






Re: [DISCUSS] Baremetal and UCS UI when to check in

2013-05-31 Thread Wido den Hollander

On 05/30/2013 10:33 PM, Chip Childers wrote:

On Wed, May 29, 2013 at 10:19:53PM +, Animesh Chaturvedi wrote:


As you may recall, we had disabled Baremetal in 4.1 and master because of 
blocker issues. Frank unfortunately has been tied up with $dayjob 
responsibilities and now plans to start fixing baremetal issues the 
following week.

Jessica Wang had been working on the UI for baremetal and UCS, but given 
that the feature is disabled she is not sure how/when to check it in. I 
see the following options:

1) Add the UI support when we enable and fix baremetal
2) Add the UI support now, keep it disabled and enable it later when Frank 
re-enables the feature


The UI changes that will be made are:
- Baremetal: add screens for the Baremetal DHCP server and Baremetal PXE server
- UCS:
  - Screen to add a UCS Manager to a Zone
  - Blades: list blades for the selected UCS Manager
  - Screen to associate blades to service profiles for that UCS Manager

IMHO option 1 is preferred since it ensures the UI comes in when the 
feature is working and integration tested. Please share your thoughts.


Thanks
Animesh



It seems silly to have it in the UI, but not in the underlying code.



+1 It should be the other way around. Code first, UI later or at the 
same time.


Having dead UI options would also confuse people.

Wido


BTW - is there any progress on getting the tests requested by David when
he vetoed the merge?

-chip





Re: https://reviews.apache.org/r/9539/

2013-05-31 Thread Wido den Hollander

Hi Wei,

On 05/31/2013 09:25 AM, Wei Zhou wrote:

Abhinandan,

Thanks!
I have committed it to 4.0 branch.



The agreement is that no commits are made to either the 4.0 or 4.1 
branch without going through Joe Brockmeier (4.0) or Chip Childers (4.1).


4.0 is a released version (4.0.2) and there is not going to be a 4.0.3, 
so your bug fixes will never be released in an Apache version.


4.1 is the next release coming up, but I don't know if these fixes are 
already in 4.1. The vote for 4.1 is currently active, and if it passes, 
4.1 will be released in its *current* state soon.


If your fixes are not in, they will go into 4.1.1 and obviously 4.2.

All commits should go into master and you should send a cherry-pick 
request to the dev list to get them into a version branch.


Wido


Kind Regards,

Wei ZHOU
Innovation Engineer Cloud, LeaseWeb B.V.
w.z...@leaseweb.com

From: Abhinandan Prateek [mailto:cloudst...@aprateek.com]
Sent: 24 May 2013 07:13
To: Wei Zhou
Cc: dev@cloudstack.apache.org
Subject: https://reviews.apache.org/r/9539/

Hi Wei,

https://reviews.apache.org/r/9539/ was pending in my review dashboard. Now 
that you are a committer I think you can commit it.
I was on vacation and only recently saw it in my queue.

Congrats once again !

-abhi





Re: [DISCUSS] Disaster Recovery solution for CloudStack

2013-05-31 Thread Wido den Hollander

On 05/31/2013 11:02 AM, Nguyen Anh Tu wrote:

Hi folks,

I'm looking for a Disaster Recovery solution on CS. Looking around I found
an article showing some great information, but not enough. Personally I
think:

+ Host: CS already implements migration, which can move VMs to another host
+ Database: we have replication


Replication is not enough. With CloudStack your database is key; it's 
your most precious metadata, so make sure you have good, very good 
backups of it.


The best would be a dump of the database every X hours, plus the binary 
logs, so that you can do a point-in-time restore of your database.



+ Management Server: we can use multiple MSes
+ Primary Storage: it's the most important component when Disaster Recovery
happens. It contains ROOT and DATA volumes, and nobody is happy if they're
lost. We need a mirror (or replication) solution here. Many distributed file
systems can help (GlusterFS, Ceph, Hadoop...). An interesting solution I found
on XenServer is the Portable SR, which makes the SR fully self-contained. We
can detach it and re-attach it to a new host. Nothing lost.


CloudStack can't tell you anything about how safe the data is on your 
primary storage, so you just want to make sure you never lose data on it.


Ceph is a great example (I'm a big time fan!) of how you can store your 
data on multiple machines. But even when not using Ceph, just make sure 
you don't lose data on it.


ZFS with zfs send|receive is a great way to back up your data to a 
secondary location in case something goes wrong and you need to restore.


Wido


+ Secondary Storage: easy to back up.

What do you think? Do you have a plan for Disaster Recovery?

Thanks,





Re: [MERGE] disk_io_stat branch into MASTER

2013-05-31 Thread Wido den Hollander

Hi Wei,

On 05/30/2013 05:47 PM, Wei ZHOU wrote:

Hi,

I would like to merge the disk_io_stat branch into master.
If nobody objects, I will merge it into master in 48 hours.



I tried reviewing the whole patch and I mainly focused on the KVM side 
since that's what I know most about.


Code-wise it seems good, although I'm not sure about the .sql file.

Shouldn't the tables be created in create-schema.sql as well? That file 
is used for newly created clusters.


Wido


The feature includes

(1) Add disk I/O polling for instances to CloudStack.

(2) Add it to the instance vm disk statistics table.

(3) and add it to the usage database for optional billing in public clouds.

JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
FS (I will update later) :
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+IO+statistics+for+instances

Merge check list :-

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new
feature?
Unit tests (UsageManagerTest) are added.

* What testing has been done to check for potential regressions?
(1) CloudStack UI display the bytes rate and IO rate.

(2) VM operations, including

deploy, stop, start, reboot, destroy, expunge. migrate, restore

(3) Volume operations, including

Attach, Detach

* Existing issue

(1) For XenServer/XCP, the XenAPI returns bytes per second instead of total I/O.


To review the code, you can try

git diff 7fb6eaa0ca5f0f58b23ab6af812db6366743717a
c30057635d04a2396f84c588127d7ebe42e503a7

Best regards,

Wei

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+IO+statistics+for+instances
[2] refs/heads/disk_io_stat
[3] https://issues.apache.org/jira/browse/CLOUDSTACK-1192 https://issues.apache.org/jira/browse/CLOUDSTACK-2071 (CLOUDSTACK-1192 - Add disk I/O statistics of instances)





Re: [MERGE] disk_io_throttling to MASTER

2013-05-31 Thread Wido den Hollander

Hi Wei,

On 05/30/2013 06:03 PM, Wei ZHOU wrote:

Hi,
I would like to merge the disk_io_throttling branch into master.
If nobody objects, I will merge it into master in 48 hours.
The purpose is:

Virtual machines run on the same storage device (local storage or shared
storage). Because of the rate limitation of the device (such as IOPS), if
one VM has heavy disk activity, it may affect the disk performance of the
other VMs running on the same storage device.
It is necessary to set a maximum rate and limit the disk I/O of VMs.



Looking at the code I see you make no distinction between read and write 
IOps.


Qemu and libvirt support setting a different rate for read and write 
IOps, which could benefit a lot of users.


It's also strange that on the polling side you collect both the read and 
write IOps, but on the throttling side you only go for a global value.


Write IOps are usually much more expensive than read IOps, so it seems 
like a valid use case that an admin would set a lower value for write 
IOps than for read IOps.


Since this only supports KVM at this point, I think it would be of great 
value to at least have the mechanism in place to support both; 
implementing this later would be a lot of work.


If a hypervisor doesn't support setting different values for read and 
write you can always sum both up and set that as the total limit.


Can you explain why you implemented it this way?

Wido


The feature includes:

(1) set the maximum rate of VMs (in disk_offering, and global configuration)
(2) change the maximum rate of VMs
(3) limit the disk rate (total bps and iops)
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-1192
FS (I will update later) :
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
Merge check list :-

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new
feature?
Unit tests are added.

* What testing has been done to check for potential regressions?
(1) set the bytes rate and IOPS rate on CloudStack UI.
(2) VM operations, including
deploy, stop, start, reboot, destroy, expunge. migrate, restore
(3) Volume operations, including
Attach, Detach

To review the code, you can try
git diff c30057635d04a2396f84c588127d7ebe42e503a7
f2e5591b710d04cc86815044f5823e73a4a58944

Best regards,
Wei

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
[2] refs/heads/disk_io_throttling
[3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301 https://issues.apache.org/jira/browse/CLOUDSTACK-2071 (CLOUDSTACK-1301 - VM Disk I/O Throttling)





Re: [MERGE] disk_io_throttling to MASTER

2013-05-31 Thread Wido den Hollander

Hi Wei,

On 05/31/2013 03:13 PM, Wei ZHOU wrote:

Hi Wido,

Thanks. Good question.

I thought about it at the beginning. In the end I decided to ignore the
difference between read and write, mainly because the network throttling
does not distinguish between sent and received bytes either.


That reasoning seems odd. Networking and disk I/O are completely different.

Disk I/O is, in most situations, much more expensive than network bandwidth.


Implementing it will be some copy-paste work. It could be done in a few
days. Given the feature freeze deadline, I will implement it after that,
if needed.



I think it's a feature we can't miss. But if it goes into the 4.2 
window, we have to make sure we don't release with only total IOps and 
then fix it in 4.3; that would confuse users.


Wido


-Wei




2013/5/31 Wido den Hollander w...@widodh.nl


Hi Wei,


On 05/30/2013 06:03 PM, Wei ZHOU wrote:


Hi,
I would like to merge disk_io_throttling branch into master.
If nobody object, I will merge into master in 48 hours.
The purpose is :

Virtual machines are running on the same storage device (local storage or
share strage). Because of the rate limitation of device (such as iops), if
one VM has large disk operation, it may affect the disk performance of
other VMs running on the same storage device.
It is neccesary to set the maximum rate and limit the disk I/O of VMs.



Looking at the code I see you make no difference between Read and Write
IOps.

Qemu and libvirt support setting both a different rate for Read and Write
IOps which could benefit a lot of users.

It's also strange, in the polling side you collect both the Read and Write
IOps, but on the throttling side you only go for a global value.

Write IOps are usually much more expensive then Read IOps, so it seems
like a valid use-case where that an admin would set a lower value for write
IOps vs Read IOps.

Since this only supports KVM at this point I think it would be of great
value to at least have the mechanism in place to support both, implementing
this later would be a lot of work.

If a hypervisor doesn't support setting different values for read and
write you can always sum both up and set that as the total limit.

Can you explain why you implemented it this way?

Wido

  The feature includes:


(1) set the maximum rate of VMs (in disk_offering, and global
configuration)
(2) change the maximum rate of VMs
(3) limit the disk rate (total bps and iops)
JIRA ticket: 
https://issues.apache.org/jira/browse/CLOUDSTACK-1192
FS (I will update later) :
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
Merge check list :-

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new
feature?
Unit tests are added.

* What testing has been done to check for potential regressions?
(1) set the bytes rate and IOPS rate on CloudStack UI.
(2) VM operations, including
deploy, stop, start, reboot, destroy, expunge. migrate, restore
(3) Volume operations, including
Attach, Detach

To review the code, you can try
git diff c30057635d04a2396f84c588127d7ebe42e503a7
f2e5591b710d04cc86815044f5823e73a4a58944

Best regards,
Wei

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
[2] refs/heads/disk_io_throttling
[3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301 https://issues.apache.org/jira/browse/CLOUDSTACK-2071 (CLOUDSTACK-1301 - VM Disk I/O Throttling)










Re: [MERGE] disk_io_throttling to MASTER

2013-05-31 Thread Wido den Hollander

On 05/31/2013 03:59 PM, John Burwell wrote:

Wido,

+1 -- this enhancement must discretely support read and write IOPS. I don't 
see how it could be fixed later, because I don't see how we would correctly 
split total IOPS into read and write. Therefore, we would be stuck with a 
total unless/until we decided to break backwards compatibility.



What Wei meant was merging it into master now so that it will go into 
the 4.2 branch, and adding read/write IOps before the 4.2 release, so 
that 4.2 will be released with read and write instead of total IOps.


This is to make the May 31st feature freeze date. But if the window 
moves (see other threads) then it won't be necessary to do that.


Wido


I also completely agree that there is no association between network and disk 
I/O.

Thanks,
-John

On May 31, 2013, at 9:51 AM, Wido den Hollander w...@widodh.nl wrote:


Hi Wei,

On 05/31/2013 03:13 PM, Wei ZHOU wrote:

Hi Wido,

Thanks. Good question.

I  thought about at the beginning. Finally I decided to ignore the
difference of read and write mainly because the network throttling did not
care the difference of sent and received bytes as well.


That reasoning seems odd. Networking and disk I/O completely different.

Disk I/O is much more expensive in most situations then network bandwith.


Implementing it will be some copy-paste work. It could be implemented in
few days. For the deadline of feature freeze, I will implement it after
that , if needed.



It think it's a feature we can't miss. But if it goes into the 4.2 window we 
have to make sure we don't release with only total IOps and fix it in 4.3, that 
will confuse users.

Wido


-Wei




2013/5/31 Wido den Hollander w...@widodh.nl


Hi Wei,


On 05/30/2013 06:03 PM, Wei ZHOU wrote:


Hi,
I would like to merge disk_io_throttling branch into master.
If nobody object, I will merge into master in 48 hours.
The purpose is :

Virtual machines are running on the same storage device (local storage or
share strage). Because of the rate limitation of device (such as iops), if
one VM has large disk operation, it may affect the disk performance of
other VMs running on the same storage device.
It is neccesary to set the maximum rate and limit the disk I/O of VMs.



Looking at the code I see you make no difference between Read and Write
IOps.

Qemu and libvirt support setting both a different rate for Read and Write
IOps which could benefit a lot of users.

It's also strange, in the polling side you collect both the Read and Write
IOps, but on the throttling side you only go for a global value.

Write IOps are usually much more expensive then Read IOps, so it seems
like a valid use-case where that an admin would set a lower value for write
IOps vs Read IOps.

Since this only supports KVM at this point I think it would be of great
value to at least have the mechanism in place to support both, implementing
this later would be a lot of work.

If a hypervisor doesn't support setting different values for read and
write you can always sum both up and set that as the total limit.

Can you explain why you implemented it this way?

Wido

  The feature includes:


(1) set the maximum rate of VMs (in disk_offering, and global
configuration)
(2) change the maximum rate of VMs
(3) limit the disk rate (total bps and iops)
JIRA ticket: 
https://issues.apache.org/jira/browse/CLOUDSTACK-1192
FS (I will update later) :
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
Merge check list :-

* Did you check the branch's RAT execution success?
Yes

* Are there new dependencies introduced?
No

* What automated testing (unit and integration) is included in the new
feature?
Unit tests are added.

* What testing has been done to check for potential regressions?
(1) set the bytes rate and IOPS rate on CloudStack UI.
(2) VM operations, including
deploy, stop, start, reboot, destroy, expunge. migrate, restore
(3) Volume operations, including
Attach, Detach

To review the code, you can try
git diff c30057635d04a2396f84c588127d7ebe42e503a7
f2e5591b710d04cc86815044f5823e73a4a58944

Best regards,
Wei

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Disk+IO+Throttling
[2] refs/heads/disk_io_throttling
[3] https://issues.apache.org/jira/browse/CLOUDSTACK-1301 https://issues.apache.org/jira/browse/CLOUDSTACK-2071 (CLOUDSTACK-1301 - VM Disk I/O Throttling)














Re: [QUESTION] db unavailable recovery

2014-03-03 Thread Wido den Hollander



On 03/03/2014 03:29 PM, Daan Hoogland wrote:

H,

is it normal behavior that cloudstack won't reconnect to the db if it
has been down while cloudstack was running? I would expect a
reconnect. Or is there a rationale for not attempting a reconnect?



It doesn't, and that's normal. Since CS relies on the DB, it simply 
kills itself if it loses the connection.


Normally you have multiple management servers running, so losing one 
shouldn't be a problem.


There is an old thread about this on the dev@ list explaining it all.

Wido


Re: master branch DbUtilTest failure

2014-03-03 Thread Wido den Hollander



On 03/03/2014 11:40 PM, Prachi Damle wrote:

Hi,

I am facing this error while doing a clean install on latest master.

It fails running 'com.cloud.utils.DbUtilTest' with error 'Could not initialize 
class com.cloud.utils.db.TransactionLegacy'

I am on Windows.  Anyone seeing this? Any ideas to fix?



Does the logfile of the surefire plugin say something interesting? 
Should be in the target directory.




Prachi




[INFO] 
[INFO] Building Apache CloudStack Framework - Event Notification 4.4.0-SNAPSHOT
[INFO] 
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cloud-framework-db ---
[INFO] Deleting 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db\target 
(includes = [**/*], excludes = [])
[INFO] Deleting 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db (includes = 
[target, dist], excludes = [])
[INFO]
[INFO] --- maven-checkstyle-plugin:2.11:check (cloudstack-checkstyle) @ 
cloud-framework-db ---
[INFO] Starting audit...
Audit done.

[INFO]
[INFO] --- maven-remote-resources-plugin:1.3:process (default) @ 
cloud-framework-db ---
[INFO]
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ 
cloud-framework-db ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
cloud-framework-db ---
[INFO] Compiling 43 source files to 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db\target\classes
[INFO]
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ 
cloud-framework-db ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db\test\resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
cloud-framework-db ---
[INFO] Compiling 15 source files to 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db\target\test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.12:test (default-test) @ cloud-framework-db 
---
[INFO] Surefire report directory: 
C:\cloud\apache-cloudstack-oss\incubator-cloudstack\framework\db\target\surefire-reports

---
T E S T S
---
Running com.cloud.utils.DbUtilTest
log4j:WARN No appenders could be found for logger 
(com.cloud.utils.crypt.EncryptionSecretKeyChecker).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Tests run: 28, Failures: 0, Errors: 26, Skipped: 2, Time elapsed: 0.667 sec  
FAILURE!

Results :

Tests in error:
   getTableName(com.cloud.utils.DbUtilTest)
   getTableName(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeStatement(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeStatement(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeStatementFail(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeStatementFail(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeResultSet(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeResultSet(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   getGlobalLock(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   getGlobalLock(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeResultSetFail(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeResultSetFail(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   releaseGlobalLock(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   releaseGlobalLock(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeNull(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeNull(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeConnection(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   closeConnection(com.cloud.utils.DbUtilTest): Could not initialize class 
com.cloud.utils.db.TransactionLegacy
   

Re: ALARM - ACS reboots host servers!!!

2014-03-04 Thread Wido den Hollander

On 03/04/2014 03:38 PM, Marcus wrote:

On Tue, Mar 4, 2014 at 3:34 AM, France mailingli...@isg.si wrote:

Hi Marcus and others.

There is no need to kill off the entire hypervisor if one of the primary
storages fails. You just need to kill the VMs and probably disable the SR
on XenServer, because all other SRs and VMs have no problems. If you kill
those, then you can safely start them elsewhere. On XenServer 6.2 you can
destroy the VMs which lost access to NFS without any problems.


That's a great idea, but as already mentioned, it doesn't work in
practice. You can't kill a VM that is hanging in D state, waiting on
storage. I also mentioned that it causes problems for libvirt and much
of the other system not using the storage.



Just tuning in here, and Marcus is right. If NFS is hanging, the 
processes go into D state, both Qemu/KVM and libvirt.


The only remedy at that point, to fence off the host, is a reboot; you 
can't do anything with the processes which are blocking.


When you run stuff which only lives in userspace like Ceph with librbd 
it's a different story, but with NFS you are stuck.




If you really want to still kill the entire host and its VMs in one go, I
would suggest live migrating the VMs which have not lost their storage off
first, and then killing the VMs on the stale NFS by doing a hard reboot.
The additional time while migrating the working VMs would even give NFS
some grace time to maybe recover. :-)


You won't be able to live migrate a VM that is stuck in D state, or
use libvirt to do so if one of its storage pools is unresponsive,
anyway.



Indeed, same issue again. Libvirt COMPLETELY blocks, not just one 
storage pool.




Hard reboot to recover from D state of NFS client can also be avoided by
using soft mount options.


As mentioned, soft and intr very rarely actually work, in my
experience. I wish they did as I truly have come to loathe NFS for it.



Indeed, they almost never work. I've been working with NFS for over 10 
years now and those damn options have NEVER worked properly.


That's just the downside of having stuff go through kernel space.



I run a bunch of Pacemaker/Corosync/Cman/Heartbeat/etc clusters and we don't
just kill whole nodes but fence services from specific nodes. STONITH is
invoked only when the node loses quorum.


Sure, but how do you fence a KVM host from an NFS server? I don't
think we've written a firewall plugin that works to fence hosts from
any NFS server. Regardless, what CloudStack does is more of a poor
man's clustering; the mgmt server does the locking in the sense that it
is managing what's going on, but it's not a real clustering service.
Heck, it doesn't even STONITH, it tries to clean shutdown, which fails
as well due to hanging NFS (per the mentioned bug, to fix it they'll
need IPMI fencing or something like that).



IPMI fencing is something I've been thinking about as well. Would be a 
great benefit for the HA in CloudStack.



I didn't write the code, I'm just saying that I can completely
understand why it kills nodes when it deems that their storage has
gone belly-up. It's dangerous to leave that D state VM hanging around,
and it will until the NFS storage comes back. In a perfect world you'd
just stop the VMs that were having the issue, or if there were no VMs
you'd just de-register the storage from libvirt, I agree.



de-register won't work either... Libvirt tries a umount which will block 
as well.


Wido



Regards,
F.


On 3/3/14 5:35 PM, Marcus wrote:


It's the standard clustering problem. Any software that does any sort
of active clustering is going to fence nodes that have problems, or
should if it cares about your data. If the risk of losing a host due
to a storage pool outage is too great, you could perhaps look at
rearranging your pool-to-host correlations (certain hosts run vms from
certain pools) via clusters. Note that if you register a storage pool
with a cluster, it will register the pool with libvirt when the pool
is not in maintenance, which, when the storage pool goes down will
cause problems for the host even if no VMs from that storage are
running (fetching storage stats for example will cause agent threads
to hang if its NFS), so you'd need to put ceph in its own cluster and
NFS in its own cluster.

It's far more dangerous to leave a host in an unknown/bad state. If a
host loses contact with one of your storage nodes, with HA, cloudstack
will want to start the affected VMs elsewhere. If it does so, and your
original host wakes up from it's NFS hang, you suddenly have a VM
running in two locations, corruption ensues. You might think we could
just stop the affected VMs, but NFS tends to make things that touch it
go into D state, even with 'intr' and other parameters, which affects
libvirt and the agent.

We could perhaps open a feature request to disable all HA and just
leave things as-is, disallowing operations when there are outages. If
that sounds useful you can create the feature request on

Re: Change Volume IOPS on fly without detaching the disk feature.

2014-03-05 Thread Wido den Hollander



On 03/05/2014 10:12 AM, Wei ZHOU wrote:

I was thinking about it last week.
AFAIK, libvirt-java 0.5.1 does not support changing settings on running VMs,
but the virsh command line and the libvirt API do support it.
So the solutions are:
(1) change libvirt-java to support it, and get it released in the next
version. Maybe Wido can help us.


Sure! That seems the best way forward. What is currently lacking in the 
libvirt-java bindings?



(2) call virsh command line.



Please, please, do not do that. That's very hacky. We should really keep 
using the libvirt-java bindings and stay away from invoking binaries.


Wido


-Wei

2014-03-05 9:01 GMT+01:00 Punith S punit...@cloudbyte.com:


hi guys,

we have a fixed max IOPS for each volume attached to an instance in
managed storage, so this is a problem where we make users pre-allocate the
IOPS of the disk without having an option to change or resize them, similar
to the size metric.

so I would like to introduce a new feature which enables changing or
resizing the volume IOPS on the fly, without detaching the data disk of the
VM and with zero downtime, so that the performance of the data disk can be
altered at any point within the available IOPS of the primary storage pool.
This is similar to resizing the volume or data disk of the VM, where in the
latter case we have to detach the data disk.

what do you guys think about this feature ? any feedback ?

thanks,

--
regards,

punith s
cloudbyte.com





Re: Change Volume IOPS on fly without detaching the disk feature.

2014-03-06 Thread Wido den Hollander



On 03/05/2014 07:18 PM, Marcus wrote:

For the hypervisor version of throttling, we just need
ResizeVolumeCommand to pass the VolumeObjectTO rather than just the
volume uuid/path, so that when we change offerings on the agent side
we have the info we need to update libvirt with the new iops/bytes
settings. We also need the libvirt java bindings to do so, per
previous discussion.



I'm already working on the patch: 
https://github.com/wido/libvirt-java/tree/change-iops

It's not so hard to implement, it seems. Hopefully I'll have it ready 
after the weekend.



On Wed, Mar 5, 2014 at 11:12 AM, Marcus shadow...@gmail.com wrote:

Wouldn't this be implemented as just changing disk offerings? The
resizeVolume API call already allows you to switch disk offerings, we
just need to add a hook in there to optionally call the storage driver
(If volume is deployed to a primary storage) to make an update to the
iops properties on the backend storage. Come to think of it, depending
on how storage drivers are implementing the iops/limits feature,
resizeVolume might be breaking this, or simply requiring a reboot to
apply. That is, if the storage driver is setting the iops just once
upon volume creation, it's probably breaking when a user moves a disk
between offerings that may have alternate iops limits (this is
probably not the case for hypervisor throttling, as that's applied
from whatever is current when the VM starts up).

On Wed, Mar 5, 2014 at 9:58 AM, Mike Tutkowski
mike.tutkow...@solidfire.com wrote:

Hi,

Perhaps I'm not following this correctly, but I'm a bit lost on why we are
talking about changing settings on running VMs.

 From what I understand, you are a representative of a storage vendor that
has a rate-limiting feature. You want to be able to not only set the Max
IOPS, but also adjust them. Is this true?

If so, I totally agree. SolidFire has control over Min and Max IOPS and it
is on my to-do list to add support into CloudStack to be able to
dynamically change these values (right now customers do this from the
SolidFire API or its GUI).

If you would like to work on this feature, that would be great. I'd be
happy to review your design and code.

One complication is that we are looking at adding support for generic
key/value pairs for storage plug-ins in 4.5 and this would effectively
remove the need to have Min and Max IOPS as special fields in the
CloudStack API and GUI.

I'm going to CC Chris Suichll (from NetApp) as he and I have already
discussed this generic-properties concept. It would be good to get his
feedback on how we might go about dynamically updating storage-plug-in
key/value pairs.

Thanks!
Mike


On Wed, Mar 5, 2014 at 3:12 AM, Wido den Hollander w...@widodh.nl wrote:




On 03/05/2014 10:12 AM, Wei ZHOU wrote:


I was thinking about it last week.
AFAIK, libvirt-java 0.5.1 does not support change setting on running vms,
but virsh command line and libvirt API supports it.
so the sulution are
(1) change libvirt-java to support it, and make it released in the next
version. Maybe Wido can help us.



Sure! That seems the best way forward. What is currently lacking in the
libvirt-java bindings?


  (2) call virsh command line.




Please, please, do not do that. That's very hacky. We should really keep
using the libvirt-java bindings and stay away from invoking binaries.

Wido


  -Wei


2014-03-05 9:01 GMT+01:00 Punith S punit...@cloudbyte.com:

  hi guys,


we are having a fixed max iops for each volume being attached to the
instance in managed storage,
so this a problem where we are making users to pre allocate the iops of
the
disk without having an option to change or resize it, similar to the size
metric.

so i would like to introduce a new feature which enables to change or
resize the volume iops on fly without detaching the datadisk of the VM
with
zero downtime where performance of the datadisk can be altered at any
point
with the available iops of the primary storage pool, which is similar in
resizing the volume or datadisk of the vm , where in latter we have to
detach the datadisk.

what do you guys think about this feature ? any feedback ?

thanks,

--
regards,

punith s
cloudbyte.com







--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud (tm)
http://solidfire.com/solution/overview/?video=play


Re: Change Volume IOPS on fly without detaching the disk feature.

2014-03-06 Thread Wido den Hollander



On 03/06/2014 11:48 AM, Wei ZHOU wrote:

awesome!

Can you implement CPU tuning or network QoS as well? Thanks!



Yes, I was planning on adding multiple methods at once with a couple of 
patches.



-Wei


2014-03-06 11:42 GMT+01:00 Wido den Hollander w...@widodh.nl:




On 03/05/2014 07:18 PM, Marcus wrote:


For the hypervisor version of throttling, we just need
ResizeVolumeCommand to pass the VolumeObjectTO rather than just the
volume uuid/path, so that when we change offerings on the agent side
we have the info we need to update libvirt with the new iops/bytes
settings. We also need the libvirt java bindings to do so, per
previous discussion.



I'm already working on the patch: https://github.com/wido/libvirt-java/tree/change-iops

It's not so hard to implement it seems. Hopefully I'll have it ready after
the weekend.


  On Wed, Mar 5, 2014 at 11:12 AM, Marcus shadow...@gmail.com wrote:



Wouldn't this be implemented as just changing disk offerings? The
resizeVolume API call already allows you to switch disk offerings, we
just need to add a hook in there to optionally call the storage driver
(If volume is deployed to a primary storage) to make an update to the
iops properties on the backend storage. Come to think of it, depending
on how storage drivers are implementing the iops/limits feature,
resizeVolume might be breaking this, or simply requiring a reboot to
apply. That is, if the storage driver is setting the iops just once
upon volume creation, it's probably breaking when a user moves a disk
between offerings that may have alternate iops limits (this is
probably not the case for hypervisor throttling, as that's applied
from whatever is current when the VM starts up).

On Wed, Mar 5, 2014 at 9:58 AM, Mike Tutkowski
mike.tutkow...@solidfire.com wrote:


Hi,

Perhaps I'm not following this correctly, but I'm a bit lost on why we
are
talking about changing settings on running VMs.

  From what I understand, you are a representative of a storage vendor
that
has a rate-limiting feature. You want to be able to not only set the Max
IOPS, but also adjust them. Is this true?

If so, I totally agree. SolidFire has control over Min and Max IOPS and
it
is on my to-do list to add support into CloudStack to be able to
dynamically change these values (right now customers do this from the
SolidFire API or its GUI).

If you would like to work on this feature, that would be great. I'd be
happy to review your design and code.

One complication is that we are looking at adding support for generic
key/value pairs for storage plug-ins in 4.5 and this would effectively
remove the need to have Min and Max IOPS as special fields in the
CloudStack API and GUI.

I'm going to CC Chris Suichll (from NetApp) as he and I have already
discussed this generic-properties concept. It would be good to get his
feedback on how we might go about dynamically updating storage-plug-in
key/value pairs.

Thanks!
Mike


On Wed, Mar 5, 2014 at 3:12 AM, Wido den Hollander w...@widodh.nl
wrote:




On 03/05/2014 10:12 AM, Wei ZHOU wrote:

  I was thinking about it last week.

AFAIK, libvirt-java 0.5.1 does not support change setting on running
vms,
but virsh command line and libvirt API supports it.
so the sulution are
(1) change libvirt-java to support it, and make it released in the
next
version. Maybe Wido can help us.



Sure! That seems the best way forward. What is currently lacking in the
libvirt-java bindings?


   (2) call virsh command line.




  Please, please, do not do that. That's very hacky. We should really

keep
using the libvirt-java bindings and stay away from invoking binaries.

Wido


   -Wei



2014-03-05 9:01 GMT+01:00 Punith S punit...@cloudbyte.com:

   hi guys,



we are having a fixed max iops for each volume being attached to the
instance in managed storage,
so this a problem where we are making users to pre allocate the iops
of
the
disk without having an option to change or resize it, similar to the
size
metric.

so i would like to introduce a new feature which enables to change or
resize the volume iops on fly without detaching the datadisk of the
VM
with
zero downtime where performance of the datadisk can be altered at any
point
with the available iops of the primary storage pool, which is
similar in
resizing the volume or datadisk of the vm , where in latter we have
to
detach the datadisk.

what do you guys think about this feature ? any feedback ?

thanks,

--
regards,

punith s
cloudbyte.com







--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud (tm)
http://solidfire.com/solution/overview/?video=play







Re: master issue calling volume resize via libvirt - Was: libvirt-java upgrade

2014-03-08 Thread Wido den Hollander



On 03/08/2014 12:32 AM, Marcus wrote:

On Fri, Mar 7, 2014 at 4:30 PM, Marcus shadow...@gmail.com wrote:

Hrm... sent instead of pasted. Commit

commit 3989d6c48118f31464c87c71b6279a11eb13eb35
Author: Wido den Hollander w...@widodh.nl
Date:   Mon Feb 3 17:04:11 2014 +0100

 kvm: Resize volumes using libvirt

virsh blockresize works on this system, so I can only assume that the
libvirt.so.0.9.8 that ships with Ubuntu 12.04 doesn't support
virStorageVolResize.

# strings /usr/lib/libvirt.so.0.9.8  | grep virStorageVolR
virStorageVolRef
virStorageVolRef
virStorageVolRef


Hmm, that's a good one. I'm not able to check this right now, but on all 
my test systems I run libvirt 1.0.2 from the Ubuntu Cloud Archive, so 
that could be the problem.


Wido



On Fri, Mar 7, 2014 at 4:28 PM, Marcus shadow...@gmail.com wrote:

Wido,
 I'm seeing this in Ubuntu 12.04 after commit



2014-02-10 01:19:16,793 DEBUG [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-2:null) Volume
/mnt/2fe9a944-505e-38cb-bf87-72623634be4a/e47e6501-c8ae-41a7-9abc-0f7fdad5fb30
can be resized by libvirt. Asking libvirt to resize the volume.
2014-02-10 01:19:16,800 WARN  [cloud.agent.Agent]
(agentRequest-Handler-2:null) Caught:
java.lang.UnsatisfiedLinkError: Error looking up function
'virStorageVolResize': /usr/lib/libvirt.so.0.9.8: undefined symbol:
virStorageVolResize
at com.sun.jna.Function.init(Function.java:208)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:536)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:513)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:499)
at com.sun.jna.Library$Handler.invoke(Library.java:199)
at com.sun.proxy.$Proxy0.virStorageVolResize(Unknown Source)
at org.libvirt.StorageVol.resize(Unknown Source)
at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:1808)
at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1331)
at com.cloud.agent.Agent.processRequest(Agent.java:501)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:808)
at com.cloud.utils.nio.Task.run(Task.java:84)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Re: master issue calling volume resize via libvirt - Was: libvirt-java upgrade

2014-03-10 Thread Wido den Hollander



On 03/09/2014 01:19 AM, Marcus wrote:

I imagine the new LTS will have it, but I'm not sure what our OS support
policy is.


Well, I think we should also keep supporting 12.04 since it's not EOL 
until 2017.


But we can always say that we require the Ubuntu Cloud Archive to be used?

wido


On Mar 8, 2014 11:59 AM, Wido den Hollander w...@widodh.nl wrote:




On 03/08/2014 12:32 AM, Marcus wrote:


On Fri, Mar 7, 2014 at 4:30 PM, Marcus shadow...@gmail.com wrote:


Hrm... sent instead of pasted. Commit

commit 3989d6c48118f31464c87c71b6279a11eb13eb35
Author: Wido den Hollander w...@widodh.nl
Date:   Mon Feb 3 17:04:11 2014 +0100

  kvm: Resize volumes using libvirt

virsh blockresize works on this system, so I can only assume that the
libvirt.so.0.9.8 that ships with Ubuntu 12.04 doesn't support
virStorageVolResize.

# strings /usr/lib/libvirt.so.0.9.8  | grep virStorageVolR
virStorageVolRef
virStorageVolRef
virStorageVolRef




Hmm, that's a good one. I'm not able to check this right now, but on all
my test systems I run libvirt 1.0.2 from the Ubuntu Cloud Archive, so that
could be the problem.

Wido



On Fri, Mar 7, 2014 at 4:28 PM, Marcus shadow...@gmail.com wrote:


Wido,
  I'm seeing this in Ubuntu 12.04 after commit



2014-02-10 01:19:16,793 DEBUG [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-2:null) Volume
/mnt/2fe9a944-505e-38cb-bf87-72623634be4a/e47e6501-c8ae-
41a7-9abc-0f7fdad5fb30
can be resized by libvirt. Asking libvirt to resize the volume.
2014-02-10 01:19:16,800 WARN  [cloud.agent.Agent]
(agentRequest-Handler-2:null) Caught:
java.lang.UnsatisfiedLinkError: Error looking up function
'virStorageVolResize': /usr/lib/libvirt.so.0.9.8: undefined symbol:
virStorageVolResize
at com.sun.jna.Function.<init>(Function.java:208)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:536)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:513)
at com.sun.jna.NativeLibrary.getFunction(NativeLibrary.java:499)
at com.sun.jna.Library$Handler.invoke(Library.java:199)
at com.sun.proxy.$Proxy0.virStorageVolResize(Unknown Source)
at org.libvirt.StorageVol.resize(Unknown Source)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(
LibvirtComputingResource.java:1808)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.
executeRequest(LibvirtComputingResource.java:1331)
at com.cloud.agent.Agent.processRequest(Agent.java:501)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:808)
at com.cloud.utils.nio.Task.run(Task.java:84)
at java.util.concurrent.ThreadPoolExecutor.runWorker(
ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(
ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)







Re: CloudStack implementations

2014-03-18 Thread Wido den Hollander

On 03/18/2014 12:07 PM, Marcus wrote:

Do we have any general stats on how cloudstack is being used? Common
deployment sizes, largest deployments, etc? I'm curious as to how far
people have actually scaled it in real deployments, although I realize that
the info can be proprietary.



Recently at the Ceph project the tool ceph-brag was developed. It 
gathers information about your Ceph deployment and sends back the 
information to the project.


Something like this might be nice (opt-in!!) for CloudStack. It can 
anonymously report things like:

- Number of Instances
- Number of pods, clusters, hosts
- Number of Primary Storage and their type
- Basic / Advanced Networking

This could all be written into one JSON file which we can submit back to 
the project.
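For illustration, such a report could be as small as this (all field names here are
hypothetical, just to give an idea of the shape):

{
  "acs_version": "4.4.0",
  "organization": "anonymous",
  "instances": 1500,
  "pods": 4,
  "clusters": 12,
  "hosts": 96,
  "primary_storage": [ { "type": "RBD" }, { "type": "NFS" } ],
  "networking": "advanced"
}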


With this we would get more information about how CloudStack is used.

Obviously, the code will be Open Source so people can see how we gather 
the information (probably a lot of SQL selects..) and how we submit it 
to our servers.


Is that something you would like?

Wido


Re: Cloudstack documentation

2014-03-18 Thread Wido den Hollander

On 03/18/2014 02:08 PM, Pierre-Luc Dion wrote:

Hi all,

I would like to know where we can get a newer version of the documentation for
Cloudstack. It looks like the documentation site has not been updated since 4.2.0. I
know that the documentation system will change; is there any information
about it somewhere?



Yes, it moved to its own repository: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack-docs.git


$ git clone https://git-wip-us.apache.org/repos/asf/cloudstack-docs.git

From there you can probably build the docs yourself. Be aware that we 
are moving to RST.



Also, I would like to contribute as well.



Patches are always welcome!


Thanks,

Pierre-Luc Dion
Architecte de Solution Cloud | Cloud Solutions Architect
- - -

*CloudOps*420 rue Guy
Montréal QC  H3J 1S6
www.cloudops.com
@CloudOps_





Re: [RFC]Bypass Libvirt storage pool for NFS

2014-03-19 Thread Wido den Hollander



On 03/19/2014 07:54 PM, Edison Su wrote:

I have found many times in QA's testing environment that the libvirt storage 
pool (created on NFS) goes missing on the KVM host, frequently and for no reason. It 
may relate to bug https://bugzilla.redhat.com/show_bug.cgi?id=977706.
In order to fix this issue, and bug CLOUDSTACK-2729, we added a lot of 
workarounds to fight with libvirt, such as re-creating the pool whenever it 
can't be found. As the storage pool can be lost on a KVM host at 
any time, this causes a lot of operational errors: VMs can't be started, 
volumes can't be deleted, and so on.
I want to bypass the libvirt storage pool for NFS, as Java itself already has all 
the capabilities that libvirt provides here, such as creating a file, deleting a 
file, or listing a directory; there is no need to add another layer of crap here. 
In doing so, we won't be blocked by the libvirt 
bug (https://bugzilla.redhat.com/show_bug.cgi?id=977706) when supporting newer 
versions of KVM.
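For context, the workaround referred to above boils down to something like the
following libvirt-java sketch (pool XML and error handling simplified; this is
illustrative only, not the actual ACS code):

import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

public class PoolGuard {
    // Illustrative sketch of the "re-create the pool if it's gone" workaround.
    public static StoragePool lookupOrRecreate(Connect conn, String uuid, String poolXml)
            throws LibvirtException {
        try {
            return conn.storagePoolLookupByUUIDString(uuid);
        } catch (LibvirtException e) {
            // Pool vanished, e.g. after a libvirtd restart: define and start it again.
            StoragePool pool = conn.storagePoolDefineXML(poolXml, 0);
            pool.create(0);
            return pool;
        }
    }
}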



-1

I understand the issues which we see here, but imho the way forward is 
to fix this in libvirt instead of simply going around it.


We should not try to re-invent the wheel here, but fix the root-cause.

Yes, Java can do a lot, but I think libvirt can do this better.

For the RBD code I also had a couple of changes go into libvirt recently 
and this NFS issue can also be fixed.


Losing NFS pools in libvirt is most of the time due to a restart of 
libvirt; they don't magically disappear.


I agree that we should be able to start the pool again even while it's 
mounted, but that's something we should fix in libvirt.


Wido


Re: CloudStack implementations

2014-03-19 Thread Wido den Hollander



On 03/18/2014 02:06 PM, Rohit Yadav wrote:

Cool idea, we can also have a monkey-brag ACS plugin which gives users an
API which can be triggered via cloudmonkey cli tool or by some gui/button
on the frontend to submit stats anonymously to our servers.



So I was thinking of writing 'cloudstack-report' as a tool.

The Debian and RPM packages can install a cron job which runs weekly/monthly 
to invoke the tool and report to our service.


Via a config in /etc/cloudstack/report people can configure the tool. 
They could add their company/organization if they want or leave it set 
to anonymous.


The tool could do a PUT with some JSON data to 'report.cloudstack.org', 
which runs ElasticSearch, where we then store all the data.
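A rough sketch of that PUT using only the JDK could look like this (the endpoint
path is hypothetical):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ReportSender {
    // Sends the gathered JSON document to the (hypothetical) report endpoint,
    // where ElasticSearch would index it.
    public static void send(String json) throws Exception {
        URL url = new URL("http://report.cloudstack.org/usage/report");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() >= 300) {
            throw new RuntimeException("Report failed with HTTP " + conn.getResponseCode());
        }
    }
}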


Easy to analyze and run statistics on.

I would like to have an opt-out, since as a project member I want as much 
feedback as possible about how CloudStack is being used, but as a user I want an opt-in.


So I think I'll start on the code itself to gather the data and later on 
we can discuss how we are going to do this.


In the end we need more feedback about the usage of CloudStack so we 
know what our users need.


Wido


Cheers.


On Tue, Mar 18, 2014 at 4:45 PM, Wido den Hollander w...@widodh.nl wrote:


On 03/18/2014 12:07 PM, Marcus wrote:


Do we have any general stats on how cloudstack is being used? Common
deployment sizes, largest deployments, etc? I'm curious as to how far
people have actually scaled it in real deployments, although I realize
that
the info can be proprietary.



Recently at the Ceph project the tool ceph-brag was developed. It
gathers information about your Ceph deployment and sends back the
information to the project.

Something like this might be nice (opt-in!!) for CloudStack. It can
anonymously report things like:
- Number of Instances
- Number of pods, clusters, hosts
- Number of Primary Storage and their type
- Basic / Advanced Networking

This could all be written into one JSON file which we can submit back to
the project.

With this we would get more information about how CloudStack is used.

Obviously, the code will be Open Source so people can see how we gather
the information (probably a lot of SQL selects..) and how we submit it to
our servers.

Is that something you would like?

Wido





Re: ACS and KVM uses /tmp for volumes migration and templates

2014-03-20 Thread Wido den Hollander

On 03/20/2014 12:59 AM, Andrei Mikhailovsky wrote:

Hi guys,

I was wondering if this is a bug?



No, it's a feature.


I've noticed that during volume migration from NFS to RBD primary storage the 
volume image is first copied to /tmp and only then to the RBD storage. This 
seems silly to me as one would expect a typical volume to be larger than the 
host's hard disk. Also, it is a common practice to use tmpfs as /tmp for 
performance reasons. Thus, a typical host server will have far smaller /tmp 
folder than the size of an average volume. As a result, volume migration would 
break after filling /tmp and could probably cause a bunch of issues for the 
KVM host itself as well as any vms running on the server.



Correct. The problem is that RBD images come in two formats: format 1 
(old/legacy) and format 2.


In order to perform cloning, images must be in RBD format 2.

When running qemu-img convert with an RBD image as a destination, qemu-img 
will create an RBD image in format 1.


That's due to this piece of code in block/rbd.c in Qemu:

ret = rbd_create(io_ctx, name, bytes, obj_order);

rbd_create() creates images in format 1. To use format 2 you should use 
rbd_create2() or rbd_create3().


With RBD format 1 we can't do snapshotting or cloning, which we require 
in ACS.


So I had to add an intermediate step where I first wrote the RAW image 
somewhere and afterwards wrote it to RBD.


After some discussion a config option has been added to Ceph:

OPTION(rbd_default_format, OPT_INT, 1)

This allows me to do this:

qemu-img convert .. -O raw .. rbd:rbd/myimage:rbd_default_format=2

This causes librbd/RBD to create a format 2 image and we can skip the 
convert step to /tmp.
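On the agent side that boils down to something like this sketch (it assumes librbd 
>= 0.67.5 on the host; monitor and auth options are omitted for brevity):

import java.io.IOException;

public class RbdConvert {
    // qemu-img now writes straight into a format 2 RBD image,
    // so no intermediate copy in /tmp is needed.
    public static void convertToRbd(String srcPath, String pool, String image)
            throws IOException, InterruptedException {
        String dest = "rbd:" + pool + "/" + image + ":rbd_default_format=2";
        Process p = new ProcessBuilder("qemu-img", "convert", "-O", "raw", srcPath, dest)
                .inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("qemu-img convert to " + dest + " failed");
        }
    }
}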


This option is available since Ceph Dumpling 0.67.5 and was not 
available when ACS 4.2 was written.


I'm going to make changes in master which skip the step with /tmp.

Technically this can be backported to 4.2, but then you would have to 
run your own homebrew version of 4.2



It also seems that /tmp is temporarily used during template creation.



Same story as above.


My setup:

ACS 4.2.1
Ubuntu 12.04 with KVM
RBD + NFS for Primary storage
NFS for Staging and Secondary storage


Thanks

Andrei





Re: [RFC]Bypass Libvirt storage pool for NFS

2014-03-20 Thread Wido den Hollander



On 03/20/2014 05:38 PM, Nux! wrote:

On 19.03.2014 22:48, Edison Su wrote:

-Original Message-
From: Nux! [mailto:n...@li.nux.ro]
Sent: Wednesday, March 19, 2014 3:34 PM
To: dev@cloudstack.apache.org
Subject: RE: [RFC]Bypass Libvirt storage pool for NFS

On 19.03.2014 22:28, Edison Su wrote:

Edison, if - with the workarounds in place now - the current version
of KVM works OK, then why wouldn't a newer version work just as well?
Just trying to understand this.


That's a long story: there is a bug in libvirt, introduced in
a newer version (> 0.9.10), which can make the storage pool disappear.


Edison, that I understand, but what is the technical reason that
prevents
using newer KVM?
It looks like current KVM works fine on CentOS 6.5 for example which has
libvirt 0.10.2.


Yes, at first glance, the newer versions of libvirt (> 0.9.10) just work
fine. But under stress testing it will complain that the NFS storage pool is
missing, and the pool can't be added back unless you shut down all
the VMs using it. That's what the
bug (https://bugzilla.redhat.com/show_bug.cgi?id=977706) is all about.

In the ACS 4.2/4.3 releases, we only recommend using libvirt <= 0.9.10 if
primary storage is NFS.


Ok, I'm trying to make some noise in that bz entry, hopefully someone
gets annoyed enough to do something about it.



And it just went upstream! How great is that?


Quick question: there is no problem if instead of using NFS directly we
use the shared mount point option, is there?



Probably not, since we would then be mounting the filesystem manually 
instead of having libvirt do it.


I'd say get this patch down into EL6 and I'll try to get it into Ubuntu 
14.04.


Wido


Lucian



Re: ACS and KVM uses /tmp for volumes migration and templates

2014-03-23 Thread Wido den Hollander



On 03/21/2014 02:23 PM, Andrei Mikhailovsky wrote:


Wido,


I would be happy to try the custom ACS build unless 4.3 comes out soon. It has 
been overdue for some time now )). Has this feature been addressed in the 4.3 
release?



No, it hasn't been fixed yet. I have to admit, I forgot about this until 
you sent this e-mail to the list.


I'll fix this in master later this week.



I can live with this feature for the time being, but I do see a longer-term 
issue when my volumes become large, as I've only got about 100GB of free space on 
my host servers.




I fully agree. While writing this code I was aware of this. See my 
comments in the code: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java;h=5de8bd26ae201187f5db5fd16b7e3ca157cab53a;hb=master#l1087



 From what i can tell by looking at the rbd ls -l info all of my volumes are 
done in Format 2



Correct, because I bypass libvirt and Qemu in some places right now.



Cheers,


Andrei




- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Thursday, 20 March, 2014 9:40:29 AM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates

On 03/20/2014 12:59 AM, Andrei Mikhailovsky wrote:

Hi guys,

I was wondering if this is a bug?



No, it's a feature.


I've noticed that during volume migration from NFS to RBD primary storage the 
volume image is first copied to /tmp and only then to the RBD storage. This 
seems silly to me as one would expect a typical volume to be larger than the 
host's hard disk. Also, it is a common practice to use tmpfs as /tmp for 
performance reasons. Thus, a typical host server will have far smaller /tmp 
folder than the size of an average volume. As a result, volume migration would 
break after filling /tmp and could probably cause a bunch of issues for the 
KVM host itself as well as any vms running on the server.



Correct. The problem is that RBD images come in two formats: format 1
(old/legacy) and format 2.

In order to perform cloning, images must be in RBD format 2.

When running qemu-img convert with an RBD image as a destination, qemu-img
will create an RBD image in format 1.

That's due to this piece of code in block/rbd.c in Qemu:

ret = rbd_create(io_ctx, name, bytes, obj_order);

rbd_create() creates images in format 1. To use format 2 you should use
rbd_create2() or rbd_create3().

With RBD format 1 we can't do snapshotting or cloning, which we require
in ACS.

So I had to add an intermediate step where I first wrote the RAW image
somewhere and afterwards wrote it to RBD.

After some discussion a config option has been added to Ceph:

OPTION(rbd_default_format, OPT_INT, 1)

This allows me to do this:

qemu-img convert .. -O raw .. rbd:rbd/myimage:rbd_default_format=2

This causes librbd/RBD to create a format 2 image and we can skip the
convert step to /tmp.

This option is available since Ceph Dumpling 0.67.5 and was not
available when ACS 4.2 was written.

I'm going to make changes in master which skip the step with /tmp.

Technically this can be backported to 4.2, but then you would have to
run your own homebrew version of 4.2


It also seems that /tmp is temporarily used during template creation.



Same story as above.


My setup:

ACS 4.2.1
Ubuntu 12.04 with KVM
RBD + NFS for Primary storage
NFS for Staging and Secondary storage


Thanks

Andrei







Re: applying patch - manually change jar file

2014-03-24 Thread Wido den Hollander

On 03/24/2014 10:12 AM, Tomasz Zięba wrote:

Hello,

How do I apply a patch to ACS 4.2.1?

I have fixed file:
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java

and would like to apply these changes to cloudstack-management

After performing the following commands:

/usr/bin/javac -cp
/usr/share/java/commons-collections.jar:/usr/share/java/commons-dbcp.jar:/usr/share/java/commons-logging.jar:/usr/share/java/commons-logging-api.jar:/usr/share/java/commons-pool.jar:/usr/share/java/commons-httpclient.jar:/usr/share/java/ws-commons-util.jar:/usr/share/java/jnetpcap.jar:/usr/share/cloudstack-agent/lib/*:/usr/share/cloudstack-management/lib/*:/usr/share/cloudstack-common/lib/*:/usr/share/cloudstack-management/webapps/client/WEB-INF/lib/*
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java

#find /usr/ -name *.jar -exec grep -Hls xen.resource {} \;

/usr/bin/jar -uvf
/usr/share/cloudstack-management/webapps/client/WEB-INF/lib/cloud-plugin-hypervisor-xen-4.2.1-SNAPSHOT.jar
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.class
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase$SRType.class
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase$XsHost.class
apache-cloudstack-4.2.1-src/plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase$XsLocalNetwork.class


and restarting cloudstack-management, it unfortunately still uses the old
classes.



You should remove the copy of the same jar that does not have -SNAPSHOT in its 
name. It's probably loading that file on boot.
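Concretely, something like this (path inferred from the jar you updated above; 
verify it exists before deleting):

# rm /usr/share/cloudstack-management/webapps/client/WEB-INF/lib/cloud-plugin-hypervisor-xen-4.2.1.jar

and then restart cloudstack-management once more.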


Wido


Thank you.





Re: ACS and KVM uses /tmp for volumes migration and templates

2014-03-24 Thread Wido den Hollander

On 03/23/2014 08:01 PM, Andrei Mikhailovsky wrote:

Wido,

Could you please let me know when you've done this so I could try it out. Would 
it be a part of the 4.3 branch or 4.4?



I'll do that. It will go into master which is 4.4 and I'm not sure if 
this will be backported to 4.3.1


Wido


Thanks
- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Sunday, 23 March, 2014 3:56:44 PM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates



On 03/21/2014 02:23 PM, Andrei Mikhailovsky wrote:


Wido,


I would be happy to try the custom ACS build unless 4.3 comes out soon. It has 
been overdue for some time now )). Has this feature been addressed in the 4.3 
release?



No, it hasn't been fixed yet. I have to admit, I forgot about this until
you sent this e-mail to the list.

I'll fix this in master later this week.



I can live with this feature for the time being, but I do see a longer-term 
issue when my volumes become large, as I've only got about 100GB of free space on 
my host servers.




I fully agree. While writing this code I was aware of this. See my
comments in the code:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java;h=5de8bd26ae201187f5db5fd16b7e3ca157cab53a;hb=master#l1087


 From what i can tell by looking at the rbd ls -l info all of my volumes are 
done in Format 2



Correct, because I bypass libvirt and Qemu in some places right now.



Cheers,


Andrei




- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Thursday, 20 March, 2014 9:40:29 AM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates

On 03/20/2014 12:59 AM, Andrei Mikhailovsky wrote:

Hi guys,

I was wondering if this is a bug?



No, it's a feature.


I've noticed that during volume migration from NFS to RBD primary storage the 
volume image is first copied to /tmp and only then to the RBD storage. This 
seems silly to me as one would expect a typical volume to be larger than the 
host's hard disk. Also, it is a common practice to use tmpfs as /tmp for 
performance reasons. Thus, a typical host server will have far smaller /tmp 
folder than the size of an average volume. As a result, volume migration would 
break after filling /tmp and could probably cause a bunch of issues for the 
KVM host itself as well as any vms running on the server.



Correct. The problem is that RBD images come in two formats: format 1
(old/legacy) and format 2.

In order to perform cloning, images must be in RBD format 2.

When running qemu-img convert with an RBD image as a destination, qemu-img
will create an RBD image in format 1.

That's due to this piece of code in block/rbd.c in Qemu:

ret = rbd_create(io_ctx, name, bytes, obj_order);

rbd_create() creates images in format 1. To use format 2 you should use
rbd_create2() or rbd_create3().

With RBD format 1 we can't do snapshotting or cloning, which we require
in ACS.

So I had to add an intermediate step where I first wrote the RAW image
somewhere and afterwards wrote it to RBD.

After some discussion a config option has been added to Ceph:

OPTION(rbd_default_format, OPT_INT, 1)

This allows me to do this:

qemu-img convert .. -O raw .. rbd:rbd/myimage:rbd_default_format=2

This causes librbd/RBD to create a format 2 image and we can skip the
convert step to /tmp.

This option is available since Ceph Dumpling 0.67.5 and was not
available when ACS 4.2 was written.

I'm going to make changes in master which skip the step with /tmp.

Technically this can be backported to 4.2, but then you would have to
run your own homebrew version of 4.2


It also seems that /tmp is temporarily used during template creation.



Same story as above.


My setup:

ACS 4.2.1
Ubuntu 12.04 with KVM
RBD + NFS for Primary storage
NFS for Staging and Secondary storage


Thanks

Andrei












Re: ACS and KVM uses /tmp for volumes migration and templates

2014-03-24 Thread Wido den Hollander

On 03/24/2014 03:22 PM, Andrei Mikhailovsky wrote:


Do you think I can apply the patch manually to the 4.3 branch? I would love to 
try it with 4.3, but I'm not adventurous enough to upgrade my setup to 4.4 yet ))



Yes! I just pushed a commit to the master branch: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=9763faf85e3f54ac84d5ca1d5ad6e89c7fcc87ee


To build 4.3

$ git checkout 4.3
$ git cherry-pick 9763faf85e3f54ac84d5ca1d5ad6e89c7fcc87ee
$ dpkg-buildpackage

Now you only have to update the cloudstack-agent package on the hypervisors.

Wido



Andrei
- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Monday, 24 March, 2014 12:29:36 PM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates

On 03/23/2014 08:01 PM, Andrei Mikhailovsky wrote:

Wido,

Could you please let me know when you've done this so I could try it out. Would 
it be a part of the 4.3 branch or 4.4?



I'll do that. It will go into master which is 4.4 and I'm not sure if
this will be backported to 4.3.1

Wido


Thanks
- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Sunday, 23 March, 2014 3:56:44 PM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates



On 03/21/2014 02:23 PM, Andrei Mikhailovsky wrote:


Wido,


I would be happy to try the custom ACS build unless 4.3 comes out soon. It has 
been overdue for some time now )). Has this feature been addressed in the 4.3 
release?



No, it hasn't been fixed yet. I have to admit, I forgot about this until
you sent this e-mail to the list.

I'll fix this in master later this week.



I can live with this feature for the time being, but I do see a longer-term 
issue when my volumes become large, as I've only got about 100GB of free space on 
my host servers.




I fully agree. While writing this code I was aware of this. See my
comments in the code:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob;f=plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java;h=5de8bd26ae201187f5db5fd16b7e3ca157cab53a;hb=master#l1087


 From what i can tell by looking at the rbd ls -l info all of my volumes are 
done in Format 2



Correct, because I bypass libvirt and Qemu in some places right now.



Cheers,


Andrei




- Original Message -

From: Wido den Hollander w...@widodh.nl
To: dev@cloudstack.apache.org
Sent: Thursday, 20 March, 2014 9:40:29 AM
Subject: Re: ACS and KVM uses /tmp for volumes migration and templates

On 03/20/2014 12:59 AM, Andrei Mikhailovsky wrote:

Hi guys,

I was wondering if this is a bug?



No, it's a feature.


I've noticed that during volume migration from NFS to RBD primary storage the 
volume image is first copied to /tmp and only then to the RBD storage. This 
seems silly to me as one would expect a typical volume to be larger than the 
host's hard disk. Also, it is a common practice to use tmpfs as /tmp for 
performance reasons. Thus, a typical host server will have far smaller /tmp 
folder than the size of an average volume. As a result, volume migration would 
break after filling /tmp and could probably cause a bunch of issues for the 
KVM host itself as well as any vms running on the server.



Correct. The problem is that RBD images come in two formats: format 1
(old/legacy) and format 2.

In order to perform cloning, images must be in RBD format 2.

When running qemu-img convert with an RBD image as a destination, qemu-img
will create an RBD image in format 1.

That's due to this piece of code in block/rbd.c in Qemu:

ret = rbd_create(io_ctx, name, bytes, obj_order);

rbd_create() creates images in format 1. To use format 2 you should use
rbd_create2() or rbd_create3().

With RBD format 1 we can't do snapshotting or cloning, which we require
in ACS.

So I had to add an intermediate step where I first wrote the RAW image
somewhere and afterwards wrote it to RBD.

After some discussion a config option has been added to Ceph:

OPTION(rbd_default_format, OPT_INT, 1)

This allows me to do this:

qemu-img convert .. -O raw .. rbd:rbd/myimage:rbd_default_format=2

This causes librbd/RBD to create a format 2 image and we can skip the
convert step to /tmp.

This option is available since Ceph Dumpling 0.67.5 and was not
available when ACS 4.2 was written.

I'm going to make changes in master which skip the step with /tmp.

Technically this can be backported to 4.2, but then you would have to
run your own homebrew version of 4.2


It also seems that /tmp is temporarily used during template creation.



Same story as above.


My setup:

ACS 4.2.1
Ubuntu 12.04 with KVM
RBD + NFS for Primary storage
NFS for Staging and Secondary storage


Thanks

Andrei
















Re: Checkstyle Error

2014-03-25 Thread Wido den Hollander



On 03/25/2014 03:19 PM, Alex Hitchins wrote:

Thanks Wido, checking that now.

Also, one thing you or someone else might be able to assist me with.

I'm getting the following error setting up Maven on Debian. I don't get the 
issue with Ubuntu however... any idea?

# apt-get install maven
E: Unable to locate package maven



Odd, since the package is there: https://packages.debian.org/wheezy/maven

I however never use Debian, so I'm not sure why it fails.
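One thing worth ruling out first (just a guess, not a confirmed fix) is a stale 
package index:

# apt-get update
# apt-cache policy maven

If apt-cache shows no candidate version, check that your sources.list includes 
the main repository.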

Wido


I've opened Synaptic Package Manager and refreshed. Searching for the package 
Maven, I see two items that seem to both be installed already. Is the answer to 
just use Ubuntu?


Regards

Alex Hitchins

D: +44 1892 523 587 | S: +44 2036 030 540 | M: +44 7788 423 969

alex.hitch...@shapeblue.com

-Original Message-
From: Wido den Hollander [mailto:w...@widodh.nl]
Sent: 25 March 2014 14:15
To: dev@cloudstack.apache.org
Subject: Re: Checkstyle Error



On 03/25/2014 03:12 PM, Alex Hitchins wrote:

I'm getting a checkstyle error on compilation.

What is the best way to see what I'm doing to cause the issue? I seem to be 
just getting the project it's failing on. This is with supplying -e to maven to 
give a stack trace. Is there anything I can configure in Eclipse to enforce 
these rules too?



Check the target directory of that project. There is a checkstyle report which 
tells you which file and line is to blame.
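With the default maven-checkstyle-plugin configuration that report is 
target/checkstyle-result.xml inside the failing module, so for the error below 
that would likely be server/target/checkstyle-result.xml.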


[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-checkstyle-plugin:2.11:check
(cloudstack-checkstyle) on project cloud-server: Failed during
checkstyle execution: There are 1 checkstyle errors. - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.
maven.plugins:maven-checkstyle-plugin:2.11:check
(cloudstack-checkstyle) on project cloud-
server: Failed during checkstyle execution





Re: [ANNOUNCE] Apache CloudStack 4.3.0 Released

2014-03-25 Thread Wido den Hollander



On 03/25/2014 03:47 PM, Antone Heyward wrote:

There should be a checklist to verify certain things are in place before
the release announcement.




Well, officially Apache projects only release source. The DEB and RPM 
packages are something from the community, not the project itself.


While I get your point, it's not a part of the official release.

Wido




On Tue, Mar 25, 2014 at 10:42 AM, Wido den Hollander w...@widodh.nl wrote:




On 03/25/2014 03:40 PM, Nux! wrote:


On 25.03.2014 14:28, Chip Childers wrote:


Website issues should all be resolved now.  Thanks for noticing this.



Also http://cloudstack.apt-get.eu/rhel/4.3/ doesn't work (404). Nobody
built the RPMs?



Seems like it. I don't have RHEL systems available to build the packages
on. Usually David builds them.

Wido







Re: [ANNOUNCE] Apache CloudStack 4.3.0 Released

2014-03-25 Thread Wido den Hollander



On 03/25/2014 03:40 PM, Nux! wrote:

On 25.03.2014 14:28, Chip Childers wrote:

Website issues should all be resolved now.  Thanks for noticing this.


Also http://cloudstack.apt-get.eu/rhel/4.3/ doesn't work (404). Nobody
built the RPMs?



Seems like it. I don't have RHEL systems available to build the packages 
on. Usually David builds them.


Wido


Re: [ANNOUNCE] Apache CloudStack 4.3.0 Released

2014-03-25 Thread Wido den Hollander



On 03/25/2014 04:25 PM, benoit lair wrote:

Super! The rpms are now available !



Indeed. Currently the mirror is pumping out 300Mbit of CloudStack 
downloads with peaks of 500Mbit.


Great to see so many downloads!

Wido


Thanks a lot guys.


2014-03-25 16:00 GMT+01:00 Antone Heyward thehyperadvi...@gmail.com:


@Wido - Thanks for the info, i did not know that.


On Tue, Mar 25, 2014 at 10:49 AM, Wido den Hollander w...@widodh.nl
wrote:




On 03/25/2014 03:47 PM, Antone Heyward wrote:


There should be a checklist to verify certain things are in place before
the release announcement.




Well, officially Apache projects only release source. The DEB and RPM
packages are something from the community, not the project itself.

While I get your point, it's not a part of the official release.

Wido




On Tue, Mar 25, 2014 at 10:42 AM, Wido den Hollander w...@widodh.nl
wrote:




On 03/25/2014 03:40 PM, Nux! wrote:

  On 25.03.2014 14:28, Chip Childers wrote:


  Website issues should all be resolved now.  Thanks for noticing this.




Also http://cloudstack.apt-get.eu/rhel/4.3/ doesn't work (404).

Nobody

built the RPMs?


  Seems like it. I don't have RHEL systems available to build the
packages on. Usually David builds them.

Wido









--
Antone
@thehyperadvisor
http://thehyperadvisor.com





Re: [ANNOUNCE] Apache CloudStack 4.3.0 Released

2014-03-25 Thread Wido den Hollander



On 03/25/2014 03:28 PM, Chip Childers wrote:

Website issues should all be resolved now.  Thanks for noticing this.



I also fixed the Ubuntu packages. They were there, but not properly 
indexed by the mirror.



On Tue, Mar 25, 2014 at 10:14 AM, Sumita Gorla sumita.go...@citrix.com wrote:

Hi Chip

Thank you for the announcement.

The website still shows the latest release as 4.2.1
http://cloudstack.apache.org/downloads.html

Apache CloudStack's most current release is 4.2.1.


Thanks,
Sumita










On 3/25/14 10:05 AM, Chip Childers chipchild...@apache.org wrote:


Announcing Apache CloudStack 4.3.0

Tue Mar 25 2014 09:47:06 GMT-0400 (EDT)

Flexible, scalable, Open Source Infrastructure as a Service (IaaS) used
by organizations such as Zynga, Datapipe, and ISWest, among others, for
creating, managing, and deploying public, private, and hybrid Cloud
Computing environments

Forest Hill, MD --25 March 2014-- The Apache Software Foundation (ASF),
the all-volunteer developers, stewards, and incubators of more than 170
Open Source projects and initiatives, today announced Apache CloudStack
v4.3, the latest feature release of the CloudStack cloud orchestration
platform.

Apache CloudStack is an integrated Infrastructure-as-a-Service (IaaS)
software platform that allows users to build feature-rich public,
private, and hybrid cloud environments. CloudStack includes an intuitive
user interface and rich APIs for managing the compute, networking,
software, and storage infrastructure resources. CloudStack became an
Apache Top-level Project (TLP) in March 2013. "We are proud to announce
CloudStack v4.3," said Hugo Trippaers, Vice President of Apache
CloudStack. "This release represents over six months of work from the
Apache CloudStack community with many new and improved features."

Under The Hood

CloudStack V4.3 is the next feature release of the 4.x line which first
released on November 6, 2012. Some of the noteworthy new and improved
features include:

- Support for Microsoft Hyper-V - Apache CloudStack can now manage Hyper-V
  hypervisors in addition to KVM, XenServer, VMware, LXC, and Bare Metal
- Juniper OpenContrail integration - OpenContrail is a software defined
  networking controller from Juniper that CloudStack now integrates with
  to provide SDN services
- SSL Termination support for guest VMs - Apache CloudStack can configure
  and manage SSL termination in certain load balancer devices
- Palo Alto Firewall integration - Apache CloudStack can now manage and
  configure Palo Alto firewalls
- Remote access VPN for VPC networks - CloudStack's remote access VPN is
  now available for Virtual Private Cloud networks
- Site to Site VPN between VRs - CloudStack now allows site-to-site VPN
  connectivity to its virtual routing devices. This permits your cloud
  computing environment to appear as a natural extension of your local
  network, or for you to easily interconnect multiple environments
- VXLAN support expansion to include KVM - CloudStack's support for
  integrating VXLAN, the network virtualization technology that attempts
  to ameliorate scalability problems with traditional networking
- SolidFire plugin extension to support KVM and hypervisor snapshots for
  XenServer and ESX - SolidFire provides guaranteed Storage Quality of
  Service at the Virtual Machine level
- Dynamic Compute offering - CloudStack now has the ability to dynamically
  scale the resources assigned to a running virtual machine instance for
  those hypervisors which support it

Downloads and Documentation

The official source code for the v4.3 release, as well as individual
contributors' convenience binaries, can be downloaded from the Apache
CloudStack downloads page at http://cloudstack.apache.org/downloads.html

The CloudStack 4.2 release includes over 110 issues from 4.2.0 and
4.2.1, including fixes for object storage support, documentation, and
more. A full list of corrected issues and upgrade instructions are
available in the Release Notes
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes

Official installation, administration, and API documentation for each
release is available at http://docs.cloudstack.apache.org/en/latest/

Apache CloudStack in Action

Join members of the Apache CloudStack community at the CloudStack
Collaboration Conference, taking place 9-11 April 2014 immediately
following ApacheCon. For more information, visit
http://cloudstackcollab.org

Availability and Oversight

As with all Apache products, Apache CloudStack v4.3 is released under
the Apache License v2.0, and is overseen by a self-selected team of
active contributors to the project. A Project Management Committee (PMC)
guides the Project's day-to-day operations, including community
development and product releases. For documentation and ways to become
involved with Apache CloudStack, visit http://cloudstack.apache.org/

About The Apache Software Foundation (ASF)

Established in 1999, the all-volunteer Foundation oversees 

Re: [ANNOUNCE] Apache CloudStack 4.3.0 Released

2014-03-26 Thread Wido den Hollander



On 03/26/2014 08:10 AM, Ryan Lei wrote:

Please allow me to confirm again. Are the packages in these repos non-OSS
(should I call it noredist?), and will they continue to be so in future
releases, too?



They are OSS only. I however have to repeat: the Apache CloudStack 
project officially only releases source. I'm hosting this mirror as a 
convenience for users, but they are not official packages from the project.


As a community we understand that people want packages, so we provide 
them as members of the project.



DEB package repository: http://cloudstack.apt-get.eu/ubuntu
RPM package repository: http://cloudstack.apt-get.eu/rhel/4.3/



---
Yu-Heng (Ryan) Lei, Associate Researcher
Cloud Computing Dept, Chunghwa Telecom Labs
ryan...@cht.com.tw or ryanlei750...@gmail.com



On Tue, Mar 25, 2014 at 11:40 PM, Wido den Hollander w...@widodh.nl wrote:




On 03/25/2014 04:25 PM, benoit lair wrote:


Super! The rpms are now available !



Indeed. Currently the mirror is pumping out 300Mbit of CloudStack
downloads with peaks to 500Mbit.

Great to see so many downloads!

Wido


  Thanks a lot guys.



2014-03-25 16:00 GMT+01:00 Antone Heyward thehyperadvi...@gmail.com:

  @Wido - Thanks for the info, i did not know that.



On Tue, Mar 25, 2014 at 10:49 AM, Wido den Hollander w...@widodh.nl
wrote:




On 03/25/2014 03:47 PM, Antone Heyward wrote:

  There should be a checklist to verify certain things are in place

before
the release announcement.



  Well, officially Apache projects only release source. The DEB and RPM

packages are something from the community, not the project itself.

While I get your point, it's not a part of the official release.

Wido




On Tue, Mar 25, 2014 at 10:42 AM, Wido den Hollander w...@widodh.nl
wrote:




On 03/25/2014 03:40 PM, Nux! wrote:

   On 25.03.2014 14:28, Chip Childers wrote:



   Website issues should all be resolved now.  Thanks for noticing
this.




  Also http://cloudstack.apt-get.eu/rhel/4.3/ doesn't work (404).



Nobody



built the RPMs?



   Seems like it. I don't have RHEL systems available to build the
packages on. Usually David builds them.

Wido









--
Antone
@thehyperadvisor
http://thehyperadvisor.com








Re: ALARM - ACS reboots host servers!!!

2014-04-03 Thread Wido den Hollander



On 04/03/2014 10:06 AM, France wrote:

I'm also interested in this issue.
Can any1 from developers confirm this is expected behavior?



Yes, this still happens due to the kvmheartbeat.sh script which runs on the KVM hosts.

On some clusters I disabled this by simply overwriting that script with 
a version where reboot is removed.


I have some ideas on how to fix this, but I don't have the time at the 
moment.


Short version: The hosts shouldn't reboot themselves as long as they can 
reach other nodes or it should at least be configurable.


The management server should also do further inspection during HA by 
using a helper on the KVM Agent.


Wido


On 2/4/14 2:32 PM, Andrei Mikhailovsky wrote:

Coming back to this issue.

This time, to perform the maintenance of the NFS primary storage, I've
placed the storage in question in Maintenance mode. After about 20
minutes ACS showed the nfs storage is in Maintenance. However, none of
the virtual machines with volumes on that storage were stopped. I've
manually stopped the virtual machines and went to upgrade and restart
the nfs server.

A few minutes after the nfs server shutdown all of my host servers
went into reboot killing all vms!

Thus, it seems that putting the NFS server in Maintenance mode does not
stop the ACS agent from restarting the host servers.

Does anyone know a way to stop this behaviour?

Thanks

Andrei


- Original Message -
From: France mailingli...@isg.si
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Sent: Monday, 3 March, 2014 9:49:28 AM
Subject: Re: ALARM - ACS reboots host servers!!!

I believe this is a bug too, because VMs not running on the storage get
destroyed too:

Issue has been around for a long time, like with all others I reported.
They do not get fixed:
https://issues.apache.org/jira/browse/CLOUDSTACK-3367

We even lost assignee today.

Regards,
F.

On 3/3/14 6:55 AM, Koushik Das wrote:

The primary storage needs to be put in maintenance before doing any
upgrade/reboot as mentioned in the previous mails.

-Koushik

On 03-Mar-2014, at 6:07 AM, Marcus shadow...@gmail.com wrote:


Also, please note that in the bug you referenced it doesn't have a
problem with the reboot being triggered, but with the fact that reboot
never completes due to hanging NFS mount (which is why the reboot
occurs, inaccessible primary storage).

On Sun, Mar 2, 2014 at 5:26 PM, Marcus shadow...@gmail.com wrote:

Or do you mean you have multiple primary storages and this one was not
in use and put into maintenance?

On Sun, Mar 2, 2014 at 5:25 PM, Marcus shadow...@gmail.com wrote:

I'm not sure I understand. How do you expect to reboot your primary
storage while vms are running?  It sounds like the host is being
fenced since it cannot contact the resources it depends on.

On Sun, Mar 2, 2014 at 3:24 PM, Nux! n...@li.nux.ro wrote:

On 02.03.2014 21:17, Andrei Mikhailovsky wrote:

Hello guys,


I recently came across the bug CLOUDSTACK-5429 which has
rebooted
all of my host servers without properly shutting down the guest
vms.
I've simply upgraded and rebooted one of the nfs primary storage
servers and a few minutes later, to my horror, i've found out
that all
of my host servers have been rebooted. Is it just me thinking
so, or
is this bug should be fixed ASAP and should be a blocker for any
new
ACS release. I mean not only does it cause downtime, but also
possible
data loss and server corruption.

Hi Andrei,

Do you have HA enabled and did you put that primary storage in
maintenance
mode before rebooting it?
It's my understanding that ACS relies on the shared storage to
perform HA so
if the storage goes it's expected to go berserk. I've noticed
similar
behaviour in Xenserver pools without ACS.
I'd imagine a cure for this would be to use network distributed
filesystems like GlusterFS or CEPH.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro




Re: How does a system VM get an IP address?

2014-04-07 Thread Wido den Hollander



On 04/07/2014 12:33 AM, Rafael Weingartner wrote:

Hi folks,
I was wondering how a system vm gets an IP address. I know they are the
first things that CS needs in order to start up others VMs, so when they
start there is no virtual router to assign IP addresses via DHCP.



Via a local virtio serial socket from the hypervisor in KVM mode. The VM boots 
and receives its IP address over that socket.
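For illustration, what comes over that socket is essentially a single line of 
key/value boot parameters; the keys below are approximate, shown only to give 
an idea of the shape:

cmdline: template=domP type=secstorage host=10.0.0.1 eth0ip=169.254.1.42 eth0mask=255.255.0.0 mgmtcidr=10.0.0.0/24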



I also noticed that on the physical hosts with the VM.Start command CS
sends some extra data that includes the IPs that the VM should get.
However, I have no idea how it actually gets those parameters and set its
IP.


The management server sends this information to the KVM agent. User 
Instances get the IP address via DHCP from the VR, System VMs via the 
local serial socket.


Wido



Does anyone here know how it works?



Re: How does a system VM get an IP address?

2014-04-07 Thread Wido den Hollander



On 04/07/2014 02:20 PM, Rafael Weingartner wrote:

Thanks man ;).
By any chance if a system VM does not get its ip assigned, would there be
any way to debug and check what is going on? I mean, checking whether, once it is
running on the physical host, it responds on the IP that was assigned to it.



I'd say check the agent's log (set to debug!) and try to SSH into the SSVM 
with the cloudstack-ssh command.


Wido



On Mon, Apr 7, 2014 at 9:14 AM, Wido den Hollander w...@widodh.nl wrote:




On 04/07/2014 12:33 AM, Rafael Weingartner wrote:


Hi folks,
I was wondering how a system vm gets an IP address. I know they are the
first things that CS needs in order to start up other VMs, so when they
start there is no virtual router to assign IP addresses via DHCP.



Via a local virtio socket from the hypervisor in KVM mode. The VM boots
and via that local serial socket it gets the IP-address.


  I also noticed that on the physical hosts with the VM.Start command CS

sends some extra data that includes the IPs that the VM should get.
However, I have no idea how it actually gets those parameters and set its
IP.



The management server sends this information to the KVM agent. User
Instances get the IP address via DHCP from the VR, System VMs via the local
serial socket.

Wido




Does anyone here know how it works?







Re: How does a system VM get an IP address?

2014-04-07 Thread Wido den Hollander



On 04/07/2014 03:10 PM, Rafael Weingartner wrote:

By agent you mean CS management server? I am running CS 4.1.1.


No, in the KVM Agent. The hypervisor.

/var/log/cloudstack/agent/agent.log


would the command cloudstack-ssh work, if the VM does not respond on its ip
addresses?


It probably has its local 169.X.X.X address, so that should work.

Wido





On Mon, Apr 7, 2014 at 10:01 AM, Wido den Hollander w...@widodh.nl wrote:




On 04/07/2014 02:20 PM, Rafael Weingartner wrote:


Thanks man ;).
By any chance if a system VM does not get its ip assigned, would there be
any way to debug and check what is going on? I mean, if after it is
running
on the physical host it does respond on the IP that was assigned to it.



I say check the agent's log (set to debug!) and try to SSH into the SSVM
with the cloudstack-ssh command.

Wido




On Mon, Apr 7, 2014 at 9:14 AM, Wido den Hollander w...@widodh.nl
wrote:




On 04/07/2014 12:33 AM, Rafael Weingartner wrote:

  Hi folks,

I was wondering how a system vm gets an IP address. I know they are the
first things that CS needs in order to start up other VMs, so when they
start there is no virtual router to assign IP addresses via DHCP.


  Via a local virtio socket from the hypervisor in KVM mode. The VM boots

and via that local serial socket it gets the IP-address.


   I also noticed that on the physical hosts with the VM.Start command CS


sends some extra data that includes the IPs that the VM should get.
However, I have no idea how it actually gets those parameters and set
its
IP.



The management server sends this information to the KVM agent. User
Instances get the IP address via DHCP from the VR, System VMs via the
local
serial socket.

Wido



  Does anyone here know how it works?












Re: Build failed in Jenkins: build-master #609

2014-04-10 Thread Wido den Hollander
It works fine on my local systems as well.

I changed a Maven dependency to fix an RBD bug.

Wido

 On 10 Apr 2014, at 21:34, Marcus Sorensen mar...@betterservers.com 
 wrote:
 
 Ok, now I can't get it to fail locally in Linux or Mac, even with
 deleting my ~/.m2. Not sure what's going on.
 
 On 4/10/14, 9:48 AM, Marcus Sorensen wrote:
 Hmm, sorry. I thought master was fixed, as we saw complaints about the
 rados stuff yesterday (and I saw it on my linux vm as well), but the
 build worked fine on my mac today.  It must have only worked due to
 cached .m2 on my mac. I'm going to try to figure out what we need to
 revert to get master going again.
 
 On 4/10/14, 9:38 AM, jenk...@cloudstack.org wrote:
 See http://jenkins.buildacloud.org/job/build-master/609/changes
 
 Changes:
 
 [marcus] CLOUDSTACK-6191 Add support for specifying volume provisioning
 
 --
 [...truncated 3066 lines...]
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[20,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[21,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[22,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[23,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[24,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[50,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[51,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[52,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[53,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[54,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java:[55,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[36,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[37,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[38,21]
  error: package com.ceph.rados does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[39,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[40,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[41,19]
  error: package com.ceph.rbd does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java:[42,23]
  error: package com.ceph.rbd.jna does not exist
 [ERROR] 
 http://jenkins.buildacloud.org/job/build-master/ws/plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java:[2373,24]
  error: cannot find symbol
 [ERROR] symbol:   class Rados
 [ERROR] location: class LibvirtComputingResource
 [ERROR] 
 

Ceph RBD speed and snapshot usage improvements for CloudStack 4.4

2014-04-11 Thread Wido den Hollander

Hi,

I just pushed two commits to master and cherry-picked them to the 4.4 
branch to resolve a long-outstanding issue with the time it takes to 
deploy templates and back up snapshots of RBD.


The two commits [0][1] also address the space used by RBD snapshots on 
Secondary Storage.


Before librbd (used by Qemu) 0.67.5 (Ceph Dumpling), qemu-img couldn't 
create RBD format 2 images, which is required for deploying a template to 
RBD. Format 2 is needed to perform layering and snapshotting on RBD images.


The same version of librbd also allows directly copying a snapshot from 
RBD to QCOW2 or RAW output. Since qemu-img now does the work, the 
destination image is sparse instead of a full write of the RBD image's size.
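For example, backing up a snapshot then becomes a single sparse-aware conversion 
along these lines (pool, image and snapshot names are illustrative):

qemu-img convert -O qcow2 rbd:rbd/volume1@snap1 /mnt/secondary/snapshots/snap1.qcow2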


Both patches will make sure that initial deployment of a template on RBD 
is a lot faster and that backing up a RBD snapshot to Secondary Storage 
will consume less time, storage and network bandwidth.


I have to note, however, that you require at least librbd 0.67.5 on your 
systems in order for these patches to work.


I think some people will be very happy with this.

Wido

[0]: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=95f6f6531260f0e0946eec3212d06b0d71befc9e
[1]: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=8764692b271d8ac683e616a732845524f77329a2


Re: [RFC]Bypass Libvirt storage pool for NFS

2014-04-11 Thread Wido den Hollander



On 03/20/2014 07:51 PM, Nux! wrote:

On 20.03.2014 18:48, Wido den Hollander wrote:


And it just went upstream! How great is that?


Pretty great. That's how open source works. :)




Quick question: there is no problem if instead of using NFS directly we
use the shared mount point option, is there?



Probably not, since we would then be mounting the filesystem manually
instead of having libvirt do it.


Ok, so worst case scenario people can just fall-back on this option.



I'd say get this patch down into EL6 and I'll try to get it into
Ubuntu 14.04.


Done! I just got an e-mail from Bugzilla: it's fixed in libvirt-0.10.2-32.el6

Wido



Yeah, fingers crossed on that!

Lucian



Re: [VOTE] Accept the donation of RDP client code into Apache CloudStack

2013-10-21 Thread Wido den Hollander

I'm +0 on this.

I can't fully say if the code is correct or not. I've never used 
Hyper-V, so I don't think it would be wise for me to vote.


Just want to show I saw this VOTE.

Wido

On 10/21/2013 08:11 PM, Donal Lafferty wrote:

As stated in a previous thread [1], Citrix is proposing the donation of source 
for an RDP client.  After donation, the client would be integrated with the 
console system VM in order to provide access to Hyper-V based VMs.

The client's source is in the diff attached to the Review Board submission 
https://reviews.apache.org/r/14701/

[1] http://markmail.org/thread/q6sfqrhosmirm3bg

I would like to call a vote here, so that we have a formal consensus on 
accepting the code into the project.  I suggest that it be accepted into a 
branch, and then we work through any technical concerns / reviews / changes 
prior to a master branch merge.

VOTING will be left open for 72 hours.

This is a technical decision, which means committer and PMC votes are binding.


DL




Re: [ASF4.2.1] default to 64-bit system VM template

2013-10-23 Thread Wido den Hollander

On 10/23/2013 07:32 AM, Abhinandan Prateek wrote:


   We are planning to make 64-bit system VM templates the default offering in 
4.2.1.
This is an initial email to gather thoughts from the community on this.



I'd say -1.

Since this is going to be a minor upgrade, and although it seems like a 
harmless change, I would postpone it to 4.3.


A minor release should be bugfixes only imho and changing the 
architecture of a System VM isn't a bugfix.


Wido


-abhi





Re: Intermittent File Access to cloudstack.apt-get.eu

2013-10-28 Thread Wido den Hollander

On 10/27/2013 11:07 AM, Marty Sweet wrote:

Hi Guys,

During the upgrade process I also kept experiencing the needed packages
disappearing from the cloudstack.apt-get.eu website; this caused multiple
servers to fail on update/install processes, only to work several minutes
later once the files had been restored. It's clear the files are changing
regularly on there, as their dates are constantly being updated.

I can reproduce this on our production nodes and from a web browser using a
different ISP; this will definitely cause issues for people new to CS and
could be a confusing first hurdle for some.

http://cloudstack.apt-get.eu/ubuntu/dists/precise/4.2/

Has anyone else experienced these issues? Just refresh the directory a few
times today and see when files appear/disappear.



Hmm, that's odd. I maintain that server and there is a script which 
re-indexes all packages, but it should re-index what is already there.


I'll check on this.

Wido


Thanks,
Marty



Re: Docker?

2013-11-03 Thread Wido den Hollander



On 10/30/2013 02:18 PM, Nguyen Anh Tu wrote:

Hi,

I'm starting to look into Docker's architecture to figure out how it plays
with LXC and OpenStack at the moment. I will try to figure out some key
points before CCC13 starts, so we can discuss them directly there.



That would be VERY nice. Docker looks very promising, and a cool 
integration with LXC in CloudStack would be awesome.


I'd love to talk about this at CCC13.

Wido


Sent from my GT-N7000
On Oct 30, 2013 4:14 PM, Sebastien Goasguen run...@gmail.com wrote:



On Oct 30, 2013, at 4:56 AM, Nux! n...@li.nux.ro wrote:


Hi,

Docker[1] seems to be all the rage these days and has breathed new life

in LXC. Is there any interest whatsoever from the devs to support for it?

I noticed it has already landed in the other stack.



The answer is yes :)

Nguyen has started looking at it, as has Darren.

Can you start a wiki page to dump some ideas? There are some interesting
architectural decisions to be made.

-sebastien


Lucian

[1] - http://www.docker.io/

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro







Re: Docker?

2013-11-03 Thread Wido den Hollander



On 11/03/2013 03:19 PM, David Nalley wrote:

On Sun, Nov 3, 2013 at 9:16 AM, Wido den Hollander w...@widodh.nl wrote:



On 10/30/2013 02:18 PM, Nguyen Anh Tu wrote:


Hi,

I'm starting to look into Docker's architecture to figure out how it plays
with LXC and OpenStack at the moment. I will try to figure out some key
points before CCC13 starts, so we can discuss them directly there.



That would be VERY nice. Docker looks very promising, and a cool integration
with LXC in CloudStack would be awesome.



We have LXC support in 4.2



I know, but Docker would add a lot of value to the LXC integration.

Haven't looked at the LXC integration just yet.

Wido


--David



Re: CloudStack Mirror in US

2013-11-19 Thread Wido den Hollander



On 11/20/2013 12:27 AM, Sebastien Goasguen wrote:

Not that I know of, but Wido set up rsync on the EU one. He sent config 
instructions some time back.



Indeed. cloudstack.apt-get.eu runs an rsync daemon which you can sync 
from. So feel free to sync from there.


Just don't run your cron every minute :)
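
Something along these lines should do (a sketch; the module name is a 
guess, so check the actual module list with 'rsync cloudstack.apt-get.eu::' 
first):

$ rsync -avz cloudstack.apt-get.eu::cloudstack /var/www/mirror/cloudstack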

wido


-Sebastien

On 19 Nov 2013, at 23:47, Musayev, Ilya imusa...@webmd.net wrote:


Do we have US mirror for ACS convenience repos?

If not, I would like to make use of my current hosting providers unlimited 
bandwidth deal.

Thanks
ilya


Re: Image format supported by cloudstack

2013-11-19 Thread Wido den Hollander



On 11/20/2013 05:23 AM, Rajesh Battala wrote:

For
  xenserver/xcp  ---   VHD
  KVM   ---   qcow2


Since the Ceph/RBD integration, KVM has also started to understand the RAW 
format, but it's not 100% there yet.


Wido


  Esx/VSphere ---   ova
  HyperV  ---   VHD

On any hypervisor, installing an OS from ISO is supported, but the root disk 
will be created in the hypervisor-supported format mentioned above.

Thanks
Rajesh Battala

-Original Message-
From: manas biswal [mailto:manas.biswa...@gmail.com]
Sent: Wednesday, November 20, 2013 9:38 AM
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org; issues-subscr...@cloudstack.apache.org
Subject: Re: Image format supported by cloudstack

Good day. The image format depends upon the hypervisor you use.
If it is XenServer/XCP then the image format is VHD, and for KVM it is .img. It 
has nothing to do with CloudStack, as XenServer/XCP keeps the virtual 
hard disks in .VHD format in the attached NFS repository. Also, in CloudStack, 
NFS storage is used for primary and secondary storage. You can import an 
existing .vhd file into CloudStack. Normally .VHD files are bigger than the 
actual Operating System (OS) space, as the VHD is the complete hard disk that 
includes both the OS space and the empty space.

On Tue, Nov 19, 2013 at 11:06 PM, Sanjay Tripathi sanjay.tripa...@citrix.com 
wrote:

Answers inline.


-Original Message-
From: jitendra shelar [mailto:jitendra.shelar...@gmail.com]
Sent: Tuesday, November 19, 2013 10:04 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org; issues-
subscr...@cloudstack.apache.org
Subject: Image format supported by cloudstack

Hi Team,

Can somebody please tell me which image formats are supported by
CloudStack?


 From the CloudStack UI, you can get the guest OS list on the register template/register 
ISO form page, or try the listOsTypes API to get the complete list.
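
For example (a sketch, assuming CloudMonkey is installed and configured 
against your management server):

$ cloudmonkey list ostypes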



And do CloudStack versions 4.0 and 4.2 provide support for Windows 7,
Windows 2012, Ubuntu 12 and Ubuntu 13 ISOs?


CloudStack supports all the mentioned guest OSes. Only for Ubuntu 13, you need to register 
the template/ISO under the Other Ubuntu guest OS type.




Thanks,
Jitendra


--Sanjay





--
Thanks and Regards
Manas Ranjan Biswal
Research Scientist,
OpenTechnology Center,
NIC,Chennai
08015698191,9776349149
manas.bis...@nic.in



Re: Error while running master

2013-11-21 Thread Wido den Hollander



On 11/20/2013 09:37 PM, Syed Ahmed wrote:

Is this change going into 4.3? If so, then the ALTER TABLE I guess should
be in schema-421to430.sql. I don't see a schema-430to440.sql though. How
does deploydb read the files? Does it go through all the
schema files or does it pick the latest one?



It somehow got lost due to my own merging. I committed it into 
421to430.sql now, but I think this should be in 430to440.sql; that 
file doesn't exist yet.
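
For anyone hitting the same error on an existing database, the manual fix 
boils down to something like this (a sketch; user and database names are 
assumed to be the defaults, and the schema upgrade file is the proper home 
for the statement):

$ mysql -u cloud -p cloud <<'SQL'
ALTER TABLE disk_offering ADD COLUMN cache_mode varchar(20);
SQL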


The feature freeze for 4.3 is in effect, right?

Wido



Thanks,
-Syed


On Wed 20 Nov 2013 03:24:37 PM EST, Wei ZHOU wrote:

Wido committed 1edaa36cc68e845a42339d5f267d49c82343aefb today.
Try again after running: ALTER TABLE disk_offering ADD COLUMN cache_mode varchar(20)
I do not know which schema file it should be inserted into:
schema-421to430.sql or schema-430to440.sql?

2013/11/20 Syed Ahmed sah...@cloudops.com


Hi All,

I am facing the following error when running the latest master. I have
done a clean compile and have dropped and created the db again.

[WARNING] Nested in
org.springframework.context.ApplicationContextException:
Failed to start bean 'cloudStackLifeCycle'; nested exception is
com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
com.mysql.jdbc.JDBC4PreparedStatement@538a1556: SELECT disk_offering.id,
disk_offering.domain_id, disk_offering.unique_name, disk_offering.name,
disk_offering.display_text, disk_offering.disk_size, disk_offering.tags,
disk_offering.type, disk_offering.removed, disk_offering.created,
disk_offering.recreatable, disk_offering.use_local_storage,
disk_offering.system_use, disk_offering.customized, disk_offering.uuid,
disk_offering.customized_iops, disk_offering.min_iops,
disk_offering.max_iops, disk_offering.sort_key,
disk_offering.bytes_read_rate, disk_offering.bytes_write_rate,
disk_offering.iops_read_rate, disk_offering.iops_write_rate,
disk_offering.cache_mode, disk_offering.display_offering,
disk_offering.state, disk_offering.hv_ss_reserve, service_offering.cpu,
service_offering.speed, service_offering.ram_size,
service_offering.nw_rate, service_offering.mc_rate,
service_offering.ha_enabled, service_offering.limit_cpu_use,
service_offering.is_volatile, service_offering.host_tag,
service_offering.default_use, service_offering.vm_type,
service_offering.sort_key, service_offering.deployment_planner FROM
service_offering INNER JOIN disk_offering ON service_offering.id=disk_
offering.id  WHERE disk_offering.type='Service' AND
disk_offering.unique_name = _binary'Cloud.Com-Small Instance'  AND
disk_offering.system_use = 1 AND disk_offering.removed IS NULL :
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown
column
'disk_offering.cache_mode' in 'field list'


This is the definition of disk_offering from create-schema.sql

CREATE TABLE `cloud`.`disk_offering` (
   `id` bigint unsigned NOT NULL auto_increment,
   `domain_id` bigint unsigned,
   `name` varchar(255) NOT NULL,
   `uuid` varchar(40),
   `display_text` varchar(4096) NULL COMMENT 'Descrianaption text set by
the admin for display purpose only',
   `disk_size` bigint unsigned NOT NULL COMMENT 'disk space in byte',
   `type` varchar(32) COMMENT 'inheritted by who?',
   `tags` varchar(4096) COMMENT 'comma separated tags about the
disk_offering',
   `recreatable` tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT 'The
root
disk is always recreatable',
   `use_local_storage` tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT
'Indicates whether local storage pools should be used',
   `unique_name` varchar(32) UNIQUE COMMENT 'unique name',
   `system_use` tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT 'is this
offering for system used only',
   `customized` tinyint(1) unsigned NOT NULL DEFAULT 0 COMMENT '0
implies
not customized by default',
   `removed` datetime COMMENT 'date removed',
   `created` datetime COMMENT 'date the disk offering was created',
   `sort_key` int(32) NOT NULL default 0 COMMENT 'sort key used for
customising sort method',
   PRIMARY KEY  (`id`),
   INDEX `i_disk_offering__removed`(`removed`),
   CONSTRAINT `uc_disk_offering__uuid` UNIQUE (`uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


And this is in my DB

mysql> describe disk_offering;
+---------------+---------------------+------+-----+---------+----------------+
| Field         | Type                | Null | Key | Default | Extra          |
+---------------+---------------------+------+-----+---------+----------------+
| id            | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| domain_id     | bigint(20) unsigned | YES  |     | NULL    |                |
| name          | varchar(255)        | NO   |     | NULL    |                |
| uuid          | varchar(40)         | YES  | UNI | NULL    |                |
| display_text  | varchar(4096)       | YES  |     | NULL    |                |
| disk_size     | bigint(20) unsigned | NO   |     | NULL    |                |
| type          | varchar(32)         | YES  |     | NULL    |                |
| tags          | 

Re: Bump to 4.4.0

2013-11-21 Thread Wido den Hollander



On 11/21/2013 04:39 PM, Hugo Trippaers wrote:

Hey all,

Together with Wido I bumped the version of master to 4.4.0, as we now have the 
4.3 branch for 4.3.0.


Yes, this was because the 4.3 branch already exists, so master should 
now be 4.4 to accommodate upgrades for new features.


Wido





Cheers from #CCCEU13 !

Hugo



[DISCUSS] Reporting tool for feeding back zone, pod and cluster information

2013-11-23 Thread Wido den Hollander

Hi,

I discussed this during CCCEU13 with David, Chip and Hugo, and I promised 
I'd put it on the mailing list.


My idea is to come up with a reporting tool which users can run daily 
which feeds us back information about how they are using CloudStack:


* Hypervisors
* Zone sizes
* Cluster sizes
* Primary Storage sizes and types
* Same for Secondary Storage
* Number of management servers
* Version

This would of course be anonymized: we would send one file with JSON 
data back to our servers, where we can process it to generate statistics.
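
To give an idea, the payload could look roughly like this (purely 
illustrative; nothing about the format has been decided yet, and the 
field names are made up):

{
  "version": "4.3.0",
  "hypervisors": {"KVM": 12, "XenServer": 4},
  "zones": 2,
  "clusters": 5,
  "primary_storage": [{"type": "RBD", "size_gb": 40960}],
  "secondary_storage": [{"type": "NFS", "size_gb": 20480}],
  "management_servers": 2
}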


The tool will obviously be open source and participating in this will be 
opt-in only.


We currently don't know what's running out there, so that would be great 
to know.


Some questions remain:
* Who is going to maintain the data?
* Who has access to the data?
* How long do we keep it?
* Do we do logging of IPs sending the data to us?

I certainly do not want to spy on our users; that's why it's opt-in 
and the tool should be part of the main repo. But I think that for us as 
a project it's very useful to know what our users are doing with CloudStack.


Comments?

Wido




Re: [DISCUSS] Reporting tool for feeding back zone, pod and cluster information

2013-11-28 Thread Wido den Hollander



On 11/26/2013 10:42 PM, Steve Wilson wrote:

I built something like this for products at Sun Microsystems.  We embedded
into nearly everything we built:

The Java Runtime Environment
Open Office
Solaris
MySQL
Even things like Server LOMs
(the list goes on)

By default, when each of these products installed/first run, it would try
to bring the user into the program.  It was always possible to opt out,
but we really worked to get people to not opt out.  We got shockingly HUGE
piles of data (literally from millions of installed product instances).
We didn't get any complaints (EVER) in the years we ran this program.  It
was hugely useful to the product teams.



I wouldn't go for opt-out by default. We might ask the question during 
the initial management server installation, but it shouldn't be opt-out 
without informing the user.



BTW, we didn't even make this data anonymous.  You could obviously choose
to be anonymous, but if people want to give their names/companies then why
not let them?  You'd be surprised how many people wouldn't mind.



I wouldn't want a database with all that information in there. I'm 
aiming for data about how CloudStack is used, not about who uses it.


We might make something where you can claim your data, but I'd 
anonymize it anyway.


Wido


-Steve

On 11/26/13 12:49 PM, Chiradeep Vittal chiradeep.vit...@citrix.com
wrote:


+1.
Of course we must ensure proper treatment of this data (anonymization,
retention, removal, copyrights)

On 11/23/13 11:01 AM, Wido den Hollander w...@widodh.nl wrote:


Hi,

I discussed this during CCCEU13 with David, Chip and Hugo, and I promised
I'd put it on the mailing list.

My idea is to come up with a reporting tool which users can run daily
which feeds us back information about how they are using CloudStack:

* Hypervisors
* Zone sizes
* Cluster sizes
* Primary Storage sizes and types
* Same for Secondary Storage
* Number of management servers
* Version

This would of course be anonymized: we would send one file with JSON
data back to our servers, where we can process it to generate statistics.

The tool will obviously be open source and participating in this will be
opt-in only.

We currently don't know what's running out there, so that would be great
to know.

Some questions remain:
* Who is going to maintain the data?
* Who has access to the data?
* How long do we keep it?
* Do we do logging of IPs sending the data to us?

I certainly do not want to spy on our users, so that's why it's opt-in
and the tool should be part of the main repo, but I think that for us as
a project it's very useful to know what our users are doing with
CloudStack.

Comments?

Wido






Re: Deleting Primary Storage, where host was removed.

2013-11-29 Thread Wido den Hollander



On 11/29/2013 09:53 AM, Girish Shilamkar wrote:

Hello,

In my test ACS setup I am stuck due to this problem 
https://issues.apache.org/jira/browse/CLOUDSTACK-4402
where I cannot delete primary storage, as the last host with which it was 
associated was removed.
Is there a workaround for this issue ? Like deleting the entries from database.



Yes, deleting from the database should be sufficient.

Every storage pool has a unique numeric ID in the database, and the 
volumes that go with it reference that ID.


I currently don't have access to a database to look at, but if you 
browse around you should be able to find the right entries.


Note: Make a backup before trying!
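
Roughly, the workaround could look like this (a sketch only: the table and 
column names are taken from the schema as I remember it, and the pool ID 
of 42 is made up, so verify everything against your own database first):

$ mysqldump -u root -p cloud > cloud-backup.sql
$ mysql -u root -p cloud <<'SQL'
SELECT id, name, pool_type, removed FROM storage_pool;
-- mark the orphaned pool as removed instead of hard-deleting the row
UPDATE storage_pool SET removed = NOW() WHERE id = 42;
SQL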

Wido


Please advise.

Regards,
Girish




Re: Progress on GlusterFS support from the CCCEU Hackathon

2013-12-06 Thread Wido den Hollander

Hi Niels,

I saw that you already got some good feedback on your first version of 
the patch.


I'll wait for the second revision of the patch to do my review.

Wido

On 12/01/2013 03:46 PM, Niels de Vos wrote:

On Mon, Nov 25, 2013 at 12:41:58PM -0800, David Nalley wrote:

Hi folks:

Just bringing some things from the hackathon back to the mailing list.

One of the things worked on there was GlusterFS support. Wido and
Niels began work on this, and you can see the blog post[1] from Niels,
which might be helpful to others as well for things like Sheepdog.

There's also now a project on gluster's forge [2] where the code for
this work in progress lives for the moment. Please don't hesitate to
get involved and help if you are interested.


I've now posted the changes for review as well:
* Add support for Primary Storage on Gluster using the libvirt backend
   - https://reviews.apache.org/r/15932/
* [#15933] Add Gluster to the list of protocols in the Management Server
   - https://reviews.apache.org/r/15933/

If someone can review these and give some pointers for improving so that
it can be included in a future release, I'd much appreciate that. The
current patches are done against the 4.2 branch, but they probably apply
cleanly against master too.

These are my first patches for CloudStack, and it's likely that I missed
something. I thought it makes sense to show these changes early so that
anyone interested can try, ask questions and/or give feedback.

Thanks,
Niels



[1] http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html
[2] https://forge.gluster.org/cloudstack-gluster#more



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: Master branch - can't run up systemvm with KVM hypervisor

2013-12-08 Thread Wido den Hollander



On 12/08/2013 12:45 PM, Nguyen Anh Tu wrote:

Hi guys,

I'm trying to deploy CS (master branch) with the KVM hypervisor but get an
error when starting up the system VM.

Primary error here:

2013-12-09 01:19:02,237 WARN  [kvm.resource.LibvirtComputingResource]
(agentRequest-Handler-5:null) LibvirtException
org.libvirt.LibvirtException: internal error unknown disk cache mode 'null'


Hmm, that might be due to my patch from two weeks ago. Let me look into 
this!


Wido


 at org.libvirt.ErrorHandler.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.processError(Unknown Source)
 at org.libvirt.Connect.domainCreateXML(Unknown Source)
 at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.startVM(LibvirtComputingResource.java:1139)
 at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:3399)
 at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1233)
 at com.cloud.agent.Agent.processRequest(Agent.java:501)
 at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:808)
 at com.cloud.utils.nio.Task.run(Task.java:81)
 at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:701)


Full Log from Agent here:

http://pastebin.com/6rtRFbdx

Anyone can help?

Thanks,

--Tuna



Re: Master branch - can't run up systemvm with KVM hypervisor

2013-12-09 Thread Wido den Hollander



On 12/09/2013 02:52 PM, Nguyen Anh Tu wrote:

On Mon, Dec 9, 2013 at 2:56 PM, Wido den Hollander w...@widodh.nl wrote:


Hmm, that might be due to my patch from two weeks ago. Let me look into
this!


Yeah, Thanks Wido :-)



I think it's fixed with this patch: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=088247b61b4ecea7bb757becd233e10c97a7a75a
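
If you want to verify on a host, something like this should show the cache 
mode the agent now renders (a sketch; the domain name is just an example):

$ virsh dumpxml i-2-10-VM | grep cache
  <driver name='qemu' type='qcow2' cache='none'/>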


Wido


--Tuna



Re: kvm - why just qcow2

2013-12-14 Thread Wido den Hollander



On 12/14/2013 09:05 AM, Marcus Sorensen wrote:

I suppose it would be best, and probably easiest, to accept templates in
vmdk, vdi, etc, but qemu-img convert to qcow2 during the copy to primary
storage, to keep the agent code from dealing with many formats. There's a
lot of code that would need rework to deal with non-qcow2 file based disks.


I ran into this when implementing RBD, since the code made all kinds of 
assumptions about the format. RBD uses RAW (as KVM sees it).


That's why I wrote QemuImg.java to abstract most of that work.

But I agree with you: we shouldn't force ourselves into QCOW2, though we 
should be aware that the hypervisors could end up doing a lot of conversion 
work.
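
For instance, this is the kind of conversion the hypervisor would end up 
doing (a sketch; file names are made up):

$ qemu-img convert -f vmdk -O qcow2 imported-disk.vmdk imported-disk.qcow2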


Wido


On Dec 13, 2013 10:39 PM, Marcus Sorensen shadow...@gmail.com wrote:


Is there any reason why we only support qcow2 format on KVM? It seems
like it would be fairly simple to support other formats, qemu-img can
handle going from VMDK to RAW for example, and qemu-kvm can even use
VMDK, QED, and other formats. It even seems like QemuImg.java was
written with other formats in mind. It seems like it wouldn't be a lot
of work to simply let other formats come through, we might have to
change LibvirtVMDef a bit so it can make the proper XML.





Re: S3 is still broken in the latest snapshot of 4.2.1

2013-12-18 Thread Wido den Hollander



On 12/18/2013 04:49 PM, Andrei Mikhailovsky wrote:

Hello guys,

My colleague has done some testing of the latest snapshot based on nightly 
4.2.1 from https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/ and 
discovered several issues with using S3 as a secondary storage, which are very 
serious in my opinion. Here is our setup:

- Ubuntu 12.04.3 with latest updates and KVM Hypervisor
- Libvirt 1.1.1 / QEMU 1.5.0 from ubuntu-cloud ppa (Havana)
- CEPH RBD / RADOSGW up to Emperor 0.72.1 (RBD as Primary and RadosGW S3 as 
Secondary with NFS staging)


All issues are easily reproducible.

1) Any instance created from an S3 Storage template or installed from ISO cannot 
Live Migrate. No matter whether the original ISO is attached or detached, 
when trying to Live Migrate, the management server and the destination host agent 
produce the following errors (by the way, the OS type is set to Ubuntu 12 64-bit, 
but it shows up as Apple Mac OS X):

2013-12-10 18:48:28,067 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) Request:Seq 26-349831812:  { Cmd , MgmtId: 90520737989049, via: 26, Ver: v1, Flags: 100111, [{com.cloud.agent.api.PrepareForMigrationCommand:{vm:{id:56,name:i-2-56-VM,type:User,cpus:2,minSpeed:2800,maxSpeed:2800,minRam:2147483648,maxRam:2147483648,arch:x86_64,os:Apple Mac OS X 10.6 (32-bit),bootArgs:,rebootOnCrash:false,enableHA:true,limitCpuUse:false,enableDynamicallyScaleVm:false,vncPassword:5c7980f779d3ffa3,params:{},uuid:bd033b3d-f86a-4d6f-bb8c-06e61b7e1d62,disks:[{data:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:6c9c3134-bfcf-4b8f-8508-db7d8fea5404,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:4a1a6908-7c45-3232-a250-550650793b1c,id:9,poolType:RBD,host:ceph.admin.lv,path:cloudstack,port:6789}},name:ROOT-56,size:21474836480,path:754a16ec-662c-4303-97f9-3168f1affbfb,volumeId:78,vmName:i-2-56-VM,accountId:2,format:RAW,id:78,hypervisorType:KVM}},diskSeq:0,type:ROOT},{data:{org.apache.cloudstack.storage.to.TemplateObjectTO:{path:template/tmpl/2/212/212-2-e6277a31-7fb6-3ca1-9486-c383c9027cdb/ub.iso,origUrl:http://www.emigrant.lv/ub.iso,uuid:75badc3e-ca5e-490c-8450-5f4397c43789,id:212,format:ISO,accountId:2,hvm:true,displayText:Ubuntu Server 12.04.3 64bit,imageDataStore:{com.cloud.agent.api.to.S3TO:{id:11,uuid:ee84fa05-3ad5-4822-89fd-0e1817421b19,endPoint:s3.admin.lv,bucketName:cs-secondary,httpsFlag:false,created:Dec 10, 2013 3:40:55 PM,enableRRS:false}},name:212-2-e6277a31-7fb6-3ca1-9486-c383c9027cdb,hypervisorType:None}},diskSeq:3,type:ISO}],nics:[{deviceId:0,networkRateMbps:200,defaultNic:true,uuid:58903a2b-ef3c-40e5-8b83-99b343ee7474,ip:10.50.1.249,netmask:255.255.255.0,gateway:10.50.1.1,mac:02:00:30:44:00:01,dns1:91.224.1.10,dns2:212.70.182.77,broadcastType:Vlan,type:Guest,broadcastUri:vlan://578,isolationUri:vlan://578,isSecurityGroupEnabled:false,name:cloudbr0},{deviceId:1,networkRateMbps:200,defaultNic:false,uuid:c0df2b28-e97b-4eda-91e0-71a171ec5509,ip:10.50.1.27,netmask:255.255.255.0,gateway:10.50.1.1,mac:02:00:6a:3e:00:10,dns1:91.224.1.10,dns2:212.70.182.

2013-12-10 18:48:28,124 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) 
Caught:
java.lang.ClassCastException: com.cloud.agent.api.to.S3TO cannot be cast to 
com.cloud.agent.api.to.NfsTO
 at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.getVolumePath(LibvirtComputingResource.java:3628)
 at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2985)
 at 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1196)
 at com.cloud.agent.Agent.processRequest(Agent.java:525)
 at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
 at com.cloud.utils.nio.Task.run(Task.java:83)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:701)



I can't tell you right now what the issue is.


2) Snapshotting seems to be totally broken:

- any ROOT volume for any instance is always marked as OVM Hypervisor (not KVM), 
which breaks the normal behaviour of the action buttons filter in 
scripts/storage.js in the GUI


That's weird, I haven't see that before. When I developed the features 
for 4.2.0 it worked just fine.



- the Volume details Action Filter functions in scripts/storage.js lack some 
conditions for hypervisor and volume type / state combinations
- due to those bugs the GUI doesn't provide Take Snapshot / Recurring Snapshot 
buttons for most of the volumes (had to create a DATA volume, attach it to VM 
and then detach for Snapshot buttons to appear).

3) Regardless of the value of the snapshot.backup.rightafter setting, snapshots 
created on RBD are always copied to the Secondary S3 / NFS storage.



That's a known bug but not something 

Re: Regarding contribution to CloudStack project

2013-12-26 Thread Wido den Hollander

Hi Abhinav!

On 12/23/2013 05:27 PM, Abhinav Koppula wrote:

Hi all,

I am Abhinav Koppula, a senior undergraduate student pursuing my Bachelors
in India. I am interested in contributing towards the Apache CloudStack
project. I would be really glad if anyone could guide me on how I can get
started.
Also, I wanted to know if there are any pre-requisites(in terms of computer
science concepts) which I need to cover before starting off.



Well, we don't require anything. You can start contributing right away 
if you want to.



I am skilled in Java but however I do not have prior experience of working
on cloud computing platforms. What books/resources should I refer which
would help me in understanding the code-base easily?



There are no real books on CloudStack and how to code for it; such a book 
would be impossible to write at this moment, since so much is changing in 
CloudStack.


I recommend you start by cloning the source and getting to 
understand the code.


Are there any particular things you want to work on? If so, we can point 
you in the right direction on where to look.


Wido


Thanks,
Abhinav Koppula



Re: Nexenta iSCSI and NFS driver

2013-12-31 Thread Wido den Hollander



On 12/31/2013 04:53 AM, Victor Rodionov wrote:

Hello,

I want to implement an iSCSI and NFS driver for NexentaStor; what do you think
about this, guys?



What would the driver do, since CloudStack already supports iSCSI and 
NFS? Will it create volumes on the Nexenta platform automatically?


Wido


Thanks,
Victor Rodionov



Re: [Proposal] Switch to Java 7

2014-01-06 Thread Wido den Hollander

Just to repeat what has been discussed some time ago.

All the current Long Term Support distributions have Java 7 available.

RHEL6, RHEL7, Ubuntu 12.04, Ubuntu 14.04 (due in April) will all have 
Java 7 available.


I don't see a problem in switching to Java 7 with CloudStack 4.4 or 4.5

Wido

On 01/07/2014 12:18 AM, Kelven Yang wrote:

Java 7 has been around for some time now. I strongly suggest CloudStack 
adopt Java 7 as early as possible. The reason I feel the need to raise the 
issue comes from some experience with the new DB transaction pattern, as the 
following example shows. The new transaction pattern uses an anonymous class 
to beautify the code structure, but in the meantime it introduces a couple of 
runtime costs:

   1.  An anonymous class introduces a “captured context”: information exchange 
between the containing context and the anonymous class implementation context 
has to go either through mutable passed-in parameters or through a returned 
result object. In the following example, without changing the basic Transaction 
framework, I have to exchange data through a returned result in an un-typed 
array. This has a few implications at run time: basically, each call of the 
method generates two objects on the heap. Depending on how frequently the 
involved method is called, this may introduce quite a burden on the Java GC 
process.
   2.  The captured context also means that more hidden classes will be 
generated; since each appearance of an anonymous class implementation has a 
distinct copy of its own as a hidden class, it will generally increase our 
permanent heap usage, which is already pretty huge with the current CloudStack 
code base.

Java 7 has language-level support that addresses, in a cheaper way, the issues 
our current DB transaction code pattern is trying to solve: 
http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html.
   So, time to adopt Java 7?

 public Outcome<VirtualMachine> startVmThroughJobQueue(final String vmUuid,
         final Map<VirtualMachineProfile.Param, Object> params,
         final DeploymentPlan planToDeploy) {

     final CallContext context = CallContext.current();
     final User callingUser = context.getCallingUser();
     final Account callingAccount = context.getCallingAccount();

     final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);

     Object[] result = Transaction.execute(new TransactionCallback<Object[]>() {
         @Override
         public Object[] doInTransaction(TransactionStatus status) {
             VmWorkJobVO workJob = null;

             _vmDao.lockRow(vm.getId(), true);
             List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
                     VirtualMachine.Type.Instance, vm.getId(), VmWorkStart.class.getName());

             if (pendingWorkJobs.size() > 0) {
                 assert (pendingWorkJobs.size() == 1);
                 workJob = pendingWorkJobs.get(0);
             } else {
                 workJob = new VmWorkJobVO(context.getContextId());

                 workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
                 workJob.setCmd(VmWorkStart.class.getName());

                 workJob.setAccountId(callingAccount.getId());
                 workJob.setUserId(callingUser.getId());
                 workJob.setStep(VmWorkJobVO.Step.Starting);
                 workJob.setVmType(vm.getType());
                 workJob.setVmInstanceId(vm.getId());
                 workJob.setRelated(AsyncJobExecutionContext.getOriginJobContextId());

                 // save work context info (there are some duplications)
                 VmWorkStart workInfo = new VmWorkStart(callingUser.getId(),
                         callingAccount.getId(), vm.getId(),
                         VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER);
                 workInfo.setPlan(planToDeploy);
                 workInfo.setParams(params);
                 workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));

                 _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
             }

             return new Object[] {workJob, new Long(workJob.getId())};
         }
     });

     final long jobId = (Long)result[1];
     AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(jobId);

     return new VmStateSyncOutcome((VmWorkJobVO)result[0],
             VirtualMachine.PowerState.PowerOn, vm.getId(), null);
 }


Kelven



Re: [JENKINS] Failing jobs (again)

2014-01-13 Thread Wido den Hollander



On 01/13/2014 01:59 PM, Hugo Trippaers wrote:

Hey all,

We seem to have some trouble with a number of buildslaves. The rpmbuilder and 
docs slaves are not reachable at the moment. I’m trying to move some of the 
work to other nodes, but due to git timeouts jobs keep failing.

Will keep you guys posted.


Thanks! Although I get the point of receiving these e-mails on dev@, they 
sometimes bug me very much.


I usually read all my mail on my phone, and those build failed e-mails 
are irritating noise.


Didn't the build process send out these e-mails to the last X committers?

Can't we do something where we only send out the e-mail once every hour 
or so?


Wido



Cheers,

Hugo



Re: [ANNOUNCE] New PMC Member: Giles Sirett

2014-01-21 Thread Wido den Hollander

Congrats Giles! Great to have you onboard!

Wido

On 01/21/2014 03:45 PM, Chip Childers wrote:

The Project Management Committee (PMC) for Apache CloudStack has asked
Giles Sirett to join the PMC and we are pleased to announce that he
has accepted.

Join me in congratulating Giles!

-The CloudStack PMC



Re: Cloudstack 4.3 on Centos i686

2014-01-21 Thread Wido den Hollander



On 01/21/2014 11:31 PM, Prabhakaran Ganesan wrote:

Hi

Does 4.3 support Centos i686? I am able to download and build the source on my 
Centos host (32-bit). But I get the following Java exception when I try to 
start the management server. Is the problem with Java or is it some 64-bit 
dependency?



Well, I think it should. The JVM should take care of the 32-bit/64-bit 
stuff, but my question is: Why still 32-bit?


I really don't see a memory-related problem here, but you never know whether 
the 4GB memory limit is an issue. I've never run CS under i686.


Wido


Thanks
Prabhakar

INFO  [c.c.s.GsonHelper] (main:null) Default Builder inited.
INFO  [c.c.u.c.ComponentContext] (main:null) Setup Spring Application context
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:null) Running system 
integrity checker com.cloud.upgrade.DatabaseIntegrityChecker@1344568
INFO  [c.c.u.DatabaseIntegrityChecker] (main:null) Grabbing lock to check for 
database integrity.
INFO  [c.c.u.DatabaseIntegrityChecker] (main:null) Performing database 
integrity check
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:null) Running system 
integrity checker 
org.apache.cloudstack.utils.identity.ManagementServerNode@4a3e92
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:null) Configuring 
CloudStack Components
2014-01-21 09:27:07.585:WARN::Failed startup of context 
org.mortbay.jetty.plugin.Jetty6PluginWebAppContext@b71be{/client,/root/cloudstack/client/target/generated-webapp}
org.springframework.context.ApplicationContextException: Failed to start bean 
'cloudStackLifeCycle'; nested exception is net.sf.ehcache.CacheException: 
Unable to create CacheManagerPeerListener. Initial cause was 
sdk-vee-appliance-2.englab.juniper.net: sdk-vee-appliance-2.englab.juniper.net
at 
org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:170)
at 
org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at 
org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:339)
at 
org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:143)
at 
org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:108)
at 
org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:945)
at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContext(DefaultModuleDefinitionSet.java:141)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$2.with(DefaultModuleDefinitionSet.java:119)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:239)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:244)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:244)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:227)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContexts(DefaultModuleDefinitionSet.java:115)
at 
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load(DefaultModuleDefinitionSet.java:78)
at 
org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules(ModuleBasedContextFactory.java:37)
at 
org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init(CloudStackSpringContext.java:69)
at 
org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init(CloudStackSpringContext.java:56)
at 
org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init(CloudStackSpringContext.java:60)
at 
org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized(CloudStackContextLoaderListener.java:51)
at 
org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:549)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:136)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 
org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6PluginWebAppContext.java:115)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 

[DISCUSS] Potential removal of OpenJDK and Tomcat from Ubuntu main

2014-01-22 Thread Wido den Hollander

Hi,

James Page from Canonical just pointed me to a thread [0] on the Ubuntu 
Cloud list where a discussion started about removing OpenJDK 7/Tomcat from 
Ubuntu main.


He asked me what the impact on CloudStack would be if users 
had to fetch OpenJDK and Tomcat from a 3rd-party repo, so I 
quickly responded that it would hurt the Ubuntu users running CloudStack.


For now it looks like OpenJDK and Tomcat will stay in Ubuntu's main 
repository, but for me it sparked the discussion again around Java 7.


We can be pretty sure that distributions will be dropping Java 6 pretty 
soon, so I want to change the Maven settings to force Java 7 in the master 
branch.


We should also start testing with Tomcat 7, since we can expect Ubuntu 
14.04 (the next LTS) to ship only that Tomcat version.


We've been over the Java 7 switch over and over, so I recommend we 
simply switch master.


I'll start a different thread about that later with an ANNOUNCE in it.

Wido

[0]: https://lists.ubuntu.com/archives/ubuntu-devel/2014-January/037991.html


[ANNOUNCE] Switch to Java 7 in master branch on Monday Jan 27th 2014

2014-01-23 Thread Wido den Hollander

Hi,

We had multiple threads discussing the switch to Java 7 which all ended 
in +1 everywhere.


The most recent one [0] is from the beginning of this month, and since 
nobody has actually taken the step, I'll do it.


On 27-01-2014 11:00:00 CET (GMT+1) I'll push a commit that sets 
cs.jdk.version to 1.7 in the pom.xml.


I'll also update the Debian and RPM packaging to only depend on Java 7 
packages and no longer on Java 6.
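
For those building from source, verifying the switch boils down to 
something like this (a sketch; the JDK path is Ubuntu's, so adjust for 
your distribution):

$ export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
$ mvn -v    # should report Java version 1.7.x
$ mvn clean install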


Barring any objections, I'll proceed with this on the date specified above.

Wido

[0]: 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201401.mbox/%3CCEF079AF.5E3AB%25kelven.yang%40citrix.com%3E


Re: Location of the 4.3 System VM Templates

2014-01-23 Thread Wido den Hollander

On 01/23/2014 03:35 PM, David Nalley wrote:

I'll respectfully disagree. We need to find a place for systemVMs
whether that means classifying them as convenience binaries or putting
them on Wido's box, or S3.



I manually sync the images now. Sometimes I download some images to this 
location: http://cloudstack.apt-get.eu/systemvm/


For example the 4.2 Templates should be there currently.


The problem with jenkins, and specifically with that link for jenkins
is that 4.3 is still a living branch; and like with 4.2.1 there might
be changes to the system VM. We should promote a known good systemvm.

--David

On Thu, Jan 23, 2014 at 1:55 AM, Hugo Trippaers h...@trippaers.nl wrote:

The best location for the system vm images is 
http://jenkins.buildacloud.org/view/4.3/job/cloudstack-4.3-systemvm/. There we 
have the systemvm images that belong to the latest build agains the 4.3 tree. 
Look for the 'Last Successful Artifacts’ on that page.

Cheers,

Hugo


On 23 jan. 2014, at 07:30, Abhisek Basu abhisekb...@msn.com wrote:


I was able to get them from path like: 
http://download.cloud.com/templates/4.3/systemvm64template-2013-12-23-hyperv.vhd.bz2

Regards,

Abhisek

-Original Message- From: Radhika Puthiyetath
Sent: Thursday, January 23, 2014 11:15 AM
To: dev@cloudstack.apache.org
Subject: Location of the 4.3 System VM Templates

Hi,

Could someone please help me with the location of the System VM Templates for 
4.3 release ?

I am unable to access http://download.cloud.com. I assume that they are posted 
to the templates directory.

Thanks
-Radhika







[ANNOUNCE] Switched to Java 7

2014-01-27 Thread Wido den Hollander

Hi,

As promised [0], I switched the master [1] branch to Java 7 this morning.

The builds all work on my Ubuntu 12.04 system, but if anything is broken 
on your system, keep this switch in mind.


Wido

[0]: 
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201401.mbox/%3C52E0F3B1.1000100%40widodh.nl%3E
[1]: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=49a29ce0cfe0fca535037d0d7e15e399e76ea49f


Re: [ANNOUNCE] Switched to Java 7

2014-01-27 Thread Wido den Hollander



On 01/27/2014 12:04 PM, Wido den Hollander wrote:

Hi,

As promised [0], I switched the master [1] branch to Java 7 this morning.

The builds all work on my Ubuntu 12.04 system, but if anything is broken
on your system, keep this switch in mind.



Right, I see that it broke Jenkins, since that seems to have Java 6 
installed.


Could somebody who can access Jenkins install Java 7?

Wido


Wido

[0]:
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201401.mbox/%3C52E0F3B1.1000100%40widodh.nl%3E

[1]:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=49a29ce0cfe0fca535037d0d7e15e399e76ea49f



Re: [ANNOUNCE] Switched to Java 7

2014-01-27 Thread Wido den Hollander

Hi,

I noticed some questions regarding the switch to Java 7.

If you run Ubuntu, make sure you install this package: openjdk-7-jdk

Afterwards, make sure your java and javac executables point towards Java 7:

$ sudo update-alternatives --config java
$ sudo update-alternatives --config javac

Set both to the Java 7 paths and verify with:

$ java -version
$ javac -version

Wido

On 01/27/2014 12:04 PM, Wido den Hollander wrote:

Hi,

As promised [0], I switched the master [1] branch to Java 7 this morning.

The builds all work on my Ubuntu 12.04 system, but if anything is broken
on your system, keep this switch in mind.

Wido

[0]:
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201401.mbox/%3C52E0F3B1.1000100%40widodh.nl%3E

[1]:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=49a29ce0cfe0fca535037d0d7e15e399e76ea49f



Re: cloudstack-agent on ubuntu

2014-01-30 Thread Wido den Hollander

On 01/29/2014 08:24 PM, Anirban Chakraborty wrote:

Hi All,

I am trying to install cloudstack-agent on Ubuntu 12.04 and it is failing on 
the openjdk-6-jre dependency. Following is the error message:

The following packages have unmet dependencies:
  cloudstack-agent : Depends: openjdk-6-jre but it is not installable or
  openjdk-7-jre but it is not going to be installed
 Depends: cloudstack-common (= 4.2.1) but it is not going 
to be installed
 Depends: ethtool but it is not installable
E: Unable to correct problems, you have held broken packages.
——
Has any one seen this, and if so, how do I get around it? Thanks much.



Can you verify whether you can install ethtool manually on your machine?

It would be weird if you couldn't, but it's still worth checking out; a 
sketch follows below.
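
Something like this should tell us more (a sketch):

$ sudo apt-get update
$ apt-cache policy ethtool openjdk-7-jre
$ sudo apt-get install ethtool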

Wido


Anirban Chakraborty





Re: Review Request 15932: Add support for Primary Storage on Gluster using the libvirt backend

2014-02-07 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15932/#review33922
---


My sincere apologies, Niels! I completely missed the second version of your 
patch!

I just took a look at it again and it seems pretty straightforward. Most of 
the RBD code did a lot of the work for you, so it's fairly easy to have GlusterFS 
in CS.

I tried applying the patch to the master branch and that failed. It seems that 
you wrote the patch against the 4.2 branch, correct?

Could you try to rebase it against master? If it then applies, we might be able to 
get GlusterFS into 4.4! :)
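
For reference, the libvirt side of this boils down to a 'netfs' pool with 
the glusterfs source format, roughly like this (a sketch of what the agent 
sets up for you; host, volume name and target path are made up):

$ virsh pool-define-as gluster-primary netfs \
    --source-host gluster.example.com --source-path /primary-vol \
    --source-format glusterfs --target /mnt/gluster-primary
$ virsh pool-start gluster-primary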

- Wido den Hollander


On Jan. 14, 2014, 3:54 p.m., Niels de Vos wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/15932/
 ---
 
 (Updated Jan. 14, 2014, 3:54 p.m.)
 
 
 Review request for cloudstack.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 The support for Gluster as Primary Storage is mostly based on the
 implementation for NFS. Like NFS, libvirt can address a Gluster environment
 through the 'netfs' pool-type.
 
 
 Diffs
 -
 
   api/src/com/cloud/storage/Storage.java 07b6667 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
  182cb22 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
  e181cea 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
  a707a0b 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
  6aaabc5 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
  aaefc16 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
  0760e51 
   
 plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/lifecycle/CloudStackPrimaryDataStoreLifeCycleImpl.java
  7555c1e 
 
 Diff: https://reviews.apache.org/r/15932/diff/
 
 
 Testing
 ---
 
 See http://blog.nixpanic.net/2013/12/using-gluster-as-primary-storage-in.html
 
 
 Thanks,
 
 Niels de Vos
 




Re: Review Request 15932: Add support for Primary Storage on Gluster using the libvirt backend

2014-02-19 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15932/#review34859
---

Ship it!


It seems good to me. Applies cleanly to master and builds just fine.

Code-wise it's simple but effective, should allow us to support Gluster.

- Wido den Hollander


On Feb. 19, 2014, 8:24 a.m., Niels de Vos wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/15932/
 ---
 
 (Updated Feb. 19, 2014, 8:24 a.m.)
 
 
 Review request for cloudstack.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 The support for Gluster as Primary Storage is mostly based on the
 implementation for NFS. Like NFS, libvirt can address a Gluster environment
 through the 'netfs' pool-type.
 
 
 Diffs
 -
 
   api/src/com/cloud/storage/Storage.java ff83dfc 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
  d63b643 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
  dbe5d4b 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
  a6186f6 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
  ff75d61 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
  8cdecd8 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
  a5f33eb 
   
 plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/lifecycle/CloudStackPrimaryDataStoreLifeCycleImpl.java
  b90d5fc 
 
 Diff: https://reviews.apache.org/r/15932/diff/
 
 
 Testing
 ---
 
 See http://blog.nixpanic.net/2013/12/using-gluster-as-primary-storage-in.html
 
 
 Thanks,
 
 Niels de Vos
 




Re: [DISCUSS] Policy blocker?

2014-02-20 Thread Wido den Hollander

On 02/20/2014 02:37 PM, David Nalley wrote:

Hi folks,

I cringe to raise this issue. After 6 RCs I am sure we are all feeling
a little bit of release vote fatigue. Especially Animesh. I apologize
in advance; in all other respects I am ready to give a +1 to RC6.



My apologies, I didn't find the time to play with 4.3 at all.


I've been playing with 4.3.0-rc6 for a couple of days now. I attempted
to build some RPMs and had problems with dependency resolution in
maven. This led me to looking at a number of different poms, and I
noticed mysql-connector-java is listed as a runtime dependency. For
our end users, this really isn't necessary - the debs and rpms specify
a requirement (effectively a system requirement in the terms of
policy) for mysql-connector-java. We don't need it to build the
software (at least not in any location I've seen) - just when running.
(And thus its a system dependency, much like MySQL is.)

mysql-connector-java is GPLv2; which is Cat X. By including it as a
dependency in the pom it automatically gets downloaded. The 3rd Party
software policy has this line in it:

YOU MUST NOT distribute build scripts or documentation within an
Apache product with the purpose of causing the default/standard build
of an Apache product to include any part of a prohibited work.

We've released software with this dependency previously. Is this a
blocker for 4.3 or do we fix going forward? (If we hadn't already
shipped releases with this problem I'd lean a bit more towards it
being a blocker - but its more murky now.)

Thoughts, comments, flames?



I'd say we are OK for now with this. We know it's a problem and it can 
be fixed in the next release imho.


Wido


--David

[1] https://www.apache.org/legal/3party.html





Re: Review Request 15932: Add support for Primary Storage on Gluster using the libvirt backend

2014-02-20 Thread Wido den Hollander


 On Feb. 19, 2014, 1:35 p.m., Wido den Hollander wrote:
  It seems good to me. Applies cleanly to master and builds just fine.
  
  Code-wise it's simple but effective, should allow us to support Gluster.

I just merged it into master and pushed.

So Gluster is in master right now! Niels, can I ask you to test it all again, 
just to make sure the code works as you intended?


- Wido


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15932/#review34859
---


On Feb. 19, 2014, 8:24 a.m., Niels de Vos wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/15932/
 ---
 
 (Updated Feb. 19, 2014, 8:24 a.m.)
 
 
 Review request for cloudstack.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 The support for Gluster as Primary Storage is mostly based on the
 implementation for NFS. Like NFS, libvirt can address a Gluster environment
 through the 'netfs' pool-type.
 
 
 Diffs
 -
 
   api/src/com/cloud/storage/Storage.java ff83dfc 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
  d63b643 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
  dbe5d4b 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
  a6186f6 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
  ff75d61 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
  8cdecd8 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
  a5f33eb 
   
 plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/lifecycle/CloudStackPrimaryDataStoreLifeCycleImpl.java
  b90d5fc 
 
 Diff: https://reviews.apache.org/r/15932/diff/
 
 
 Testing
 ---
 
 See http://blog.nixpanic.net/2013/12/using-gluster-as-primary-storage-in.html
 
 
 Thanks,
 
 Niels de Vos
 




Re: Fwd: Warning from dev@cloudstack.apache.org

2014-02-21 Thread Wido den Hollander



On 02/20/2014 05:01 PM, Frankie Onuonga wrote:

Hi guys,
I got this in my mail box from last year.
I honestly do not know how I missed this.

Isn't this a little odd as I have been able to mail you guys successfully.



I've been getting the same. I started ignoring them since my e-mail 
works just fine.


Wido


Kind Regards,

Onuonga Frankie

-- Forwarded message --
From: dev-h...@cloudstack.apache.org
Date: Sun, Dec 29, 2013 at 4:55 PM
Subject: Warning from dev@cloudstack.apache.org
To: frankie.onuo...@gmail.com


Hi! This is the ezmlm program. I'm managing the
dev@cloudstack.apache.org mailing list.

I'm working for my owner, who can be reached
at dev-ow...@cloudstack.apache.org.


Messages to you from the dev mailing list seem to
have been bouncing. I've attached a copy of the first bounce
message I received.

If this message bounces too, I will send you a probe. If the probe bounces,
I will remove your address from the dev mailing list,
without further notice.


I've kept a list of which messages from the dev mailing list have
bounced from your address.

Copies of these messages may be in the archive.
To retrieve a set of messages 123-145 (a maximum of 100 per request),
send a short message to:
dev-get.123_...@cloudstack.apache.org

To receive a subject and author list for the last 100 or so messages,
send a short message to:
dev-in...@cloudstack.apache.org

Here are the message numbers:

48919

--- Enclosed is a copy of the bounce message I received.

Return-Path: 
Received: (qmail 21385 invoked for bounce); 18 Dec 2013 16:05:55 -
Date: 18 Dec 2013 16:05:55 -
From: mailer-dae...@apache.org
To: dev-return-489...@cloudstack.apache.org
Subject: failure notice






Re: Review Request 18412: Gluster should store disk images in qcow2 format

2014-02-25 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18412/#review35381
---

Ship it!


Ship It!

- Wido den Hollander


On Feb. 23, 2014, 7:20 p.m., Niels de Vos wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18412/
 ---
 
 (Updated Feb. 23, 2014, 7:20 p.m.)
 
 
 Review request for cloudstack and Wido den Hollander.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 By default all network disks are in RAW format. Gluster works fine with
 QCOW2 which has some advantages.
 
 Disks are by default in QCOW2 format. It is possible to run into
 a mismatch, where the disk is in QCOW2 format, but QEMU gets started
 with format=raw. This causes the virtual machines to lockup on boot.
 
 Failures to start a virtual machine can be verified by checking the log
 of the virtual machine and comparing it with the output of 'qemu-img info'.
 
 In /var/log/libvirt/qemu/VM.log find the URL for the drive:
 
 -drive file=gluster+tcp://...,format=raw,..
 
 Compare this with the 'qemu-img info' output of the same file, mounted
 under /mnt/pool-uuid/img-uuid:
 
 # qemu-img info /mnt/pool-uuid/img-uuid
 ...
 file format: qcow2
 ...
 
 This change passes the format when creating a disk located on RBD
 (RAW only) and Gluster (QCOW2).
 
 
 Diffs
 -
 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
  c986855 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
  9cf6a90 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtVMDef.java
  290c5a9 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
  1c37607 
 
 Diff: https://reviews.apache.org/r/18412/diff/
 
 
 Testing
 ---
 
 Test results described with this setup: 
 http://blog.nixpanic.net/2014_02_23_archive.html
 
 
 Thanks,
 
 Niels de Vos
 




Re: Review Request 15933: Add Gluster to the list of protocols in the Management Server

2014-02-25 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/15933/#review35382
---

Ship it!


Ship It!

- Wido den Hollander


On Feb. 19, 2014, 8:27 a.m., Niels de Vos wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/15933/
 ---
 
 (Updated Feb. 19, 2014, 8:27 a.m.)
 
 
 Review request for cloudstack.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 Gluster can now be used for Primary Storage just like NFS. This change adds the
 Gluster protocol to the Management Server:
 
 Infrastructure - Primary Storage - Add Primary Storage
 
 It also adds the option to create Primary Storage on Gluster when
 following the 'Add Zone' wizard from:
 
 Infrastructure - Zones - Add Zone
 
 
 Diffs
 -
 
   client/WEB-INF/classes/resources/messages.properties bd4a27d 
   ui/dictionary.jsp 7ccb466 
   ui/scripts/sharedFunctions.js 2a15967 
   ui/scripts/system.js 8159124 
   ui/scripts/zoneWizard.js fd5705b 
 
 Diff: https://reviews.apache.org/r/15933/diff/
 
 
 Testing
 ---
 
 Some screenshots and verification:
 - http://blog.nixpanic.net/2013/12/using-gluster-as-primary-storage-in.html
 
 
 Thanks,
 
 Niels de Vos
 




Re: Who committed this?

2014-02-25 Thread Wido den Hollander



On 02/25/2014 12:33 PM, Hugo Trippaers wrote:

commit 14689d781005006d95d5f67573331fd64e4c57a6
Author: Niels de Vos nde...@redhat.com
Date:   Sun Feb 23 14:32:24 2014 +0100

 Gluster should store volumes in qcow2 format


commit c02197ae86ba90ee4553fa437a1200e64915649f
Author: Niels de Vos nde...@redhat.com
Date:   Sat Nov 23 14:30:40 2013 -0700

 Add Gluster to the list of protocols in the Management Server



There is no signed-off included in the commit?



So that was me... I ran a 'git am', but I forgot the -s. My bad, thanks 
for pointing it out!
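
For reference, a minimal illustration of the flag in question (the mbox file
name is a placeholder):

  git am -s niels.mbox      # -s/--signoff appends a Signed-off-by trailer
  git commit --amend -s     # or add the trailer to the last commit afterwards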


Wido



Cheers,

Hugo



Re: Unreleased 4.4.1 packages on cloudstack.apt-get.eu, why?

2014-10-09 Thread Wido den Hollander


On 10/09/2014 01:21 PM, Nux! wrote:
 Hello,
 
 I've noticed there are 4.4.1 packages on 
 http://cloudstack.apt-get.eu/rhel/4.4/. 
 Since 4.4.0 is the latest release, how come? This is bad, people will install 
 broken stuff.
 Can anyone fix this?
 

I removed the packages, but I'm not sure who uploaded them.

They were uploaded on September 30th at 11:12 CET.

Wido

 Lucian
 
 --
 Sent from the Delta quadrant using Borg technology!
 
 Nux!
 www.nux.ro
 


Re: Using console VM's without realhostip.com domain name

2014-10-10 Thread Wido den Hollander
Hi,

Go to the global settings and empty the console proxy and secondary
storage domain settings.

Those values should say realhostip.com now.

Restart the management server and you're done. That will disable SSL.
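
If you prefer the API over the UI, something like this should also work (a
sketch using cloudmonkey; the setting names below are the ones mentioned in
the realhostip deprecation notes, so verify them in your Global Settings
page first):

  update configuration name=consoleproxy.url.domain value=
  update configuration name=secstorage.ssl.cert.domain value=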

Wido

On 10/10/2014 05:22 AM, Amin wrote:
 Hello,
 
  
 
 Can anyone please advise how to use the console VM in CloudStack 4.3.1
 (upgraded from 2.2.14) after the deprecation of realhostip.com? We are not
 using SSL certificates and we don't have a domain name.
 
  
 
 Thanks in advance 
 
  
 
 


Re: [VOTE][ACS44]Apache CloudStack 4.4.1 RC 3 in branch 4.4-RC20141014T2316

2014-10-15 Thread Wido den Hollander
+1

Based on the commits that have happened my old tests are still valid.

Wido

On 10/14/2014 11:23 PM, Daan Hoogland wrote:
 Hi All,
 
 As all open blocking issues were reported fixed today, I've created a 4.4.1
 release, with the following artifacts up for a vote:
 
 Git Branch and Commit SH:
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.4-RC20141014T2316
 Commit: 8db506b536f3139250d33df571c98c1c3fa83650
 
 List of changes:
 http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/
 
 Source release (checksums and signatures are available at the same
 location):
 https://dist.apache.org/repos/dist/dev/cloudstack/4.4.1/
 
 PGP release keys (signed using 4096R/AA4736F3):
 https://dist.apache.org/repos/dist/release/cloudstack/KEYS
 
 Vote will be open for 72 hours.
 
 For sanity in tallying the vote, can PMC members please be sure to
 indicate (binding) with their vote?
 
 [ ] +1  approve
 [ ] +0  no opinion
 [ ] -1  disapprove (and reason why)
 ​may for(tun/c)e​ be with us,
 


Re: [PROPOSAL] Remove SNAPSHOT from versioning and keep tags on the release branch

2014-10-20 Thread Wido den Hollander

On 10/20/2014 12:33 PM, Rohit Yadav wrote:
 Hi,
 
 Background:
 
 Whenever we start on a new release and cut its release branch, for
 example 4.5 branch, we add the -SNAPSHOT string to the version
 string in pom.xmls, debian/changelog and elsewhere. Just this mere
 action adds a divergence between release and master branches and
 between two minor releases as well. Also, we have seen build issues
 that come up just because someone forgot to add or remove -SNAPSHOT
 or .snapshot in debian/ or packaging. The other issue is that
 historically we keep release tags on the release branches; doing
 this makes it easy to find commits and follow the git history.
 Doing a separate RC branch and then tagging on it is alright (you
 can still do a git fetch and a git checkout of the tag), but it breaks
 the historic convention.
 
 
 So, please share your views on the follow proposal that try to add
 simple changes:
 
 1. Remove -SNAPSHOT suffix (and its lower/other case variants) from
 the source code, just change to next version and keep working on
 it; then we don't have to fix build systems as often.
 
 2. In future keep release tags on major release branch (for
 example, 4.3.0, 4.3.1 both on 4.3 branch)
 

+1 from my side. It breaks the Debian/Ubuntu packaging quite often and
I don't see any benefit from the -SNAPSHOT versioning.
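
The divergence in question is a single version element in every pom.xml, for
example (illustration only):

  <version>4.5.0-SNAPSHOT</version>   <!-- current convention -->
  <version>4.5.0</version>            <!-- with the proposal -->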


Wido

 
 
 Regards, Rohit Yadav Software Architect, ShapeBlue M. +91 88 262
 30892 | rohit.ya...@shapeblue.com Blog: bhaisaab.org | Twitter:
 @_bhaisaab
 
 



Re: [PROPOSAL] Move to github PR only during moratorium on commit

2014-10-20 Thread Wido den Hollander

On 10/18/2014 11:00 AM, sebgoa wrote:
 After [1] I would like to officially bring up the following
 proposal.
 
 [Proposal]  All commits come through github PR, *even* for
 committers. We declare a moratorium period (agreed suspension of
 activity) during which direct commit to master is forbidden. Only
 the master RM is allowed to merge PR in master (we define a master
 RM). If direct commit to master is done, master RM reverts without
 warning. Same for the 4.5 and 4.4 branches. 
 
 This is drastic and I am sure some folks will not like it, but here
 is my justification for such a measure:
 

I fully understand the reasoning and I agree that this is the best way
forward.

Code quality is much more needed than new features. Reverting without
warning is also just fine.

It will take some time to adjust, but this would be a great thing to do.

Wido

 [Reasons]:  Our commit and release processes have so far been
 based on the idea that development happens on master and that a
 release branch is cut from master (unstable development branch).
 Then a different set of community members harden the release
 branch, QA and bring it to GA level. During that time development
 keeps on going in master.
 
 This is an OK process if we have the luxury of having a QA team and
 can cope with split personality of being developers and release
 managers.
 
 My point of view is that as a community we cannot afford such a
 split brain organization and our experience overt the last year
 proves my point (delayed release date, broken builds, features
 merged without warning…)
 
 We can avoid this by cutting a release branch from a stable one
 (from the start), then as you (Daan) have mentioned several times,
 fix bugs in the release branch and merge them back in the stable
 source of the release (be it master).
 
 Feature development needs to be done outside master, period. Not
 only for non-committers but also for committers. And merge requests
 need to be called. This will help review and avoid surprises.
 
 New git workflows were proposed and shot down, mostly calling for
 better CI to solve quality issues. CI will not solve our quality
 issues alone. We need to better police ourselves.
 
 To avoid long discussions, I propose this simple but drastic
 measure. We move all our commits to github PR until 4.5 is out,
 this stands for committers and non-committers, direct commits
 (especially to master) would be reverted immediately. 
 
 Our development and release process is broken, we cannot continue
 like this, let's fix it.
 
 [1] http://markmail.org/thread/xeliefp3oleq3g54
 
 -sebastien
 



Re: failed 430-440 upgrade: Unable to start instance due to Unable to get answer that is of class com.cloud.agent.api.StartAnswer

2014-10-20 Thread Wido den Hollander


On 10/21/2014 12:25 AM, Nux! wrote:
 Hi guys,
 
 I've followed the 4.3.x to 4.4.0 upgrade instruction to the letter and it has 
 worked, except the last step which failed to restart my system VMs. Now I'm 
 on 4.4.0 without any system VMs; they are stuck in a creation loop which 
 never succeeds. Relevant management log:
 
 Job failed due to exception Resource [Host:1] is unreachable: Host 1: Unable 
 to start instance due to Unable to get answer that is of class 
 com.cloud.agent.api.StartAnswer
 http://pastebin.com/raw.php?i=k6WksLcU
 
 And on the agent:
 
 2014-10-20 23:10:57,369 DEBUG [cloud.agent.Agent] 
 (agentRequest-Handler-5:null) Processing command: 
 com.cloud.agent.api.StopCommand
 2014-10-20 23:10:57,371 DEBUG [kvm.resource.LibvirtConnection] 
 (agentRequest-Handler-5:null) can't find connection: KVM, for vm: v-439-VM, 
 continue
 2014-10-20 23:10:57,372 DEBUG [kvm.resource.LibvirtConnection] 
 (agentRequest-Handler-5:null) can't find connection: LXC, for vm: v-439-VM, 
 continue
 2014-10-20 23:10:57,372 DEBUG [kvm.resource.LibvirtConnection] 
 (agentRequest-Handler-5:null) can't find which hypervisor the vm used , then 
 use the default hypervisor
 2014-10-20 23:10:57,374 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-5:null) Failed to get dom xml: 
 org.libvirt.LibvirtException: Domain not found: no domain with matching name 
 'v-439-VM'
 2014-10-20 23:10:57,375 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-5:null) Failed to get dom xml: 
 org.libvirt.LibvirtException: Domain not found: no domain with matching name 
 'v-439-VM'
 2014-10-20 23:10:57,377 DEBUG [kvm.resource.LibvirtComputingResource] 
 (agentRequest-Handler-5:null) Failed to get dom xml: 
 org.libvirt.LibvirtException: Domain not found: no domain with matching name 
 'v-439-VM'
 
 And this goes on and on.
 
 Any ideas?
 

Is all the logging on DEBUG on the Agent? There has to be more
information in the logfile telling you what is going on exactly.

Wido

 Lucian
 
 
 --
 Sent from the Delta quadrant using Borg technology!
 
 Nux!
 www.nux.ro
 


Re: failed 430-440 upgrade: Unable to start instance due to Unable to get answer that is of class com.cloud.agent.api.StartAnswer

2014-10-21 Thread Wido den Hollander


On 10/21/2014 12:36 AM, Nux! wrote:
 Wido,
 
 Alas DEBUG is on without telling me anything I can understand. Here's some 
 more:
 
 http://pastebin.com/raw.php?i=cYf8g5tg
 

So this is the issue:

2014-10-20 23:24:27,323 DEBUG [cloud.agent.Agent]
(agentRequest-Handler-1:null) Seq 1-1609755391808241885:  { Ans: ,
MgmtId: 270567360573239, via: 1, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.Answer":{"result":false,"details":"No enum
constant
com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef.diskCacheMode.none","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped
by previous failure","wait":0}}] }

That should be fixed in 4.4.1

By this commit:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commitdiff;h=eeac81838fcc3c53d2115074c8141c9411d3ef4c

Just upgrading the agent to 4.4.1 is enough, the mgmt server can still
run 4.4.0
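
A minimal sketch of that agent-only upgrade, assuming the 4.4.1 packages are
in your repository (package and service names as shipped on RHEL/CentOS):

  yum upgrade cloudstack-agent
  service cloudstack-agent restart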

Wido

 --
 Sent from the Delta quadrant using Borg technology!
 
 Nux!
 www.nux.ro
 

Re: Review Request 27393: Fix for Resize RBD Root Disk on Deploy

2014-10-31 Thread Wido den Hollander

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27393/#review59415
---

Ship it!


Seems good to me! I'll apply it.

- Wido den Hollander


On Oct. 30, 2014, 8:37 p.m., Logan B wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/27393/
 ---
 
 (Updated Oct. 30, 2014, 8:37 p.m.)
 
 
 Review request for cloudstack and Wido den Hollander.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 Currently if you specify a root disk size when deploying an instance using 
 Ceph RBD as the Primary Storage backend the disk will not be resized.
 
 This patch adds a check after deploying an instance that checks the specified 
 root disk volume size against the size of the template it was deployed from.  
 If the specified volume size is larger than the template it will resize the 
 root disk volume to the specified size.
 
 If the specified volume size is equal to or smaller than the template size 
 then no resizing occurs, and the instance will be deployed with a root disk 
 size equal to the size of the template.
 
 
 Diffs
 -
 
   
 plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
  af7a0e0 
 
 Diff: https://reviews.apache.org/r/27393/diff/
 
 
 Testing
 ---
 
 Test: Created new instance from 5GB template with 20GB root disk size 
 specified.
 Result: Instance created successfully with 20GB volume size.
 
 Test: Created new instance from 5GB template with 5GB root disk size 
 specified.
 Result: Instance created successfully with 5GB volume size.
 
 Test: Created new instance from 5GB template with 1GB root disk size 
 specified.
 Result: Instance created successfully with 5GB volume size.
 
 Test: Created new instance from 5GB template with no root disk size specified.
 Result: Instance created successfully with 5GB volume size.
 
 Patch was deployed to our production infrastructure on July 25th, 2014.  It's 
 been running stable since that time.
 
 
 Thanks,
 
 Logan B
 




Re: Strange if else under LibvirtStorageAdaptor.java[lines 1203-1206]

2014-11-03 Thread Wido den Hollander


On 11/03/2014 10:05 AM, Santhosh Edukulla wrote:
 Team,
 
 Both paths of the if-else below are doing the same thing, please check. 
 This is observed under master.
 

I think that is a weird merge thing somewhere. I don't see any reason
why this if statement is there.

Wido

  if (srcPool.getType() != StoragePoolType.RBD) {
 newDisk = destPool.createPhysicalDisk(name, 
 Storage.ProvisioningType.THIN, disk.getVirtualSize());
 } else {
 newDisk = destPool.createPhysicalDisk(name, 
 Storage.ProvisioningType.THIN, disk.getVirtualSize());
 }
 
 Santhosh
 


Re: n00b question about code cleanup fixes

2014-11-05 Thread Wido den Hollander


On 11/05/2014 07:28 PM, Derrick Schneider wrote:
 Hey, all. I just downloaded and got Cloudstack building on my dev machine,
 and I had a question about a minor code-cleanup issue I saw. Specifically,
 in DatabaseUpgradeChecker, there's this:
 
 if (Version.compare(trimmedCurrentVersion, upgrades[upgrades.length
 - 1].getUpgradedVersion()) != 0) {
     s_logger.error("The end upgrade version is actually at " +
         upgrades[upgrades.length - 1].getUpgradedVersion() +
         " but our management server code version is at " +
         currentVersion);
     throw new CloudRuntimeException("The end upgrade version is actually at " +
         upgrades[upgrades.length - 1].getUpgradedVersion() +
         " but our management server code version is at " +
         currentVersion);
 }
 
 
 I thought I would clean this up to just have one copy of the string so that
 one doesn't need to remember to update both copies when changing/expanding
 the message. There are some other instances in the file as well.
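 
 For example, a minimal sketch of the deduplicated version (keeping the
 existing names):
 
     final String message = "The end upgrade version is actually at " +
         upgrades[upgrades.length - 1].getUpgradedVersion() +
         " but our management server code version is at " + currentVersion;
     s_logger.error(message);
     throw new CloudRuntimeException(message);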
 
 This isn't a bug fix, but do I still file a bug report for tracking
 requests? It's obviously not a feature, so it probably doesn't warrant
 discussion (despite the fact that I'm emailing about it).
 

No, you don't need to. You can simply file a patch on reviews.apache.org
and it can go into CloudStack.

Always great to see commits coming from the community!

Wido

 This is probably a n00b question, but it wasn't clear from the
 documentation about contributing.
 


Re: KVM VM snapshots any time soon ?

2014-11-05 Thread Wido den Hollander


On 11/05/2014 07:37 PM, Logan Barfield wrote:
 This is a feature we would like to see as well.
 
 Does KVM have native support for VM/Memory snapshotting?  If so what are
 the barriers to getting this implemented?
 

KVM has the support, but there is a problem with the storage.

Snapshotting a disk is simple, that's supported by QCOW2, LVM, RBD, etc.

But when you want to snapshot a VirtualMachine you have to store the
memory contents somewhere.

That is something that hasn't been worked out. Do you create an
additional 'disk' for storing the memory contents?

Those are the barriers I currently know of for implementing it.
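
For what it's worth, libvirt itself can already take an external disk plus
memory snapshot in one go. A hedged sketch (domain name and paths are
placeholders):

  virsh snapshot-create-as vm01 snap1 \
      --memspec file=/mnt/primary/vm01.mem,snapshot=external \
      --diskspec vda,snapshot=external

The open question is where CloudStack should put that memory file.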

Wido

 
 Thank You,
 
 Logan Barfield
 Tranquil Hosting
 
 On Wed, Nov 5, 2014 at 1:20 PM, Andrija Panic andrija.pa...@gmail.com
 wrote:
 
 Anybody ?

 On 3 November 2014 15:23, Andrija Panic andrija.pa...@gmail.com wrote:

 Hi guys,

 do we have any plans in the near future to implement full VM KVM
 snapshots? So not a single volume snapshot, but real VM snapshots. It is
 already there for Xen/VMware, but not KVM.

 Any thoughts, ETA, or difficulties to implement this ?
 Any feedback greatly appreciated...

 Best,


 Andrija Panić




 --

 Andrija Panić
 --
   http://admintweets.com
 --

 


Re: KVM VM snapshots any time soon ?

2014-11-05 Thread Wido den Hollander


On 11/05/2014 10:13 PM, Andrija Panic wrote:
 Wido,  would it be possible that the memory is really stored as you
 suggested in a separate disk file, on the same primary storage as where the
 ROOTdisk exists.

Maybe, probably a possibility. I haven't looked into it that deeply.

 How did you implement this with Xen (I'm not familiar with it at all.) ?
 

I did not implement it on Xen, so I don't know.

Wido



Re: KVM VM snapshots any time soon ?

2014-11-05 Thread Wido den Hollander


On 11/05/2014 10:18 PM, Mike Tutkowski wrote:
 Hey Wido,
 
 Perhaps you know the answer to this:
 
 My main interactions with KVM have been in the form of data disks.
 
 I create a LUN on my SAN that gets passed to a VM as a data disk.
 
 In this situation, how would a snapshot of such a disk work?
 

The disk would be snapshotted, but at the same time the memory contents
of the VM would be written to a new disk/file.

 On XenServer and ESX, your data disk can be just a subset of the overall
 LUN and hypervisor snapshots create new virtual disks that are stored right
 next to the original ones on the same LUN.
 
 I'm unclear how this works in KVM.
 

It would work about the same with KVM.

You can look into the virsh save command of libvirt to see what it does.
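
A hedged illustration (domain name and path are placeholders):

  virsh save i-2-10-VM /mnt/primary/i-2-10-VM.save   # dump memory state to a file; the VM stops
  virsh restore /mnt/primary/i-2-10-VM.save          # resume from the saved state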

Wido

 Thanks!
 Mike
 


Re: KVM VM snapshots any time soon ?

2014-11-06 Thread Wido den Hollander


On 11/05/2014 10:26 PM, Mike Tutkowski wrote:
 When you say the disk would be snapshotted, do you mean from the point of
 view of the SAN or from the point of view of the hypervisor?
 

Hypervisor. QCOW2 files support snapshots, and so does RBD.

When using iSCSI we usually create a VG on top of the LUN and then we
can snapshot each LV which is mapped to an Instance.
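
To illustrate the VG-on-LUN approach (a sketch only; the device and names
are placeholders):

  pvcreate /dev/sdb                   # the iSCSI LUN
  vgcreate cloudstack /dev/sdb
  lvcreate -L 20G -n i-2-10-root cloudstack
  lvcreate -s -L 2G -n i-2-10-snap /dev/cloudstack/i-2-10-root   # snapshot of the LV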

 A SAN snapshot is no problem. I'm mainly curious to see how this would work
 for a hypervisor snapshot.
 
 Thanks!
 


Re: [DISCUSS] Changing nictype/disk controller for a template or VM

2014-11-10 Thread Wido den Hollander


On 10-11-14 08:37, Rohit Yadav wrote:
 Hi,
 
 In case of VMWare, when we register a new template we can pass in nic type 
 and disk controller but there is no API (or parameter passable to current 
 APIs) to change nic type and disk controller for both VM and template.
 
 I’m planning to add a details map (following the registerTemplate API) to 
 updateTemplate and updateVirtualMachine APIs which would take in the nictype 
 and disk controller parameters. Please share if this would cause any issue 
 for you, or if there was a good reason to not allow it before. Right now, 
 doing something like that would require database hacking. Thanks.
 

I don't see any problems. For some longer-running VMs it might be useful
at some point to be able to switch from e1000 to VirtIO NICs, for example.
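
A hypothetical sketch of what the proposed call could look like, reusing the
detail keys that registerTemplate already understands for VMware (the exact
key names would have to be confirmed by the actual patch):

  update virtualmachine id=<uuid> details[0].nicAdapter=Vmxnet3 \
      details[0].rootDiskController=scsi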

Wido

 Regards,
 Rohit Yadav
 Software Architect, ShapeBlue
 M. +91 88 262 30892 | rohit.ya...@shapeblue.com
 Blog: bhaisaab.org | Twitter: @_bhaisaab
 
 


Re: Strange if else under LibvirtStorageAdaptor.java[lines 1203-1206]

2014-11-10 Thread Wido den Hollander


On 10-11-14 17:04, Santhosh Edukulla wrote:
 Wido,
 
 If I get your note right, shall I reduce the mentioned if-else logic to the 
 one-liner below?
 
 newDisk = destPool.createPhysicalDisk(name, Storage.ProvisioningType.THIN, 
 disk.getVirtualSize());
 

Yes, that should be right. The if-statement doesn't actually do anything.

 Regards,
 Santhosh
 
 From: Wido den Hollander [w...@widodh.nl]
 Sent: Monday, November 03, 2014 5:27 AM
 To: Santhosh Edukulla; dev@cloudstack.apache.org
 Cc: shadow...@gmail.com
 Subject: Re: Strange if else under LibvirtStorageAdaptor.java[lines 1203-1206]
 
 On 11/03/2014 10:05 AM, Santhosh Edukulla wrote:
 Team,

 Either of the paths are doing the same thing for below if else, please 
 check. This is observed under master.

 
 I think that is a weird merge thing somewhere. I don't see any reason
 why this if statement is there.
 
 Wido
 
  if (srcPool.getType() != StoragePoolType.RBD) {
 newDisk = destPool.createPhysicalDisk(name, 
 Storage.ProvisioningType.THIN, disk.getVirtualSize());
 } else {
 newDisk = destPool.createPhysicalDisk(name, 
 Storage.ProvisioningType.THIN, disk.getVirtualSize());
 }

 Santhosh

 


Re: [DISCUSS][ACS44] release 4.4.2

2014-11-11 Thread Wido den Hollander


On 11/11/2014 03:40 PM, Daan Hoogland wrote:
 H,
 
 As we made a big boo-boo by having a different commit id used for the
 artifact than for the tag/vote, I need to create a new release, including
 a vote. Does anybody have last-minute entries/fixes they want in? I plan
 to build tonight (about 21:00 UTC)
 

I am working on this one as we speak:
https://issues.apache.org/jira/browse/CLOUDSTACK-3383

That is currently broken in all versions. I'd love to have that one in 4.4.2.

Wido

 regards,
 

