Thank you, Mike and Tim.
I will follow this guide to submit code ASAP.
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Git
On Tue, Jun 10, 2014 at 4:33 AM, Mike Tutkowski
mike.tutkow...@solidfire.com wrote:
Thanks, Hieu!
I have reviewed your design (making only minor changes to your Wiki).
Please feel free to have me review your code when you are ready.
Also, do you have a plan for integration testing? It would be great if you
could update your Wiki page to include what your plans are in this
Hieu,
I made a couple of minor edits to your design to ensure everything is
XenServer based. If you haven't done so already, please also fetch the
most recent master and base off of that. I refactored the old Xen plugin
into a XenServer specific one since Xen Project isn't currently supported,
Yes, I was going to mention what Tim said about using the term XenServer
instead of Xen as Tim has done a bunch of work recently to separate the
two.
I made a few changes in your Wiki when I saw a reference to Xen instead
of to XenServer.
On Mon, Jun 9, 2014 at 2:53 PM, Tim Mackey
Hi Mike,
Done. I have added a new FAQ section.
In addition, I have tested that a volume can take a snapshot, that a new volume
can be created from that snapshot, and that it attaches back to a VM normally.
Currently I am testing volume migration.
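For reference, the test workflow described here could be scripted against the CloudStack API roughly as below. createSnapshot, createVolume, and attachVolume are real API commands, but the client wrapper, its request() method, and all IDs are assumptions for this sketch:

```python
class CloudStackClient:
    """Stand-in client; a real one would sign and send HTTP API requests."""
    def __init__(self):
        self.calls = []

    def request(self, command, **params):
        self.calls.append((command, params))
        # Pretend the API returns an id for the created resource.
        return {"id": f"{command.lower()}-id"}

def snapshot_roundtrip(client, volume_id, vm_id):
    # 1) Take a snapshot of the volume.
    snap = client.request("createSnapshot", volumeid=volume_id)
    # 2) Create a new volume from that snapshot.
    vol = client.request("createVolume", snapshotid=snap["id"],
                         name="restored-root")
    # 3) Attach the restored volume back to the VM.
    client.request("attachVolume", id=vol["id"], virtualmachineid=vm_id)
    return [c[0] for c in client.calls]

steps = snapshot_roundtrip(CloudStackClient(), "vol-1", "vm-1")
print(steps)  # ['createSnapshot', 'createVolume', 'attachVolume']
```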
On Fri, Jun 6, 2014 at 11:11 AM, Mike Tutkowski
Sorry, thought you were based off the link you provided in this reply.
In our case, we are using CloudStack integrated into a VDI solution to provide
the pooled VM type [1]. So maybe my approach can bring a better UX for users,
with lower boot time ...
A short summary of the design changes follows:
- VM will be
Hi Hieu,
Would it be good to include a bulk operation for this feature? In addition,
does Xen support parallel execution of these operations?
Regards,
Amit
*CloudByte Inc.* http://www.cloudbyte.com/
On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE hieul...@gmail.com wrote:
Hi Hieu,
After going through your Golden Primary Storage proposal, my understanding is
that you are creating an SSD golden PS for holding the parent VHD (nothing but
the template that gets copied from secondary storage) and a normal primary
storage for the ROOT volumes (child VHDs) of the corresponding VMs.
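As a sketch of that layout (illustrative Python only, not CloudStack code; the class and pool names are made up): one read-only golden parent VHD on the SSD pool, and a thin writable child VHD per VM on the normal pool:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vhd:
    name: str
    pool: str              # which primary storage this VHD lives on
    parent: "Vhd" = None   # golden parent, or None for the parent itself

def provision(golden, vm_names, child_pool):
    """Each VM gets its own writable child VHD chained to the shared parent."""
    return {vm: Vhd(name=f"{vm}-root", pool=child_pool, parent=golden)
            for vm in vm_names}

golden = Vhd(name="win7-template", pool="ssd-golden-ps")
roots = provision(golden, [f"vm{i}" for i in range(1000)], "normal-ps")

# All 1000 root disks share the one parent on SSD: reads of unmodified
# blocks hit the fast pool, while writes land in each VM's child VHD.
print(len(roots), roots["vm0"].parent.pool)
```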
Hieu,
If I understand the objective correctly, you are trying to reduce the
IO associated with a desktop start-of-day boot storm. In your
proposal, you effectively want to extend the CloudStack secondary
storage concept to include a locally attached storage device which is
SSD based. While
Hi Hieu,
Thanks for sending a link to your proposal.
Some items we should consider:
1) We need to make sure that CloudStack does not delete your golden
template in the background. As it stands today with XenServer, if a
template resides on a primary storage and no VDI is referencing it, the
5) We need to understand how this new model impacts storage tagging, if at
all.
On Thu, Jun 5, 2014 at 12:50 PM, Mike Tutkowski
mike.tutkow...@solidfire.com wrote:
To follow up on the storage tagging question I raised, I think it could
work this way:
The storage tag field could still be employed and it would be in reference
to the primary storage that houses the root disks (and VM snapshots)...not
in reference to the golden primary storage that is used to
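One way that matching could behave, as a rough sketch (the function and pool names are assumptions, not the real allocator): a pool qualifies only if its tag set covers every tag on the disk offering, and the tags point at the root-disk pool rather than the golden pool:

```python
def pools_matching(offering_tags, pools):
    """Return pools whose tag set covers every tag on the disk offering."""
    required = set(offering_tags)
    return [name for name, tags in pools.items() if required <= set(tags)]

pools = {
    "root-disk-ps": {"ssd", "vdi"},   # houses root disks and VM snapshots
    "golden-ps":    {"golden"},       # golden templates only
    "bulk-ps":      {"sata"},
}
print(pools_matching({"ssd"}, pools))  # ['root-disk-ps']
```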
Other than going through a for loop and deploying VM after VM, I don't
think CloudStack currently supports a bulk-VM-deploy operation.
It would be nice if CS did so at some point in the future; however, that is
probably a separate proposal from Hieu's.
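The for-loop approach could look roughly like this. deployVirtualMachine is a real CloudStack API command, but the client object and its request() method are stand-ins for illustration:

```python
class StubClient:
    """Counts calls instead of talking to a real management server."""
    def __init__(self):
        self.count = 0

    def request(self, command, **params):
        self.count += 1
        return {"id": f"vm-{self.count}"}

def deploy_many(client, count, service_offering_id, template_id, zone_id):
    """Deploy `count` VMs one at a time; there is no bulk-deploy API call."""
    vm_ids = []
    for i in range(count):
        resp = client.request(
            "deployVirtualMachine",
            serviceofferingid=service_offering_id,
            templateid=template_id,
            zoneid=zone_id,
            name=f"pooled-vm-{i}",
        )
        vm_ids.append(resp["id"])
    return vm_ids

client = StubClient()
ids = deploy_many(client, 5, "so-1", "tmpl-win7", "zone-1")
print(len(ids))  # 5
```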
On Thu, Jun 5, 2014 at 12:13 AM, Amit Das
6) The copy_vhd_from_secondarystorage XenServer plug-in is not used when
you're using XenServer + XS62ESP1 + XS62ESP1004. In that case, please refer
to copyTemplateToPrimaryStorage(CopyCommand) method in the
Xenserver625StorageProcessor class.
On Thu, Jun 5, 2014 at 1:56 PM, Mike Tutkowski
Hieu,
I assume you are using MCS for your golden image? What version of XD? Given
you are using pooled desktops, have you thought about using a PVS BDM ISO
and mounting it within your 1000 VMs? This way you can stagger reboots via
the PVS console or Studio. This would require a change to your delivery
Hi guys,
Hmm, lots of problems and questions; I will try to resolve them one by one.
On Fri, Jun 6, 2014 at 1:51 AM, Mike Tutkowski mike.tutkow...@solidfire.com
wrote:
Hi Tim,
On Fri, Jun 6, 2014 at 1:39 AM, Tim Mackey tmac...@gmail.com wrote:
Hi Todd,
On Fri, Jun 6, 2014 at 9:17 AM, Todd Pigram t...@toddpigram.com wrote:
Hi Hieu,
Would you be able to place these questions and answers in your design doc
so that we can more easily track them?
Thanks!
Mike
On Thu, Jun 5, 2014 at 9:55 PM, Hieu LE hieul...@gmail.com wrote:
Daan helped out with this. You should be good to go now.
On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE hieul...@gmail.com wrote:
Mike, Punith,
Please review Golden Primary Storage proposal. [1]
Thank you.
[1]:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
Hi Mike,
You are right: performance will decrease over time because write IOPS
will always end up on the slower storage pool.
In our case, we are using CloudStack integrated into a VDI solution to provide
the pooled VM type [1]. So maybe my approach can bring a better UX for users,
with lower boot time ...
Hi,
Yes, please feel free to add a new Wiki page for your design.
Here is a link to applicable design info:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
Also, feel free to ask more questions and have me review your design.
Thanks!
Mike
On Tue, Jun 3, 2014 at 7:29 PM, Hieu
Hi Mike,
Could you please give edit/create permission on ASF Jira/Wiki confluence ?
I can not add a new Wiki page.
My Jira ID: hieulq
Wiki: hieulq89
Review Board: hieulq
Thanks !
Thanks, Mike and Punith, for the quick replies.
Both solutions you suggested are correct. But as I mentioned in
the first email, I want a better solution for the current infrastructure
at my company.
Creating a high-IOPS primary storage using storage tags is good, but it will
be very wasteful
It is an interesting idea. If the constraints you face at your company can
be corrected somewhat by implementing this, then you should go for it.
It sounds like writes will be placed on the slower storage pool. This means
that as you update OS components, those updates will be placed on the slower
Also, give some thought in your design as to how VM migration will work.
Thanks!
On Monday, June 2, 2014, Mike Tutkowski mike.tutkow...@solidfire.com
wrote:
Hi all,
There are some problems when deploying a large number of VMs at my company
with CloudStack. All VMs are deployed from the same template (e.g., Windows 7),
and the quantity is approximately ~1000 VMs. The problem here is low IOPS and
low VM performance (about ~10-11 IOPS; boot time is very
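For what it's worth, the back-of-the-envelope math behind a figure like ~10 IOPS per VM: with no per-volume guarantee, N VMs sharing one pool each get roughly the pool total divided by N (the 10,000 IOPS pool total below is an assumed example, not a measured number):

```python
def iops_per_vm(pool_total_iops, vm_count):
    """With no per-volume guarantee, VMs share the pool total roughly evenly."""
    return pool_total_iops / vm_count

print(iops_per_vm(10_000, 1_000))  # 10.0
```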
Hi Hieu,
Your problem is the bottleneck we see as storage vendors in the cloud:
the VMs in the cloud are not guaranteed IOPS from the primary storage.
In your case, I'm assuming you are running 1000 VMs on a Xen cluster
where all the VMs' disks lie on the same primary
Thanks, Punith - this is similar to what I was going to say.
Any time a set of CloudStack volumes share IOPS from a common pool, you
cannot guarantee IOPS to a given CloudStack volume at a given time.
Your choices at present are:
1) Use managed storage (where you can create a 1:1 mapping