Re: Windows Server 2016 - unable to boot ISO for installation

2017-08-06 Thread Cloud List
Hi Lucian and all,

Just a quick update: the problem was resolved after I added the lines below
to the host's agent.properties:

===
guest.cpu.mode=custom
guest.cpu.model=core2duo
===

and restarted the CloudStack agent. The VM is now able to boot up from the
Windows Server 2016 ISO without hitting the BSOD issue.
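
For reference, a rough end-to-end sketch of the change on the KVM host (the
agent.properties path and service name below are the usual defaults; adjust
for your distribution):

===
# Append the CPU model override to the agent configuration
echo "guest.cpu.mode=custom" >> /etc/cloudstack/agent/agent.properties
echo "guest.cpu.model=core2duo" >> /etc/cloudstack/agent/agent.properties

# Restart the agent so newly started VMs pick up the model
service cloudstack-agent restart

# After starting a VM, confirm the CPU model libvirt assigned to it
virsh dumpxml <instance-name> | grep -A 2 "<cpu"
===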

Many thanks for all the help in pointing me in the right direction! :)

Cheers.

-ip-


On Thu, Aug 3, 2017 at 12:04 PM, Cloud List  wrote:

> Hi Lucian,
>
> Good day to you, and thank you for your reply.
>
> From what I understand, once the hypervisor is managed by CloudStack
> (using cloudstack-agent), we shouldn't use virt-manager to create VMs;
> all VMs have to be created through CloudStack, correct?
>
> I read further, and based on the documentation below:
>
> https://social.technet.microsoft.com/Forums/en-US/695c8997-52cf-4c30-a3f7-f26a40dc703a/failed-install-of-build-10041-in-the-kvm-virtual-machine-system-thread-exception-not-handled?forum=WinPreview2014Setup
>
> I am using a Sandy Bridge CPU, and based on the above, it seems that we
> can resolve the problem by setting the CPU model to 'core2duo'.
>
> The question is how to change this through CloudStack. Can I confirm that
> we can do so by following the "Configure CPU model for KVM guest
> (Optional)" instructions at the URL below?
>
> http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.8/hypervisor/kvm.html
>
> Based on those instructions, I would need to add the following lines to
> the agent.properties file:
>
> ===
> guest.cpu.mode=custom
> guest.cpu.model=core2duo
> ===
>
> and then restart the CloudStack agent for the changes to take effect. Is
> that correct? Please correct me if I'm wrong.
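>
> As a sanity check before settling on 'core2duo', the host's libvirt can
> list the CPU models it knows about (a suggestion of mine, not from the
> docs above; needs a reasonably recent libvirt):
>
> ===
> virsh cpu-models x86_64
> ===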
>
> Looking forward to your reply, thank you.
>
> Cheers.
>
> -ip-
>
>
>
>
>
> On Tue, Aug 1, 2017 at 11:26 PM, Nux!  wrote:
>
>> Hello,
>>
>> I am not in a position to test, but it should be fairly easy to; just
>> power up virt-manager on the said Ubuntu hypervisor and try to create a
>> Windows 2016 VM with  
>>
>> Let us know how it went :)
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> - Original Message -
>> > From: "Cloud List" 
>> > To: "users" 
>> > Cc: "dev" 
>> > Sent: Tuesday, 1 August, 2017 04:11:06
>> > Subject: Re: Windows Server 2016 - unable to boot ISO for installation
>>
>> > Hi Lucian,
>> >
>> > Nice to hear from you again. Below is the KVM version (QEMU version, to
>> > be exact) I am using:
>> >
>> > # kvm --version
>> > QEMU emulator version 2.4.1, Copyright (c) 2003-2008 Fabrice Bellard
>> >
>> > I am using Ubuntu instead of CentOS for the KVM host; may I know if this
>> > issue affects KVM running on Ubuntu as well? I am using Intel Xeon E5
>> > series CPUs for the hypervisor hosts, and I'm not too sure whether they
>> > fall under the Xeon E series mentioned on that list.
>> >
>> > Looking forward to your reply, thank you.
>> >
>> > Cheers.
>> >
>> > -ip-
>> >
>> >
>> > On Tue, Aug 1, 2017 at 12:45 AM, Nux!  wrote:
>> >
>> >> It could be this
>> >> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/known_issues_virtualization.html
>> >>
>> >> --
>> >> Sent from the Delta quadrant using Borg technology!
>> >>
>> >> Nux!
>> >> www.nux.ro
>> >>
>> >> - Original Message -
>> >> > From: "Nux!" 
>> >> > To: "users" 
>> >> > Cc: "dev" 
>> >> > Sent: Monday, 31 July, 2017 16:54:51
>> >> > Subject: Re: Windows Server 2016 - unable to boot ISO for installation
>> >>
>> >> > What version of KVM?
>> >> >
>> >> > --
>> >> > Sent from the Delta quadrant using Borg technology!
>> >> >
>> >> > Nux!
>> >> > www.nux.ro
>> >> >
>> >> > - Original Message -
>> >> >> From: "Cloud List" 
>> >> >> To: "users" 
>> >> >> Cc: "dev" 
>> >> >> Sent: Monday, 31 July, 2017 03:31:45
>> >> >> Subject: Windows Server 2016 - unable to boot ISO for installation
>> >> >
>> >> >> Dear all,
>> >> >>
>> >> >> I am using CloudStack 4.8.1.1 with the KVM hypervisor. I have downloaded
>> >> >> the Windows Server 2016 ISO, but when I try to boot the ISO for
>> >> >> installation and template creation, it immediately goes into a Blue
>> >> >> Screen / BSOD with the error messages below:
>> >> >>
>> >> >> ===
>> >> >> Your PC ran into a problem and needs a restart. We'll restart for you.
>> >> >>
>> >> >> For more information about this issue and possible fixes, visit
>> >> >> http://windows.com/stopcode
>> >> >>
>> >> >> If you call a support person, give them this info:
>> >> >> Stop Code: SYSTEM THREAD EXCEPTION NOT HANDLED
>> >> >> ===
>> >> >>
>> >> >> and then it will go into a restart, hit the same error, and keep on
>> >> >> 

Re: KVM qcow2 performance

2017-08-06 Thread Ivan Kudryavtsev
Hi. No offence, but as the topic author wrote, he gets 30 MB/s. I just kept
that in mind when I wrote about 100.

On 7 Aug 2017 at 3:01, "Eric Green"  wrote:

>
> > On Aug 5, 2017, at 21:03, Ivan Kudryavtsev  wrote:
> >
> > Hi, I think Eric's comments are too harsh. E.g. I have 11x 1TB SSDs with
> > Linux soft RAID 5 and ext4, and it works like a charm without special
> > tuning.
> >
> > Qcow2 is also not so bad. LVM2 does better, of course (if not being
> > snapshotted). Our users have different workloads, and nobody claims disk
> > performance is a problem. Read/write at 100 MB/sec over a 10G connection
> > is not a problem at all for the setup specified above.
>
> 100 MB/sec is the speed of a single vintage 2010 5200 RPM SATA-2 drive.
> For many people, that is not a problem. For some, it is. For example, I
> have a 12x-SSD RAID10 for a database. This RAID10 is on a SAS2 bus with 4
> channels thus capable of 2.4 gigaBYTES per second raw throughput. Yes, I
> have validated that the SAS2 bus is the limit on throughput for my SSD
> array. If I provided a qcow2 volume to the database instance that only
> managed 100MB/sec, my database people would howl.
>
> I have many virtual machines that run quite happily with thin qcow2
> volumes on 12-disk RAID6 XFS datastores (spinning storage) with no problem,
> because they don't care about disk throughput; they are there to process
> data, or provide services like DNS or a wiki knowledge base, or otherwise
> do things that aren't particularly time-critical in our environment. So
> it's all about your customer and his needs. For maximum throughput, qcow2
> on an ext4 soft RAID capable of doing 100 MB/sec is very... 2010 spinning
> storage... and people who need more than that, like database people, will
> be extremely dissatisfied.
>
> Thus my suggestions of ways to improve performance via a custom disk
> offering, for those cases where disk performance, and specifically write
> performance, is a problem:
>
> 1. Switch to 'sparse' rather than 'thin' as the provisioning mechanism.
> This greatly speeds writes, since only the filesystem's block allocation
> mechanisms get invoked rather than qcow2's, and qcow2 then has only a
> single allocation zone, which greatly speeds its own lookups.
>
> 2. Use a different underlying filesystem that has proven to have more
> consistent performance. XFS isn't much faster than ext4 under most
> scenarios, but it doesn't have the lengthy dropouts in performance that
> come with lots of writes on ext4.
>
> 3. Possibly flip on async caching in the disk offering, if data integrity
> isn't a problem. For example, for an Elasticsearch instance, the data is
> all replicated across multiple nodes on multiple datastores anyhow, so if
> I lose an Elasticsearch node's data, so what? I just destroy that instance
> and create a new one to join to the cluster!
>
> And of course there's always the option of simply avoiding qcow2
> altogether and providing the data via iSCSI or NFS directly to the
> instance, which may be what you need to do for something like a database
> that has some very specific performance and throughput requirements.
>
>
>


Re: KVM qcow2 performance

2017-08-06 Thread Eric Green

> On Aug 5, 2017, at 21:03, Ivan Kudryavtsev  wrote:
> 
> Hi, I think Eric's comments are too harsh. E.g. I have 11x 1TB SSDs with
> Linux soft RAID 5 and ext4, and it works like a charm without special
> tuning.
> 
> Qcow2 is also not so bad. LVM2 does better, of course (if not being
> snapshotted). Our users have different workloads, and nobody claims disk
> performance is a problem. Read/write at 100 MB/sec over a 10G connection is
> not a problem at all for the setup specified above.

100 MB/sec is the speed of a single vintage 2010 5200 RPM SATA-2 drive. For 
many people, that is not a problem. For some, it is. For example, I have a 
12x-SSD RAID10 for a database. This RAID10 is on a SAS2 bus with 4 channels 
thus capable of 2.4 gigaBYTES per second raw throughput. Yes, I have validated 
that the SAS2 bus is the limit on throughput for my SSD array. If I provided a 
qcow2 volume to the database instance that only managed 100MB/sec, my database 
people would howl.

I have many virtual machines that run quite happily with thin qcow2 volumes on
12-disk RAID6 XFS datastores (spinning storage) with no problem, because they
don't care about disk throughput; they are there to process data, or provide
services like DNS or a wiki knowledge base, or otherwise do things that aren't
particularly time-critical in our environment. So it's all about your customer
and his needs. For maximum throughput, qcow2 on an ext4 soft RAID capable of
doing 100 MB/sec is very... 2010 spinning storage... and people who need more
than that, like database people, will be extremely dissatisfied.

Thus my suggestions of ways to improve performance via a custom disk offering,
for those cases where disk performance, and specifically write performance, is
a problem:

1. Switch to 'sparse' rather than 'thin' as the provisioning mechanism. This
greatly speeds writes, since only the filesystem's block allocation mechanisms
get invoked rather than qcow2's, and qcow2 then has only a single allocation
zone, which greatly speeds its own lookups.

2. Use a different underlying filesystem that has proven to have more
consistent performance. XFS isn't much faster than ext4 under most scenarios,
but it doesn't have the lengthy dropouts in performance that come with lots of
writes on ext4.

3. Possibly flip on async caching in the disk offering, if data integrity
isn't a problem. For example, for an Elasticsearch instance, the data is all
replicated across multiple nodes on multiple datastores anyhow, so if I lose
an Elasticsearch node's data, so what? I just destroy that instance and create
a new one to join to the cluster!

And of course there's always the option of simply avoiding qcow2 altogether
and providing the data via iSCSI or NFS directly to the instance, which may be
what you need to do for something like a database that has some very specific
performance and throughput requirements.
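
For what it's worth, a minimal sketch of the provisioning difference at the
qemu-img level. As far as I know, CloudStack's 'sparse' provisioning type maps
to qcow2 metadata preallocation on KVM, but verify against your version's
code/docs:

===
# 'Thin': no preallocation; qcow2 allocates clusters on first write
qemu-img create -f qcow2 thin.qcow2 100G

# 'Sparse': preallocate qcow2 metadata up front; the filesystem still
# allocates blocks lazily, but qcow2's cluster table is already laid out,
# which speeds up first writes
qemu-img create -f qcow2 -o preallocation=metadata sparse.qcow2 100G

# Both stay small on disk until written to
qemu-img info thin.qcow2
du -h thin.qcow2 sparse.qcow2
===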




Re: IPMI out-of-band management

2017-08-06 Thread Rohit Yadav
Victor,


Which proposed features do you want to know about, i.e. whether they are
covered by IPMI OOBM? As of current master and the 4.10/4.9 releases,
IPMI-based out-of-band management works with CloudStack.
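
For anyone searching the archives, a rough CloudMonkey sketch of wiring a
host up to IPMI OOBM and testing it (host UUID, address, and credentials
below are placeholders; parameter names are per the 4.9+ API, so please
verify against your version):

===
configure outofbandmanagement hostid=<host-uuid> driver=ipmitool \
  address=10.0.0.5 port=623 username=ADMIN password=secret
enable outofbandmanagementforhost hostid=<host-uuid>
issue outofbandmanagementpoweraction hostid=<host-uuid> action=STATUS
===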


- Rohit


From: victor 
Sent: Thursday, August 3, 2017 6:44:26 AM
To: users@cloudstack.apache.org; Rohit Yadav
Subject: Re: IPMI out-of-band management

Hello Guys,

I was also able to successfully configure IPMI fencing with CloudStack.
Also, can you let me know whether all the proposed features are
fully covered by IPMI out-of-band management?

Regards
Victor

On 08/03/2017 03:35 AM, Rohit Yadav wrote:
> Thanks Gabriel, Rodrigo -- good to know this is in use.
>
>
> Victor - yes, it works; there is an ipmisim [1] tool you can test the
> implementation against, as well as real h/w.
>
>
> [1] https://pypi.python.org/pypi/ipmisim
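>
> A quick local test could look like the following (the port and
> credentials are illustrative, not defaults I can vouch for — check the
> ipmisim README):
>
> ===
> pip install ipmisim       # install the simulator
> ipmisim &                 # start a simulated BMC in the background
> ipmitool -I lanplus -H 127.0.0.1 -p 9001 \
>   -U admin -P password chassis power status
> ===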
>
>
> - Rohit
>
> 
> From: Rodrigo Baldasso 
> Sent: Tuesday, August 1, 2017 2:38:28 PM
> To: users@cloudstack.apache.org
> Subject: Re: IPMI out-of-band management
>
> I'm using it here and it works fine with my Supermicro servers.
>
> - - - - - - - - - - - - - - - - - - -
>
> Rodrigo Baldasso - LHOST
>
> (51) 9 8419-9861
> - - - - - - - - - - - - - - - - - - -
> On 01/08/2017 09:01:20, victor  wrote:
> Hello Guys,
>
> Has anybody been able to successfully configure and test IPMI out-of-band
> management with CloudStack?
>
> Regards
>
> Victor
>
>
>


rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



RE: Instance with a larger disk size than Template

2017-08-06 Thread Imran Ahmed
Hi,
I created a small video tutorial for doing this manually (without cloud-init)
for non-LVM-based root disks.

Here is the link : https://www.youtube.com/watch?v=cjAHbjHWRlM

I am planning to do a similar one for LVM-based root disks as well.


Cheers,

Imran

-Original Message-
From: Mohd Zainal Abidin [mailto:zainal@gmail.com] 
Sent: Friday, August 04, 2017 3:29 AM
To: users@cloudstack.apache.org
Subject: Re: Instance with a larger disk size than Template

We had this issue a long time ago. We manually resized the root volume while
the VM was still running. After the resize and a reboot, the size showed
correctly.

On Aug 4, 2017 6:25 AM, "ilya"  wrote:

> Just a thought, as I do this very frequently.
>
> If you are using LVM on your ROOT partition, you don't need to power it
> on via a live CD.
>
> It can all be done online while the system is running.
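>
> For the common virtio case, a rough sketch of the online path (this
> assumes the root PV is on /dev/vda1 and CentOS-style VG/LV naming of
> centos/root with ext4 — adjust device names to your layout):
>
> ===
> growpart /dev/vda 1          # grow the partition in place (cloud-utils)
> pvresize /dev/vda1           # let LVM see the larger physical volume
> lvextend -l +100%FREE /dev/centos/root   # grow the root logical volume
> resize2fs /dev/centos/root   # grow ext4 online (use xfs_growfs / for XFS)
> ===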
>
>
>
>
> On 8/3/17 6:40 AM, Imran Ahmed wrote:
> > Hi Erik,
> >
> > Thanks for the suggestion. I tried this too and was successful up to
> > lvextending the logical volume. However, at the stage of running resize2fs
> > it produced errors like "Bad super block...", so I ended up installing
> > from an ISO and partitioning without LVM this time, so that I could use
> > this template to resize in the future.
> >
> > Cheers,
> >
> > Imran
> >
> > -Original Message-
> > From: Erik Weber [mailto:terbol...@gmail.com]
> > Sent: Thursday, August 03, 2017 3:56 PM
> > To: users@cloudstack.apache.org
> > Subject: Re: Instance with a larger disk size than Template
> >
> > A faster approach than those mentioned is to create a new partition on
> > the unused disk space, add it to the volume group, and then use
> > lvextend and resize the fs.
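> >
> > Roughly, as a sketch (assuming the guest disk is /dev/vda with free
> > space after the last partition and a VG named 'centos' — adjust names
> > for your layout):
> >
> > ===
> > fdisk /dev/vda               # create e.g. /dev/vda3, type 8e (Linux LVM)
> > partprobe /dev/vda           # re-read the partition table
> > pvcreate /dev/vda3           # initialise the new partition as a PV
> > vgextend centos /dev/vda3    # add it to the volume group
> > lvextend -r -l +100%FREE /dev/centos/root   # grow the LV and fs together
> > ===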
> >
> > On Thu, Aug 3, 2017 at 12:00 PM, Imran Ahmed  wrote:
> >> Hi All,
> >>
> >> I am creating an instance with a 300GB disk from a CentOS 7 template that
> >> has a 5GB disk (LVM-based).
> >> The issue is that the root LVM partition inside the new VM instance still
> >> shows 5GB.
> >>
> >> The device (/dev/vda), however, shows 300GB. The question is what is
> >> the best strategy to resize the root LVM partition so that I could use
> >> all 300GB.
> >>
> >> Kind regards,
> >>
> >> Imran
> >>
> >
>