Re: Slow VirtualRouter/NAT (25MB/s)

2021-09-27 Thread Nathan McGarvey
Wei and Alex,
Thanks. That was it. I had overlooked that setting and wasn't using
the word "throttle/throttling" in my searching. Additionally, I had
misinterpreted the value of NULL in the service offerings: I thought
blank/NULL meant infinity, when in fact 0 means infinity. (NULL means
use the global setting parameter.)


Here are the default global settings, should someone find this thread
in the future and just want to change the default for everything instead
of creating new Network Offerings:

network.throttling.rate (for applying to the network offerings)
vm.network.throttling.rate (for applying to the service offerings)



Note that I had to fully stop, then start a guest VM, or recreate the
virtual router for the global settings to take effect.
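
E.g. with CloudMonkey it would look roughly like this (I haven't
re-checked the exact syntax, and 1000 Mb/s is just a placeholder value):

    update configuration name=network.throttling.rate value=1000
    update configuration name=vm.network.throttling.rate value=1000
    restart network id=<network-uuid> cleanup=true
    stop virtualmachine id=<vm-uuid>
    start virtualmachine id=<vm-uuid>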



Thanks,
-Nathan McGarvey


On 9/27/21 3:58 PM, Alex Mattioli wrote:
> Which is the same as the 25MB/s mentioned.
> The ACS VF can easily pass 3 Gbps of traffic, but you need to change the 
> network offering.
> 
> Regards
> Alex
> 
>  
> 
> 
> -Original Message-
> From: Wei ZHOU  
> Sent: 27 September 2021 20:16
> To: users 
> Subject: Re: Slow VirtualRouter/NAT (25MB/s)
> 
> Hi Nathan,
> 
> If you use the default network offering 'Offering for Isolated networks with 
> Source Nat service enabled', the Network Rate (Mb/s) is 200.
> 
> -Wei
> 
> On Mon, 27 Sept 2021 at 19:15, Nathan McGarvey 
> wrote:
> 
>> All,
>> Has anyone had issues with throughput on a NATed topology being 
>> limited to almost exactly 25MB/s? (ACS 4.15.0.0 on KVM RHEL/CentOS 
>> 8.X)
>>
>> E.g.
>> VM on hypervisor -> Virtual Router NAT -> public
>>
>>
>> I get 25MB/s when copying a file from /dev/shm/ to /dev/null over SCP 
>> on a 1G network. (E.g. scp /dev/shm/bigfile publichost:/dev/null)
>>
>> It isn't CPU pegged, and operates the same among VMs with different 
>> resources (RAM/CPU) available.
>>
>> The same copy from the underlying hypervisor is 112MB/s (which is 
>> fairly ideal/normal on an unloaded 1G network.)
>>
>> The same copy from the systemvm itself is also significantly slower 
>> (40MB/s or less.)
>>
>>
>> The compute offerings have no known bandwidth restrictions.
>>
>>
>> Just wondering if folks actually have achieved gigabit (or preferably 
>> more, to support 10G throughput) speeds from a NATed VM.
>>
>>
>> Thanks,
>> -Nathan McGarvey
>>


Slow VirtualRouter/NAT (25MB/s)

2021-09-27 Thread Nathan McGarvey
All,
Has anyone had issues with throughput on a NATed topology being
limited to almost exactly 25MB/s? (ACS 4.15.0.0 on KVM RHEL/CentOS 8.X)

E.g.
VM on hypervisor -> Virtual Router NAT -> public


I get 25MB/s when copying a file from /dev/shm/ to /dev/null over SCP on
a 1G network. (E.g. scp /dev/shm/bigfile publichost:/dev/null)

It isn't CPU pegged, and operates the same among VMs with different
resources (RAM/CPU) available.

The same copy from the underlying hypervisor is 112MB/s (which is fairly
ideal/normal on an unloaded 1G network.)

The same copy from the systemvm itself is also significantly slower
(40MB/s or less.)


The compute offerings have no known bandwidth restrictions.


Just wondering if folks actually have achieved gigabit (or preferably
more, to support 10G throughput) speeds from a NATed VM.


Thanks,
-Nathan McGarvey


Re: [DISCUSS] SystemVM template upgrade improvements

2021-09-03 Thread Nathan McGarvey
+1

This is also helpful for restricted-from-internet installations (e.g.
places with on-site installs and strong firewall/proxy/air-gap rules).
That is a capability that is increasingly hard to come by for cloud-based
underpinnings, but increasingly of interest to organizations that
like to have control of where their data resides (banks, medical
institutions, governments, etc.).


Should it be in the same packaging, or a separate package entirely?
That way the current packaging could remain as-is while offering the
option of obtaining the separately packaged systemVMs in package-manager
format. If you really wanted to, you could even break out KVM vs Xen
vs VMware into separate packages to help reduce size and increase
modularity. You would still be hooking into the turnkey method, since it
lends itself to an apt-get upgrade or yum upgrade, can update
components individually, and can encode that certain SystemVM versions
require certain CloudStack versions and vice versa.
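
E.g. (the package names below are purely hypothetical, just to
illustrate the split):

    yum install cloudstack-management cloudstack-systemvm-kvm
    # or, on Debian/Ubuntu:
    apt-get install cloudstack-management cloudstack-systemvm-kvm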



Thanks,
-Nathan McGarvey

On 9/2/21 9:29 AM, Rohit Yadav wrote:
> Hi Hean,
> 
> Yes, I think the old approach of registering the systemvm template prior to 
> upgrade, as well as the option to switch between systemvm templates, continues to 
> be supported. What this feature primarily aims to do is make CloudStack turnkey 
> operationally.
> 
> May I ask if anyone has any objections to the increased package size? Due to 
> the trade-off of including systemvm templates in the management package, the 
> size increased to about 1-1.5GB, which is the only thing I didn't like. 
> However, I think this can be optimised in future releases.
> 
> Regards.
> 
> From: Hean Seng 
> Sent: Thursday, September 2, 2021 7:34:32 AM
> To: users@cloudstack.apache.org 
> Cc: d...@cloudstack.apache.org 
> Subject: Re: [DISCUSS] SystemVM template upgrade improvements
> 
> This is a good idea. Or else, we should allow manual upload via the GUI, and
> mark it as a system template.
> 
> On Wed, Sep 1, 2021 at 9:08 PM Pearl d'Silva 
> wrote:
> 
>> I probably missed adding the PR link to the feature -
>> https://github.com/apache/cloudstack/pull/4329. Please do provide your
>> inputs.
>>
>>
>> Thanks,
>> Pearl
>>
>> 
>> From: Pearl d'Silva 
>> Sent: Wednesday, September 1, 2021 5:49 PM
>> To: d...@cloudstack.apache.org 
>> Subject: [DISCUSS] SystemVM template upgrade improvements
>>
>> Hi All,
>>
>> We have been working on a feature that simplifies SystemVM template
>> install and upgrades for CloudStack. Historically we've required users to
>> seed the template on secondary storage during fresh installation and
>> register the template before an upgrade - this really does not make
>> CloudStack turnkey, as we end up maintaining and managing them as a
>> separate component - for example, users can't simply do an apt-get upgrade
>> or yum upgrade to upgrade CloudStack.
>>
>> The feature works by automatically initiating registration of the SystemVM
>> templates during upgrades or when the first secondary storage is added to a
>> zone where the SystemVM template hasn't been seeded. This feature addresses
>> several operational pain points; for example, when the admin user forgets to
>> register the SystemVM template prior to an upgrade and faces the issue of
>> having to roll back the database midway during the upgrade process. With
>> this feature the upgrade process is seamless, such that the end users do
>> not need to worry about having to perform template registration, but rather
>> have the upgrade process take care of everything that is required.
>>
>> In order to facilitate this feature, the SystemVM templates have to be
>> bundled with the cloudstack-management rpm/deb package which causes the
>> total noredist cloudstack-management package size to increase to about
>> 1.6GB. We currently are packaging templates of only the three widely
>> supported hypervisors - KVM, XenServer/XCP-ng and VMWare.
>> (These templates are only packaged if the build is initiated with the
>> noredist flag.)
>>
>> We'd like to get your opinion on this idea.
>>
>> Thanks & Regards,
>> Pearl Dsilva
>>
>>
>>
>>
>>
>>
>>
> 
> --
> Regards,
> Hean Seng
> 
>  
> 
> 


Re: [DISCUSS] Rocky 8.4 and CloudStack

2021-06-28 Thread Nathan McGarvey
Rohit,
Agreed on the el8 rename for the repos. I'd recommend keeping a
symlink back to centos just for folks running automated scripts and such
downstream. Make sure the rsync daemon still works for both, too.
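
E.g. something as simple as a symlink on the download server (assuming
the directory gets renamed to el8):

    ln -s el8 centos8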

Do you know how difficult it would be to change the CI/CD build
processes to point to either multiple OSes or to change to another
OS for testing? E.g. don't actually switch off of CentOS 8 quite yet,
but be able to test alternatives before the end of the year.

Thanks,
-Nathan McGarvey

On 6/28/21 6:02 AM, Rohit Yadav wrote:
> Great thanks all for the discussion, so what we mostly agree on are:
> 
>   *   CentOS8, Rocky Linux 8 and other initiatives may all be binary 
> compatible
>   *   We can host all el8 repos which these distros may use
>   *   The community may help validate the CloudStack el8 pkgs among one or 
> more clear winner with time
> 
> As an immediate action, let us publish all "centos8" or "rocky8" package 
> repos under generic "el8" repos? For example, 
> http://download.cloudstack.org/testing/nightly/latest/ we can add symlink or 
> rename dirs as "el8", "el7".
> 
> 
> Regards.
> 
> 
> From: n...@li.nux.ro 
> Sent: Thursday, June 24, 2021 21:12
> To: d...@cloudstack.apache.org 
> Cc: Nathan McGarvey 
> Subject: Re: [DISCUSS] Rocky 8.4 and CloudStack
> 
> That's a very good suggestion, I'm sure we can sort out something.
> 
> Regards,
> Lucian
> 
> 
>  
> 
> On 2021-06-24 14:40, Nathan McGarvey wrote:
>> Nux,
>> Also agree regarding EL8.
>>
>> I wonder if it is possible to build on a RHEL "development" license
>> where builds and smoke tests and such can be done without licensing
>> cost.
>> (https://developers.redhat.com/articles/faqs-no-cost-red-hat-enterprise-linux,
>> https://developers.redhat.com/terms-and-conditions)
>>
>> I'm not a lawyer and the terms seem murky as to how an Open-Source
>> project like CloudStack would interact with those terms, even in a
>> non-production sense. Do any other ASF projects use RHEL for build/test
>> servers or anything like that?
>>
>>
>> Thanks,
>> -Nathan McGarvey
>>
>>
>>
>> On 6/24/21 8:17 AM, Sven Vogel wrote:
>>> @nux
>>>
>>> „Might be then worth going for supporting "EL8" and by that include
>>> any
>>> of Rocky, Alma, OtherClone etc.“
>>>
>>> Agree
>>>
>>> __
>>>
>>> Sven Vogel
>>> Senior Manager Research and Development - Cloud and Infrastructure
>>>
>>> EWERK DIGITAL GmbH
>>> Brühl 24, D-04109 Leipzig
>>> P +49 341 42649 - 99
>>> F +49 341 42649 - 98
>>> s.vo...@ewerk.com
>>> www.ewerk.com<http://www.ewerk.com>
>>>
>>> Geschäftsführer:
>>> Dr. Erik Wende, Hendrik Schubert, Tassilo Möschke
>>> Registergericht: Leipzig HRB 9065
>>>
>>> Support:
>>> +49 341 42649 555
>>>
>>> Zertifiziert nach:
>>> ISO/IEC 27001:2013
>>> DIN EN ISO 9001:2015
>>> DIN ISO/IEC 2-1:2018
>>>
>>> ISAE 3402 Typ II Assessed
>>>
>>> EWERK-Blog<https://blog.ewerk.com/> |
>>> LinkedIn<https://www.linkedin.com/company/ewerk-group> |
>>> Xing<https://www.xing.com/company/ewerk> |
>>> Twitter<https://twitter.com/EWERK_Group> |
>>> Facebook<https://de-de.facebook.com/EWERK.Group/>
>>>
>>>
>>> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>>>
>>> Disclaimer Privacy:
>>> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien)
>>> ist vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht
>>> der bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
>>> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
>>> informieren Sie in diesem Fall unverzüglich den Absender und löschen
>>> Sie die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem
>>> System. Vielen Dank.
>>>
>>> The contents of this e-mail (including any attachments) are
>>> confidential and may be legally privileged. If you are not the
>>> intended recipient of this e-mail, any disclosure, copying,
>>> distribution or use of its contents is strictly prohibited, and you
>>> should please notify the sender immediately and then delete it
>>> (including any attachments) from your system. Thank you.
>>>
>

Re: CloudStack and Ansible

2021-05-10 Thread Nathan McGarvey
Ivet,
Are you looking for people that use Ansible to setup and manage a
CloudStack install, (E.g. setup/install management servers and
hypervisors and such.) or people that use Ansible as a provisioning tool
and interact with the Cloudstack API, etc?

Thanks,
-Nathan McGarvey


On 5/10/21 9:52 AM, Ivet Petrova wrote:
> Hi everyone,
> 
> I would like to prepare a blog post on a CloudStack and Ansible use case. Are 
> there any users here of this combination who are willing to share insights 
> (how you use it, why you selected it, etc.) and help me with the post?
> 
> Kind regards,
> 
> 
>  
> 
> 


Re: [VOTE] Renaming default git branch name from 'master' to 'main' and replace offensive words as appropriate for inclusiveness

2021-04-30 Thread Nathan McGarvey
+1, -1, and +0:

   Overall idea: +1  (Agree with Rene regarding context being important,
too.)


   Some specific pull requests: -1 or 0:

   -1: How is this related? It seems to be a commit that shouldn't
have been a part of this pull request since it is a brand new file that
is unrelated:
https://github.com/apache/cloudstack-www/pull/83/commits/9545ce619b377326daae5b303ffe89b5ea90a288


+0 or -1: I can't reasonably review this:
https://github.com/apache/cloudstack-www/pull/83/commits/9ce732ceeb47bf6dee73073d892a51fbeea39f09
as it changed over 5,000 files, going back many years, in
now-dead/unmaintained code. This is a huge repo-bloat commit of doom.
(You're changing API docs for dead code on something that can't even be
manually reviewed). I'd suggest just adding an explanatory file for
unsupported releases instead of changing thousands of files that are a
decade old. Maybe even removing old API docs would be an option. Or just
change the latest X releases, and gracefully age off the old ones.
(Related: How much bigger does this make the git repo and how much
longer does it take to apply diffs when cloning?)
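
(One quick way to check that locally is something like:

    git count-objects -vH

on a fresh clone before and after the change.)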


Other questions/comments:

Is there an overarching ASF criterion for what words are
inappropriate for future development?

Should there be git hooks or similar to scan for such terms?

How about when upstream projects use an inappropriate term? E.g.
MySQL pre-8.0.23 uses "master" in its configs, variables, and
documentation, but uses "replication source" or "replica", etc. after that
point in time. (Ref:
https://dev.mysql.com/doc/refman/8.0/en/binlog-replication-configuration-overview.html)
Having a disjuncture between the implementation code and the upstream
project makes it really hard to cross-reference documentation. The
client/conf/db.properties.in file was changed to use db.cloud.backup, but
why not make that db.cloud.replica or something that lines up with MySQL's
documentation? Another example is network interfaces: the "slave" term in
Linux is different from the proposed "secondary"; a secondary interface
actually means an alias or a fully separate physical device.
Maybe "member device" or something similar is more correct.



Thanks,
-Nathan McGarvey



On 4/30/21 6:43 AM, Suresh Anaparti wrote:
> Hi All,
> 
> Following the discussion thread on renaming default git branch name and 
> inclusiveness [1], I would like to start a vote to gather consensus on the 
> following plan:
> 
> 1. Accept the following rename PRs (raised against 'master' branch) which 
> renames git default branch to 'main' and replaces some offensive words, and 
> Merge them post acceptance.
>   - cloudstack => PR: https://github.com/apache/cloudstack/pull/4922
>   - cloudstack-documentation => PR: 
> https://github.com/apache/cloudstack-documentation/pull/155
>   - cloudstack-www => PR: https://github.com/apache/cloudstack-www/pull/83
>   - cloudstack-cloudmonkey => PR: 
> https://github.com/apache/cloudstack-cloudmonkey/pull/76
>   - cloudstack-kubernetes-provider => PR: 
> https://github.com/apache/cloudstack-kubernetes-provider/pull/29
>   - cloudstack-ec2stack => PR: 
> https://github.com/apache/cloudstack-ec2stack/pull/2
>   - cloudstack-gcestack => PR: 
> https://github.com/apache/cloudstack-gcestack/pull/3
> 
> 2. Request ASF infra to disable pushes to 'master' branch.
> 
> 3. Rename 'master' branch to 'main' [2][3], and Request ASF infra (open INFRA 
> ticket) to make 'main' as the default branch [4], in GitHub repo settings for 
> all the CloudStack repos. This will also re-target the current PRs against 
> 'master' branch to 'main'.
> 
> 3a. The update on the central repo will be done as follows (only by a PMC or 
> Infra member with access)
>   - Clone the repo (git clone https://github.com/apache/cloudstack.git)
>   - Sync local 'master' with the cloudstack repo (cd cloudstack && git 
> checkout master && git fetch --all -p && git pull)
>   - Rename local 'master' branch to 'main' (git branch -m master main)
>   - Push renamed 'main' branch (git push -u origin main)
>   - Update Default Branch on GitHub [4]
>   - Delete 'master' branch (git push origin --delete master)
> 3b. After the central renaming has been done. New users can clone and 
> directly checkout 'main' branch. Existing users can start using 'main' 
> locally, using the below steps.
>   - Switch to master branch (git checkout master)
>   - Rename local 'master' branch to 'main' (git branch -m master main)
>   - Sync local 'main' with repo (git fetch)
>   - Remove the existing tracking connection with “origin/master” (git 
> branch --unset-upstream)
>   - Create a new tracking connect

Re: Missing Features in the new UI

2021-04-30 Thread Nathan McGarvey
+1 for the LDAP ones, especially.

Thanks,
-Nathan McGarvey


On 4/29/21 5:28 AM, David Jumani wrote:
> Thanks for adding them to the issue Nicolas.
> Inviting others to pitch in with the features they use which are missing in 
> the new UI so we can take a call on whether to implement them.
> 
> From: Nicolas Vazquez 
> Sent: Thursday, April 29, 2021 10:29 AM
> To: users 
> Subject: Re: Missing Features in the new UI
> 
> Thanks David.
> 
> I would like to include a few items in the list; the last one was not 
> available in the legacy UI but would be nice to have:
> 
>   *   Add LDAP account button
>   *   Link account to LDAP
>   *   Import VMs (Vmware)
> 
> Regards,
> 
> Nicolas Vazquez
> 
> 
> From: David Jumani 
> Sent: Tuesday, April 27, 2021 7:36 AM
> To: users 
> Subject: Missing Features in the new UI
> 
> Hi,
> 
> I was going through the new UI and noticed that a few features are not yet 
> implemented in the new UI which exist in the legacy UI and APIs exist in the 
> backend (not sure whether they're still functional). I've made a list of them 
> over at https://github.com/apache/cloudstack/issues/4937 and wanted to get 
> feedback on whether there's a need or sufficient users who still use them to 
> implement them in the new UI. The primary ones are:
> 
>   *   Global Server Load Balancing Support 
> http://docs.cloudstack.apache.org/en/latest/adminguide/networking/global_server_load_balancing.html
>   *   AutoScale without Netscaler 
> http://docs.cloudstack.apache.org/en/latest/adminguide/autoscale_without_netscaler.html
>   *   Override default traffic label for VMware with nexus / dvswitch when 
> adding a cluster
> 
> If anyone is still using them, please shout out so a call can be taken on whether 
> to implement them in the new UI or not.
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  
> 
> 


Re: [VOTE] New life to Terraform Provider CloudStack with Apache CloudStack project

2021-04-15 Thread Nathan McGarvey
+1

I've yet to find a viable alternative to Terraform that
allows flexible switching between cloud providers (or even co-using them)
without huge code rewrites. One of CloudStack's big selling points is that
it's relatively simple and stable to set up and maintain (not
over-abstracted, low cost of entry, can be installed without direct
internet access for private clouds, etc.). The downside is that, much
like every other cloud API, it requires a *lot* of custom code for
end users/developers to integrate against, so folks tend to migrate to whoever
has the fastest and lowest cost of adoption instead of the best setup and
maintenance story.

Many [citation needed] folks are using Terraform (brief Internet
research: IEEE, Whole Foods, Udemy, Uber, and many more)

As a potential alternative, if an AWS/Azure/GCP/whatever
compatibility layer or similar were maintained to the point that you
could just document using that provider's Terraform support, then this becomes
moot. (Though that is really just picking which abstraction layer to
maintain, so maybe not being tied to another company is good.)

I also keep running across people who misunderstand Terraform. It
doesn't [usually] compete with puppet/ansible/chef or with things like
nagios/bro/solarwinds/elastic:

1. Terraform is used to provision from nothing. It is an external
tool that interacts with the cloud APIs for instance provisioning,
volume management, networking, etc. (rough usage sketch after this list).
2. Ansible/puppet/chef to do stateful configuration management and
similar operations after provisioning (in most cases).
3. Elastic/nagios/bro/solarwinds/whatever for continuous monitoring
for things that aren't cloud-native and need stability because they
can't just be "re-spawned" on failure.
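
A minimal end-to-end run against a CloudStack provider would look roughly
like the following (treat the environment variable names as approximate;
I'm going from memory of the provider docs):

    export CLOUDSTACK_API_URL=https://cloud.example.com/client/api
    export CLOUDSTACK_API_KEY=<your-api-key>
    export CLOUDSTACK_SECRET_KEY=<your-secret-key>
    terraform init
    terraform plan
    terraform apply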


Thanks,
-Nathan McGarvey

P.s.: If voting +2 were allowed, I'd be a +3. :)


On 4/15/21 4:05 AM, Rohit Yadav wrote:
> Hi All,
> 
> Following the discussion thread on Terraform [1], I would like to start a 
> vote to gather consensus on the following actions:
> 
>   1.  Create a new "cloudstack-terraform-provider" repository based on Apache 
> Licence v2.0 using re-licensed codebase of the archived/former terraform 
> cloudstack provider repository: 
> https://github.com/hashicorp/terraform-provider-cloudstack (note: 
> re-licensing from MPL to AL will be done by Hashicorp)
>   2.  Request ASF infra to enable issues, PR, and wiki features on the 
> repository
>   3.  Work with the community towards any further maintenance, development, 
> and releases of the provider
>   4.  Publish official releases on the official registry [2] if/after Apache 
> CloudStack project gets a verified account (published by PMC members with 
> access to the registry, or following guidelines from ASF infra if they've any)
> 
> The vote will be open for 120 hours, until Wed 21 April 2021.
> For sanity in tallying the vote, can PMC members please be sure to indicate 
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> [1] https://markmail.org/message/iuggxin7kj6ri4hb
> [2] https://registry.terraform.io/browse/providers
> 
> 
> Regards.
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
>   
>  
> 
> 


Re: Accessing the new UI in 4.15

2021-03-08 Thread Nathan McGarvey
I'm on CentOS 8.3 (Yes, I know that is going to be EOL soon) with a
source-built RPM install and have the same issue. I built the 4.15.0.0
tarball via the provided packaging script with "-d centos8 -p oss" and
on service start, I get just that same almost-bare page with the legacy
UI deprecation notice.

Just adding a data point. May not be related, but I've been fighting
with it for a couple days now and can't seem to coax any useful logs or
anything out of it.

Thanks,
-Nathan McGarvey

On 3/8/21 11:00 PM, Joshua Schaeffer wrote:
> I'm on Ubuntu 20.04 and I did build it from source. Thanks for pointing that 
> out. Are there any docs on building/including the UI from source?
> 
> On 3/8/21 9:47 PM, Rohit Yadav wrote:
>> Hi Joshua,
>>
>> Where did you install the packages from, can you share the repository link? 
>> And if you're in CentOS or Ubuntu?
>>
>> In case you built it yourself from source, you need to build the UI too. For 
>> example, this repository includes the UI in the rpms and debs:
>> http://download.cloudstack.org/centos/7/4.15
>> http://packages.shapeblue.com/cloudstack/upstream/centos7/4.15/
>> http://packages.shapeblue.com/cloudstack/upstream/debian/4.15/
>>
>> Regards,
>> Rohit Yadav
>>
>> 
>> From: Joshua Schaeffer 
>> Sent: Tuesday, March 9, 2021 7:22:24 AM
>> To: users@cloudstack.apache.org 
>> Subject: Re: Accessing the new UI in 4.15
>>
>> No, I didn't send a screenshot before. Here is what I get: 
>> https://imgur.com/a/kpTnLoV
>>
>> The source code for the page is below:
>>
>> Apache CloudStack
>> The legacy UI has been deprecated in this version as notified in the previous release
>> (http://docs.cloudstack.apache.org/en/4.14.0.0/releasenotes/about.html#new-user-interface-depreciation-notice-of-existing-ui).
>> The legacy UI will be removed in the next release
>> (http://docs.cloudstack.apache.org/en/4.15.0.0/releasenotes/about.html#primate-ga-and-legacy-ui-deprecation-and-removal-notice).
>> To access the legacy UI click here.
>>
>> Here is the package installed on the controller:
>>
>> jschaeffer@bllcloudctl01:~$ dpkg -l | grep cloudstack
>> ii  cloudstack-common      4.15.0.0  all  A common package which contains files which are shared by several CloudStack packages
>> ii  cloudstack-management  4.15.0.0  all  CloudStack server library
>>
>> On 3/8/21 3:04 PM, Andrija Panic wrote:
>>> http://:8080/client should serve you with the new
>>> UI/Primate (in case you've attached a screenshot - it's not visible, ML
>>> strips any attachments - please post a link to the external image to see
>>> what is going on)
>>> (and yes, Primate is now part of ACS, not a separate install/package)
>>>
>>> Best,
>>>
>>> On Mon, 8 Mar 2021 at 21:47, Joshua Schaeffer 
>>> wrote:
>>>
>>>> I just installed ACS 4.15 and I'm trying to access the new UI at 
>>>> "http://:8080/client"
>>>> and all that comes back is the following page:
>>>>
>>>> The legacy UI has been deprecated in this version as notified in the
>>>> previous release <
>>>> http://docs.cloudstack.apache.org/en/4.14.0.0/releasenotes/about.html#new-user-interface-depreciation-notice-of-existing-ui>.
>>>> The legacy UI will be removed in the next release <
>>>> http://docs.cloudstack.apache.org/en/4.15.0.0/releasenotes/about.html#primate-ga-and-legacy-ui-deprecation-and-removal-notice
>>>>> .
>>>> To access the legacy UI click here <
>>>> http://bllcloudlb01.harmonywave.cloud/client/legacy>.
>>>>
>>>> How do I actually get to the new UI? I can still get to the legacy UI and
>>>> I've tried going to /client/primate but that page doesn't exist. Do you
>>>> still have to install primate separately in 4.15? I've searched the docs
>>>> [1] and it seems to indicate that the new UI should be readily available at
>>>> the /client URL. Also in the release notes [2] it mentions the following:
>>>>
>>>> The default URL :8080/client will serve the new UI and
>>>> :8080/client/legacy will serve the deprecated legacy UI.
>>>>
>>>> [1] https://docs.cloudstack.apache.org/en/latest/adminguide/ui.html
>>>> [2]
>>>> https://docs.cloudstack.apache.org/en/4.15.0.0/releasenotes/about.html#primate-ga-and-legacy-ui-deprecation-and-removal-notice
>>>>
>>>> --
>>>> Thanks,
>>>> Joshua Schaeffer
>>>>
>>>>
>> --
>> Thanks,
>> Joshua Schaeffer
>>
> 


Self-imposed Race-condition Installation Pitfall

2021-02-13 Thread Nathan McGarvey
All/any,
I just spent an unfortunate amount of time debugging a CloudStack
install (4.15) in what ended up being a self-caused race condition and
just wanted others to be aware in case they ran into the same issue:

   I set up a [very ugly] Ansible playbook to install the 4.15
management server on a clean CentOS 8 install (yes, I know that is EOL
at the end of this year).

   But I did something too fast and caused a race condition:

   I rebooted after the "cloudstack-setup-management" command completed.

   It turns out that I was interrupting the database upgrade scripts and
would reboot to a non-functional manager with weird and
non-deterministic errors when it would try to re-upgrade the database.
(Lots of duplicate column, foreign key constraint, etc. errors.) The
errors only depended on how far the migrations had gotten before the
reboot occurred.

   It was really hard to track down just because I assumed that once
"cloudstack-setup-management" was done, that implied that it was in a
stable state for service stops/restarts and reboots. Obviously I was wrong.

   Hopefully someone finds this useful and may save time/frustration by
being patient and watching the log file until all the database upgrade
steps are complete.
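
E.g. on a packaged install, something like:

    tail -f /var/log/cloudstack/management/management-server.log

and waiting until the schema-upgrade messages stop before restarting the
service or rebooting. (That's the default log location; adjust if your
install logs elsewhere.)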


Thanks,
-Nathan McGarvey