On 11/11/2020 2:01 AM, Hean Seng wrote:
IPv6 does not have NAT; each VM is supposed to have an individual IPv6 address.
NAT66 does in fact exist, and the virtual routers used for VLANs could
in fact be configured with radvd to advertise an IETF RFC 4193 SLAAC prefix
to private VPC networks, then use NAT66.
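A minimal sketch of that setup on a Linux virtual router, assuming radvd and ip6tables are available; the ULA prefix and interface names below are placeholders, not taken from the thread:

```shell
# /etc/radvd.conf -- advertise an RFC 4193 ULA prefix on the guest-side interface:
#   interface eth1 {
#       AdvSendAdvert on;
#       prefix fd12:3456:789a::/64 {   # locally generated ULA prefix (placeholder)
#           AdvOnLink on;
#           AdvAutonomous on;          # guests SLAAC their own addresses
#       };
#   };

# NAT66 on the public-side interface (requires root and ip6table_nat support)
ip6tables -t nat -A POSTROUTING -s fd12:3456:789a::/64 -o eth0 -j MASQUERADE
```

NPT per RFC 6296 (stateless prefix translation) would be the more standards-friendly alternative to MASQUERADE here.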
On 8/13/2020 10:22 AM, Hean Seng wrote:
Hi
CloudStack 4.14, Advanced Network, with a guest network that has a public IP.
I created a compute offering with a 10 Mbps network rate, but the VM created is
still able to burst to 200 Mbps.
Does anyone know how to solve this?
The virtualization hypervisor is responsible for enforcing
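On KVM, one way to see whether the offering's rate actually reached the hypervisor is to inspect the libvirt interface tuning; a sketch, with the domain and vNIC names assumed:

```shell
# Show the bandwidth limits libvirt applied to the guest's vNIC
virsh domiftune i-2-34-VM vnet0

# Cap it at ~10 Mbps by hand (libvirt units are KiB/s: 10 Mbps ~= 1250 KiB/s)
virsh domiftune i-2-34-VM vnet0 --inbound 1250,1280,1280 --outbound 1250,1280,1280

# libvirt enforces this with tc on the host; verify the qdisc is in place
tc -s qdisc show dev vnet0
```

If `domiftune` shows zeros, the offering's rate never made it into the VM's interface definition, which would explain the 200 Mbps bursts.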
Correct, 4.11.3 template is used for 4.11.3, 4.12, and 4.13. 4.14 moves
to the 4.14.0 template.
There seems to be something odd happening key-wise sometimes with
upgrades from 4.11.3 to 4.13.1 or 4.14.0. I managed an upgrade from
4.11.3 to 4.13.1 that *almost* worked, but the secondary
I will note that we have several Windows 10 / Windows 2018 genre VM's on
Centos 7.8.2003 and are not seeing any guest freezes. But:
1) We are not using CPU host-passthrough.
2) We do have the QEMU Windows guest drivers installed in the VM's.
3) I am running Cloudstack 4.11.3. I tried
org/SpecialInterestGroup/Virtualization>
Regards.
From: Eric Lee Green
Sent: Saturday, August 8, 2020 08:44
To: users@cloudstack.apache.org
Subject: Has anybody gotten Cloudstack 4.13.1.0 or 4.14.0.0 working on Centos
7.7 or 7.8?
I've tried multiple times now and it's refusing to start the virtual
routers and thus refusing to start anything else. Or rather, it's
starting them, I see the VM appear in the process list (ps -ax | grep
kvm) and I see log messages in /var/log/messages saying it's started and
I see the log,
On 10/10/19 5:19 AM, Ioan Marginean wrote:
Hi users,
I installed CS 4.13 on 3 KVM hypervisors. Every host has 2 interfaces, eno1 and
eno2. I defined cloudbr0 on eno1, and the Management, Public and
Storage traffic go there. The guest traffic goes on eno2. All seemed perfect until I
started to
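For reference, the common two-bridge layout on CentOS 7 looks like this in ifcfg files; the addresses and the cloudbr1 name are illustrative, not taken from the post:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno1 -- enslave eno1 to cloudbr0:
#   DEVICE=eno1
#   TYPE=Ethernet
#   ONBOOT=yes
#   BRIDGE=cloudbr0

# /etc/sysconfig/network-scripts/ifcfg-cloudbr0 -- mgmt/public/storage bridge:
#   DEVICE=cloudbr0
#   TYPE=Bridge
#   ONBOOT=yes
#   BOOTPROTO=static
#   IPADDR=192.168.1.10      # placeholder management IP
#   NETMASK=255.255.255.0
#   GATEWAY=192.168.1.1

# eno2 gets its own bridge (e.g. cloudbr1) with no IP for guest traffic,
# and the zone's KVM traffic labels must match the bridge names.
brctl show    # quick check that both bridges exist with the right ports
```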
The question of which one
changed and how to work around it is still a question I'm trying to answer.
Andrija
On Fri, 24 May 2019 at 21:12, Eric Lee Green wrote:
On 5/24/19 10:16 AM, Andrija Panic wrote:
> Eric,
>
> your BIND
t the
"right" way to do this right now so I can retire my hack script.
On Fri, 24 May 2019 at 02:15, Eric Lee Green
wrote:
I had this working under 4.9. All I did was, on my main BIND9 servers,
point a forward zone at 'cloud..com' to the virtual router
associated with all VM's that were publicly available. I could then
resolve all foo.cloud..com names on my global network.
Somehow, though, this quit working after
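The forward-zone setup described above looks roughly like this in named.conf; the domain and virtual-router IP are placeholders, since the real domain is truncated in the archive:

```shell
# named.conf fragment on the main BIND9 servers:
#   zone "cloud.example.com" {
#       type forward;
#       forward only;
#       forwarders { 10.1.1.1; };   # guest-side address of the virtual router,
#   };                              # which runs dnsmasq and knows the VM names

# Quick check from any host that uses those BIND servers:
dig +short somevm.cloud.example.com
```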
t the same level of testing and QA.
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
From: "Eric Lee Green"
To: "users"
Sent: Wednesday, 22 May, 2019 15:33:15
Subject: Re: Upgrade to Cloudstack 4.11.2 fails *AGAIN*
Okay.
, Andrija Panic wrote:
Eric,
did you actually test this in production?
Andrija
On Wed, 22 May 2019 at 16:33, Eric Lee Green
wrote:
Okay. This makes sense.
And people wonder why Amazon decided to make their own Linux rather than
use Centos and why Ubuntu has seized huge market share from Red Hat
when I have some sleep since it is now midnight here in
the SF Bay area.
On Wed, 22 May 2019, 6:10 am Eric Lee Green,
wrote:
Thanks for the response, sorry if I sound frustrated, but this is
supposed to be a simple easy process and it's been horrible all the way
through. 4.11.1 failed so I had to downgrade
from the management server to the agent to feed to the instance).
I would appreciate if one of the list could assist to check and change
if necessary.
Regards
René
On 5/22/19 2:51 AM, Eric Lee Green wrote:
You may remember me as the person who had to roll back to Cloudstack
4.9.x because Cloudstack 4.1
register the
4.11.2 systemvmtemplate before upgrading, etc.
Regards.
Regards,
Rohit Yadav
From: Eric Lee Green
Sent: Wednesday, May 22, 2019 6:21:16 AM
To: users@cloudstack.apache.org
Subject: Upgrade to Cloudstack 4.11.2 fails *AGAIN*
You may remember me as the person who had to roll back to Cloudstack
4.9.x because Cloudstack 4.11.1 wouldn't start any virtual machines once
I upgraded to it, claiming that there were inadequate resources even
though I had over 150 gigabytes of memory free in my cluster and oodles
of CPU free
On 2/28/19 7:19 AM, Fariborz Navidan wrote:
Hello,
It seems CloudStack does not configure system VMs correctly. Even if I
delete them and CloudStack recreates them, they cannot reach the internet;
however, they can be reached from outside. When I check, I see no gateway is
set on them. If I set it manually,
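As a temporary check, you can get into a system VM from its KVM host and add the route by hand; the link-local address is per-VM, and the key path below is the usual CloudStack location on KVM hosts:

```shell
# From the hypervisor hosting the system VM (port 3922 is CloudStack's
# management SSH port on the link-local interface)
ssh -p 3922 -i /root/.ssh/id_rsa.cloud root@169.254.x.x   # per-VM address

# Inside the system VM: an empty default route explains "no internet"
ip route show
ip route add default via 192.168.1.1    # placeholder public gateway
```

That only confirms the diagnosis; the real fix is whatever is keeping CloudStack from writing the gateway into the VM's boot args.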
On 2/27/19 2:58 PM, Fariborz Navidan wrote:
Hello All,
I have used the qemu-img tool to convert a VMDK to a qcow2 image. I want to add
the image as a template to ACS so I can deploy from it and get the VM
migrated to ACS. I have installed httpd on the management server and I am able
to start the file at
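For reference, the conversion plus a sanity check before registering; the file names are placeholders:

```shell
# Convert the VMware disk to qcow2 (-p shows progress)
qemu-img convert -p -f vmdk -O qcow2 disk.vmdk disk.qcow2

# Verify the result reads back cleanly before registering it as a template
qemu-img info disk.qcow2
qemu-img check disk.qcow2

# Then register http://<management-server>/disk.qcow2 in ACS as a
# KVM / QCOW2 template.
```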
On 11/19/18 3:47 PM, Yiping Zhang wrote:
Eric:
What's your value for global setting cpu.overprovisioning.factor?
I have this value set to 3.0. Right now, on one of my servers with 32 cores @ 2.0
GHz (with HT enabled), I can allocate a total of 79 vCPUs and 139 GHz to 26 VM
instances. That's
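A quick sanity check of those numbers, treating the 32 physical cores × 2.0 GHz as the raw capacity CloudStack multiplies by the factor (whether HT threads count extra depends on the hypervisor's reporting, so this is an assumption):

```shell
# Effective CPU capacity = cores * clock(MHz) * cpu.overprovisioning.factor
cores=32; mhz=2000; factor=3
capacity=$(( cores * mhz * factor ))       # 192000 MHz = 192 GHz
allocated=139000                           # the 139 GHz currently allocated
echo "capacity:  ${capacity} MHz"
echo "allocated: ${allocated} MHz"
echo "headroom:  $(( capacity - allocated )) MHz"
```

So at factor 3.0, the 139 GHz already allocated still leaves roughly 53 GHz of nominal headroom on that host.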
On 11/19/18 03:56, Andrija Panic wrote:
Hi Ugo,
Why would you want to do this, just curious ?
I believe it's not possible, but anyway (at least with KVM, and probably the same
for other hypervisors) it doesn't even make sense, since when
deploying a VM, ACS queries the host's free/unused MHz
Yeah, had all sorts of problems with custom network offerings after
upgrading to 4.11.1, along with problems with launching virtual machines
(every attempt to launch resulted in a "not enough resources" error),
couldn't get virtual routers to come up for custom networks, etc. I
didn't have
instances that the
general public isn't supposed to access.
My question is whether that architecture is recommended, and how safe it is to
put a “real” public IP on System VMs and VRs directly.
Thanks in advance,
Netlynker
On Thu, 27 Sep 2018 at 8:58 AM, Eric Lee Green
wrote:
On 9/25/18
In particular, Ceph needs a *lot* of spindles / CPU / network interfaces
to run with reasonable performance. I tried just a 3-system 6-spindle
Ceph implementation, and was getting streaming write throughput of 20
megabytes per second. Which, uhm, isn't good, in case you're wondering.
As in,
If you set the offering to allow HA and create the instances as HA
instances, they will autostart once the management server figures out
they're really dead (either because it used STONITH to kill the
unreachable node, or because that node became reachable again). When I
had to reboot my
This is the type of discussion that I wanted to open - the argument that I see
for earlier dropping of v6 is that - Between May 2018 and q2 2020 RHEL/CentOS
6.x will only receive security and mission critical updates, meanwhile packages
on which we depend or may want to utilise in the future
On 11/10/2017 11:01 AM, Ron Wheeler wrote:
I have been using CentOS for a long time but they seem to have screwed
up the recent updates to CentOS 7 to the point where after updating to
the latest version (originally build 514 and now 683), the system no
longer boots. I have to boot to build
On 08/17/2017 11:17 AM, Asanka Gunasekara wrote:
Hi Dag, the IP 172.17.101.1 that it is looking for is my gateway IP. Below
are the URLs for the requested query output files.
SELECT * FROM cloud.image_store;
Interesting. The only row with a NULL 'removed' column looks good, so
it looks