Hi all,
One small comment since strongSwan didn't make it into 4.9: there is still a
very simple bug in enabling PFS for site-to-site VPNs. The code checks the
Dead Peer Detection (DPD) variable instead of the PFS variable when determining
whether or not to enable PFS for the site-to-site VPN.
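A minimal sketch of the described fix (all function and variable names here are hypothetical, not the actual CloudStack source): the PFS option should be gated on the PFS flag, not the DPD flag.

```python
# Hypothetical illustration of the bug and fix; not the real CloudStack code.
def build_ipsec_options(vpn):
    opts = {}
    # Buggy version gated PFS on the DPD flag, e.g.:
    #     opts["pfs"] = "yes" if vpn["dpd"] else "no"
    # Corrected: gate PFS on the PFS flag itself.
    opts["pfs"] = "yes" if vpn["pfs"] else "no"
    # DPD keeps its own, separate flag.
    opts["dpdaction"] = "restart" if vpn["dpd"] else "none"
    return opts
```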
12, 2016 1:48 AM
To: Sean Lair <sl...@ippathways.com>; dev@cloudstack.apache.org
Subject: RE: [VOTE] Apache Cloudstack 4.9.0 RC1
> On 11 July 2016 at 22:40, Sean Lair <sl...@ippathways.com> wrote:
>
>
> Hi all,
>
> One small comment since strongSwan didn't make it into 4.9.
It is open against the 4.9 branch.
We are running 4.9.2.0, looks like it affects all 4.9.x.x
We haven't tested against 4.10 (strongswan) yet. But it could be a problem and
will be worth testing. If strongswan starts before cloudstack adds the nics to
the VM it could have same issue.
> On
We just upgraded from 4.8.1.1 to 4.9.2.0. After upgrading we rebooted the
virtual routers, and noticed that our site-to-site VPNs and remote-access VPNs
would no longer connect. After troubleshooting, we noticed that Openswan
(ipsec.d) wasn't listening on the vRouter's IPs. Here is the
tion of configure.py, because
that section does not start ipsec if the public IP is not on the system yet...
That is my synopsis at least.
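One way to avoid that race, sketched below under the assumption that the decision "is the public IP on the system yet?" is made once and never revisited (helper names are made up, not CloudStack code): poll until the public IP is actually plumbed before starting ipsec, instead of checking once and silently skipping.

```python
import time

def wait_for_public_ip(public_ip, get_local_ips, retries=30, delay=1.0):
    """Poll until public_ip appears among the system's addresses.

    get_local_ips is a callable returning the current list of local IPs
    (e.g. parsed from `ip -o addr`). Returns True once the IP is present,
    False if it never shows up within the retry budget.
    """
    for attempt in range(retries):
        if public_ip in get_local_ips():
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```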
Thanks
Sean
-Original Message-
From: Sean Lair
Sent: Thursday, February 23, 2017 2:27 PM
To: dev@cloudstack.apache.org
Subject: VPN/IPSEC pro
alerting.
-Original Message-
From: Simon Weller [mailto:swel...@ena.com]
Sent: Monday, April 10, 2017 5:02 PM
To: dev@cloudstack.apache.org
Subject: RE: How are router checks scheduled?
Do you have 2 management servers?
Simon Weller/615-312-6068
-Original Message-----
From: Sea
w are router checks scheduled?
We've seen something very similar. By any chance, are you seeing any strange
cpu load issues that grow over time as well?
Our team has been chasing down an issue that appears to be related to s2s vpn
checks, where a race condition seems to occur that threads out t
The change to "/opt/cloud/bin/checkbatchs2svpn.sh" fixes the issue where not
all of the VPN checks are returned. I'll create an issue and PR
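The actual fix is in a shell script, but the idea of "collect every check result before returning" can be sketched in Python roughly like this (peer names and the check callable are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

def check_all_s2s_vpns(peers, check_one):
    # Run every site-to-site VPN check and wait for all of them to finish,
    # so slow or out-of-order completions cannot cause results to be dropped.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(zip(peers, pool.map(check_one, peers)))
```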
Sean
-Original Message-----
From: Sean Lair
Sent: Tuesday, April 11, 2017 2:33 PM
To: dev@cloudstack.apache.org
Subject: RE: How are rou
According to my management server logs, some of the periodic checks are getting
kicked off twice at the same time. The CheckRouterTask is kicked off every
30 seconds, but each time it runs, it runs twice in the same second... See
logs below for an example:
2017-04-10 21:48:12,879 DEBUG
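For context, the fix discussed later in this thread (PR #2041) addresses the jobs being scheduled and run twice on the management servers. A generic single-process guard against overlapping runs can be sketched as follows (illustrative only, not the CloudStack implementation; deduplicating across two management servers additionally needs a shared lock, e.g. a database lock):

```python
import threading

_check_router_lock = threading.Lock()

def run_check_router_task(do_check):
    # Non-blocking acquire: if another run is already in flight, skip this
    # invocation instead of doubling the work.
    if not _check_router_lock.acquire(blocking=False):
        return False
    try:
        do_check()
        return True
    finally:
        _check_router_lock.release()
```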
Here are three issues we ran into in 4.9.2.0. We have been running all of
these fixes for several months without issues. The code changes are all very
easy/small, but had a big impact for us.
I'd respectfully suggest they go into 4.9.3.0:
https://github.com/apache/cloudstack/pull/2041 (VR
Hi Rohit
I previously suggested these for 4.9.3.0:
https://github.com/apache/cloudstack/pull/2041 (VR related jobs scheduled and
run twice on mgmt servers)
https://github.com/apache/cloudstack/pull/2040 (Bug in monitoring of S2S VPNs -
also exists in 4.10)
: Re: Private Gateway SNAT Bug
Thanks Sean. Can you do something for us?
Can you open an issue at https://github.com/apache/cloudstack/issues/?
We decided not to use Jira anymore. Also, can you close the jira ticket?
On Tue, May 29, 2018 at 6:08 PM, Sean Lair wrote:
> Opened up Issue with more info:
Opened up Issue with more info:
https://issues.apache.org/jira/browse/CLOUDSTACK-10379
-Original Message-
From: Sean Lair
Sent: Tuesday, May 29, 2018 12:08 PM
To: dev@cloudstack.apache.org
Subject: Private Gateway SNAT Bug
I've found a bug in the Private Gateway functionality, when Source NAT is
enabled for the Private Gateway. When the SNAT is added to iptables, it has
the source CIDR of the private gateway subnet. Since no VMs live in that
private gateway subnet, the SNAT doesn't work. Below is an example:
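The example that followed is cut off in this archive; a hypothetical illustration of the rule shapes (all CIDRs and interface names below are made up):

```python
def snat_rule(source_cidr, out_iface, nat_ip):
    # Build the iptables SNAT rule as a string, for illustration only.
    return ("iptables -t nat -A POSTROUTING "
            f"-s {source_cidr} -o {out_iface} -j SNAT --to-source {nat_ip}")

# Buggy rule: matches the private gateway subnet, where no VMs live,
# so guest traffic never hits it.
buggy = snat_rule("10.200.0.0/24", "eth3", "10.200.0.2")
# Working rule: matches the guest network CIDR the VMs actually use.
fixed = snat_rule("192.168.100.0/24", "eth3", "10.200.0.2")
```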
Would someone mind testing a Restart VPC w/ Cleanup on a VPC that has a
private gateway configured? The test
"test_03_vpc_privategw_restart_vpc_cleanup" is failing due to the following
(according to logs). My test environment is not available right now so I can't
check myself. I
I use a wildcard cert on 4.9.2 and it's fine. We haven't gone to 4.10 yet to
test. We'll prob go straight to 4.11 when released.
We have also had the high-cpu on the mgmt servers in our 4.9.x deployments. It
is very frustrating, and it also happens every few days. Haven't been able to
track
Hi all,
We are testing VM HA and are having a problem with our system VMs (secondary
storage and console) not being started up on another host when a host fails.
Shouldn't the system VMs be VM HA-enabled? Currently they are just in an
"Alert" agent state, but never migrate. We are currently
/cloudstack/blob/e532b574ddb186a117da638fb6059356fe7c266c/scripts/vm/hypervisor/kvm/kvmheartbeat.sh#L161
we used to comment out this line, because we did have some issues with the
communication link, and this commented line saved our a$$ a few times :)
Cheers
On 20 February 2018 at 20:50, Sean Lair <sl...@ip
We've done a lot of work on VM HA (we are on 4.9.3) and have it working
reliably. We've also been able to stop the problem of VMs getting started on
two hosts during some HA events. Since this is 4.9.3, we do not use IPMI for
this functionality. We have not tested how the addition of IPMI in
We were in the same situation as Nux.
In our test environment we hit the issue with VMs not getting fenced and coming
up on two hosts because of VM HA. However, we updated some of the logic for
VM HA and turned on libvirtd's locking mechanism. Now we are working great w/o
IPMI. The locking
on that
> host are active and then attempts some corrective action.
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message
e here).
Thanks
On 16 February 2018 at 21:52, Sean Lair <sl...@ippathways.com> wrote:
> We were in the same situation as Nux.
>
> In our test environment we hit the issue with VMs not getting fenced and
> coming up on two hosts because of VM HA. However, we updated some
We have some Windows VMs we have VM HA enabled for. When a user does a
shutdown of the VM from within Windows, VM HA reports the following and powers
the VM back up. Is this expected behavior?
Log snippet:
2018-02-20 19:51:58,898 INFO [c.c.v.VirtualMachineManagerImpl]
Looks like it is still referenced here:
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.11/templates/_password.html
-Original Message-
From: Chiradeep Vittal [mailto:chirade...@gmail.com]
Sent: Tuesday, February 27, 2018 3:59 PM
To: dev
FYI Nux, I opened the following PR for the change we made in our environment to
get VM HA to work. I referenced your ticket!
https://github.com/apache/cloudstack/pull/2474
-Original Message-
From: Nux! [mailto:n...@li.nux.ro]
Sent: Monday, January 22, 2018 8:15 AM
To: dev
On 20 February 2018 at 20:50, Sean Lair <sl...@ippathways.com> wrote:
> Hi Andrija
>
> We are currently running XenServer in production. We are worki
Sorry, replied to wrong snapshot thread..
-Original Message-
From: Sean Lair
Sent: Tuesday, January 22, 2019 11:48 AM
To: dev
Cc: us...@cloudstack.apache.org
Subject: RE: CloudStack 4.11.2 Snapshot Revert fail
Luckily it was for a VM that is never touched in CloudStack. The snaps
snapshot (vm will be paused)
(2) then create a volume snapshot from the vm snapshot
-Wei
Sean Lair 于2019年1月22日周二 下午5:30写道:
> Hi all,
>
> We had some instances where VM disks are becoming corrupted when using
> KVM snapshots. We are running CloudStack 4.9.3 with KVM on CentOS 7.
Luckily it was for a VM that is never touched in CloudStack. The snaps were
scheduled ones. No, no changes to VM or template.
We are due to upgrade from 4.9.3 but we have not yet.
-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Tuesday, January 22, 2019
From: Sean Lair
Sent: Tuesday, January 22, 2019 10:30 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Snapshots on KVM corrupting disk images
Hi all,
We had some instances where VM disks are becoming corrupted when using KVM
snapshots. We are running CloudStack 4.9.3 with KVM on CentOS 7.
The first time was when someone mass-enabled scheduled snapshots on a large
number of VMs and secondary storage filled up. We had to restore
> possible memory allocation deadlock size 65552 in kmem_realloc (mode:0x250)
> Did you see any unusual messages in your log-file when the disaster
> happened?
>
> I hope, things will be well. Wish you good luck and all the best!
>
>
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, 22 January 2019 18:30, Sean Lair wrote:
>
> > Hi all,
> >
> > We had some instances
deadlock size 65552 in kmem_realloc
(mode:0x250) Did you see any unusual messages in your log-file when the
disaster happened?
I hope, things will be well. Wish you good luck and all the best!
‐‐‐ Original Message ‐‐‐
On Tuesday, 22 January 2019 18:30, Sean Lair wrote:
> Hi all,
>
After upgrading from 4.9.3 to 4.11.2, we no longer see hosts in the CloudStack
web-interface. Hitting the listHosts API directly also does not return any
results. It's just an empty list. When looking in the DB we do see the hosts
and there are rows where the version is 4.11.2.0.
The
Update on the issue. Thanks Richard for the hint about MariaDB needing an
update (and everyone else that responded). It's crazy, I did a manual select,
mimicking the host_view SQL, and also received zero rows. I modified the select
statement to remove the LEFT JOIN with last_annotation_view,
Opened Issue:
https://github.com/apache/cloudstack/issues/3826
We noticed that on mysql-connector-java version 8.0.19 (not sure about other
8.0.x versions) we have errors such as the following:
Caused by: java.lang.IllegalArgumentException: Can not set long field
Hi All, there is a discrepancy in our CloudStack documentation. The following
Upgrade section says to NOT check the Routing checkbox when uploading a new
SystemVM Template:
http://docs.cloudstack.apache.org/en/latest/upgrading/upgrade/upgrade-4.12.html
This page however says we SHOULD check
Hi all,
We are running 4.11.3 with a single zone, that zone is working without issue.
We are trying to add a second zone to the installation, and everything seems to
go well, except we are confused on how the SystemVM templates should be handled
for the new zone. The new zone has its own
> On 28-Mar-2020, at 4:08 AM, Sean Lair wrote:
>
> Hi all,
Are you using NFS?
Yea, we implemented locking because of that problem:
https://libvirt.org/locking-lockd.html
echo lock_manager = \"lockd\" >> /etc/libvirt/qemu.conf
-Original Message-
From: Andrija Panic
Sent: Wednesday, October 30, 2019 6:55 AM
To: dev
Cc: users
Subject: Re:
I would love to see OpenVPN as the client VPN. We consider the current Client
VPN unusable. We use OpenVPN with OPNsense firewalls and it has been
rock-solid.
-Original Message-
From: Rohit Yadav
Sent: Friday, June 11, 2021 12:40 PM
To: us...@cloudstack.apache.org;
Thanks for the reply guys. We'll start looking more into this!
Sean
-Original Message-
From: Rohit Yadav
Sent: Wednesday, March 24, 2021 7:28 AM
To: dev@cloudstack.apache.org
Cc: Sean Lair
Subject: [DKIM Fail] Re: Set Number of queues for Virtio NIC driver to vCPU
count?
Hi Sean
Hi all,
We are looking to improve the network performance of our KVM/QEMU VMs running
in CloudStack. One thing we noticed is that the Virtio NICs are not configured
to use multiple queues. A couple of years ago someone created a PR to increase
the Virtio SCSI queue count to match the number
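For context, multi-queue virtio-net is a libvirt domain XML setting: the `queues` attribute on the interface's `<driver>` element. A sketch of the element, generated in Python purely for illustration (the guest still has to enable the extra queues, e.g. with `ethtool -L`):

```python
def virtio_nic_driver_xml(vcpus):
    # Matching the queue count to the vCPU count is the commonly cited
    # guidance for multi-queue virtio-net.
    return f'<driver name="vhost" queues="{vcpus}"/>'
```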
Hi Rohit from our initial debugging, the issue may be a little more involved.
Maybe you could add some insight.
We added some debug logging to monitor the size of the activeCertMap and have
noticed it is almost always 0. When the CABackgroundTask runs, it never does
anything because the in
Names list.
3. Similar to #2, the CA background task also has an issue when KVM agents come
through a load-balancer
We'll fix #2 and #3 by having the KVM agents connect directly to the mgmt
servers.
Thanks
Sean
-Original Message-
From: Sean Lair
Sent: Monday, March 15, 2021 12:18 PM
We are seeing a strange problem in our ACS environments. We are running
CentOS 7 as our hypervisors. When we take a VM Snapshot and then later revert
to it, it works as long as we haven't stopped and started the VM. If we stop
the VM and start it again - even if it is still on the same host -
We have some confusion on which Template Type SystemVM templates should be set
to. The documentation seems to be inconsistent, could someone help clarify?
The following URL says to set "Routing" to NO when registering a new SystemVM
Template: