When using advanced networking, I believe there's an option to create a site-
to-site VPN tunnel.

 

Maybe you can establish a tunnel between the isolated network (via the VR) and
your on-site DC firewall/router.
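If your setup supports it, the site-to-site VPN side of this can be driven from
CloudMonkey roughly as below. This is only a sketch: the gateway IP, CIDR,
pre-shared key, and UUIDs are placeholders, not values from this thread, and
note that site-to-site VPN in ACS is normally configured against a VPC VR
rather than a plain isolated network.

```shell
# Sketch only -- the public IP, CIDR, PSK, and UUIDs below are placeholders.

# 1. Create the VPN gateway on the CloudStack side (attached to a VPC):
cmk create vpngateway vpcid=<vpc-uuid>

# 2. Describe the on-site DC firewall/router as a customer gateway
#    (203.0.113.10 stands in for its public IP; 10.10.40.0/24 is the
#    on-site subnet the guest needs to reach):
cmk create vpncustomergateway name=onprem-dc \
    gateway=203.0.113.10 cidrlist=10.10.40.0/24 \
    ipsecpsk=<pre-shared-key> \
    ikepolicy=aes128-sha1;modp1536 esppolicy=aes128-sha1

# 3. Connect the two sides:
cmk create vpnconnection s2svpngatewayid=<vpn-gw-uuid> \
    s2scustomergatewayid=<customer-gw-uuid>
```

The same can be done from the UI under Network > VPN; the on-site
firewall/router then needs a matching IPsec configuration on its end.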

 

But of course, the traffic would go over the public Internet, which might
hinder network performance depending on your setup.
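If you do go this route, it may be worth measuring throughput through the
tunnel before committing to NFS over it. A rough check with standard tools
(the NFS server IP is from the thread; the export path and mount point are
placeholders I made up):

```shell
# Run an iperf3 server on a host in the on-site DC first: iperf3 -s
# Then, from the guest VM, test throughput through the tunnel for 30 s:
iperf3 -c 10.10.40.250 -t 30

# If the numbers look acceptable, mount the export on the guest
# (/export/share and /mnt/nfs are placeholder paths):
sudo mount -t nfs 10.10.40.250:/export/share /mnt/nfs
```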

 

Never tried this myself, just my thought.

 

On 2024/04/27 01:34:20 Nixon Varghese K S wrote:
> Hi All,
>
> We're trying to determine the optimum way to get NFS storage onto an ACS
> guest VM. This test environment is set up for advanced networking on ACS
> (4.19.1).
>
> ACS Portal: 10.10.40.252
> NFS server: 10.10.40.250
> KVM host: 172.16.0.100 (two NICs: cloudbr0 for private traffic, cloudbr1
> for public)
>
> ACS Management Range: 172.16.0.10-172.16.0.50 (cloudbr0)
> ACS Public Range: Public IP RANGE (cloudbr1)
>
> I trunked the KVM private NIC so it can talk to both the ACS and NFS
> subnets, so through 172.16.0.0 I can communicate with the 10.10.40.0
> network.
>
> When I launch a VM with an isolated network of 10.1.1.5, it creates a VR
> with 3 NICs (eth0: 10.1.1.1, eth1: control, and eth2: public). I need to
> mount an NFS share on this guest VM. Checking the VR's routing table, I can
> see the default route points to the public NIC. Through that NIC I can't
> reach 10.10.40.250, since the traffic leaves the KVM host via cloudbr1.
>
> Trunking the KVM host's cloudbr1 NIC and letting 10.10.40.250 traffic route
> through the public network is not advisable. What would be the best course
> of action in this situation? I'm eager to hear your thoughts.
>
> With Regards,
> Nixon Varghese
>

 

Best Regards,

 

Hanis Irfan

Cloud Infrastructure Engineer | Nebula Systems Sdn Bhd

Email: hanis.ir...@nebula-sys.com

Website: https://www.nebula-sys.com

 
