Hi Miguel,

Your new PVE host 111.222.333.74 needs to have 111.222.333.254 as its default gateway.

The VMs need 111.222.333.74 as their default gateway. This is what OVH/SoYouStart requires.
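For reference, a minimal sketch of what the matching host-side /etc/network/interfaces could look like (the bridge name vmbr0, the physical NIC eno1, and the /24 netmask are assumptions; adjust to your hardware and OVH's docs):

```
auto vmbr0
iface vmbr0 inet static
    address 111.222.333.74
    netmask 255.255.255.0
    gateway 111.222.333.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```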

Also, if you assign public failover IP addresses to your VMs, then you need to generate a MAC address for each of them in the SoYouStart management console and assign that MAC address to the public interface of the VM.
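For a QEMU VM that OVH-generated MAC ends up on the VM's network device; a sketch (the VMID 100, the bridge vmbr0, and the MAC value 02:00:00:AA:BB:CC are placeholders):

```
# /etc/pve/qemu-server/100.conf (fragment)
net0: virtio=02:00:00:AA:BB:CC,bridge=vmbr0
```

Equivalently via the CLI: qm set 100 --net0 virtio=02:00:00:AA:BB:CC,bridge=vmbr0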

So suppose you have 16 failover IP addresses in the range 1.2.3.0/28.
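As a quick sanity check on the subnet math (not part of the setup, just illustrating that 16 addresses starting at 1.2.3.0 form the network 1.2.3.0/28), for example with Python's ipaddress module:

```python
import ipaddress

# The /28 that contains 1.2.3.1; strict=False masks off the host bits.
net = ipaddress.ip_network("1.2.3.1/28", strict=False)

print(net)                # 1.2.3.0/28
print(net.num_addresses)  # 16
print(net[1])             # 1.2.3.1
```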

Then the /etc/network/interfaces of one such VM should be:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 1.2.3.1
    netmask 255.255.255.255
    pre-up /etc/network/firewall $IFACE up
    post-up ip route add 111.222.333.74 dev $IFACE
    post-up ip route add 1.2.3.0/28 dev $IFACE
    post-up ip route add default via 111.222.333.74 dev $IFACE
    post-down ip route del default via 111.222.333.74 dev $IFACE
    post-down ip route del 1.2.3.0/28 dev $IFACE
    post-down ip route del 111.222.333.74 dev $IFACE
    post-down /etc/network/firewall $IFACE down

Best regards,

Stephan


On 16-12-18 20:23, Miguel González wrote:
Sorry Stephan, I've been working with this setup for about 2 years.

I am just wondering whether, for a PVE host IP address like
111.222.333.74 (the last .74 is real), my VMs' gateway should be
111.222.333.254 or 111.222.333.137.

That's what I am asking.

Right now my legacy server has exactly the same IP address as the new
one, except that the last octet is .220 instead of .74. All the VMs
running perfectly on that legacy server also have .254 configured as
their gateway. That's what confuses me.

So summarizing:

legacy dedicated server IP: 111.222.333.220

--> All VMs have 111.222.333.254 as gateway

new dedicated server IP: 111.222.333.74

--> Configuring 111.222.333.254 in the VM makes the public IP address
of the new server and the gateway reachable, but I can't ping the
outside world.

I hope this clarifies the situation :)

Miguel


On 12/16/18 8:03 PM, Stephan Leemburg wrote:
Hi Miguel,

Yes, on the PVE host the OVH gateway is the .254

But your containers and VMs on the PVE host must use the IP address
of the PVE host as their default gateway.

Also, you need to assign MAC addresses from the OVH control panel if
you are using the public failover IP addresses.

Kind regards,
Stephan

On 16-12-18 18:30, Miguel González wrote:
Hi Stephan,

    I use public failover IP addresses. I am asking about your gateway
configuration; you use:

    91.121.183.137

    and as far as I know, the gateway must be the public IP address of
the host ending with .254. That's what OVH says in their docs.

    Thanks!

On 12/15/18 2:43 PM, Stephan Leemburg wrote:
OVH requires you to route traffic from VMs via the IP address of your
hardware.

So 137 is the IP address of the hardware.

Do you use any public IP addresses on your SoYouStart system?

Or just private range and then send them out via NAT?

Kind regards,
Stephan Leemburg
IT Functions

e: sleemb...@it-functions.nl
p: +31 (0)71 889 23 33
m: +31(0)6 83 22 30 69
kvk: 27313647

On 15-12-18 14:39, Miguel González wrote:
There must be something wrong with the configuration, since I have
tested another server and it seems to be fine.

Why do you use 137? In the Proxmox docs they say the gateway is
xxx.xxx.xxx.254

Thanks!


On 12/15/18 2:16 PM, Stephan Leemburg wrote:
Did you set up routing correctly within the containers/VMs?
OVH/SoYouStart has awkward network routing requirements.

I have 2 servers at SoYouStart and they do fine with the correct
network configuration.

Below is an example from one of my containers.

Also, it is a good idea to set up a firewall and put your containers
on vmbr devices connected to the LAN side of your firewall.

Then on the lan side you have 'normal' network configurations.

The PVE host has IP address 91.121.183.137

I have a subnet 54.37.62.224/28 on which containers and VMs live.

# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
       address 54.37.62.232
       netmask 255.255.255.255
       pre-up /etc/network/firewall $IFACE up
       post-up ip route add 91.121.183.137 dev $IFACE
       post-up ip route add 54.37.62.224/28 dev $IFACE
       post-up ip route add default via 91.121.183.137 dev $IFACE
       post-down ip route del default via 91.121.183.137 dev $IFACE
       post-down ip route del 54.37.62.224/28 dev $IFACE
       post-down ip route del 91.121.183.137 dev $IFACE
       post-down /etc/network/firewall $IFACE down


Kind regards,
Stephan Leemburg

On 15-12-18 13:02, Miguel González wrote:
Hi,

       I am migrating some VMs from a SoYouStart (OVH) Proxmox 5.1
server to a brand new Proxmox 5.3 server (again SoYouStart).

       I have followed the instructions from OVH and Proxmox, and I can
ping from the VM to the host and the gateway, and from the host to the
VM. But I can't ping the DNS server or anything outside the host
machine (e.g. the legacy Proxmox host).

      Some people suggest enabling IP forwarding, but I don't have it
enabled on the legacy server. I enabled it anyway (echo 1 >
/proc/sys/net/ipv4/ip_forward) and nothing changed.

      iptables seems to be turned off on both the host and the VM:

       iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

So I'm out of ideas here.

Any suggestion?

Miguel



_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

