Re: [one-users] Problem with network bridge from VMs to physical network.

2014-08-15 Thread Diego M .
Hi all!
I just finished writing the post on my blog; it explains how I solved my issue with
the network cards and the traffic:
http://nodonogard.blogspot.com/2014/08/opennebula-in-server-with-two-network.html



The first problem was that the ARP tables on the clients reaching the server
were being built incorrectly, caused by the ARP flux issue; the second problem
was that I was routing the VMs' traffic incorrectly.
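For readers hitting the same ARP flux symptom (two interfaces holding addresses on the same subnet, here eth0 and Vbr0 both on 192.168.7.0/24), the usual Linux mitigation is the arp_ignore/arp_announce sysctls. This is a sketch of the standard settings, not taken from the blog post:

```shell
# Reply to ARP requests only if the target IP is configured
# on the interface the request arrived on
sysctl -w net.ipv4.conf.all.arp_ignore=1

# When sending ARP, prefer a source address that belongs to the
# outgoing interface's own subnet
sysctl -w net.ipv4.conf.all.arp_announce=2
```

To make the settings persistent, the same keys would go into /etc/sysctl.conf.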



Best regards,


Diego Marciano.

From: jme...@opennebula.org
Date: Wed, 13 Aug 2014 10:10:44 +0200
Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.
To: thedragonsreb...@hotmail.com
CC: users@lists.opennebula.org

Great news!
I was a bit at a loss :)
Looking forward to reading the answer

On Wed, Aug 13, 2014 at 1:29 AM, Diego M.  wrote:





Hi all,
Thanks for taking the time to read my messages. I found the source of my
issues and I'm writing an entry on my blog to summarize the issue and its
resolution so it can easily be found by other people with the same issue.



I will share the link to the post here as soon as I finish with it.

Best regards!
Diego Marciano
From: thedragonsreb...@hotmail.com


To: jme...@opennebula.org
Date: Fri, 8 Aug 2014 09:11:03 -0300
CC: users@lists.opennebula.org


Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.




Hi Jaime, Thanks for your reply!
Still not, I'm trying to figure it out right now. The ip route output is the
following:

root@Host1:~# ip route show
default via 192.168.7.254 dev eth0
192.168.7.0/24 dev eth0  proto kernel  scope link  src 192.168.7.1
192.168.7.0/24 dev Vbr0  proto kernel  scope link  src 192.168.7.2
192.168.254.0/24 dev Vbr0  proto kernel  scope link  src 192.168.254.254
Term1 routes:
C:\Users\User1>route PRINT
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface
          0.0.0.0          0.0.0.0    192.168.7.254    192.168.7.50
     10.142.168.0    255.255.255.0          On-link     192.168.7.50
===========================================================================


192.168.7.254 routes:
admin@Gateway:/tmp/home/root# ip route show
130.255.155.1 dev eth0  scope link
192.168.7.0/24 dev br0  proto kernel  scope link  src 192.168.7.254
130.255.155.0/24 dev eth0  proto kernel  scope link  src 130.255.155.33
192.168.254.0/24 via 192.168.7.2 dev br0  metric 1
127.0.0.0/8 dev lo  scope link
default via 130.255.155.1 dev eth0
I'm thinking that this issue could be caused by the fact that I'm routing the
traffic to the virtual machines from 192.168.7.254 (the gateway of my network)
to Host1 through 192.168.7.2, while the traffic coming from Host1 actually
goes out through 192.168.7.1. Could that be causing my problem?
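For anyone hitting the same symptom, a quick way to confirm this kind of asymmetric routing (a sketch; interface names and the 192.168.7.50 client address are taken from the route tables above) is:

```shell
# On Host1: show which source address and interface the kernel
# would pick to reach the Term1 client
ip route get 192.168.7.50

# While pinging a client from a VM, watch ICMP on both interfaces
# to see whether replies return on a different interface than the
# one the requests left on
tcpdump -ni eth0 icmp
tcpdump -ni Vbr0 icmp
```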


From: jme...@opennebula.org
Date: Fri, 8 Aug 2014 11:33:01 +0200
Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.


To: thedragonsreb...@hotmail.com
CC: users@lists.opennebula.org

Hi,


did you manage to figure this out?
otherwise, can you send us the output of "ip route" in the VM, Host 1 and Term 
1?
cheers,


Jaime

On Wed, Jul 30, 2014 at 6:30 PM, Diego M.  wrote:





Hi all,
I'm trying to implement OpenNebula in my personal lab, as a colleague and I
have some projects where disposable VMs are handy, and we are also taking the
opportunity to learn about OpenNebula to keep up to date :)




I would like to ask a question regarding networking, because I'm pretty sure
I'm missing something in the configuration but I cannot work out what.
We have the following infrastructure:



  
  And the problem is that from the clients on the 192.168.7.0/24 subnet I can
ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
192.168.7.1, 192.168.7.2, and 192.168.7.254 on the 192.168.7.0/24 subnet;
all the other clients are for some reason not reachable.





I'm surely missing something somewhere, but I cannot work out what. I had
already enabled IPv4 forwarding on Host1 for all interfaces, and the
following are the contents of the /etc/network/interfaces file:



# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.7.1
netmask 255.255.255.0
gateway 192.168.7.254

auto Vbr0
iface Vbr0 inet static
address 192.168.7.2
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255
gateway 192.168.7.254
bridge_ports eth1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_maxwait 5
bridge_stp off

auto Vbr0:1
iface Vbr0:1 inet static
address 192.168.254.254
netmask 255.255.255.0
gateway 192.168.7.2
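For reference, enabling IPv4 forwarding for all interfaces, as mentioned above, is typically done like this on a Debian-family host (a sketch of the standard procedure, not quoted from the thread):

```shell
# Enable forwarding immediately (lost on reboot)
sysctl -w net.ipv4.ip_forward=1

# Make it persistent across reboots
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
```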

And this is the vnet template I'm u

Re: [one-users] Problem with network bridge from VMs to physical network.

2014-08-13 Thread Jaime Melis
Great news!

I was a bit at a loss :)

Looking forward to reading the answer


On Wed, Aug 13, 2014 at 1:29 AM, Diego M. 
wrote:

> Hi all,
> Thanks for taking the time to read my messages. I found the source of my
> issues and I'm writing an entry on my blog to summarize the issue and its
> resolution so it can easily be found by other people with the same issue.
>
> I will share the link to the post here as soon as I finish with it.
>
> Best regards!
> Diego Marciano
> --
> From: thedragonsreb...@hotmail.com
> To: jme...@opennebula.org
> Date: Fri, 8 Aug 2014 09:11:03 -0300
> CC: users@lists.opennebula.org
>
> Subject: Re: [one-users] Problem with network bridge from VMs to physical
> network.
>
> Hi Jaime,
> Thanks for your reply!
>
> Still not, I'm trying to figure it out right now. The ip route output is
> the following:
> root@Host1:~# ip route show
> default via 192.168.7.254 dev eth0
> 192.168.7.0/24 dev eth0  proto kernel  scope link  src 192.168.7.1
> 192.168.7.0/24 dev Vbr0  proto kernel  scope link  src 192.168.7.2
> 192.168.254.0/24 dev Vbr0  proto kernel  scope link  src 192.168.254.254
>
> Term1 routes:
> C:\Users\User1>route PRINT
> IPv4 Route Table
> ===
> Active Routes:
> Network DestinationNetmask  Gateway   Interface
>   0.0.0.0  0.0.0.0 192.168.7.254   192.168.7.50
>  10.142.168.0255.255.255.0 On-link 192.168.7.50
> ===
>
> 192.168.7.254 routes:
> admin@Gateway:/tmp/home/root# ip route show
> 130.255.155.1 dev eth0  scope link
> 192.168.7.0/24 dev br0  proto kernel  scope link  src 192.168.7.254
> 130.255.155.0/24 dev eth0  proto kernel  scope link  src 130.255.155.33
> 192.168.254.0/24 via 192.168.7.2 dev br0  metric 1
> 127.0.0.0/8 dev lo  scope link
> default via 130.255.155.1 dev eth0
>
> I'm thinking that this issue could be caused by the fact that I'm routing the
> traffic to the virtual machines from 192.168.7.254 (the gateway of my network)
> to Host1 through 192.168.7.2, while the traffic coming from Host1 actually
> goes out through 192.168.7.1. Could that be causing my problem?
>
> --
> From: jme...@opennebula.org
> Date: Fri, 8 Aug 2014 11:33:01 +0200
> Subject: Re: [one-users] Problem with network bridge from VMs to physical
> network.
> To: thedragonsreb...@hotmail.com
> CC: users@lists.opennebula.org
>
> Hi,
>
> did you manage to figure this out?
>
> otherwise, can you send us the output of "ip route" in the VM, Host 1 and
> Term 1?
>
> cheers,
> Jaime
>
>
> On Wed, Jul 30, 2014 at 6:30 PM, Diego M. 
> wrote:
>
> Hi all,
> I'm trying to implement OpenNebula in my personal lab, as a colleague and I
> have some projects where disposable VMs are handy, and we are also taking the
> opportunity to learn about OpenNebula to keep up to date :)
>
> I would like to ask a question regarding networking, because I'm pretty sure
> I'm missing something in the configuration but I cannot work out what.
>
> We have the following infrastructure:
>
>
>
>   And the problem is that from the clients on the 192.168.7.0/24 subnet I can
> ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
> 192.168.7.1, 192.168.7.2, and 192.168.7.254 on the 192.168.7.0/24 subnet;
> all the other clients are for some reason not reachable.
>
>
> I'm surely missing something somewhere, but I cannot work out what. I had
> already enabled IPv4 forwarding on Host1 for all interfaces, and the
> following are the contents of the /etc/network/interfaces file:
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> allow-hotplug eth0
> iface eth0 inet static
> address 192.168.7.1
> netmask 255.255.255.0
> gateway 192.168.7.254
>
> auto Vbr0
> iface Vbr0 inet static
> address 192.168.7.2
> netmask 255.255.255.0
> network 192.168.7.0
> broadcast 192.168.7.255
> gateway 192.168.7.254
> bridge_ports eth1
> bridge_fd 9
> bridge_hello 2
> bridge_maxage 12
> bridge_maxwait 5
> bridge_stp off
>
> auto Vbr0:1
> iface Vbr0:1 inet static
> address 192.168.254.254
> netmask 255.255.255.0
> gateway 192.168.7.2
>
>
>
> And this is the vnet template I'm using for the VMs:
>
> onevnet 

Re: [one-users] Problem with network bridge from VMs to physical network.

2014-08-12 Thread Diego M .
Hi all,
Thanks for taking the time to read my messages. I found the source of my
issues and I'm writing an entry on my blog to summarize the issue and its
resolution so it can easily be found by other people with the same issue.

I will share the link to the post here as soon as I finish with it.

Best regards!
Diego Marciano
From: thedragonsreb...@hotmail.com
To: jme...@opennebula.org
Date: Fri, 8 Aug 2014 09:11:03 -0300
CC: users@lists.opennebula.org
Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.




Hi Jaime,
Thanks for your reply!

Still not, I'm trying to figure it out right now. The ip route output is the
following:

root@Host1:~# ip route show
default via 192.168.7.254 dev eth0
192.168.7.0/24 dev eth0  proto kernel  scope link  src 192.168.7.1
192.168.7.0/24 dev Vbr0  proto kernel  scope link  src 192.168.7.2
192.168.254.0/24 dev Vbr0  proto kernel  scope link  src 192.168.254.254

Term1 routes:
C:\Users\User1>route PRINT
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface
          0.0.0.0          0.0.0.0    192.168.7.254    192.168.7.50
     10.142.168.0    255.255.255.0          On-link     192.168.7.50
===========================================================================

192.168.7.254 routes:
admin@Gateway:/tmp/home/root# ip route show
130.255.155.1 dev eth0  scope link
192.168.7.0/24 dev br0  proto kernel  scope link  src 192.168.7.254
130.255.155.0/24 dev eth0  proto kernel  scope link  src 130.255.155.33
192.168.254.0/24 via 192.168.7.2 dev br0  metric 1
127.0.0.0/8 dev lo  scope link
default via 130.255.155.1 dev eth0
I'm thinking that this issue could be caused by the fact that I'm routing the
traffic to the virtual machines from 192.168.7.254 (the gateway of my network)
to Host1 through 192.168.7.2, while the traffic coming from Host1 actually
goes out through 192.168.7.1. Could that be causing my problem?
From: jme...@opennebula.org
Date: Fri, 8 Aug 2014 11:33:01 +0200
Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.
To: thedragonsreb...@hotmail.com
CC: users@lists.opennebula.org

Hi,
did you manage to figure this out?
otherwise, can you send us the output of "ip route" in the VM, Host 1 and Term 
1?
cheers,


Jaime

On Wed, Jul 30, 2014 at 6:30 PM, Diego M.  wrote:





Hi all,
I'm trying to implement OpenNebula in my personal lab, as a colleague and I
have some projects where disposable VMs are handy, and we are also taking the
opportunity to learn about OpenNebula to keep up to date :)

I would like to ask a question regarding networking, because I'm pretty sure
I'm missing something in the configuration but I cannot work out what.
We have the following infrastructure:

  
  And the problem is that from the clients on the 192.168.7.0/24 subnet I can
ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
192.168.7.1, 192.168.7.2, and 192.168.7.254 on the 192.168.7.0/24 subnet;
all the other clients are for some reason not reachable.



I'm surely missing something somewhere, but I cannot work out what. I had
already enabled IPv4 forwarding on Host1 for all interfaces, and the
following are the contents of the /etc/network/interfaces file:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.7.1
netmask 255.255.255.0
gateway 192.168.7.254

auto Vbr0
iface Vbr0 inet static
address 192.168.7.2
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255
gateway 192.168.7.254
bridge_ports eth1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_maxwait 5
bridge_stp off

auto Vbr0:1
iface Vbr0:1 inet static
address 192.168.254.254
netmask 255.255.255.0
gateway 192.168.7.2

And this is the vnet template I'm using for the VMs:

onevnet show Public
VIRTUAL NETWORK 48 INFORMATION
ID : 48
NAME   : Public
USER   : oneadmin
GROUP  : users
CLUSTER: -
TYPE   : RANGED
BRIDGE : Vbr0
VLAN   : No
USED LEASES: 1

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="Vbr0"
DESCRIPTION=""
DNS="192.168.7.254"
GATEWAY="192.168.254.254"
NETWORK_ADDRESS="192.168.254.0"
NETWORK_MASK="255.255.255.0"
PHYDEV=""
VLAN="NO"
VLAN_ID=""

RANGE
IP_START   : 192.168.254.1
IP_END : 192.168.254.253

USED LEASES
LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="

Re: [one-users] Problem with network bridge from VMs to physical network.

2014-08-08 Thread Diego M .
Hi Jaime,
Thanks for your reply!

Still not, I'm trying to figure it out right now. The ip route output is the
following:

root@Host1:~# ip route show
default via 192.168.7.254 dev eth0
192.168.7.0/24 dev eth0  proto kernel  scope link  src 192.168.7.1
192.168.7.0/24 dev Vbr0  proto kernel  scope link  src 192.168.7.2
192.168.254.0/24 dev Vbr0  proto kernel  scope link  src 192.168.254.254

Term1 routes:
C:\Users\User1>route PRINT
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface
          0.0.0.0          0.0.0.0    192.168.7.254    192.168.7.50
     10.142.168.0    255.255.255.0          On-link     192.168.7.50
===========================================================================

192.168.7.254 routes:
admin@Gateway:/tmp/home/root# ip route show
130.255.155.1 dev eth0  scope link
192.168.7.0/24 dev br0  proto kernel  scope link  src 192.168.7.254
130.255.155.0/24 dev eth0  proto kernel  scope link  src 130.255.155.33
192.168.254.0/24 via 192.168.7.2 dev br0  metric 1
127.0.0.0/8 dev lo  scope link
default via 130.255.155.1 dev eth0
I'm thinking that this issue could be caused by the fact that I'm routing the
traffic to the virtual machines from 192.168.7.254 (the gateway of my network)
to Host1 through 192.168.7.2, while the traffic coming from Host1 actually
goes out through 192.168.7.1. Could that be causing my problem?
From: jme...@opennebula.org
Date: Fri, 8 Aug 2014 11:33:01 +0200
Subject: Re: [one-users] Problem with network bridge from VMs to physical 
network.
To: thedragonsreb...@hotmail.com
CC: users@lists.opennebula.org

Hi,
did you manage to figure this out?
otherwise, can you send us the output of "ip route" in the VM, Host 1 and Term 
1?
cheers,


Jaime

On Wed, Jul 30, 2014 at 6:30 PM, Diego M.  wrote:





Hi all,
I'm trying to implement OpenNebula in my personal lab, as a colleague and I
have some projects where disposable VMs are handy, and we are also taking the
opportunity to learn about OpenNebula to keep up to date :)

I would like to ask a question regarding networking, because I'm pretty sure
I'm missing something in the configuration but I cannot work out what.
We have the following infrastructure:

  
  And the problem is that from the clients on the 192.168.7.0/24 subnet I can
ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
192.168.7.1, 192.168.7.2, and 192.168.7.254 on the 192.168.7.0/24 subnet;
all the other clients are for some reason not reachable.



I'm surely missing something somewhere, but I cannot work out what. I had
already enabled IPv4 forwarding on Host1 for all interfaces, and the
following are the contents of the /etc/network/interfaces file:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.7.1
netmask 255.255.255.0
gateway 192.168.7.254

auto Vbr0
iface Vbr0 inet static
address 192.168.7.2
netmask 255.255.255.0
network 192.168.7.0
broadcast 192.168.7.255
gateway 192.168.7.254
bridge_ports eth1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_maxwait 5
bridge_stp off

auto Vbr0:1
iface Vbr0:1 inet static
address 192.168.254.254
netmask 255.255.255.0
gateway 192.168.7.2

And this is the vnet template I'm using for the VMs:

onevnet show Public
VIRTUAL NETWORK 48 INFORMATION
ID : 48
NAME   : Public
USER   : oneadmin
GROUP  : users
CLUSTER: -
TYPE   : RANGED
BRIDGE : Vbr0
VLAN   : No
USED LEASES: 1

PERMISSIONS
OWNER  : um-
GROUP  : u--
OTHER  : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="Vbr0"
DESCRIPTION=""
DNS="192.168.7.254"
GATEWAY="192.168.254.254"
NETWORK_ADDRESS="192.168.254.0"
NETWORK_MASK="255.255.255.0"
PHYDEV=""
VLAN="NO"
VLAN_ID=""

RANGE
IP_START   : 192.168.254.1
IP_END : 192.168.254.253

USED LEASES
LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="92" ]


VIRTUAL MACHINES


ID USER  GROUP NAME            STAT UCPU UMEM HOST       TIME
92 admin users Debian 7.5 Base runn    0 256M HOMPLMPKRS 0d 11h23
Can anyone see what I'm doing wrong and give me some advice?

It may also be that this is not the best way to bridge the VMs to the
physical network, but I did not find another way of doing it in the
documentation, or at least I did not understand it.


More detailed information about the templates I'm using, below is the "public&

Re: [one-users] Problem with network bridge from VMs to physical network.

2014-08-08 Thread Jaime Melis
Hi,

did you manage to figure this out?

otherwise, can you send us the output of "ip route" in the VM, Host 1 and
Term 1?

cheers,
Jaime


On Wed, Jul 30, 2014 at 6:30 PM, Diego M. 
wrote:

> Hi all,
> I'm trying to implement OpenNebula in my personal lab, as a colleague and I
> have some projects where disposable VMs are handy, and we are also taking the
> opportunity to learn about OpenNebula to keep up to date :)
>
> I would like to ask a question regarding networking, because I'm pretty sure
> I'm missing something in the configuration but I cannot work out what.
>
> We have the following infrastructure:
>
>
>
>   And the problem is that from the clients on the 192.168.7.0/24 subnet I can
> ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
> 192.168.7.1, 192.168.7.2, and 192.168.7.254 on the 192.168.7.0/24 subnet;
> all the other clients are for some reason not reachable.
>
>
> I'm surely missing something somewhere, but I cannot work out what. I had
> already enabled IPv4 forwarding on Host1 for all interfaces, and the
> following are the contents of the /etc/network/interfaces file:
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> allow-hotplug eth0
> iface eth0 inet static
> address 192.168.7.1
> netmask 255.255.255.0
> gateway 192.168.7.254
>
> auto Vbr0
> iface Vbr0 inet static
> address 192.168.7.2
> netmask 255.255.255.0
> network 192.168.7.0
> broadcast 192.168.7.255
> gateway 192.168.7.254
> bridge_ports eth1
> bridge_fd 9
> bridge_hello 2
> bridge_maxage 12
> bridge_maxwait 5
> bridge_stp off
>
> auto Vbr0:1
> iface Vbr0:1 inet static
> address 192.168.254.254
> netmask 255.255.255.0
> gateway 192.168.7.2
>
>
>
> And this is the vnet template I'm using for the VMs:
>
> onevnet show Public
>
> VIRTUAL NETWORK 48 INFORMATION
>
> ID : 48
>
> NAME   : Public
>
> USER   : oneadmin
>
> GROUP  : users
>
> CLUSTER: -
>
> TYPE   : RANGED
>
> BRIDGE : Vbr0
>
> VLAN   : No
>
> USED LEASES: 1
>
>
> PERMISSIONS
>
> OWNER  : um-
>
> GROUP  : u--
>
> OTHER  : ---
>
>
> VIRTUAL NETWORK TEMPLATE
>
> BRIDGE="Vbr0"
>
> DESCRIPTION=""
>
> DNS="192.168.7.254"
>
> GATEWAY="192.168.254.254"
>
> NETWORK_ADDRESS="192.168.254.0"
>
> NETWORK_MASK="255.255.255.0"
>
> PHYDEV=""
>
> VLAN="NO"
>
> VLAN_ID=""
>
>
> RANGE
>
> IP_START   : 192.168.254.1
>
> IP_END : 192.168.254.253
>
>
> USED LEASES
>
> LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
> IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="92" ]
>
>
> VIRTUAL MACHINES
>
>
> ID USER  GROUP NAME            STAT UCPU UMEM HOST       TIME
>
> 92 admin users Debian 7.5 Base runn    0 256M HOMPLMPKRS 0d 11h23
>
>
> Can anyone see what I'm doing wrong and give me some advice?
> It may also be that this is not the best way to bridge the VMs to
> the physical network, but I did not find another way of doing it in the
> documentation, or at least I did not understand it.
>
> More detailed information about the templates I'm using: below is the
> "public" network template (it provides leases from 192.168.254.0/24), which
> is the one I want to bridge to the local network (192.168.7.0/24). After the
> network template comes the information of the VM template, where the NIC
> using the "public" network template is assigned.
> oneadmin@HOMPLMPKRSV0001:/root$ onevnet list
>   ID USER     GROUP NAME    CLUSTER TYPE BRIDGE LEASES
>   47 oneadmin users Private -       R    Vbr0        1
>   48 oneadmin users Public  -       R    Vbr0        1
> oneadmin@HOMPLMPKRSV0001:/root$ onevnet show 48
> VIRTUAL NETWORK 48 INFORMATION
> ID : 48
> NAME   : Public
> USER   : oneadmin
> GROUP  : users
> CLUSTER: -
> TYPE   : RANGED
> BRIDGE : Vbr0
> VLAN   : No
> USED LEASES: 1
>
> PERMISSIONS
> OWNER  : um-
> GROUP  : u--
> OTHER  : ---
>
> VIRTUAL NETWORK TEMPLATE
> BRIDGE="Vbr0"
> DESCRIPTION=""
> DNS="192.168.7.254"
> GATEWAY="192.168.254.254"
> NETWORK_ADDRESS="192.168.254.0"
> NETWORK_MASK="255.255.255.0"
> PHYDEV=""
> VLAN="NO"
> VLAN_ID=""
>
> RANGE
> IP_START   : 192.168.254.1
> IP_END : 192.168.254.253
>
> USED LEASES
> LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
> IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="92" ]
>
> VIRTUAL MACHINES
>
> ID USER  GROUP NAME            STAT UCPU UMEM HOST       TIME
> 92 admin users Debian 7.5 Base runn    0 256M HOMPLMPKRS 1d 10h57
> oneadmin@HOMPLMPKRSV0001:/root$ onetemplate list
>   ID USERGROUP   NAME
>  REGTI