Re: [PVE-User] Debian 7 guest losing network

2015-09-28 Thread Alexandre DERUMIER


Alexandre Derumier
Systems and storage engineer

Landline: 03 20 68 90 88
Fax: 03 20 68 90 81

45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris

MonSiteEstLent.com - Blog dedicated to web performance and handling traffic peaks

----- Original message -----
From: "Eneko Lacunza"
To: "proxmoxve"
Sent: Monday, 28 September 2015 17:53:29
Subject: [PVE-User] Debian 7 guest losing network

Hi all,

I have a PVE 3.4 node with pve kernel 2.6.32-39 (details below). [...]


Re: [PVE-User] Debian 7 guest losing network

2015-09-28 Thread Alexandre DERUMIER
Hi,

Not sure it's related, but I remember some network latency problems with the kernel 3.2 from wheezy and virtio NICs in the past.
Using the kernel 3.16 from wheezy-backports fixed the problem.
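
If you want to try that, here is a minimal sketch of the upgrade inside the Debian 7 guest (assuming an amd64 VM; untested here):

# Enable wheezy-backports and install the 3.16 kernel
echo "deb http://http.debian.net/debian wheezy-backports main" \
    > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get -t wheezy-backports install linux-image-amd64
reboot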


----- Original message -----
From: "Eneko Lacunza"
To: "proxmoxve"
Sent: Monday, 28 September 2015 17:53:29
Subject: [PVE-User] Debian 7 guest losing network

Hi all,

I have a PVE 3.4 node with pve kernel 2.6.32-39 (details below). [...]


Re: [PVE-User] Ceph Performance

2015-09-28 Thread Martin Maurer

Hi,

A good start to get a feeling for the expected performance is this great paper from Red Hat:

http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers

If you want to build a small three-node cluster, you should use an Intel S3700 200 GB SSD for your journal. This SSD can be used as the journal for 4 or 5 OSDs. But note: if this SSD fails, you lose all of these OSDs - monitor its health status.
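
A minimal sketch of what that monitoring could look like (the device name /dev/sdX is a placeholder):

# Overall cluster health - reports down/out OSDs
ceph health detail
# SMART health of the journal SSD
smartctl -H /dev/sdX
# Wear level on Intel SSDs (Media_Wearout_Indicator, attribute 233)
smartctl -A /dev/sdX | grep -i wear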


If you have more budget, think about an SSD-only setup (for the OSDs).
Use enterprise-class SSDs with power-loss protection.

best,

martin

On 28.09.2015 18:10, Tobias Kropf - inett GmbH wrote:

Hi @ all

I have a question about Ceph: we plan to build our own Ceph cluster in our datacenter. [...]






--
Best Regards,

Martin Maurer

mar...@proxmox.com
http://www.proxmox.com


Proxmox Server Solutions GmbH
Kohlgasse 51/10, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien



Re: [PVE-User] Ceph Performance

2015-09-28 Thread Fabrizio Cuseo
Hello Tobias.
Check whether your SSD is suitable as a Ceph journal device:

http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

If you can, add 2 OSD SATA disks per host.
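
The test from that page boils down to small synchronous direct writes; roughly (a sketch - /dev/sdX is a placeholder, and this is destructive, so use only a blank spare disk):

# Prepare incompressible input data first
dd if=/dev/urandom of=randfile bs=1M count=400
# WARNING: writes to the raw device - wipes data on /dev/sdX
dd if=randfile of=/dev/sdX bs=4k count=100000 oflag=direct,dsync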

Regards, Fabrizio 



- On 28 Sep 2015, at 18:10, Tobias Kropf - inett GmbH wrote:





Hi @ all

I have a question about Ceph: we plan to build our own Ceph cluster in our datacenter. [...]




--
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
General Management - Panservice InterNetWorking
Professional services for Internet and networking
Panservice is an AIIP member - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:i...@panservice.it
National toll-free number: 800 901492



Re: [PVE-User] Debian 7 guest losing network

2015-09-28 Thread Robert Fantini
What type of storage is used?

Does the network issue start when there is heavy network traffic - like when backups are being done?
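
You can check both quickly on the node; a sketch (the VM ID 101 is just an example):

# Show the VM's disk and NIC configuration
qm config 101 | grep -E '^(net|virtio|ide|scsi|sata)'
# List the storages configured on the node
pvesm status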

On Mon, Sep 28, 2015 at 11:53 AM, Eneko Lacunza  wrote:

> Hi all,
>
> I have a PVE 3.4 node with pve kernel 2.6.32-39 (details below). [...]


[PVE-User] Ceph Performance

2015-09-28 Thread Tobias Kropf - inett GmbH
Hi @ all

I have a question about Ceph: we plan to build our own Ceph cluster in our datacenter. Can you tell me the performance statistics from a running Ceph cluster with the same setup?

We want to buy the following setup:

3x chassis with:

CPUs: 2x Intel E5-2620v3
RAM: 64GB
NIC: 2x 10GBit/s (Ceph), 4x 1GBit/s
HDD: 4x 2TB SATA, 1x 80GB SSD (OS), 1x 240GB SSD (Ceph cache)


 
--
Tobias Kropf
Technik

inett GmbH » Ihr IT Systemhaus in Saarbrücken
Eschberger Weg 1
66121 Saarbrücken
Managing Director: Marco Gabriel
Commercial register Saarbrücken, HRB 16588

Phone: 0681 / 41 09 93 - 0
Fax: 0681 / 41 09 93 - 99
E-Mail: i...@inett.de
Web: www.inett.de

Zarafa Gold Partner - Proxmox Authorized Reseller - Proxmox Training Center - SEP sesam Certified Partner - Endian Certified Partner - Kaspersky Silver Partner - Member of the iTeam system house alliance for SMEs

 


[PVE-User] Debian 7 guest losing network

2015-09-28 Thread Eneko Lacunza

Hi all,

I have a PVE 3.4 node with pve kernel 2.6.32-39 (details below).

A quite busy Samba server VM has been losing network connectivity about every week since 27th August.
When this happens, the VM can't ping other network elements, and one can't access the Samba server in the VM nor ping it.
The VM is a security-updated Debian 7. The last apt-get upgrade in the VM before the network was first lost was on 19th August.

The last Proxmox update was on 15th June, so I don't really think this is a Proxmox-related issue - but I'm sure there are lots of Debian 7 VMs out there. Is anyone having this problem? Any ideas?

I have tried both the e1000 and virtio network adapters, but the VM continues to lose network. I do have 2 other Debian 7 VMs in the cluster (one on the same host), but they are not as busy (and haven't lost network so far).


# pveversion -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Thanks a lot
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997 / 943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



Re: [PVE-User] can't make zfs-zsync working

2015-09-28 Thread Jean-Laurent Ivars
I had no answer, so I thought the mailing list wasn't working for me... I made a change in the wiki so that other people don't struggle like I did (it's completely insane that the script can't work with hostnames...).

If someone is interested, I made a cool script. What the script does is keep both hosts in sync, create a new log file every day to log all the syncs, and send you an email containing the log file if something bad happens. In fact it's a loop, and the goal is to always have the most recent copies of the VM disks on both sides. Almost as good as DRBD, but without the split-brain complications :)
In my case, for example, I have approximately 15 KVM VMs which are not under much load, and the script needs about 1 minute per loop; during busy periods maybe 2 or 3 minutes, surely less than 5... It's all new, so I have no experience with it yet; if someone uses it, I would be very happy to hear how it works for them.

It's made to work almost "out of the box" in a full-ZFS Proxmox installation in a two-host cluster only; if your configuration is different, you will have to adapt it...

You just have to verify that you have the following packages installed: pve-zsync and screen. You will also have to put your email address in the monmail variable at the beginning of the script.

#!/bin/bash

monmail="ad...@mydomain.com"

gosync() {
## Start the main loop
while true; do
## Create the log file (make sure the log directory exists first)
if [ ! -d "/var/log/syncro" ]; then
mkdir -p /var/log/syncro
fi
logfic="/var/log/syncro/syncro-`date '+%d-%m-%Y'`.log"
## Detect which machine we are on and which one is the remote machine
loc=`hostname`
dist=`ls /etc/pve/nodes/ | grep -v $loc`
### Collect the IDs of the VMs on local ZFS, then of the remote VMs
vmloc=`grep rpool /etc/pve/nodes/$loc/qemu-server/*.conf | cut -d / -f 7 | cut -d . -f 1`
vmdist=`grep rpool /etc/pve/nodes/$dist/qemu-server/*.conf | cut -d / -f 7 | cut -d . -f 1`
### Get the IP address of the remote host
ipdist=$(ping -c 1 $dist | gawk -F'[()]' '/PING/{print $2}')
## Check that the cluster nodes directory is present
if [ ! -d "/etc/pve/nodes/" ]; then
echo "Problem with the cluster at `date '+%d-%m-%Y_%Hh%Mm%Ss'`" >> $logfic
## Record that a mail was sent for this log, then send it
if [ "$logfic" != "`cat /tmp/mail.tmp 2>/dev/null`" ]; then
echo $logfic > /tmp/mail.tmp
cat $logfic | mail -s "ZFS sync problem" $monmail
fi
fi

echo "syncing machines from $loc to $dist" >> $logfic
for n in $vmloc
do
if test -f "/tmp/stopsync.req"
then
rm /tmp/stopsync.req
touch /tmp/stopsync.ok
exit 0
else
echo "sync of machine $n started at `date '+%d-%m-%Y_%Hh%Mm%Ss'`" >> $logfic
pve-zsync sync --source $n --dest $ipdist:rpool/lastsync --maxsnap 1 --verbose >> $logfic
if test ${?} -eq 0 ; then
echo "sync of machine $n finished at `date '+%d-%m-%Y_%Hh%Mm%Ss'`" >> $logfic
else
## Record that a mail was sent for this log, then send it
if [ "$logfic" != "`cat /tmp/mail.tmp 2>/dev/null`" ]; then
echo $logfic > /tmp/mail.tmp
cat $logfic | mail -s "ZFS sync problem" $monmail
fi
fi
fi
done

echo "syncing machines from $dist to $loc" >> $logfic
for n in $vmdist
do
if test -f "/tmp/stopsync.req"
then
rm /tmp/stopsync.req
touch /tmp/stopsync.ok
exit 0
else
echo "sync of machine $n started at `date '+%d-%m-%Y_%Hh%Mm%Ss'`" >> $logfic
pve-zsync sync --source $ipdist:$n --dest rpool/lastsync --maxsnap 1 --verbose >> $logfic
if test ${?} -eq 0 ; then
echo "sync of machine $n finished at `date '+%d-%m-%Y_%Hh%Mm%Ss'`" >> $logfic
else
## Record that a mail was sent for this log, then send it
if [ "$logfic" != "`cat /tmp/mail.tmp 2>/dev/null`" ]; then
echo $logfic > /tmp/mail.tmp
cat $logfic | mail -s "ZFS sync problem" $monmail
fi
fi
fi
done

done
}

stop() {
touch /tmp/stopsync.req
## Start a new loop to wait for the running sync to finish
while true; do
if test -f "/tmp/stopsync.ok"
then
echo "Sync stopped: OK"
## And stop the script itself
rm /tmp/stopsync.ok
kill $$
exit 0
else
echo "Stopping..."
echo "the running synchronization is finishing, this can take a while..."
sleep 3
fi
done
}

case "$1" in
gosync)
gosync
;;
start)
screen -d -m -S syncro-zfs bash -c '/root/scripts/syncro-zfs gosync'
echo "Synchronization started: OK"
echo "run 'screen -r syncro-zfs' to see the standard output."
;;
stop)
stop
;;
*)
echo "Usage: $0 {start|stop}" >&2
exit 1
;;
esac
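
To use it, for example (assuming you save it as /root/scripts/syncro-zfs, the path the start command above expects):

# Install the script and start the sync loop in a detached screen session
mkdir -p /root/scripts
cp syncro-zfs /root/scripts/syncro-zfs
chmod +x /root/scripts/syncro-zfs
/root/scripts/syncro-zfs start
# Attach to watch it run, or stop it cleanly
screen -r syncro-zfs
/root/scripts/syncro-zfs stop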


Hope you will like it, please let me know.

Best regards,




Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06 52 60 86 47
LinkedIn | Viadeo | www.ipgenius.fr

> On 28 Sep 2015, at 08:18, Michael Rasmussen wrote:
>
> On Mon, 28 Sep 2015 08:14:42 +0200 (CEST)
> Wolfgang 

Re: [PVE-User] Fwd: can't make zfs-zsync working

2015-09-28 Thread Wolfgang Bumiller
> root@cyclone ~ # pve-zsync sync --source 106 --dest ouragan:rpool/BKP_24H --verbose

I just checked the source - apparently we currently only allow IPv4 addresses there, no hostnames (this still needs changing...).
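
Until that changes, a possible workaround is to resolve the hostname yourself before calling pve-zsync (an untested sketch; "ouragan" and VM 106 are taken from the example above):

# Look up the IPv4 address for the target node and use it as the destination
DEST_IP=$(getent ahostsv4 ouragan | awk '{print $1; exit}')
pve-zsync sync --source 106 --dest "$DEST_IP:rpool/BKP_24H" --verbose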



Re: [PVE-User] Fwd: can't make zfs-zsync working

2015-09-28 Thread Michael Rasmussen
On Mon, 28 Sep 2015 08:14:42 +0200 (CEST)
Wolfgang Bumiller  wrote:

> > root@cyclone ~ # pve-zsync sync --source 106 --dest ouragan:rpool/BKP_24H --verbose
>
> I just checked the source - apparently we currently only allow IPv4 addresses
> there, no hostnames (this still needs changing...).
> 
And IPv6?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Nature makes boys and girls lovely to look upon so they can be
tolerated until they acquire some sense.
-- William Phelps

