Re: [PVE-User] Can´t ping to the outside - OVH Proxmox 5.3

2018-12-16 Thread dORSY
 I use a much simpler solution:
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP 
group default qlen 1000
    link/ether my:ma:ca:dd:re:ss brd ff:ff:ff:ff:ff:ff
    inet xx.yy.zz.235/32 brd xx.yy.zz.235/32 scope global ens18
   valid_lft forever preferred_lft forever

# ip ro
default via xx.yy.zz.235 dev ens18

So basically your own /32 IP is your GW. Then the soyoustart switch does the 
job if you registered the MAC for your VM.
Also works for windows.
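
In /etc/network/interfaces terms, the VM side of this setup could look
roughly like the sketch below (interface name and the xx.yy.zz.235
placeholder follow the output above; the routing table above shows the
VM's own /32 as the via hop, and an interface-scoped default route gives
the same effect here):

auto ens18
iface ens18 inet static
    address xx.yy.zz.235/32
    # send everything straight out of the interface; the SoYouStart switch
    # takes it from there once the failover MAC is registered for the VM
    post-up ip route add default dev ens18
    pre-down ip route del default dev ens18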

On Sunday, 16 December 2018, 20:37:05 CET, Stephan Leemburg 
 wrote:  
 
 Hi Miguel,

Your new PVE host 111.222.333.74 needs to have 111.222.333.254 as its 
default gateway.

The VMs need 111.222.333.74 as their default gateway. This is what 
OVH/SoYouStart requires.

Also, if you assign public failover IP addresses to your VMs, then you 
need to generate a MAC address for them in the management console of 
SoYouStart and assign that MAC address to the public interface of the VM.

So consider if you have 16 failover ip addresses in the range 1.2.3.1/28

Then the /etc/network/interfaces of one such vm should be:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
     address 1.2.3.1
     netmask 255.255.255.255
     pre-up /etc/network/firewall $IFACE up
     post-up ip route add 111.222.333.74 dev $IFACE
     post-up ip route add 1.2.3.1/28 dev $IFACE
     post-up ip route add default via 111.222.333.74 dev $IFACE
     post-down ip route del default via 111.222.333.74 dev $IFACE
     post-down ip route del 1.2.3.1/28 dev $IFACE
     post-down ip route del 111.222.333.74 dev $IFACE
     post-down /etc/network/firewall $IFACE down
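
The /etc/network/firewall hook referenced above is a custom script; a
hypothetical minimal skeleton (not the actual script, which is not shown
here) could look like this:

#!/bin/sh
# /etc/network/firewall <iface> <up|down>  -- hypothetical example
IFACE="$1"; ACTION="$2"
case "$ACTION" in
    up)
        iptables -A INPUT -i "$IFACE" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i "$IFACE" -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -i "$IFACE" -j DROP
        ;;
    down)
        iptables -F INPUT
        ;;
esac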

Best regards,

Stephan


On 16-12-18 20:23, Miguel González wrote:
> Sorry Stephan, I've been working with this setup for about 2 years.
>
> I am just wondering whether, in the case of a PVE host IP address like
>
> 111.222.333.74 (this last .74 is real), my VMs' gateway should be
> 111.222.333.254 or 111.222.333.137.
>
> That's what I am asking.
>
> Right now my legacy server has exactly the same IP address as the new
> one, except that the last octet is .220 instead of .74. All VMs running
> perfectly on that legacy server also have .254 configured as their
> gateway. That's what confuses me.
>
> So summarizing:
>
> legacy dedicated server IP: 111.222.333.220
>
> --> All VMs have 111.222.333.254 as gateway
>
> new dedicated server IP: 111.222.333.74
>
> --> Configuring 111.222.333.254 in the VM makes the public IP address
> of the new server and the gateway reachable, but I can't ping the
> outside world.
>
> I hope this clarifies the situation :)
>
> Miguel
>
>
> On 12/16/18 8:03 PM, Stephan Leemburg wrote:
>> Hi Miguel,
>>
>> Yes, on the pve host the OVH gateway is the .254
>>
>> But your containers and VMs on the PVE host must use the IP address
>> of the PVE host as their default gateway.
>>
>> Also you need to assign MAC addresses from the OVH control panel if
>> you are using the public failover IP addresses.
>>
>> Kind regards,
>> Stephan
>>
>> On 16-12-18 18:30, Miguel González wrote:
>>> Hi Stephan,
>>>
>>>     I use public failover IP addresses. I ask about your gateway
>>> configuration, you use:
>>>
>>>     91.121.183.137
>>>
>>> and as far as I know, the gateway must be the public IP address of the
>>> host with the last octet replaced by .254. That's what OVH says in their
>>> docs.
>>>
>>>     Thanks!
>>>
>>> On 12/15/18 2:43 PM, Stephan Leemburg wrote:
 OVH requires you to route traffic from VMs via the IP address of your
 hardware.

 So .137 is the IP address of the hardware.

 Do you use any public ip addresses on your soyoustart system?

 Or just private range and then send them out via NAT?

 Kind regards,
 Stephan Leemburg
 IT Functions

 e: sleemb...@it-functions.nl
 p: +31 (0)71 889 23 33
 m: +31(0)6 83 22 30 69
 kvk: 27313647

 On 15-12-18 14:39, Miguel González wrote:
> There must be something wrong with the configuration since I have
> tested
> another server and seems to be fine.
>
> Why do you use 137? In the proxmox docs they say the gateway is
> xxx.xxx.xxx.254
>
> Thanks!
>
>
> On 12/15/18 2:16 PM, Stephan Leemburg wrote:
>> Did you setup routing correctly within the containers / vm's?
>> OVH/SoYouStart has awkward network routing requirements.
>>
>> I have 2 servers at soyoustart and they do fine with the correct
>> network configuration.
>>
>> Below is an example from one of my containers.
>>
>> Also, it is a good idea to setup a firewall and put your
>> containers on
>> vmbr devices connected to the lan side of your firewall.
>>
>> Then on the lan side you have 'normal' network configurations.
>>
>> The pve has ip address 91.121.183.137
>>
>> I have a subnet 54.37.62.224/28 on which containers and vm's live.
>>
>> # cat /etc/network/interfaces
>>
>> auto lo
>> ifa

Re: [PVE-User] hdparm

2018-10-31 Thread dorsy

hdparm is for HDDs (or SSDs).

Use fio instead; it can test with a variety of settings (random 
IOPS/bandwidth/etc.) and the results are reproducible!
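
For example, a 4k random read/write test could look like this (a sketch;
the file name, size and runtime are placeholders to adjust):

fio --name=randrw-test --filename=/path/to/testfile --size=4G \
    --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting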


On 2018. 10. 31. 15:52, lord_Niedzwiedz wrote:

        How do I test RAID/volume speed in Proxmox?
hdparm -tT ... ?
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NVMe

2018-10-30 Thread dorsy

On most systems, legacy boot from NVMe drives is not possible.

On 2018. 10. 30. 15:08, lord_Niedzwiedz wrote:

        I set legacy boot in the BIOS.
I used only one disk with LVM.
And the system does not start with this.

Any suggestion?

I have a problem.
I'm trying to install Proxmox on 4 NVMe drives.
One on the motherboard, two on the PCIe.

Proxmox sees everything at the installation.
I chose the ZFS (RAIDZ-1) option.

And I get an error at the end:
"unable to create zfs root pool"

GRUB is not yet working with ZFS on EFI. Try to switch to legacy boot in
BIOS if possible or use LVM for the installation.


Attached pictures (1-5) .jpg.
https://help.komandor.pl/Wymiana/1.jpg
https://help.komandor.pl/Wymiana/2.jpg
https://help.komandor.pl/Wymiana/3.jpg
https://help.komandor.pl/Wymiana/4.jpg
https://help.komandor.pl/Wymiana/5.jpg


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] DRBD sync speed

2018-10-10 Thread dorsy

This is DRBD related; google for "DRBD resync speed" :)
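
With DRBD 8.4 (as shown below) the resync rate can be raised on the fly,
for example (a sketch; the 100M ceiling is just a starting point for a
2 x 1 Gbps bond, and the resource name r0 is taken from the post below):

drbdadm disk-options --c-plan-ahead=0 --resync-rate=100M r0

The same settings can be made permanent in the resource's disk {} section.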

On 2018. 10. 10. 15:23, Adam Weremczuk wrote:

Hi all,

I'm trying out DRBD Pacemaker HA Cluster on Proxmox 5.2

I have 2 identical servers connected with 2 x 1 Gbps links in 
bond_mode balance-rr.


The bond is working fine; I get a transfer rate of 150 MB/s with scp.

Following this guide: 
https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/ 
was going  smoothly up until:


drbdadm -- --overwrite-data-of-peer primary r0/0

cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-
    ns:10944 nr:0 dw:0 dr:10992 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 
wo:f oos:3898301536

    [>] sync'ed:  0.1% (3806932/3806944)M
    finish: 483:25:13 speed: 2,188 (2,188) K/sec

The transfer rate is horribly slow and at this pace it's going to take 
20 days for two 4 TB volumes to sync!


That's almost 15 times slower compared with the guide video (8:30): 
https://www.youtube.com/watch?v=WQGi8Nf0kVc


The volumes have been zeroed and contain no live data yet.

My sdb disks are logical drives (hardware RAID) set up as RAID50 with 
the defaults:


Strip size: 128 KB
Access policy: RW
Read policy: Normal
Write policy: Write Back with BBU
IO policy: Direct
Drive Cache: Disable
Disable BGI: No

Performance looks good when tested with hdparm:

hdparm -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   15056 MB in  1.99 seconds = 7550.46 MB/sec
 Timing buffered disk reads: 2100 MB in  3.00 seconds = 699.81 MB/sec

Any idea why the sync rate is so painfully slow and how to improve it?

Regards,
Adam

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox CEPH 6 servers failures!

2018-10-05 Thread dorsy
Moving from 6 to 8 mons and losing 4 of them instead of 3 will not 
save you.


Basic maths:
quorum = floor(n/2)+1
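
Worked out for the two cases above:

6 mons: quorum = floor(6/2)+1 = 4; losing 3 leaves 3 < 4 -> no quorum
8 mons: quorum = floor(8/2)+1 = 5; losing 4 leaves 4 < 5 -> no quorum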

On 2018. 10. 05. 14:31, Gilberto Nunes wrote:

Nice... Perhaps if I create a VM on Proxmox01 and Proxmox02, and join this
VM into the Ceph cluster, can I solve the quorum problem?
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Fri, 5 Oct 2018 at 09:23, dorsy wrote:


Your question has already been answered. You need a majority to have quorum.

On 2018. 10. 05. 14:10, Gilberto Nunes wrote:

Hi
Perhaps this can help:

https://imageshack.com/a/img921/6208/X7ha8R.png

I was thinking about it, and perhaps if I deploy a VM on both sides, with
Proxmox, and add this VM to the CEPH cluster, maybe this can help!

thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Fri, 5 Oct 2018 at 03:55, Alexandre DERUMIER <aderum...@odiso.com> wrote:


Hi,

Can you resend your schema, because it's impossible to read.


but you need to have quorum on the monitors to have the cluster working.


- Original Message -
From: "Gilberto Nunes" 
To: "proxmoxve" 
Sent: Thursday, 4 October 2018 22:05:16
Subject: [PVE-User] Proxmox CEPH 6 servers failures!

Hi there

I have something like this:

CEPH01 |
|- CEPH04
|
|
CEPH02 |-|
CEPH05
| Optic Fiber
|
CEPH03 |
|--- CEPH06

Sometime, when Optic Fiber not work, and just CEPH01, CEPH02 and CEPH03
remains, the entire cluster fail!
I find out the cause!

ceph.conf

[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/24
fsid = e67534b4-0a66-48db-ad6f-aa0868e962d8
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.10.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve-ceph01]
host = pve-ceph01
mon addr = 10.10.10.100:6789
mon osd allow primary affinity = true

[mon.pve-ceph02]
host = pve-ceph02
mon addr = 10.10.10.110:6789
mon osd allow primary affinity = true

[mon.pve-ceph03]
host = pve-ceph03
mon addr = 10.10.10.120:6789
mon osd allow primary affinity = true

[mon.pve-ceph04]
host = pve-ceph04
mon addr = 10.10.10.130:6789
mon osd allow primary affinity = true

[mon.pve-ceph05]
host = pve-ceph05
mon addr = 10.10.10.140:6789
mon osd allow primary affinity = true

[mon.pve-ceph06]
host = pve-ceph06
mon addr = 10.10.10.150:6789
mon osd allow primary affinity = true

Any help will be welcome!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox CEPH 6 servers failures!

2018-10-05 Thread dorsy

Your question has already been answered. You need a majority to have quorum.

On 2018. 10. 05. 14:10, Gilberto Nunes wrote:

Hi
Perhaps this can help:

https://imageshack.com/a/img921/6208/X7ha8R.png

I was thinking about it, and perhaps if I deploy a VM on both sides, with
Proxmox, and add this VM to the CEPH cluster, maybe this can help!

thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Fri, 5 Oct 2018 at 03:55, Alexandre DERUMIER wrote:


Hi,

Can you resend your schema, because it's impossible to read.


but you need to have quorum on the monitors to have the cluster working.


- Original Message -
From: "Gilberto Nunes" 
To: "proxmoxve" 
Sent: Thursday, 4 October 2018 22:05:16
Subject: [PVE-User] Proxmox CEPH 6 servers failures!

Hi there

I have something like this:

CEPH01 |
|- CEPH04
|
|
CEPH02 |-|
CEPH05
| Optic Fiber
|
CEPH03 |
|--- CEPH06

Sometime, when Optic Fiber not work, and just CEPH01, CEPH02 and CEPH03
remains, the entire cluster fail!
I find out the cause!

ceph.conf

[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/24
fsid = e67534b4-0a66-48db-ad6f-aa0868e962d8
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.10.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve-ceph01]
host = pve-ceph01
mon addr = 10.10.10.100:6789
mon osd allow primary affinity = true

[mon.pve-ceph02]
host = pve-ceph02
mon addr = 10.10.10.110:6789
mon osd allow primary affinity = true

[mon.pve-ceph03]
host = pve-ceph03
mon addr = 10.10.10.120:6789
mon osd allow primary affinity = true

[mon.pve-ceph04]
host = pve-ceph04
mon addr = 10.10.10.130:6789
mon osd allow primary affinity = true

[mon.pve-ceph05]
host = pve-ceph05
mon addr = 10.10.10.140:6789
mon osd allow primary affinity = true

[mon.pve-ceph06]
host = pve-ceph06
mon addr = 10.10.10.150:6789
mon osd allow primary affinity = true

Any help will be welcome!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox per VM memory limit

2018-10-03 Thread dorsy

Do the CPU and the host/guest OS support addressing 2 TB of RAM?
Most CPUs can address at most about 2^40 bytes (a 40-bit physical address
space) of memory!

For example, "address sizes    : 39 bits physical, 48 bits virtual" for 
my laptop CPU.
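
One way to check this on a given machine (host or guest) is:

grep -m1 'address sizes' /proc/cpuinfo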


On 2018. 10. 03. 14:18, Gilberto Nunes wrote:

That's OK for RHEL and pure KVM/QEMU.
But I think the Proxmox staff did something to the QEMU code used in Proxmox,
because this friend of mine doesn't get a VM with more than 1 TB of memory
to work properly!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Wed, 3 Oct 2018 at 04:34, dorsy wrote:


Since it is kvm, you can look at the kvm limits:

https://access.redhat.com/articles/rhel-kvm-limits

On 2018. 10. 03. 5:14, Woods, Ken A (DNR) wrote:

I see.  I wonder if that person was confused.
XenServer does have a 1TB limit.

(But, I’m not staff for them, either. )



On Oct 2, 2018, at 18:32, Gilberto Nunes wrote:

Because somebody told me that Proxmox could not address more than 1TB.
Never mind!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Tue, 2 Oct 2018 at 22:23, Woods, Ken A (DNR) <ken.wo...@alaska.gov> wrote:


I'm not a staff member.  Just a warning--I might be totally wrong about
all this.  Or...not.

IIRC, the coded limit is 4TiB per VM (I think there's a QEMU imposed limit
at 42 bits...somebody correct me if I'm wrong.) Which gets into a whole big
off-topic discussion about address paths and the size of page tables and
their structure, soyeah.  Anyway.

I'm going to guess (given your recent posts) that you don't have that much
available on the hosts, so Mark's answer is still correct--it's as much as
you have in your hosts.

So we can *all* bypass some back and forth:  Are you *really* trying to
ask about memory over-commitment?
If so, the short answer is "Please don't do that if you care about
stability, which we know you do. Use ballooning."
If not, please ask the question behind "how much memory can I allocate to
a VM?"
In short:  Why are you asking?

kw


-Original Message-
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of
Gilberto Nunes
Sent: Tuesday, October 02, 2018 4:45 PM
To: PVE User List
Subject: Re: [PVE-User] Proxmox per VM memory limit

Good. Nice to hear that.
But I need someone from the Proxmox staff to answer the question... Just
to be more accurate about it.
Thanks

On Tue, 2 Oct 2018 at 17:53, Mark Adams wrote:

assuming the OS in the VM supports it, as much as the host hardware
can support (no limit).

On Tue, 2 Oct 2018 at 19:35, Gilberto Nunes wrote:


Hi there!

How many memory per VM I get in PVE?
Is there some limit? 1 TB? 2 TB?
Just curious

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com


https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com



https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com


https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com


https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Re: [PVE-User] Proxmox per VM memory limit

2018-10-03 Thread dorsy

Since it is kvm, you can look at the kvm limits:

https://access.redhat.com/articles/rhel-kvm-limits

On 2018. 10. 03. 5:14, Woods, Ken A (DNR) wrote:

I see.  I wonder if that person was confused.
XenServer does have a 1TB limit.

(But, I’m not staff for them, either. )



On Oct 2, 2018, at 18:32, Gilberto Nunes  wrote:

Because somebody told me that Proxmox could not address more than 1TB.
Never mind!

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Tue, 2 Oct 2018 at 22:23, Woods, Ken A (DNR) wrote:


I'm not a staff member.  Just a warning--I might be totally wrong about
all this.  Or...not.

IIRC, the coded limit is 4TiB per VM (I think there's a QEMU imposed limit
at 42 bits...somebody correct me if I'm wrong.) Which gets into a whole big
off-topic discussion about address paths and the size of page tables and
their structure, soyeah.  Anyway.

I'm going to guess (given your recent posts) that you don't have that much
available on the hosts, so Mark's answer is still correct--it's as much as
you have in your hosts.

So we can *all* bypass some back and forth:  Are you *really* trying to
ask about memory over-commitment?
If so, the short answer is "Please don't do that if you care about
stability, which we know you do. Use ballooning."
If not, please ask the question behind "how much memory can I allocate to
a VM?"
In short:  Why are you asking?

kw


-Original Message-
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of
Gilberto Nunes
Sent: Tuesday, October 02, 2018 4:45 PM
To: PVE User List
Subject: Re: [PVE-User] Proxmox per VM memory limit

Good. Nice to hear that.
But I need someone from the Proxmox staff to answer the question... Just
to be more accurate about it.
Thanks

On Tue, 2 Oct 2018 at 17:53, Mark Adams wrote:


assuming the OS in the VM supports it, as much as the host hardware
can support (no limit).

On Tue, 2 Oct 2018 at 19:35, Gilberto Nunes

wrote:


Hi there!

How many memory per VM I get in PVE?
Is there some limit? 1 TB? 2 TB?
Just curious

Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com

https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OVS Internal ports all in state unknown

2018-09-14 Thread dORSY
This is not Proxmox related. Those are not physical interfaces, so there is
no link up or down on them.
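
You can see the same thing from the kernel's point of view, for example
(vlan233 is the internal port from the config below):

cat /sys/class/net/vlan233/operstate   # virtual ports usually say "unknown"
ip -d link show vlan233                # details; no physical carrier to report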

 
 
  On Fri, Sep 14, 2018 at 16:42, Brian Sidebotham wrote: 
  Hi Guys,

We are using Openvswitch networking and we have a physical 1G management
network and two 10G physical links bonded. The physical interfaces show a
state of UP when doing "ip a".

However for the OVS bond, bridges and internal ports we get a state of
"UNKNOWN". Is this expected?

Everything else is essentially working OK - The GUI marks the bond, bridge
and internal ports as active and traffic is working as expected, but I
don't know why the state of these is not UP?

An example of an internal port OVS Configuration in /etc/network/interfaces
(as setup by the GUI):

allow-vmbr1 vlan233
iface vlan233 inet static
address  10.1.33.24
netmask  255.255.255.0
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=233

and ip a output:

14: vlan233: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
*UNKNOWN* group default qlen 1000
    link/ether e2:53:9f:28:cb:2b brd ff:ff:ff:ff:ff:ff
    inet 10.1.33.24/24 brd 10.1.33.255 scope global vlan233
      valid_lft forever preferred_lft forever
    inet6 fe80::e053:9fff:fe28:cb2b/64 scope link
      valid_lft forever preferred_lft forever

The version we're running is detailed below. We rolled back the kernel as
we were having stability problems with 4.15.8 on our hardware (HP Proliant
Gen8)

root@ :/etc/network# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.13.16-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.7-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9

---
Brian Sidebotham

Wanless Systems Limited
e: brian@wanless.systems
m:+44 7739 359 883
o: +44 330 223 3595


The information in this email is confidential and solely for the use of the
intended recipient(s). If you receive this email in error, please notify
the sender and delete the email from your system immediately. In such
circumstances, you must not make any use of the email or its contents.

Views expressed by an individual in this email do not necessarily reflect
the views of Wanless Systems Limited.

Computer viruses may be transmitted by email. Wanless Systems Limited
accepts no liability for any damage caused by any virus transmitted by this
email. E-mail transmission cannot be guaranteed to be secure or error-free.
It is possible that information may be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses. The sender does
not accept liability for any errors or omissions in the contents of this
message, which arise as a result of e-mail transmission.

Please note that all calls are recorded for monitoring and quality purposes.

Wanless Systems Limited.
Registered office: Wanless Systems Limited, Bracknell, Berkshire, RG12 0UN.
Registered in England.
Registered number: 6901359
.
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread dorsy

I'd say that it is more convenient to support one method.
It was also mentioned in this thread that ZFS could be considered a 
successor of MDraid+LVM.


It is still a Debian system with a custom kernel and some PVE packages on 
top, so you can do anything just like on any standard Debian system.


On 8/8/18 3:32 PM, Denis Morejon wrote:
Why has the Proxmox team not incorporated software RAID in the 
install process? Then we could have redundancy and the advantages of LVM 
when using local disks.





On 08/08/18 at 09:23, Denis Morejon wrote:



On 07/08/18 at 17:51, Yannis Milios wrote:

  (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
/dev/zvol/rpool/lvm) and make a VG (vgcreate pve 
/dev/zvol/rpool/lvm)

and then a LV (lvcreate -L100% pve/data)



Try the above as it was suggested to you ...
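
Spelled out as a command sequence, the suggestion above is roughly (a
sketch; sizes and names are the ones from the quote, and the last step
uses the usual -l 100%FREE form of lvcreate):

zfs create -V 100G rpool/lvm
pvcreate /dev/zvol/rpool/lvm
vgcreate pve /dev/zvol/rpool/lvm
lvcreate -l 100%FREE -n data pve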



But I suspect I have no space to create an
additional zfs volume since the one mounted on "/" occupied all 
the space


No, that's a wrong assumption, zfs does not pre-allocate the whole 
space of

the pool, even if looks like it does so. In short there is no need to
"shrink" the pool in order to create a zvol as it was suggested 
above...
Still, the whole idea of having LVM ontop of ZFS/zvol is a mess, but 
if you

insist, it's up to you ...
A combination of Linux RAID + LVM would look much more elegant in your
case, but for that you have to reinstall PVE by using the Debian iso.
During the installation create a linux raid array with lvm on top 
and then

add PVE repos ass described in the wiki:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
That's right. Now I understand that LVM on ZFS would be a mess. Mainly
because ZFS doesn't create block devices such as partitions on which I
could do pvcreate ...

and make it part of an LVM volume group.

After a (zfs create -V 100G rpool/lvm) I have to do a losetup to 
create a loop device and so on...


Instead, I will keep zfs Raid mounted on "/" (local storage) on the 
last 4 Proxmox, remove the local-lvm storage from all Proxmox, and 
resize the local storage of the first 4 Proxmox . In such a way that 
all the 8 Proxmox have just local storage making the migration of VMs 
between nodes easy.




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Interfaces startup and ip-up.d scripts...

2018-05-18 Thread dORSY
 
"So, now i cannot play with networking. ;-)"
Then don't play with them. Simply use the "old-school" working post-up and 
pre-down directives for the interfaces. As we all linux admins do for ages. And 
use built-in firewall tools (iptables/netfilter-persistent or proxmox's fireall 
or anything coming as a debian package) All in all proxmox5 is debian9 and 
works exactly as one :).
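
For example, a bridge stanza like the vmbr1 one posted elsewhere in this
thread could call the firewall script directly (the script path here is a
hypothetical placeholder):

auto vmbr1
iface vmbr1 inet static
    address  10.5.2.230
    netmask  255.255.0.0
    gateway  10.5.1.254
    bridge_ports enp2s0f1
    bridge_stp off
    bridge_fd 0
    # runs only once the interface is actually up / just before it goes down
    post-up  /usr/local/sbin/firewall.sh $IFACE up   || true
    pre-down /usr/local/sbin/firewall.sh $IFACE down || true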

On Friday, 18 May 2018, 17:23:20 CEST, Marco Gaiarin  
wrote:  
 
 Hi! Josh Knight
  In that message you wrote...

> Interesting, I couldn't reproduce the problem on my server.

I'm not a very large user case: i've many PVE system, but they are 4.4
and not ''firewalled'', this is a 5.2 and a case ''per se''...

> I set
> verbose=yes, I created a test script that simply did echo $IFACE and after
> 3 reboots it seems to execute each time. After boot I just did  journalctl
> -b | grep ifup  and I was able to see the interface names printed.

Boh...


> in journalctl -b, are you seeing anything related to run-parts? Or does
> ifup not print anything at all?

In a ''falied'' boot i can se the logs for interfaces 'lo' and '--all'
(why '--all'?). Logs report, for every interface:
    /bin/ip link set dev  up
and then the run of the 'run-parts':
    run-parts: executing /etc/network/if-up.d/0sysctl
and the the single runs of the scripts:
    run-parts: executing /etc/network/if-up.d/bridgevlan

In a good boot, i can se the logs for interfaces 'lo', 'vmbr0', 'vmbr1'
and '--all''. Same logs.


> Is your firewall script using anything interface specific?  If you put it
> in that directory, it will be executed for each interface. 

My script are parametrizied, and get runned only on particular
interfaces.


> I'm curious if
> you add a post-up line to your /etc/network/interfaces file, it would be
> called only once when your mgt interface comes up.

I was short on time, and so i was forced to put that server
in production, in a non too easy reachable place.

So, now i cannot play with networking. ;-)

-- 
dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG  -  Via della Bontà, 7 - 33078  -  San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

        Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
    (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Attempted to add SSL to pve, now cannot access the web GUI.

2018-05-17 Thread dORSY
You should delete the SSL exception from Firefox if you had one. Also, you can 
try an alternative browser.

Sent from Yahoo Mail on Android 
 
  On Thu, May 17, 2018 at 17:21, Viktor Kruug wrote:   Is 
there a way to reset the web GUI back to default settings to get around
the bad SSL certificate settings?

My server is not accessible from the internet, so I had to generate the
certificate using the DNS-01 process and upload the certificate via the web
portal.  Currently, when navigating to the URL, Firefox gives me "Secure
Connection Failed".

-- 
*No trees were destroyed in the sending of this message; however, a
significant number of electrons were terribly inconvenienced.*


[[[ To any NSA and FBI agents reading my email: please consider    ]]]
[[[ whether defending the US Constitution against all enemies,    ]]]
[[[ foreign or domestic, requires you to follow Snowden’s example. ]]]
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Interfaces startup and ip-up.d scripts...

2018-05-15 Thread dORSY
Also, I use it like this:

    post-up /etc/network/custom/vmbr0-post-up
    post-up /sbin/ethtool -G eth0 rx 2048 || true
    post-up /sbin/ethtool -G eth0 tx 2048 || true
    pre-down /etc/network/custom/vmbr0-pre-down

And I never had problems with it.

On Tuesday, 15 May 2018, 19:08:42 CEST, Marco Gaiarin  
wrote:  
 
 Hi! Josh Knight
  In that message you wrote...

> Are you using the script to assign an IP address manually, or are you using
> it to set firewall rules?

I'm setting firewall rules; my script are coded carefully, and exit
always with 0 status.
They works in my debisn server/firewall without a trouble (but, indeed, with
standard debian kernel, and still in jessie).


> I'm trying to determine what you're trying to
> do.  Having the IP defined in /etc/network/interfaces should be enough for
> it to come up correctly without any custom scripts.  Is this not the case?

Tha strange things is exactly that. With simple stanzas like that:

> >  auto vmbr0
> >  iface vmbr0 inet static
> >        address  10.99.25.254
> >        netmask  255.255.252.0
> >        bridge_ports enp2s0f0
> >        bridge_stp off
> >        bridge_fd 0
> >        bridge_vlan_aware yes
> >
> >  auto vmbr1
> >  iface vmbr1 inet static
> >        address  10.5.2.230
> >        netmask  255.255.0.0
> >        gateway  10.5.1.254
> >        bridge_ports enp2s0f1
> >        bridge_stp off
> >        bridge_fd 0

interfaces get correctly brought up (eg, and 'ip address show' list
interfaces correctly), but looking at 'journalctl -b' seems that
if-up.d and if-down.d scripts get never executed.

*ALL* scripts, of course, not only mine...

-- 
dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG  -  Via della Bontà, 7 - 33078  -  San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

        Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
    (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Interfaces startup and ip-up.d scripts...

2018-05-15 Thread dORSY
 You can also use either Proxmox's built-in firewall (which is awesome if you 
get used to it) or netfilter-persistent to load/save iptables rules.
They tend to be more robust than shell-script hooks.

On Tuesday, 15 May 2018, 19:08:42 CEST, Marco Gaiarin  
wrote:  
 
 Hi! Josh Knight
  In that message you wrote...

> Are you using the script to assign an IP address manually, or are you using
> it to set firewall rules?

I'm setting firewall rules; my script are coded carefully, and exit
always with 0 status.
They works in my debisn server/firewall without a trouble (but, indeed, with
standard debian kernel, and still in jessie).


> I'm trying to determine what you're trying to
> do.  Having the IP defined in /etc/network/interfaces should be enough for
> it to come up correctly without any custom scripts.  Is this not the case?

Tha strange things is exactly that. With simple stanzas like that:

> >  auto vmbr0
> >  iface vmbr0 inet static
> >        address  10.99.25.254
> >        netmask  255.255.252.0
> >        bridge_ports enp2s0f0
> >        bridge_stp off
> >        bridge_fd 0
> >        bridge_vlan_aware yes
> >
> >  auto vmbr1
> >  iface vmbr1 inet static
> >        address  10.5.2.230
> >        netmask  255.255.0.0
> >        gateway  10.5.1.254
> >        bridge_ports enp2s0f1
> >        bridge_stp off
> >        bridge_fd 0

interfaces get correctly brought up (eg, and 'ip address show' list
interfaces correctly), but looking at 'journalctl -b' seems that
if-up.d and if-down.d scripts get never executed.

*ALL* scripts, of course, not only mine...

-- 
dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG  -  Via della Bontà, 7 - 33078  -  San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

        Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
    (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Interfaces startup and ip-up.d scripts...

2018-05-15 Thread dORSY
 I suggest using post-up / pre-down hooks in interfaces. It makes sure that the 
interfaces are actually up before the commands get executed.

On Tuesday, 15 May 2018, 15:26:49 CEST, Marco Gaiarin  
wrote:  
 
 
I've to setup a little PVE server in a private but hostile network, and
i've only an IP available, so i was forced to assign the IP to the
phisical server, running latest proxmox, and i've setup a firewall
using my hand-made scripts.

I've put the script, as usual with debian, in /etc/network/if-up.d/ and
if-down.d/, but i've found that not at every boot they get started.

So, i've enabled networking debug (eg, set VERBOSE=yes in
/etc/default/networking) and found that at every boot scripts get
called with 'lo' interface, but only roughly 1 out of 10 times the
other bridge interfaces get started.

So, i got:

 May 15 10:18:25 clerk ifup[2958]: /bin/ip link set dev lo up
 May 15 10:18:25 clerk ifup[2958]: /bin/run-parts --exit-on-error --verbose 
/etc/network/if-up.d

but then:

 May 15 10:18:26 clerk ifup[2958]: /bin/run-parts --exit-on-error --verbose 
/etc/network/if-up.d

without interface name. With some more debug i discovered that is the
'--all' interface.
The strange things is that interfaces vmbr0 and vmbr1 are up, simply
the scripts get not called.

Some boot, instead:

 May 15 10:18:27 clerk ifup[4043]: /bin/ip link set dev vmbr0  up
 May 15 10:18:27 clerk ifup[4043]: /bin/run-parts --exit-on-error --verbose 
/etc/network/if-up.d

 May 15 10:18:28 clerk ifup[4043]: /bin/ip addr add 10.5.2.230/255.255.0.0 
broadcast 10.5.255.255          dev vmbr1 label vmbr1
 May 15 10:18:28 clerk ifup[4043]: /bin/ip link set dev vmbr1  up
 May 15 10:18:28 clerk ifup[4043]:  /bin/ip route add default via 10.5.1.254  
dev vmbr1 onlink
 May 15 10:18:28 clerk ifup[4043]: /bin/run-parts --exit-on-error --verbose 
/etc/network/if-up.d

and clearly firewalling script works.


For now, i've put 'systemctl restart networking' in /etc/rc.local.

My /etc/network/interfaces is rather simple:

 auto lo
 iface lo inet loopback
 
 iface enp2s0f0 inet manual
 
 iface enp2s0f1 inet manual
     ethernet-autoneg on
     link-speed 100
     link-duplex full
     hardware-dma-ring-tx 18
     offload-tso off
     offload-gso off
     offload-gro off
 
 auto vmbr0
 iface vmbr0 inet static
     address  10.99.25.254
    netmask  255.255.252.0
     bridge_ports enp2s0f0 
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes

 auto vmbr1
 iface vmbr1 inet static
    address  10.5.2.230
    netmask  255.255.0.0
    gateway  10.5.1.254
    bridge_ports enp2s0f1
    bridge_stp off
    bridge_fd 0


There's something i can do to fix this? Thanks.

-- 
dott. Marco Gaiarin                        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG  -  Via della Bontà, 7 - 33078  -  San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it  t +39-0434-842711  f +39-0434-842797

        Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
    (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Restoring files - /etc/pve not mounted after cluster create fail

2018-02-24 Thread dORSY
You can dd a block device into an image and/or another block device; this is 
even possible over ssh/the network.
An LVM volume is a block device.
You can mount filesystems (from partitions, block devices or files, as Linux 
can create an fs on lots of things), not "disks".
What you basically do with Proxmox is create an LVM volume and pass it to a 
VM as a disk (block device). Then you partition these inside the VM.

Hope it helps.
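
For example, streaming an LVM-backed VM disk to another host over ssh could
look like this (a sketch; device paths and the hostname are placeholders,
and the target volume must already exist and be at least as large):

dd if=/dev/pve/vm-100-disk-1 bs=1M | ssh root@newhost 'dd of=/dev/pve/vm-100-disk-1 bs=1M'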
 

On Saturday, 24 February 2018, 16:21:17 CET, Gregor Burck 
 wrote:  
 
 Sorry but how should I do a backup when the host doesn't work correctly?

Or like this:

with dd make an image file from the volume:

dd if=/dev/dm-6 of=/root/vm-100-disk-1 bs=1M

Then move the file to the new host and play it on the new host back?

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.1 - Intel <-> AMD migration crash with Debian 9

2018-02-02 Thread dorsy

I can only quote the guy who maintains the stable Linux kernel releases:
"Conclusion

Again, update your kernels, don’t delay, and don’t stop. The updates to 
resolve these problems will be continuing to come for a long period of 
time. Also, there are still lots of other bugs and security issues being 
resolved in the stable and LTS kernel releases that are totally 
independent of these types of issues, so keeping up to date is always a 
good idea.


Right now, there are a lot of very overworked, grumpy, sleepless, and 
just generally pissed off kernel developers working as hard as they can 
to resolve these issues that they themselves did not cause at all. 
Please be considerate of their situation right now. They need all the 
love and support and free supply of their favorite beverage that we can 
provide them to ensure that we all end up with fixed systems as soon as 
possible."


source: http://kroah.com/log/blog/2018/01/06/meltdown-status/

On 2018-02-02 13:10, Uwe Sauter wrote:

On 02.02.2018 at 13:02, Eneko Lacunza wrote:

Hi,

On 02/02/18 at 12:59, Uwe Sauter wrote:

This is a very important message for all users of Proxmox. Is there any
announcement on the lists for it?

This kernel is already quite old and you should always install the latest 
packages anyway. So no, there is no extra information
besides the well known sources about each single bugfix.

Don't know what you consider "quite old", but our servers were last updated on 
15th january 2018. I really thought that we were on
bleeding edge versions... :-)

Also, it doesn't seem reasonable to think that users will be checking daily? 
for kernel updates and installing them *and
rebooting* the server... :)

For checking, that's what monitoring software is for… I have Nagios checks 
that keep me informed if there are any new packages
available.

Regarding reboots: well, that's why you run a cluster, so you are able to reboot 
hosts without interruption of the services provided
by VMs…

Sure, but some user's don't have shared storage, or have just one server.

Also, you just always install the latest available versions immediately, so 
that... you also get the broken versions like this
kernel? Very good for maintaining services provided by VMs... ;)

Cheers



I never said that I install everything the second it is available, just that 
Nagios keeps me informed.

And then again either you use the community repository and get the latest and 
greatest or you pay for the enterprise repo where
things might be more stable… But especially in troubled times like this January 
I don't see a point to wait… but that's just my 2
cents.

And I didn't have trouble with any kernel update so far since I started using 
Proxmox early last year.

Regards,

Uwe
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Black screen installing Proxmox 5.1 (iso release 3)

2018-02-01 Thread dORSY
Can you boot it with a Debian installer?
If you can, then you are able to install Debian and install the Proxmox 
packages via the repositories.
However, you won't be able to install the system to ZFS out of the box 
(if that is the plan).
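
On a plain Debian 9 (stretch) install this boils down to roughly the
following (a sketch of the steps from the "Install Proxmox VE on Debian
Stretch" wiki page; double-check the key and repository line there):

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi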
 

On Thursday, 1 February 2018, 20:44:27 CET, Yusibel González Pérez 
 wrote:  
 
 I am trying to install Proxmox 5.1 (iso release 3) on a Huawei RH2285H
V2 server. Booting from DVD, I choose "Install Proxmox VE", the installation
begins, and after the message "waiting for /dev ... " the screen goes
black and I can't do anything.
I tried to change kernel options: nomodeset, apci=off, noapci,
gfxpayload... and always obtain a black screen.

Any solution?

thanks
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Loosing network connectivity in a VM

2018-01-02 Thread dORSY
What does "stops responding" mean?
Is there anything in the host's logs (bridge/networking related)? 
What do you see in the VMs when it's happening? Is the interface up? Is there 
an IP? What are the routes?

We had, for example, this bug for a while (though it only produced warnings for me):

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1715609
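
When it happens, a quick look inside the affected VM along those lines could
be (eth0 is a placeholder for the VirtIO interface in question):

ip -br link show       # is the interface up at all?
ip -br addr show       # does it still hold its address?
ip route show          # is the default route still in place?
ip -s link show eth0   # RX/TX error and drop counters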
 

On Tuesday, 2 January 2018, 19:28:30 CET, Mark Schouten  
wrote:  
 
 I just had it again. Toggling the 'disconnect' checkbox in Proxmox 'fixes' the 
issue.


I do not use bonding. It is as simple as it gets. A bridge on a 10Gbit 
interface.


Do you have a link about these virtio issues in 4.10 ?



Kind regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 From:  Alexandre DERUMIER  
 To:  proxmoxve  
 Sent:  2-1-2018 17:49 
 Subject:  Re: [PVE-User] Loosing network connectivity in a VM 

Try to upgrade to kernel 4.13, they are known virtio bug in 4.10.
(no sure it's related, but it could help)

do you use bonding on your host ? if yes, which mode ?

- Original Message -
From: "Mark Schouten" 
To: "proxmoxve" 
Sent: Tuesday, 2 January 2018 13:41:43
Subject: [PVE-User] Loosing network connectivity in a VM

Hi, 

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000. 

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly. 

The Virtio-interface in question is the busiest interface, but at the moment 
the the interface stops responding, it's not busy at all. 

tcpdump shows me ARP-traffic going out, but nothing coming back. 

I experienced this with other VM's on this (physical) machine as, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue. 

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it? 

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve) 

# info version 
2.9.0pve-qemu-kvm_2.9.0-4 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PCI Pass-Through question

2017-11-29 Thread dORSY
A good starting point would be:

https://pve.proxmox.com/wiki/Pci_passthrough
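
In short, that page walks through enabling VT-d/AMD-Vi in the BIOS and adding
intel_iommu=on (or amd_iommu=on) to the kernel command line; afterwards you
can verify that the IOMMU is active with something like:

dmesg | grep -e DMAR -e IOMMU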



On Wednesday, 29 November 2017, 11:11:42 CET, Lonnie Cumberland 
 wrote: 
...
I am still very new to Proxmox VE and have been looking for some
documentation on doing PCI pass-through for a VM.
...
Any ideas on where to find some type of documentation?

Thanks and have a great day,
Lonnie

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] copy paste

2017-11-02 Thread dorsy
You can ssh to the Proxmox node, which is a Linux host, and from there you 
can ssh to any of your VMs.


The only thing you need is an IP from the "VM-only" network.

The thing is that modern Linux systems do not initialize a "text" 
console, but use a graphical local console.


You can have a serial console in KVM.

More info: https://pve.proxmox.com/wiki/Serial_Terminal
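
The short version, for a hypothetical VM 100 (see the wiki page above for
details):

qm set 100 -serial0 socket      # add a serial port to the VM
# inside the guest: enable a getty on it, e.g.
#   systemctl enable --now serial-getty@ttyS0.service
qm terminal 100                 # attach from the node; copy/paste then works
                                # in your local terminal emulator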

On 2017-11-02 15:59, mj wrote:


On 11/02/2017 03:54 PM, dorsy wrote:

I'd suggest using ssh for console.
So you can copy/paste at your local terminal emulator.


Yeah I do that, but sometimes I need to test things in an isolated 
VMs-only network, and in that case I cannot simply ssh into a VM.


In those cases I can only work via the console, and copying/pasting 
would help SO much.


But guessing from these answers, copy/paste only works under X?

MJ
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] copy paste

2017-11-02 Thread dorsy

I'd suggest using ssh for the console.
Then you can copy/paste in your local terminal emulator.


On 2017-11-02 15:50, mj wrote:



On 11/02/2017 03:13 PM, Martin Maurer wrote:

Works here, tested with a default Debian Stretch (Gnome).
Seems you do not run any x/desktop in your guest?


True. Is it only supported under x?

Nothing possible onder a text only (server) install?

MJ
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Modify sender email address for mail notification

2017-10-27 Thread dorsy

Check Datacenter/Options on the GUI.


On 2017-10-27 10:27, Martin LEUSCH wrote:

Hi,

I configured postfix to rewrite the root address for email from and to root, 
as I always do for Linux servers.


In vzdump notifications, addresses are correctly rewritten, but for HA 
manager notifications the sender is still root@$hostname. What is the 
difference between these kinds of notifications?


I also modified "email_from" in datacenter.cfg but this has no effect; 
is there something to do to apply this setting?


Thanks,

Martin

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel Errors

2017-09-25 Thread dORSY
 Just tested, the 4.10.17-23 kernel from pve-testing indeed fixes the problem :)

On Monday, 25 September 2017, 14:26:52 CEST, Daniel  
wrote:  
 
 Sounds perfect for me ;)


-- 
Regards,
 
Daniel

On 25.09.17 at 08:18, "pve-user on behalf of Fabian Grünbichler" 
wrote:

    On Sun, Sep 24, 2017 at 05:11:11PM +, Daniel wrote:
    > Does it have any network trouble, or is it working fine and you just 
see that error in the kernel log?
    
    it is just a warning, will be fixed with the next round of kernel updates.
    
    ___
    pve-user mailing list
    pve-user@pve.proxmox.com
    https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
    

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel Errors

2017-09-24 Thread dORSY
 I also have these messages.
They must be related to some kernel net driver regression/bug.
As Proxmox uses its own Ubuntu-based kernel, this bug seems to be relevant:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1715609
Sometimes disabling gso/gro/lro-like features on the network devices can 
help, but I tried many of them with no success yet. With:
ethtool -K ethX lro off
and check with: ethtool -k ethX

Let's hope ubuntu/proxmox guys fix it soon.
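
To keep such a toggle across reboots, it can go into /etc/network/interfaces
as a post-up hook on the physical NIC (a sketch; eth0 is a placeholder):

iface eth0 inet manual
    post-up /sbin/ethtool -K $IFACE gso off gro off tso off || true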

On Sunday, 24 September 2017, 16:40:03 CEST, Daniel  
wrote:  
 
 Hi there,

I run a cluster of 18 Proxmox hosts on the latest version with the latest 
updates installed.

Could someone tell me what this kernel message here means?
After I changed network settings I have this on all hosts.



[ 4675.285311] [ cut here ]
[ 4675.285314] WARNING: CPU: 2 PID: 1972 at net/core/dev.c:2576 
skb_warn_bad_offload+0xd1/0x120
[ 4675.285315] igb: caps=(0x003000214bb3, 0x) len=1714 
data_len=1672 gso_size=1480 gso_type=2 ip_summed=0
[ 4675.285316] Modules linked in: nfsv3 nfs_acl nfs lockd grace fscache veth 
ip_set ip6table_filter ip6_tables binfmt_misc iptable_filter softdog 
nfnetlink_log nfnetlink nls_iso8859_1 dm_thin_pool dm_persistent_data 
dm_bio_prison dm_bufio libcrc32c ipmi_ssif intel_rapl sb_edac edac_core 
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass ast ttm 
drm_kms_helper mei_me drm i2c_algo_bit crct10dif_pclmul mei fb_sys_fops lpc_ich 
syscopyarea sysfillrect crc32_pclmul snd_pcm ghash_clmulni_intel snd_timer pcbc 
sysimgblt aesni_intel aes_x86_64 snd soundcore joydev input_leds crypto_simd 
glue_helper cryptd intel_cstate intel_rapl_perf pcspkr ioatdma ipmi_si 
ipmi_devintf ipmi_msghandler shpchp mac_hid wmi acpi_pad acpi_power_meter 
vhost_net vhost macvtap macvlan ib_iser rdma_cm iw_cm ib_cm ib_core
[ 4675.285351]  configfs iscsi_tcp libiscsi_tcp libiscsi sunrpc 
scsi_transport_iscsi ip_tables x_tables autofs4 btrfs xor raid6_pq hid_generic 
usbmouse usbkbd usbhid hid megaraid_sas i2c_i801 ahci libahci igb(O) dca ptp 
pps_core fjes
[ 4675.285358] CPU: 2 PID: 1972 Comm: corosync Tainted: G        W  O    
4.10.17-3-pve #1
[ 4675.285359] Hardware name: Supermicro Super Server/X10DRW-i, BIOS 2.0b 
04/13/2017
[ 4675.285359] Call Trace:
[ 4675.285362]  dump_stack+0x63/0x81
[ 4675.285363]  __warn+0xcb/0xf0
[ 4675.285364]  warn_slowpath_fmt+0x5f/0x80
[ 4675.285365]  skb_warn_bad_offload+0xd1/0x120
[ 4675.285367]  __skb_gso_segment+0x181/0x190
[ 4675.285368]  validate_xmit_skb+0x14f/0x2a0
[ 4675.285369]  validate_xmit_skb_list+0x43/0x70
[ 4675.285371]  sch_direct_xmit+0x16b/0x1c0
[ 4675.285372]  __dev_queue_xmit+0x477/0x680
[ 4675.285374]  ? dev_queue_xmit+0x10/0x20
[ 4675.285375]  dev_queue_xmit+0x10/0x20
[ 4675.285376]  br_dev_queue_push_xmit+0x7e/0x160
[ 4675.285377]  br_forward_finish+0x3d/0xb0
[ 4675.285378]  ? br_fdb_external_learn_del+0x120/0x120
[ 4675.285379]  __br_forward+0x14a/0x1e0
[ 4675.285380]  ? __kmalloc_reserve.isra.34+0x31/0x90
[ 4675.285381]  br_forward+0xa3/0xb0
[ 4675.285382]  br_dev_xmit+0x1f9/0x2b0
[ 4675.285383]  dev_hard_start_xmit+0xa3/0x1f0
[ 4675.285384]  __dev_queue_xmit+0x5ae/0x680
[ 4675.285385]  dev_queue_xmit+0x10/0x20
[ 4675.285387]  ip_finish_output2+0x27a/0x370
[ 4675.285388]  ip_finish_output+0x1c7/0x270
[ 4675.285389]  ip_output+0x76/0xe0
[ 4675.285390]  ? ip_forward_options+0x1b0/0x1b0
[ 4675.285391]  ip_local_out+0x35/0x40
[ 4675.285392]  ip_send_skb+0x19/0x40
[ 4675.285393]  udp_send_skb+0xae/0x280
[ 4675.285394]  udp_sendmsg+0x332/0x9e0
[ 4675.285395]  ? ip_reply_glue_bits+0x50/0x50
[ 4675.285397]  ? aa_sock_msg_perm+0x61/0x150
[ 4675.285398]  inet_sendmsg+0x31/0xb0
[ 4675.285399]  sock_sendmsg+0x38/0x50
[ 4675.285400]  ___sys_sendmsg+0x2c2/0x2d0
[ 4675.285419]  ? ep_send_events_proc+0x87/0x1a0
[ 4675.285421]  ? ep_poll+0x191/0x350
[ 4675.285422]  ? __fget_light+0x25/0x60
[ 4675.285423]  __sys_sendmsg+0x54/0x90
[ 4675.285425]  SyS_sendmsg+0x12/0x20
[ 4675.285427]  entry_SYSCALL_64_fastpath+0x1e/0xad
[ 4675.285428] RIP: 0033:0x7f5bec484e90
[ 4675.285428] RSP: 002b:7ffc4bba4a30 EFLAGS: 0293 ORIG_RAX: 
002e
[ 4675.285430] RAX: ffda RBX: 0016 RCX: 7f5bec484e90
[ 4675.285430] RDX: 4000 RSI: 7ffc4bba4a80 RDI: 000b
[ 4675.285431] RBP: 0008 R08:  R09: 
[ 4675.285431] R10: 000188de2956 R11: 0293 R12: 7f5bed2a6cf8
[ 4675.285432] R13: 7f5bed2b701e R14: 7f5bed284010 R15: 000c
[ 4675.285446] ---[ end trace 5d57510b90b28d5f ]---


--
Regards

Daniel
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmox 5 - replication fails

2017-07-12 Thread dorsy


# cat /etc/pve/replication.cfg
local: 105-0
target ns302695
rate 10
schedule */2:00

local: 103-0
target ns3511723
rate 11
schedule */20

local: 109-0
target ns3511723
rate 10

local: 102-0
target ns302695
rate 10
schedule 22:30

local: 107-0
target ns302695
rate 10

local: 100-0
target ns302695
rate 10
schedule */2:00

cat /var/lib/pve-manager/pve-replication-state.json
{"103":{"local/ns3511723":{"storeid_list":["local-zfs"],"fail_count":0,"last_try":1499859600,"last_sync":1499859600,"last_iteration":1499859600,"last_node":"ns302695","duration":4.482678}},"109":{"local/ns3511723":{"fail_count":0,"storeid_list":["local-zfs"],"last_sync":1499859000,"last_try":1499859000,"last_iteration":1499859000,"last_node":"ns302695","duration":7.828846}}}

On the failed node (at the moment, I had failures from both sides):
# cat /var/lib/pve-manager/pve-replication-state.json
{"105":{"local/ns302695":{"last_iteration":1499853601,"fail_count":0,"duration":32.107092,"last_node":"ns3511723","storeid_list":["local-zfs"],"last_try":1499853633,"last_sync":1499853633}},"102":{"local/ns302695":{"last_try":1499805001,"last_sync":1499805001,"last_node":"ns3511723","duration":126.81862,"storeid_list":["local-zfs"],"last_iteration":1499805001,"fail_count":0}},"107":{"local/ns302695":{"fail_count":0,"last_iteration":1499859000,"duration":3.511844,"last_node":"ns3511723","storeid_list":["local-zfs"],"last_try":1499859000,"last_sync":1499859000}},"100":{"local/ns302695":{"error":"command 
'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - 
-with-snapshots 1 -snapshot __replicate_100-0_1499858220__ | 
/usr/bin/cstream -t 1000 | /usr/bin/ssh -o 'BatchMode=yes' -o 
'HostKeyAlias=ns302695' r...@ip.of.tar.get -- pvesm import 
local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 
255","fail_count":5,"last_iteration":1499858220,"duration":2.493542,"last_node":"ns3511723","storeid_list":["local-zfs"],"last_try":1499858220,"last_sync":1499846406}}}
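
(The state file is a single JSON line; assuming python3 is present on the
node, it can be pretty-printed for easier reading:)

python3 -m json.tool /var/lib/pve-manager/pve-replication-state.json   # pretty-print the replication state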


But I already knew all of this from the API :)

pve:/> get nodes/ns3511723/replication/100-0/log
200 OK
[
   {
  "n" : 1,
  "t" : "2017-07-12 13:17:00 100-0: start replication job"
   },
   {
  "n" : 2,
  "t" : "2017-07-12 13:17:00 100-0: guest => VM 100, running => 12279"
   },
   {
  "n" : 3,
  "t" : "2017-07-12 13:17:00 100-0: volumes => local-zfs:vm-100-disk-1"
   },
   {
  "n" : 4,
  "t" : "2017-07-12 13:17:01 100-0: create snapshot 
'__replicate_100-0_1499858220__' on local-zfs:vm-100-disk-1"

   },
   {
  "n" : 5,
  "t" : "2017-07-12 13:17:01 100-0: full sync 
'local-zfs:vm-100-disk-1' (__replicate_100-0_1499858220__)"

   },
   {
  "n" : 6,
  "t" : "2017-07-12 13:17:03 100-0: delete previous replication 
snapshot '__replicate_100-0_1499858220__' on local-zfs:vm-100-disk-1"

   },
   {
  "n" : 7,
  "t" : "2017-07-12 13:17:03 100-0: end replication job with error: 
command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - 
-with-snapshots 1 -snapshot __replicate_100-0_1499858220__ | 
/usr/bin/cstream -t 1000 | /usr/bin/ssh -o 'BatchMode=yes' -o 
'HostKeyAlias=ns302695' r...@ip.of.tar.get -- pvesm import 
local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255"

   }
]

pve:/> get nodes/ns3511723/replication/100-0/status
200 OK
{
   "duration" : 2.493542,
   "error" : "command 'set -o pipefail && pvesm export 
local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot 
__replicate_100-0_1499858220__ | /usr/bin/cstream -t 1000 | 
/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=ns302695' 
r...@ip.of.tar.get -- pvesm import local-zfs:vm-100-disk-1 zfs - 
-with-snapshots 1' failed: exit code 255",

   "fail_count" : 5,
   "guest" : "100",
   "id" : "100-0",
   "jobnum" : "0",
   "last_sync" : 1499846406,
   "last_try" : 1499858220,
   "next_sync" : 1499860020,
   "rate" : 10,
   "schedule" : "*/2:00",
   "target" : "ns302695",
   "type" : "local",
   "vmtype" : "qemu"
}
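
(For reference, the interactive API shell output above should also be
obtainable non-interactively with pvesh, e.g.:)

pvesh get /nodes/ns3511723/replication/100-0/status   # job status incl. fail_count and last error
pvesh get /nodes/ns3511723/replication/100-0/log      # per-run log of the job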

Also, I have set a throttle of 10 MB/s for the replication jobs, which is only
a portion of the available bandwidth between the nodes, so that should not be
an issue.
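
(Exit code 255 is normally ssh's own exit status rather than pvesm's, so a
first check could be the non-interactive SSH leg that the job uses; the target
address below is a placeholder:)

ssh -o BatchMode=yes -o HostKeyAlias=ns302695 root@ip.of.target true; echo $?   # 0 means the SSH path works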


On 2017-07-12 13:39, Dominik Csapak wrote:

Hi,

I reply here to avoid confusion in the other thread.

Can you post the content of these two files:

/etc/pve/replication.cfg
/var/lib/pve-manager/pve-replication-state.json (of the source node)

?


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmox 5 - replication fails

2017-07-12 Thread dorsy

Another strange thing is that the failing jobs are running outside their schedule.

Jul 12 12:00:03 ns pvesr[12049]: total estimated size is 119M
Jul 12 12:00:03 ns pvesr[12049]: TIMESENT   SNAPSHOT
Jul 12 12:00:04 ns pvesr[12049]: 12:00:04   9.18M 
rpool/data/vm-100-disk-1@__replicate_100-0_1499853601__

...
Jul 12 12:00:14 ns pvesr[12049]: 12:00:14113M 
rpool/data/vm-100-disk-1@__replicate_100-0_1499853601__


There are no errors in the logs; it seemingly finishes just fine. The schedule
is */2:00.

However, after approx. 5 minutes it wants to send the whole disk:

Jul 12 12:06:03 ns pvesr[13511]: send from @ to 
rpool/data/vm-100-disk-1@__replicate_100-0_1499853960__ estimated size 
is 8.64G



Is there any way to get more debug info from pvesr or the scheduling of 
the replication?
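
(A sketch of what might give a bit more detail, assuming the pvesr CLI as
shipped with PVE 5; option names are from memory:)

pvesr status                      # current state, fail_count and next scheduled run per job
pvesr run --id 100-0 --verbose    # run a single job by hand with verbose output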

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmox 5 - replication fails

2017-07-10 Thread dorsy

Same error again, nothing strange until 3:20 AM:

Jul 10 03:00:02 xxnodenamexx pvesr[32023]: send from 
@__replicate_103-0_1499647500__ to 
rpool/data/vm-103-disk-1@__replicate_103-0_1499648400__ estimated size 
is 7.27M

Jul 10 03:00:02 xxnodenamexx pvesr[32023]: total estimated size is 7.27M
Jul 10 03:00:02 xxnodenamexx pvesr[32023]: TIMESENT SNAPSHOT
Jul 10 03:00:02 xxnodenamexx pvesr[32023]: 
rpool/data/vm-103-disk-1@__replicate_103-0_1499647500__ name 
rpool/data/vm-103-disk-1@__replicate_103-0_1499647500__-
Jul 10 03:00:03 xxnodenamexx pvesr[32023]: 03:00:03   3.10M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499648400__
Jul 10 03:00:06 xxnodenamexx pvesr[32023]: send from 
@__replicate_109-0_1499647505__ to 
rpool/data/vm-109-disk-1@__replicate_109-0_1499648405__ estimated size 
is 24.0M

Jul 10 03:00:06 xxnodenamexx pvesr[32023]: total estimated size is 24.0M
Jul 10 03:00:06 xxnodenamexx pvesr[32023]: TIMESENT SNAPSHOT
Jul 10 03:00:06 xxnodenamexx pvesr[32023]: 
rpool/data/vm-109-disk-1@__replicate_109-0_1499647505__ name 
rpool/data/vm-109-disk-1@__replicate_109-0_1499647505__-
Jul 10 03:00:07 xxnodenamexx pvesr[32023]: 03:00:07   17.9M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499648405__
Jul 10 03:15:02 xxnodenamexx pvesr[2801]: send from 
@__replicate_103-0_1499648400__ to 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__ estimated size 
is 63.9M

Jul 10 03:15:02 xxnodenamexx pvesr[2801]: total estimated size is 63.9M
Jul 10 03:15:02 xxnodenamexx pvesr[2801]: TIMESENT SNAPSHOT
Jul 10 03:15:03 xxnodenamexx pvesr[2801]: 
rpool/data/vm-103-disk-1@__replicate_103-0_1499648400__ name 
rpool/data/vm-103-disk-1@__replicate_103-0_1499648400__-
Jul 10 03:15:03 xxnodenamexx pvesr[2801]: 03:15:03   6.00M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__
Jul 10 03:15:04 xxnodenamexx pvesr[2801]: 03:15:04   13.5M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__
Jul 10 03:15:05 xxnodenamexx pvesr[2801]: 03:15:05   13.9M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__
Jul 10 03:15:06 xxnodenamexx pvesr[2801]: 03:15:06   17.2M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__
Jul 10 03:15:07 xxnodenamexx pvesr[2801]: 03:15:07   49.7M 
rpool/data/vm-103-disk-1@__replicate_103-0_1499649300__
Jul 10 03:15:24 xxnodenamexx pvesr[2801]: send from 
@__replicate_109-0_1499648405__ to 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__ estimated size 
is 169M

Jul 10 03:15:24 xxnodenamexx pvesr[2801]: total estimated size is 169M
Jul 10 03:15:24 xxnodenamexx pvesr[2801]: 
rpool/data/vm-109-disk-1@__replicate_109-0_1499648405__ name 
rpool/data/vm-109-disk-1@__replicate_109-0_1499648405__-

Jul 10 03:15:24 xxnodenamexx pvesr[2801]: TIMESENT SNAPSHOT
Jul 10 03:15:25 xxnodenamexx pvesr[2801]: 03:15:25   18.6M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:26 xxnodenamexx pvesr[2801]: 03:15:26   49.3M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:27 xxnodenamexx pvesr[2801]: 03:15:27   69.9M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:28 xxnodenamexx pvesr[2801]: 03:15:28   74.0M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:29 xxnodenamexx pvesr[2801]: 03:15:29   80.3M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:30 xxnodenamexx pvesr[2801]: 03:15:30   85.5M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:31 xxnodenamexx pvesr[2801]: 03:15:31   92.9M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:32 xxnodenamexx pvesr[2801]: 03:15:32101M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:33 xxnodenamexx pvesr[2801]: 03:15:33102M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:34 xxnodenamexx pvesr[2801]: 03:15:34126M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:35 xxnodenamexx pvesr[2801]: 03:15:35141M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:36 xxnodenamexx pvesr[2801]: 03:15:36141M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:37 xxnodenamexx pvesr[2801]: 03:15:37142M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:38 xxnodenamexx pvesr[2801]: 03:15:38142M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:39 xxnodenamexx pvesr[2801]: 03:15:39151M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:40 xxnodenamexx pvesr[2801]: 03:15:40167M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__
Jul 10 03:15:41 xxnodenamexx pvesr[2801]: 03:15:41169M 
rpool/data/vm-109-disk-1@__replicate_109-0_1499649323__



And then it wants to send from "@" instead of "@__replicate_103-0_[time]__".
It is also strange that the schedule is */15, yet it starts at 3:20, right
after the 3:15 run.



Jul 10 03:20:05 xxnodenamexx pvesr[4021]: send from @ to 
rpool/data/vm

Re: [PVE-User] proxmox 5 - replication fails

2017-07-10 Thread dorsy

Sorry, I did not mean to reply here.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] proxmox 5 - replication fails

2017-07-09 Thread dORSY
:03:09 NameOfNode1 pvesr[10151]: total estimated size is 19.5G
Jul 09 11:03:09 NameOfNode1 pvesr[10151]: TIMESENT   SNAPSHOT
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: rpool/data/vm-103-disk-1name  
  rpool/data/vm-103-disk-1-
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: volume 'rpool/data/vm-103-disk-1' 
already exists
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: warning: cannot send 
'rpool/data/vm-103-disk-1@__replicate_103-0_1499590980__': Broken pipe
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: cannot send 
'rpool/data/vm-103-disk-1': I/O error
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: command 'zfs send -Rpv -- 
rpool/data/vm-103-disk-1@__replicate_103-0_1499590980__' failed: exit code 1
Jul 09 11:03:10 NameOfNode1 pvesr[10151]: send/receive failed, cleaning up 
snapshot(s)..
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: send from @ to 
rpool/data/vm-109-disk-1@__replicate_109-0_1499591040__ estimated size is 28.4G
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: total estimated size is 28.4G
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: TIMESENT   SNAPSHOT
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: rpool/data/vm-109-disk-1name  
  rpool/data/vm-109-disk-1-
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: volume 'rpool/data/vm-109-disk-1' 
already exists
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: warning: cannot send 
'rpool/data/vm-109-disk-1@__replicate_109-0_1499591040__': Broken pipe
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: cannot send 
'rpool/data/vm-109-disk-1': I/O error
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: command 'zfs send -Rpv -- 
rpool/data/vm-109-disk-1@__replicate_109-0_1499591040__' failed: exit code 1
Jul 09 11:04:02 NameOfNode1 pvesr[10347]: send/receive failed, cleaning up 
snapshot(s)..
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: send from @ to 
rpool/data/vm-103-disk-1@__replicate_103-0_1499591580__ estimated size is 19.5G
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: total estimated size is 19.5G
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: TIMESENT   SNAPSHOT
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: rpool/data/vm-103-disk-1name  
  rpool/data/vm-103-disk-1-
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: volume 'rpool/data/vm-103-disk-1' 
already exists
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: warning: cannot send 
'rpool/data/vm-103-disk-1@__replicate_103-0_1499591580__': Broken pipe
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: cannot send 
'rpool/data/vm-103-disk-1': I/O error
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: command 'zfs send -Rpv -- 
rpool/data/vm-103-disk-1@__replicate_103-0_1499591580__' failed: exit code 1
Jul 09 11:13:03 NameOfNode1 pvesr[11914]: send/receive failed, cleaning up 
snapshot(s)..

After that it really wants to transfer the whole disks again and again, but it
fails to do that. There seems to be no way to recover from this other than
destroying the whole ZFS volume on the target side and re-transferring it
(a sketch of that follows below).
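
(A sketch of that recovery path, using the dataset name from the log above;
zfs destroy is irreversible, so first check the name and whether any
__replicate_* snapshot is still present on the target:)

On the target node:
zfs list -r -t snapshot rpool/data/vm-103-disk-1   # any surviving __replicate_* snapshots?
zfs destroy -r rpool/data/vm-103-disk-1            # remove the stale copy and its snapshots, then let the next run do a full sync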

Please help me get around this problem or to find out what is going wrong there.

Regards,
dorsy
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user