Re: [PVE-User] Sockets vs. Cores

2014-12-30 Thread Wolfgang Link
Hi Sebastian,

please write in English; this is an English-language list.

Normally it depends on your host system.
If you have only one socket, it is best to increase the cores and leave the
sockets at 1.

The socket setting is only relevant if you have a NUMA machine.

So, to your question: 1 socket with 3 cores would be the next step down.
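For example, a minimal sketch on the CLI (VMID 100 is a hypothetical example):

# give VM 100 one socket with 3 cores (3 vCPUs in total)
qm set 100 -sockets 1 -cores 3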

 On December 30, 2014 at 2:15 PM sebast...@debianfan.de wrote:


 Hello & good day everyone,

 so far I have been using 2 sockets & 2 cores for the VMs on one machine.

 This is presumably too much allocated & unused capacity for the machine.

 What would be the next lower performance level - 1 socket & 2 cores, or 2
 sockets & 1 core?

 Regards & sorry for the noob question ;-)

 Sebastian

Regards

Wolfgang
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox for distributed environment

2015-02-18 Thread Wolfgang Link
Proxmox VE currently has no global view across clusters; you must manage each
cluster by itself.


Regards

Wolfgang

Am 18.02.15 um 10:13 schrieb Andrew (Mac) McIver:

Is Proxmox VE the right tool for a star layout of multiple small 
virtualization clusters, geographically distributed, but with central single-pane 
management?

The hypervisors (two per site) won't have network or storage shared with other 
sites. The current solution is non-managed standalone hypervisor hosts (with 
internal disks) at around 1000 geographically separated locations.

I couldn't find any related references on the website, hence the question here.




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Qemu guest agent

2015-02-20 Thread Wolfgang Link

Freeze is working with wheezy-backports, if the underlying FS supports it!
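For reference, a rough sketch of enabling the agent (VMID 100 is a hypothetical example; the guest package name is the Debian one):

# on the PVE host: enable the guest agent option for VM 100
qm set 100 -agent 1
# inside the guest: install the agent so guest-shutdown and fsfreeze work
apt-get install qemu-guest-agent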
On 02/20/2015 09:29 AM, Alexandre DERUMIER wrote:

What improvements does offer using a guest agent?

I think currently

guest-shutdown to shutdown the guest

and

guest-fsfreeze-freeze, to freeze the filesystem for snapshots



Does the wheezy-backports package qemu-guest-agent work OK with PVE 3.4?

Shutdown, I'm sure it's OK.
Freeze, I'm not sure.


- Original Mail -
From: Eneko Lacunza elacu...@binovo.es
To: proxmoxve pve-user@pve.proxmox.com
Sent: Friday, 20 February 2015 08:49:51
Subject: [PVE-User] Qemu guest agent

Hi all,

I noticed that PVE 3.4 supports activating qemu guest agent.

What improvements does offer using a guest agent?

Does the wheezy-backports package qemu-guest-agent work OK with PVE 3.4?

Thanks a lot
Eneko




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Scaling up Proxmox VE

2015-05-11 Thread Wolfgang Link

Hi Andrew,

if you have more than 16 nodes, it can happen that the corosync ring packet is
too slow.

Therefore your cluster thinks the nodes are offline.

Regards,

Wolfgang

On 05/12/2015 06:04 AM, Andrew Thrift wrote:

Hi,

I notice the wiki mentions a soft-limit of 16 nodes.

What happens beyond this ?

We are already at 16 nodes, and looking to continue growing our cluster.

What are other people doing to scale up ?

We have considered multiple clusters, but there is currently no way to 
specify VMID ranges and UUID's are not used so we are concerned about 
getting overlapping VMID's on shared storage.



Regards,



Andrew



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE ZFS replication manager released (pve-zsync)

2015-06-30 Thread Wolfgang Link

There were some issues in 0.6.3, but they are fixed in 0.6.4:
https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4


On 06/30/2015 02:08 PM, Angel Docampo wrote:

Hi there!

Is it based on zfs send-receive? I thought it was buggy on linux... 
perhaps it was on 0.6.3?


Anyway, that's a great feature, thank you!

:)

On 30/06/15 12:19, Martin Maurer wrote:

Hi all,

We just released the brand new Proxmox VE ZFS replication manager
(pve-zsync)!

This CLI tool synchronizes your virtual machine (virtual disks and VM
configuration) or directory stored on ZFS between two servers - very
useful for backup and replication tasks.

A big Thank-you to our active community for all feedback, testing, bug
reporting and patch submissions.

Documentation
http://pve.proxmox.com/wiki/PVE-zsync

Git
https://git.proxmox.com/?p=pve-zsync.git;a=summary

Bugtracker
https://bugzilla.proxmox.com/



--


Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 114
Mob. 670.299.381




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Limited number of virtio-devices/drives

2015-07-31 Thread Wolfgang Link

Yes, the maximum is 16 Virtio disks.


On 07/30/2015 10:25 PM, Keri Alleyne wrote:

Good day,

I'm monitoring this thread: 
https://forum.proxmox.com/threads/9782-There-is-now-a-limit-of-virtio-devices-drives


Quote Originally Posted by dietmar

You can have 4 IDE disks, 14 SCSI disks, 16 VIRTIO disks and 6 SATA 
disks (= 40 disks).



Are we still limited to 16 VIRTIO disks on the recent versions of 
Proxmox VE 3.4?


Thanks.




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Lxc remote backup

2015-10-15 Thread Wolfgang Link

At the moment you have to use pct.

At the moment, the supported storages with snapshot capability are ZFS
(ZFSPoolPlugin) and RBD.
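A minimal sketch of the pct usage (VMID 100 and the snapshot name are hypothetical examples; the container must be on one of the storages above):

# create and list snapshots of container 100
pct snapshot 100 mysnap
pct listsnapshot 100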


On 10/14/2015 09:47 PM, Jérémy Carnus wrote:

Hi,

With PVE 4.0 now stable, I would just like to know which kind of tool/software
I can use if I want to make a snapshot of an LXC container and store it on a
remote server, avoiding temporarily storing everything on the local disk.


Does someone have a tool in mind for that?

Thanks
--
Jérémy Carnus



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] migration of container on zfs

2015-10-15 Thread Wolfgang Link
What happens if you use
pct resize <vmid> rootfs <size>
on the command line to set the refquota?

> Thiago Damas <tda...@gmail.com> wrote on 14 October 2015 at 20:01:
> 
> 
>   Hi,
>   I'm facing problems when migrating containers, living on zfsvols.
>   Despite the rootfs showing the right "quota" value, the "refquota"
> attribute of the ZFS dataset isn't set on the destination server.
>   Sorry for my bad English.
> 
>   Best regards,
>   Thiago
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Best Regards,
 
Wolfgang Link
 
w.l...@proxmox.com
http://www.proxmox.com

 
Proxmox Server Solutions GmbH
Kohlgasse 51/10, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] migration of container on zfs

2015-10-15 Thread Wolfgang Link



On 10/15/2015 07:15 PM, Thiago Damas wrote:

root@sm9:~# pct resize 102 rootfs +0

root@sm9:~# zfs get refquota volumes/subvol-102-disk-1
NAME   PROPERTY  VALUE SOURCE
volumes/subvol-102-disk-1  refquota  none  default

root@sm9:~# pct resize 102 rootfs +1
zfs error: cannot set property for 'volumes/subvol-102-disk-1': use 
'none' to disable quota/refquota

root@sm9:~# cat /etc/pve/lxc/102.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: teste
memory: 512
net0: 
bridge=vmbr0,gw=192.168.224.1,hwaddr=CE:F4:22:3C:C9:95,ip=192.168.224.205/24,name=eth0,type=veth 


ostype: ubuntu
rootfs: zfs:subvol-102-disk-1,size=16G
swap: 512

This is normal; your container is larger than 1 byte.

root@sm9:~# pct resize 102 rootfs 17G

root@sm9:~# zfs get refquota volumes/subvol-102-disk-1
NAME   PROPERTY  VALUE SOURCE
volumes/subvol-102-disk-1  refquota  17G   local
root@sm9:~# cat /etc/pve/lxc/102.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: teste
memory: 512
net0: 
bridge=vmbr0,gw=192.168.224.1,hwaddr=CE:F4:22:3C:C9:95,ip=192.168.224.205/24,name=eth0,type=veth 


ostype: ubuntu
rootfs: zfs:subvol-102-disk-1,size=17M
swap: 512


This is a known bug, see https://bugzilla.proxmox.com/show_bug.cgi?id=752
and it is fixed in pve-container versions later than 1.0-8.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Lxc remote backup

2015-10-15 Thread Wolfgang Link

Yes, it will be included in the GUI soon.

On 10/15/2015 09:39 PM, Jérémy Carnus wrote:

OK, but is it in your plans to add such a kind of tool?

Thanks

On 15/10/2015 04:16, Wolfgang Link wrote:

At the moment you have to use pct.

At the moment, the supported storages with snapshot capability are ZFS
(ZFSPoolPlugin) and RBD.


On 10/14/2015 09:47 PM, Jérémy Carnus wrote:

Hi,

With PVE 4.0 now stable, I would just like to know which kind of tool/software
I can use if I want to make a snapshot of an LXC container and store it on a
remote server, avoiding temporarily storing everything on the local disk.


Does someone have a tool in mind for that?

Thanks
--
Jérémy Carnus



--
Jérémy Carnus



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 4.0 released!

2015-10-07 Thread Wolfgang Link

I added a note to the upgrade and 4.x cluster wiki pages to clarify this.

On 10/07/2015 12:30 PM, Angel Docampo wrote:



On 07/10/15 06:20, Alexandre DERUMIER wrote:

It's possible to upgrade all nodes to jessie/proxmox 4.0 at the same time,
without rebooting the hosts (and without stopping the VMs).
A wiki entry would be extremely useful. I would try it myself, but I
do not have enough hardware at the moment to make the test; perhaps in
some days I can get new hardware and try it out.

--


Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 114
Mob. 670.299.381




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 4.0 and Ceph - install problem

2015-07-07 Thread Wolfgang Link
There are no official Ceph packages for Debian Jessie yet, but AFAIK they are
coming soon.


On 07/07/2015 11:23 AM, Fabrizio Cuseo wrote:

Hello there.
I am trying a 3 host cluster with PVE 4.0beta with ceph server, but when I try 
to install ceph (pveceph install -version hammer, or pveceph install -version 
firefly or pveceph install), I have this error:

The following information may help to resolve the situation:

The following packages have unmet dependencies:
  ceph : Depends: libboost-program-options1.49.0 (= 1.49.0-1) but it is not 
installable
 Depends: libboost-system1.49.0 (= 1.49.0-1) but it is not installable
 Depends: libboost-thread1.49.0 (= 1.49.0-1) but it is not installable
  ceph-common : Depends: librbd1 (= 0.94.2-1~bpo70+1) but 0.80.7-2 is to be 
installed
Depends: libboost-thread1.49.0 (= 1.49.0-1) but it is not 
installable
Depends: libudev0 (= 146) but it is not installable
Breaks: librbd1 ( 0.92-1238) but 0.80.7-2 is to be installed
E: Unable to correct problems, you have held broken packages.
command 'apt-get -q --assume-yes --no-install-recommends -o 
'Dpkg::Options::=--force-confnew' install -- ceph ceph-common gdisk' failed: 
exit code 100


Is Ceph server already supported on PVE 4.0beta ? If not, is planned in a short 
time ?

Regards, Fabrizio Cuseo







___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Change node IP addresses

2016-01-08 Thread Wolfgang Link

Hi,

Here you can find the needed information:
https://pve.proxmox.com/wiki/Separate_Cluster_Network

On 01/08/2016 08:55 AM, Frederic Van Espen wrote:

Hi,

Is it possible to change the node IP addresses without breaking the 
cluster for an extended amount of time? Are there any caveats?


I would like to move the PVE nodes in a separate dedicated network for 
better security.


Cheers,

Frederic



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Netapp export setting

2015-11-24 Thread Wolfgang Link

Hi,
Does anybody have a working PVE4 setup with a NetApp NAS and NFS?
I would need a working export setting for PVE4.


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Understanding lvm-thin

2016-06-10 Thread Wolfgang Link
Benchmarks LVM vs file.raw

Setup

Physical disk: Crucial MX200, only used by the test VM/CT
Debian 8, current version
Extra disk: 32 GB (QEMU, no cache, virtio bus)

QEMU on LVM
dd if=randfile of=/dev/vdb bs=4k
220662+0 records in
220662+0 records out
903831552 bytes (904 MB) copied, 2.58608 s, 349 MB/s

dd if=/dev/zero of=/dev/vdb bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.51501 s, 427 MB/s

LXC on LVM

dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 2.33282 s, 460 MB/s

dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.18087 s, 492 MB/s

QEMU on file.raw laying on ext4
dd if=randfile of=/dev/vdb bs=4k
220662+0 records in
220662+0 records out
903831552 bytes (904 MB) copied, 2.47066 s, 366 MB/s

root@livemig:/home/link# dd if=/dev/zero of=/dev/vdb bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.09934 s, 977 MB/s

LXC on file.raw laying on ext4
Hard to say; it uses the host cache.

root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.595112 s, 1.8 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.966619 s, 1.1 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 1.81487 s, 592 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.958734 s, 1.1 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 4.51895 s, 238 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 1.69404 s, 634 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.958381 s, 1.1 GB/s

root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.797751 s, 1.3 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.795282 s, 1.4 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.05295 s, 1.0 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.56381 s, 193 MB/s

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] LCX Migrate

2016-06-16 Thread Wolfgang Link
CRIU does not work properly with LXC containers, which is what is needed for live migration.

On 06/16/2016 03:08 PM, Daniel Eschner wrote:
> Very cool ;)
> 
> I can wait a couple of days;)
> 
> Live Migration also implemented as well?
> 
> 
>> Am 16.06.2016 um 15:06 schrieb Wolfgang Link <w.l...@proxmox.com>:
>>
>> This will come in some days.
>> The code is available in git.
>>
>>
>> On 06/16/2016 03:04 PM, Daniel Eschner wrote:
>>> Hi to all,
>>>
>>> it seems that it is not possible to migrate an offline LXC container to 
>>> another cluster member when it is located on thin-lvm:
>>>
>>> Jun 16 15:02:32 starting migration of CT 105 to node 'host07' (10.0.2.116)
>>> Jun 16 15:02:32 copy mountpoint 'rootfs' (local-lvm:vm-105-disk-1) to node 
>>> ' host07'
>>> Jun 16 15:02:32 ERROR: unable to migrate 'local-lvm:vm-105-disk-1' to 
>>> 'local-lvm:vm-105-disk-1' on host '10.0.2.116' - source type 'lvmthin' not 
>>> implemented
>>> Jun 16 15:02:32 aborting phase 1 - cleanup resources
>>> Jun 16 15:02:32 start final cleanup
>>> Jun 16 15:02:32 ERROR: migration aborted (duration 00:00:00): unable to 
>>> migrate 'local-lvm:vm-105-disk-1' to 'local-lvm:vm-105-disk-1' on host 
>>> '10.0.2.116' - source type 'lvmthin' not implemented
>>> TASK ERROR: migration aborted
>>>
>>> Is there any work around to fix that?
>>>
>>> Cheers
>>>
>>> Daniel

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] about IO Delay using openvz and zimbra

2016-06-16 Thread Wolfgang Link
Hi,

I also have a T320 with the H310 controller, and it gave me a hard time
because the Dell firmware was not working properly, so I decided to flash
the LSI firmware onto this controller.

I used ZFS with the IT mode of the Dell firmware before; now, with the LSI
firmware, it works perfectly.

On 06/17/2016 04:38 AM, Orlando Martinez Bao wrote:
> Hello friends
> 
> I am SysAdmin at the Agrarian University of Havana, Cuba.
> 
>  
> 
> I have installed Proxmox v3.4 here a cluster of seven nodes and for some
> days I am having problems with a node in the cluster which only has a
> Container with Zimbra 8.
> 
> The problem is I'm having a lot of I / O Delay and that server is very slow
> to the point that sometimes the service is down.
> 
> The server is PowerEdge T320 Dell with Intel Xeon E5-2420 12gram with 12
> cores and 1GB of disk 7200RPM 2xHDD are configured as RAID1.
> 
> I am virtualizing Zimbra 8 using a 12.04 template; it has 8 cores, 8 GB
> RAM, a 500 GB HD, and the container's storage is local storage.
> 
> Below is the pveperf output, including when the VM is not running. Look at
> the BUFFERED READS that are marked: they are very bad, and at those moments
> I have seen I/O delay up to 50%.
> 
>  
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1025079
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:1.51 MB/sec
> 
> AVERAGE SEEK TIME: 165.88 ms
> 
> FSYNCS/SECOND: 0.40
> 
> DNS EXT:   206.20 ms
> 
> DNS INT:   0.91 ms (unah.edu.cu)
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1048361
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:0.78 MB/sec
> 
> AVERAGE SEEK TIME: 283.84 ms
> 
> FSYNCS/SECOND: 0.50
> 
> DNS EXT:   206.13 ms
> 
> DNS INT:   0.89 ms (unah.edu.cu)
> 
> root@n07:~# pveperf (this was when I stopped the VM)
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1073712
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:113.04 MB/sec
> 
> AVERAGE SEEK TIME: 13.49 ms
> 
> FSYNCS/SECOND: 9.66
> 
> DNS EXT:   198.59 ms
> 
> DNS INT:   0.86 ms (unah.edu.cu)
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1024213
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:164.30 MB/sec
> 
> AVERAGE SEEK TIME: 13.61 ms
> 
> FSYNCS/SECOND: 16.34
> 
> DNS EXT:   234.75 ms
> 
> DNS INT:   0.94 ms (unah.edu.cu)
> 
>  
> 
>  
> 
> Please help me.
> 
> Best Regards
> 
> Orlando
> 
>  
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] TASK ERROR: can't rollback, more recent snapshots exist

2016-03-23 Thread Wolfgang Link
You can make a clone on top of the needed snapshot and use the clone in
your VM config instead of the normal disk.
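A rough sketch on ZFS (dataset and snapshot names are hypothetical examples):

# clone the older snapshot into a new dataset
zfs clone rpool/data/vm-100-disk-1@snap1 rpool/data/vm-100-disk-1-rollback
# then point the disk in the VM config (e.g. /etc/pve/qemu-server/100.conf) at the clone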

On 03/23/2016 06:34 AM, Lindsay Mathieson wrote:
> On 23 March 2016 at 15:24, Dietmar Maurer  wrote:
> 
>> This is a ZFS limitation - you can only rollback to latest snapshot.
> 
> 
> 
> Wow, thats a pretty major limitation. Are there any work arounds?
> 
> 
> thanks,
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OVH + Vrack + private IP + Proxmox 4

2016-04-25 Thread Wolfgang Link
Hi,

Here is a link that explains how to test multicast:

https://pve.proxmox.com/wiki/Multicast_notes#Testing_multicast
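For example, the Multicast notes page suggests running something like this on all nodes at the same time (node1/node2/node3 are placeholder hostnames; treat the exact flags as an assumption and check the page above):

# send/receive multicast test traffic between all cluster nodes
omping -c 10000 -i 0.001 -F -q node1 node2 node3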


These two links explain how to set up a cluster and the network:

https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster

https://pve.proxmox.com/wiki/Separate_Cluster_Network

On 04/25/2016 08:02 AM, Régis Houssin wrote:
> Hello
> 
> I have 2 dedicated servers at OVH (french provider) with vrack, and I
> can not talk to both servers with private IP addresses (172.16.0.0/12)
> does anyone know how with an example? (/etc/network/interfaces)
> and you know, if multicast is enabled on vrack?
> 
> Thank you for your help
> 
> Cordialement,
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] unstoppable CT

2016-04-25 Thread Wolfgang Link
Is it possible to kill the process with the kill command?

On 04/25/2016 07:53 AM, Régis Houssin wrote:
> Hello
> 
> I have had a problem in recent weeks: a CT (LXC + ZFS) consumes 100% of the CPU
> and is impossible to stop,
> and I cannot connect to the CT (SSH or otherwise).
> I have to hard-reboot the host server.
> Looking at the logs, I see nothing.
> Do you know a way to kill this CT without rebooting the host server?
> 
> (pct stop xxx looping)
> 
> Moreover, it is becoming increasingly common.
> it has happened to someone?
> 
> Thank you for your help
> 
> 
> 
> Cordialement,
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] network setup problems in 4.2

2016-05-13 Thread Wolfgang Link
Sorry, forget what I wrote; it is not relevant to your problem.

On 05/13/2016 09:30 AM, Albert Dengg wrote:
> On Fri, May 13, 2016 at 09:14:55AM +0200, Wolfgang Link wrote:
>> Hi Albert,
>>
>> Have you already installed openvswitch 2.5?
>>
>> If yes, downgrade to 2.4.
> do i need to add additional repositories for newer openvswitch
> versions?
> 
> i currently have
> openvswitch-switch: 2.3.2-3
> from the enterprise repository.
> 
> thx
> 
> regards,
> albert
> 
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmoxm 4.x live migration

2016-07-25 Thread Wolfgang Link
Hi

Which version of the virtio drivers do you use, and can you please provide the
VM config?

On 07/25/2016 01:35 PM, Tonči Stipičević wrote:
> Hello to all,
> 
> after I migrated to the latest version (enterprise - repos), have tested
> live migration.
> 
> So , vm-win7  cannot survive more than 2 migrations.
> 
> I usually start pinging 8.8.8.8   from cli and then do the migration.
> 
> So after migration  prox1 ->  prox2 it is still pinging ,
> 
> after prox2 -> prox1 it is still pinging
> 
> but after the 3rd move (prox1 > 2)  it blocks , ping stops , windows
> explorer does not work , "restart" can be selected but it won't execute
> 
> 
> 
> This cluster was installed from scratch and this is the only vm I
> have.   (ovs-switch is also involved )
> 
> 
> Is there anything alse to be checked / configured ?
> 
> 
> Thank you very much in advance
> and
> Best regards
> 
> Tonci Stipicevic

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmoxm 4.x live migration

2016-07-28 Thread Wolfgang Link
Try to use SPICE as the display type.
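For example, a minimal sketch (VMID 100 is a hypothetical example):

# switch the VM display to SPICE (qxl)
qm set 100 -vga qxl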

On 07/26/2016 08:01 PM, Tonči Stipičević wrote:
> Hello Wolfgang,
> 
> this is the vm config :
> 
> agent: 1
> bootdisk: virtio0
> cores: 1
> ide2: rn102:iso/virtio-win-0.1.102.iso,media=cdrom,size=156988K
> memory: 2048
> name: w7test
> net0: virtio=36:65:32:31:31:32,bridge=vmbr1,tag=1012
> numa: 0
> ostype: win7
> smbios1: uuid=2019bba5-7f6c-430f-905e-f4f4891892ed
> sockets: 1
> virtio0: san1:vm--disk-1,size=15G
> 
> 
> so the vm uses virtio driver 0.1.102   ...
> 
> Thank you in advance
> and
> 
> BR Tonci

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OVE and balance-rr

2016-07-14 Thread Wolfgang Link
Hi,

Yes, it should work:

mode=balance-rr
 
> Lindsay Mathieson wrote on 14 July 2016 at 09:42:
> 
> 
> Is it possible to create a balance-rr bond with OVS?
> 
> -- 
> Lindsay

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Verison 4.x problem with changing IP in /etc/hosts

2017-02-26 Thread Wolfgang Link
Hi,

Yes, you have to change it manually; this is intended and not a bug.
If you have a cluster, you need to change the corosync.conf too.

On 02/24/2017 08:27 PM, Lari Tanase wrote:
> after some debug I found that the trouble is that in the

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Corosync : name & IP confusion

2016-08-30 Thread Wolfgang Link
This output is generated by corosync.

But you are correct: it is not possible to delete an IP (ring0_addr).

Please make a bugzilla entry.

https://bugzilla.proxmox.com/

On 08/30/2016 12:35 PM, Florent B wrote:
> Hi everyone,
> 
> I configured my corosync.conf to use nodes IP address as "ring0_addr"
> like this :
> 
> node {
> name: host7
> nodeid: 2
> quorum_votes: 1
> ring0_addr: 10.109.0.7
>   }
> 
> The problem is that PVE is not using this information very well : you
> take as "node name" the value of "ring0_addr" :o :
> 
> # pvecm nodes
> 
> Membership information
> --
> Nodeid  Votes Name
>  8  1 10.109.0.1
>  7  1 10.109.0.2
>  6  1 10.109.0.3
>  5  1 10.109.0.4
>  4  1 10.109.0.5
>  3  1 10.109.0.6
>  2  1 10.109.0.7
>  1  1 10.109.0.8
>  9  1 10.109.0.9 (local)
> 
> And of course when I want to remove host3 (10.109.0.3):
> 
> # pvecm delnode host3
> no such node 'host3'
> 
> # pvecm delnode 10.109.0.3
> 400 Parameter verification failed.
> node: invalid format - value does not look like a valid node name
> 
> I think PVE is doing a large confusion for a long time between name & IP
> (a lot of people having problem with their hosts file, see forum &
> mailing list).
> 
> Why don't simplify all this ? Do not configure an host file on each
> node, but use "ring0_addr" for what it is done : node IP address ?
> 
> What do you think about this ?
> 
> Flo
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox and non-proxmox hypervisors

2016-10-26 Thread Wolfgang Link
Proxmox VE uses its own management stack and makes no use of libvirt, so
it can only manage other Proxmox VE hosts and their KVM guests and LXC containers.

If you would like to use it, you have to migrate all your KVM machines to
Proxmox VE, but this is no problem because only the config differs.

On 10/27/2016 12:25 AM, Leroy Tennison wrote:
> We have a number of kvm hypervisors currently in use, some on Ubuntu, a few 
> on openSUSE. We would like to provide the Proxmox web interface for users 
> rather than virt-manager (or the CLI). I understand we would need one Proxmox 
> hypervisor on Debian. If we did that, would the web interface be able to 
> manage the other hypervisors or is the web interface possible only because of 
> Proxmox software running on the hypervisor? If it is the latter, would it be 
> possible to install only the Proxmox software supporting the web interface on 
> the other hypervisors? Thanks. 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Disk Usage/Storage

2016-12-14 Thread Wolfgang Link
Your raw pool size is 3 × 400 GB, so 1.2 TB is correct, but your config says 3/2.
This means you keep 3 copies of every PG (placement group), and a minimum of 2
copies is needed to operate correctly.

So if you write 1 GB, you use 3 GB of raw storage; the usable space is therefore
roughly 1.2 TB / 3 ≈ 400 GB, which matches the 441G MAX AVAIL shown below.


On 12/14/2016 12:14 PM, Daniel wrote:
> Hi there,
> 
> I created a Ceph file system with 3x 400 GB.
> In my config I set 3/2, which I thought means that one of those disks is only
> for fault tolerance (like RAID5):
> 3 HDDs max and 2 HDDs minimum.
> 
> In my system overview I see that I have 1.2 TB of free space, which can't be
> correct.
> 
> This is what the CLI command shows me:
> 
> POOLS:
> NAME ID USED %USED MAX AVAIL OBJECTS 
> ceph 2 0 0  441G   0 
> 
> But as I understand it, MAX AVAIL should be around 800 GB.
> 
> Cheers
> 
> Daniel
> 
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta1 released!

2017-03-23 Thread Wolfgang Link
Yes, it looks like you missed Jewel ;-)

https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel

On 03/23/2017 11:03 AM, Eneko Lacunza wrote:
> Hi Martin,
> 
> El 22/03/17 a las 14:51, Martin Maurer escribió:
>> Hi all!
>>
>> We are proud to announce the release of the first beta of our Proxmox VE
>> 5.x family - based on the great Debian Stretch.
>>
>> Get more details from the forum announcement:
>>
>> https://forum.proxmox.com/threads/proxmox-ve-5-0-beta1-released.33731/
>>
> Glad to hear this. But, it is a bit confusing to see Ceph Luminous
> referenced in the beta release announcement, when we are yet in Ceph
> Hammer in Proxmox 4.4, waiting for jewel upgrade?
> 
> Did I a miss something? :-)
> 
> Thanks
> Eneko
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync

2017-03-09 Thread Wolfgang Link
Hi,

You can't destroy datasets for which snapshots (or clones of them) still exist.

zfs list -t all
will show you all datasets and snapshots,

and zfs destroy -R will erase the given dataset together with everything that
depends on it.
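A minimal sketch, using the dataset from the mail below (double-check the dry run before destroying anything):

# show the dataset with all its snapshots and dependent clones
zfs list -t all -r rpool/data/vm-107-disk-1
# dry run: -n shows what -R would remove
zfs destroy -nvR rpool/data/vm-107-disk-1
# if the list looks right, destroy for real
zfs destroy -R rpool/data/vm-107-disk-1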

On 03/09/2017 08:35 PM, Luis G. Coralle wrote:
> Hi.
> I was trying to sync two vm with pve-zsync between two nodes with pve 4.2
> After completing the tests, I could not remove rpool/data/vm-107-disk-1 and
> rpool/data/vm-17206-disk-1.
> How can I remove them?
> 
> root@pve4:~# zfs list
> NAME USED  AVAIL  REFER  MOUNTPOINT
> rpool332G  2.23T   140K  /rpool
> rpool/ROOT   194G  2.23T   140K  /rpool/ROOT
> rpool/ROOT/pve-1 194G  2.23T   194G  /
> rpool/STORAGE   49.1G  2.23T  49.1G  /rpool/STORAGE
> rpool/data  79.5G  2.23T   140K  /rpool/data
> rpool/data/vm-101-disk-13.04G  2.23T  3.04G  -
> rpool/data/vm-103-disk-168.6G  2.23T  68.6G  -
> rpool/data/vm-107-disk-15.23G  2.23T  2.62G  -
> rpool/data/vm-17206-disk-1  2.62G  2.23T93K  -
> rpool/swap  8.50G  2.24T  99.2M  -
> 
> root@pve4:~# zfs destroy rpool/data/vm-17206-disk-1
> cannot destroy 'rpool/data/vm-17206-disk-1': dataset already exists
> 
> root@pve4:~# zfs destroy rpool/data/vm-107-disk-1
> cannot destroy 'rpool/data/vm-107-disk-1': dataset already exists
> 
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS over iSCSI in Proxmox 5.0 issues

2017-07-18 Thread Wolfgang Link

Hi,

Proxmox VE is not a storage box, so we do not provide this kind of setup.

ZFS over iSCSI is used if you have an external storage box like FreeNAS.

Debian Stretch uses LIO as its iSCSI target, which should also cover what IET
was used for.



On 07/19/2017 01:10 AM, Mikhail wrote:

Hello,

I'm trying to setup Proxmox 5.0 node as shared storage node (basically I
have a 4x10TB disks inside that server, Proxmox installed on ZFS RAID 10
filesystem). I'm willing to use this node as a host for KVM vms, and
also as a storage source for my other nodes.

The issue is that I cannot seem to setup this node as a ZFS-over-iSCSI
source for the other nodes. It looks like Debian Stretch (base system
for Proxmox 5.0) has dropped support for "iscsitarget" package that was
available in Debian Jessie - iscsitarget package provides tools for IET
(ietadm, ietd). So the problem comes when I'm trying to setup new
ZFS-over-iSCSI storage from the Datacenter GUI: there I have to choose
iSCSI Provider module - Comstar, istgt, IET.

I cannot choose Comstar because it is purely for Solaris type of OS.
I cannot choose IET because my Proxmox 5.0 host has no IET (iscsitarget)
package available.
And as a last resort, I have tried "istgt" as a provider (before that, I
installed "istgt" package inside my Proxmox 5.0 storage node).
Before doing this, I followed Proxmox wiki page and set up ssh keys for
authorization on Proxmox 5.0 storage server. This is all working good.
However, if I choose istgt as a provider, I get the following error
whenever I try to create/run new vm that's storage source is set to
ZFS-over-iSCSI volume:

TASK ERROR: create failed - No configuration found. Install istgt on
192.168.88.2 at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 99.

istgt is actually installed on 192.168.88.2.

The question is how to use ZFS-over-iSCSI in this scenario?

Thanks!


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] prox storage replication <> iscsi multipath problem

2017-07-06 Thread Wolfgang Link

Hi Tonci,

I guess it is the network traffic.
You should limit the replication speed or use a separate network.
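If you want to limit the speed, a rough sketch (job ID 100-0 and the limit are hypothetical examples; the --rate option of pvesr is an assumption to verify against your version):

# limit replication job 100-0 to 10 MB/s
pvesr update 100-0 --rate 10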


On 07/07/2017 07:35 AM, Tonči Stipičević wrote:

Hi to all,

I'm testing pvesr and it works correctly so far and is a big step ahead
regarding migration/replication without shared storage. Actually, that is
something I was really waiting for, because it is easier to find an
environment with only two servers than two servers with shared
storage. This way one host can really back the other one up
(since the sync frequency is fine-tunable).

But I do have problem with some kind of collisions. My test lab has
3 hosts and one freenas shared storage. The connection in between is
iscsi-target-multipath , so each node (incl freenas as shared storage ->
lvm) has 3 nics . In order to test and play with pvesr I created zfspool
on each host using local hard drive (1 drive -> one zfs volume ... no
redundancy etc) and storage replication was working fine . But after a
while iscsi-multipath connections are still on but  my shared lvm iscsi
freenas storage disappears .  The only way I was able to got it back was
deleting all pvesr jobs and  destroying zfs pools on each node.

I repeated this scenario more time times but the result was the same


I'm aware that this scenario (shared storage plus storage replication)
is not really usual (but it should be possible), but I'm still wondering
why pvesr killed my FreeNAS iSCSI target?


Thank you in advance

Best regards
Tonci


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Error pubkey apt

2017-06-05 Thread Wolfgang Link

Hi Gilberto,

There is only the pvetest repo online yet.

But I think you have the PVE 4 key, which is a different key from the PVE 5
repo.


On 06/02/2017 05:50 PM, Gilberto Nunes wrote:

Hi

Last few days, I get this error when try apt update

Hit:27 https://download.docker.com/linux/debian stretch InRelease

Err:24 http://download.proxmox.com/debian/pve stretch InRelease

  The following signatures couldn't be verified because the public key
is not available: NO_PUBKEY 0D9A1950E2EF0603

Is there something wrong with official apt repo??

Note: I am using Debian Strech as base OS!



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OpenBSD problems (freezing console, lots of time drift) on Proxmox 5.0

2017-09-22 Thread Wolfgang Link
It looks like a kernel problem.
I am not sure what exactly the problem is, but every kernel (mainline and
Ubuntu) newer than 4.10 will trigger this behavior on some hardware.
Debugging takes some time.
I will report the result in this forum thread:

https://forum.proxmox.com/threads/openbsd-6-1-guest-h%C3%A4ngt.36903/#post-181872
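As a side note, the serial-console workaround mentioned below also needs a serial port on the PVE side; a rough sketch (VMID 100 is a hypothetical example):

# add a serial device to the VM and connect to it from the host
qm set 100 -serial0 socket
qm terminal 100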

> Alex Bihlmaier wrote on 22 September 2017 at 13:29:
>
>
> Hi - after upgrading several Proxmox setups from
> version 4.4 to 5.0 (currently 5.0-31) i have problems
> with VMs running OpenBSD (version 6.1).
>
> phenomens i encounter are:
> * reboot / shutdown on the console is not working properly.
> it results in a console freeze and 100% load on a single cpu core
> * time sleep 1 should result in a total runtime of 1 second.
> Actually on one host it gives me this:
> "sleep 1 0.00s user 0.00s system 0% cpu 17.486 total"
> * ntpd is unable to keep up with a local time drift (only on one
> Intel i7 Host, another Xeon Proxmox host does not suffer from this
> issue) and the guest clock is drifting more and more away from the
> real clock
>
> interesting to note: after deactivating the system console on the
> VGA/NoVNC and switching to a serial console the above phenomens are
> fixed and the OpenBSD guest is running smoothly like with Proxmox VE
> 4.4.
>
> switching from VGA system console to serial system console:
>
> /etc/boot.conf
> set tty com0
>
>
> Anyone with similar issue? Maybe time for a bug submission to the
> developers.
>
>
> cheers
> Alex

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Set IDs per Node

2017-10-09 Thread Wolfgang Link
Hi Mehmet,
>Hello guys,
>
>is it possible to configure a proxmox Node to set a specific start-id for a vm 
>and increment this id for successive vm's? 

No, it is not possible.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS Replication

2017-12-05 Thread Wolfgang Link
Hi Mark,

> - Is it possible to change the network that does the replication? (IE be
> good to use a direct connected with balance-rr for throughput)

You can change the replication network with the migration option in
/etc/pve/datacenter.cfg.
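A rough sketch of that setting in datacenter.cfg (the subnet is a hypothetical example; treat the exact syntax as an assumption and check the datacenter.cfg documentation):

# route migration/replication traffic over a dedicated network, using the secure (SSH) transport
migration: secure,network=10.10.10.0/24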

> - Is it possible to replicate between machines that are not in the same
> cluster?

For this task you have to use pve-zsync.

> Both can be easily done via zfs send/recv in cli of course, but wonder if
> this is possible through the web interface?

No it is not.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SPAM: CEPH Luminous source packages

2018-02-12 Thread Wolfgang Link
> Mike O'Connor wrote on 13 February 2018 at 01:56:
> 
> 
> Hi All
> 
> Where can I find the source packages that the Proxmox Ceph Luminous was
> built from ?
> 
> 
> Mike
> 
Here is the git repo:

https://git.proxmox.com/?p=ceph.git;a=summary

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Pve-zsync available size

2018-02-15 Thread Wolfgang Link
Hi,
> Jérémy Carnus <jer...@jcarnus.fr> wrote on 15 February 2018 at 12:02:
> 
> 
> Hi
> I just noticed that using the pve-zsync tool to replicate the ZFS pool to
> another server doesn't keep the available size on the pool. Is that intended?
> How does Proxmox 5 manage ZFS size? With quota?
pve-zsync does not sync the whole pool.
Do you mean subvolumes for LXC?
> 
> Thanks
> 
> Jérémy Carnus
> 

Best Regards,

Wolfgang Link

w.l...@proxmox.com
http://www.proxmox.com


Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Pve-zsync available size

2018-02-15 Thread Wolfgang Link
> Wolfgang Link <w.l...@proxmox.com> wrote on 15 February 2018 at 14:49:
> 
> > Yes, subvolumes like rpool/data/subvol-100-disk1
> We do not replicate the file system properties,
> because you have to restore it manually anyway, and so you can set the refquota
> at that step.
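A minimal sketch of re-applying the quota after the restore (dataset name and size are hypothetical examples):

# set the refquota on the restored/replicated subvolume by hand
zfs set refquota=16G rpool/data/subvol-100-disk-1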

Best Regards,

Wolfgang Link

w.l...@proxmox.com
http://www.proxmox.com


Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync log

2018-03-08 Thread Wolfgang Link
Hello,

> Does anyone know a decent way of logging pve-zsync status? For failure or how 
> long it took to run the sync?
All jobs are executed by a cron job. The default setting on a Proxmox VE host is
that if a cron job generates output, this output is sent to the root email address.

But you can configure cron as you need it: send an email or write to syslog.

Extra logging can be enabled by editing the cron job in /etc/cron.d/pve-zsync and
giving all pve-zsync jobs a --verbose parameter.

I see this is an undocumented feature, so I will send a patch.
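A hedged sketch of what such an entry in /etc/cron.d/pve-zsync could look like (source, destination and job name are hypothetical examples, not taken from a real setup):

# run the job every 15 minutes with verbose output, which cron then mails to root
*/15 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name default --maxsnap 7 --verbose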

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-21 Thread Wolfgang Link
Hi,

this indicates that the sync interval is too short.
By default, cron forks a pve-zsync process every 15 minutes.
If the previous pve-zsync process has not finished, the new one will wait until
the previous process is done.

You should raise your sync interval; this can be done in
/etc/cron.d/pve-zsync.
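For example, a hedged sketch of such a change (the job line itself is a hypothetical example; only the schedule field changes):

# before: every 15 minutes
# */15 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name default --maxsnap 7
# after: once per hour
0 * * * * root pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup --name default --maxsnap 7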

Best Regards,

Wolfgang Link

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-21 Thread Wolfgang Link

> So does this mean that all those processes are sitting in a "queue" waiting
> to execute? wouldn't it be more sensible for the script to terminate if a
> process is already running for the same job?
> 
No, because as I wrote, 15 minutes is the default, but we have many users who
have longer intervals, like 1 day.
If the process simply quit, one whole interval (e.g. a day) would be skipped.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] grub-install error: unknown filesystem, won't boot anymore

2019-03-18 Thread Wolfgang Link
Hi Arjen,

thanks for this information.

We will update the wiki with your information.

Best Regards,

Wolfgang Link

> arjenvanweel...@gmail.com wrote on 8 March 2019 at 21:21:
> 
> 
> Apologies for replying to myself.
> 
> On Fri, 2019-03-08 at 11:12 +0100, arjenvanweel...@gmail.com wrote:
> > Dear readers,
> > 
> > My up-to-date (no-subscription) Proxmox installation at home stopped
> > booting properly. GRUB complains at boot about:
> >   error: no such device: .
> >   error: unknown filesystem.
> > 
> > I got my system booting again using a new installation of Proxmox on
> > a USB-stick, because Rescue Boot option of the installation ISO did
> > not
> > work for me. Fortunately, my 4-way mirror rpool is just fine (as are
> > the other ZFS pools), but grub-install, grub-probe, and insmod normal
> > at GRUB prompt keep returning "error: unknown filesystem".
> > 
> > Last thing I did to my rpool was 'zfs set dnodesize=auto rpool' as
> > suggested on the ZFS Tips and Tricks official Wiki-page.
> > Reverting to dnodesize=legacy did not fix the GRUB boot issue.
> > 
> zpool get all rpool shows:
>   rpool  feature@large_dnode  active  local
> 
> I guess this make me run into a GRUB bug?
>   https://savannah.gnu.org/bugs/?func=detailitem_id=48885
> If so, suggesting 'dnodesize=auto' on the Proxmox Wiki might not be the
> best idea until the bootloader used by Proxmox supports it?
> 
> > Currently, my system is booting partially from the USB-stick and
> > partially from my original rpool, which works but is not ideal.
> > Does someone recognize this problem? Does anyone know a fix?
> > 
> Is there a way to disable/undo large_dnode?
> 
> kind regards, Arjen
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user