Re: [PVE-User] zfs snapshots issue
On 8 Jul 2015, at 00:33, Alexandre DERUMIER aderum...@odiso.com wrote:
> how much memory do you have in your vm ?

balloon: 4096
bootdisk: virtio0
cores: 4
cpuunits: 10
hotplug: disk,network,usb
iothread: 1
memory: 8192
name: base
onboot: 1
ostype: l26
smbios1: uuid=d30d103a-183b-4ef9-b66a-5456bf150ee2
sockets: 1
vga: std
virtio0: k122102:vm-100-disk-1,discard=on,size=32G

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Re: [PVE-User] slow backup
Hi Francisco,

I'm running the latest version (enterprise repo). Unfortunately I got the same result again ... thank you very much in advance.

BR,
Tonci

INFO: starting new backup job: vzdump 100 --bwlimit 0 --storage nfs05
INFO: Starting Backup of VM 100 (qemu)
INFO: status = stopped
INFO: update VM 100: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 1
INFO: creating archive '/mnt/pve/nfs05/dump/vzdump-qemu-100-2015_07_07-20_39_41.vma'
INFO: starting kvm to execute backup task
INFO: started backup task 'ae0a136c-07f6-44f0-9367-b0837757e5c4'
INFO: status: 0% (30932992/34359738368), sparse 0% (13627392), duration 3, 10/5 MB/s
INFO: status: 1% (348192768/34359738368), sparse 0% (130928640), duration 37, 9/5 MB/s
INFO: status: 2% (691994624/34359738368), sparse 0% (144646144), duration 79, 8/7 MB/s
INFO: status: 3% (1034092544/34359738368), sparse 0% (152612864), duration 121, 8/7 MB/s
INFO: status: 4% (1378156544/34359738368), sparse 0% (154607616), duration 165, 7/7 MB/s
INFO: status: 5% (1722023936/34359738368), sparse 0% (160534528), duration 207, 8/8 MB/s
INFO: status: 6% (2067267584/34359738368), sparse 0% (162983936), duration 249, 8/8 MB/s
INFO: status: 7% (2408448000/34359738368), sparse 0% (166244352), duration 290, 8/8 MB/s
INFO: status: 8% (2758344704/34359738368), sparse 0% (172875776), duration 334, 7/7 MB/s
INFO: status: 9% (3098083328/34359738368), sparse 0% (177434624), duration 374, 8/8 MB/s
INFO: status: 10% (3441491968/34359738368), sparse 0% (178905088), duration 416, 8/8 MB/s
INFO: status: 11% (3786866688/34359738368), sparse 0% (180793344), duration 459, 8/7 MB/s
INFO: status: 12% (4130734080/34359738368), sparse 0% (183898112), duration 499, 8/8 MB/s
INFO: status: 13% (4466802688/34359738368), sparse 0% (185630720), duration 538, 8/8 MB/s
INFO: status: 14% (4812439552/34359738368), sparse 0% (188973056), duration 579, 8/8 MB/s
INFO: status: 15% (5154471936/34359738368), sparse 0% (191553536), duration 619, 8/8 MB/s
INFO: status: 16% (5500370944/34359738368), sparse 0% (196132864), duration 660, 8/8 MB/s
INFO: status: 17% (5842534400/34359738368), sparse 0% (202567680), duration 701, 8/8 MB/s
INFO: status: 18% (6191710208/34359738368), sparse 0% (204541952), duration 742, 8/8 MB/s
INFO: status: 19% (6530400256/34359738368), sparse 0% (207069184), duration 782, 8/8 MB/s
INFO: status: 20% (6876168192/34359738368), sparse 0% (207560704), duration 823, 8/8 MB/s
INFO: status: 21% (7218462720/34359738368), sparse 0% (213643264), duration 865, 8/8 MB/s
INFO: status: 22% (7560495104/34359738368), sparse 0% (217743360), duration 907, 8/8 MB/s
INFO: status: 23% (7906459648/34359738368), sparse 0% (220524544), duration 948, 8/8 MB/s
INFO: status: 24% (8252358656/34359738368), sparse 0% (221818880), duration 990, 8/8 MB/s
INFO: status: 25% (8596291584/34359738368), sparse 0% (222715904), duration 1031, 8/8 MB/s
INFO: status: 26% (8935440384/34359738368), sparse 0% (223522816), duration 1074, 7/7 MB/s
INFO: status: 27% (9285795840/34359738368), sparse 0% (225869824), duration 1116, 8/8 MB/s
INFO: status: 28% (9621667840/34359738368), sparse 0% (228499456), duration 1159, 7/7 MB/s
INFO: status: 29% (9972023296/34359738368), sparse 0% (229257216), duration 1201, 8/8 MB/s
INFO: status: 30% (10312548352/34359738368), sparse 0% (229969920), duration 1240, 8/8 MB/s
INFO: status: 31% (10659102720/34359738368), sparse 0% (235982848), duration 1285, 7/7 MB/s
INFO: status: 32% (10999365632/34359738368), sparse 0% (249303040), duration 1325, 8/8 MB/s
INFO: status: 33% (11348017152/34359738368), sparse 1% (504135680), duration 1362, 9/2 MB/s
INFO: status: 34% (11688148992/34359738368), sparse 2% (844267520), duration 1395, 10/0 MB/s
INFO: status: 35% (12028805120/34359738368), sparse 3% (1184923648), duration 1429, 10/0 MB/s
INFO: status: 36% (12379029504/34359738368), sparse 4% (1535148032), duration 1465, 9/0 MB/s
INFO: status: 37% (12716605440/34359738368), sparse 5% (1860755456), duration 1499, 9/0 MB/s
INFO: status: 38% (13057982464/34359738368), sparse 5% (1878110208), duration 1540, 8/7 MB/s
INFO: status: 39% (13406699520/34359738368), sparse 5% (1880797184), duration 1582, 8/8 MB/s
INFO: status: 40% (13751549952/34359738368), sparse 5% (1881178112), duration 1624, 8/8 MB/s
INFO: status: 41% (14090829824/34359738368), sparse 5% (1881948160), duration 1666, 8/8 MB/s
INFO: status: 42% (14432468992/34359738368), sparse 5% (1882587136), duration 1713, 7/7 MB/s
INFO: status: 43% (14775353344/34359738368), sparse 5% (1882738688), duration 1761, 7/7 MB/s
INFO: status: 44% (15121383424/34359738368), sparse 5% (1883697152), duration 1810, 7/7 MB/s
INFO: status: 45% (15466496000/34359738368), sparse 5% (1884278784), duration 1855, 7/7 MB/s
INFO: status: 46% (15812591616/34359738368), sparse 5% (1888219136), duration 1896, 8/8 MB/s
INFO: status: 47% (16155213824/34359738368), sparse 5% (1889505280), duration 1935, 8/8 MB/s
INFO: status: 48%
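The steady ~8 MB/s in the status lines above can be cross-checked by dividing bytes transferred by elapsed seconds. A small awk sketch; the sample line is copied from the log above, everything else is illustrative:

```shell
# Average throughput from a vzdump status line: bytes transferred
# divided by duration in seconds, converted to MiB/s.
line='INFO: status: 47% (16155213824/34359738368), sparse 5% (1889505280), duration 1935, 8/8 MB/s'

echo "$line" | awk '{
  gsub(/[(),\/]/, " ")                      # strip punctuation; awk re-splits the fields
  printf "%.1f MB/s average\n", $4 / $10 / 1048576
}'
# prints: 8.0 MB/s average
```

This confirms the run averaged about 8 MB/s end to end, i.e. the slowness is sustained rather than a brief stall.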
Re: [PVE-User] zfs snapshots issue
On 7 Jul 2015, at 21:57, Pongrácz István pongracz.ist...@gmail.com wrote:
> The issue could be somewhere in the zfs util and pve application layer if we can exclude all other trivial problems (no space left on hdd etc. :)

No, it is a freshly installed system with 6 x 1TB SATA disks.

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool                 431G  3.14T   192K  /rpool
rpool/ROOT           1.88G  3.14T   192K  /rpool/ROOT
rpool/ROOT/pve-1     1.88G  3.14T  1.88G  /
rpool/swap           65.9G  3.21T   192K  -
rpool/vm-100-disk-1  33.0G  3.17T  1.68G  -
rpool/vm-100-disk-2   330G  3.47T   128K  -

# zpool list
NAME   SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
rpool  5.44T  5.34G  5.43T  -         0%    0%   1.00x  ONLINE  -

I just tested it; it may be related to including RAM in the snapshot.

# qm snapshot 100 tes1t -vmstate 1

With the vmstate option the snapshot runs for a very long time, and I have never seen it succeed. Without the vmstate option the snapshot is very fast!

Thanks again.
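The difference between the two cases is expected in magnitude: a plain ZFS snapshot is metadata-only, while -vmstate first writes the VM's entire RAM image to storage. A back-of-envelope sketch using the "memory: 8192" from this thread; the 50 MB/s write rate is an assumption, not a measured value:

```shell
# Why a -vmstate snapshot takes so long: the guest's RAM must be dumped
# to storage before the snapshot completes. With memory: 8192 (MB, from
# the VM config in this thread) and an ASSUMED sustained write rate:
mem_mb=8192
write_mbps=50   # assumption -- measure your own pool, e.g. with zpool iostat
echo "at least $(( mem_mb / write_mbps )) seconds to write the vmstate"
# prints: at least 163 seconds to write the vmstate
```

At slower effective write rates the dump stretches into many minutes, which matches the "long long time" reported here, whereas the metadata-only snapshot stays nearly instant.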
Re: [PVE-User] zfs snapshots issue
Ah, ok, now it's clear: you saved the memory contents, too. So maybe that part has a bug :)

Cheers,
István

----- Original message -----
From: lyt_yudi lyt_y...@icloud.com
To: Pongrácz István
CC: proxmoxve (pve-user pve.proxmox.com)
Date: Tue, 07 Jul 2015 23:13:56 +0800

> I just tested it; it may be related to including RAM in the snapshot.
> # qm snapshot 100 tes1t -vmstate 1
> With the vmstate option the snapshot runs for a very long time, and I have never seen it succeed. Without the vmstate option the snapshot is very fast!
> [...]

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
[PVE-User] PVE 4.0 Backup Feature?
Hi,

I know this has been talked about for quite some time, but I haven't heard anything recently. Is there any possibility for backups to contain the VM name?

Gerald
Re: [PVE-User] zfs snapshots issue
On 7 Jul 2015, at 19:34, Pongrácz István pongracz.ist...@gmail.com wrote:
> I do not think it is a zfs issue. ZFS snapshots are transparent to the upper level and happen very quickly.

Thanks. Do you have the same problem?
[PVE-User] PVE 4.0 and Ceph - install problem
Hello there.

I am trying a 3 host cluster with PVE 4.0beta with ceph server, but when I try to install ceph (pveceph install -version hammer, pveceph install -version firefly, or pveceph install), I get this error:

The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ceph : Depends: libboost-program-options1.49.0 (>= 1.49.0-1) but it is not installable
        Depends: libboost-system1.49.0 (>= 1.49.0-1) but it is not installable
        Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not installable
 ceph-common : Depends: librbd1 (= 0.94.2-1~bpo70+1) but 0.80.7-2 is to be installed
               Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not installable
               Depends: libudev0 (>= 146) but it is not installable
               Breaks: librbd1 (<< 0.92-1238) but 0.80.7-2 is to be installed
E: Unable to correct problems, you have held broken packages.
command 'apt-get -q --assume-yes --no-install-recommends -o 'Dpkg::Options::=--force-confnew' install -- ceph ceph-common gdisk' failed: exit code 100

Is Ceph server already supported on PVE 4.0beta? If not, is it planned soon?

Regards,
Fabrizio Cuseo

--
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:i...@panservice.it
Numero verde nazionale: 800 901492
Re: [PVE-User] PVE 4.0 and Ceph - install problem
There are no official ceph packages for Debian jessie yet, but AFAIK they are coming soon.

On 07/07/2015 11:23 AM, Fabrizio Cuseo wrote:
> Hello there. I am trying a 3 host cluster with PVE 4.0beta with ceph server, but when I try to install ceph (pveceph install -version hammer, pveceph install -version firefly, or pveceph install), I have this error:
> The following packages have unmet dependencies: [...]
> Is Ceph server already supported on PVE 4.0beta ? If not, is planned in a short time ?
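The versions in the error are telling: libboost 1.49 and the `~bpo70` suffix belong to wheezy-era builds being pulled onto a jessie host, which is why the dependencies are unsatisfiable. A quick way to list which packages apt cannot satisfy from such output; the sample line is modeled on the error above, the sed pattern is just one possible sketch:

```shell
# Extract the names of uninstallable dependencies from an apt error line.
# Sample line modeled on the "unmet dependencies" output above.
err='ceph : Depends: libboost-program-options1.49.0 (= 1.49.0-1) but it is not installable'

printf '%s\n' "$err" | sed -n 's/.*Depends: \([^ ]*\).*not installable/\1/p'
# prints: libboost-program-options1.49.0
```

Running the full apt output through such a filter shows at a glance that every missing package is a wheezy library, pointing at a repository/release mismatch rather than a held package on the host.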
Re: [PVE-User] zfs snapshots issue
Hi,

I do not think it is a zfs issue. ZFS snapshots are transparent to the upper level and happen very quickly.

Best regards,
István

----- Original message -----
From: lyt_yudi lyt_y...@icloud.com
To: proxmoxve (pve-user@pve.proxmox.com)
Date: Tue, 07 Jul 2015 18:22:49 +0800

> Hi all,
> When I take a snapshot of a VM, the snapshot runs for a very long time. If I stop the task, I get this error:
>
> savevm not yet finished
> savevm not yet finished
> savevm not yet finished
> savevm not yet finished
>
> and the snapstate is "prepare". This problem also reproduces on PVE 4.0-24/946af136. Is this a zfs bug, or am I missing something? Thanks!
>
> # pveversion -v
> proxmox-ve-2.6.32: 3.4-157 (running kernel: 2.6.32-39-pve)
> pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
> pve-kernel-2.6.32-39-pve: 2.6.32-157
> lvm2: 2.02.98-pve4
> clvm: 2.02.98-pve4
> corosync-pve: 1.4.7-1
> openais-pve: 1.1.4-3
> libqb0: 0.11.1-2
> redhat-cluster-pve: 3.2.0-2
> resource-agents-pve: 3.9.2-4
> fence-agents-pve: 4.0.10-2
> pve-cluster: 3.0-18
> qemu-server: 3.4-6
> pve-firmware: 1.1-4
> libpve-common-perl: 3.0-24
> libpve-access-control: 3.0-16
> libpve-storage-perl: 3.0-33
> pve-libspice-server1: 0.12.4-3
> vncterm: 1.1-8
> vzctl: 4.0-1pve6
> vzprocps: 2.0.11-2
> vzquota: 3.1-2
> pve-qemu-kvm: 2.2-10
> ksm-control-daemon: 1.1-1
> glusterfs-client: 3.5.2-1
>
> lyt_yudi lyt_y...@icloud.com
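The "snapstate is prepare" detail above is the key symptom: an interrupted snapshot leaves a snapstate line behind in the VM config (under /etc/pve/qemu-server/ on a standard install). A sketch of the check, run here against an inline sample config so it stays self-contained; the field names match the config dumps in this thread:

```shell
# Spot a stuck snapshot by its leftover "snapstate" line in the VM config.
# Simulated with an inline sample; on a real host the input would be
# /etc/pve/qemu-server/<vmid>.conf.
conf='memory: 8192
snapstate: prepare
parent: tes1t'

printf '%s\n' "$conf" | awk -F': ' '$1 == "snapstate" { print "snapshot stuck in state:", $2 }'
# prints: snapshot stuck in state: prepare
```

On an actual node, something like `grep -l 'snapstate:' /etc/pve/qemu-server/*.conf` (assuming the standard config layout) would list every VM with an unfinished snapshot.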
[PVE-User] Infiniband Mellanox cards and DDR
Hallo.

I am testing a Mellanox card (IBM Voltaire HCA 400Ex-D) with a Cisco 4X DDR InfiniBand switch. The problem I have is that the cards (which seem to be 4X DDR) have a 10Gbit (SDR) link.

---
03:00.0 InfiniBand: Mellanox Technologies MT25208 [InfiniHost III Ex] (rev 20)
---
root@pve1:/etc/apt/sources.list.d# ibstat
CA 'mthca0'
  CA type: MT25208
  Number of ports: 2
  Firmware version: 5.3.0
  Hardware version: 20
  Node GUID: 0x0008f104039814d0
  System image GUID: 0x0008f104039814d3
  Port 1:
    State: Active
    Physical state: LinkUp
    Rate: 10
    Base lid: 6
    LMC: 0
    SM lid: 2
    Capability mask: 0x02510a68
    Port GUID: 0x0008f104039814d1
    Link layer: InfiniBand

root@pve1:/etc/apt/sources.list.d# ibportstate 6 0
CA PortInfo:
# Port info: Lid 6 port 0
LinkState:...Active
PhysLinkState:...LinkUp
Lid:.6
SMLid:...2
LMC:.0
LinkWidthSupported:..1X or 4X
LinkWidthEnabled:1X or 4X
LinkWidthActive:.4X
LinkSpeedSupported:..2.5 Gbps
LinkSpeedEnabled:2.5 Gbps
LinkSpeedActive:.2.5 Gbps
Mkey:not displayed
MkeyLeasePeriod:.15
ProtectBits:.0
---
If I try to set the speed to DDR, I get this error:

root@pve1:/etc/apt/sources.list.d# ibportstate 6 0 speed 2
Initial CA PortInfo:
# Port info: Lid 6 port 0
LinkState:...Active
PhysLinkState:...LinkUp
Lid:.6
SMLid:...2
LMC:.0
LinkWidthSupported:..1X or 4X
LinkWidthEnabled:1X or 4X
LinkWidthActive:.4X
LinkSpeedSupported:..2.5 Gbps
LinkSpeedEnabled:2.5 Gbps
LinkSpeedActive:.2.5 Gbps
Mkey:not displayed
MkeyLeasePeriod:.15
ProtectBits:.0
ibportstate: iberror: failed: smp set portinfo failed

The switch reports this card in the InfiniBand topology as: MT25218 InfiniHostEx Mellanox Technologies

Has anyone tested these cards?
Fabrizio Cuseo
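Reading the ibportstate dump above: InfiniBand rates multiply lane count by per-lane speed, so a 4X link at 2.5 Gbps per lane is SDR (10 Gbit total), while DDR would report 5.0 Gbps per lane (20 Gbit). A small sketch of that check; the sample lines are copied from the output above:

```shell
# Classify an InfiniBand link from ibportstate-style output:
# 2.5 Gbps per lane = SDR, 5.0 Gbps = DDR. Sample copied from the dump above.
info='LinkWidthActive:.4X
LinkSpeedActive:.2.5 Gbps'

speed=$(printf '%s\n' "$info" | sed -n 's/^LinkSpeedActive:\.*//p')
if [ "$speed" = "2.5 Gbps" ]; then
  echo "link negotiated SDR"
else
  echo "link negotiated DDR or faster"
fi
# prints: link negotiated SDR
```

This is why LinkSpeedSupported is the field to watch: here even the *supported* speed tops out at 2.5 Gbps, so the port itself is only capable of SDR regardless of the switch.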
Re: [PVE-User] Infiniband Mellanox cards and DDR
Another thing is that your cables must support DDR as well.

On July 7, 2015 3:07:00 PM CEST, Fabrizio Cuseo f.cu...@panservice.it wrote:
> I have found that the 400Ex is an SDR card, while the 400Ex-D is a DDR card. I don't know if it is a firmware problem.
>
> http://sup.xenya.si/sup/info/voltaire/HCAwebfinal.pdf
>
> HCA 400Ex/Ex-F • Dual port 4X (10 Gbps) InfiniBand PCI-Express low profile host channel adapter
> HCA 400Ex-D • Dual port 4X DDR (20 Gbps) InfiniBand PCI-Express low profile host channel adapter
> HCA 400 • Dual port 4X (10 Gbps) InfiniBand PCI/PCI-X low profile host channel adapter
>
> ----- Original message -----
> From: Michael Rasmussen m...@miras.org
> To: Fabrizio Cuseo f.cu...@panservice.it, pve-user pve-user@pve.proxmox.com
> Sent: Tuesday, 7 July 2015 12:01:03
> Subject: Re: [PVE-User] Infiniband Mellanox cards and DDR
>
>> AFAIK this card is a SDR card so DDR is not possible.
>> [...]

--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Re: [PVE-User] Infiniband Mellanox cards and DDR
I have found that the 400Ex is an SDR card, while the 400Ex-D is a DDR card. I don't know if it is a firmware problem.

http://sup.xenya.si/sup/info/voltaire/HCAwebfinal.pdf

HCA 400Ex/Ex-F • Dual port 4X (10 Gbps) InfiniBand PCI-Express low profile host channel adapter
HCA 400Ex-D • Dual port 4X DDR (20 Gbps) InfiniBand PCI-Express low profile host channel adapter
HCA 400 • Dual port 4X (10 Gbps) InfiniBand PCI/PCI-X low profile host channel adapter

----- Original message -----
From: Michael Rasmussen m...@miras.org
To: Fabrizio Cuseo f.cu...@panservice.it, pve-user pve-user@pve.proxmox.com
Sent: Tuesday, 7 July 2015 12:01:03
Subject: Re: [PVE-User] Infiniband Mellanox cards and DDR

> AFAIK this card is a SDR card so DDR is not possible.
>
> On July 7, 2015 11:46:41 AM CEST, Fabrizio Cuseo f.cu...@panservice.it wrote:
>> Hallo. I am testing a Mellanox card (IBM Voltaire HCA 400Ex-D) with a Cisco 4X DDR InfiniBand switch. The problem I have is that the cards (which seem to be 4X DDR) have a 10Gbit (SDR) link.
>> [...]

Fabrizio Cuseo
[PVE-User] Proxmox VE 4.0
Hi Proxmox staff,

Many thanks for including XFS in the installation system... I appreciate that!

Cheers...

--
Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
Re: [PVE-User] zfs snapshots issue
----- Original message -----
From: lyt_yudi lyt_y...@icloud.com

>> I do not think it is a zfs issue. ZFS snapshots are transparent to the upper level and happen very quickly.
>
> Thanks. Do you have the same problem?

No, but I have used zfsonlinux for many years. One Proxmox server I have has been running for more than 633 days with an earlier zfs version without any issue: snapshots, daily remote backup using zfs, etc. The issue could be somewhere in the zfs util and pve application layer, if we can exclude all other trivial problems (no space left on hdd etc. :)

Cheers,
István
Re: [PVE-User] zfs snapshots issue
On 8 Jul 2015, at 00:47, lyt_yudi lyt_y...@icloud.com wrote:
>> how much memory do you have in your vm ?
>
> balloon: 4096
> bootdisk: virtio0
> cores: 4
> cpuunits: 10
> hotplug: disk,network,usb
> iothread: 1
> memory: 8192
> name: base
> onboot: 1
> ostype: l26
> smbios1: uuid=d30d103a-183b-4ef9-b66a-5456bf150ee2
> sockets: 1
> vga: std
> virtio0: k122102:vm-100-disk-1,discard=on,size=32G

I have reported the bug: https://bugzilla.proxmox.com/show_bug.cgi?id=655

Thanks.