Re: [PVE-User] Undo pvecm

2016-07-06 Thread William Gnann
On Wed, Jul 06, 2016 at 08:32:34AM +0200, Eneko Lacunza wrote:
> >Should I undo the cluster configuration since it will take a time to restore 
> >the other node?
> If you delete the failed node from the cluster, then you MUST reinstall it
> before powering on. If you intend to resume its operations normally without
> reinstalling, then don't remove it.
Ok.

> >Finally, was it a bad idea to use the cluster configuration (without HA) to 
> >concentrate the machine administration?
> If you can, just add a simple machine to the cluster, so that it really has
> 3 nodes for voting and you don't have quorum problems when one machine is
> down. You can even use a NUC or an old machine.
Hmmm. I think I will add a third machine then.

Thank you!

> 
> Cheers
> Eneko
> 
> -- 
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943493611
>   943324914
> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
William Gnann - SI
Analista de Sistemas
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-06 Thread Michael Rasmussen
On Wed, 6 Jul 2016 05:12:58 -0300
Gilberto Nunes  wrote:

> hum... I miss this part too... Why create a bridge over a bond???
> 
That is the preferred way in Proxmox.
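
For illustration, such a setup in /etc/network/interfaces looks roughly like
this (interface names and addresses are placeholders; option spellings may
differ slightly between ifupdown versions):

auto bond0
iface bond0 inet manual
	slaves eth0 eth1
	bond_miimon 100
	bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
	address 192.168.1.10
	netmask 255.255.255.0
	gateway 192.168.1.1
	bridge_ports bond0
	bridge_stp off
	bridge_fd 0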

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael <at> rasmussen <dot> cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir <at> datanom <dot> net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir <at> miras <dot> org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
If you find a solution and become attached to it, the solution may
become your next problem.


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox storage -> adding ceph storage monhost issue

2016-07-06 Thread Dominik Csapak

On 07/06/2016 03:03 PM, Wolfgang Bumiller wrote:

On Wed, Jul 06, 2016 at 02:16:41PM +0200, Alwin Antreich wrote:

Hi all,

there is an issue when adding IPs on the monhost line in the storage.cfg file
when they are formatted with commas instead of spaces as the separator
between IPs.


We expect semicolons (';') (though spaces seem to work, too, and commas
work everywhere except with qemu). We should probably enforce semicolons
via the schema definition and improve the documentation & error messages
there.



I think we should split the list of monhosts (with split_list) and then
use the semicolon in the command. This way no existing config with
spaces is made unusable and everyone can use their preferred style of
separating.
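
For illustration, a minimal sketch of that idea (variable names such as $scfg
are assumptions, not the actual plugin code) could look like:

use PVE::Tools;

# accept ',', ';' or whitespace in the configured monhost value ...
my @monhosts = PVE::Tools::split_list($scfg->{monhost});
# ... and always hand qemu a semicolon-separated list
my $mon_host = join(';', @monhosts);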

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox storage -> adding ceph storage monhost issue

2016-07-06 Thread Alwin Antreich
Hi Wolfgang,

On 07/06/2016 03:03 PM, Wolfgang Bumiller wrote:
> On Wed, Jul 06, 2016 at 02:16:41PM +0200, Alwin Antreich wrote:
>> Hi all,
>>
>> there is an issue when adding IPs in the storage.cfg file at the line 
>> monhost, when formatted with commas as opposed to
>> spaces as separator between IPs.
> 
> We expect semicolons (';') (though spaces seem to work, too, and commas
> work everywhere except with qemu). We should probably enforce semicolons
> via the schema definition and improve the documentation & error messages
> there.
> 
>> rbd: rbd
>>  monhost 192.168.4.50,192.168.4.51,192.168.4.52
>>  content images,rootdir
>>  krbd
>>  username admin
>>  pool rbd
>>
>> rbd: rbd2
>>  monhost 192.168.4.50 192.168.4.51 192.168.4.52
>>  content images
>>  username admin
>>  pool rbd2
>>
>>
>> == error message with comma-separated list of IPs ==
>> kvm: -drive
>> file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=none,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap:
>> Block format 'raw' used by device 'drive-virtio0' doesn't support the option 
>> '192.168.4.51'
>>
>> Is this issue a Proxmox or Ceph one?

Thanks to Markus & Holger, I got that. :-)

I have added the storage through the GUI with all those notations. By error
message, do you also mean one that appears when adding the storage through
the GUI?

>>
>> Thanks.
>>
>> Cheers,
>> Alwin
>>
>> == pveversion -v ==
>> proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
>> pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
>> pve-kernel-4.4.6-1-pve: 4.4.6-48
>> pve-kernel-4.2.6-1-pve: 4.2.6-36
>> pve-kernel-4.2.8-1-pve: 4.2.8-41
>> pve-kernel-4.4.10-1-pve: 4.4.10-54
>> lvm2: 2.02.116-pve2
>> corosync-pve: 2.3.5-2
>> libqb0: 1.0-1
>> pve-cluster: 4.0-42
>> qemu-server: 4.0-81
>> pve-firmware: 1.1-8
>> libpve-common-perl: 4.0-68
>> libpve-access-control: 4.0-16
>> libpve-storage-perl: 4.0-55
>> pve-libspice-server1: 0.12.5-2
>> vncterm: 1.2-1
>> pve-qemu-kvm: 2.5-19
>> pve-container: 1.0-68
>> pve-firewall: 2.0-29
>> pve-ha-manager: 1.0-32
>> ksm-control-daemon: 1.2-1
>> glusterfs-client: 3.5.2-2+deb8u2
>> lxc-pve: 1.1.5-7
>> lxcfs: 2.0.0-pve2
>> cgmanager: 0.39-pve1
>> criu: 1.6.0-1
>> zfsutils: 0.6.5-pve9~jessie
>> ceph: 0.94.7-1~bpo80+1
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> 

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Alwin Antreich
Hi Holger,
hi Markus,

On 07/06/2016 02:47 PM, Markus Dellermann wrote:
> On Wednesday, 6 July 2016 at 14:31:48 CEST, Alwin Antreich wrote:
>> Hi Markus,
>>
>> On 07/06/2016 02:06 PM, Markus Dellermann wrote:
>>> Hi..
>>>
>>> On Wednesday, 6 July 2016 at 13:56:21 CEST, Alwin Antreich wrote:
 Hi,

 On 07/06/2016 09:33 AM, Eneko Lacunza wrote:
> Hi,
>
>> On 06/07/16 at 09:26, Alwin Antreich wrote:
>> On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
>>> Hi Alwin,
>>>
>>> In Proxmox Ceph client integration is done using librbd, not krbd.
>>> Stripe parameters can't be defined from Proxmox GUI.>>
>>
>> If I understood correctly, this depends, if you add a ceph pool to
>> proxmox with the KRBD option enabled or not. I know that these settings
>> aren't done through the GUI, as you need to create the image with ceph
>> tools manually.>
>
> Sorry, I didn't recall the "new" krbd checkbox in storage config, never
> used that myself. Why are you using it?

 For LXC it's required, as this maps the rbd image, so it can be mounted
 by
 lxc.

>>> Why are you changing striping parameters?
>>
>> We have some write intense tasks to perform and stripping can boost
>> write
>> performance. To test this I would like to add such a disk to a VM.
>
> I would do as follows:
> - Create the VM rbd disk from GUI
> - Then from CLI, remove rbd volume and recreate with the same name and
> the
> striping parameters you want.

 That's the way I went, but the disk isn't picked up and the only thing I
 see is the timeout.

> Are you using SSDs for journals?
>
>
> Cheers

 It is working now, but in the end, I found two different issues. The
 first
 one, had to do with the VM I was testing, once I tried the same setup
 with
 a different VM it gave me the second issue. Apparently there is a
 difference between mon_host address list formatting with comma or space
 between IPs.

 ==with KRBD==
 monhost 192.168.4.50,192.168.4.51,192.168.4.52
 ==w/o KRBD==
 monhost 192.168.4.50 192.168.4.51 192.168.4.52
>>>
>>> You should use semicolon
>>
>> But this gives me the error message, when I try to mount the image. I
>> created a separate email on this to follow up.
>>
> 
> You have tried this:
> monhost 192.168.4.50;192.168.4.51;192.168.4.52

Thanks for the clarification.

> 
> ?
> 
>> Thanks.
>>
 ==error message==
 kvm: -drive
 file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.
 52:
 id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if
 =non
 e,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,det
 ect- zeroes=unmap: Block format 'raw' used by device 'drive-virtio0'
 doesn't support the option '192.168.4.51'

 Now, is this a ceph issue or proxmox? I will send a new email to follow
 up
 on this separately.

Still, the GUI allows all of these notations and adds the storage (incl.
content browsing). So the Proxmox GUI should catch the wrong notation,
shouldn't it?


 Cheers,
 Alwin
>>>
>>> Markus
>>>
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>> Cheers,
>> Alwin
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox storage -> adding ceph storage monhost issue

2016-07-06 Thread Wolfgang Bumiller
On Wed, Jul 06, 2016 at 02:16:41PM +0200, Alwin Antreich wrote:
> Hi all,
> 
> there is an issue when adding IPs in the storage.cfg file at the line 
> monhost, when formatted with commas as opposed to
> spaces as separator between IPs.

We expect semicolons (';') (though spaces seem to work, too, and commas
work everywhere except with qemu). We should probably enforce semicolons
via the schema definition and improve the documentation & error messages
there.
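
For reference, the semicolon-separated form of the monhost line from the
quoted config below would look like this (only the separator changed):

rbd: rbd2
monhost 192.168.4.50;192.168.4.51;192.168.4.52
content images
username admin
pool rbd2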

> rbd: rbd
>   monhost 192.168.4.50,192.168.4.51,192.168.4.52
>   content images,rootdir
>   krbd
>   username admin
>   pool rbd
> 
> rbd: rbd2
>   monhost 192.168.4.50 192.168.4.51 192.168.4.52
>   content images
>   username admin
>   pool rbd2
> 
> 
> == error message with comma-separated list of IPs ==
> kvm: -drive
> file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=none,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap:
> Block format 'raw' used by device 'drive-virtio0' doesn't support the option 
> '192.168.4.51'
> 
> Is this issue a Proxmox or Ceph one?
> 
> Thanks.
> 
> Cheers,
> Alwin
> 
> == pveversion -v ==
> proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
> pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
> pve-kernel-4.4.6-1-pve: 4.4.6-48
> pve-kernel-4.2.6-1-pve: 4.2.6-36
> pve-kernel-4.2.8-1-pve: 4.2.8-41
> pve-kernel-4.4.10-1-pve: 4.4.10-54
> lvm2: 2.02.116-pve2
> corosync-pve: 2.3.5-2
> libqb0: 1.0-1
> pve-cluster: 4.0-42
> qemu-server: 4.0-81
> pve-firmware: 1.1-8
> libpve-common-perl: 4.0-68
> libpve-access-control: 4.0-16
> libpve-storage-perl: 4.0-55
> pve-libspice-server1: 0.12.5-2
> vncterm: 1.2-1
> pve-qemu-kvm: 2.5-19
> pve-container: 1.0-68
> pve-firewall: 2.0-29
> pve-ha-manager: 1.0-32
> ksm-control-daemon: 1.2-1
> glusterfs-client: 3.5.2-2+deb8u2
> lxc-pve: 1.1.5-7
> lxcfs: 2.0.0-pve2
> cgmanager: 0.39-pve1
> criu: 1.6.0-1
> zfsutils: 0.6.5-pve9~jessie
> ceph: 0.94.7-1~bpo80+1
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Markus Dellermann
On Wednesday, 6 July 2016 at 14:31:48 CEST, Alwin Antreich wrote:
> Hi Markus,
> 
> On 07/06/2016 02:06 PM, Markus Dellermann wrote:
> > Hi..
> > 
> > On Wednesday, 6 July 2016 at 13:56:21 CEST, Alwin Antreich wrote:
> >> Hi,
> >> 
> >> On 07/06/2016 09:33 AM, Eneko Lacunza wrote:
> >>> Hi,
> >>> 
> >>> On 06/07/16 at 09:26, Alwin Antreich wrote:
>  On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
> > Hi Alwin,
> > 
> > In Proxmox Ceph client integration is done using librbd, not krbd.
> > Stripe parameters can't be defined from Proxmox GUI.>>
>  
>  If I understood correctly, this depends, if you add a ceph pool to
>  proxmox with the KRBD option enabled or not. I know that these settings
>  aren't done through the GUI, as you need to create the image with ceph
>  tools manually.>
> >>> 
> >>> Sorry, I didn't recall the "new" krbd checkbox in storage config, never
> >>> used that myself. Why are you using it?
> >> 
> >> For LXC it's required, as this maps the rbd image, so it can be mounted
> >> by
> >> lxc.
> >> 
> > Why are you changing striping parameters?
>  
>  We have some write intense tasks to perform and stripping can boost
>  write
>  performance. To test this I would like to add such a disk to a VM.
> >>> 
> >>> I would do as follows:
> >>> - Create the VM rbd disk from GUI
> >>> - Then from CLI, remove rbd volume and recreate with the same name and
> >>> the
> >>> striping parameters you want.
> >> 
> >> That's the way I went, but the disk isn't picked up and the only thing I
> >> see is the timeout.
> >> 
> >>> Are you using SSDs for journals?
> >>> 
> >>> 
> >>> Cheers
> >> 
> >> It is working now, but in the end, I found two different issues. The
> >> first
> >> one, had to do with the VM I was testing, once I tried the same setup
> >> with
> >> a different VM it gave me the second issue. Apparently there is a
> >> difference between mon_host address list formatting with comma or space
> >> between IPs.
> >> 
> >> ==with KRBD==
> >> monhost 192.168.4.50,192.168.4.51,192.168.4.52
> >> ==w/o KRBD==
> >> monhost 192.168.4.50 192.168.4.51 192.168.4.52
> > 
> > You should use semicolon
> 
> But this gives me the error message, when I try to mount the image. I
> created a separate email on this to follow up.
> 

You have tried this:
monhost 192.168.4.50;192.168.4.51;192.168.4.52

?

> Thanks.
> 
> >> ==error message==
> >> kvm: -drive
> >> file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.
> >> 52:
> >> id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if
> >> =non
> >> e,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,det
> >> ect- zeroes=unmap: Block format 'raw' used by device 'drive-virtio0'
> >> doesn't support the option '192.168.4.51'
> >> 
> >> Now, is this a ceph issue or proxmox? I will send a new email to follow
> >> up
> >> on this separately.
> >> 
> >> Cheers,
> >> Alwin
> > 
> > Markus
> > 
> >> ___
> >> pve-user mailing list
> >> pve-user@pve.proxmox.com
> >> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > 
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> Cheers,
> Alwin
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Alwin Antreich
Hi Markus,

On 07/06/2016 02:06 PM, Markus Dellermann wrote:
> Hi..
> On Wednesday, 6 July 2016 at 13:56:21 CEST, Alwin Antreich wrote:
>> Hi,
>>
>> On 07/06/2016 09:33 AM, Eneko Lacunza wrote:
>>> Hi,
>>>
>>> On 06/07/16 at 09:26, Alwin Antreich wrote:
 On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
> Hi Alwin,
>
> In Proxmox Ceph client integration is done using librbd, not krbd.
> Stripe parameters can't be defined from Proxmox GUI.>> 
 If I understood correctly, this depends, if you add a ceph pool to
 proxmox with the KRBD option enabled or not. I know that these settings
 aren't done through the GUI, as you need to create the image with ceph
 tools manually.> 
>>> Sorry, I didn't recall the "new" krbd checkbox in storage config, never
>>> used that myself. Why are you using it?
>> For LXC it's required, as this maps the rbd image, so it can be mounted by
>> lxc.
> Why are you changing striping parameters?

 We have some write intense tasks to perform and stripping can boost write
 performance. To test this I would like to add such a disk to a VM.
>>>
>>> I would do as follows:
>>> - Create the VM rbd disk from GUI
>>> - Then from CLI, remove rbd volume and recreate with the same name and the
>>> striping parameters you want.
>> That's the way I went, but the disk isn't picked up and the only thing I see
>> is the timeout.
>>> Are you using SSDs for journals?
>>>
>>>
>>> Cheers
>>
>> It is working now, but in the end, I found two different issues. The first
>> one, had to do with the VM I was testing, once I tried the same setup with
>> a different VM it gave me the second issue. Apparently there is a
>> difference between mon_host address list formatting with comma or space
>> between IPs.
>>
>> ==with KRBD==
>> monhost 192.168.4.50,192.168.4.51,192.168.4.52
>> ==w/o KRBD==
>> monhost 192.168.4.50 192.168.4.51 192.168.4.52
>>
> 
> You should use semicolon 

But this gives me the error message when I try to mount the image. I created a
separate email to follow up on this.

Thanks.

> 
> 
> 
>> ==error message==
>> kvm: -drive
>> file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:
>> id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=non
>> e,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-
>> zeroes=unmap: Block format 'raw' used by device 'drive-virtio0' doesn't
>> support the option '192.168.4.51'
>>
>> Now, is this a ceph issue or proxmox? I will send a new email to follow up
>> on this separately.
>>
>> Cheers,
>> Alwin
>>
> 
> Markus
> 
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox storage -> adding ceph storage monhost issue

2016-07-06 Thread Alwin Antreich
Hi all,

there is an issue when adding IPs on the monhost line in the storage.cfg file
when they are formatted with commas instead of spaces as the separator
between IPs.

rbd: rbd
monhost 192.168.4.50,192.168.4.51,192.168.4.52
content images,rootdir
krbd
username admin
pool rbd

rbd: rbd2
monhost 192.168.4.50 192.168.4.51 192.168.4.52
content images
username admin
pool rbd2


== error message with comma-separated list of IPs ==
kvm: -drive
file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=none,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap:
Block format 'raw' used by device 'drive-virtio0' doesn't support the option 
'192.168.4.51'

Is this issue a Proxmox or Ceph one?

Thanks.

Cheers,
Alwin

== pveversion -v ==
proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-81
pve-firmware: 1.1-8
libpve-common-perl: 4.0-68
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-68
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
ceph: 0.94.7-1~bpo80+1
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Markus Dellermann
Hi..
On Wednesday, 6 July 2016 at 13:56:21 CEST, Alwin Antreich wrote:
> Hi,
> 
> On 07/06/2016 09:33 AM, Eneko Lacunza wrote:
> > Hi,
> > 
> > On 06/07/16 at 09:26, Alwin Antreich wrote:
> >> On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
> >>> Hi Alwin,
> >>> 
> >>> In Proxmox Ceph client integration is done using librbd, not krbd.
> >>> Stripe parameters can't be defined from Proxmox GUI.>> 
> >> If I understood correctly, this depends, if you add a ceph pool to
> >> proxmox with the KRBD option enabled or not. I know that these settings
> >> aren't done through the GUI, as you need to create the image with ceph
> >> tools manually.> 
> > Sorry, I didn't recall the "new" krbd checkbox in storage config, never
> > used that myself. Why are you using it?
> For LXC it's required, as this maps the rbd image, so it can be mounted by
> lxc.
> >>> Why are you changing striping parameters?
> >> 
> >> We have some write intense tasks to perform and stripping can boost write
> >> performance. To test this I would like to add such a disk to a VM.
> > 
> > I would do as follows:
> > - Create the VM rbd disk from GUI
> > - Then from CLI, remove rbd volume and recreate with the same name and the
> > striping parameters you want.
> That's the way I went, but the disk isn't picked up and the only thing I see
> is the timeout.
> > Are you using SSDs for journals?
> > 
> > 
> > Cheers
> 
> It is working now, but in the end, I found two different issues. The first
> one, had to do with the VM I was testing, once I tried the same setup with
> a different VM it gave me the second issue. Apparently there is a
> difference between mon_host address list formatting with comma or space
> between IPs.
> 
> ==with KRBD==
> monhost 192.168.4.50,192.168.4.51,192.168.4.52
> ==w/o KRBD==
> monhost 192.168.4.50 192.168.4.51 192.168.4.52
> 

You should use semicolon 



> ==error message==
> kvm: -drive
> file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:
> id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=non
> e,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-
> zeroes=unmap: Block format 'raw' used by device 'drive-virtio0' doesn't
> support the option '192.168.4.51'
> 
> Now, is this a ceph issue or proxmox? I will send a new email to follow up
> on this separately.
> 
> Cheers,
> Alwin
> 

Markus

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Alwin Antreich
Hi,

On 07/06/2016 09:33 AM, Eneko Lacunza wrote:
> Hi,
> 
> On 06/07/16 at 09:26, Alwin Antreich wrote:
>> On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
>>> Hi Alwin,
>>>
>>> In Proxmox Ceph client integration is done using librbd, not krbd. Stripe 
>>> parameters can't be defined from Proxmox GUI.
>> If I understood correctly, this depends, if you add a ceph pool to proxmox 
>> with the KRBD option enabled or not. I know
>> that these settings aren't done through the GUI, as you need to create the 
>> image with ceph tools manually.
> Sorry, I didn't recall the "new" krbd checkbox in storage config, never used 
> that myself. Why are you using it?

For LXC it's required, as it maps the rbd image so it can be mounted by LXC.

>>
>>> Why are you changing striping parameters?
>> We have some write intense tasks to perform and stripping can boost write 
>> performance. To test this I would like to add
>> such a disk to a VM.
> I would do as follows:
> - Create the VM rbd disk from GUI
> - Then from CLI, remove rbd volume and recreate with the same name and the 
> striping parameters you want.

That's the way I went, but the disk isn't picked up and the only thing I see is 
the timeout.

> 
> Are you using SSDs for journals?
> 
> 
> Cheers
> 

It is working now, but in the end I found two different issues. The first one
had to do with the VM I was testing; once I tried the same setup with a
different VM, it gave me the second issue. Apparently there is a difference
in mon_host address list formatting with a comma or a space between IPs.

==with KRBD==
monhost 192.168.4.50,192.168.4.51,192.168.4.52
==w/o KRBD==
monhost 192.168.4.50 192.168.4.51 192.168.4.52

==error message==
kvm: -drive
file=rbd:rbd2/vm-101-disk-1:mon_host=192.168.4.50,192.168.4.51,192.168.4.52:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd2.keyring,if=none,id=drive-virtio0,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap:
Block format 'raw' used by device 'drive-virtio0' doesn't support the option 
'192.168.4.51'

Now, is this a ceph issue or proxmox? I will send a new email to follow up on 
this separately.

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-06 Thread Gilberto Nunes
Hum... I missed this part too... Why create a bridge over a bond?


2016-07-06 5:10 GMT-03:00 Nicolas Costes :

> On Tuesday, 5 July 2016 at 21:50:11, Michael Rasmussen wrote:
> > PS. it is a bad design to assign an address directly to the bond.
> > Instead create bridges over this bond and assign addresses to the
> > bridges.
>
> Interesting, can you explain, please ?
>
> I currently am running Ceph on my 3 Proxmox nodes, unsing 3 dedicated 2-
> interfaces LACP bonds. I configured the 3 IP adresses directly on the
> bonds,
> what do I risk ?
>
> Those 3x2 interfaces are on a dedicated VLAN and used only for Ceph. (no
> VM on
> them). I run PVE 3.4.
>
>
>
>
> --
> Nicolas Costes
> Responsable de parc informatique
> IUT de la Roche-sur-Yon
> Université de Nantes
> Tél.: 02 51 47 40 29
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 4.2 and 802.3ad bonding

2016-07-06 Thread Nicolas Costes
On Tuesday, 5 July 2016 at 21:50:11, Michael Rasmussen wrote:
> PS. it is a bad design to assign an address directly to the bond.
> Instead create bridges over this bond and assign addresses to the
> bridges.

Interesting, can you explain, please?

I am currently running Ceph on my 3 Proxmox nodes, using 3 dedicated
2-interface LACP bonds. I configured the 3 IP addresses directly on the
bonds; what do I risk?

Those 3x2 interfaces are on a dedicated VLAN and are used only for Ceph (no
VMs on them). I run PVE 3.4.




-- 
Nicolas Costes
Responsable de parc informatique
IUT de la Roche-sur-Yon
Université de Nantes
Tél.: 02 51 47 40 29
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Eneko Lacunza

Hi,

On 06/07/16 at 09:26, Alwin Antreich wrote:

On 07/06/2016 08:27 AM, Eneko Lacunza wrote:

Hi Alwin,

In Proxmox Ceph client integration is done using librbd, not krbd. Stripe 
parameters can't be defined from Proxmox GUI.

If I understood correctly, this depends, if you add a ceph pool to proxmox with 
the KRBD option enabled or not. I know
that these settings aren't done through the GUI, as you need to create the 
image with ceph tools manually.
Sorry, I didn't recall the "new" krbd checkbox in storage config, never 
used that myself. Why are you using it?



Why are you changing striping parameters?

We have some write-intensive tasks to perform and striping can boost write
performance. To test this I would like to add
such a disk to a VM.

I would do as follows:
- Create the VM rbd disk from GUI
- Then from CLI, remove rbd volume and recreate with the same name and 
the striping parameters you want.
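
A rough shell sketch of that procedure, reusing the pool from the examples in
this thread (the image name, size, and striping values are placeholders to
adjust to your setup):

# remove the GUI-created volume, then recreate it with the same name and striping
rbd -p rbd2 rm vm-101-disk-1
rbd -p rbd2 create vm-101-disk-1 --size 32768 --image-format 2 \
    --stripe-count 8 --stripe-unit 524288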


Are you using SSDs for journals?


Cheers

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
  943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Alwin Antreich
Hi Eneko,

On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
> Hi Alwin,
> 
> In Proxmox Ceph client integration is done using librbd, not krbd. Stripe 
> parameters can't be defined from Proxmox GUI.

If I understood correctly, this depends on whether you add a ceph pool to
Proxmox with the KRBD option enabled or not. I know that these settings
aren't done through the GUI, as you need to create the image with ceph
tools manually.

> Why are you changing striping parameters?

We have some write-intensive tasks to perform and striping can boost write
performance. To test this I would like to add such a disk to a VM.

> 
> Cheers
> Eneko
> 
> On 05/07/16 at 19:14, Alwin Antreich wrote:
>> Hi all,
>>
>> how can I create an image with the ceph striping feature and add the disk to 
>> a VM?
>>
>> Thanks in advance.
>>
>>
>> I can add an image via the rbd cli, but fail to activate it through proxmox 
>> (timeout). I manually added the disk file to
>> the VM  config under /etc/pve/qemu-server/. In proxmox the storage is added 
>> w/o KRBD.
>>
>> rbd -p rbd2 --image-features 3 --stripe-count 8 --stripe-unit 524288 --size 
>> 4194304 --image-format 2 create
>> vm-208014-disk-2
>>
>> http://docs.ceph.com/docs/hammer/man/8/rbd/
>>
>> ==syslog==
>> Jul  5 18:58:30 hermodr pvedaemon[1340]: worker exit
>> Jul  5 18:58:30 hermodr pvedaemon[3853]: worker 1340 finished
>> Jul  5 18:58:30 hermodr pvedaemon[3853]: starting 1 worker(s)
>> Jul  5 18:58:30 hermodr pvedaemon[3853]: worker 7328 started
>> Jul  5 18:59:01 hermodr pmxcfs[3327]: [status] notice: received log
>> Jul  5 18:59:05 hermodr pvestatd[3789]: status update time (300.209 seconds)
>> Jul  5 19:00:28 hermodr pveproxy[3535]: worker exit
>> Jul  5 19:00:28 hermodr pveproxy[30422]: worker 3535 finished
>> Jul  5 19:00:28 hermodr pveproxy[30422]: starting 1 worker(s)
>> Jul  5 19:00:28 hermodr pveproxy[30422]: worker 7581 started
>> Jul  5 19:02:37 hermodr pvedaemon[7328]:  update VM 208014: -ide3 
>> rbd2:vm-208014-disk-2
>> Jul  5 19:03:07 hermodr pveproxy[7581]: proxy detected vanished client 
>> connection
>> Jul  5 19:04:05 hermodr pvestatd[3789]: status update time (300.220 seconds)
>>
>>
>> ==packages on all cluster nodes==
>> proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
>> pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
>> pve-kernel-4.4.6-1-pve: 4.4.6-48
>> pve-kernel-4.2.6-1-pve: 4.2.6-36
>> pve-kernel-4.2.8-1-pve: 4.2.8-41
>> pve-kernel-4.4.10-1-pve: 4.4.10-54
>> lvm2: 2.02.116-pve2
>> corosync-pve: 2.3.5-2
>> libqb0: 1.0-1
>> pve-cluster: 4.0-42
>> qemu-server: 4.0-81
>> pve-firmware: 1.1-8
>> libpve-common-perl: 4.0-68
>> libpve-access-control: 4.0-16
>> libpve-storage-perl: 4.0-55
>> pve-libspice-server1: 0.12.5-2
>> vncterm: 1.2-1
>> pve-qemu-kvm: 2.5-19
>> pve-container: 1.0-68
>> pve-firewall: 2.0-29
>> pve-ha-manager: 1.0-32
>> ksm-control-daemon: 1.2-1
>> glusterfs-client: 3.5.2-2+deb8u2
>> lxc-pve: 1.1.5-7
>> lxcfs: 2.0.0-pve2
>> cgmanager: 0.39-pve1
>> criu: 1.6.0-1
>> zfsutils: 0.6.5-pve9~jessie
>> ceph: 0.94.7-1~bpo80+1
>>
>> Cheers,
>> Alwin
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> 
> 

Any hint on where to look next is welcome.

Thanks.

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Undo pvecm

2016-07-06 Thread Eneko Lacunza

On 05/07/16 at 21:53, William Gnann wrote:

Hi,

I have created a cluster with two nodes to concentrate the administration of my
machines within a single interface. But one of the machines had a hardware
problem and went down.

I ran "pvecm e 1" to bring back the master node to quorate state and excluded 
the node with hardware problem.
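
For reference, that roughly corresponds to the following commands on the
remaining node (shown with the long-form subcommand; "node2" is a placeholder
for the failed node's name, and delnode is only appropriate if that node will
be reinstalled before it rejoins):

pvecm expected 1     # lower the expected vote count so the single node is quorate again
pvecm delnode node2  # remove the failed node from the cluster configuration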

Should I undo the cluster configuration, since it will take some time to restore
the other node?
If you delete the failed node from the cluster, then you MUST reinstall
it before powering on. If you intend to resume its operations normally
without reinstalling, then don't remove it.

Finally, was it a bad idea to use the cluster configuration (without HA) to 
concentrate the machine administration?
If you can, just add a simple machine to the cluster, so that it really 
has 3 nodes for voting and you don't have quorum problems when one 
machine is down. You can even use a NUC or an old machine.
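
As a sketch, joining such a third machine is a single command run on the new
node, pointing at an existing cluster member (the IP below is just a
placeholder):

pvecm add 192.168.1.10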


Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
  943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph RBD image striping

2016-07-06 Thread Eneko Lacunza

Hi Alwin,

In Proxmox, Ceph client integration is done using librbd, not krbd.
Stripe parameters can't be defined from the Proxmox GUI. Why are you
changing striping parameters?


Cheers
Eneko

On 05/07/16 at 19:14, Alwin Antreich wrote:

Hi all,

how can I create an image with the ceph striping feature and add the disk to a 
VM?

Thanks in advance.


I can add an image via the rbd CLI, but fail to activate it through Proxmox
(timeout). I manually added the disk file to the VM config under
/etc/pve/qemu-server/. In Proxmox the storage is added w/o KRBD.

rbd -p rbd2 --image-features 3 --stripe-count 8 --stripe-unit 524288 --size 
4194304 --image-format 2 create vm-208014-disk-2

http://docs.ceph.com/docs/hammer/man/8/rbd/
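
For context, the manual edit mentioned above amounts to a line like the
following in /etc/pve/qemu-server/208014.conf (it matches the syslog entry
below; the exact bus/slot may differ):

ide3: rbd2:vm-208014-disk-2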

==syslog==
Jul  5 18:58:30 hermodr pvedaemon[1340]: worker exit
Jul  5 18:58:30 hermodr pvedaemon[3853]: worker 1340 finished
Jul  5 18:58:30 hermodr pvedaemon[3853]: starting 1 worker(s)
Jul  5 18:58:30 hermodr pvedaemon[3853]: worker 7328 started
Jul  5 18:59:01 hermodr pmxcfs[3327]: [status] notice: received log
Jul  5 18:59:05 hermodr pvestatd[3789]: status update time (300.209 seconds)
Jul  5 19:00:28 hermodr pveproxy[3535]: worker exit
Jul  5 19:00:28 hermodr pveproxy[30422]: worker 3535 finished
Jul  5 19:00:28 hermodr pveproxy[30422]: starting 1 worker(s)
Jul  5 19:00:28 hermodr pveproxy[30422]: worker 7581 started
Jul  5 19:02:37 hermodr pvedaemon[7328]:  update VM 208014: -ide3 
rbd2:vm-208014-disk-2
Jul  5 19:03:07 hermodr pveproxy[7581]: proxy detected vanished client 
connection
Jul  5 19:04:05 hermodr pvestatd[3789]: status update time (300.220 seconds)


==packages on all cluster nodes==
proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-81
pve-firmware: 1.1-8
libpve-common-perl: 4.0-68
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-68
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
ceph: 0.94.7-1~bpo80+1

Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
  943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user