[PVE-User] How to configure the best for CEPH

2016-03-19 Thread Jean-Laurent Ivars
Hello everyone,

First of all, I am sorry if my English is not very good, but hopefully you will
understand what I mean (you know the reputation of us French people…).

I would be very happy if some people took the time to read this email, and even
more so if I get answers.

I currently have a 2-host cluster set up with ZFS, replicated between the hosts
with the pvesync script among other things, and my VMs are running on these hosts
for now, but I am impatient to migrate to my new infrastructure. I decided to
change my infrastructure because I really would like to take advantage of CEPH
for replication, the ability to expand, live migration and maybe even a high
availability setup.

After having read a lot of documentation, books and forums, I decided to go with
CEPH storage, which seems to be the way to go for me.

My servers are hosted by OVH and, from what I read and with the budget I have,
the best option with CEPH storage in mind seemed to be the following servers:
https://www.ovh.com/fr/serveurs_dedies/details-servers.xml?range=HOST=2016-HOST-32H

with the following storage options: no HW RAID, 2x 300GB SSD and 2x 2TB HDD.

One of the reasons I chose these models is the 10Gb VRACK option, as I understood
that CEPH needs a fast network to be efficient. Of course, in a perfect world the
best would be to have a lot of disks for OSDs, two more SSDs for my system and
two bonded 10Gb NICs, but this is the closest I can afford in the OVH product
range.

I have already installed the cluster, set up different VLANs for cluster and
storage traffic, set the hosts files and installed CEPH. Everything went
seamlessly, except that the OVH installation creates an MBR partition table on
the SSDs while CEPH needs GPT, but I managed to convert the partition tables, so
I thought I was all set for the CEPH configuration.
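One way to do such a conversion, as a rough sketch only (the device name /dev/sda
is just an example, sgdisk comes from the gdisk package, and the kernel keeps
using the old table until it is re-read or the host is rebooted):

root@pvegra1 ~ # sgdisk --print /dev/sda      # show the current (MBR) layout
root@pvegra1 ~ # sgdisk --mbrtogpt /dev/sda   # convert MBR to GPT in place (same as sgdisk -g)
root@pvegra1 ~ # partprobe /dev/sda           # ask the kernel to re-read the partition table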

For now, my partitioning scheme is the following (the full message was rejected
as too big for the mailing list, so here is a link):
https://www.ipgenius.fr/tools/pveceph.png


I know that it would be better to give CEPH the whole disks, but I have to put
my system somewhere… I was thinking that even if it's not ideal (I can't afford
more), this setup would work… So I tried to create the OSDs with my SSD journal
partition using the appropriate command, but it didn't seem to work, and I assume
it's because CEPH doesn't want partitions but entire drives…

root@pvegra1 ~ # pveceph createosd /dev/sdc -journal_dev /dev/sda4
create OSD on /dev/sdc (xfs)
using device '/dev/sda4' for journal
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
WARNING:ceph-disk:Journal /dev/sda4 was not prepared with ceph-disk. Symlinking 
directly.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=122094597 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=488378385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=238466, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

I saw the following threads:
https://forum.proxmox.com/threads/ceph-server-feedback.17909/
https://forum.proxmox.com/threads/ceph-server-why-block-devices-and-not-partitions.17863/

But this kind of setup seems to suffer from performance issues and is not
officially supported, and I am not very comfortable with that: at the moment I
only have a community subscription from Proxmox, but I want to be able to move
to a different plan to get support from them if I need it, and if I go this way
I'm afraid they will tell me it's an unsupported configuration.

OVH can provide USB keys, so I could install the system on one of those and give
the whole disks to CEPH, but I think that is not supported either. Moreover, I
fear for performance and long-term stability with this solution.

Maybe I could use one SSD for the system and journal partitions (but again, it's
a mix that is not really supported) and dedicate the other SSD to CEPH… but with this

Re: [PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

2016-03-19 Thread Michael Rasmussen
On Fri, 18 Mar 2016 17:59:09 +0300
Mikhail  wrote:

> 
> 
> So I guess this has something to do with IET.
> 
I have just tested with a server containing 2 disks, both
zfs-over-iscsi to a solaris server. It works as expected so I think
this is a problem with ietd.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
FORTUNE PROVIDES QUESTIONS FOR THE GREAT ANSWERS: #19
A:  To be or not to be.
Q:  What is the square root of 4b^2?


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Thomas Lamprecht


On 03/18/2016 12:36 PM, Jean-Laurent Ivars wrote:
> You can let go it’s ok, I revert the conf file from my other node, I
> restarted the corosync service from the web interface and the folder
> is back.
>
> As you’re saying* pvecm expected 1 *is working despite what pvecm
> status says because both the hosts are online, I just made test
> (temporary deactivate cluster interface on the other node)

If you did something like "ifdown eth1", forget it, that won't work with
corosync and may actually cause problems.
And yes, it will *not* set expected votes lower if the cluster is fully up
and quorate!
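If there is no physical access to pull a plug, a rough equivalent, only as a
sketch, is to drop the corosync traffic with iptables instead of downing the
interface (the interface name eth1 and the default corosync UDP ports 5404/5405
are assumptions, adapt them to your setup):

root@due:~# iptables -A INPUT  -i eth1 -p udp --dport 5404:5405 -j DROP
root@due:~# iptables -A OUTPUT -o eth1 -p udp --dport 5404:5405 -j DROP
# run the test, then delete the rules again with -D instead of -A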

A real test from my side shows that is working:

root@due:~# pvecm s
Quorum information
--
Date: Fri Mar 18 13:15:42 2016
Quorum provider:  corosync_votequorum
Nodes:3
Node ID:  0x0002
Ring ID:  704
Quorate:  Yes

Votequorum information
--
Expected votes:   3
Highest expected: 3
Total votes:  3
Quorum:   2 
Flags:Quorate

Membership information
--
Nodeid  Votes Name
0x0001  1 10.10.10.1
0x0002  1 10.10.10.2 (local)
0x0003  1 10.10.10.3

*-> pull network plug here*

root@due:~# pvecm s
Quorum information
--
Date: Fri Mar 18 13:15:46 2016
Quorum provider:  corosync_votequorum
Nodes:1
Node ID:  0x0002
Ring ID:  708
Quorate:  No

Votequorum information
--
Expected votes:   3
Highest expected: 3
Total votes:  1
Quorum:   2 Activity blocked
Flags:   

Membership information
--
Nodeid  Votes Name
0x0002  1 10.10.10.2 (local)

root@due:~# pvecm e 1

root@due:~# pvecm s
Quorum information
--
Date: Fri Mar 18 13:15:51 2016
Quorum provider:  corosync_votequorum
Nodes:1
Node ID:  0x0002
Ring ID:  708
Quorate:  Yes

Votequorum information
--
Expected votes:   1
Highest expected: 1
Total votes:  1
Quorum:   1 
Flags:Quorate

Membership information
--
Nodeid  Votes Name
0x0002  1 10.10.10.2 (local)
root@due:~#


btw. using that to achieve some false sense of HA in a two-node cluster
should *never* be done in a production system; if something fails or does not
work as expected, do not blame us, we warned you. If you want HA, use at least
three nodes.

>
> and voilà :
>
> root@roubaix /etc/pve # pvecm status
> Quorum information
> --
> Date: Fri Mar 18 12:32:44 2016
> Quorum provider:  corosync_votequorum
> Nodes:1
> Node ID:  0x0001
> Ring ID:  1504
> Quorate:  No
>
> Votequorum information
> --
> Expected votes:   2
> Highest expected: 2
> Total votes:  1
> Quorum:   2 Activity blocked
> Flags:
>
> Membership information
> --
> Nodeid  Votes Name
> 0x0001  1 10.10.10.2 (local)
> root@roubaix /etc/pve # pvecm expected 1
> root@roubaix /etc/pve # pvecm status
> Quorum information
> --
> Date: Fri Mar 18 12:33:00 2016
> Quorum provider:  corosync_votequorum
> Nodes:1
> Node ID:  0x0001
> Ring ID:  1504
> Quorate:  Yes
>
> Votequorum information
> --
> Expected votes:   1
> Highest expected: 1
> Total votes:  1
> Quorum:   1  
> Flags:Quorate 
>
> Membership information
> --
> Nodeid  Votes Name
> 0x0001  1 10.10.10.2 (local)
> root@roubaix /etc/pve #  
>
> Thank you
>
> 
>
>   
>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to configure the best for CEPH

2016-03-19 Thread Jean-Laurent Ivars
Thank you again for your advice and recommendations.

Have a nice day :)


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 17 March 2016 at 11:28, Eneko Lacunza wrote:
> 
> Hi,
> 
> On 17/03/16 at 10:51, Jean-Laurent Ivars wrote:
>> On 16/03/16 at 20:39, Jean-Laurent Ivars wrote:
 I have a 2 host cluster setup with ZFS and replicated on each other with 
 pvesync script among other things and my VMs are running on these hosts 
 for now but I am impatient to be able to migrate on my new infrastructure. 
 I decided to change my infrastructure because I really would like to take 
 advantage of CEPH for replication, expanding abilities, live migration and 
 even maybe high availability setup.
 
 After having read a lot of documentations/books/forums, I decided to go 
 with CEPH storage which seem to be the way to go for me.
 
 My servers are hosted by OVH and from what I read, and with the budget I 
 have, the best options with CEPH storage in mind seemed to be the 
 following servers : 
 https://www.ovh.com/fr/serveurs_dedies/details-servers.xml?range=HOST=2016-HOST-32H
  
 
  
 With the following storage options : No HW Raid, 2X300Go SSD and 2X2To HDD
>>> About the SSD, what exact brand/model are they? I can't find this info on 
>>> OVH web.
>> 
>> The models are INTEL SSDSC2BB30, you can find information here : 
>> https://www.ovh.com/fr/serveurs_dedies/avantages-disques-ssd.xml 
>> 
>> They are datacenter SSD and they have the Power Loss Imminent protection.
> Ok, they should perform well for Ceph, I have one of those in a setup. You 
> should monitor their wear-out though, as they are rated only for 0.3 drive 
> writes per day.
>>> 
 
 I know that it would be better to give CEPH the whole disks but I have to 
 put my system somewhere… I was thinking that even if it’s not the best (i 
 can’t afford more), these settings would work… So I have tried to give 
 CEPH the OSDs with my SSD journal partition with the appropriate command 
 but it didn’t seem to work and I assume it's because CEPH don’t want 
 partitions but entire hard drive…
 
 root@pvegra1 ~ # pveceph createosd /dev/sdc -journal_dev /dev/sda4
 create OSD on /dev/sdc (xfs)
 using device '/dev/sda4' for journal
 Creating new GPT entries.
 GPT data structures destroyed! You may now partition the disk using fdisk 
 or
 other utilities.
 Creating new GPT entries.
 The operation has completed successfully.
 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
 device as the osd data
 WARNING:ceph-disk:Journal /dev/sda4 was not prepared with ceph-disk. 
 Symlinking directly.
 Setting name!
 partNum is 0
 REALLY setting name!
 The operation has completed successfully.
 meta-data=/dev/sdc1  isize=2048   agcount=4, agsize=122094597 
 blks
  =   sectsz=512   attr=2, projid32bit=1
  =   crc=0finobt=0
 data =   bsize=4096   blocks=488378385, imaxpct=5
  =   sunit=0  swidth=0 blks
 naming   =version 2  bsize=4096   ascii-ci=0 ftype=0
 log  =internal log   bsize=4096   blocks=238466, version=2
  =   sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none   extsz=4096   blocks=0, rtextents=0
 Warning: The kernel is still using the old partition table.
 The new table will be used at the next reboot.
 The operation has completed successfully.
 
 I saw the following threads : 
 https://forum.proxmox.com/threads/ceph-server-feedback.17909/ 
   
 https://forum.proxmox.com/threads/ceph-server-why-block-devices-and-not-partitions.17863/
  
 
 
 But this kind of setting seem to suffer performance issue and It’s not 
 officially supported and I am not feeling very well with that because at 
 the moment, I only took community subscription from Proxmox but I want to 
 be able to move on a different plan to get support from them if I need it 
 and if I go this way, I’m afraid they will say me it’s a non supported 
 configuration.
 
>> 
>> So you aren’t « shocked » I want to use 

Re: [PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

2016-03-19 Thread Mikhail
I think that if you're going to use FreeNAS with ZFS only for testing
purposes or for fun, then you will be fine. But if you're seriously
thinking about putting it into production, then I suggest you think twice
before it is too late.

From what I see, ZFS works best with a Solaris-based OS - that is, it has
native iSCSI support, it is all stable and so on. If you put ZFS on some
other OS, then you are in trouble.

I remember ~10 years back (or maybe even more, like 2004) when FreeBSD
introduced ZFS support. I had a chance to test it and was quickly
disappointed - in those days 4GB of RAM was like 128GB of RAM today,
and it was so unstable. So many years have passed, but we still see some
limitations or problems with ZFS on OSes other than Solaris.

On 03/18/2016 07:01 PM, Daniel Bayerdorffer wrote:
> 
> I'm in the process of researching FreeNAS for ZFS storage. I've read about several 
> good results on their forum. I know there are supposed to be issues with the 
> BSD iSCSI driver. I believe the upcoming version of FreeNAS will solve 
> that issue.
> 
> I'm just wondering what other people's thoughts are, that use ProxMox?
> 
> Thanks,
> Daniel
> 
> 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

2016-03-19 Thread Michael Rasmussen
On Fri, 18 Mar 2016 18:31:10 +0300
Mikhail  wrote:

> 
> So I think it is now about time to switch to old school LVM over iSCSI
> in my case, until I put some real data on this cluster..
> 
Before you drop this, why not try a Solaris-based solution?
I can recommend OmniOS.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get=0xE3E80917
--
/usr/games/fortune -es says:
God is love, but get it in writing.
-- Gypsy Rose Lee


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cannot boot on a Windows 7 VM from a full-cloned template

2016-03-19 Thread Dominik Csapak
Hi,

could you maybe send your configuration of the template?

(it should be under /etc/pve/qemu-server/106.conf)

I can reproduce the issue, but only when the cache mode of the disk is writeback
or writeback(unsafe)
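As a quick check, the cache mode shows up in the disk line of the VM/template
config; a sketch only (VMID 106 and the volume name are taken from the qemu-img
command below, the cache/size values shown here are purely illustrative):

root@pve:~# qm config 106 | grep virtio0
virtio0: local:106/base-106-disk-1.raw,cache=writeback,size=32G

It can be switched back to the default (no cache) in the GUI or by editing the
cache= option in /etc/pve/qemu-server/<vmid>.conf.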

regards
Dominik

> On March 16, 2016 at 7:15 PM Gaël Jobin  wrote:
> 
> Hi all,
> 
> I'm using Proxmox with the testing repo.
> 
> I successfully installed a Windows 7 VM and made a template of it. Then, I
> tried to create a new VM by cloning the previous template (full clone).
> Unfortunately, the new VM cannot boot Windows. On the other hand, with a
> "linked-clone", it works fine.
> 
> I noticed that the cloning was internally doing a "qemu-img convert". More
> precisely in my case, "/usr/bin/qemu-img convert -p -f raw -O raw
> /var/lib/vz/images/106/base-106-disk-1.raw
> /var/lib/vz/images/109/vm-109-disk-1.raw".
> 
> I did the same command manually and was quite surprised to see that the
> new disk has the exact same size but not the same MD5 hash (md5sum command).
> 
> Any idea why qemu-img corrupts the disk?
> 
> For the moment, I just manually "cp" the base disk to my newly created VM
> directory and it's working. Also, I tried to convert the base disk from raw to
> qcow2 and back qcow2 to raw and the new raw disk is booting fine ! The problem
> seems related to "raw to raw" conversion...
> 
> qemu-img version 2.5.0pve-qemu-kvm_2.5-9
> 
> Thank you for your help,
> 
> Regards,
> Gaël
> 


 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to configure the best for CEPH

2016-03-19 Thread Eneko Lacunza

Hi Jean-Laurent,

On 16/03/16 at 20:39, Jean-Laurent Ivars wrote:
I have a 2 host cluster setup with ZFS and replicated on each other 
with pvesync script among other things and my VMs are running on these 
hosts for now but I am impatient to be able to migrate on my new 
infrastructure. I decided to change my infrastructure because I really 
would like to take advantage of CEPH for replication, expanding 
abilities, live migration and even maybe high availability setup.


After having read a lot of documentations/books/forums, I decided to 
go with CEPH storage which seem to be the way to go for me.


My servers are hosted by OVH and from what I read, and with the budget 
I have, the best options with CEPH storage in mind seemed to be the 
following servers : 
https://www.ovh.com/fr/serveurs_dedies/details-servers.xml?range=HOST=2016-HOST-32H 


With the following storage options : No HW Raid, 2X300Go SSD and 2X2To HDD
About the SSD, what exact brand/model are they? I can't find this info 
on OVH web.


One of the reasons I choose these models is the 10Gb VRACK option and 
I understood that CEPH needs a fast network to be efficient. Of course 
in a perfect world, the best would be to have a lot of disks for OSDs, 
two more SSD for my system and 2 10Gb bonded NIC but this is the most 
approaching I can afford in the OVH product range.
In your configuration, I doubt very much you'll be able to leverage the 10Gb
NICs; I have a 3-node setup with 3 OSDs each in our office, with a 1 Gbit
network, and Ceph hardly uses 200-300 Mbps. Maybe you'll get a bit lower latency,
but that will be all.
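If you want to see what the cluster can actually push, a rough way to check
(pool name, duration, host names and the IP are just placeholders to adapt) is a
short rados bench while watching the storage NIC, plus an iperf run for the raw
link speed:

root@pve:~# rados bench -p rbd 60 write
root@node1:~# iperf -s                 # on one node
root@node2:~# iperf -c 10.10.10.1      # on another node, pointing at node1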


I already made the install of the cluster and set different VLANs for 
cluster and storage. Set the hosts files and installed CEPH. 
Everything went seamless except the fact that OVH installation create 
a MBR install on the SSD and CEPH needs a GPT one but I managed to 
convert the partition tables so now, I though I was all set for CEPH 
configuration.


For now, my partitioning scheme is the following (message rejected because too
big for the mailing list, so there is a link):
https://www.ipgenius.fr/tools/pveceph.png


Seems quite good, maybe having a bit more room for root filesystem would 
be good, you have 300GB of disk... :) Also see below.




I know that it would be better to give CEPH the whole disks but I have 
to put my system somewhere… I was thinking that even if it’s not the 
best (i can’t afford more), these settings would work… So I have tried 
to give CEPH the OSDs with my SSD journal partition with the 
appropriate command but it didn’t seem to work and I assume it's 
because CEPH don’t want partitions but entire hard drive…


root@pvegra1 ~ # pveceph createosd /dev/sdc -journal_dev /dev/sda4
create OSD on /dev/sdc (xfs)
using device '/dev/sda4' for journal
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using 
fdisk or

other utilities.
Creating new GPT entries.
The operation has completed successfully.
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the 
same device as the osd data
WARNING:ceph-disk:Journal /dev/sda4 was not prepared with ceph-disk. 
Symlinking directly.

Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdc1 isize=2048   agcount=4, agsize=122094597 blks
 = sectsz=512   attr=2, projid32bit=1
 = crc=0finobt=0
data = bsize=4096   blocks=488378385, imaxpct=5
 = sunit=0  swidth=0 blks
naming   =version 2 bsize=4096   ascii-ci=0 ftype=0
log  =internal log bsize=4096   blocks=238466, version=2
 = sectsz=512   sunit=0 blks, lazy-count=1
realtime =none extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

I saw the following threads :
https://forum.proxmox.com/threads/ceph-server-feedback.17909/
https://forum.proxmox.com/threads/ceph-server-why-block-devices-and-not-partitions.17863/

But this kind of setting seem to suffer performance issue and It’s not 
officially supported and I am not feeling very well with that because 
at the moment, I only took community subscription from Proxmox but I 
want to be able to move on a different plan to get support from them 
if I need it and if I go this way, I’m afraid they will say me it’s a 
non supported configuration.


OVH can provide USB keys so I could install the system on it and get 
my whole disks for CEPH, but I think it is not supported too. 
Moreover, I fear for performances and stability in the time with this 
solution.


Maybe I could use one SSD for the system and journal partitions (but 
again it’s a mix not really supported) and the other SSD dedicated to 
CEPH… but with this solution I loose my system RAID protection… and a 
lot of SSD space...


I’m a little bit confused about the best partitioning scheme and how 

Re: [PVE-User] Virtualize Windows 2012

2016-03-19 Thread Jean-Laurent Ivars
Hello,

You have to attach the drivers ISO to the VM, and during the installation choose
« Load driver » and pick the right driver version for your Windows version.

But you should always use virtio for disk and network, because it improves
performance a lot.

You should have a look here:
https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers

At the bottom of that page you will also find interesting links (network,
memory…).

Best regards,


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 19 March 2016 at 15:26, Luis G. Coralle wrote:
> 
> Hello everyone.
> 
> I have many linux with virtualized network and bus virtio disk qcow2 and 
> works wonders. I'm about to virtualize Windows Server 2012 r2 but never 
> virtualize one. Any recommendations with respect to virtual disk format or 
> raw qcow and bus type disc: virtio, ide, sata, scsi.
> 
> Thank you
> 
> -- 
> Luis G. Coralle
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Jean-Laurent Ivars
The former syntax was (the cluster.conf XML snippet was stripped by the list
archive):

I don't know what the right syntax is for the corosync file :(
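For a two-node cluster the votequorum man page documents a dedicated option, so
a sketch of the quorum section in /etc/pve/corosync.conf could look like the
following (two_node: 1 implies wait_for_all; this is only a sketch and, given
the warnings elsewhere in this thread, not a recommendation to run HA on two
nodes):

quorum {
  provider: corosync_votequorum
  two_node: 1
}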


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 18 March 2016 at 11:15, Jean-Laurent Ivars wrote:
> 
> Thank you for your answer but do you know how to tell corosync.conf, quorum 
> with one node ? (how to put the directive expected_votes=1 ?) 
> 
>   
> Jean-Laurent Ivars 
> Responsable Technique | Technical Manager
> 22, rue Robert - 13007 Marseille 
> Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
> Linkedin    |  Viadeo 
>    |  www.ipgenius.fr 
> 
>> On 18 March 2016 at 11:03, Mohamed Sadok Ben Jazia wrote:
>> 
>> I can answer the last part. corosync.conf => corosync/cman cluster 
>> configuration file (previous to PVE 4.x this file was called cluster.conf)
>> 
>> 
>> On 18 March 2016 at 10:57, Jean-Laurent Ivars wrote:
>> Hi list,
>> 
>> In the new cluster version (pve4) if i loose quorum (in a 2 nodes set un 
>> with one node down for example) I’m stuck.
>> 
>> Before pve4 I could send the following command pvecm e 1 and my host was 
>> fully working again but now it seems it doesn’t work this way (the command 
>> run without error but nothing happens)
>> 
>> Moreover, the new file for setting cluster (and quorum) isn’t cluster.conf 
>> anymore but corosync.conf but i don’t find where to set this in that file !
>> 
>> Does someone know the way to do ?
>> 
>> thank you for answer in advance :)
>> Best regards,
>> 
>>  
>> Jean-Laurent Ivars 
>> Responsable Technique | Technical Manager
>> 22, rue Robert - 13007 Marseille 
>> Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
>> Linkedin    |  Viadeo 
>>    |  www.ipgenius.fr 
>> 
>> 
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com 
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
>> 
>> 
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com 
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Virtualize Windows 2012

2016-03-19 Thread Luis G. Coralle
Hello everyone.

I have many Linux VMs with virtio network, a virtio disk bus and qcow2 disks,
and they work wonderfully. I'm about to virtualize Windows Server 2012 R2 but
have never virtualized one. Any recommendations regarding the virtual disk
format (raw or qcow2) and the disk bus type: virtio, ide, sata, scsi?

Thank you

-- 
Luis G. Coralle
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Virtualize Windows 2012

2016-03-19 Thread Ben Jazia Mohamed Sadok

Hello,
I installed Windows Server 2012 R2 without any problem from the ISO file,
with a virtio disk and network interface.


On 19/03/2016 at 15:26, Luis G. Coralle wrote:

Hello everyone.

I have many linux with virtualized network and bus virtio disk qcow2 
and works wonders. I'm about to virtualize Windows Server 2012 r2 but 
never virtualize one. Any recommendations with respect to virtual disk 
format or raw qcow and bus type disc: virtio, ide, sata, scsi.


Thank you

--
Luis G. Coralle


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

2016-03-19 Thread Mikhail
Hello,

I'm running 3-node cluster with latest PVE 4.1-1 community edition.
My shared storage is ZFS over iSCSI (ZFS storage server is Linux Debian
Jessie with IET).

There's a problem cloning a VM that has 2 (or possibly "2 or more") disks
attached to it in this setup. The problem is that one disk gets copied,
and then the "Clone" task fails with the message: TASK ERROR: clone failed: File
exists. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376.

There's no problem cloning the same VM if it has only one disk.

Here are the steps to reproduce (a CLI sketch follows the list):

1) create VM with 2 disks (both on shared storage)
2) shutdown VM
3) attempt to clone this VM to another VM
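For reference, step 3 from the command line might look like this (the target
VMID and name are arbitrary; --full requests a full clone instead of a linked
clone):

root@pm1:~# qm clone 8002 80123 --full --name rep-clone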

More details in my case:

1) VM 8002 - source VM to clone, here's output from storage server -

root@storage:/etc/iet# zfs list|grep 8002
rpool/vm-8002-disk-130.5G  6.32T  30.5G  -
rpool/vm-8002-disk-277.8G  6.32T  77.8G  -

root@storage:/etc/iet# cat /etc/iet/ietd.conf
Target iqn.2016-03.eu.myorg:rpool
Lun 1 Path=/dev/rpool/vm-1001-disk-2,Type=blockio
Lun 2 Path=/dev/rpool/vm-1002-disk-1,Type=blockio
Lun 7 Path=/dev/rpool/vm-8003-disk-1,Type=blockio
Lun 5 Path=/dev/rpool/vm-101-disk-1,Type=blockio
Lun 3 Path=/dev/rpool/vm-8002-disk-1,Type=blockio
Lun 6 Path=/dev/rpool/vm-8002-disk-2,Type=blockio
Lun 0 Path=/dev/rpool/vm-8201-disk-1,Type=blockio
Lun 8 Path=/dev/rpool/vm-8301-disk-1,Type=blockio
Lun 9 Path=/dev/rpool/vm-8301-disk-2,Type=blockio
Lun 10 Path=/dev/rpool/vm-8302-disk-1,Type=blockio
Lun 4 Path=/dev/rpool/vm-8001-disk-1,Type=blockio

root@storage:/etc/iet# cat /proc/net/iet/volume
tid:1 name:iqn.2016-03.eu.myorg:rpool
lun:1 state:0 iotype:blockio iomode:wt blocks:1048576000 blocksize:512
path:/dev/rpool/vm-1001-disk-2
lun:2 state:0 iotype:blockio iomode:wt blocks:67108864 blocksize:512
path:/dev/rpool/vm-1002-disk-1
lun:5 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
path:/dev/rpool/vm-101-disk-1
lun:3 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
path:/dev/rpool/vm-8002-disk-1
lun:6 state:0 iotype:blockio iomode:wt blocks:314572800 blocksize:512
path:/dev/rpool/vm-8002-disk-2
lun:0 state:0 iotype:blockio iomode:wt blocks:83886080 blocksize:512
path:/dev/rpool/vm-8201-disk-1
lun:8 state:0 iotype:blockio iomode:wt blocks:31457280 blocksize:512
path:/dev/rpool/vm-8301-disk-1
lun:9 state:0 iotype:blockio iomode:wt blocks:104857600 blocksize:512
path:/dev/rpool/vm-8301-disk-2
lun:10 state:0 iotype:blockio iomode:wt blocks:104857600 blocksize:512
path:/dev/rpool/vm-8302-disk-1
lun:12 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
path:/dev/rpool/vm-8091-disk-1
lun:4 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
path:/dev/rpool/vm-8001-disk-1



VM config:

root@pm1:/etc/pve/nodes/pm2/qemu-server# pwd
/etc/pve/nodes/pm2/qemu-server
root@pm1:/etc/pve/nodes/pm2/qemu-server# cat 8002.conf
boot: cdn
bootdisk: virtio0
cores: 1
ide2: isoimages:iso/systemrescuecd-x86-4.7.1.iso,media=cdrom,size=469942K
memory: 1024
name: rep
net0: virtio=32:38:38:39:39:33,bridge=vmbr0,tag=80
numa: 0
ostype: l26
smbios1: uuid=8b0b1ab8-d3e3-48ae-8834-edd0e68a3c0c
sockets: 1
virtio0: storage-1:vm-8002-disk-1,size=30G
virtio1: storage-1:vm-8002-disk-2,size=150G

Storage config:

virtio1: storage-1:vm-8002-disk-2,size=150G
root@pm1:/etc/pve/nodes/pm2/qemu-server# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
maxfiles 0
content vztmpl,rootdir,images,iso

nfs: isoimages
path /mnt/pve/isoimages
server 192.168.4.1
export /rpool/shared/isoimages
content iso
options vers=3
maxfiles 1

zfs: storage-1
pool rpool
blocksize 4k
iscsiprovider iet
portal 192.168.4.1
target iqn.2016-03.eu.myorg:rpool
sparse
nowritecache
content images

2) Attempting to clone 8002 to lets say 80123. Here's task output (cut
some % lines):

create full clone of drive virtio0 (storage-1:vm-8002-disk-1)
transferred: 0 bytes remaining: 32212254720 bytes total: 32212254720
bytes progression: 0.00 %
qemu-img: iSCSI Failure: SENSE KEY:ILLEGAL_REQUEST(5)
ASCQ:INVALID_OPERATION_CODE(0x2000)
transferred: 322122547 bytes remaining: 31890132173 bytes total:
32212254720 bytes progression: 1.00 %
transferred: 32212254720 bytes remaining: 0 bytes total: 32212254720
bytes progression: 100.00 %
transferred: 32212254720 bytes remaining: 0 bytes total: 32212254720
bytes progression: 100.00 %
create full clone of drive virtio1 (storage-1:vm-8002-disk-2)
TASK ERROR: clone failed: File exists. at
/usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376.

after that, "zfs list" on storage shows there's one volume on ZFS:

root@storage:/etc/iet# zfs list|grep 80123
rpool/vm-80123-disk-2 64K  6.32T64K  -

Obviously it was created by PVE. rpool/vm-80123-disk-2 was 

Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Fabian Grünbichler
> Jean-Laurent Ivars  hat am 18. März 2016 um 11:15
> geschrieben:
> 
> 
> Thank you for your answer but do you know how to tell corosync.conf, quorum
> with one node ? (how to put the directive expected_votes=1 ?) 

$ man pvecm

...
pvecm expected <votes>

  Tells corosync a new value of expected votes.

  <votes>  integer (1 - N)

    Expected votes
...


Regards,
Fabian

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to configure the best for CEPH

2016-03-19 Thread Eneko Lacunza

Hi,

On 17/03/16 at 10:51, Jean-Laurent Ivars wrote:

On 16/03/16 at 20:39, Jean-Laurent Ivars wrote:
I have a 2 host cluster setup with ZFS and replicated on each other 
with pvesync script among other things and my VMs are running on 
these hosts for now but I am impatient to be able to migrate on my 
new infrastructure. I decided to change my infrastructure because I 
really would like to take advantage of CEPH for replication, 
expanding abilities, live migration and even maybe high availability 
setup.


After having read a lot of documentations/books/forums, I decided to 
go with CEPH storage which seem to be the way to go for me.


My servers are hosted by OVH and from what I read, and with the 
budget I have, the best options with CEPH storage in mind seemed to 
be the following servers : 
https://www.ovh.com/fr/serveurs_dedies/details-servers.xml?range=HOST=2016-HOST-32H 

With the following storage options : No HW Raid, 2X300Go SSD and 
2X2To HDD
About the SSD, what exact brand/model are they? I can't find this 
info on OVH web.


The models are INTEL SSDSC2BB30, you can find information here : 
https://www.ovh.com/fr/serveurs_dedies/avantages-disques-ssd.xml

They are datacenter SSD and they have the Power Loss Imminent protection.
Ok, they should perform well for Ceph, I have one of those in a setup. 
You should monitor their wear-out though, as they are rated only for 0.3 
drive writes per day.
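A quick way to keep an eye on that (a sketch; the device name is an example and
the exact SMART attribute names vary per model/firmware, on Intel DC drives they
are usually Media_Wearout_Indicator and Total_LBAs_Written):

root@pve:~# smartctl -A /dev/sda | grep -i -E 'wearout|written'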




I know that it would be better to give CEPH the whole disks but I 
have to put my system somewhere… I was thinking that even if it’s 
not the best (i can’t afford more), these settings would work… So I 
have tried to give CEPH the OSDs with my SSD journal partition with 
the appropriate command but it didn’t seem to work and I assume it's 
because CEPH don’t want partitions but entire hard drive…


root@pvegra1 ~ # pveceph createosd /dev/sdc -journal_dev /dev/sda4
create OSD on /dev/sdc (xfs)
using device '/dev/sda4' for journal
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using 
fdisk or

other utilities.
Creating new GPT entries.
The operation has completed successfully.
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not 
the same device as the osd data
WARNING:ceph-disk:Journal /dev/sda4 was not prepared with ceph-disk. 
Symlinking directly.

Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=122094597 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=488378385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=238466, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

I saw the following threads :
https://forum.proxmox.com/threads/ceph-server-feedback.17909/
https://forum.proxmox.com/threads/ceph-server-why-block-devices-and-not-partitions.17863/

But this kind of setting seem to suffer performance issue and It’s 
not officially supported and I am not feeling very well with that 
because at the moment, I only took community subscription from 
Proxmox but I want to be able to move on a different plan to get 
support from them if I need it and if I go this way, I’m afraid they 
will say me it’s a non supported configuration.




So you aren’t « shocked » I want to use partitions instead of whole 
drives in my configuration ?
OSD journals are always a partition. :-) That is what Proxmox does from the
GUI: it creates a new partition for the journal on the journal drive; if you
don't choose a journal drive, then it creates 2 partitions on the OSD disk,
one for the journal and the other for data.
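If you want to double-check how an OSD ended up laid out (OSD id 0 is just an
example), ceph-disk can list the partitions it knows about, and the journal is
simply a symlink inside the OSD data directory:

root@pvegra1 ~ # ceph-disk list
root@pvegra1 ~ # ls -l /var/lib/ceph/osd/ceph-0/journal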


OVH can provide USB keys so I could install the system on it and get 
my whole disks for CEPH, but I think it is not supported too. 
Moreover, I fear for performances and stability in the time with 
this solution.


Maybe I could use one SSD for the system and journal partitions (but again,
it's a mix that is not really supported) and dedicate the other SSD to CEPH…
but with this solution I lose my system RAID protection… and a lot of SSD
space...


I'm a little bit confused about the best partitioning scheme and how to obtain
a configuration that is stable, supported, performant and wastes as little
space as possible.


Should I continue with my partitioning scheme, even if it's not the best
supported, since it seems the most appropriate in my case, or do I need to
completely rethink my install?


Can someone please give me advice? I'm all yours :)
Thanks a lot for anyone taking the 

Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Jean-Laurent Ivars
You can let it go, it's OK: I reverted the conf file from my other node,
restarted the corosync service from the web interface, and the folder is back.

As you're saying, pvecm expected 1 is working despite what pvecm status says,
because both hosts are online; I just made a test (temporarily deactivated the
cluster interface on the other node)

and voilà :

root@roubaix /etc/pve # pvecm status
Quorum information
--
Date: Fri Mar 18 12:32:44 2016
Quorum provider:  corosync_votequorum
Nodes:1
Node ID:  0x0001
Ring ID:  1504
Quorate:  No

Votequorum information
--
Expected votes:   2
Highest expected: 2
Total votes:  1
Quorum:   2 Activity blocked
Flags:

Membership information
--
Nodeid  Votes Name
0x0001  1 10.10.10.2 (local)
root@roubaix /etc/pve # pvecm expected 1
root@roubaix /etc/pve # pvecm status
Quorum information
--
Date: Fri Mar 18 12:33:00 2016
Quorum provider:  corosync_votequorum
Nodes:1
Node ID:  0x0001
Ring ID:  1504
Quorate:  Yes

Votequorum information
--
Expected votes:   1
Highest expected: 1
Total votes:  1
Quorum:   1  
Flags:Quorate 

Membership information
--
Nodeid  Votes Name
0x0001  1 10.10.10.2 (local)
root@roubaix /etc/pve #  

Thank you


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 18 March 2016 at 12:19, Dietmar Maurer wrote:
> 
> 
> 
>> On March 18, 2016 at 12:06 PM Jean-Laurent Ivars  
>> wrote:
>> 
>> 
>> better and better ! i tried the following option :
>> 
>> quorum {
>>  provider: corosync_votequorum
>>  expected_votes: 1
>> }
> 
> You really should not set such (dangerous) options unless
> you understand what you are doing ...
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Jean-Laurent Ivars
Thank you for your answer :)

I saw this, but it doesn't seem to work, or I don't understand how to use it
correctly?

root@roubaix /etc/pve # pvecm expected 1
root@roubaix /etc/pve # pvecm status
Quorum information
--
Date: Fri Mar 18 11:51:14 2016
Quorum provider:  corosync_votequorum
Nodes:2
Node ID:  0x0001
Ring ID:  1484
Quorate:  Yes

Votequorum information
--
Expected votes:   2
Highest expected: 2
Total votes:  2
Quorum:   2  
Flags:Quorate 

Membership information
--
Nodeid  Votes Name
0x0002  1 10.10.10.1
0x0001  1 10.10.10.2 (local)


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 18 March 2016 at 11:23, Fabian Grünbichler wrote:
> 
>> Jean-Laurent Ivars  hat am 18. März 2016 um 11:15
>> geschrieben:
>> 
>> 
>> Thank you for your answer but do you know how to tell corosync.conf, quorum
>> with one node ? (how to put the directive expected_votes=1 ?) 
> 
> $ man pvecm
> 
> ...
> pvecm expected <votes>
> 
>   Tells corosync a new value of expected votes.
> 
>   <votes>  integer (1 - N)
> 
>     Expected votes
> ...
> 
> 
> Regards,
> Fabian
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Jean-Laurent Ivars
Honestly, I don't see the point of your message: you basically show the same
thing as I did, except that you have three servers in your cluster. My servers
are hosted, so I'm very sorry but I can't pull or plug a cable…

Yes, in this configuration I only have a two-host cluster, and I know it is not
suited for activating HA; that's why I don't.

And as far as I know I didn't blame anyone, so I don't understand the tone of
your mail (saying hi, thanks or regards doesn't hurt, I assure you).
I'm seriously considering buying a subscription with a support level, and
you're not helping me...

Regards,


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

> On 18 March 2016 at 13:20, Thomas Lamprecht wrote:
> 
> 
> 
> On 03/18/2016 12:36 PM, Jean-Laurent Ivars wrote:
>> You can let go it’s ok, I revert the conf file from my other node, I 
>> restarted the corosync service from the web interface and the folder is back.
>> 
>> As you’re saying pvecm expected 1 is working despite what pvecm status says 
>> because both the hosts are online, I just made test (temporary deactivate 
>> cluster interface on the other node)
> 
> If you did something like "ifdown eth1" forget it , that wont work with 
> corosync, it may actual cause problems.
> And yes it will *not* set expected votes lower if the cluster is full up and 
> quorate!
> 
> A real test from my side shows that is working:
> 
> root@due:~# pvecm s
> Quorum information
> --
> Date: Fri Mar 18 13:15:42 2016
> Quorum provider:  corosync_votequorum
> Nodes:3
> Node ID:  0x0002
> Ring ID:  704
> Quorate:  Yes
> 
> Votequorum information
> --
> Expected votes:   3
> Highest expected: 3
> Total votes:  3
> Quorum:   2  
> Flags:Quorate 
> 
> Membership information
> --
> Nodeid  Votes Name
> 0x0001  1 10.10.10.1
> 0x0002  1 10.10.10.2 (local)
> 0x0003  1 10.10.10.3
> 
> *-> pull network plug here*
> 
> root@due:~# pvecm s
> Quorum information
> --
> Date: Fri Mar 18 13:15:46 2016
> Quorum provider:  corosync_votequorum
> Nodes:1
> Node ID:  0x0002
> Ring ID:  708
> Quorate:  No
> 
> Votequorum information
> --
> Expected votes:   3
> Highest expected: 3
> Total votes:  1
> Quorum:   2 Activity blocked
> Flags:
> 
> Membership information
> --
> Nodeid  Votes Name
> 0x0002  1 10.10.10.2 (local)
> 
> root@due:~# pvecm e 1
> 
> root@due:~# pvecm s
> Quorum information
> --
> Date: Fri Mar 18 13:15:51 2016
> Quorum provider:  corosync_votequorum
> Nodes:1
> Node ID:  0x0002
> Ring ID:  708
> Quorate:  Yes
> 
> Votequorum information
> --
> Expected votes:   1
> Highest expected: 1
> Total votes:  1
> Quorum:   1  
> Flags:Quorate 
> 
> Membership information
> --
> Nodeid  Votes Name
> 0x0002  1 10.10.10.2 (local)
> root@due:~# 
> 
> 
> btw. using that do a achieve some false sense of HA in a two node cluster 
> should *never* be done in a production system, if something fails or does not 
> work as expected do not blame us, we warned you. If you want HA use at least 
> three nodes.
> 
>> 
>> and voilà :
>> 
>> root@roubaix /etc/pve # pvecm status
>> Quorum information
>> --
>> Date: Fri Mar 18 12:32:44 2016
>> Quorum provider:  corosync_votequorum
>> Nodes:1
>> Node ID:  0x0001
>> Ring ID:  1504
>> Quorate:  No
>> 
>> Votequorum information
>> --
>> Expected votes:   2
>> Highest expected: 2
>> Total votes:  1
>> Quorum:   2 Activity blocked
>> Flags:
>> 
>> Membership information
>> --
>> Nodeid  Votes Name
>> 0x0001  1 10.10.10.2 (local)
>> root@roubaix /etc/pve # pvecm expected 1
>> root@roubaix /etc/pve # pvecm status
>> Quorum information
>> --
>> Date: Fri Mar 18 12:33:00 2016
>> Quorum provider:  corosync_votequorum
>> Nodes:1
>> Node ID:  0x0001
>> Ring ID:  1504
>> Quorate:  Yes
>> 
>> Votequorum information
>> --
>> Expected votes:   1
>> Highest expected: 1
>> Total votes:  1
>> Quorum:   1  
>> Flags:Quorate 
>> 
>> Membership information
>> --
>> Nodeid  Votes Name
>> 0x0001  1 10.10.10.2 (local)
>> 

[PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Jean-Laurent Ivars
Hi list,

In the new cluster version (pve4), if I lose quorum (in a 2-node setup with one
node down, for example) I'm stuck.

Before pve4 I could run the command pvecm e 1 and my host was fully working
again, but now it seems it doesn't work this way (the command runs without
error but nothing happens).

Moreover, the file for configuring the cluster (and quorum) isn't cluster.conf
anymore but corosync.conf, and I can't find where to set this in that file!

Does someone know how to do this?

Thank you in advance for your answers :)
Best regards,


Jean-Laurent Ivars 
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille 
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47 
Linkedin    |  Viadeo 
   |  www.ipgenius.fr 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cannot boot on a Windows 7 VM from a full-cloned template

2016-03-19 Thread Lindsay Mathieson

On 17/03/2016 4:15 AM, Gaël Jobin wrote:
For the moment, I just manually "cp" the base disk to my newly created 
VM directory and it's working. Also, I tried to convert the base disk 
from raw to qcow2 and back qcow2 to raw and the new raw disk is 
booting fine ! The problem seems related to "raw to raw" conversion...



What's the underlying filesystem?

--
Lindsay Mathieson

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

2016-03-19 Thread Mikhail
On 03/19/2016 08:29 AM, Markus Köberl wrote:
>> and just now I tried ZFS send/receive on the storage system to copy
>> volumes. I was very much surprised that the speed is getting at 100MB/s
>> max..
> 
> maybe you force your pool to only use 4k blocks, see later.
> have you also activated jumbo frames on the network? 

Yes, I'm using 8000 MTU across all systems connected to storage.
And yes, the blocks were 4k.
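For reference, the block size of an existing zvol can be checked on the storage
box with zfs get, and for newly created disks it follows the blocksize line of
the zfs entry in /etc/pve/storage.cfg (64k is simply the value from Markus'
mail, not a general recommendation, and existing volumes keep the volblocksize
they were created with):

root@storage:~# zfs get volblocksize rpool/vm-8002-disk-1
# in /etc/pve/storage.cfg, in the "zfs: storage-1" section, for new disks:
#     blocksize 64k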

> 
> Setting blocksize 4k will maybe create all volumes with a max blocksize of 4k
> I am using 64k here (produces more fragmentation but gets faster)
> run zpool history on the storage
> a see entries like:
> 2014-07-04.20:11:06 zpool create -f -O atime=off -o ashift=12 -m none nsas35 
> mirror SASi1-6 SASi2-6 mirror SASi1-7 SASi2-7
> ...
> 2016-03-03.14:51:39 zfs create -b 64k -V 20971520k nsas35/vm-129-disk-1
> 2016-03-03.14:52:42 zfs create -b 64k -V 157286400k nsas35/vm-129-disk-2

No more history; last night I converted the storage system to MD RAID10
with LVM. I wish I had more time to run experiments, but my time limit for
this is exhausted, and I need a stable storage system by April.

> 
>>> iscsiprovider iet
>>> portal 192.168.4.1
>>> target iqn.2016-03.eu.myorg:rpool
>>> sparse
>>> nowritecache
> 
> I also do not have sparse and nowritecache in my config
> 
>>> content images
> 
> Until one month ago I had a Nexenta store (OpenSolaris-based) running with a
> striped mirror using 4 drives.
> The 4 drives were the bottleneck. An old Debian-based zfsonlinux server
> (stuff lying around) using 8 disks works faster for us.
> 

So maybe I will have better luck next time, on my next cluster =)

Mikhail.

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Cannot boot on a Windows 7 VM from a full-cloned template

2016-03-19 Thread Gaël Jobin
Hi all,

I'm using Proxmox with the testing repo.

I successfully installed a Windows 7 VM and made a template of it.
Then, I tried to create a new VM by cloning the previous template (full
clone). Unfortunately, the new VM cannot boot Windows. On the other
hand, with a "linked-clone", it works fine.

I noticed that the cloning was internally doing a "qemu-img convert".
More precisely in my case, "/usr/bin/qemu-img convert -p -f raw -O raw
/var/lib/vz/images/106/base-106-disk-1.raw /var/lib/vz/images/109/vm-
109-disk-1.raw".

I did the same command manually and was quite surprised to see that the
new disk has the exact same size but not the same MD5 hash (md5sum
command).

Any idea why qemu-img corrupts the disk?

For the moment, I just manually "cp" the base disk to my newly created
VM directory and it's working. Also, I tried to convert the base disk
from raw to qcow2 and back qcow2 to raw and the new raw disk is booting
fine ! The problem seems related to "raw to raw" conversion...

qemu-img version 2.5.0, pve-qemu-kvm_2.5-9
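To narrow such a case down, a content-level comparison that reports the first
differing offset might help; a sketch using the same paths as the convert
command above:

root@pve:~# qemu-img compare -f raw -F raw /var/lib/vz/images/106/base-106-disk-1.raw /var/lib/vz/images/109/vm-109-disk-1.raw
root@pve:~# cmp /var/lib/vz/images/106/base-106-disk-1.raw /var/lib/vz/images/109/vm-109-disk-1.raw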

Thank you for your help,

Regards,
Gaël
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pvecm e 1 not working anymore

2016-03-19 Thread Dietmar Maurer
> I saw this but it doesn’t seem to work, or i dont understand how to use it
> correctly ?
> Votequorum information
> --
> Expected votes:   2
> Highest expected: 2
> Total votes:  2

Both nodes are online (you cannot set expected votes below actual votes).

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user