Re: [PVE-User] Windows & Gluster 3.8

2016-12-14 Thread Lindsay Mathieson

When hosting VMs on Gluster you should set the following (a sketch for applying them follows the list):

performance.readdir-ahead: on
cluster.data-self-heal: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
performance.strict-write-ordering: off
performance.stat-prefetch: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.granular-entry-heal: yes
cluster.locking-scheme: granular
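
For reference, each option can be applied to a running volume with 'gluster
volume set' (a minimal sketch, assuming your volume is named 'vm' as in the
output you posted):

    # Options can be set while the volume is started, e.g.:
    gluster volume set vm performance.readdir-ahead on
    gluster volume set vm cluster.quorum-type auto
    gluster volume set vm performance.quick-read off
    gluster volume set vm performance.read-ahead off
    gluster volume set vm performance.io-cache off
    gluster volume set vm network.remote-dio enable
    # ...and likewise for the rest of the list above.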

And possibly consider downgrading to 3.8.4?

On 14/12/2016 10:05 PM, cont...@makz.me wrote:

I've tried with IDE / SATA & SCSI; for an existing VM it crashes at the boot
screen (Windows 7 logo), and for a new install it crashes when I click Next at
the disk selector.


[...]

Re: [PVE-User] Ceph Disk Usage/Storage

2016-12-14 Thread Daniel
Ahh OK,

so when I set up 2/1 it means 2 copies, right?
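
For what it's worth, the two numbers map to the pool's size and min_size
settings in Ceph (a minimal sketch, using the pool name 'ceph' from the
output further down):

    # size = number of replicas kept; min_size = replicas required for I/O
    ceph osd pool set ceph size 2
    ceph osd pool set ceph min_size 1

So 2/1 would mean 2 copies of each object, and the pool keeps serving I/O as
long as 1 copy is reachable.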


> On 14.12.2016 at 13:00, Wolfgang Link wrote:
> [...]



Re: [PVE-User] Windows & Gluster 3.8

2016-12-14 Thread Bart Lageweg | Bizway
Have you tried moving the VM image to local storage and starting the VM there? (i.e. get the VM running locally while you debug the Gluster side?)

-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On behalf of cont...@makz.me
Sent: Wednesday, 14 December 2016 13:06
To: PVE User List 
CC: PVE User List 
Subject: Re: [PVE-User] Windows & Gluster 3.8

I've tried with IDE / SATA & SCSI; for an existing VM it crashes at the boot
screen (Windows 7 logo), and for a new install it crashes when I click Next at
the disk selector.

[...]

Re: [PVE-User] Windows & Gluster 3.8

2016-12-14 Thread cont...@makz.me
I've tried with IDE / SATA & SCSI; for an existing VM it crashes at the boot
screen (Windows 7 logo), and for a new install it crashes when I click Next at
the disk selector.

root@hvs1:/var/log/glusterfs# gluster volume info

Volume Name: vm
Type: Replicate
Volume ID: 76ff16d4-7bd3-4070-b39d-8a173c6292c3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: hvs1-gluster:/gluster/vm/brick
Brick2: hvs2-gluster:/gluster/vm/brick
Brick3: hvsquorum-gluster:/gluster/vm/brick

root@hvs1:/var/log/glusterfs# gluster volume status
Status of volume: vm
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick hvs1-gluster:/gluster/vm/brick        49153     0          Y       3565
Brick hvs2-gluster:/gluster/vm/brick        49153     0          Y       3465
Brick hvsquorum-gluster:/gluster/vm/brick   49153     0          Y       2696
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       3586
NFS Server on hvsquorum-gluster             N/A       N/A        N       N/A
Self-heal Daemon on hvsquorum-gluster       N/A       N/A        Y       4190
NFS Server on hvs2-gluster                  N/A       N/A        N       N/A
Self-heal Daemon on hvs2-gluster            N/A       N/A        Y       6212

Task Status of Volume vm
------------------------------------------------------------------------------
There are no active volume tasks

root@hvs1:/var/log/glusterfs# gluster volume heal vm info
Brick hvs1-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0

Brick hvs2-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0

Brick hvsquorum-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0

Here's the KVM log:

[2016-12-14 11:08:29.210641] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 68767331-2e73-7276-2e62-62772e62652d (0) coming up
[2016-12-14 11:08:29.210689] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-0: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211020] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-1: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211272] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-0: changing port to 49153 (from 0)
[2016-12-14 11:08:29.211285] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-2: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211838] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.211910] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-1: changing port to 49153 (from 0)
[2016-12-14 11:08:29.211980] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-2: changing port to 49153 (from 0)
[2016-12-14 11:08:29.212227] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-0: Connected to vm-client-0, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212239] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.212296] I [MSGID: 108005] [afr-common.c:4298:afr_notify] 0-vm-replicate-0: Subvolume 'vm-client-0' came back up; going online.
[2016-12-14 11:08:29.212316] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-vm-client-0: Server lk version = 1
[2016-12-14 11:08:29.212426] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.212590] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.212874] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-2: Connected to vm-client-2, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212886] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.212983] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-1: Connected to vm-client-1, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212992] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.213042] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk]

Re: [PVE-User] Ceph Disk Usage/Storage

2016-12-14 Thread Wolfgang Link
Your pool size is 3 * 400 GB, so 1.2 TB is correct, but your config says 3/2.
That means you have 3 copies of every PG (placement group), and a minimum of
2 copies are needed to operate correctly.

This means if you write 1 GB, you lose 3 GB of free storage.
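
The arithmetic behind the numbers you are seeing (a rough sketch, assuming
3 x 400 GB OSDs and a pool size of 3):

    raw capacity    = 3 * 400 GB = 1200 GB    (the ~1.2 TB your overview shows)
    usable capacity = raw / replica count = 1200 GB / 3 = ~400 GB

That lines up with the ~441G MAX AVAIL ceph reports for the pool (the exact
figure depends on real disk sizes and overhead), not 800 GB.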


On 12/14/2016 12:14 PM, Daniel wrote:
> [...]



Re: [PVE-User] Windows & Gluster 3.8

2016-12-14 Thread Lindsay Mathieson

On 14/12/2016 8:51 PM, cont...@makz.me wrote:

Today I upgraded to 4.4; snapshots now work well, but since the update all my
Windows VMs are broken: when Windows tries to write to disk the VM crashes
without error. Even when I try to reinstall Windows, the VM crashes instantly
when I partition or format.


Hmmm - I upgraded to PVE 4.4 today (rolling upgrade, 3 nodes) and am
using Gluster 3.8.4; all my Windows VMs are running fine, snapshots OK.
Sounds like you have a gluster volume in trouble.



What virtual disk interface are you using with the Windows VMs? ide?
virtio? scsi + virtio controller?


Can you post the output of:

gluster volume info

gluster volume status

gluster volume heal <volname> info




--
Lindsay Mathieson



[PVE-User] Ceph Disk Usage/Storage

2016-12-14 Thread Daniel
Hi there,

I created a Ceph file system with 3x 400 GB.
In my config I set 3/2, which I understood to mean that one of those disks is
only there for fault tolerance (like RAID 5): 3 HDDs max and 2 HDDs minimum.

In my system overview I see that I have 1.2 TB of free space, which can't be correct.

This is what the CLI command shows me:

POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    ceph     2      0        0         441G          0

But if I understand it correctly, MAX AVAIL should be around 800 GB.

Cheers

Daniel





Re: [PVE-User] Windows & Gluster 3.8

2016-12-14 Thread Kevin Lemonnier
> 
> I've installed Gluster 3.8.6 on my PVE cluster because the Gluster version
> used by Proxmox is too old.
> 

I don't think Proxmox ships its own Gluster; it's the Debian one, and as is
often the case with Debian packages, it is indeed very old.

> 
> Today I upgraded to 4.4; snapshots now work well, but since the update all
> my Windows VMs are broken: when Windows tries to write to disk the VM
> crashes without error. Even when I try to reinstall Windows, the VM crashes
> instantly when I partition or format.
> 
> Can someone help me?
> 

Not sure this is the best place for this; you might want to try the
gluster-users list instead. I'd advise attaching your logs (/var/log/glusterfs)
since they'll ask for those anyway :)

Ideally your client logs too, but being a Proxmox user I know how hard those
are to get for Proxmox VMs. That would be a very nice addition to Proxmox by
the way: a checkbox or something to get the VM output into a file; for
debugging libgfapi it would help a lot.
Currently the only way I am aware of to get those logs is starting the VM by
hand in a shell, bypassing the web interface.
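
Roughly like this (a sketch; VM ID 100 is a placeholder, adjust to yours):

    # Print the full KVM command line Proxmox would use for VM 100
    qm showcmd 100
    # Run that command yourself in a shell, capturing stdout/stderr
    # (where the libgfapi client messages end up) in a file; replace
    # '...' with the arguments printed by qm showcmd:
    /usr/bin/kvm ... 2>&1 | tee /tmp/vm100-client.log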

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111




[PVE-User] Windows & Gluster 3.8

2016-12-14 Thread cont...@makz.me
Hello,

  

I've installed Gluster 3.8.6 on my PVE cluster because the Gluster version
used by Proxmox is too old.

It worked great with PVE 4.3 (except crashes when snapshots were used).

  

Today I upgraded to 4.4; snapshots now work well, but since the update all my
Windows VMs are broken: when Windows tries to write to disk the VM crashes
without error. Even when I try to reinstall Windows, the VM crashes instantly
when I partition or format.

  

Can someone help me?

  

Thank you!
