[PVE-User] ZFS+GlusterFS LXC image creation trouble.

2016-06-10 Thread Jeremy McCoy
Hi all,

I am working on getting shared storage working on my hosts. My current issue is
that creating an LXC container on my GlusterFS mount is failing with the
following error:
>Warning, had trouble writing out superblocks.
>TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/gluster/images/102/vm-102-disk-1.raw' failed: exit code 144

Running that command manually on the host still generates the warning, but
otherwise successfully creates the image, which I can then mount. Is it
possible/safe to alter whatever script generates the container images so that it
does not give up here?
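For reference, the failing step can be reproduced on a scratch file outside of PVE. This is only a sketch: the path and the 128M size are illustrative, not the values the actual task uses.

```shell
# Reproduce the image-creation step on a throwaway file
# (path and size are assumptions for illustration):
img=/tmp/vm-test-disk.raw
truncate -s 128M "$img"                          # sparse raw image
mkfs.ext4 -F -O mmp -E 'root_owner=0:0' "$img"   # same options as the failing task
echo "mkfs exit code: $?"
```

If the filesystem is usable despite the non-zero exit code, that would support the idea that only the exit-status handling, not the image itself, is the problem.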

My first attempt at getting this working using the GUI to mount the storage
worked, but the performance was abysmal once I had several machines running
(idle) on it. I dug around in the GlusterFS documentation and various blog posts
about running GlusterFS on ZFS, and decided that manually mounting the storage
would give me better performance. 

My host config is at pastebin.com/cSQX2RDK, and the GlusterFS config is
different on each host so that the local brick is always type storage/posix. If
I have done something terribly wrong (which is entirely possible), please let me
know.

Thanks,
Jeremy
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Uptime problem for lxc containers

2016-06-10 Thread Simone Piccardi
I am getting some wrong uptime results from LXC containers. This is my Proxmox
version:

root@haynes ~ # pveversion  -v
proxmox-ve: 4.2-52 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-11 (running version: 4.2-11/2c626aa1)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.8-1-pve: 4.4.8-52
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-40
qemu-server: 4.0-79
pve-firmware: 1.1-8
libpve-common-perl: 4.0-67
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-51
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-67
pve-firewall: 2.0-29
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1


The containers were started quite a long time ago:

root@jojo:~# ps aufx
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root 1  1.5  0.0  10668  1344 pts/1Ss+  apr30   2:41 init

but:
root@jojo:~# uptime
 12:46:29 up  3:32,  1 user,  load average: 0,41, 0,61, 0,57

and yesterday at 16:00 (sorry, I did not save the output) it was about 12 hours.

The results come from /proc/uptime (and are the same for a second LXC
container on the same machine). Its values seem plainly wrong; I see that
it is mounted from lxcfs, but I don't know where lxcfs gets those
values from.
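A quick way to see the mismatch from inside the container (a sketch; on PVE, lxcfs normally provides the container's /proc/uptime in place of the kernel's):

```shell
# First field of /proc/uptime is seconds since "boot" as presented to the container:
cut -d' ' -f1 /proc/uptime
# Compare with when PID 1 actually started:
ps -o lstart= -p 1
```

If the two disagree by days, the uptime value served by lxcfs, not the container itself, is the suspect.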


Regards
Simone
-- 
Simone Piccardi                      Truelite Srl
picca...@truelite.it (email/jabber)  Via Monferrato, 6
Tel. +39-347-1032433                 50142 Firenze
http://www.truelite.it               Tel. +39-055-7879597  Fax. +39-055-736


Re: [PVE-User] Understanding lvm-thin

2016-06-10 Thread Wolfgang Link
Benchmarks LVM vs file.raw

Setup

Physical disk: Crucial MX200, used only by the test VM/CT
Debian 8, current version
Extra disk: 32 GB (QEMU, no cache, virtio bus)

QEMU on LVM
dd if=randfile of=/dev/vdb bs=4k
220662+0 records in
220662+0 records out
903831552 bytes (904 MB) copied, 2.58608 s, 349 MB/s

dd if=/dev/zero of=/dev/vdb bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.51501 s, 427 MB/s

LXC on LVM

dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 2.33282 s, 460 MB/s

dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.18087 s, 492 MB/s

QEMU on file.raw laying on ext4
dd if=randfile of=/dev/vdb bs=4k
220662+0 records in
220662+0 records out
903831552 bytes (904 MB) copied, 2.47066 s, 366 MB/s

root@livemig:/home/link# dd if=/dev/zero of=/dev/vdb bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.09934 s, 977 MB/s

LXC on file.raw laying on ext4
Hard to say; it uses the host page cache.

root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.595112 s, 1.8 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.966619 s, 1.1 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 1.81487 s, 592 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.958734 s, 1.1 GB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 4.51895 s, 238 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 1.69404 s, 634 MB/s
root@Bench:~# dd if=randfile of=/bench/test.raw bs=4k
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 0.958381 s, 1.1 GB/s

root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.797751 s, 1.3 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.795282 s, 1.4 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.05295 s, 1.0 GB/s
root@Bench:~# dd if=/dev/zero of=/bench/test.raw bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.56381 s, 193 MB/s
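The wide spread in the file-backed LXC numbers above is consistent with the host page cache absorbing the writes. A sketch of how to factor that out with standard dd flags (same paths as above):

```shell
# Include the time to flush data to disk in the reported throughput:
dd if=/dev/zero of=/bench/test.raw bs=1M count=1024 conv=fdatasync
# Or bypass the page cache entirely with O_DIRECT:
dd if=/dev/zero of=/bench/test.raw bs=1M count=1024 oflag=direct
```

With conv=fdatasync or oflag=direct, repeated runs should give much more stable numbers, at the cost of measuring raw device speed rather than cached speed.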



Re: [PVE-User] Understanding lvm-thin

2016-06-10 Thread Dietmar Maurer
> >>LVM does not support efficient snapshots, so I want to get rid of that as
> >>fast
> >>as possible...
> 
> Do you have benchmarks comparing lvm-thin with a plain .raw file?
> 
> What if a user doesn't need snapshots but wants maximum performance?

First, I do not have benchmarks.

But I doubt that someone would want to use the OS system partition for such a
setup. Instead, they would use an extra disk for that.

Besides, it is really easy to remove the lvm-thin volume and replace it with a
normal LVM volume if somebody really needs it.



Re: [PVE-User] pve-kernel-4.4.8-1-pve upgrade hanging

2016-06-10 Thread Antoine Jacques de Dixmude

Hello,

FYI, I've restarted the node and ran the following commands:

apt-get clean
dpkg --configure -a

Next, I retried an apt-get upgrade and, this time, it was successful.

Maybe it was a corrupted archive.

Thanks to all those who responded !

On 06/09/2016 07:43 PM, Jean-Laurent Ivars wrote:

hello

Just type: apt-get clean

regards,


Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
LinkedIn | Viadeo | www.ipgenius.fr


On 9 June 2016 at 16:17, Lindsay Mathieson wrote:

On 9/06/2016 11:57 PM, Antoine Jacques de Dixmude wrote:

Obviously, the package pve-kernel was half-installed before I ran the
dist-upgrade command. But I don't understand why dpkg was hanging.

Corrupt download maybe? Perhaps it's time to clear the apt cache, though I can't
remember how that is done.

--
Lindsay Mathieson



Re: [PVE-User] Understanding lvm-thin

2016-06-10 Thread Alexandre DERUMIER
>>LVM does not support efficient snapshots, so I want to get rid of that as fast
>>as possible...

Do you have benchmarks comparing lvm-thin with a plain .raw file?

What if a user doesn't need snapshots but wants maximum performance?


- Mail original -
De: "dietmar" 
À: "proxmoxve" , "aderumier" 
Envoyé: Vendredi 10 Juin 2016 07:58:01
Objet: Re: [PVE-User] Understanding lvm-thin

> 
> >>Is there any way to install Proxmox with support only for lvm? 
> 
> you can use proxmox 4.1 iso installer, and upgrade to 4.2. 
> 
> 
> Maybe it would be great to be able to choose lvm vs lvm-thin in the installer 
> (like for zfs) 

LVM does not support efficient snapshots, so I want to get rid of that as fast 
as possible... 
