-----Original Message-----
From: "Mike Hammett"
To: "PVE User List"
Date: Wed, 17 Feb 2016 09:38:13 -0600 (CST)
--
>
>
>I don't know what GlusterFS is, but you want ZFS on raw disks. HBA
>controllers, no
Consider using ZFS for Proxmox and using its snapshot/send/receive features.
Taking a snapshot usually takes less than a second, and you can send the
incremental changes over the network.
You should check your existing systems and decide whether it is worth changing
your setup or not.
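As a rough sketch of that snapshot/send/receive workflow (tank/vm and backuphost are hypothetical names; the commands are only echoed here as a dry run, not executed):

```shell
# Dry-run sketch of ZFS incremental replication over the network.
# tank/vm and backuphost are hypothetical names.
PREV="tank/vm@backup-2016-02-16"   # snapshot that already exists on both sides
CURR="tank/vm@backup-2016-02-17"   # new snapshot to take

# Taking the snapshot is near-instant:
echo "zfs snapshot $CURR"

# Send only the blocks that changed between PREV and CURR to the remote host:
echo "zfs send -i $PREV $CURR | ssh backuphost zfs receive backup/vm"
```

After the first full send, each subsequent transfer only moves the changed blocks, which is why this works well even over slow links.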
Bye,
István
-----Original Message-----
From: "Lindsay Mathieson"
To: "PVE User List"
Date: Wed, 3 Feb 2016 06:58:15 +1000
-
> Why not ZFS? its a lot more mature than btrfs.
>
>
> --
> Lindsay Mathieson
>
Hi,
Exactly.
My
I was able to install the final 4.0 into a Virtualbox VM with only 2GB RAM +
4x8GB zfs raid10.
Cheers,
István
-----Original Message-----
From: "Gilberto Nunes" gilberto.nune...@gmail.com
To: "PVE User List"
Date: Thu, 8 Oct 2015 07:45:04 -0300
-----Original Message-----
From: "Martin Maurer" mar...@proxmox.com
To: pve-user@pve.proxmox.com, pve-de...@pve.proxmox.com
Date: Tue, 6 Oct 2015 16:07:33 +0200
-
> Hi all!
>
> We are proud to announce the
Hi Lindsay,
Could you send me the following by private email (the output of these
commands)?
zpool status -v
zpool get all
zfs list
zfs get all (needed twice: once for the system pool, once for the data pool)
arcstat.py
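A quick way to collect all of the above into a single file to send back (the loop only echoes the command names here; rpool and tank are assumed pool names, so replace them with your own and uncomment the eval line on a real ZFS host):

```shell
# Sketch: gather the requested diagnostics into one report file.
# rpool / tank are assumed pool names.
for cmd in "zpool status -v" "zpool get all" "zfs list" \
           "zfs get all rpool" "zfs get all tank"; do
  echo "== $cmd =="
  # eval "$cmd"    # uncomment on a real ZFS host to capture the output too
done > zfs-report.txt
```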
Questions:
Did you turn on dedup on your pools?
Do you use any non-default
-----Original Message-----
From: Dietmar Maurer diet...@proxmox.com
To: pve-user pve.proxmox.com, Pongrácz István
Date: Fri, 31 Jul 2015 17:43:28 +0200 (CEST)
-
In ZFS, it is possible to create independent
Not the best source to get a good point:
https://openvz.org/Ploop/Why#Before_ploop
IOW: there was a great discussion a few days earlier on the ovz-list about
whether simfs should be dropped or not - the developers would like to drop it ;)
Exactly. I already read that thread.
PS:
Hi,
I have a Proxmox node which uses ZFS the way I described (except btrfs), and
that node has been up and running for more than 657 days now.
Long uptime for a server does not equal security or stability. Long uptimes
also mean a lot of active kernel bugs that are fixed in more recent kernels.
This is an
-----Original Message-----
From: Dietmar Maurer diet...@proxmox.com
To: pve-user pve.proxmox.com, Pongrácz István
Date: Fri, 31 Jul 2015 11:58:53 +0200 (CEST)
-
I hope developers find my thoughts useful
PPS: ... though I think you should compare your setup more clearly with the
latest pve, as some things are already in there.
I just uploaded the latest packages to the pve 4.0 test repository. You can now
create lxc containers on zfs as subvolumes, and create zfs snapshots.
That kind
...@proxmox.com
To: lyt_yudi lyt_y...@icloud.com, Pongrácz István
CC: proxmoxve
Date: Fri, 31 Jul 2015 07:19:36 +0200 (CEST)
-
I reproduced the situation. There is a workaround with the new packages; check
the last few steps below.
I just uploaded
Hi,
As recent PVE systems offer very good zfs integration, and lxc as a replacement
for openvz, I have some thoughts to share with users and developers to
consider.
A very short introduction to zfs, to put my thoughts in context:
it is like a software raid system with lvm and copy-on-write
Hi,
I reproduced the situation. There is a workaround with the new packages; check
the last few steps below.
Steps:
I installed a new PVE 3.4 from iso.
Changed the pve repository to pve-no-subscription
and did an apt-get update, apt-get dist-upgrade
kernel and zfs got upgraded, including
Date: Tue, 28 Jul 2015 14:33:33 +0800
--
So, thanks for your reply.
On 28 Jul 2015, at 2:10 PM, Pongrácz István pongracz.ist...@gmail.com
wrote:
Anyway, are you sure these drives are not part of existing pool?
Please
Interesting.
Are these drives directly attached hard drives, without any hw raid card or
anything like that?
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Hi,
When you create a new pool, /dev/sdb is not really a reliable device name to
use. Try to create the pool using the disks' ids, like this:
zpool create -f -o ashift=12 tank mirror
ata-Hitachi_HTS545050A7E380_TEJ52139CA9VNS
ata-Hitachi_HTS545050A7E380_TEJ52139CAVXNS mirror
Anyway, are you sure these drives are not part of existing pool?
Please check it before you destroy the previous possible pool.
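For illustration, a complete command of that shape might look like the following; the ata-... serials are hypothetical placeholders (check /dev/disk/by-id/ for the real ones), and the commands are echoed as a dry run:

```shell
# Dry-run sketch: create a two-mirror (raid10-style) pool using stable
# /dev/disk/by-id names instead of sdX names. The serials are placeholders.
CREATE_CMD="zpool create -f -o ashift=12 tank \
  mirror ata-DISK_SERIAL_A ata-DISK_SERIAL_B \
  mirror ata-DISK_SERIAL_C ata-DISK_SERIAL_D"
echo "$CREATE_CMD"

# Before creating, check whether the disks still carry old pool labels:
echo "zpool import"   # lists leftover importable pools without importing them
```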
Cheers,
István
-----Original Message-----
From: lyt_yudi lyt_y...@icloud.com
To: proxmoxve
Date: Tue, 28 Jul 2015 11:09:12 +0800
--
On 7 Jul 2015, at 9:57 PM, Pongrácz István pongracz.ist...@gmail.com
wrote:
The issue could be somewhere in the zfs util and pve application layers, if we
can exclude all other trivial problems (no space
Hi,
I do not think it is a zfs issue. ZFS snapshots are transparent to the upper
level and happen very quickly.
Best regards,
István
-----Original Message-----
From: lyt_yudi lyt_y...@icloud.com
To: proxmoxve (pve-user@pve.proxmox.com)
Date: Tue, 07 Jul
-----Original Message-----
From: lyt_yudi lyt_y...@icloud.com
I do not think it is a zfs issue. ZFS snapshots are transparent to the upper
level and happen very quickly.
Thanks. Do you have the same problem?
No, but I have used zfsonlinux for many years. One
-----Original Message-----
From: Daniel Mettler mettl...@numlock.ch
To: pve-user@pve.proxmox.com
Date: Thu, 19 Mar 2015 00:31:58 +0100 (CET)
-
# zfs get compression
NAME PROPERTY VALUE SOURCE
rpool
pve.proxmox.com, Pongrácz István, Martin Maurer mar...@proxmox.com,
pve-devel pve.proxmox.com
Date: Fri, 20 Feb 2015 21:02:37 +0100 (CET)
-
- it uses gzip-6 compression by default - as the system supports lz4
compression, I recommend using
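Switching a pool over to lz4 could look like this (rpool is the pool name assumed here; the commands are echoed as a dry run). Note that only data written after the change is compressed with lz4; existing blocks keep their old compression:

```shell
# Dry-run sketch: switch a pool's compression property to lz4.
# rpool is an assumed pool name; replace with your own.
POOL=rpool
echo "zfs set compression=lz4 $POOL"
echo "zfs get compression $POOL"   # verify the property took effect
```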
-----Original Message-----
From: Martin Maurer mar...@proxmox.com
To: pve-user@pve.proxmox.com, pve-de...@pve.proxmox.com
Date: Thu, 19 Feb 2015 14:38:39 +0100
-
Hi all,
We just released Proxmox VE 3.4 - the
pages, nor will it reclaim the memory via the balloon driver (is it
installed?)
Cheers
Eneko
On 07/12/14 19:55, Pongrácz István wrote:
Hi,
I have a strange issue with the latest PVE and windows 7.
The problem is the huge memory reservation and contradictory memory reports.
Only one
windows7 VM is running
Hi,
I have a strange issue with the latest PVE and windows 7.
The problem is the huge memory reservation and contradictory memory reports.
Only one windows7 VM is running without any usage, but the memory consumption
is huge, including ksm.
The memory config for the windows7 VM: 2GB (min.) / 22GB
Hi,
First of all, proxmox has nothing to do with this, because this is a
linux/qemu feature, not something specific to this distribution.
My question: you created 13 snapshots on windows and the differences between
them take zero space?
Normally windows changes data on the disk many times per day.
For
-----Original Message-----
From: Gilberto Nunes gilberto.nune...@gmail.com
To: Pongrácz István
CC: pve-user pve.proxmox.com
Date: Sun, 17 Aug 2014 13:41:06 -0300
--
Yes, I tried to delete every snapshot... But I
to subscribe to the zfsonlinux mailing list.
Cheers,
István
PS: You should have a Plan B and backup :)
-----Original Message-----
From: Frederic Van Espen frederic...@gmail.com
To: Pongrácz István
CC: pve-user pve.proxmox.com
Date: Mon, 10 Mar 2014 10:08:29 +0100
Hi,
I am biased - I use zfsonlinux - and I can recommend trying it.
First you should check your system with some tests before you put it into
production.
Cheers,
István
-----Original Message-----
From: Frederic Van Espen frederic...@gmail.com
To:
Or use zfs on the storage server :)
-----Original Message-----
From: Bart Lageweg | Bizway b...@bizway.nl
To: 'Lindsay Mathieson', pve-user pve.proxmox.com
Date: Thu, 27 Feb 2014 20:55:57 +
-
You can
Maybe Wheezy != PVE installer.
You can use a hardware raid config, it will work (usually).
Regarding raid, I agree, but some users are using ZFS(onLinux) with success,
which is definitely 'soft raid'. I also use zfsonlinux on all my servers and
I do not plan to use hardware raid in any
-----Original Message-----
From: admin-at-extremeshok-dot-com ad...@extremeshok.com
To: pve-user@pve.proxmox.com
Date: Tue, 18 Feb 2014 17:23:11 +0200
--
It's easy to do incremental and encrypted backups to a
-----Original Message-----
From: Lindsay Mathieson
CC: proxmoxve (pve-user@pve.proxmox.com)
Date: Tue, 18 Feb 2014 16:30:10 +1000
-
On 18 February 2014 09:36, Bruce B
wrote:
Hi everyone,
I am looking for a
-----Original Message-----
From: Chris Murray chrismurra...@gmail.com
To: Pongrácz István, Alexandre DERUMIER aderum...@odiso.com
CC: pve-user pve.proxmox.com
Date: Tue, 4 Feb 2014 14:58:06 -
-
I'm not sure I get a chance to specify
Just a comment, not a filesystem war.
This is why I use zfs in the underlying storage, instead of ext3/lvm/vzdump:
- live snapshots, as frequently as I want (every 15 minutes, hour, day, week,
month)
- only stores the difference (COW) when making a new snapshot
- zfs send/receive: very
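A frequent-snapshot rotation along those lines could be sketched as follows (tank/vm is a placeholder dataset name; the commands are echoed, not executed, so this can run without ZFS present):

```shell
# Dry-run sketch: a timestamped snapshot, suitable for a cron job running
# every 15 minutes. tank/vm is a placeholder dataset name.
DATASET=tank/vm
STAMP=$(date +%Y-%m-%d-%H%M)
echo "zfs snapshot $DATASET@auto-$STAMP"

# Thanks to COW, each snapshot only uses space for data changed afterwards:
echo "zfs list -t snapshot -r $DATASET"
```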
Hi,
[Cut]
SLOG - a dedicated device (partition) to hold the ZIL instead of the pool
(much quicker than the pool - high iops for sync writes)
First of all, thanks for writing such an informative post.
As it seems, one needs 1 SSD for L2ARC and 2 mirrored SSDs for ZIL; now the
question is
-----Original Message-----
From: Michael Rasmussen m...@miras.org
To: pve-user pve.proxmox.com
Date: Thu, 2 Jan 2014 20:54:34 +0100
-
On Thu, 2 Jan 2014 20:31:34 +0100
Pongrácz István
wrote:
My opinion
My question: what is the current status of ploop support for openvz in recent
PVE?
We do not plan to support ploop with the current release. The plan is to wait
for the RHEL7 based kernel.
Thank you!
I asked this because I checked glusterfs with openvz and this was one
Hi,
I found that the ploop option exists in the vz.conf file, but I did not find a
convert function in the vzctl tool to convert an openvz private area to a
ploop system.
My question: what is the current status of ploop support for openvz in recent
PVE?
Thank you!
István
Hi,
I just read your issue.
Some comments on your ZFS server setup:
for ZIL, in your config, 1-8GB is more than enough in any case
L2ARC - it needs RAM to keep header information in ARC; probably use a lower
l2arc size than the actual
Example: for ZIL and L2ARC, you would be better off with 2 x 60GB SSDs:
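One hypothetical way to split such 2 x 60GB SSDs, assuming each already carries a small 8GB partition (part1) and a large ~52GB partition (part2); pool and partition names are placeholders, and the commands are echoed as a dry run:

```shell
# Dry-run sketch of a 2 x 60GB SSD split for SLOG + L2ARC.
# tank and the ata-SSD_* partition names are hypothetical.
POOL=tank
# Mirrored SLOG: losing one SSD does not lose in-flight sync writes.
echo "zpool add $POOL log mirror ata-SSD_A-part1 ata-SSD_B-part1"
# L2ARC is only a cache, so it needs no redundancy and can be striped:
echo "zpool add $POOL cache ata-SSD_A-part2 ata-SSD_B-part2"
echo "zpool iostat -v $POOL"   # shows the new log and cache vdevs
```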
István
, pve-user pve.proxmox.com
Date: Tue, 29 Oct 2013 14:26:28 +0100
--
Maybe you need to refresh your browser cache.
Pongrácz István
wrote:
Hi,
Thank you!
In your release notes I found ZFS in the GUI, pool size etc.
My question
What a pity...
I use glusterfs on top of zfsonlinux, so it would be nice. But zfsonlinux
needs a 3.2+ kernel for some important bugfixes and memory-leak corrections,
and I don't know if pve-kernel has those.
Hi,
I use zfs 0.6.1-rc11 on one server with the proxmox kernel
Hi,
So, the NIC died again. I logged the NIC with findep, so I have a log; I will
check it later.
Here is the report:
Uptime with the new kernel and e1000e driver: 7 days, 20:59
Dead NIC: eth1
eth1 Link encap:Ethernet HWaddr 00:03:1d:0b:8a:e3
inet6 addr: fe80::203:1dff:fe0b:8ae3/64
-----Original Message-----
From: Paul Gray g...@cs.uni.edu
To: pve-user@pve.proxmox.com
Date: Thu, 03 Oct 2013 07:20:03 -0500
-
On 10/03/2013 04:10 AM, Pongrácz István wrote:
RX packets:342904 errors
-----Original Message-----
From: Michael Rasmussen m...@miras.org
To: pve-user@pve.proxmox.com
Date: Wed, 25 Sep 2013 23:17:13 +0200
-
Anyway, I will also contact the manufacturer about this issue and ask for
their help.
Thank you for the link, I just started upgrading the system.
Do you think this kernel version contains a relevant patch for this kind of
issue?
Thank you,
István
-----Original Message-----
From: Dietmar Maurer diet...@proxmox.com
To: Pongrácz István, pve-user
Thank you, I started the server with the new kernel, uptime is 32 minutes and
growing.
We will see.
Thank you!
Bye,
István
-----Original Message-----
From: Dietmar Maurer diet...@proxmox.com
To: Pongrácz István, pve-user pve.proxmox.com
Date: Wed, 25 Sep 2013
-----Original Message-----
From: lst_ho...@kwsoft.de
To: pve-user@pve.proxmox.com
Date: Wed, 25 Sep 2013 15:13:39 +0200
-
Quoting Pongrácz István:
Hi,
I found that with the recent pve kernel
-----Original Message-----
From: Enrico M enr...@majaglug.net
To: pve-user@pve.proxmox.com
Date: Sat, 21 Sep 2013 17:24:57 +0200
-
So, technically you can adjust proxmox as you need. If one is not a tech
genius,
-----Original Message-----
From: Marco Gabriel - inett GmbH mgabr...@inett.de
To: admin-at-extremeshok-dot-com ad...@extremeshok.com
CC: pve-user@pve.proxmox.com
Date: Mon, 16 Sep 2013 17:41:21 +0200
-
Little
-----Original Message-----
From: admin extremeshok.com ad...@extremeshok.com
To: pve-user@pve.proxmox.com
Date: Thu, 09 May 2013 17:13:00 +0200
-
Any updates to the backup functionality?
i.e. delta backups /
-----Original Message-----
From: Christian Blaich christ...@blaich.eu
To: pve-user@pve.proxmox.com
Date: Thu, 09 May 2013 18:20:01 +0200
-
Hey,
how can zfs help with delta snapshots?
Never used zfs before.
-----Original Message-----
From: Martin Maurer mar...@proxmox.com
To: pve-user pve.proxmox.com
Date: Fri, 18 Jan 2013 10:25:44 +
--
Hi,
I assume you run the latest 2.2? I am not aware of a problem here,
Hi,
FYI: the snapshot was left on the disk after the backup stopped, so I will
remove it manually.
Cheers,
István