> The backup GUI has snapshot mode as the default, so if the user forgets to
> change this setting the backup will fail. The scheduled backup seems to be
> able to detect which backup mode a VM's storage and type support and to
> choose the backup mode accordingly.

BTW, we already have some code to support such
> > BTW, we just uploaded the target-cli packages.
> >
> Yes, I noticed. Writing a LUN plugin for targetcli is on my todo
> list.

Great!
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On Sun, 23 Oct 2016 12:03:15 +0200 (CEST)
Dietmar Maurer wrote:
>
> BTW, we just uploaded the target-cli packages.
>
Yes, I noticed. Writing a LUN plugin for targetcli is on my todo
list.
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
http://pgp.mit.edu:11371/pks/lookup
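For context, a targetcli-based LUN plugin would essentially drive provisioning commands like the following. This is only a sketch: the pool/zvol name and the IQN are made-up examples, and the commands need root on the storage box with the LIO target stack installed.

```shell
#!/bin/sh
# Sketch: expose a zvol as an iSCSI LUN via targetcli.
# tank/vm-101-disk-1 and the IQN below are hypothetical names.

# Create a block backstore on top of the zvol:
targetcli /backstores/block create name=vm-101-disk-1 dev=/dev/zvol/tank/vm-101-disk-1

# Create the iSCSI target and attach the backstore as a LUN:
targetcli /iscsi create iqn.2016-10.net.datanom:tank
targetcli /iscsi/iqn.2016-10.net.datanom:tank/tpg1/luns create /backstores/block/vm-101-disk-1

# Persist the configuration:
targetcli saveconfig
```

A plugin would run the same operations from its alloc/free hooks instead of interactively.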
On Sun, 23 Oct 2016 09:59:16 +0200 (CEST)
Dietmar Maurer wrote:
> Since it is not possible to expose a ZFS snapshot over iSCSI, backup in
> snapshot mode is not an option, so is there a way to have the backup job
> skip snapshot mode and automatically use suspend mode to avoid error
> messages like this?

Simply make a backup using suspend mode (I guess I miss something)?
On Sun, 23 Oct 2016 05:46:13 +0200
Michael Rasmussen wrote:
> Missing:
> Deactivating when CT is stopped -> solution just needs to be
> implemented.
>
Test and prove that migrations work, both online and offline.
--
Hilsen/Regards
Michael Rasmussen
On Tue, 11 Oct 2016 23:12:58 +0200
Michael Rasmussen wrote:
> It seems to work :-)
>
I am now able to create an LXC container on a zvol. Works great ;-)
I am able to make backups in suspend mode.
I am able to make snapshots, both online and offline.
Missing:
Deactivating when CT is stopped -> solution just needs to be
implemented.
hosts, because I had all
LUNs of all VMs on all hosts (so between 500-700 LUNs), and the multipath
daemon was CPU crazy.
- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 12 October 2016 07:44:2
On Wed, 12 Oct 2016 05:41:26 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> Well, this is very dependent on NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good performance.
>
NetApp is a purpose-built NFS-based storage
- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "aderumier" <aderum...@odiso.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, 12 October 2016 06:32:37
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
> >>NFS does not provide good iops and has the ability to bring down a
> >>node. IMHO NFS is only useful
ing the long debate in forum ;)
- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 11 October 2016 23:12:58
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST)
> >>NFS does not provide good iops and has the ability to bring down a
> >>node. IMHO NFS is only useful as filestorage server for backups and
> >>iso images.
>
> Well, this is very dependent on NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good
> performance.
I think the biggest problem is the lack of fstrim/discard support (it should
be available in the future NFS 4.2).
- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 11 October 2016 21:38
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> I'm afraid of the behaviour if a node is not joinable to do the delete or rescan.
> Seems difficult to manage with a big cluster.
> I'm curious to see the result of:
>
It seems to work :-)
1) After login
On Tue, 11 Oct 2016 21:26:08 +0200 (CEST)
Dietmar Maurer wrote:
>
> Why not NFS? This would make everything easier. IMHO iSCSI is really clumsy.
>
NFS does not provide good iops and has the ability to bring down a
node. IMHO NFS is only useful as a filestorage server for backups and
iso images.
> > VMs use KVM live backup feature, which is not available for containers.
> >
> I see. A workaround could be to make a clone of a snapshot which is
> then exposed through iSCSI. Would that be an idea?

Why not NFS? This would make everything easier. IMHO iSCSI is really clumsy.
On Tue, 11 Oct 2016 19:54:05 +0200 (CEST)
Dietmar Maurer wrote:
>
> VMs use KVM live backup feature, which is not available for containers.
>
I see. A workaround could be to make a clone of a snapshot which is
then exposed through iSCSI. Would that be an idea?
--
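The clone-of-a-snapshot idea above can be sketched with plain zfs commands. The dataset and snapshot names are invented for illustration; exposing the clone would then happen through the iSCSI target layer, and the commands need root on the ZFS box.

```shell
#!/bin/sh
# Sketch: clone a zvol snapshot so the backup job can read a stable copy.
# tank/vm-101-disk-1 is a hypothetical zvol backing a container.
ZVOL=tank/vm-101-disk-1

# Take a point-in-time snapshot, then clone it into a writable zvol
# that the target software could export as a LUN:
zfs snapshot "${ZVOL}@vzdump"
zfs clone "${ZVOL}@vzdump" tank/vm-101-disk-1-vzdump

# After the backup, tear the clone and snapshot down again:
#   zfs destroy tank/vm-101-disk-1-vzdump
#   zfs destroy "${ZVOL}@vzdump"
```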
> > Besides, iSCSI has many other drawbacks, for example it is not possible
> > to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can have
> > all that functionality?
> >
> To what purpose is it needed to be able to access a ZFS snapshot?
We need that for vzdump container
On Tue, 11 Oct 2016 17:39:02 +0200 (CEST)
Dietmar Maurer wrote:
>
> Besides, iSCSI has many other drawbacks, for example it is not possible
> to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can have
> all that functionality?
>
To what purpose is it needed to be able to access a ZFS snapshot?
> I think this is highly hypothetical since a LUN at any point in time
> can only be active on one node, Proxmox that is, so the whole operation
> is serializable, which means that every step will be controlled and can
> be rolled back, i.e. we are dealing with a deterministic state machine.
On Tue, 11 Oct 2016 08:33:02 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> I agree with Dietmar.
>
> Think of a missed ssh command to a remote node (a small network timeout,
> for example), for a volume resize for example.
>
> The lun will still be there, but with the wrong size, and if
> poses another problem which will be a problem for other things too and
> must be handled by fencing.

Not really that easy - we have a quorum system to handle most situations ...
fencing is only required for some special cases.
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, 11 October 2016 08:08:43
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Tue, 11 Oct 2016 06:14:14 +0200 (CEST)
Dietmar Maurer wrote:
>
> Such things cannot work, because nodes can be offline (or worse, online
> but not reachable).
>
Offline nodes are not a problem because when they get online their scsi
bus will not have references to
> Start/activate:
> New view
> Foreach node:
> iscsiadm --session --rescan
Such things cannot work, because nodes can be offline (or worse, online
but not reachable).
10.10.2016 23:30, Alexandre DERUMIER wrote:
>> We could rework the iscsi-manipulation code into another behavior. For example,
>> Dell PS-series SANs export each volume as a separate target, lun 0. So we can
>> log into this target in activate_volume() and log out in
>> deactivate_volume(). See my
On Mon, Oct 10, 2016 at 4:27 PM, Alexandre DERUMIER wrote:
>>>Having e.g. LXC working over NFS with ZFS on another server.
>
> Do you want to manage snapshot/clone on the zfs server ?

Yes, the purpose is to use a ZFS-based storage on multiple nodes
without iscsi. I don't
Would a hardware iSCSI SAN operate in a significantly different fashion?
On Oct 10, 2016 3:55 PM, "Michael Rasmussen" wrote:
> On Mon, 10 Oct 2016 22:38:03 +0200 (CEST)
> Alexandre DERUMIER wrote:
>
> >
> > Great ! (Sorry I can't help because I don't have
On Mon, 10 Oct 2016 22:38:03 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> Great ! (Sorry I can't help because I don't have an iscsi san anymore)
>
My 'iscsi san' for testing is a virtual Debian and/or Solaris ;-)
--
Hilsen/Regards
Michael Rasmussen
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 22:33:58
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> I'm afraid of the behaviour if a node is not joinable to do the delete or rescan.
> Seems difficult to manage with a big cluster.
> I'm curious to see the result of:
>
>
> Start/activate:
e::deactivate_volume)
echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete
- Original message -
From: "datanom.net" <m...@datanom.net>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 21:31:51
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 22:12:56 +0300
Dmitry Petuhov wrote:
> We could rework the iscsi-manipulation code into another behavior. For example,
> Dell PS-series SANs export each volume as a separate target, lun 0. So we can
> log into this target in activate_volume() and log out
10.10.2016 21:08, Alexandre DERUMIER wrote:
>>> This is because the LUN is persisted through the scsi bus, so the
>>> following should do it:
>>> echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus,
>>> T = target, L = lun)
> yes, but you need to do it on all nodes to be
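The delete-and-rescan step quoted above can be wrapped in a small helper. This is a sketch only: the 2:0:0:7 address is a made-up example, and the commented commands need root on a node with a live iSCSI session.

```shell
#!/bin/bash
# Build the sysfs path used to drop a stale SCSI device at a given
# host:bus:target:lun address.
scsi_delete_path() {
  local h=$1 b=$2 t=$3 l=$4
  printf '/sys/bus/scsi/devices/%s:%s:%s:%s/delete\n' "$h" "$b" "$t" "$l"
}

scsi_delete_path 2 0 0 7   # -> /sys/bus/scsi/devices/2:0:0:7/delete

# On a real node (as root) the stale LUN is then removed, and the
# session rescanned to pick up new LUNs:
#   echo 1 > "$(scsi_delete_path 2 0 0 7)"
#   iscsiadm -m session --rescan
```

As the thread notes, this has to run on every node that has logged in to the target, which is exactly the coordination problem being discussed.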
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 19:48:32
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
On Mon, 10 Oct 2016 19:32:28 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> But, if you delete a lun, rescan is not removing it.
> And if you remove and then add a new lun on the same lunid, this is where
> the problems begin.
>
This is because the LUN is persisted through the scsi bus, so the
following should do it:
echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (where H = host, B = bus,
T = target, L = lun)
Mail original -
De: "datanom.net" <m...@datanom.net>
À: "pve-devel" <pve-devel@pve.proxmox.com>
Envoyé: Lundi 10 Octobre 2
On Mon, 10 Oct 2016 18:46:50 +0200
Michael Rasmussen wrote:
> On Mon, 10 Oct 2016 13:57:36 +0200 (CEST)
> Alexandre DERUMIER wrote:
>
> > I think the more difficult part is to manage iscsi lun add|remove with
> > iscsiadm. (without doing a full scan)
>
On Mon, 10 Oct 2016 13:57:36 +0200 (CEST)
Alexandre DERUMIER wrote:
> I think the more difficult part is to manage iscsi lun add|remove with
> iscsiadm. (without doing a full scan)
Should the rescan option for iscsiadm not do this?
> (and with multipath it is also more complex)
>
> On October 10, 2016 at 6:07 PM Michael Rasmussen wrote:
>
>
> On Mon, 10 Oct 2016 16:58:12 +0200 (CEST)
> Dietmar Maurer wrote:
>
> > Sigh, I should read the code before ...
> >
> > The problem is that the ZFSPlugin uses the userspace iscsi library.
On Mon, 10 Oct 2016 16:58:12 +0200 (CEST)
Dietmar Maurer wrote:
> Sigh, I should read the code before ...
>
> The problem is that the ZFSPlugin uses the userspace iscsi library.
>
> You would need to replace that with kernel-level iSCSI, so that
> we can 'mount' exported volumes.
- Original message -
From: "dietmar" <diet...@proxmox.com>
To: "datanom.net" <m...@datanom.net>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 10 October 2016 16:58:12
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
> What exactly is required to enable support for LXC volumes on ZFS over
> iSCSI?
Sorry, but what is the problem exactly?
Sigh, I should read the code before ...
The problem is that the ZFSPlugin uses the userspace iscsi library.
You would need to replace that with kernel-level iSCSI, so that
we can 'mount' exported volumes.
Not sure if it makes sense to export volumes using NFS instead.
> On October 10, 2016 at
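To make the "kernel-level iSCSI" idea concrete: with open-iscsi the node logs in to the target and the kernel creates a block device that can then be mounted like a local disk. A sketch under assumed names only; the portal address and IQN below are invented, and the commands are node-provisioning steps that need root and a reachable target.

```shell
#!/bin/sh
# Hypothetical portal/IQN; on a real setup these would come from the
# storage configuration.
PORTAL=192.168.1.10:3260
IQN=iqn.2016-10.net.datanom:tank

# Discover targets on the portal, then log in; the kernel then creates
# /dev/sd* (and /dev/disk/by-path/...) entries for each exported LUN:
iscsiadm -m discovery -t sendtargets -p "$PORTAL"
iscsiadm -m node -T "$IQN" -p "$PORTAL" --login

# A filesystem on the LUN can now be mounted like any local block device:
#   mount /dev/disk/by-path/ip-${PORTAL}-iscsi-${IQN}-lun-0 /mnt/ct
```

The userspace library, by contrast, only gives QEMU block access and nothing the host can mount for a container rootfs.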
Sent: Monday, 10 October 2016 15:15:42
Subject: Re: [pve-devel] LXC volumes on ZFS over iSCSI
This is a similar thing that I wanted to discuss with my ZFS-over-NFS
question a while ago: having e.g. LXC working over NFS with ZFS on
another server.
On Mon, Oct 10, 2016 at 1:57 PM, A
Sent: Monday, 10 October 2016 13:50:53
Subject: [pve-devel] LXC volumes on ZFS over iSCSI
Hi all,
What exactly is required to enable support for LXC volumes on ZFS over
iSCSI?
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
http://pgp.mit.edu:11371/pks/lookup