Hi all,
I think there are many opinions when it comes to storage technologies, and
that is the reason why there are so many different storage projects out there.
And for that reason, we have a plugin system for different storage types :-)
> On April 4, 2018 at 9:50 AM Eneko Lacunza
--
Lindsay Mathieson
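For context, the plugin system mentioned above lives on the Perl side of PVE (custom plugins subclass PVE::Storage::Plugin). The sketch below is only an illustrative Python analogue of the *shape* such a backend takes; the hook names (`storage_type`, `activate`, `alloc_image`, `path`) are my own for illustration, not the real API:

```python
from abc import ABC, abstractmethod

class StoragePlugin(ABC):
    """Illustrative shape of a storage backend plugin (hypothetical names;
    the real Proxmox VE plugins are Perl modules under PVE::Storage::)."""

    @abstractmethod
    def storage_type(self) -> str: ...

    @abstractmethod
    def activate(self, config: dict) -> None: ...

    @abstractmethod
    def alloc_image(self, vmid: int, size_gb: int) -> str: ...

    @abstractmethod
    def path(self, volume_id: str) -> str: ...

class DirPlugin(StoragePlugin):
    """Toy directory-backed backend showing how one plugin slots in."""

    def storage_type(self) -> str:
        return "dir"

    def activate(self, config: dict) -> None:
        self.root = config["path"]

    def alloc_image(self, vmid: int, size_gb: int) -> str:
        return f"{self.storage_type()}:vm-{vmid}-disk-0"

    def path(self, volume_id: str) -> str:
        return f"{self.root}/{volume_id.split(':', 1)[1]}.raw"

p = DirPlugin()
p.activate({"path": "/var/lib/vz/images"})
vol = p.alloc_image(100, 32)
print(vol)          # dir:vm-100-disk-0
print(p.path(vol))  # /var/lib/vz/images/vm-100-disk-0.raw
```

The point is only that each storage type implements the same small set of hooks, which is what lets new backends (lizardfs included) be dropped in without touching the core.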
From: pve-user <pve-user-boun...@pve.proxmox.com> on behalf of ad...@extremeshok.com
<ad...@extremeshok.com>
Sent: Friday, March 30, 2018 11:53:10 AM
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Custom storage in ProxMox 5
Hey Lindsay,
A bit off topic, but we're still using glusterFS here,
and for the most part still happy with it
(but I still haven't had the courage to update past 3.7.15).
Since I remember you using glusterFS too, how would you recommend
switching to lizardFS? It does sound good, but the
Hi Mark,
In a way I would agree with you on the total idiot part, speaking from
experience.
https://pve.proxmox.com/pipermail/pve-user/2018-January/169179.html
Where I nuked our whole ceph cluster with a single command (although a
warning would have been nice).
My experience with Ceph so far is
On Sat, 2018-03-31 at 09:58 +1000, Lindsay Mathieson wrote:
> The performance I got with Ceph was suboptimal - as mentioned earlier,
> if you throw lots of money at enterprise hardware & SSDs then it's ok,
> but that sort of expenditure was not possible for our SMB. Something not
>
On 30/03/2018 7:40 PM, Alexandre DERUMIER wrote:
Hi,
>>Ceph has rather larger overheads
Agree. They have overhead, but performance increases with each release.
I think the biggest problem is that you can't reach more than 70-90k iops with
one vm disk currently, and maybe latency could be improved.
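A back-of-envelope way to see why a single vm disk tops out around that figure: with one disk, iops is bounded by queue depth divided by per-op latency (Little's law), so latency is the lever. The numbers below are purely illustrative, not measurements:

```python
def iops_ceiling(avg_latency_ms: float, queue_depth: int) -> float:
    """Little's law: concurrent in-flight ops / time-per-op bounds throughput."""
    return queue_depth / (avg_latency_ms / 1000.0)

# ~1 ms average per-op latency with a queue depth of 64:
print(iops_ceiling(1.0, 64))   # 64000.0
# Halving latency doubles the ceiling for the same queue depth:
print(iops_ceiling(0.5, 64))   # 128000.0
```

Which is why latency improvements in each Ceph release matter more for a single vm disk than raw cluster throughput does.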
On 30/03/2018 5:10 PM, Thomas Lamprecht wrote:
the changes should be trivial though (see commit); only the restore
bwlimit is implemented and exposed in the newest WebUI.
Besides that I did not find or remember anything big, API-wise.
Thanks Thomas, good to know.
--
Lindsay
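Conceptually, a bandwidth limit like the restore bwlimit above is just a rate limiter sitting in front of the I/O path. Here is a minimal token-bucket sketch of the idea; this is not PVE's actual implementation, and all names are hypothetical:

```python
class TokenBucket:
    """Minimal token bucket: `capacity` bytes of burst, refilled at `rate` bytes/sec."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full burst allowance
        self.last = 0.0

    def allow(self, nbytes: float, now: float) -> bool:
        # Refill according to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

MIB = 1024 ** 2
bucket = TokenBucket(rate=50 * MIB, capacity=50 * MIB)  # 50 MiB/s limit
print(bucket.allow(50 * MIB, now=0.0))  # True  - initial burst fits
print(bucket.allow(1, now=0.0))         # False - bucket drained
print(bucket.allow(25 * MIB, now=0.5))  # True  - half a second refills 25 MiB
```

A caller that gets `False` would simply sleep until enough tokens have accumulated, which is what smooths a restore down to the configured limit.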
g lizard inside your vm?
or for hosting vm disks?
>>Run on commodity hardware with trivial adding of disks as required?
yes
- Original Message -
From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
To: "proxmoxve" <pve-user@pve.proxmox.com>
Sent: Fri
emeshok.com <ad...@extremeshok.com>
> Sent: Friday, March 30, 2018 11:53:10 AM
> To: pve-user@pve.proxmox.com
> Subject: Re: [PVE-User] Custom storage in ProxMox 5
>
> Don't waste your time with lizardfs.
>
> Proxmox 5+ has proper Ceph and ZFS support.
>
> Ceph d
Hi,
Am 03/30/2018 um 03:20 AM schrieb Lindsay Mathieson:
I was working on a custom storage plugin (lizardfs) for VE 4.x, looking to
revisit it. Has the API changed much (or at all) for PX 5? Is there any
documentation for it?
The base work for per-storage bandwidth limiting was added,
see
Subject: Re: [PVE-User] Custom storage in ProxMox 5
Don't waste your time with lizardfs.
Proxmox 5+ has proper Ceph and ZFS support.
Ceph does everything and more, and ZFS is about the de-facto container
storage medium.
On 03/30/2018 03:20 AM, Lindsay Mathieson wrote:
> I was working on a custom
> Don't waste your time with lizardfs.
>
> Proxmox 5+ has proper Ceph and ZFS support.
>
> Ceph does everything and more, and ZFS is about the de-facto container
> storage medium.
>
> On 03/30/2018 03:20 AM, Lindsay Mathieson wrote:
>> I was working on a custom storage plugin (lizardfs) for VE 4.x, looking
nb: still no way to integrate them into the WebUI?
--
Lindsay Mathieson
From: Lindsay Mathieson
Sent: Friday, March 30, 2018 11:20:01 AM
To: pve-user@pve.proxmox.com
Subject: Custom storage in ProxMox 5
I was working on a custom
I was working on a custom storage plugin (lizardfs) for VE 4.x, looking to
revisit it. Has the API changed much (or at all) for PX 5? Is there any
documentation for it?
Thanks,
--
Lindsay Mathieson
___
pve-user mailing list
pve-user@pve.proxmox.com