On 10/06/2020 4:38 pm, Mark Adams via pve-user wrote:
Also, the simplest thing to set is to make sure you are using writeback
cache in your VMs with Ceph. It makes a huge difference in performance.
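For reference, in Proxmox the cache mode is a per-disk option. A minimal sketch of setting it from the CLI, assuming VM ID 100, a virtio-scsi disk, and an RBD storage named ceph-rbd (all placeholders, adjust to your setup):

```shell
# Attach/update the disk with writeback cache enabled
# (VM ID, storage name and volume name are placeholders).
qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,cache=writeback

# Verify the resulting config line:
qm config 100 | grep scsi0
```

The same option is available in the GUI under the disk's advanced settings.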
Chiming in - I'm doing some testing with a 5-node Ceph/Proxmox cluster here.
Basic spinners and
Note that with only a 10 Gbps network, you will get only ~1 GB/s, which is
only 25-30% of the performance of an NVMe.
To get 100% of an NVMe's performance you need at least a 40G network.
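The arithmetic behind that ceiling, as a quick sketch (the 3-5 GB/s local NVMe figure is an assumption for a typical datacenter drive):

```shell
# 10 GbE moves 10 gigabits per second; divide by 8 bits/byte to get
# the raw byte rate (decimal units, before protocol overhead).
raw_mb_per_s=$(( 10 * 1000 / 8 ))
echo "${raw_mb_per_s} MB/s"    # 1250 MB/s raw line rate
# After Ethernet/TCP overhead and Ceph replication traffic, ~1 GB/s
# of client throughput is a realistic ceiling -- only 25-30% of an
# NVMe drive that can stream 3-5 GB/s locally.
```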
On 09/06/2020 19:46, Marco Bellini wrote:
Dear All,
I'm trying to use Proxmox on a 4-node cluster with
[fio results table garbled in transit; surviving figures:
 seqwrite 4k: 7850 / 37.5k
 unlabelled rows: 19.5k / 34.5k / 53.0k and 24.9k / 82.6k]
- Original Mail -
From: "Eneko Lacunza"
To: "proxmoxve"
Sent: Wednesday, 10 Jun
On Wed, 10 Jun 2020, 07:31 Eneko Lacunza, wrote:
Hi Marco,
On 9/6/20 at 19:46, Marco Bellini wrote:
Dear All,
I'm trying to use Proxmox on a 4-node cluster with Ceph.
Every node has a 500 GB NVMe drive, on a dedicated 10G Ceph network with
9000-byte MTU.
Despite the warp speed the NVMe reaches when used as an LVM volume, as soon as I
convert it into a 4-OSD Ceph setup, performance is very, very
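To put numbers on the LVM-vs-RBD gap, one way is to run the same fio workload against both. A sketch, assuming the raw device is /dev/nvme0n1 and an RBD image "bench" in pool "rbd" (all placeholders; the second run needs fio built with the rbd ioengine):

```shell
# 4k random-write IOPS on the raw NVMe.
# WARNING: this DESTROYS data on the device -- path is a placeholder.
fio --name=raw-nvme --filename=/dev/nvme0n1 --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --group_reporting

# Same workload against an RBD image, bypassing the VM entirely
# (pool/image names are placeholders).
fio --name=rbd-bench --ioengine=rbd --pool=rbd --rbdname=bench \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --group_reporting
```

Comparing the two isolates Ceph's replication and network cost from any VM-level caching effects.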
Hi,
for a good start to get a feeling for the expected performance, read this
great paper from Red Hat:
http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers
If you want to build a small three-node cluster, you should use an Intel
3700 200 GB SSD for your
Hello Tobias,
check if your SSD is suitable for a Ceph journal:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
If you can, add 2 OSD SATA disks per host.
Regards, Fabrizio
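The linked post's journal test boils down to single-job, queue-depth-1 synchronous writes, which is the pattern a Ceph journal sees. A sketch, with the device path as a placeholder:

```shell
# Single-threaded 4k sync writes at queue depth 1.
# WARNING: writes directly to the device -- /dev/sdX is a placeholder.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --name=journal-test
```

Consumer SSDs often collapse to a few hundred IOPS under this pattern, while datacenter SSDs with power-loss protection sustain tens of thousands.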
- On 28 Sep 2015, at 18:10, Tobias Kropf - inett GmbH
Hi all,
I have a question about Ceph: we plan to build our own Ceph cluster in our
datacenter. Can you tell me the performance statistics from a running Ceph cluster
with the same setup?
We want to buy the following setup:
3x chassis with:
CPUs: 2x Intel E5-2620v3
RAM: 64 GB
NIC: 2x 10 GBit/s Ceph,