Hi!
I'm currently trying to get the XenServer on CentOS 6.4 tech preview
working against a test Ceph cluster and am having the same issue.
Some info: the cluster is named ceph, the pool is named rbd.
ceph.xml:
<pool type='rbd'>
  <name>rbd</name>
  <source>
    <name>ceph</name>
    <host
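For comparison, a complete libvirt RBD pool definition generally has this shape; the monitor hostname, port, and secret UUID below are placeholders, not values from this thread. Note that in libvirt's schema the `<name>` inside `<source>` is the Ceph pool name:

```xml
<!-- Sketch only; hostname, port, and UUID are placeholders -->
<pool type='rbd'>
  <name>rbd</name>                  <!-- libvirt pool name -->
  <source>
    <name>rbd</name>                <!-- Ceph pool name -->
    <host name='mon1.example.com' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
```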
I have a server with 2 x 2TB disks. For performance, is it better to combine
them as a single OSD backed by RAID0 or have 2 OSD's backed by a single disk?
(log will be on SSD in either case).
My performance need is more about IOPS than overall throughput (maybe that's a
universal thing? :)
Hi,
The RGW bucket index is stored in a single object, so it lands on one OSD (a performance bottleneck).
Is sharding, or some other change to improve performance, on the roadmap?
--
Regards
Dominik
___
ceph-users mailing list
ceph-users@lists.ceph.com
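On the bucket-index question above: sharding did eventually land in radosgw. Assuming a release that supports the option, a minimal ceph.conf sketch looks like this (the gateway section name is a conventional placeholder for your instance):

```ini
[client.radosgw.gateway]
; number of index shards for newly created buckets
rgw override bucket index max shards = 16
```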
Hi,
On 07/21/2013 07:20 AM, James Harper wrote:
I have a server with 2 x 2TB disks. For performance, is it better to combine
them as a single OSD backed by RAID0 or have 2 OSD's backed by a single disk?
(log will be on SSD in either case).
I'd say two disks and not RAID0. Since when you
Hi,
On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:
Hi!
I'm currently trying to get the XenServer on CentOS 6.4 tech preview
working against a test Ceph cluster and am having the same issue.
Some info: the cluster is named ceph, the pool is named rbd.
ceph.xml:
<pool type='rbd'>
On 07/20/2013 11:42 PM, Wido den Hollander wrote:
On 07/20/2013 05:16 PM, Sage Weil wrote:
On Sat, 20 Jul 2013, Wido den Hollander wrote:
On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim
jfs.wo...@gmail.com
wrote:
hey folks, I was hoping to
Hi again,
[root@xen-blade05 ~]# virsh pool-info rbd
Name: rbd
UUID: ebc61120-527e-6e0a-efdc-4522a183877e
State: running
Persistent: no
Autostart: no
Capacity: 5.28 TiB
Allocation: 16.99 GiB
Available: 5.24 TiB
I managed to get it running. How
Hello.
I am intending to build a Ceph cluster using several Dell C6100 multi-node
chassis servers.
These have only 3 disk bays per node (12 x 3.5" drives across 4 nodes), so I
can't afford to sacrifice a third of my capacity for SSDs. However, fitting an
SSD via PCIe seems a valid option.
On 07/21/13 20:37, Wido den Hollander wrote:
I'd say two disks and not RAID0, since when you are doing parallel I/O
both disks can be doing something completely different.
Completely agree, Ceph is already doing the striping :)
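The one-OSD-per-disk layout from the earlier question (two 2 TB disks, journals on the SSD) can be sketched in ceph.conf roughly like this; the data paths and journal partitions are placeholders, not values from this thread:

```ini
# One OSD per 2 TB disk, both journals on partitions of the shared SSD.
# All device paths below are placeholders.
[osd.0]
osd data = /var/lib/ceph/osd/ceph-0   ; mounted on the first 2 TB disk
osd journal = /dev/sda5               ; SSD partition
[osd.1]
osd data = /var/lib/ceph/osd/ceph-1   ; mounted on the second 2 TB disk
osd journal = /dev/sda6               ; SSD partition
```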
On Mon, Jul 22, 2013 at 08:45:07AM +1100, Mikaël Cluseau wrote:
On 22/07/2013 08:03, Charles 'Boyo wrote:
Counting on the kernel's cache, it appears I would be best served
by purchasing write-optimized SSDs?
Can you share any information on the SSD you are using, is it PCIe
connected?
We are