Hi,
I am trying to map an rbd device on Ubuntu 14.04 (kernel 3.13.0-30-generic):
# rbd -p mypool create test1 --size 500
# rbd -p mypool ls
test1
# rbd -p mypool map test1
rbd: add failed: (5) Input/output error
and in the syslog:
Jul 4 09:31:48 testceph kernel: [70503.356842] libceph: mon2 1
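For anyone finding this in the archive: a common cause of this error on older kernels is a feature or CRUSH tunables mismatch between the cluster and the krbd client. A minimal sketch of how one might narrow it down (not taken from the original thread):
# dmesg | tail                    # look for a "feature set mismatch" message from libceph
# ceph osd crush show-tunables    # check which tunables profile the cluster is using
# ceph osd crush tunables legacy  # only if older krbd clients must stay supported
# rbd -p mypool map test1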
On 04/07/14 17:58, Ilya Dryomov wrote:
> On Fri, Jul 4, 2014 at 11:48 AM, Xabier Elkano wrote:
>> Hi,
>>
>> I am trying to map an rbd device on Ubuntu 14.04 (kernel 3.13.0-30-generic):
>>
>> # rbd -p mypool create test1 --size 500
>>
>> # rbd -p
Hi,
I was doing some tests on my cluster with the fio tool: one fio instance
with 70 jobs, each job writing 1 GB of random data with a 4K block size. I did
this test with 3 variations:
1- Creating 70 images, 60 GB each, in the pool. Using the rbd kernel module,
format and mount each image as ext4. Each fio job wri
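As a point of reference, variation 1 could be reproduced per image with something along these lines (a sketch only; device and mount point names are assumptions, and the fio invocation would be repeated, one job per mounted image, to reach 70 jobs in total):
# rbd -p mypool create img01 --size 61440        # 60 GB (rbd sizes are given in MB here)
# rbd -p mypool map img01
# mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt/rbd01
# fio --name=img01 --directory=/mnt/rbd01 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --size=1g --numjobs=1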
On 09/07/14 13:10, Mark Nelson wrote:
> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
>>
>>
>> Hi,
>>
>> I was doing some tests on my cluster with the fio tool: one fio instance
>> with 70 jobs, each job writing 1 GB of random data with a 4K block size. I did this
>&
On 09/07/14 14:07, Mark Nelson wrote:
> On 07/09/2014 06:52 AM, Xabier Elkano wrote:
>> On 09/07/14 13:10, Mark Nelson wrote:
>>> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> I was do
On 09/07/14 13:14, hua peng wrote:
> what is the IO throughput (MB/s) for the test cases?
>
> Thanks.
Hi Hua,
the throughput in each test is IOPS × 4K block size; all tests are
random writes.
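For example, with purely illustrative numbers (not the actual results): 5,000 IOPS at a 4K block size works out to roughly 20 MB/s (5,000 × 4 KB).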
Xabier
>
> On 14-7-9 6:57 PM, Xabier Elkano wrote:
>>
>>
>>
On 09/07/14 16:53, Christian Balzer wrote:
> On Wed, 09 Jul 2014 07:07:50 -0500 Mark Nelson wrote:
>
>> On 07/09/2014 06:52 AM, Xabier Elkano wrote:
>>> On 09/07/14 13:10, Mark Nelson wrote:
>>>> On 07/09/2014 05:57 AM, Xabier Elkano wrote:
>>>>&
On 10/07/14 09:18, Christian Balzer wrote:
> On Thu, 10 Jul 2014 08:57:56 +0200 Xabier Elkano wrote:
>
>> On 09/07/14 16:53, Christian Balzer wrote:
>>> On Wed, 09 Jul 2014 07:07:50 -0500 Mark Nelson wrote:
>>>
>>>> On 07/09/2014 06:52 AM, Xabier E
On 22/03/15 at 10:55, Saverio Proto wrote:
> Hello,
>
> I started to work with Ceph a few weeks ago. I might be asking a very newbie
> question, but I could not find an answer in the docs or in the ML
> archive for this.
>
> Quick description of my setup:
> I have a ceph cluster with two servers. Eac
Hi,
I'm designing a new Ceph pool with new hardware and I would like to
receive some suggestions.
I want to use a replica count of 3 in the pool, and the idea is to buy 3
new servers, each with a 10-drive 2.5" chassis and two 10 Gbps NICs. I have
in mind two configurations:
1- With journal on SSDs
O
A single SSD failure also means 3
OSD failures (50% loss of capacity on that node and 16% of total capacity),
but the journal SSDs are Intel DC S3700 and they should be very reliable.
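(To spell out the arithmetic behind those percentages, assuming, as the figures imply, 6 OSDs per node across 3 nodes and 3 OSDs journaling to each SSD: losing one journal SSD takes down 3 of the 6 OSDs in its node, i.e. 50%, and 3 of the 18 OSDs in the cluster, i.e. roughly 16%.)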
>> On 6 May 2014, at 18:07, Xabier Elkano wrote:
>>
>>
>> Hi,
>>
>> I'm designin
On 06/05/14 17:51, Wido den Hollander wrote:
> On 05/06/2014 05:07 PM, Xabier Elkano wrote:
>>
>> Hi,
>>
>> I'm designing a new ceph pool with new hardware and I would like to
>> receive some suggestions.
>> I want to use a replica count of 3 in the
pen.
>
> However, using a 100GB DC3700 with those drives isn't particularly wise
> performance-wise. I'd at least use the 200GB ones.
Hi Christian, you are right, I should use the 200GB ones at least. Thanks!
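(For context on why the larger model matters: the journal SSD absorbs the combined write stream of every OSD journaling to it, so its sequential write throughput effectively caps those OSDs, and the 100GB DC S3700 is rated for noticeably lower sequential write throughput than the 200GB model; exact figures should be checked against Intel's spec sheet.)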
>
> Regards,
>
> Christian
>>> On 6 May 2014, at 18:07
On 06/05/14 19:31, Cedric Lemarchand wrote:
> On 06/05/2014 17:07, Xabier Elkano wrote:
>> the goal is performance over capacity.
> I am sure you already considered the "full SSD" option, didn't you?
>
Yes, I considered the full SSD option, but it is very expensive
On 06/05/14 19:38, Sergey Malinin wrote:
> If you plan to scale up in the future you could consider the following config
> to start with:
>
> Pool size=2
> 3 x servers with OS+journal on 1 SSD, 3 journal SSDs, 4 x 900 GB data disks.
> It will get you 5+ TB capacity and you will be able to incre
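(Checking the arithmetic on that suggestion: 3 servers × 4 × 900 GB ≈ 10.8 TB raw, which with pool size=2 comes to roughly 5.4 TB usable, matching the "5+ TB" figure.)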
On 06/05/14 18:40, Christian Balzer wrote:
> Hello,
>
> On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote:
>
>> Hi,
>>
>> I'm designing a new ceph pool with new hardware and I would like to
>> receive some suggestions.
>> I want to use a replic
On 13/05/14 11:31, Christian Balzer wrote:
> Hello,
>
> No actual question, just some food for thought and something that later
> generations can scour from the ML archive.
>
> I'm planning another Ceph storage cluster, this time a "classic" Ceph
> design, 3 storage nodes with 8 HDDs for OSDs an
On 13/05/14 14:23, Christian Balzer wrote:
> On Tue, 13 May 2014 12:07:12 +0200 Xabier Elkano wrote:
>
>> On 13/05/14 11:31, Christian Balzer wrote:
>>> Hello,
>>>
>>> No actual question, just some food for thought and something that later
>&g
Hi,
I've just deployed the Ceph object gateway as an object storage service in
OpenStack. I've followed this doc to achieve the integration with
Keystone:
http://docs.ceph.com/docs/master/radosgw/keystone/
"It is possible to integrate the Ceph Object Gateway with Keystone, the
OpenStack identity service
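For anyone following the same document, the radosgw side of the integration ends up as a handful of settings in ceph.conf, roughly like this (a sketch based on the linked doc; the section name, endpoint, token and role names are placeholders, not values from this deployment):
[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = {admin-token}
rgw keystone accepted roles = Member, admin
rgw s3 auth use keystone = true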
all on jessie. Or (better) provide
> repository. Please.
>
> Have a nice day.
> Dmitry.
>
>
Hi all,
my cluster is in WARN state because apparently there are some PGs
unfound. I think I got into this situation because of the metadata
pool: this pool was in the default root but unused, because I don't
use CephFS, I only use rbd for VMs. I don't have OSDs in the default
root, they are
Hi,
I managed to remove the warning by reweighting the crashed OSD:
ceph osd crush reweight osd.33 0.8
After the recovery, the cluster is not showing the warning any more.
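For the archive, the state can be checked before and after with standard commands (a sketch; apart from the reweight itself these are not taken from the thread):
# ceph health detail           # lists the unfound/degraded PGs behind the WARN state
# ceph pg dump_stuck unclean   # shows which PGs are stuck and on which OSDs
# ceph osd crush reweight osd.33 0.8
# ceph -s                      # watch recovery until the cluster returns to HEALTH_OK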
Xabier
On 29/11/16 11:18, Xabier Elkano wrote:
> Hi all,
>
> my cluster is in WARN state because apparently there