hi,
since you are playing on CentOS 7, why not follow
http://docs.ceph.com/docs/master/install/get-packages/ or just download the
binary packages from https://download.ceph.com/rpm-jewel/ ? :)
if you insist on installing ceph from ceph-10.2.2.tar.gz, please follow
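For the package route, a minimal sketch of what that looks like on CentOS 7 for
Jewel (the repo file below follows the layout on download.ceph.com; adjust the
architecture and release to match your setup):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
yum install -y ceph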
Hi Ben,
thanks for the information as well. It looks like we will first do some latency
tests between our data centers (thanks for the netem hint) before deciding
which topology is best for us. For simple DR scenarios rbd mirroring sounds
like the better solution so far.
We are still fans of
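For anyone wanting to run the same kind of latency test, a minimal netem sketch
(the interface name and the delay value are placeholders):

tc qdisc add dev eth0 root netem delay 5ms
tc qdisc del dev eth0 root netem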
hi,
I built ceph from ceph-10.2.2.tar.gz, but there is an error like this:
[root@mds0 ceph-10.2.2]# ceph -s
Traceback (most recent call last):
File "/usr/local/bin/ceph", line 118, in
import rados
ImportError: No module named rados
I can find the rados module like this:
[root@mds0
Hi Brad,
We fully understand that the hardware we currently use is below Ceph's
recommendations, so we are looking for a way to lower or restrict the
resources needed by the OSDs. Losing some performance is definitely acceptable
for us.
The reason why we did these experiments and discussed the causes is
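For what it is worth, the settings most often turned down for this on
Hammer/Jewel-era OSD hosts are the osdmap cache and the thread counts; the
values below are illustrative only, not recommendations:

[osd]
osd map cache size = 50
osd op threads = 1
filestore op threads = 1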
On Mon, Nov 28, 2016 at 2:59 PM Ilya Dryomov wrote:
> On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel
> wrote:
> > Hi *,
> >
> > I am currently testing different scenarios to try to optimize sequential
> > read and write speeds using Kernel RBD.
> >
Thank you Jason.
We are designing a backup system for our production cluster based on ceph's
export-diff / import-diff feature.
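For reference, a minimal export-diff / import-diff cycle looks roughly like
this (pool, image and snapshot names are placeholders; the destination image
must already exist before the first import-diff):

rbd snap create rbd/image1@snap1
rbd export-diff rbd/image1@snap1 image1-snap1.diff
rbd snap create rbd/image1@snap2
rbd export-diff --from-snap snap1 rbd/image1@snap2 image1-snap1-snap2.diff
rbd import-diff image1-snap1.diff backup/image1
rbd import-diff image1-snap1-snap2.diff backup/image1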
We found this issue and hope it can be confirmed and then fixed soon.
If you need any more information for debugging, please just let me know.
Thanks,
Zhongyan
On Mon, Nov
hi,
1. try the following: remove /root/.python-eggs/rados-0-py2.7-linux-x86_64.egg-tmp/
(if you want to keep it, back it up first).
2. the command you ran, cp -vf /usr/local/lib/python2.7/site-packages/*
/usr/lib64/python2.7/ , is generally not recommended.
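A less intrusive alternative to the copy is to put the install prefix on
PYTHONPATH; the path below is the one from the command above, adjust it if
your prefix differs:

export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
ceph -s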
the ceph version is ceph-10.2.2.tar.gz
the error message is:
[root@mon0 ceph-10.2.2]# ceph -s
Traceback (most recent call last):
File "/usr/local/bin/ceph", line 118, in
import rados
File "build/bdist.linux-x86_64/egg/rados.py", line 7, in
File
On Mon, Nov 28, 2016 at 9:54 PM, Piotr Dzionek wrote:
> Hi,
> I recently installed 3 nodes ceph cluster v.10.2.3. It has 3 mons, and 12
> osds. I removed default pool and created the following one:
>
> pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0
On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel wrote:
> Hi *,
>
> I am currently testing different scenarios to try to optimize sequential
> read and write speeds using Kernel RBD.
>
> I have two block devices created with :
> rbd create block1 --size 500G --pool rbd
Hi Nick,
We have a Ceph cluster spread across 3 datacenters at 3 institutions
in Michigan (UM, MSU, WSU). It certainly is possible. As noted you
will have increased latency for write operations and overall reduced
throughput as latency increases. Latency between our sites is 3-5ms.
We did
Thanks guys, I'll make sure the dashboard gets updated
On Thu, Nov 24, 2016 at 6:25 PM, Brad Hubbard wrote:
> Patrick,
>
> I remember hearing you talk about this site recently. Do you know who
> can help with this query?
>
> On Fri, Nov 25, 2016 at 2:13 AM, Nick Fisk
To optimize for non-direct, sequential IO, you'd actually most likely
be better off with smaller RBD object sizes. The rationale is that
each backing object is handled by a single PG and by using smaller
objects, you can distribute the IO load to more PGs (and associated
OSDs) in parallel. The 4MB
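As a concrete sketch of creating an image with a smaller object size (the image
name and values are placeholders; newer rbd clients take --object-size, older
ones take --order, the log2 of the object size in bytes):

rbd create block1m --size 500G --pool rbd --image-feature layering --object-size 1M

With --order the equivalent would be --order 20, since 2^20 bytes = 1 MiB.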
Hi *,
I am currently testing different scenarios to try to optimize sequential read
and write speeds using Kernel RBD.
I have two block devices created with:
rbd create block1 --size 500G --pool rbd --image-feature layering
rbd create block132m --size 500G --pool rbd --image-feature
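If it helps, the object size of an existing image can be verified afterwards
(image name taken from the commands above):

rbd info rbd/block132m

The "order" field in the output is the object size as a power of two; order 25
corresponds to 32 MiB objects.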
Hi guys,
Thanks to both of your suggestions, we have made some progress on this issue.
I tuned vm.min_free_kbytes to 16GB and raised vm.vfs_cache_pressure to 200, and
I did observe that the OS keeps releasing cache while the OSDs want more and
more memory.
OK. Now we are going to reproduce the
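For reference, the two settings mentioned above as sysctl commands (the values
are the ones from this thread, not general recommendations):

sysctl -w vm.min_free_kbytes=16777216
sysctl -w vm.vfs_cache_pressure=200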
In your cluster the OSD is down, but not out. Only when an OSD goes out does
the data start to rebuild. Once the OSD is marked out, the cluster will show
11/11 osds up instead of 1/12 osds down.
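A quick way to inspect and change that state (the osd id is a placeholder):

ceph osd tree
ceph osd out 11

Left alone, the monitors mark a down OSD out automatically after
mon_osd_down_out_interval expires.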
OK, in that case, none of my previous explanation is relevant. I'll
spin up a hammer cluster and try to reproduce.
On Wed, Nov 23, 2016 at 9:13 PM, Zhongyan Gu wrote:
> BTW, I used Hammer 0.94.5 to do the test.
>
> Zhongyan
>
> On Thu, Nov 24, 2016 at 10:07 AM, Zhongyan Gu
Hi,
1. It is possible to do that with the primary affinity setting. The
documentation gives an example with SSD as primary OSD and HDD as secondary. I
think it would work for an Active/Passive DC scenario, but it might be tricky
for Active/Active. If you do Ceph across 2 DCs you might have
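A rough sketch of that setting (osd.0 and the value are placeholders; if I
remember correctly, jewel-era clusters also need "mon osd allow primary
affinity = true" on the monitors before it takes effect):

ceph osd primary-affinity osd.0 0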
(Copying ceph-users to share the info more broadly)
On Thu, Nov 24, 2016 at 10:50 AM, wrote:
> Hi John
>
> I have some questions about the use of cephfs.
> Can you help me answer them? Thank you!
>
> We built an OpenStack (M) file share and use the Manila component based on CephFS.
>
On 11/28/16 10:02, Kevin Olbrich wrote:
> Hi!
>
> I want to deploy two nodes with 4 OSDs each. I already prepared OSDs
> and only need to activate them.
> What is better? One by one or all at once?
>
> Kind regards,
> Kevin.
I think the general statement is that if your cluster is very small, you
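One pattern that comes up for either answer is to pause data movement while the
OSDs come in and then let recovery run in a single pass (a sketch, not a
recommendation for every cluster):

ceph osd set nobackfill
ceph osd set norecover
# activate the prepared OSDs here, e.g. ceph-disk activate /dev/sdX1 on each host
ceph osd unset nobackfill
ceph osd unset norecover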
Hi,
I recently installed a 3-node ceph cluster, v10.2.3. It has 3 mons and
12 osds. I removed the default pool and created the following one:
pool 7 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 126 flags hashpspool
stripe_width 0
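For reference, the commands that produce a pool like the one shown (the pg
count is taken from the dump above, not a recommendation):

ceph osd pool create data 1024 1024 replicated
ceph osd pool set data size 2
ceph osd pool set data min_size 1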
I need to note that I already have 5 hosts with one OSD each.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2016-11-28 10:02 GMT+01:00 Kevin Olbrich :
> Hi!
>
> I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
> only need to activate them.
> What
Hello Tim,
Can you please confirm whether DeepSea also works on Ubuntu?
Thanks
On Fri, Nov 25, 2016 at 3:34 PM, M Ranga Swami Reddy
wrote:
> Hello Tim,
> Can you please confirm whether DeepSea also works on Ubuntu?
>
> Thanks
> Swami
>
> On Thu, Nov 3, 2016 at 11:22 AM,
Hey!
I have been using ceph for a while but am not a real expert; still, I will give
you some pointers so that everyone can help you further.
1. The crush map is roughly divided into two parts: the topology description
(which you provided us with) and the crush rules that define how the data
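To look at the rules half next to the topology, the crush map can be dumped and
decompiled (file names are placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

or, without the round-trip: ceph osd crush rule dump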
- On 11 Nov 2016 at 14:35, Wido den Hollander w...@42on.com wrote:
>> On 11 November 2016 at 14:23, Trygve Vea
>> wrote:
>>
>>
>> Hi,
>>
>> We recently experienced a problem with a single OSD. This occurred twice.
>>
>> The problem manifested itself thus:
>>
Hey guys,
we're evaluating ceph at the moment for a bigger production-ready
implementation. So far we've had some success and
some problems with ceph. In combination with Proxmox, Ceph works quite well
when used out of the box. I've tried to cover my questions
with existing answers and
Hi!
I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
only need to activate them.
What is better? One by one or all at once?
Kind regards,
Kevin.