> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> What I have tried is to manually repair single PGs as described in [1]. But
> some of the broken PGs have no entries in the log file so I don't have
> anything to look at.
> In case there is one object in one OSD but is missing in the
> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> And the number of scrub errors is increasing, although I started with more
> than 400 scrub errors.
> What I have tried is to manually repair single PGs as described in [1]. But
> some of the broken PGs have no entries in the log file
Please try:
ceph pg repair
Most of the time it will work.
Good luck!
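For what it's worth, the usual sequence on Jewel looks like this (the PG id 1.2a is a placeholder; inspect first, since repair can favour the primary's copy over the replicas):

```shell
# See which PGs are flagged inconsistent
ceph health detail | grep inconsistent

# Inspect what deep-scrub actually found (1.2a is a placeholder PG id)
rados list-inconsistent-obj 1.2a --format=json-pretty

# Ask the primary OSD to repair the PG, then re-verify
ceph pg repair 1.2a
ceph pg deep-scrub 1.2a
```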
> On 26 Sep 2016, at 10:44 PM, Eugen Block wrote:
>
> (Sorry, sometimes I use the wrong shortcuts too quickly)
>
> Hi experts,
>
> I need your help. I have a running cluster with 19 OSDs and 3 MONs. I created
>
We are looking to implement a small setup with Ceph + OpenStack + KVM for a
college that teaches IT careers. We want to empower teachers and students to
self-provision resources and to develop skills to extend and/or build
multi-tenant portals.
Currently:
45 VMs (90% Linux and 10% Windows) using 70 vCPUs
We are running on Hammer 0.94.7 and have had very bad experiences with PG
folders splitting into further sub-directories: OSDs being marked out,
hundreds of blocked requests, etc. We have modified our settings and watched
the behavior match the ceph documentation for splitting, but right now the
Agreed, there was no announcement like there usually is; what is going on?
Hopefully there is an explanation. :|
On Mon, Sep 26, 2016 at 6:01 AM Henrik Korkuc wrote:
> Hey,
>
> 10.2.3 has been tagged in the jewel branch for more than 5 days already, but
> there has been no announcement for it yet. Is
On Mon, Sep 26, 2016 at 5:44 PM, Wido den Hollander wrote:
>
> > On 26 September 2016 at 17:48, Sam Yaple wrote:
> >
> >
> > On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander
> wrote:
> >
> > > Hi,
> > >
> > > This has been discussed on the ML
> On 26 September 2016 at 17:48, Sam Yaple wrote:
>
>
> On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander wrote:
>
> > Hi,
> >
> > This has been discussed on the ML before [0], but I would like to bring
> > this up again with the outlook towards BlueStore.
Hello all
I need your help.
I have a running Ceph cluster on AWS with 3 MONs and 3 OSDs.
My question is: can I use an EBS snapshot of an OSD as a backup solution?
Will it work if I create a volume from the snapshot of the OSD and add it to
the Ceph cluster as a new OSD?
Any help on whether this approach is correct or not would be appreciated.
On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander wrote:
> Hi,
>
> This has been discussed on the ML before [0], but I would like to bring
> this up again with the outlook towards BlueStore.
>
> Bcache [1] allows for block device level caching in Linux. This can be
>
2016-09-26 11:31 GMT+02:00 Wido den Hollander :
...
> Does anybody know the proper route we need to take to get this fixed
> upstream? Does anyone have contacts with the bcache developers?
I do not have direct contacts either, but having partitions on bcache
would be really great.
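For anyone following along, a minimal bcache setup (requires bcache-tools; the device names and cache-set UUID below are placeholders) is roughly:

```shell
# Make /dev/sdb the slow backing device and an SSD partition the cache
make-bcache -B /dev/sdb
make-bcache -C /dev/nvme0n1p1

# Attach the cache set to the new bcache device
# (take the UUID from `bcache-super-show /dev/nvme0n1p1`)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Switch to writeback caching
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

The point raised in the thread is that the resulting /dev/bcache0 could not carry a partition table at the time, which is what made it awkward for Ceph OSD provisioning.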
(Sorry, sometimes I use the wrong shortcuts too quickly)
Hi experts,
I need your help. I have a running cluster with 19 OSDs and 3 MONs. I
created a separate LVM for /var/lib/ceph on one of the nodes. I
stopped the mon service on that node, rsynced the content to the newly
created LVM and
Hi experts,
I need your help. I have a running cluster with 19 OSDs and 3 MONs. I
created a separate LVM for /var/lib/ceph on one of the nodes. I
stopped the mon service on that node, rsynced the content to the newly
created LVM and restarted the monitor, but obviously, I didn't do that
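A sketch of the intended migration (the mon name, device and mount point are placeholders; the key details are stopping the daemon first and preserving ownership and xattrs):

```shell
# Stop the monitor before touching its store (node1 is a placeholder)
service ceph stop mon.node1

# Copy with permissions, ownership and xattrs preserved
# (trailing slashes matter to rsync)
rsync -aX /var/lib/ceph/ /mnt/newlvm/

# Move the old tree aside and mount the LVM volume in its place
umount /mnt/newlvm
mv /var/lib/ceph /var/lib/ceph.old
mkdir /var/lib/ceph
mount /dev/vg0/lv_ceph /var/lib/ceph

service ceph start mon.node1
```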
Hey,
10.2.3 has been tagged in the jewel branch for more than 5 days already, but
there has been no announcement for it yet. Is there any reason for that?
The packages seem to be present too.
ceph-users mailing list
ceph-users@lists.ceph.com
Hello,
> Yes, you are right!
> I've changed this for all pools, but not for the last two!
>
> pool 1 '.rgw.root' replicated size 2 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 8 pgp_num 8 last_change 27 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 2
Yes, you are right!
I've changed this for all pools, but not for the last two!
pool 1 '.rgw.root' replicated size 2 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 8 pgp_num 8 last_change 27 owner 18446744073709551615 flags
hashpspool stripe_width 0
pool 2 'default.rgw.control' replicated
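Assuming the change being discussed is the replica count, the remaining pools can be adjusted the same way (pool names taken from the dump above; the values are examples for a two-OSD cluster):

```shell
# Lower the replica / minimum-replica counts on the remaining pools
ceph osd pool set default.rgw.control size 2
ceph osd pool set default.rgw.control min_size 1
```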
Hi,
On 09/26/2016 12:58 PM, Dmitriy Lock wrote:
Hello all!
I need some help with my Ceph cluster.
I've installed a Ceph cluster on two physical servers, each with a 40G OSD
on /data.
Here is ceph.conf:
[global]
fsid = 377174ff-f11f-48ec-ad8b-ff450d43391c
mon_initial_members = vm35, vm36
Hello all!
I need some help with my Ceph cluster.
I've installed a Ceph cluster on two physical servers, each with a 40G OSD
on /data.
Here is ceph.conf:
[global]
fsid = 377174ff-f11f-48ec-ad8b-ff450d43391c
mon_initial_members = vm35, vm36
mon_host = 192.168.1.35,192.168.1.36
auth_cluster_required =
Hi,
This has been discussed on the ML before [0], but I would like to bring this up
again with the outlook towards BlueStore.
Bcache [1] allows for block device level caching in Linux. This can be
read/write(back) and vastly improves read and write performance to a block
device.
With the
Hi John,
Can you provide:
radosgw-admin zonegroupmap get on both us-dfw and us-phx?
radosgw-admin realm get and radosgw-admin period get on all the gateways?
Orit
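If it helps, the three outputs can be captured on every gateway host in one go (the filenames are just a suggestion):

```shell
# Run on both us-dfw and us-phx gateways; save per-host for comparison
radosgw-admin zonegroupmap get > zonegroupmap.$(hostname).json
radosgw-admin realm get        > realm.$(hostname).json
radosgw-admin period get       > period.$(hostname).json
```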
On Thu, Sep 22, 2016 at 4:37 PM, John Rowe wrote:
> Hello Orit, thanks.
>
> I will do all 6 just in case.
On Mon, Sep 26, 2016 at 11:13 AM, Ilya Dryomov wrote:
> On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote:
>>
>>
>> On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
On Thu, Sep
On Mon, Sep 26, 2016 at 8:28 AM, David wrote:
> Ryan, a team at Ebay recently did some metadata testing; have a search on
> this list. Pretty sure they found there wasn't a huge benefit in putting the
> metadata pool on solid state. As Christian says, it's all about RAM and CPU.
On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote:
>
>
> On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
>> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
[snipped]
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote:
>
> > On 23 September 2016 at 5:59, Chengwei Yang wrote:
> >
> >
> > Hi list,
> >
> > I found that the ceph repo is broken these days; there is no repodata in
> > the repo at all.
> >
> >
Hello,
On Mon, 26 Sep 2016 08:28:02 +0100 David wrote:
> Ryan, a team at Ebay recently did some metadata testing; have a search on
> this list. Pretty sure they found there wasn't a huge benefit in putting
> the metadata pool on solid state. As Christian says, it's all about RAM and
> CPU. You want
Ryan, a team at Ebay recently did some metadata testing; have a search on
this list. Pretty sure they found there wasn't a huge benefit in putting
the metadata pool on solid state. As Christian says, it's all about RAM and
CPU. You want to get as many inodes into cache as possible.
On 26 Sep 2016 2:09
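For reference, the knob that controls how many inodes the MDS caches on Hammer/Jewel is `mds cache size` (counted in inodes, default 100000); a sketch, with `mds.0` and the value as placeholders to be sized against available RAM:

```shell
# Raise the MDS inode cache at runtime (mds.0 is a placeholder name)
ceph tell mds.0 injectargs '--mds-cache-size=1000000'

# Or persist it in ceph.conf under the [mds] section:
#   mds cache size = 1000000
```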
On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
>> On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
>>>
>>> [snipped]
>>>
>>> cat /sys/bus/rbd/devices/47/client_id
>>> client157729
>>> cat
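The neighbouring sysfs attributes carry the rest of the krbd mapping metadata; a quick sketch to dump them for every mapped image:

```shell
# Each krbd mapping appears as a directory under /sys/bus/rbd/devices;
# pool, name and client_id are standard attributes there
for d in /sys/bus/rbd/devices/*/; do
    echo "== $d"
    cat "$d/pool" "$d/name" "$d/client_id"
done
```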
Hi cephers,
I want to run ceph cluster in docker, including MONs, OSDs, RGWs and maybe MDS.
I do it with the guide
https://github.com/ceph/ceph-docker/tree/master/ceph-releases/hammer/ubuntu/14.04/daemon
,
and it runs well on a single node without a KV store. That is, every component
runs as one
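For the multi-node case, the ceph/daemon image from that repository is normally pointed at a KV store through environment variables; a sketch, assuming an etcd endpoint (all addresses and networks below are example values):

```shell
# Start a monitor container backed by etcd for shared configuration;
# IPs, network and etcd endpoint are example values
docker run -d --net=host \
  -e KV_TYPE=etcd \
  -e KV_IP=192.168.0.10 \
  -e KV_PORT=2379 \
  -e MON_IP=192.168.0.20 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon
```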