Hi all,
I have 20 OSDs and 1 pool, and, as recommended by the doc
(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
configured pg_num and pgp_num to 4096, size 2, min size 1.
But ceph -s shows:
HEALTH_WARN
534 pgs degraded
551 pgs stuck unclean
534 pgs undersized
too many PGs per OSD (409 > max 300)
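For reference, the formula behind that doc's recommendation targets about 100
PGs per OSD, and 4096 PGs of size 2 spread over 20 OSDs works out to
4096 * 2 / 20, about 410 PG copies per OSD, which is what trips the warning.
A closer-to-recommended invocation might look like this (pool name is
illustrative, not from the thread):

    # target (20 OSDs * 100) / size 2 = 1000, rounded up to a power of two
    ceph osd pool create mypool 1024 1024    # pg_num pgp_num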
On 22.03.2016 at 19:02, Zhang Qiang wrote:
> Hi Oliver,
>
> Thanks for your reply to my question.
Oliver, Goncalo,
Sorry to bother you again, but recreating the pool with a smaller pg_num
didn't seem to work; now all 666 pgs are degraded + undersized.
New status:
cluster d2a69513-ad8e-4b25-8f10-69c4041d624d
health HEALTH_WARN
666 pgs degraded
82 pgs stuck
And here's the osd tree if it matters.
1000 pgs, 2 pools, 14925 MB data, 3851 objects
37827 MB used, 20837 GB / 21991 GB avail
1000 active+clean
Hi all,
According to fio, with a 4k block size the sequential write performance of my
ceph-fuse mount is only about 20 MB/s; at most 200 Mb/s of the 1 Gb/s
full-duplex NIC's outgoing bandwidth was used. But with a 1M block size the
throughput can reach as high as 1000 Mb/s, approaching the limit of the NIC.
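Numbers like these typically come from an fio run along the following lines
(path, size, and job name are assumptions for illustration, not the original
job file):

    fio --name=seqwrite --rw=write --bs=4k --size=1G \
        --ioengine=libaio --direct=1 --directory=/mnt/cephfs
    # rerun with --bs=1M to compare large-block throughput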
being labeled as osd.0; both up and in. I'd recommend trying to get rid of
the one listed on host 148_96 and see if it clears the issues.
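If that duplicate entry really is just a stale CRUSH item, something along
these lines might clear it (bucket name taken from the advice above; verify
the tree first):

    ceph osd tree                    # confirm osd.0 appears twice
    ceph osd crush rm osd.0 148_96   # remove it from the 148_96 bucket only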
On Tue, Mar 22, 2016 at 6:28 AM, Zhang Qiang <dotslash...@gmail.com> wrote:
> Hi Reddy,
> It's over a
I installed jewel el7 via yum on CentOS 7.1, but it seems no systemd scripts
are available. I do find there's a folder named 'systemd' in the source,
though, so maybe we forgot to build it into the package?
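As a possible workaround until the packaging is fixed, the unit files from
that source directory could be installed by hand (paths are assumptions, not
from the thread):

    cp systemd/ceph-osd@.service systemd/ceph-osd.target /usr/lib/systemd/system/
    systemctl daemon-reload
    systemctl enable ceph-osd@0
    systemctl start ceph-osd@0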
Hi, I need help deploying jewel OSDs on CentOS 7.
Following the guide, I have successfully run the OSD daemons, but all of them
are down according to `ceph -s`: 15/15 in osds are down.
There are no errors in /var/log/ceph/ceph-osd.1.log; it just stopped at these
lines and never made progress:
2016-05-09
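Some generic first checks for this situation (assuming systemd-managed OSDs;
these commands are not from the original thread):

    systemctl status ceph-osd@1    # is the process actually alive?
    ceph daemon osd.1 status       # ask the admin socket what state it's in
    ceph osd tree                  # are the OSDs known to the monitors at all?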
Thanks Wang, looks like so, not Ceph to blame :)

On 25 October 2016 at 09:59, Haomai Wang <hao...@xsky.com> wrote:
> Could you check dmesg? I think there is a disk EIO error.
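A quick way to run that check (the grep pattern and device name are just
suggestions):

    dmesg -T | grep -iE 'i/o error|eio'
    smartctl -a /dev/sdX    # hypothetical device; check SMART health too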
Hi,
One of several OSDs on the same machine crashed several times within days.
It's always that one; the other OSDs are all fine. Below is the dumped
message; since it's too long, I only pasted the head and tail of the recent
events. If it's necessary to inspect the full log, please see
Hi all,
To observe what happens to a ceph-fuse mount when the network is down, we
blocked network connections to all three monitors with iptables. If we
restore the network immediately (within minutes), the blocked I/O requests
complete and everything goes back to normal.
But if we continue
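The blocking test described above could be reproduced roughly like this
(assumes the monitors listen on the default port 6789; rules are
illustrative):

    iptables -A OUTPUT -p tcp --dport 6789 -j DROP    # cut off all three mons
    # ... observe the hung I/O on the ceph-fuse mount ...
    iptables -D OUTPUT -p tcp --dport 6789 -j DROP    # restore connectivity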
Thanks! I'll check it out.

On 24 November 2017 at 17:58, Yan, Zheng <uker...@gmail.com> wrote:
Hi all,
I'm new to Luminous. When I use ceph-volume create to add a new filestore
OSD, it tells me that the journal's header magic is not good. But the journal
device is a new LV. How do I make it write the new OSD's header to the
journal?
And it seems this error message will not affect the
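For reference, the invocation in question is presumably something like this
(VG/LV names are placeholders, not from the thread):

    ceph-volume lvm create --filestore --data vg0/osd-data --journal vg0/osd-journal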
Hi,
Is it normal that, a day after I deleted files from CephFS, Ceph still hadn't
deleted the backing objects? Only after I restarted the MDS daemon did it
start to release the storage space.
I noticed the doc (http://docs.ceph.com/docs/mimic/dev/delayed-delete/)
says the file is marked as deleted on the
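One way to see whether the deleted files are sitting in the MDS stray
directories rather than being purged (the daemon name is an assumption):

    ceph daemon mds.a perf dump mds_cache | grep -i stray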
Hi,
Ceph version 10.2.3. After a power outage I tried to start the MDS daemons,
but they were stuck replaying the journal forever. I had no idea why they
were taking that long, because this is just a small cluster for testing
purposes with only a few hundred MB of data. I restarted them, and the error
below was
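In a case like this the on-disk journal can be examined with
cephfs-journal-tool, which ships with 10.2 (taking a backup first is a
sensible precaution; this is a generic suggestion, not from the thread):

    cephfs-journal-tool journal export backup.bin   # save a copy first
    cephfs-journal-tool journal inspect             # check for corruption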
Hi,
I'm using ceph-fuse 10.2.3 on CentOS 7.3.1611. ceph-fuse always
segfaults after running for some time.
*** Caught signal (Segmentation fault) **
in thread 7f455d832700 thread_name:ceph-fuse
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
1: (()+0x2a442a) [0x7f457208e42a]
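A generic way to turn a trace like this into something readable (CentOS
tooling; assumes the debuginfo repo is enabled and a core dump was written):

    debuginfo-install ceph-fuse           # pull matching debug symbols
    gdb /usr/bin/ceph-fuse /path/to/core  # open the core, then run: bt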
upgrade your version of ceph-fuse.