gentooserver ~ # ceph -s --debug-ms=1
2018-01-18 21:09:55.981886 7f9581f33700 1 Processor -- start
2018-01-18 21:09:55.981919 7f9581f33700 1 -- - start start
2018-01-18 21:09:55.982006 7f9581f33700 1 -- - -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- auth(proto 0 30 bytes
epoch 0) v1
Here is my core-site.xml file:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>ceph://host01:6789/</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>ceph://host01:6789</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/mnt/hadoop/hadoop_tmp</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
</configuration>
Hi,
What’s your Hadoop xml config file like?
Have you checked the permissions of the ceph.conf and keyring file?
If all that looks good, maybe consider setting debug options in ceph.conf or
in the Hadoop xml config file.
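For instance, a hedged ceph.conf fragment the Hadoop client would pick up via
ceph.conf.file (option names are from the standard Ceph client debug set; the
levels and log path are just illustrative):

```ini
[client]
    ; verbose client-side CephFS logging
    debug client = 20
    ; messenger-level traffic, same as --debug-ms=1 on the CLI
    debug ms = 1
    ; monitor client handshakes/auth
    debug monc = 10
    log file = /var/log/ceph/client.$pid.log
```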
JC
> On Jan 18, 2018, at 16:55, Bishoy Mikhael wrote:
Hi All,
I have a tiny Ceph 12.2.2 cluster set up with three nodes, 17 OSDs, and 3
MONs, MDSs, and MGRs (spanned across the three nodes).
Hadoop 2.7.3 is configured on only one of the three nodes, as follows:
- Hadoop binaries were extracted to /opt/hadoop/bin/
- Hadoop config files were at /opt/hadoop/etc/hadoop/
With the help of robbat2 and llua on the IRC channel I was able to solve this
situation by taking down the hosts that had only 2 OSDs.
After crush reweighting OSDs 8 and 23 from host mia1-master-fe02 to 0, ceph
df showed the expected storage capacity usage (about 70%).
With this in mind, those guys have told
Hi David, thanks for replying.
On Thu, Jan 18, 2018 at 5:03 PM David Turner wrote:
> You can have overall space available in your cluster because not all of
> your disks are in the same crush root. You have multiple roots
> corresponding to multiple crush rulesets. All
Your hosts are also not balanced in your default root. Your failure domain
is host, but one of your hosts has 8.5TB of storage in it compared to
26.6TB and 29.6TB. You only have size=2 (along with min_size=1, which is
bad for a lot of reasons) so it should still be able to place data mostly
`ceph osd df` is a good command for you to see what's going on. Compare
the osd numbers with `ceph osd tree`.
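To make the imbalance concrete, here is a toy sketch (mine, not from the
thread; only the three host sizes are taken from above): with failure domain
= host, each of an object's copies must land on a different host, so the
achievable data capacity for a given size can be estimated with a simple
feasibility check:

```python
def usable_data_tb(hosts, replicas, eps=1e-6):
    """Max data (TB, before replication) placeable when each of
    `replicas` copies must sit on a different host (failure domain =
    host). An amount D is feasible iff sum over hosts of
    min(capacity, D) covers replicas * D, since no host can hold more
    than one copy per object. Solved by binary search on D."""
    lo, hi = 0.0, sum(hosts) / replicas
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if sum(min(c, mid) for c in hosts) >= replicas * mid:
            lo = mid
        else:
            hi = mid
    return lo

hosts = [8.5, 26.6, 29.6]          # TB per host, from the thread
print(usable_data_tb(hosts, 2))    # size=2: imbalance barely hurts yet
print(usable_data_tb(hosts, 3))    # size=3: capped by the smallest host
```

With size=2 the small host barely matters here, but with size=3 across these
three hosts the 8.5TB host would cap usable data capacity at 8.5TB.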
On Thu, Jan 18, 2018 at 5:03 PM David Turner wrote:
> You can have overall space available in your cluster because not all of
> your disks are in the same crush
You can have overall space available in your cluster because not all of
your disks are in the same crush root. You have multiple roots
corresponding to multiple crush rulesets. All pools using crush ruleset 0
are full because all of the osds in that crush rule are full.
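A toy illustration of that point (all names and numbers below are made up):
the cluster-wide average can look fine while every OSD under one crush root
is full, and any pool whose ruleset draws from that root is then full too:

```python
# Hypothetical OSD utilisation, grouped by the crush root their rule uses.
osd_used_pct = {
    "osd.0": 96, "osd.1": 95, "osd.2": 97,   # root "ssd" -> ruleset 0
    "osd.3": 40, "osd.4": 35, "osd.5": 42,   # root "hdd" -> ruleset 1
}
root_of = {"osd.0": "ssd", "osd.1": "ssd", "osd.2": "ssd",
           "osd.3": "hdd", "osd.4": "hdd", "osd.5": "hdd"}

def root_usage(root):
    vals = [p for o, p in osd_used_pct.items() if root_of[o] == root]
    return sum(vals) / len(vals)

overall = sum(osd_used_pct.values()) / len(osd_used_pct)
print(f"overall:  {overall:.0f}%")             # looks like plenty of space
print(f"root ssd: {root_usage('ssd'):.0f}%")   # pools on ruleset 0 are full
print(f"root hdd: {root_usage('hdd'):.0f}%")
```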
On Thu, Jan 18, 2018 at
Quoting Steven Vacaroaia (ste...@gmail.com):
> Hi,
>
> I have noticed the below error message when creating a new OSD using
> ceph-volume
> deleting the OSD and recreating it does not work - same error message
>
> However, creating a new one OSD works
>
> Note
> No firewall /iptables are
On Thu, Jan 18, 2018 at 5:57 AM, Alex Gorbachev
wrote:
> On Tue, Jan 16, 2018 at 2:17 PM, Gregory Farnum wrote:
>> On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev
>> wrote:
>>>
>>> I found a few WAN RBD cluster design
It took around 30 min for the monitor to join, and then I could execute ceph -s.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I have seen messages pass by here about a monitor taking a while to join.
I had the monitor disk run out of space. The monitor was killed and is now
restarting. I can't do a ceph -s and have to wait for this monitor to join
as well.
2018-01-18 21:34:05.787749 7f5187a40700 0 --
Sorry, I forgot: this is Ceph Jewel 10.2.10.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
Also, there is no quota set for the pools
Here is "ceph osd pool get xxx all": http://termbin.com/ix0n
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
Hello,
I'm running a nearly-out-of-service radosgw (very slow to write new objects),
and I suspect it's because ceph df is showing 100% usage on some pools,
though I don't know where that information comes from.
Pools:
#~ ceph osd pool ls detail -> http://termbin.com/lsd0
Crush Rules (important
Hi Andras,
On Thu, Jan 18, 2018 at 3:38 AM, Andras Pataki
wrote:
> Hi John,
>
> Some other symptoms of the problem: when the MDS has been running for a few
> days, it starts looking really busy. At this time, listing directories
> becomes really slow. An "ls -l"
Hi,
I have noticed the below error message when creating a new OSD using
ceph-volume
deleting the OSD and recreating it does not work - same error message
However, creating a brand new OSD works.
Note: no firewall/iptables are enabled, and nothing shows on those ports
using netstat -ant.
Any ideas?
On Tue, Jan 16, 2018 at 2:17 PM, Gregory Farnum wrote:
> On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev
> wrote:
>>
>> I found a few WAN RBD cluster design discussions, but not a local one,
>> so was wondering if anyone has experience with a
You have to check the admin Ops API documentation:
http://docs.ceph.com/docs/master/radosgw/adminops/
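As a rough sketch of what that looks like (mine, not from the docs page
verbatim): the admin ops endpoints use S3-style v2 request signing, so
creating a user is a signed PUT to /admin/user. The host, uid, and keys
below are placeholders, and the caller's key needs admin "users" caps:

```python
import base64, hashlib, hmac
from email.utils import formatdate

def sign_admin_request(method, resource, secret_key, date=None):
    """Return (Date header, signature) for a radosgw adminops request,
    using the S3 v2 scheme: HMAC-SHA1 over
    "METHOD\n\n\nDATE\nRESOURCE" (no Content-MD5/Content-Type here)."""
    date = date or formatdate(usegmt=True)
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return date, base64.b64encode(digest).decode()

ACCESS_KEY, SECRET_KEY = "access-placeholder", "secret-placeholder"
date, sig = sign_admin_request("PUT", "/admin/user", SECRET_KEY)
headers = {"Date": date, "Authorization": f"AWS {ACCESS_KEY}:{sig}"}
url = ("http://RGW_HOST/admin/user"
       "?format=json&uid=newuser&display-name=New%20User")
# e.g. requests.put(url, headers=headers) would then create the user
print(headers)
```

Note the query string (uid, format) is not part of the canonicalized
resource for signing; only the /admin/user path is.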
Cheers,
Valery
On 18/01/18 12:32, 13605702...@163.com wrote:
Hi:
Is there a way to create a radosgw user using the RESTful API?
I'm using Jewel.
Thanks
Hi John,
Some other symptoms of the problem: when the MDS has been running for a
few days, it starts looking really busy. At this time, listing
directories becomes really slow. An "ls -l" on a directory with about
250 entries takes about 2.5 seconds. All the metadata is on OSDs with
NVMe
Hi:
Is there a way to create a radosgw user using the RESTful API?
I'm using Jewel.
Thanks
13605702...@163.com
Hi,
I finally found a working way to replace the failed OSD. Everything looks
fine again.
Thanks again for your comments and suggestions.
Dietmar
On 01/12/2018 04:08 PM, Dietmar Rieder wrote:
> Hi,
>
> can someone, comment/confirm my planned OSD replacement procedure?
>
> It would be very