else think?
Thanks!
- Dyweni
Needing a hard reboot to recover should be treated as a bug and
investigated. I saw a similar issue (similar client kernel message)
in the 4.9.x kernels regarding CephFS, but this is RBD.
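For reference, the client-side state that is usually useful in a case like this can be captured before rebooting, roughly as follows (illustrative; the last path assumes debugfs is mounted at /sys/kernel/debug):

  uname -r
  rbd showmapped
  dmesg | tail -n 50
  cat /sys/kernel/debug/ceph/*/osdc   # in-flight requests the kernel client is still waiting on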
Thank you,
Dyweni
On 2018-12-25 22:55, Dyweni - Ceph-Users wrote:
Hi again!
Prior to rebooting the client
Dyweni
On 2018-12-25 22:38, Dyweni - Ceph-Users wrote:
Hi Everyone/Devs,
Would someone please help me troubleshoot a strange data issue
(unexpected client hang on OSD I/O Error)?
On the client, I had a process reading a large amount of data from a
mapped RBD image. I noticed tonight that it had
the Ceph execution/data paths that originally failed).
For reference:
All Ceph versions are 12.2.5.
Client kernel version is 4.9.95.
Thank you,
Dyweni
Hi,
You could be running out of memory due to the default Bluestore cache
sizes.
How many disks/OSDs in the R730xd versus the R740xd? How much memory in
each server type? How many are HDD versus SSD? Are you running
Bluestore?
OSDs in Luminous that run Bluestore allocate additional memory for the Bluestore cache on top of their baseline usage (roughly 1 GB per HDD OSD and 3 GB per SSD OSD by default).
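If that turns out to be the problem, the per-OSD cache can be reduced in ceph.conf and the OSDs restarted; an illustrative snippet (values are examples only):

  [osd]
  bluestore_cache_size_hdd = 536870912    # 512 MB instead of the 1 GB default
  bluestore_cache_size_ssd = 1073741824   # 1 GB instead of the 3 GB default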
Hi,
If you are running Ceph Luminous or later, use the Ceph Manager Daemon's
Balancer module. (http://docs.ceph.com/docs/luminous/mgr/balancer/).
Otherwise, tweak the OSD reweight values (not the OSD CRUSH weights) until
you achieve uniformity; you should be able to get the standard deviation of
OSD utilization under 1.
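For reference, enabling the balancer looks roughly like this (illustrative; crush-compat is the safer mode if pre-Luminous clients are still connecting):

  ceph mgr module enable balancer
  ceph balancer mode crush-compat
  ceph balancer on
  ceph balancer eval        # lower score = more even distribution

and manual reweighting looks roughly like:

  ceph osd reweight 12 0.95            # 12 is an example OSD id; weight is between 0 and 1
  ceph osd reweight-by-utilization     # automated alternative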
we're hitting some kind of variable size issue... maybe
overflow too?
Would appreciate any insight you could give.
Thanks!
Dyweni
logs as fast as the older versions.
Good luck!
Dyweni
Safest to just 'osd crush reweight osd.X 0' and let rebalancing finish.
Then 'osd out X' and shut down and remove the OSD drive.
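Per OSD, the sequence is roughly (X is the OSD id; illustrative, assuming a systemd deployment):

  ceph osd crush reweight osd.X 0
  # wait for rebalancing to finish and the cluster to return to HEALTH_OK
  ceph osd out X
  systemctl stop ceph-osd@X
  ceph osd purge X --yes-i-really-mean-it   # Luminous+; older releases use 'osd crush remove', 'auth del' and 'osd rm'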
On 2018-12-04 03:15, Jarek wrote:
On Mon, 03 Dec 2018 16:41:36 +0100
si...@turka.nl wrote:
Hi,
Currently I am decommissioning an old cluster.
For example, I want to
occurred at a time when no snapshots were being
created. The cluster was brought back up in a controlled manner and no
errors were discovered immediately afterward (Ceph reported healthy).
Could this have caused corruption?
Thanks,
Dyweni
On 2018-06-25 09:34, Dyweni - Ceph-Users wrote:
24: (PGQueueable::RunVis::result_type boost::apply_visitor<PGQueueable::RunVis, boost::variant<boost::intrusive_ptr<OpRequest>, PGSnapTrim, PGScrub, PGRecovery>&>(PGQueueable::RunVis&, boost::variant<boost::intrusive_ptr<OpRequest>, PGSnapTrim, PGScrub, PGRecovery>&)+0x2c) [0x1fbfb70]
25: (PGQueueable::run(OSD*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x5c) [0x1f9b6c8]
26: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x23e4) [0x1f737bc]
27: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x508) [0x2a24f2c]
28: (ShardedThreadPool::WorkThreadSharded::entry()+0x2c) [0x2a26a68]
29: (Thread::entry_wrapper()+0xf4) [0x2c2de34]
30: (Thread::_entry_func(void*)+0x18) [0x2c2dd28]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Thanks,
Dyweni
the monitor daemon
is running) be sufficient?
Thanks,
Dyweni
/ceph-12.2.2/src/rocksdb/monitoring/statistics.cc
However, if I turn off all optimizations (replace all '-O2' with '-O0')
or remove '-fno-omit-frame-pointer' (while keeping all the '-O2'), then
the compilation finishes.
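For anyone trying to reproduce, the flag changes can be tested roughly like this (illustrative; assumes a from-source cmake build, and distro builds may inject their own flags):

  cd ceph-12.2.2
  CFLAGS='-O0' CXXFLAGS='-O0' ./do_cmake.sh
  cd build && make -j2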
Thanks,
Dyweni
I moved the drive from the crashing 10.2.10 OSD node into a different
10.2.10 OSD node and everything is working fine now.
On 2018-01-10 20:42, Dyweni - Ceph-Users wrote:
Hi,
My cluster has 12.2.2 Mons and Mgrs, and 10.2.10 OSDs.
I tried adding a new 12.2.2 OSD into the mix and it crashed
#6 ... in Pipe::read_message(Message**, AuthSessionHandler*) ()
#7 0x00eaa44c in Pipe::reader() ()
#8 0x00eb2acc in Pipe::Reader::entry() ()
#9 0xb6e1a890 in start_thread () from /lib/libpthread.so.0
#10 0xb6978408 in ?? () from /lib/libc.so.6
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
> solution might require turning
> down the memory tuning options. Sage has discussed those in various places.
>
> On Sun, Sep 10, 2017 at 11:52 PM Dyweni - Ceph-Users
> <6exbab4fy...@dyweni.com> wrote:
>
>> Hi,
>>
>> Is anyone running Ceph Luminous (
is occurring in the 'msgr-worker-' thread.
My data is fine, just would like to get Ceph 12.2.0 running stably on
this node, so I can upgrade the remaining nodes and switch everything
over to BlueStore.
Thanks,
Dyweni
Hi,
Yes and no, for the actual data loss. This depends on your crush map.
If you're using the original map (which came with the installation),
then your smallest failure domain will be the host. If you have replica
size and 3 hosts and 5 OSDs per host (15 OSDs total), then losing the
After installing the required dependency 'virtualenv', this error also
occurs with -j2.
The workaround I found is to include '--without-openldap' when using
'--with-radosgw'.
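In other words, something along these lines (illustrative; assuming the autotools build that 10.2.0 still ships):

  ./autogen.sh
  ./configure --with-radosgw --without-openldap
  make -j2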
On 2016-04-29 15:55, Dyweni - Ceph-Users wrote:
Hi,
When I compile Ceph Jewel 10.2.0 using 'make -j1' I get
or higher, I do not and the build proceeds onwards...
Thanks,
Dyweni
Hi,
I noticed while compiling Ceph Jewel (10.2.0) that the build process
does not fully honor make's -j switch. In the ps output I've
attached, you will see that I've requested only 3 concurrent jobs.
Make assigned 2 jobs to ceph and 1 job to rocksdb. The rocksdb then
took 6
oup()':
/var/tmp/portage/sys-cluster/ceph-10.2.0/work/ceph-10.2.0/src/rgw/rgw_rados.h:1064:
undefined reference to `vtable for RGWZoneGroup'
/var/tmp/portage/sys-cluster/ceph-10.2.0/work/ceph-10.2.0/src/rgw/rgw_rados.h:764:
undefined reference to `vtable for RGWSystemMetaObj'
Thanks,
Dyweni
Your patch lists the command as "addfailed" but the email lists the
command as "add failed". (Note the space).
On 2016-01-14 18:46, Yan, Zheng wrote:
Here is a patch for v9.2.0. After installing the modified version of
ceph-mon, run “ceph mds add failed 1”
On Jan 15, 2016, at 08:20,
Does this support rbd images with stripe count > 1?
If yes, then this is also a solution for this problem:
http://tracker.ceph.com/issues/3837
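For context, basic rbd-nbd usage looks roughly like this ('rbd/myimage' is a placeholder pool/image name):

  rbd-nbd map rbd/myimage      # exposes the image as /dev/nbd0 (or the next free nbd device)
  mount /dev/nbd0 /mnt
  umount /mnt
  rbd-nbd unmap /dev/nbd0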
Thanks,
Dyweni
On 2016-01-14 13:27, Bill Sanders wrote:
Is there some information about rbd-nbd somewhere? If it has feature
parity with
Looks good to me.
Dyweni
On 2015-05-29 17:08, Loic Dachary wrote:
Hi,
On 28/05/2015 05:13, Dyweni - Ceph-Users wrote:
Hi Guys,
Running the install-deps.sh script on Debian Squeeze results in the
package 'cryptsetup-bin' not being found (and 'cryptsetup' not being
used).
This is due
*/\\\|/g;' \
Thought you'd like to include this into the mainline code.
(FYI, This is somewhat related to this bug:
http://tracker.ceph.com/issues/4943)
Thanks,
Dyweni
On 2015-01-04 08:21, Jiri Kanicky wrote:
More googling took me to the following post:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040279.html
Linux 3.14.1 is affected by serious Btrfs regression(s) that were fixed
in
later releases.
Unfortunately even the latest Linux can
Hi,
If it's the only thing in your pool, you could try deleting the pool
instead.
I found that to be faster in my testing; I had created 500TB when I
meant to create 500GB.
Note for the Devs: it would be nice if rbd create/resize accepted
sizes with units (e.g. MB, GB, TB, PB, etc.).
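For example (pool/image names are illustrative; note that a bare --size is interpreted as MB, which is exactly how a 500GB/500TB mix-up happens):

  ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
  ceph osd pool create mypool 128
  rbd create mypool/myimage --size 512000   # 512000 MB = 500 GB; 500 TB would be 524288000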
On 2015-01-01 08:27, Dyweni - Ceph-Users wrote:
Hi, I'm going to take a stab at this, since I've recently dealt with,
and am currently dealing with, something similar myself.
On 2014-12-31 21:59, Lindsay Mathieson wrote:
As mentioned before :) we have two OSD nodes with one 3TB OSD each.
(replica
On 2015-01-01 14:04, Lindsay Mathieson wrote:
On Thu, 1 Jan 2015 08:27:33 AM Dyweni - Ceph-Users wrote:
You might see a little improvement on the writes (since the spinners
have to work too), but the reads should see the most improvement
(since Ceph only has to read from the SSD).
Couple
Your OSDs are full. The cluster will block until space is freed up and
both OSDs leave the full state.
You have 2 OSDs, so I'm assuming you are running a replica size of 2? A
quick (but risky) method might be to reduce your replica count down to 1
to get the cluster unblocked, clean up space, then go
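The commands for that approach would be roughly (illustrative, using the default 'rbd' pool; do not leave size at 1 any longer than necessary):

  ceph osd pool set rbd size 1
  # delete unneeded images/snapshots to free space, then restore redundancy:
  ceph osd pool set rbd size 2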
:45.900680 7fb0037fe700 0
mon.a@0(leader).data_health(1) update_stats avail 86% total 36863 MB,
used 3810 MB, avail 31958 MB
---
--
Thanks,
Dyweni
explicitly:
http://ceph.com/docs/master/cephfs/createfs/
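For a Giant cluster that means something like (pool names and PG counts are only examples):

  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data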
John
On Thu, Dec 18, 2014 at 12:52 PM, Dyweni - Ceph-Users
6exbab4fy...@dyweni.com wrote:
Hi All,
Just set up the monitor for a new cluster based on Giant (0.87) and I
find
that only the 'rbd' pool was created automatically. I don't see
On 2014-12-18 11:55, John Spray wrote:
Can you point out the specific page that's out of date so that we can
update it?
Thanks,
John
On Thu, Dec 18, 2014 at 5:52 PM, Dyweni - Ceph-Users
6exbab4fy...@dyweni.com wrote:
Thanks!!
Looks like the manual installation instructions should be updated
Would you tell me, please, what is the correct method to benchmark a
single OSD?
Thanks!
Dyweni
to expose the rbd images
and then use kpartx/device-mapper to mount from there...
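Roughly, that would look like this ('rbd/myimage' is a placeholder):

  rbd map rbd/myimage                 # creates /dev/rbd0 (or similar)
  kpartx -av /dev/rbd0                # creates /dev/mapper/rbd0p1, rbd0p2, ...
  mount /dev/mapper/rbd0p1 /mnt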
Thanks,
Dyweni
if this is a misstatement.
Brad
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dyweni -
Ceph-Users
Sent: Thursday, April 24, 2014 10:08 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RBD Cloning
Hi,
Per the docs, I see
the monitor has finished restarting and
is operational again?
Thanks,
Dyweni