Trying to understand why some OSDs (6 out of 21) went down in my cluster while
running a CBT radosbench benchmark. From the logs below, is this a networking
problem between systems, or is it some kind of FileStore problem?
Looking at one crashed OSD log, I see the following crash error:
If I have an rbd image that is being used by a VM and I want to mount it
as a read-only /dev/rbd0 kernel device, is that possible?
When I try it I get:
mount: /dev/rbd0 is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/rbd0,
missing codepage
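A minimal sketch of the workaround usually suggested for this error (hedged: it assumes the image holds an ext4 filesystem whose journal is dirty because the VM still has it mounted; the pool, image, and mount point names are placeholders):
# map the image read-only so the kernel client never writes to it
rbd map --read-only rbd/myimage
# a plain "-o ro" mount of ext4 still tries to replay the journal, which
# fails on a write-protected device; "noload" skips journal replay
mount -o ro,noload /dev/rbd0 /mnt/rbd-ro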
> -Original Message-
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: Thursday, June 30, 2016 6:15 PM
> To: Deneau, Tom <tom.den...@amd.com>
> Cc: ceph-users <ceph-us...@ceph.com>
> Subject: Re: [ceph-users] rbd cache command thru admin socket
>
I was following the instructions in
https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/
because I wanted to look at some of the rbd cache state and possibly flush and
invalidate it.
My ceph.conf has
[client]
rbd default features = 1
rbd
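For reference, a hedged sketch of what I was trying, following the blog post above (the socket filename is a placeholder -- the real one under /var/run/ceph includes the pid, per the admin socket setting below):
# ceph.conf on the client, so librbd registers an admin socket
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
# then, against whichever socket shows up in /var/run/ceph:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok help
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump
# "help" also lists the per-image "rbd cache flush" / "rbd cache invalidate"
# entries, if the running librbd registers them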
Ah, that makes sense. The places where it was not adding the "default"
prefix were all pre-jewel.
-- Tom
> -Original Message-
> From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
> Sent: Friday, June 10, 2016 2:36 PM
> To: Deneau, Tom <tom.den...@amd.com>
When I start radosgw, I create the pool .rgw.buckets manually to control
whether it is replicated or erasure coded and I let the other pools be
created automatically.
However, I have noticed that sometimes the pools get created with the "default"
prefix, e.g.:
rados lspools
.rgw.root
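One hedged way to check which pool names the gateway will actually use (as far as I can tell, it is the Jewel-era default zone that adds the "default." prefix):
radosgw-admin zone get
# the "pools" section of the output shows the exact data/index/root pool names
# radosgw expects, so those can be pre-created with the desired replicated or
# erasure-coded settings before starting the gateway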
> Sent: Wednesday, April 27, 2016 2:59 PM
> To: Deneau, Tom <tom.den...@amd.com>
> Cc: ceph-users <ceph-us...@ceph.com>
> Subject: Re: [ceph-users] mount -t ceph
>
> On Wed, Apr 27, 2016 at 2:55 PM, Deneau, Tom <tom.den...@amd.com> wrote:
> > What kernel versions are
What kernel versions are required to be able to use CephFS thru mount -t ceph?
-- Tom Deneau
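For context, the kind of mount I mean -- a minimal kernel-client sketch (monitor address, credentials, and mount point are placeholders):
# requires the cephfs kernel module (CONFIG_CEPH_FS) on the client
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret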
> -Original Message-
> From: Ken Dreyer [mailto:kdre...@redhat.com]
> Sent: Tuesday, March 08, 2016 10:24 PM
> To: Shinobu Kinjo
> Cc: Deneau, Tom; ceph-users
> Subject: Re: [ceph-users] yum install ceph on RHEL 7.2
>
> On Tue, Mar 8, 2016 at 4:11 PM, Shinobu Kin
Yes, that is what lsb_release is showing...
> -Original Message-
> From: Shinobu Kinjo [mailto:shinobu...@gmail.com]
> Sent: Tuesday, March 08, 2016 5:01 PM
> To: Deneau, Tom
> Cc: ceph-users
> Subject: Re: [ceph-users] yum install ceph on RHEL 7.2
>
> On Wed
Just checking...
On vanilla RHEL 7.2 (x64), should I be able to yum install ceph without adding
the EPEL repository?
(looks like the version being installed is 0.94.6)
-- Tom Deneau, AMD
The commands shown below had successfully mapped rbd images in the past on
kernel version 4.1.
Now I need to map one on a system running the 3.13 kernel.
Ceph version is 9.2.0. Rados bench operations work with no problem.
I get the same error message whether I use format 1 or format 2 or
dev on the client).
-- Tom
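In case it is relevant, a hedged sketch of what I understand the usual advice for old kernel clients to be (image name and size are placeholders; the 3.13 rbd.ko does not understand the newer image features, so the image has to be restricted to layering, or created as format 1):
# client-side ceph.conf so newly created images only get the layering feature
[client]
rbd default features = 1
# a plain format 2 image should then be mappable by the old kernel
rbd create --size 1024 --image-format 2 testimg
rbd map testimg
# if the map still fails, "dmesg | tail" on the client usually names the
# feature (or protocol mismatch) the kernel rejected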
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: Friday, January 29, 2016 4:53 PM
> To: Deneau, Tom
> Cc: ceph-users; c...@lists.ceph.com
> Subject: Re: [ceph-users] rbd kernel mapping on 3.13
>
> On Fr
If using s3cmd with radosgw and using s3cmd's --disable-multipart option, is
there any limit to the size of the object that can be stored thru radosgw?
Also, is there a recommendation for multipart chunk size for radosgw?
-- Tom
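For concreteness, the two s3cmd variants I am comparing (hedged; bucket, file, and chunk size are placeholders):
# single-part upload, no multipart at all
s3cmd put --disable-multipart bigfile s3://mybucket/
# multipart upload with an explicit chunk size in MB
s3cmd put --multipart-chunk-size-mb=64 bigfile s3://mybucket/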
I have the following 4 pools:
pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0
pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256
I see that I can create a crush rule that only selects osds
from a certain node by this:
ceph osd crush rule create-simple byosdn1 myhostname osd
and if I then create a replicated pool that uses that rule,
it does indeed select osds only from that node.
I would like to do a similar thing with
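For reference, the replicated case that works, as a hedged sketch (pool name and pg counts are placeholders):
ceph osd crush rule create-simple byosdn1 myhostname osd
ceph osd pool create onenodepool 128 128 replicated byosdn1
# for an erasure-coded pool the equivalent knob is, I believe, the
# ruleset-failure-domain setting of the erasure-code profile rather than a
# create-simple rule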
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: Monday, September 14, 2015 5:32 PM
> To: Deneau, Tom
> Cc: ceph-users
> Subject: Re: [ceph-users] rados bench seq throttling
>
> On Thu, Sep 10, 2015 at 1:02 PM, Deneau, Tom &l
Running 9.0.3 rados bench on a 9.0.3 cluster...
In the following experiments this cluster is only 2 osd nodes, 6 osds each
and a separate mon node (and a separate client running rados bench).
I have two pools populated with 4M objects. The pools are replicated x2
with identical parameters. The
When measuring read bandwidth using rados bench, I've been doing the
following:
* write some objects using rados bench write --no-cleanup
* drop caches on the osd nodes
* use rados bench seq to read.
I've noticed that on the first rados bench seq immediately following the rados
bench
parameter that might be throttling the 2 node configuration?
-- Tom
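The procedure above as concrete commands, for completeness (hedged; pool name, run length, and thread count are placeholders):
rados -p mypool bench 60 write --no-cleanup -t 16
# on each OSD node, drop the page cache so the reads hit the disks
sync; echo 3 > /proc/sys/vm/drop_caches
# then read the same objects back sequentially
rados -p mypool bench 60 seq -t 16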
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Wednesday, September 02, 2015 7:29 PM
> To: ceph-users
> Cc: Deneau, Tom
> Subject: Re: [ceph-users] osds on 2 nodes vs. on one
> From: Deneau, Tom
> Sent: Thursday, September 03, 2015 10:39 AM
> To: 'Christian Balzer'; ceph-users
> Subject: RE: [ceph-users] osds on 2 nodes vs. on one node
>
> Rewording to remove confusion...
>
> Config 1: set up a cluster with 1 node with 6 OSDs
> Config 2: identical
In a small cluster I have 2 OSD nodes with identical hardware, each with 6 osds.
* Configuration 1: I shut down the osds on one node so I am using 6 OSDs on a
single node
* Configuration 2: I shut down 3 osds on each node so now I have 6 total OSDs
but 3 on each node.
I measure read
> Sent: Saturday, August 29, 2015 5:27 PM
> To: Brad Hubbard
> Cc: Deneau, Tom; ceph-users
> Subject: Re: [ceph-users] a couple of radosgw questions
>
> I'm not the OP, but in my particular case, gc is proceeding normally
> (since 94.2, i think) -- i just have millions of
A couple of questions on the radosgw...
1. I noticed when I use s3cmd to put a 10M object into a bucket in the rados
object gateway,
I get the following objects created in .rgw.buckets:
0.5M
4M
4M
1.5M
I assume the 4M breakdown is controlled by rgw obj stripe size.
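For reference, a hedged sketch of where that 4M boundary appears to come from (the section name follows the usual gateway setup; the value shown is the 4 MB default, in bytes):
# listing the backing objects is what shows the 0.5M/4M/4M/1.5M breakdown
rados -p .rgw.buckets ls
# ceph.conf on the gateway host, if a different stripe size is wanted
[client.radosgw.gateway]
rgw obj stripe size = 4194304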
-Original Message-
From: Dałek, Piotr [mailto:piotr.da...@ts.fujitsu.com]
Sent: Wednesday, August 26, 2015 2:02 AM
To: Sage Weil; Deneau, Tom
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
Subject: RE: rados bench object not correct errors on v9.0.3
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Monday, August 24, 2015 12:45 PM
To: ceph-annou...@ceph.com; ceph-de...@vger.kernel.org; ceph-us...@ceph.com;
ceph-maintain...@ceph.com
Subject: v9.0.3
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Tuesday, August 25, 2015 12:43 PM
To: Deneau, Tom
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com;
piotr.da...@ts.fujitsu.com
Subject: Re: rados bench object not correct errors on v9.0.3
On Tue, 25 Aug 2015
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Deneau, Tom
Sent: Tuesday, August 25, 2015 1:24 PM
To: Sage Weil
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com;
piotr.da...@ts.fujitsu.com
Subject: RE: rados
If I run rados load-gen with the following parameters:
--num-objects 50
--max-ops 16
--min-object-size 4M
--max-object-size 4M
--min-op-len 4M
--max-op-len 4M
--percent 100
--target-throughput 2000
So every object is 4M in size and all the ops are reads of the entire 4M.
I
Ah, I see that --max-backlog must be expressed in bytes/sec,
in spite of what the --help message says.
-- Tom
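A hedged sketch combining the options above with --max-backlog (pool name and the backlog value are placeholders; per the note above, --max-backlog is given in bytes/sec despite what --help says):
rados -p mypool load-gen --num-objects 50 --max-ops 16 \
      --min-object-size 4M --max-object-size 4M \
      --min-op-len 4M --max-op-len 4M --percent 100 \
      --target-throughput 2000 --max-backlog 104857600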
-Original Message-
From: Deneau, Tom
Sent: Wednesday, July 22, 2015 5:09 PM
To: 'ceph-users@lists.ceph.com'
Subject: load-gen throughput numbers
If I run rados load-gen
False alarm, things seem to be fine now.
-- Tom
-Original Message-
From: Deneau, Tom
Sent: Wednesday, July 15, 2015 1:11 PM
To: ceph-users@lists.ceph.com
Subject: Any workaround for ImportError: No module named ceph_argparse?
I just installed 9.0.2 on Trusty using ceph-deploy
I just installed 9.0.2 on Trusty using ceph-deploy install --testing and I am
hitting
the ImportError: No module named ceph_argparse issue.
What is the best way to get around this issue and still run a version that is
compatible with other (non-Ubuntu) nodes in the cluster that are running
Sent: Monday, July 13, 2015 10:19 PM
To: Deneau, Tom; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] slow requests going up and down
Does the ceph health detail show anything about stale or unclean PGs, or
are you just getting the blocked ops messages?
On 7/13/15, 5:38 PM, Deneau
I have a cluster where over the weekend something happened, and successive calls
to ceph health detail show output like that below.
What does it mean when the number of blocked requests goes up and down like
this?
Some clients are still running successfully.
-- Tom Deneau, AMD
HEALTH_WARN 20
What is the correct way to make radosgw create its pools as erasure coded pools?
-- Tom Deneau, AMD
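The approach I have been trying is to pre-create the data pool as erasure coded before starting the gateway -- a hedged sketch (profile name, pg counts, and k/m values are placeholders; on Jewel the pool name would be default.rgw.buckets.data rather than .rgw.buckets):
ceph osd erasure-code-profile set rgwec k=2 m=1 ruleset-failure-domain=host
ceph osd pool create .rgw.buckets 128 128 erasure rgwec
# radosgw then uses the existing pool instead of auto-creating a replicated one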
I am experimenting with different external journal partitions as raw partitions
(no file system), using
ceph-deploy osd prepare foo:/mount-point-for-data-partition:journal-partition
followed by
ceph-deploy osd activate (same arguments).
When the specified journal-partition is on an SSD drive
If my cluster is quiet and on one node I want to switch the location of the
journal from
the default location to a file on an SSD drive (or vice versa), what is the
quickest way to do that? Can I make a soft link to the new location and
do it without restarting the OSDs?
-- Tom Deneau, AMD
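The sequence I have seen described elsewhere does involve stopping each OSD -- a hedged sketch (the OSD id and the new journal path are placeholders):
service ceph stop osd.0            # or: systemctl stop ceph-osd@0
ceph-osd -i 0 --flush-journal      # drain the old journal into the data store
ln -sf /ssd/ceph-osd0-journal /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal          # initialize the journal at the new location
service ceph start osd.0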
Referencing this old thread below, I am wondering what the proper way is
to install, say, new versions of ceph and start up the daemons while keeping
all the data on the OSD drives.
I had been using ceph-deploy new which I guess creates a new cluster fsid.
Normally for my testing I had been starting with
I've noticed when I use large object sizes like 100M with rados bench write, I
get
rados -p data2 bench 60 write --no-cleanup -b 100M
Maintaining 16 concurrent writes of 104857600 bytes for up to 60 seconds or 0
objects
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
Ah, I see there is an osd parameter for this
osd max write size
Description: The maximum size of a write in megabytes.
Default: 90
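So presumably the 100M bench writes need that limit raised -- a hedged sketch of the ceph.conf change (value in MB, as the description says; 128 is just an example):
[osd]
osd max write size = 128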
-Original Message-
From: Deneau, Tom
Sent: Wednesday, April 08, 2015 3:57 PM
To: 'ceph-users@lists.ceph.com'
Subject: object size
Say I have a single node cluster with 5 disks.
And using dd iflag=direct on that node, I can see disk read bandwidth at 160
MB/s.
I populate a pool with 4MB objects.
And then on that same single node, I run
$ drop-caches using /proc/sys/vm/drop_caches
$ rados -p mypool bench nn seq -t 1
A couple of client-monitor questions:
1) When a client contacts a monitor to get the cluster map, how does it
decide which monitor to try to contact?
2) Having gotten the cluster map, assuming a client wants to do multiple reads
and writes,
does the client have to re-contact the monitor
Robert --
We are still having trouble with this.
Can you share your [client.radosgw.gateway] section of ceph.conf, and were
there any other special things to be aware of?
-- Tom
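For comparison, a hedged sketch of the kind of section I mean, following the standard civetweb-style setup (host, paths, and port are placeholders):
[client.radosgw.gateway]
host = gateway-host
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw frontends = "civetweb port=7480"
log file = /var/log/ceph/client.radosgw.gateway.log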
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On
I need to set up a cluster where the rados client (for running rados
bench) may be on a different architecture and hence running a different
ceph version from the osd/mon nodes. Is there a list of which ceph
versions work together for a situation like this?
-- Tom
Is it possible to run an erasure coded pool using default k=2, m=2 profile on a
single node?
(this is just for functionality testing). The single node has 3 OSDs.
Replicated pools run fine.
ceph.conf does contain:
osd crush chooseleaf type = 0
-- Tom Deneau
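For completeness, a hedged sketch of the single-node setup I am aiming for (profile and pool names are placeholders; with a failure domain of osd, each of the k+m chunks needs its own OSD, so k=2,m=2 would need at least 4 OSDs, while k=2,m=1 fits on 3):
ceph osd erasure-code-profile set ec21 k=2 m=1 ruleset-failure-domain=osd
# (newer releases call this setting crush-failure-domain)
ceph osd pool create ecpool 64 64 erasure ec21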