After your comment about the dual mds servers, I decided to just give up
trying to get the second one restarted. After eyeballing what I had on one
of the new Ryzen boxes for drive space, I decided to just dump the
filesystem. That will also make things go faster if and when I flip
everything over to
Well, I wouldn't use bcache on filestore at all. First, there are problems with
all that you have said, and second, and more importantly, you got double writes
(in FileStore, data was written to the journal and to the storage disk at the
same time), so if the journal and data disk were the same, speed was divided by two.
On 12.10.2017 20:28, Jorge Pinilla López wrote:
> Hey all!
> I have a ceph cluster with multiple HDDs and 1 really fast SSD (30GB per OSD) per
> host.
>
> I have been thinking, and all the docs say that I should give all the SSD space
> to RocksDB, so I would have the data on HDD and a 30GB partition for
The sparseness is actually preserved, but the fast-diff stats are
incorrect because zero-byte objects were being created during the
flatten operation. This should be fixed under Luminous [1] where the
flatten operation (and any writes to a clone more generally) no longer
performs a zero-byte
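As a rough way to sanity-check an affected image (just a sketch; it assumes the
object-map/fast-diff features are enabled, and the pool/image name is only a
placeholder), comparing "rbd du" before and after rebuilding the object map
should show whether only the stats were off:

$ rbd du mypool/myclone
$ rbd object-map rebuild mypool/myclone
$ rbd du mypool/myclone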
John covered everything better than I was going to, so I'll just remove
that from my reply.
If you aren't using DC SSDs and this is prod, then I wouldn't recommend
moving towards this model. However, you are correct on how to move the pool
to the SSDs from the HDDs, and based on how simple and
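For reference, the rough steps would be something like the following (a sketch,
assuming Luminous device classes and that the metadata pool is named
cephfs_metadata):

ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd pool set cephfs_metadata crush_rule replicated-ssd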
Yes -- all image creation commands (create, clone, copy, import, etc)
should accept the optional "--data-pool" argument.
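For example (pool, image and file names here are just placeholders), the same
flag should work for copy and import as well:

rbd copy --data-pool ecpool rbd/srcimage rbd/dstimage
rbd import --data-pool ecpool ./disk.img rbd/newimage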
On Thu, Oct 12, 2017 at 5:01 PM, Josy wrote:
> I would also like to create a clone of the image present in another pool.
>
> Will this command work ?
>
>
On Thu, Oct 12, 2017 at 9:34 PM, Reed Dier wrote:
> I found an older ML entry from 2015 and not much else, mostly detailing
> performance testing done to dispel poor performance numbers presented by the
> OP.
>
> I currently have the metadata pool on my slow 24 HDDs, and am
Hi,
I'm evaluating ceph (Jewel) for an application that will have a chain of
layered images, with the need to sometimes flatten from the top to limit chain
length. However, it appears that running "rbd flatten" causes loss of
sparseness in the clone. For example:
$ rbd --version
ceph version
I would also like to create a clone of the image present in another pool.
Will this command work ?
rbd clone --data-pool ecpool pool/image@snap_image ecpool/cloneimage
On 13-10-2017 00:19, Jorge Pinilla López wrote:
The rbd image has 2 kinds of data, metadata and the data itself. The metadata
gives
Thank you all!
On 13-10-2017 00:19, Jorge Pinilla López wrote:
The rbd image has 2 kinds of data, metadata and the data itself. The metadata
gives information about the rbd image and small amounts of internal
information. This one is placed on the replicated pool, and that's why
it considers that
I found an older ML entry from 2015 and not much else, mostly detailing
performance testing done to dispel poor performance numbers presented by the OP.
I currently have the metadata pool on my slow 24 HDDs, and am curious if I should
see any increased performance with CephFS by moving the
The rbd image has 2 kinds of data, metadata and the data itself. The metadata gives
information about the rbd image and small amounts of internal information.
This one is placed on the replicated pool, and that's why it considers that
the image is on the replicated pool. But the actual data (big
Yes -- the "image" will be in the replicated pool and its data blocks
will be in the specified data pool. An "rbd info" against the image
will show the data pool.
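For example, reusing the names from earlier in this thread:

$ rbd info ec_rep_pool/ectestimage1

and the output should include a "data_pool: ecpool" line.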
On Thu, Oct 12, 2017 at 2:40 PM, Josy wrote:
> Thank you for your reply.
>
> I created an erasure coded
Thank you for your reply.
I created an erasure coded pool 'ecpool' and a replicated pool 'ec_rep_pool' to
store the metadata.
And created an image as you mentioned:
rbd create --size 20G --data-pool ecpool ec_rep_pool/ectestimage1
But the image seems to be created in ec_rep_pool.
Hey all!
I have a ceph cluster with multiple HDDs and 1 really fast SSD (30GB per
OSD) per host.
I have been thinking, and all the docs say that I should give all the SSD
space to RocksDB, so I would have the data on HDD and a 30GB partition for
RocksDB.
But it came to my mind that if the OSD isn't full, maybe
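For context, the layout I'm describing would be created with something along
these lines (just a sketch; the device names are only examples):

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1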
Here is your friend.
http://docs.ceph.com/docs/luminous/rados/operations/erasure-code/#erasure-coding-with-overwrites
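In short, the relevant steps from that page boil down to something like this
(pool name and PG count are just examples):

ceph osd pool create ecpool 128 128 erasure
ceph osd pool set ecpool allow_ec_overwrites true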
On Thu, Oct 12, 2017 at 2:09 PM Jason Dillaman wrote:
> The image metadata still needs to live in a replicated data pool --
> only the data blocks can be
Hey Mark,
Thanks a lot for the info. You should really make a paper of it and post
it :)
First of all, I am sorry if I say something wrong; I am still learning
about this topic and may be speaking from total unawareness.
Second, I understood that ratios are a way of controlling priorities and
The image metadata still needs to live in a replicated data pool --
only the data blocks can be stored in an EC pool. Therefore, when
creating the image, you should provide the optional "--data-pool"
argument to specify the EC pool name.
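For example (pool and image names are placeholders):

rbd create --size 10G --data-pool ecpool rbd/myimage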
On Thu, Oct 12, 2017 at 2:06 PM, Josy
Hi,
I am trying to set up an erasure coded pool with an rbd image.
The ceph version is Luminous 12.2.1, and I understand that, since Luminous,
RBD and CephFS can store their data in an erasure coded pool without the use
of cache tiering.
I created a pool ecpool and when trying to create a rbd image,
On Thu, Oct 12, 2017 at 10:52 AM Florian Haas wrote:
> On Thu, Oct 12, 2017 at 7:22 PM, Gregory Farnum
> wrote:
> >
> >
> > On Thu, Oct 12, 2017 at 3:50 AM Florian Haas
> wrote:
> >>
> >> On Mon, Sep 11, 2017 at 8:13 PM, Andreas
On Thu, Oct 12, 2017 at 7:22 PM, Gregory Farnum wrote:
>
>
> On Thu, Oct 12, 2017 at 3:50 AM Florian Haas wrote:
>>
>> On Mon, Sep 11, 2017 at 8:13 PM, Andreas Herrmann
>> wrote:
>> > Hi,
>> >
>> > how could this happen:
>> >
>> >
On Thu, Oct 12, 2017 at 3:50 AM Florian Haas wrote:
> On Mon, Sep 11, 2017 at 8:13 PM, Andreas Herrmann
> wrote:
> > Hi,
> >
> > how could this happen:
> >
> > pgs: 197528/1524 objects degraded (12961.155%)
> >
> > I did some heavy failover tests,
On Wed, Oct 11, 2017 at 7:42 AM Reed Dier wrote:
> Just for the sake of putting this in the public forum,
>
> In theory, by placing the primary copy of the object on an SSD medium, and
> placing replica copies on HDD medium, it should still yield *some* improvement
> in
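One way to approximate that (assuming the pool already spans both SSD and HDD
OSDs; the OSD ids below are only examples) is to zero the primary affinity of
the HDD OSDs so the SSD copies are preferred as primary. Note that primary
affinity may first need to be allowed in the mon config:

ceph osd primary-affinity osd.3 0
ceph osd primary-affinity osd.4 0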
On 12/10/17 17:15, Josy wrote:
> Hello,
>
> After taking down a couple of OSDs, the dashboard is not showing the
> corresponding hostname.
Ceph-mgr is known to sometimes have issues associating services with hostnames,
e.g. http://tracker.ceph.com/issues/20887
Fixes look to be incoming.
Hello,
After taking down a couple of OSDs, the dashboard is not showing the
corresponding hostname.
It shows correctly in ceph osd tree output
--
-15 3.49280 host ceph-las1-a7-osd
21 ssd 0.87320 osd.21 up 1.0 1.0
22 ssd 0.87320
Thanks Enrico. I wrote a test case that reproduces the issue, and opened
http://tracker.ceph.com/issues/21772 to track the bug. It sounds like
this is a regression in luminous.
On 10/11/2017 06:41 PM, Enrico Kern wrote:
or this:
{
"shard_id": 22,
"entries": [
Olivier Bonvalet writes:
> On Thursday, 12 October 2017 at 09:12 +0200, Ilya Dryomov wrote:
>> It's a crash in memcpy() in skb_copy_ubufs(). It's not in ceph, but
>> ceph-induced, it looks like. I don't remember seeing anything
>> similar
>> in the context of krbd.
>>
>>
On Thu, Oct 12, 2017 at 5:02 AM, Maged Mokhtar wrote:
> On 2017-10-11 14:57, Jason Dillaman wrote:
>
> On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López
> wrote:
>
>> As far as I am able to understand, there are 2 ways of setting up iscsi for
>> ceph
>>
Hi,
On 09/10/17 16:09, Sage Weil wrote:
To put this in context, the goal here is to kill ceph-disk in mimic.
One proposal is to make it so new OSDs can *only* be deployed with LVM,
and old OSDs with the ceph-disk GPT partitions would be started via
ceph-volume support that can only start (but
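If I'm reading that right, the existing GPT/ceph-disk OSDs would presumably be
picked up by something like ceph-volume's "simple" mode; a rough sketch (the
exact subcommand names here are my assumption, not part of the proposal):

ceph-volume simple scan /var/lib/ceph/osd/ceph-0
ceph-volume simple activate --all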
Hi,
The recent FOSDEM CFP reminded me to wonder if there's likely to be a
Cephalocon in 2018? It was mentioned as a possibility when the 2017 one
was cancelled...
Regards,
Matthew
--
The Wellcome Trust Sanger Institute is operated by Genome Research
Limited, a charity registered in
CfP for the Software Defined Storage devroom at FOSDEM 2018
(Brussels, Belgium, February 4th).
FOSDEM is a free software event that offers open source communities a place to
meet, share ideas and collaborate. It is renowned for being highly developer-
oriented and brings together 8000+
On Thu, Oct 12, 2017 at 12:23 PM, Jeff Layton wrote:
> On Thu, 2017-10-12 at 09:12 +0200, Ilya Dryomov wrote:
>> On Wed, Oct 11, 2017 at 4:40 PM, Olivier Bonvalet
>> wrote:
>> > Hi,
>> >
>> > I had a "general protection fault: " with Ceph RBD kernel
On Mon, Sep 11, 2017 at 8:13 PM, Andreas Herrmann wrote:
> Hi,
>
> how could this happen:
>
> pgs: 197528/1524 objects degraded (12961.155%)
>
> I did some heavy failover tests, but a value higher than 100% looks strange
> (ceph version 12.2.0). Recovery is quite slow.
>
John,
I tried to write some data to the newly created files; it failed, just as you
said.
Thanks very much.
On Thu, Oct 12, 2017 at 6:20 PM, John Spray wrote:
> On Thu, Oct 12, 2017 at 11:12 AM, Frank Yu wrote:
> > Hi,
> > I have a ceph cluster with
On 2017-10-12 11:32, David Disseldorp wrote:
> On Wed, 11 Oct 2017 14:03:59 -0400, Jason Dillaman wrote:
>
> On Wed, Oct 11, 2017 at 1:10 PM, Samuel Soulard wrote:
> > Hmmm, If you failover the identity of the LIO configuration including PGRs
> > (I believe they are
On Thu, Oct 12, 2017 at 11:12 AM, Frank Yu wrote:
> Hi,
> I have a ceph cluster with three nodes, and I have a cephfs that uses the pools
> cephfs_data and cephfs_metadata; there's also an rbd pool named
> 'rbd-test'.
>
> # rados lspools
> .rgw.root
> default.rgw.control
>
Hi,
I have a ceph cluster with three nodes, and I have a cephfs that uses the pools
cephfs_data and cephfs_metadata; there's also an rbd pool named
'rbd-test'.
# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
cephfs_data
cephfs_metadata
default.rgw.buckets.index
On Thu, Oct 12, 2017 at 12:23 AM, Bill Sharer wrote:
> I was wondering, if I can't get the second mds back up. That offline
> backward scrub check sounds like it should be able to also salvage what
> it can of the two pools to a normal filesystem. Is there an option for
On Wed, 11 Oct 2017 14:03:59 -0400, Jason Dillaman wrote:
> On Wed, Oct 11, 2017 at 1:10 PM, Samuel Soulard
> wrote:
> > Hmmm, if you failover the identity of the LIO configuration including PGRs
> > (I believe they are files on disk), this would work, no? Using an 2
On 2017-10-11 14:57, Jason Dillaman wrote:
> On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López
> wrote:
>
>> As far as I am able to understand, there are 2 ways of setting up iscsi for ceph
>>
>> 1- using the kernel (lrbd), only available on SUSE, CentOS, fedora...
>
> The
On Thursday, 12 October 2017 at 09:12 +0200, Ilya Dryomov wrote:
> It's a crash in memcpy() in skb_copy_ubufs(). It's not in ceph, but
> ceph-induced, it looks like. I don't remember seeing anything
> similar
> in the context of krbd.
>
> This is a Xen dom0 kernel, right? What did the workload
On Wed, Oct 11, 2017 at 4:40 PM, Olivier Bonvalet wrote:
> Hi,
>
> I had a "general protection fault: " with Ceph RBD kernel client.
> Not sure how to read the call trace, is it Ceph related?
>
>
> Oct 11 16:15:11 lorunde kernel: [311418.891238] general protection fault:
>
Quoting Ashley Merrick (ash...@amerrick.co.uk):
> Hello,
>
> Setting up a new test lab, a single server with 5 disks/OSDs.
>
> Want to run an EC pool that has more shards than available OSDs; is
> it possible to force crush to re-use an OSD for another shard?
>
> I know normally this is bad practice