I am running a CephFS cluster (Jewel 10.2.10) with an EC + cache pool. I plan on
upgrading to Luminous around the end of December and wanted to know whether this is
fine with regard to the issues around 12.2.9. It should be fine since 12.2.10 has
been released, I guess?
Should be fine.
k
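One thing that may help before you start: double-check what each node will actually run, so a stray 12.2.9 build doesn't sneak into the upgrade. A rough sketch only; the exact steps depend on your distro and how you restart daemons:

    # confirm the installed package on each node is the release you expect
    ceph --version

    # ask the running OSD daemons what version they report (from an admin node)
    ceph tell osd.* version

    # optionally avoid rebalancing while daemons restart during the upgrade
    ceph osd set noout
    # ... upgrade packages and restart daemons ...
    ceph osd unset noout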
Hi All,
I would like to perform a proof of concept on changing a running Ceph
cluster which is using a non-default erasure code plugin back to the default
jerasure plugin on its default.rgw.buckets.data pool. I could not find
documentation on how to achieve this with minimal downtime. I know changing
erasure
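For reference, the usual starting point for a change like this is to define a new jerasure profile and create a fresh pool with it, then migrate the data, since as far as I know a pool's erasure-code profile cannot be changed in place. A rough sketch only; the profile name, k/m values and PG count below are made up:

    # define a jerasure profile (name and k/m values are just examples)
    ceph osd erasure-code-profile set jerasure-k4m2 \
        plugin=jerasure k=4 m=2 crush-failure-domain=host

    # create a new EC pool backed by that profile
    ceph osd pool create default.rgw.buckets.data.new 128 128 erasure jerasure-k4m2

Migrating the existing RGW objects into the new pool is the part that needs the downtime planning.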
The slides are now posted:
https://ceph.com/cephdays/ceph-day-berlin/
--
Mike Perez (thingee)
On Thu, Dec 6, 2018 at 8:49 AM Mike Perez wrote:
>
> Hi Serkan,
>
> I'm currently working on collecting the slides to have them posted to
> the Ceph Day Berlin page as Lenz mentioned they would show up
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
On Thu, Dec 6, 2018 at 8:11 PM Leon Robinson wrote:
>
> The most important thing to remember about CRUSH is that the H stands for
> hashing.
>
> If you hash the same object you're going to get the same result.
>
> e.g. cat /etc/fstab
Well, it looks like you have different data in the MDSMap across your
monitors. That's not good on its face, but maybe there are extenuating
circumstances. Do you actually use CephFS, or just RBD/RGW? What's the
full output of "ceph -s"?
-Greg
On Thu, Dec 6, 2018 at 1:39 PM Marco Aroldi wrote:
>
Sorry about this, I hate to "bump" a thread, but...
Has anyone faced this situation?
Is there a procedure to follow?
Thanks
Marco
On Thu, 8 Nov 2018 at 10:54, Marco Aroldi wrote:
> Hello,
> Since the upgrade from Jewel to Luminous 12.2.8, the logs report some
> errors related to "s
On 7/12/18 4:27 AM, Florian Haas wrote:
> On 05/12/2018 23:08, Mark Kirkwood wrote:
>> Hi, another question relating to multi tenanted RGW.
>>
>> Let's do the working case 1st. For a user that still uses the global
>> namespace, if I set a bucket as world readable (header
>> "X-Container-Read: .r:*
Hi Serkan,
I'm currently working on collecting the slides to have them posted to
the Ceph Day Berlin page, as Lenz mentioned they would show up there. I will
post to the mailing list/Twitter once the slides are available. Thanks!
On Fri, Nov 16, 2018 at 2:30 AM Serkan Çoban wrote:
>
> Hi,
>
> Does anyone
On 05/12/2018 23:08, Mark Kirkwood wrote:
> Hi, another question relating to multi tenanted RGW.
>
> Let's do the working case 1st. For a user that still uses the global
> namespace, if I set a bucket as world readable (header
> "X-Container-Read: .r:*") then I can fetch objects from the bucket vi
It has been mounted many times after, but was never mounted before; the VMs were
created after the change was made.
I will upgrade the kernel and re-test.
Thanks for your help
On Thu, 6 Dec 2018 at 6:24 PM, Ilya Dryomov wrote:
> On Thu, Dec 6, 2018 at 11:15 AM Ashley Merrick
> wrote:
> >
> > That is corr
Hi,
I've been benchmarking my Luminous test cluster. The S3 user has deleted
all objects and buckets, and yet the RGW data pool is still using 7TiB of data:
default.rgw.buckets.data    11    7.16TiB    3.27    212TiB    1975644
There are no buckets left (radosgw-admin bucket list returns []), and
the
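One thing worth ruling out in this situation is RGW garbage collection, since space from deleted objects is only reclaimed after GC has processed them. Whether that is the cause here is only a guess, but checking is cheap:

    # list objects still waiting for garbage collection
    radosgw-admin gc list --include-all

    # run a GC pass now instead of waiting for the normal schedule
    radosgw-admin gc process

    # then see whether the data pool usage actually drops
    ceph df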
I tested the dirty_ratio and dirty_background_ratio settings on the Ceph storage
server side.
It didn't work.
The problem reproduced just now.
Some processes stay in D state for over 10 minutes.
These OSD requests take several minutes.
Here are some logs:
[Thu Dec 6 17:59:18 2018] I
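For anyone following along, this is roughly how those two knobs are inspected and changed; the values are only examples, not a recommendation:

    # current values
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # example: start background writeback earlier and cap dirty pages lower
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10

    # persist across reboots (file name is arbitrary)
    printf 'vm.dirty_background_ratio = 5\nvm.dirty_ratio = 10\n' > /etc/sysctl.d/90-dirty.conf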
On Thu, Dec 6, 2018 at 11:15 AM Ashley Merrick wrote:
>
> That is correct, but that command was run weeks ago.
>
> And the RBD connected fine on 2.9 via the kernel 4.12 so I’m really lost to
> why suddenly it’s now blocking a connection it originally allowed through
> (even if by mistake)
When
That is correct, but that command was run weeks ago.
And the RBD connected fine on 2.9 via kernel 4.12, so I'm really lost as to
why it's suddenly blocking a connection it originally allowed through
(even if by mistake).
Which kernel do I need to run to support luminous level?
,Ash
On Thu, 6
On Thu, Dec 6, 2018 at 10:58 AM Ashley Merrick wrote:
>
> That command returns luminous.
This is the issue.
My guess is someone ran "ceph osd set-require-min-compat-client
luminous", making it so that only luminous aware clients are allowed to
connect to the cluster. Kernel 4.12 doesn't support
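If it helps narrow this down, the cluster's current requirement and the feature level of the clients actually connected can be checked like this:

    # what the cluster currently requires of connecting clients
    ceph osd get-require-min-compat-client

    # the same information as recorded in the osdmap
    ceph osd dump | grep min_compat_client

    # feature/release level reported by connected clients and daemons
    ceph features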
The most important thing to remember about CRUSH is that the H stands for
hashing.
If you hash the same object you're going to get the same result.
e.g. the output of cat /etc/fstab | md5sum is always the same, unless you change the
file contents.
CRUSH uses the number of osds and the object and the
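You can see that determinism directly by asking the cluster where it would place a given object name; the pool and object names below are just examples:

    # compute the placement for an object called "myobject" in pool "rbd"
    ceph osd map rbd myobject

    # running it again gives the same PG and OSD set, until the map itself changes
    ceph osd map rbd myobject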
That command returns luminous.
I understand where you're coming from; it's just weird, as I'm running the exact same
kernel before and after the packages were upgraded, and the only updates available
were the Ceph-related ones.
I haven’t tried rolling back to .9 as I moved to rbd-nbd while opening this
thread, but I
I did not see error messages like timeouts or broken connections when scping or
rsyncing; the transfer speed just slowed down to zero.
At least the connection between our scp/rsync server and client is OK.
Your problem may be on the server side (the scp/rsync server process) or on the client
side, otherwise
On Thu, Dec 6, 2018 at 4:22 AM Ashley Merrick wrote:
>
> Hello,
>
> As mentioned earlier the cluster is separately running on the latest Mimic.
>
> Due to 14.04 only supporting up to Luminous I was running the 12.2.9 version
> of ceph-common for the rbd binary.
>
> This is what was upgraded when I
AFAIK it is not random; where your objects are stored is calculated, by some
algorithm that probably takes into account how many OSDs you have and their sizes.
How could they be placed randomly? You would never be able to find them
again, because there is no such thing as a 'file allocation ta
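That calculation can even be reproduced offline against the cluster's CRUSH map, for example (file names are made up):

    # grab the CRUSH map and decompile it if you want to read it
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # simulate placements: map a few sample inputs to OSDs with 3 replicas
    crushtool -i crushmap.bin --test --show-mappings --num-rep 3 --min-x 0 --max-x 4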