[ceph-users] Re: CephFS as Offline Storage

2024-05-21 Thread Marc
> It's all non-corporate data, I'm just trying to cut back on wattage > (removes around 450W of the 2.4 kW) by powering down backup servers that 450W for one server seems quite hefty. Under full load? You can also check your cpu power states and frequency, that also cuts some power. > > So
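For example, something along these lines with cpupower (from the kernel-tools / linux-tools package; governor names depend on the driver, so treat this as a sketch):

  cpupower frequency-info                  # show driver, governors and current policy
  cpupower idle-info                       # show which C-states are enabled
  cpupower frequency-set -g powersave      # switch to the powersave governor
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # verify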

[ceph-users] Re: CephFS as Offline Storage

2024-05-21 Thread Marc
> > I think it is his lab so maybe it is a test setup for production. > > Home production? A home setup to test on, before he applies changes to his production Saluti  ;) ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email

[ceph-users] Re: CephFS as Offline Storage

2024-05-21 Thread Marc
I think it is his lab so maybe it is a test setup for production. I don't think it matters too much with scrubbing; it is not like it is related to how long you were offline. It will scrub just as much being 1 month offline as being 6 months offline. > > If you have a single node arguably ZFS

[ceph-users] dkim on this mailing list

2024-05-21 Thread Marc
Just to confirm whether I am messing up my mail server configs: should all messages from this mailing list currently generate a DKIM pass status? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Cephfs over internet

2024-05-20 Thread Marc
> Hi all, > Due to so many reasons (political, heating problems, lack of space > and so on) we have to > plan for our ceph cluster to be hosted externally. > The planned version to set up is Reef. > Reading up on the documentation we found that it was possible to run in > secure mode. > > Our ceph.conf file
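As a rough sketch of what the secure (msgr2 on-wire encryption) settings look like in ceph.conf; the addresses are placeholders, and clients that can only do crc/msgr1 will be refused:

  [global]
      ms_cluster_mode = secure
      ms_service_mode = secure
      ms_client_mode = secure
      ms_mon_cluster_mode = secure
      ms_mon_service_mode = secure
      ms_mon_client_mode = secure
      mon_host = [v2:203.0.113.10:3300],[v2:203.0.113.11:3300],[v2:203.0.113.12:3300]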

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-05-06 Thread Marc
> Hello! Any news? > Yes, it will be around 18° today, Israel was heckled at EU song contest .. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Status of IPv4 / IPv6 dual stack?

2024-04-23 Thread Marc
> I have removed dual-stack-mode-related information from the documentation > on the assumption that dual-stack mode was planned but never fully > implemented. > > See https://tracker.ceph.com/issues/65631. > > See https://github.com/ceph/ceph/pull/57051. > > Hat-tip to Dan van der Ster, who

[ceph-users] Re: Why CEPH is better than other storage solutions?

2024-04-21 Thread Marc
> I know these high level texts about > - scalability, > - flexibility, > - distributed, > - cost-effectiveness If you are careful not to overestimate the performance, then you are ok. > > Why not something from robin.io or purestorage, netapp, dell/EMC. From > opensource longhorn or openEBS. >

[ceph-users] Re: Client kernel crashes on cephfs access

2024-04-08 Thread Marc
> > I would like to ask for help regarding client kernel crashes that happen > on cephfs access. We have been struggling with this for over a month now > with over 100 crashes on 7 hosts during that time. > > Our cluster runs version 18.2.1. Our clients run CentOS Stream. > > On CentOS Stream 9

[ceph-users] Client kernel crashes on cephfs access

2024-04-08 Thread Marc Ruhmann
lds/linux.git/commit/?id=07045648c07c5632e0dfd5ce084d3cd0cec0258a The first one adds changes that look related. Has anybody experienced this as well, or does anyone know something about this? Thanks and best regards, Marc smime.p7s Description: Cryptographic S/MIME signature ___ ceph-us

[ceph-users] Re: Ha proxy and S3

2024-03-27 Thread Marc
> > But is it good practice to use cephadm to deploy the HAProxy, or is it > better to deploy it manually on another server (which does only that)? > Afaik cephadm's only viable option is podman. As I understand it, podman does nothing to manage tasks so that they can move to other hosts automatically.

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-26 Thread Marc
is that the RBD Image needs to have a partition entry > created for it - that might be "obvious" to some, but my ongoing belief > is that most "obvious" things aren't, so it's better to be explicit about > such things. > > Are you absolutely sure about this? I think you are missing something

[ceph-users] Re: Best practice in 2024 for simple RGW failover

2024-03-26 Thread Marc
> > The requirements are actually not high: 1. there should be a generally > known address for access. 2. it should be possible to reboot or shut down a > server without the RGW connections being down the entire time. A downtime > of a few seconds is OK. > > Constant load balancing would be
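For those requirements a single VIP with keepalived in front of two RGWs is usually enough; a minimal sketch (interface, router id and the VIP are placeholders, the second host runs state BACKUP with a lower priority):

  vrrp_instance RGW_VIP {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      advert_int 1
      virtual_ipaddress {
          203.0.113.50/24
      }
  }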

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2024-03-25 Thread Marc
> "complexity, OMG!!!111!!!" is not enough of a statement. You have to explain > what complexity you gain and what complexity you reduce. > Installing SeaweedFS consists of the following: `cd seaweedfs/weed && make > install` > This is the type of problem that Ceph is trying to solve, and starting

[ceph-users] el7 + nautilus rbd snapshot map + lvs mount crash

2024-03-24 Thread Marc
Looks like this procedure crashes the ceph node. Tried this now for the 2nd time after updating, and it crashed again. el7 + nautilus -> rbd snapshot map -> lvs mount -> crash (the lvs do not even have duplicate names) ___ ceph-users mailing list -- ceph-users@ceph.io

[ceph-users] Re: Ceph-storage slack access

2024-03-07 Thread Marc
work to keep alive!). > -Greg > > On Wed, Mar 6, 2024 at 9:07 AM Marc wrote: > > > > Is it possible to access this also with xmpp? > > > > > > > > At the very bottom of this page is a link > > > https://ceph.io/en/community/conne

[ceph-users] Running dedicated RGWs for async tasks

2024-03-07 Thread Marc Singer
to set on the exposed RGWs and which on the Async Task RGWs? Thanks for your input. Marc ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Ceph-storage slack access

2024-03-06 Thread Marc
Is it possible to access this also with xmpp? > > At the very bottom of this page is a link > https://ceph.io/en/community/connect/ > > Respectfully, > > *Wes Dillingham* > w...@wesdillingham.com > LinkedIn > > > On Wed, Mar 6, 2024 at 11:45 AM

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Marc
> > Fast write enabled would mean that the primary OSD sends #size copies to the > entire active set (including itself) in parallel and sends an ACK to the > client as soon as min_size ACKs have been received from the peers (including > itself). In this way, one can tolerate (size-min_size)

[ceph-users] HA service for RGW and dnsmasq

2024-02-16 Thread Jean-Marc FONTANA
Hello everyone, We operate 2 clusters as S3 external storage for owncloud. Both were installed with ceph-deploy (Nautilus), then converted to cephadm and updated up to Reef. Each cluster has one RGW host, managing a dnsmasq pack for DNS delegation. We feel now we should create an HA endpoint

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-03 Thread Marc
I have a script that checks on each node which vm's are active and then makes a snapshot of their rbd's. It first issues a command to the vm to freeze the fs if the vm supports it. > > Am I just off base here or missing something obvious? > > Thanks > > > > > On
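Roughly like this, as a sketch assuming libvirt VMs with the qemu guest agent and one RBD per VM named after the domain in the rbd pool (adjust pool and image names to your setup):

  #!/bin/bash
  SNAP=backup-$(date +%Y%m%d)
  for vm in $(virsh list --name); do
      # freeze the guest fs if the guest agent is available, otherwise snapshot anyway
      virsh domfsfreeze "$vm" 2>/dev/null && frozen=1 || frozen=0
      rbd snap create "rbd/${vm}@${SNAP}"
      [ "$frozen" = 1 ] && virsh domfsthaw "$vm"
  done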

[ceph-users] Re: RGW crashes when rgw_enable_ops_log is enabled

2024-01-30 Thread Marc Singer
Hi The issue is open: https://tracker.ceph.com/issues/64244 If you could take a look or let me know what are the next steps I would be super grateful. In the meantime I will try to increase the read throughput. Thanks Marc On 1/26/24 15:23, Matt Benjamin wrote: Hi Marc, 1. if you can

[ceph-users] Re: RGW crashes when rgw_enable_ops_log is enabled

2024-01-26 Thread Marc Singer
Hi Matt Thanks for your answer. Should I open a bug report then? How would I be able to read more from it? Have multiple threads access it and read from it simultaneously? Marc On 1/25/24 20:25, Matt Benjamin wrote: Hi Marc, No, the only thing you need to do with the Unix socket

[ceph-users] podman / docker issues

2024-01-25 Thread Marc
More and more I am annoyed by the 'dumb' design decisions of redhat. Just now I have an issue on an 'air-gapped' vm where I am unable to start a docker/podman container, because it tries to contact the repository to update the image and, instead of using the on-disk image, it just fails. (Not to

[ceph-users] Re: RGW crashes when rgw_enable_ops_log is enabled

2024-01-25 Thread Marc Singer
j op status=0   -482> 2024-01-25T14:54:31.680+ 7f5185bc8b00  2 req 2568229052387020224 0.092001401s s3:put_obj http status=200   -481> 2024-01-25T14:54:31.680+ 7f5185bc8b00  1 == req done req=0x7f517ffca720 op status=0 http_status=200 latency=0.092001401s == Thanks for your h

[ceph-users] RGW crashes when rgw_enable_ops_log is enabled

2024-01-25 Thread Marc Singer
b00 / safe_timer   7f2472cadb00 / radosgw   ...   log_file /var/lib/ceph/crash/2024-01-25T13:10:13.909546Z_01ee6e6a-e946-4006-9d32-e17ef2f9df74/log --- end dump of recent events --- reraise_fatal: default handler for signal 25 didn't terminate the process? Thank you for your help. M

[ceph-users] rbd map snapshot, mount lv, node crash

2024-01-19 Thread Marc
Am I doing something weird when I do on a ceph node (nautilus, el7): rbd snap ls vps-test -p rbd rbd map vps-test@vps-test.snap1 -p rbd mount -o ro /dev/mapper/VGnew-LVnew /mnt/disk <--- reset/reboot ceph node ___ ceph-users mailing list --
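For reference, a read-only variant of the same procedure (assuming the LV in the snapshot is VGnew/LVnew on XFS; use noload instead of norecovery for ext4). Whether this avoids the reset on el7/nautilus I cannot say:

  rbd map --read-only rbd/vps-test@vps-test.snap1
  vgscan
  vgchange -ay VGnew
  mount -o ro,norecovery /dev/mapper/VGnew-LVnew /mnt/disk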

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2024-01-17 Thread Marc
I have compiled nautilus for el9 and am going to test adding an el9 osd node to the existing el7 cluster. If that is ok, I will upgrade all nodes first to el9. > -Original Message- > From: Szabo, Istvan (Agoda) > Sent: Wednesday, 17 January 2024 08:09 > To: balli...@45drives.com; Eugen

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Marc
> > > >> It's been claimed to me that almost nobody uses podman in > >> production, but I have no empirical data. As opposed to docker or to having no containers at all? > > I even converted clusters from Docker to podman while they stayed > > online thanks to "ceph orch redeploy". > > Does

[ceph-users] Re: Disable signature url in ceph rgw

2023-12-13 Thread Marc Singer
on":{ "IpAddress":{ "aws:SourceIp":[ "redacted" ] } } }, { "Sid":"username%%%policy_control", "Effect":"Deny",

[ceph-users] Re: Disable signature url in ceph rgw

2023-12-12 Thread Marc Singer
. Thank you and have a great day Marc On 12/9/23 00:37, Robin H. Johnson wrote: On Fri, Dec 08, 2023 at 10:41:59AM +0100, marc@singer.services wrote: Hi Ceph users We are using Ceph Pacific (16) in this specific deployment. In our use case we do not want our users to be able to generate si

[ceph-users] Disable signature url in ceph rgw

2023-12-08 Thread marc
it as a configuration option? 3. Or is the behaviour of not respecting bucket policies in RGW with signature v4 URLs a bug, and should they actually be applied? Thank you for your help and let me know if you have any questions Marc Singer ___ ceph

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-17 Thread Jean-Marc FONTANA
a écrit : Hi, I think it should be in /var/log/ceph/ceph-mgr..log, probably you can reproduce this error again and hopefully you'll be able to see a python traceback or something related to rgw in the mgr logs. Regards On Thu, Nov 16, 2023 at 7:43 PM Jean-Marc FONTANA wrote: Hello, These are t

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA
cephadm ['--timeout', '895', 'gather-facts'] Le 16/11/2023 à 12:41, Nizamudeen A a écrit : Hello, can you also add the mgr logs at the time of this error? Regards, On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA

[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Jean-Marc FONTANA
one or the 404 error message with the second one. Thanks for your helping, Cordialement, JM Fontana Le 14/11/2023 à 20:53, David C. a écrit : Hi Jean Marc, maybe look at this parameter "rgw_enable_apis", if the values you have correspond to the default (need rgw restart) : https:

[ceph-users] Problem while upgrade 17.2.6 to 17.2.7

2023-11-14 Thread Jean-Marc FONTANA
an idea ? Best regards, Jean-Marc Fontana ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-19 Thread Marc
> > The question here is a rather simple one: > when you add to an existing Ceph cluster a new node having disks twice > (12TB) the size of the existing disks (6TB), how do you let Ceph evenly > distribute the data across all disks ? Ceph already does this. If you have 6TB+12TB in one node you

[ceph-users] Re: Nautilus - Octopus upgrade - more questions

2023-10-18 Thread Marc
> > I have a Nautilus cluster built using Ceph packages from Debian 10 > Backports, deployed with Ceph-Ansible. > > I see that Debian does not offer Ceph 15/Octopus packages. However, > download.ceph.com does offer such packages. > > Question: Is it a safe upgrade to install the

[ceph-users] Re: Ceph 16.2.x excessive logging, how to reduce?

2023-10-09 Thread Marc
Hi Zakhar, > > I did try to play with various debug settings. The issue is that mons > produce logs of all commands issued by clients, not just mgr. For example, > an Openstack Cinder node asking for space it can use: > > Oct 9 07:59:01 ceph03 bash[4019]: debug 2023-10-09T07:59:01.303+ >

[ceph-users] Re: Ceph 16.2.x excessive logging, how to reduce?

2023-10-09 Thread Marc
Did you do something like this? Getting the keys with "ceph daemon mon.a config show | grep debug_ | grep mgr" and then "ceph tell mon.* injectargs --$monk=0/0" > > Any input from anyone, please? > > This part of Ceph is very poorly documented. Perhaps there's a better place > to ask this question? Please
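Concretely, something along these lines should quiet the mons down (a sketch; 0/0 disables both the log and memory level, and mon_cluster_log_file_level controls what ends up in the cluster log file):

  ceph tell mon.* injectargs '--debug_mon=0/0 --debug_paxos=0/0 --debug_ms=0/0'
  ceph config set mon mon_cluster_log_file_level info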

[ceph-users] Re: cephfs mount 'stalls'

2023-09-20 Thread Marc
> > William, this is fuse client, not the kernel > > Mark, you can use kernel client. Stock c7 or install, for example, kernel- > ml from ELrepo [1], and use the latest krbd version > > I think I had to move to the fuse because with one of the latest releases of luminous, I was getting

[ceph-users] Re: cephfs mount 'stalls'

2023-09-17 Thread Marc
> > > > > > I am still on nautilus and some clients are still on centos7 which mount > the cephfs. These mounts stall at some point. Currently I am mounting with > something like this in the fstab. > > Define ‘stall’. > ls -l /mountpoint will not return anything until the umount -l > > > >

[ceph-users] cephfs mount 'stalls'

2023-09-17 Thread Marc
I am still on nautilus and some clients are still on centos7 which mount the cephfs. These mounts stall at some point. Currently I am mounting with something like this in the fstab. id=cephfsclientid,client_mountpoint=/cephfs/test /mnt/test fuse.ceph
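For comparison, a kernel-client equivalent of such an fstab entry would look roughly like this (monitor addresses and the secret file path are placeholders):

  192.0.2.11:6789,192.0.2.12:6789,192.0.2.13:6789:/cephfs/test  /mnt/test  ceph  name=cephfsclientid,secretfile=/etc/ceph/cephfsclientid.secret,_netdev,noatime  0 0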

[ceph-users] Re: Awful new dashboard in Reef

2023-09-15 Thread Marc
ps://ceph.io/en/news/blog/2023/landing-page/ > > and also in the documentation: > > https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard- > > landing-page > > > > Regards, > > > > On Wed, Sep 13, 2023 at 5:59 PM Marc > <mailto:m...@f1

[ceph-users] Re: ceph orchestator pulls strange images from docker.io

2023-09-15 Thread Marc
> > I currently try to adopt our stage cluster, some hosts just pull strange > > images. > > > > root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps > > CONTAINER ID IMAGE COMMAND > > CREATEDSTATUSPORTS

[ceph-users] Re: Awful new dashboard in Reef

2023-09-13 Thread Marc
board/#overview-of-the-dashboard- > landing-page > > Regards, > > On Wed, Sep 13, 2023 at 5:59 PM Marc <mailto:m...@f1-outsourcing.eu> > wrote: > > > Screen captures please. Not everyone is installing the default ones. > > > >

[ceph-users] Re: Awful new dashboard in Reef

2023-09-13 Thread Marc
Screen captures please. Not everyone is installing the default ones. > > We are collecting these feedbacks. For a while we weren't focusing on the > mobile view > of the dashboard. If there are users using those, we'll look into it as > well. Will let everyone know > soon with the improvements

[ceph-users] Re: *****SPAM***** Re: librbd 4k read/write?

2023-08-12 Thread Marc
> To allow for faster linear reads and writes, please create a file, > /etc/udev/rules.d/80-rbd.rules, with the following contents (assuming > that the VM sees the RBD as /dev/sda): > > KERNEL=="sda", ENV{DEVTYPE}=="disk", ACTION=="add|change", > ATTR{bdi/read_ahead_kb}="32768" > > Or test it
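To try the effect at runtime without a udev rule, roughly (sda is a placeholder; blockdev counts in 512-byte sectors):

  echo 32768 > /sys/block/sda/bdi/read_ahead_kb
  blockdev --setra 65536 /dev/sda    # 65536 * 512 B = 32 MiB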

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Marc
> > Good afternoon everybody! > > > > I have the following scenario: > > Pool RBD replication x3 > > 5 hosts with 12 SAS spinning disks each > > > > I'm using exactly the following line with FIO to test: > > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G > > -iodepth=16

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Marc
> I have the following scenario: > Pool RBD replication x3 > 5 hosts with 12 SAS spinning disks each > > I'm using exactly the following line with FIO to test: > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G > -iodepth=16 -rw=write -filename=./test.img > > If I

[ceph-users] Re: Is it safe to add different OS but same ceph version to the existing cluster?

2023-08-07 Thread Marc
> I have an octopus cluster on the latest octopus version with > mgr/mon/rgw/osds on centos 8. > Is it safe to add an ubuntu osd host with the same octopus version? > I am also wondering a bit about such things. For instance having el9 Nautilus mixed with el7 Nautilus. If I remember correctly

[ceph-users] Re: Ceph Quincy and liburing.so.2 on Rocky Linux 9

2023-08-04 Thread Marc
But Rocky Linux 9 is the continuation of what CentOS would have been on el9. Afaik ceph is being developed on elX distributions and not the 'trial' stream versions, no? > > In most cases the 'Alternative' distros like Alma or Rocky have outdated > versions of packages, if we compare it with

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-02 Thread Marc
> > from Ceph perspective it's supported to upgrade from N to P, you can > safely skip O. We have done that on several clusters without any > issues. You just need to make sure that your upgrade to N was > complete. How do you verify if the upgrade was complete?

[ceph-users] Re: Upgrading nautilus / centos7 to octopus / ubuntu 20.04. - Suggestions and hints?

2023-08-01 Thread Marc
> > As I’ve read and thought a lot about the migration, as this is a bigger > project, I was wondering if anyone has done that already and might share > some notes or playbooks, because in all readings there were some parts > missing or hard to understand for me. > > I do have some different

[ceph-users] Re: Blank dashboard

2023-07-31 Thread Marc
Grafana is just a bit buggy. I used to have a lot of issues with earlier versions. Now with 9.4.7 it is better. Don't really know what version comes with (your) ceph. I still have the issue that the TV with the slideshow gets logged out every so often. But I guess that is just a (new) setting. > >

[ceph-users] Re: precise/best way to check ssd usage

2023-07-29 Thread Marc
> >I have a use % between 48% and 57%, and assume that with a node failure > 1/3 (only using 3x repl.) of this 57% needs to be able to migrate and > added to a different node. > > If you by this mean you have 3 nodes with 3x replica and failure domain > set to No it is more than 3 nodes

[ceph-users] precise/best way to check ssd usage

2023-07-28 Thread Marc
Currently I am checking usage on ssd drives with ceph osd df | egrep 'CLASS|ssd' I have a use % between 48% and 57%, and assume that with a node failure 1/3 (only using 3x repl.) of this 57% needs to be able to migrate and be added to a different node. Is there a better way of checking this
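As a rough sanity check (assuming failure domain host, more hosts than replicas so recovery can actually happen, and a fairly even distribution): with N equally filled nodes at U% use, losing one node pushes the rest towards roughly U * N / (N - 1), e.g.

  echo '57 * 3 / (3 - 1)' | bc -l    # ~85.5%, uncomfortably close to the full ratios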

[ceph-users] Re: cephbot - a Slack bot for Ceph has been added to the github.com/ceph project

2023-07-26 Thread Marc
> > The instructions show how to set it up so that read-only operations can > be > performed from Slack for security purposes, but there are settings that > could make it possible to lock down who can communicate with cephbot > which > could make it relatively secure to run administrative tasks

[ceph-users] Re: what is the point of listing "auth: unable to find a keyring on /etc/ceph/ceph.client nfs-ganesha

2023-07-21 Thread Marc
Hi Dhairya, Yes I have in ceph.conf (only copied the lines below, there are more in these sections). I do not have a keyring path setting in ceph.conf public network = a.b.c.111/24 [mon] mon host = a.b.c.111,a.b.c.112,a.b.c.113 [mon.a] mon addr = a.b.c.111 [mon.b] mon addr = a.b.c.112

[ceph-users] Re: MDS cache is too large and crashes

2023-07-21 Thread Marc
> > At 01:27 this morning I received the first email about MDS cache is too > large (mailing happens every 15 minutes if something happens). Looking > into it, it was again a standby-replay host which stops working. > > At 01:00 a few rsync processes start in parallel on a client machine. > This

[ceph-users] what is the point of listing "auth: unable to find a keyring on /etc/ceph/ceph.client nfs-ganesha

2023-07-20 Thread Marc
I need some help understanding this. I have configured nfs-ganesha for cephfs using something like this in ganesha.conf FSAL { Name = CEPH; User_Id = "testing.nfs"; Secret_Access_Key = "AAA=="; } But I constantly have these messages in the ganesha logs, 6x per user_id auth:
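One way to make those messages go away (a sketch, assuming they are only cosmetic and the key is the same one as in ganesha.conf) is to give the client library a keyring it can actually find:

  # /etc/ceph/ceph.client.testing.nfs.keyring
  [client.testing.nfs]
      key = AAA==

  # and point to it in ceph.conf
  [client.testing.nfs]
      keyring = /etc/ceph/ceph.client.testing.nfs.keyring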

[ceph-users] Re: device class for nvme disk is ssd

2023-06-28 Thread Marc
> > What would we use instead? SATA / SAS that are progressively withering > in the market, less performance for the same money? Why pay extra for an > HBA just to use legacy media? I am still buying sas/sata ssd's; these are for me still ~half the price of the nvme equivalent.

[ceph-users] Re: device class for nvme disk is ssd

2023-06-28 Thread Marc
> Hi, > is it a problem that the device class for all my disks is SSD even all > of > these disks are NVME disks? If it is just a classification for ceph, so > I > can have pools on SSDs and NVMEs separated I don't care. But maybe ceph > handles NVME disks differently internally? > I am not

[ceph-users] Re: How to secure erasing a rbd image without encryption?

2023-06-08 Thread Marc
> > I bumped into an very interesting challenge, how to secure erase a rbd > image data without any encryption? > > The motivation is to ensure that there is no information leak on OSDs > after deleting a user specified rbd image, without the extra burden of > using rbd encryption. > > any

[ceph-users] Re: CEPH Version choice

2023-05-31 Thread Marc
Hi Frank, Thanks! I have added this to my test environment todo > > I uploaded all scripts and a rudimentary readme to > https://github.com/frans42/cephfs-bench . I hope it is sufficient to get > started. I'm afraid its very much tailored to our deployment and I can't > make it fully

[ceph-users] Re: `ceph features` on Nautilus still reports "luminous"

2023-05-25 Thread Marc
> > on our way towards getting our cluster to a current Ceph release, we > updated all hosts and clients to Nautilus 14.2.22. I think for an upgrade the mons need to be on rocksdb. Check this for your monitors: cat /var/lib/ceph/mon/ceph-a/kv_backend ___

[ceph-users] Re: Ceph OSDs suddenly use public network for heardbeat_check

2023-05-17 Thread Marc
> > > In fact, when we start up the cluster, we don't have DNS available to > resolve the IP addresses, and for a short while, all OSDs are located > in a new host called "localhost.localdomain". At that point, I fixed > it by setting the static hostname using `hostnamectl set -hostname >

[ceph-users] Re: CEPH Version choice

2023-05-15 Thread Marc
> > By the way, regarding performance I recommend the Cephalocon > presentations by Adam and Mark. There you can learn what efforts are > made to improve ceph performance for current and future versions. > Link? ___ ceph-users mailing list --

[ceph-users] Re: CEPH Version choice

2023-05-15 Thread Marc
> > We set up a test cluster with a script producing realistic workload and > started testing an upgrade under load. This took about a month (meaning > repeating the upgrade with a cluster on mimic deployed and populated Hi Frank, do you have such scripts online? On github or so? I was thinking

[ceph-users] Re: CEPH Version choice

2023-05-15 Thread Marc
> > I've been reading through this email list for a while now, but one thing > that I'm curious about is why a lot of installations out there aren't > upgraded to the latest version of CEPH (Quincy). > > What are the main reasons for not upgrading to the latest and greatest? If you are starting

[ceph-users] Re: Upgrade Ceph cluster + radosgw from 14.2.18 to latest 15

2023-05-15 Thread Marc
why are you still not on 14.2.22? > > Yes, the documents show an example of upgrading from Nautilus to > Pacific. But I'm not really 100% trusting the Ceph documents, and I'm > also afraid of what if Nautilus is not compatible with Pacific in some > operations of monitor or osd =)

[ceph-users] Re: v16.2.13 Pacific released

2023-05-10 Thread Marc
What is up with this latency issue? From what I have read here on the mailing list, this looks bad to me, until someone from ceph/redhat says it is not. https://tracker.ceph.com/issues/58530 https://www.mail-archive.com/ceph-users@ceph.io/msg19012.html > > We're happy to announce the 13th backport

[ceph-users] Re: Upgrade Ceph cluster + radosgw from 14.2.18 to latest 15

2023-05-09 Thread Marc
Because pacific has performance issues > > Curious, why not go to Pacific? You can upgrade up to 2 major releases > in a go. > > > The upgrade process to pacific is here: > https://docs.ceph.com/en/latest/releases/pacific/#upgrading-non-cephadm- > clusters > The upgrade to Octopus is here: >

[ceph-users] Re: Upgrade Ceph cluster + radosgw from 14.2.18 to latest 15

2023-05-09 Thread Marc
> > Hi, I want to upgrade my old Ceph cluster + Radosgw from v14 to v15. But > I'm not using cephadm and I'm not sure how to limit errors as much as > possible during the upgrade process? Maybe check the changelog, check the upgrade notes, and continuously monitor the mailing list? I have to do

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-04-27 Thread Marc
> > > > > The question you should ask yourself, why you want to > > change/investigate this? > > > > Because if scrubbing takes 10x longer thrashing seeks, my scrubs never > > finish in time (the default is 1 week). > > I end with e.g. > > > > > 267 pgs not deep-scrubbed in time > > > > On a 38

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-04-27 Thread Marc
> > > The question you should ask yourself, why you want to > change/investigate this? > > Because if scrubbing takes 10x longer thrashing seeks, my scrubs never > finish in time (the default is 1 week). > I end with e.g. > > > 267 pgs not deep-scrubbed in time > > On a 38 TB cluster, if you

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-04-26 Thread Marc
Hi Niklas, > > > 100MB/s is sequential, your scrubbing is random. afaik everything is > random. > > Is there any docs that explain this, any code, or other definitive > answer? do a fio[1] test on a disk to see how it performs under certain conditions. Or look at atop during scrubbing, it
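For example, comparing a sequential and a random read job on the same idle, non-production disk makes the difference obvious (/dev/sdX is a placeholder; both jobs are read-only):

  fio --name=seq  --filename=/dev/sdX --ioengine=libaio --direct=1 --rw=read     --bs=4M --iodepth=16 --runtime=60 --time_based
  fio --name=rand --filename=/dev/sdX --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=16 --runtime=60 --time_based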

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-04-26 Thread Marc
> > I observed that on an otherwise idle cluster, scrubbing cannot fully > utilise the speed of my HDDs. Maybe the configured limit is set like this because, once (a part of) the scrubbing process has started, it is not possible/easy to automatically scale down its performance to benefit

[ceph-users] Re: For suggestions and best practices on expanding Ceph cluster and removing old nodes

2023-04-25 Thread Marc
Maybe he is limited by the supported OS > > I would create a new cluster with Quincy and would migrate the data from > the old to the new cluster bucket by bucket. Nautilus is out of support > and > I would recommend at least to use a ceph version that is receiving > Backports. > >

[ceph-users] upgrading from el7 / nautilus

2023-04-19 Thread Marc
Sorry for addressing this again. But I think there are quite a few still on Nautilus that are planning such an upgrade. Nautilus is currently available for el7, el8; Octopus is currently available for el7, el8; Pacific is currently available for el8, el9; Quincy is currently available for el8,

[ceph-users] Re: pacific el7 rpms

2023-04-19 Thread Marc
It would be better to remove such folders, because it gives the impression something is due > > On EL7 only Nautilus was present. Pacific was from EL8 > > > k > > > > On 17 Apr 2023, at 11:29, Marc wrote: > > > Is there ever

[ceph-users] pacific el7 rpms

2023-04-17 Thread Marc
Is there ever going to be rpms in https://download.ceph.com/rpm-pacific/el7/ ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: CEPH Mirrors are lacking packages

2023-04-17 Thread Marc
The ones you are mentioning are missing even the pacific release, so maybe try https://download.ceph.com/ > > > at least eu.ceph.com and de.ceph.com are lacking packages for the > pacific release. All packages not starting with "c" (e.g. librbd, librados, > radosgw) are missing. >

[ceph-users] Re: Nearly 1 exabyte of Ceph storage

2023-04-12 Thread Marc
> > We are excited to share with you the latest statistics from our Ceph > public telemetry dashboards . :) > One of the things telemetry helps us to understand is version adoption > rate. See, for example, the trend of Quincy

[ceph-users] Re: compiling Nautilus for el9

2023-04-03 Thread Marc
I am building with a centos9 stream container currently. I have been adding some rpms that were missing and not in the dependencies. Currently with these cmake options, these binaries are not built. Does anyone have an idea what this could be? cmake .. -DCMAKE_INSTALL_PREFIX=/usr

[ceph-users] Re: compiling Nautilus for el9

2023-04-02 Thread Marc
> > > > > > > > Is it possible to compile Nautilus for el9? Or maybe just the osd's? > > > > > > > I was thinking of updating first to el9/centos9/rocky9 one node at a > time, and after that do the ceph upgrade(s). I think this will give me > the least intrusive upgrade path. > > > > However that

[ceph-users] Re: compiling Nautilus for el9

2023-04-01 Thread Marc
> > Is it possible to compile Nautilus for el9? Or maybe just the osd's? > I was thinking of updating first to el9/centos9/rocky9 one node at a time, and after that do the ceph upgrade(s). I think this will give me the least intrusive upgrade path. However that requires the availability

[ceph-users] Re: how ceph OSD bench works?

2023-03-31 Thread Marc
> > OSD bench performs IOs at the objectstore level and the stats are > reported > based on the response from those transactions. It performs either > sequential > or random IOs (i.e. a random offset into an object) based on the > arguments > passed to it. IIRC if number of objects and object
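For reference, the basic invocation takes the total bytes to write and the block size; the values here are just an example (1 GiB in 4 MiB writes):

  ceph tell osd.0 bench 1073741824 4194304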

[ceph-users] compiling Nautilus for el9

2023-03-30 Thread Marc
Is it possible to compile Nautilus for el9? Or maybe just the osd's? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Unexpected slow read for HDD cluster (good write speed)

2023-03-28 Thread Marc
Yes it pays off to know what to do before you do it, instead of after. If you complain about speed, is it a general unfounded complaint or did you compare ceph with similar solutions? I have really no idea what the standards are for these types of solutions. I can remember asking at such

[ceph-users] Re: Question about adding SSDs

2023-03-27 Thread Marc
> > We have a ceph cluster (Proxmox based) with is HDD-based. We’ve had > some performance and “slow MDS” issues while doing VM/CT backups from > the Proxmox cluster, especially when rebalancing is going on at the same > time. I also had to increase the mds cache quite a lot to get rid of
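For reference, the knob in question; the value is just an example (8 GiB) and the MDS may temporarily exceed it:

  ceph config set mds mds_cache_memory_limit 8589934592
  ceph config get mds mds_cache_memory_limit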

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-27 Thread Marc
> > > > And for your reference - IOPS numbers I'm getting in my lab with > data/DB > > colocated: > > > > 1) OSD on top of Intel S4600 (SATA SSD) - ~110 IOPS > > sata ssd's on Nautilus: Micron 5100 117 MZ7KM1T9HMJP-5 122 ___ ceph-users mailing

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-27 Thread Marc
> > > > >> > >> What I also see is that I have three OSDs that have quite a lot of > OMAP > >> data, in compare to other OSDs (~20 time higher). I don't know if > this > >> is an issue: > > > > I have on 2TB ssd's with 2GB - 4GB omap data, while on 8TB hdd's the > omap data is only 53MB - 100MB.

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-27 Thread Marc
> > What I also see is that I have three OSDs that have quite a lot of OMAP > data, compared to other OSDs (~20 times higher). I don't know if this > is an issue: I have on 2TB ssd's 2GB - 4GB of omap data, while on 8TB hdd's the omap data is only 53MB - 100MB. Should I manually clean this?

[ceph-users] Re: avg apply latency went up after update from octopus to pacific

2023-03-26 Thread Marc
> > sadly we do not have the data from the time where c1 was on nautilus. > The RocksDB warning persisted the recreation. > Hi Boris, I was monitoring this thread a bit because I also still need to update from Nautilus, and am interested in this performance degradation. I am happy to provide

[ceph-users] Re: With Ceph Quincy, the "ceph" package does not include ceph-volume anymore

2023-03-24 Thread Marc
> > Until Ceph Pacific, installing just the "ceph" package was enough to > get everything needed to deploy Ceph. > > However, with Quincy, ceph-volume was split off into its own package, > and it is not automatically installed anymore. > > > > Should I file a bug for this? > > I would be

[ceph-users] Cephalocon Amsterdam 2023 Photographer Volunteer + tld common sense

2023-03-22 Thread Marc
I just forwarded your message to a photographer in the Amsterdam area who might be able to help you out. Then I noticed your .foundation email address. I know the marketing people just love all the new weird extensions being released, but think about this for a second. When you propagate in

[ceph-users] Re: Unexpected slow read for HDD cluster (good write speed)

2023-03-20 Thread Marc
> While > reading, we barely hit the mark of 100MB/s; we would expect at least > something similar to the write speed. These tests are being performed in > a > pool with a replication factor of 3. > > You don't even describe how you test? And why would you expect something like the write

[ceph-users] Re: s3 compatible interface

2023-03-18 Thread Marc
> for testing you can try: https://github.com/aquarist-labs/s3gw > Yes indeed, that looks like it can be used with a simple fs backend. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Moving From BlueJeans to Jitsi for Ceph meetings

2023-03-16 Thread Marc
> > We have been using BlueJeans to meet and record some of our meetings > that later get posted to our YouTube channel. Unfortunately, we have > to figure out a new meeting platform due to Red Hat discontinuing > BlueJeans by the end of this month. > > Google Meets is an option, but some
