Hi Community,
I recently proposed a new authorization mechanism for RGW that lets the
RGW daemon ask an external service to authorize a request based on AWS S3
IAM tags (meaning the external service would receive the same environment
that an IAM policy document would use to evaluate the policy).
You can
Yea exactly that was what I wanted.
Thanks. :)
On Thu, 23 Jun 2022 at 15:50, Tim Düsterhus wrote:
> Seena,
>
> On 6/22/22 19:57, Seena Fallah wrote:
> > I'm trying to compare two variables in ACL but seems the one on the right
> > side is not rendering and assume
Hi,
I'm trying to compare two variables in an ACL, but it seems the one on the
right side is not rendered and is treated as a literal string.
Is there an example of how I can compare two variables in HAProxy ACLs?
Testing on HAProxy v2.6.
Thanks.
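For what it's worth, one way to compare two variables in a HAProxy ACL is the strcmp converter, which compares its input string against the contents of a variable (a sketch; the txn.a/txn.b variable names and the header names are hypothetical):

```
http-request set-var(txn.a) req.hdr(x-a)
http-request set-var(txn.b) req.hdr(x-b)
# strcmp returns 0 when the two strings are equal
http-request deny if { var(txn.a),strcmp(txn.b) eq 0 }
```

Writing the right-hand side directly in an ACL pattern position treats it as a literal string, which matches the behavior described above.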
Got it!
Thanks. Works like a charm =)
On Tue, 7 Jun 2022 at 17:50, Willy Tarreau wrote:
> On Tue, Jun 07, 2022 at 01:51:06PM +0200, Seena Fallah wrote:
> > I also tried with this one but this will give me 20req/s 200 OK and the
> > rest of it 429 too many requests
> >
s sense other requests get 429, but
actually only 20 req/s are answered with 200, because http_req_rate is not
decreasing at the correct intervals!
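For context, http_req_rate is computed over a sliding window: the previous period's count is weighted by the remaining fraction of the window, so the measured rate decays gradually instead of resetting at period boundaries. A minimal Python sketch of such a counter (an illustration of the idea, not HAProxy's exact code):

```python
# Illustrative sliding-window rate counter (not HAProxy's exact algorithm):
# the previous period's count is weighted by the remaining fraction of the
# current window, so the rate decays gradually rather than dropping to zero.
class RateCounter:
    def __init__(self, period=1.0):
        self.period = period
        self.prev = 0      # events counted in the previous full period
        self.curr = 0      # events counted so far in the current period
        self.start = 0.0   # start time of the current period

    def tick(self, now):
        # roll the window forward if one or more full periods have elapsed
        while now - self.start >= self.period:
            self.prev = self.curr
            self.curr = 0
            self.start += self.period

    def hit(self, now):
        self.tick(now)
        self.curr += 1

    def rate(self, now):
        self.tick(now)
        remaining = 1.0 - (now - self.start) / self.period
        return self.curr + self.prev * remaining

rc = RateCounter(period=1.0)
for i in range(100):
    rc.hit(i / 100.0)           # 100 hits spread over one second
print(round(rc.rate(1.0)))      # prints 100: full previous period counted
print(round(rc.rate(1.5)))      # prints 50: prev=100 weighted by 0.5
```

This decay is why a burst can keep the measured rate above the threshold for a while after the traffic slows down.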
On Fri, 3 Jun 2022 at 17:44, Seena Fallah wrote:
> Do you see any diff between my conf and the one in the link? :/
>
> On Fri, 3 Ju
Do you see any diff between my conf and the one in the link? :/
On Fri, 3 Jun 2022 at 17:37, Aleksandar Lazic wrote:
> Hi.
>
> On Fri, 3 Jun 2022 17:12:25 +0200
> Seena Fallah wrote:
>
> > When using the below config to have 100req/s rate-limiting after passing
> > t
When using the config below for 100 req/s rate limiting, once the rate
passes 100 req/s all requests are denied, not just the requests above
100 req/s!
```
listen test
    bind :8000
    stick-table type ip size 100k expire 30s store http_req_rate(1s)
    http-request track-sc0 src
    http-request deny
```
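As written, the http-request deny line has no if condition, so once a client is tracked every request from it is denied regardless of rate. A hedged sketch of the presumably intended form (the 100 req/s threshold mirrors the text; adjust as needed):

```
listen test
    bind :8000
    stick-table type ip size 100k expire 30s store http_req_rate(1s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
```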
be good!
Maybe another method that could define local configs for nodes in cluster
mode would be better, but for now wouldn't it be better to move ahead with
this implementation?
On Tue, Nov 2, 2021 at 2:05 AM Ilya Maximets wrote:
> On 10/28/21 19:17, Seena Fallah wrote:
> > Just wanted to ping you gu
Just wanted to ping you guys for my question :)
On Wed, Oct 20, 2021 at 2:11 AM Seena Fallah wrote:
> I'm trying to write the test for it but because I faced and verify the
> issue over ovn I can't find the way to verify where can I verify that
> inactivity_probe is set correctly. Can
AM Seena Fallah wrote:
> Thanks for your notes. Well because I'm much comfortable with Github
> features if it's okay I continue to push further changes in Github.
>
> The reason I add this cmdline arg is creating dedicated remotes for each
> ovsdb server seems only availab
ote:
> >
> >
> > On 10/14/21 8:45 PM, Seena Fallah wrote:
> >> Hi,
> >>
> >> I've made a patch in GitHub https://github.com/openvswitch/ovs/pull/371
> >> Please review it.
> > Hi Seena,
> >
> > We don't review pull request o
ovs to ovn.
On Fri, Oct 15, 2021 at 4:41 AM Han Zhou wrote:
>
>
> On Thu, Oct 14, 2021 at 7:25 AM Seena Fallah
> wrote:
>
>> It's mostly on nb.
>>
> I am surprised since we usually don't see any scale problem for the NB DB
> servers, because usually SB data si
Hi,
I've made a patch in GitHub https://github.com/openvswitch/ovs/pull/371
Please review it.
Thanks.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
It's mostly on nb.
Yes, I previously set that value to 6, but it didn't help!
On Sun, Oct 10, 2021 at 10:34 PM Han Zhou wrote:
>
>
> On Sat, Oct 9, 2021 at 12:02 PM Seena Fallah
> wrote:
> >
> > Also I get many logs like this in ovn:
> >
> > 2021-10-09T18:54:
:10.0.0.3:48796:
connection dropped (Connection reset by peer)
What counts as an excessive rate? How many req/s would be considered an
excessive rate?
On Thu, Oct 7, 2021 at 12:46 AM Seena Fallah wrote:
> Seems the most leader failure is for NB and the command you said is for SB.
>
> Do you
Hi,
When building OVN with jemalloc and DDlog, the build hits an issue with
jemalloc:
Compiling const-random v0.1.13
Running `rustc --crate-name const_random --edition=2018
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/const-random-0.1.13/src/
lib.rs --error-format=json
It seems most of the leader failures are for NB, and the command you
mentioned is for SB.
Do you have any benchmarks of how many ACLs OVN can handle normally?
I see many failures after 100k ACLs.
On Thu, Oct 7, 2021 at 12:14 AM Numan Siddique wrote:
> On Wed, Oct 6, 2021 at 2:49 PM Seena Fallah wr
:15 PM Seena Fallah
> wrote:
> >
> > Hi,
> >
> > I use ovn for OpenStack neutron plugin for my production. After days I
> see issues about losing a leader in ovsdb. It seems it was because of the
> failing inactivity probe and because I had 17k acls. After I disable
Hi,
I use OVN as the OpenStack Neutron plugin in production. After a few days I
see issues with losing the leader in ovsdb. It seems it was because of the
failing inactivity probe and because I had 17k ACLs. After I disabled the
inactivity probe it works fine, but when I did a scale test on it (about 40k
I found a way to set it using:
ceph-kvstore-tool rocksdb . set osdmap first_committed ver 12261
But is it safe to do that? =)
On Mon, Sep 27, 2021 at 5:06 PM Seena Fallah wrote:
> Hi,
>
> I've lost all my mon dbs and after rebuilding it using OSDs the osdmap
> first_committed
Hi,
I've lost all my mon DBs, and after rebuilding them using the OSDs, the
osdmap first_committed is set to 1, but my osdmap commits start from 12261
(per a list of osdmaps from the mon DB). Now when the mon wants to trim
osdmaps it fails because it can't find osdmap 1.
Is there a way to change osdmap
If you are using S3, you can try a bucket policy:
https://docs.ceph.com/en/latest/radosgw/bucketpolicy/
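For example, a bucket policy granting one user read/write access to a single bucket might look roughly like this (a sketch; the bucket and user names are hypothetical, and note that subusers are a Swift concept, so an S3 policy would target a user or tenant$user principal):

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/alice"]},
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::alice-bucket", "arn:aws:s3:::alice-bucket/*"]
  }]
}
```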
On Wed, Jul 21, 2021 at 6:28 PM Rok Jaklič wrote:
> Hi,
>
> is it possible to limit access of the subuser that he sees (read, write)
> only "his" bucket? And also be able to create a
?
The way I want to unset it is to decompile the osdmap, remove this flag,
recompile it, and set it back in Ceph.
On Mon, Jul 19, 2021 at 12:04 AM Seena Fallah wrote:
> I don't think it's a pool based config and in my cluster, it's set on
> osdmap level flags. The pool I test in the higher latency c
18, 2021 at 11:57 PM Brett Niver wrote:
> Seena,
>
> Which pool has the hardlimit flag set, the lower latency one, or the
> higher?
> Brett
>
>
> On Sun, Jul 18, 2021 at 12:17 PM Seena Fallah
> wrote:
>
>> I've checked out my logs and see there is pg
(https://github.com/ceph/ceph/pull/20394) that is not
backported to Luminous. Could this help?
On Sun, Jul 18, 2021 at 12:09 AM Seena Fallah wrote:
> I've trimmed pg log on all OSDs and whoops (!) latency came from 100ms to
> 20ms! But based on the other cluster I think it should come to arou
I've trimmed the PG log on all OSDs and, whoops(!), latency went from 100 ms
to 20 ms! But based on the other cluster I think it should come down to
around 7 ms. Is there anything related to the PG log, or anything else, that
can help me continue debugging?
On Thu, Jul 15, 2021 at 3:13 PM Seena Fallah wrote:
>
Hi,
I'm facing something strange in Ceph (v12.2.13, filestore). I have two
clusters with the same config (kernel, network, disks, ...). One of them
has 3 ms latency, the other has 100 ms latency. Physical disk write latency
is less than 1 ms on both.
In the cluster with 100 ms write latency, when I
Hi,
In ceph osd dump I see many removed_snaps, on the order of 500k.
I sometimes see a snap trimming event in ceph status, but when I dump
removed_snaps again afterwards, the list doesn't get any smaller!
How can I get rid of these removed_snaps?
Thanks.
Hi,
Is Ceph using TCP_FASTOPEN for its sockets?
If not, why not?
Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
I had the same problem in my cluster, and it was because of the insights mgr
module, which was storing lots of data in RocksDB because my cluster was
degraded.
If you have degraded PGs, try disabling the insights module.
On Thu, Feb 25, 2021 at 11:40 PM Dan van der Ster
wrote:
> > "source":
Many thanks for your response.
One more question: in the case of a CRC mismatch, how many times does it
retry, and does it log any errors in the kernel so I can see whether there
was a CRC mismatch or not?
On Thu, Feb 11, 2021 at 3:05 PM Ilya Dryomov wrote:
> On Thu, Feb 11, 2021 at 1:34 AM Seena Fal
Hi,
I have a few questions about krbd on kernel 4.15:
1. Does it support msgr v2? (If not, which kernel supports msgr v2?)
2. If krbd is using msgr v1, does it checksum (CRC) the messages it sends,
to verify for example that a write is correct? And if it does checksum, if
there were a
Yes, but this could speed up and balance the recovery ops across all OSDs,
and because it's a read op for the secondary or third OSD it shouldn't be
very hurtful!
On Wed, Feb 10, 2021 at 10:03 PM Janne Johansson
wrote:
> Den ons 10 feb. 2021 kl 19:09 skrev Seena Fallah :
>
>> But I think t
But I think they can have no recovery ops.
On Wed, Feb 10, 2021 at 9:28 PM Janne Johansson wrote:
> Den ons 10 feb. 2021 kl 18:05 skrev Seena Fallah :
>
>> I have the same question about when recovery is going to happen! I think
>> recovering from second and third OSD can
I have the same question about when recovery is going to happen! I think
recovering from the second and third OSDs could also avoid impacting client
IO when the primary OSD has other recovery ops!
On Tue, Feb 9, 2021 at 1:28 PM mj wrote:
> Hi,
>
> Quoting the page
After disabling the insights module in mgr, the mons' RocksDB submit sync
latency went down and my problem was solved!!
On Fri, Feb 5, 2021 at 2:36 PM Seena Fallah wrote:
> Is there any suggestion on disk spec? I don’t find any doc about it on
> ceph too!
>
> On Fri, Feb 5, 2021 at 11:37 AM
r processes to protect the
> > monitor's available disk space from things like log file creep.
>
> Regards,
> Eugen
>
> [1]
> https://documentation.suse.com/ses/7/single-html/ses-deployment/#sysreq-mon
>
> Zitat von Seena Fallah :
>
> > This is my osdmap
odes?
On Thu, Feb 4, 2021 at 3:09 AM Seena Fallah wrote:
> Hi all,
>
> My monitor nodes are getting up and down because of paxos lease timeout
> and there is a high iops (2k iops) and 500MB/s throughput on
> /var/lib/ceph/mon/ceph.../store.db/.
> My cluster is in a recovery stat
Hi all,
My monitor nodes keep going up and down because of paxos lease timeouts, and
there is high IOPS (2k IOPS) and 500 MB/s of throughput on
/var/lib/ceph/mon/ceph.../store.db/.
My cluster is in a recovery state and there are a bunch of degraded PGs on
my cluster.
It seems it's doing a 200k block
Hi,
Is there any reason why RBD image size isn't exported in the Prometheus
module?
Thanks.
It was a long time ago and I don't have the `ceph health detail` output!
On Sat, Jan 16, 2021 at 9:42 PM Alexander E. Patrakov
wrote:
> For a start, please post the "ceph health detail" output.
>
> сб, 19 дек. 2020 г. в 23:48, Seena Fallah :
> >
> > Hi,
>
All of my daemons are 14.2.24
On Sat, Jan 16, 2021 at 2:39 AM wrote:
> Hello Seena,
>
> Which Version of ceph you are using?
>
> IIRC there Was a bug in an older luminous which caused an empty list...
>
> HTH
> Mehmet
>
> Am 19. Dezember 2020 19:47:10 MEZ sch
If you are using ceph-container images, you should update your image. This
feature was introduced in v5.0.5:
https://github.com/ceph/ceph-container/releases/tag/v5.0.5
On Wed, Jan 6, 2021 at 1:22 AM Tony Liu wrote:
> Any comments?
>
> Thanks!
> Tony
> > -Original Message-
> > From:
out what these reads are for? I don't have any backfilling, and there are
just regular scrubs and deep scrubs on my server!
Thanks.
On Thu, Dec 24, 2020 at 12:47 AM Seena Fallah wrote:
> I have enabled bluefs_buffered_io on some of my OSD nodes and disable some
> others based on the serve
Hence the potential workarounds are adjusting bluefs_buffered_io and
> manual RocksDB compaction.
>
> This topic has been discussed in this mailing list and relevant tickets
> multiple times.
>
>
> Thanks,
>
> Igor
>
> On 12/23/2020 3:24 PM, Seena Fallah wrote:
> >
Hi,
All my OSD nodes in the SSD tier are hitting heartbeat_map timeouts randomly
and I can't find out why!
7ff2ed3f2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread
0x7ff2c8943700' had timed out after 15
It occurs many times a day and causes my cluster to go down.
Is there any way to
Hi,
I want to enable the firewall on my Ceph nodes with ufw. Does anyone have
experience with performance regressions from it?
Is there any way to block exporter ports (like node exporter and ceph
exporter) in a Ceph cluster without a firewall?
Thanks.
Hi,
I used radosgw-admin reshard process to run a manual bucket reshard; after
it completes, it logs the error below:
ERROR: failed to process reshard logs, error=(2) No such file or directory
I'd added the bucket to the resharding queue with radosgw-admin reshard add
--bucket bucket-tmp
Hi,
I'm facing something strange! One of the PGs in my pool got inconsistent,
and when I ran `rados list-inconsistent-obj $PG_ID --format=json-pretty`
the `inconsistents` key was empty! What is this? Is it a bug in Ceph, or..?
Thanks.
Hi.
When I deployed an OSD with a separate DB block, I got Permission denied on
its path! I have no idea why, but the only change from my previous
deployments was that I changed osd_crush_initial_weight from 0 to 1. When I
restart the host, the OSD comes up without any errors. I
Hi all,
I want to benchmark my production cluster with cbt. I read a bit of the
code and I see something strange in it; for example, it's going to create
ceph-osd by itself (
https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L373) and also
shut down the whole cluster!! (
Hi,
I'm facing this issue too, and I see the RocksDB log Mark attached in my
cluster as well, which means there is a burst of reads on my block.db. I've
sent some information about my issue in this thread [1]. I hope you can help
me figure out what's going on in my cluster.
Thanks.
[1]:
I found that bluefs_max_prefetch is set to 1048576, which equals 1 MiB! So
why is it reading at about 1 GiB/s?
On Thu, Dec 3, 2020 at 8:03 PM Seena Fallah wrote:
> My first question is about this metric: ceph_bluefs_read_prefetch_bytes
> and I want to know what operation is related to this
My first question is about the ceph_bluefs_read_prefetch_bytes metric: I
want to know which operations are related to this metric.
On Thu, Dec 3, 2020 at 7:49 PM Seena Fallah wrote:
> Hi all,
>
> When my cluster gets into a recovery state (adding new node) I see a huge
> rea
Hi all,
When my cluster gets into a recovery state (adding a new node) I see huge
read throughput on its disks, and it affects latency! The disks are SSDs and
they don't have a separate WAL/DB.
I'm using Nautilus 14.2.14, and bluefs_buffered_io is false by default. When
this throughput came on my
Thanks. It seems it is related to the wpq implementation and how it
organizes priorities!
I want to slow down the keys/s, and I've set all the recovery priorities to
1, but it doesn't slow down!
On Thu, Dec 3, 2020 at 1:13 PM Anthony D'Atri
wrote:
>
> >> If so why the client op priority is
has some discussion of op priorities, though client ops aren’t mentioned
> explicitly. If you like, enter a documentation tracker and tag me and I’ll
> look into adding that.
>
> > On Dec 2, 2020, at 9:56 AM, Stefan Kooman wrote:
> >
> > On 12/2/20 5:36 PM, Seena Fallah
; this doc.
>
> On Wed, Dec 2, 2020 at 7:04 PM Peter Lieven wrote:
>
>> Am 02.12.20 um 15:04 schrieb Seena Fallah:
>> > I don't think so! I want to slow down the recovery not speed up and it
>> says
>> > I should reduce these values.
>>
>>
>>
2.12.20 um 15:04 schrieb Seena Fallah:
> > I don't think so! I want to slow down the recovery not speed up and it
> says
> > I should reduce these values.
>
>
> I read the documentation the same. Low value = low weight, High value =
> high weight. [1]
>
> Operations wi
I don't think so! I want to slow down the recovery, not speed it up, and it
says I should reduce these values.
On Wed, Dec 2, 2020 at 5:31 PM Stefan Kooman wrote:
> On 12/2/20 2:55 PM, Seena Fallah wrote:
> > This is what I used in recovery:
> > osd max backfills = 1
> > osd re
This is what I used in recovery:
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1
osd recovery priority = 1
osd recovery sleep ssd = 0.2
But it doesn't help much!
On Wed, Dec 2, 2020 at 5:23 PM Stefan Kooman wrote:
> On 12/2/20 2:46 PM, Seena Fallah wrote:
&g
I did the same, but it still moved 200K keys/s!
On Wed, Dec 2, 2020 at 5:14 PM Stefan Kooman wrote:
> On 12/1/20 12:37 AM, Seena Fallah wrote:
> > Hi all,
> >
> > Is there any configuration to slow down keys/s in recovery mode?
>
> Not just keys, but you can lim
Hi all,
Is there any configuration to slow down keys/s in recovery mode?
Thanks.
can see this
> video https://www.youtube.com/watch?v=-9_53PtwQHk which will only help
> you know how to see the tracing in frontend)
>
> On Thu, Nov 19, 2020 at 6:02 AM Seena Fallah
> wrote:
>
>> Isn't there any plan to upgrade this doc?
>> https://docs.ceph.com/en/latest/
idea why latency is affected so much with these parameters?
On Tue, Nov 24, 2020 at 12:42 PM Seena Fallah wrote:
> I add one OSD node to the cluster and I get 500MB/s throughput over my
> disks and it was 2 or 3 times better than before! but my latency raised 5
> times!!!
> W
,
>
> Igor
>
> On 11/23/2020 2:51 AM, Seena Fallah wrote:
> > Now one of my OSDs gets segfault.
> > Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/
> >
> > On Mon, Nov 23, 2020 at 2:16 AM Seena Fallah
> wrote:
> >
> >> Hi all
Now one of my OSDs is segfaulting.
Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/
On Mon, Nov 23, 2020 at 2:16 AM Seena Fallah wrote:
> Hi all,
>
> After I upgrade from 14.2.9 to 14.2.14 my OSDs are using less more memory
> than before! I give each OSD 6GB m
Hi all,
After I upgraded from 14.2.9 to 14.2.14, my OSDs are using much less memory
than before! I give each OSD a 6 GB memory target; before, free memory was
20 GB, and now, 24h after the upgrade, I have 104 GB free out of 128 GB!
Also, my OSD latency has increased!
This happens in
Isn't there any plan to update this doc?
https://docs.ceph.com/en/latest/dev/blkin/
On Fri, Nov 13, 2020 at 3:21 AM Seena Fallah wrote:
> Hi all,
>
> Does this project work with the latest zipkin apis?
> https://github.com/ceph/babeltrace-zipkin
>
> Also what do you prefer
Also, when I run reclassify-bucket against a non-existent base bucket, it
says: "default parent test does not exist".
But as documented in
https://docs.ceph.com/en/latest/rados/operations/crush-map-edits/ it should
create it!
On Tue, Nov 17, 2020 at 6:05 PM Seena Fallah wrote:
> Hi al
Hi all,
I want to reclassify my crushmap. I have two roots, one hiops and one
default. In the hiops root I have one datacenter, in that I have three
racks, and in each rack I have 3 OSDs. When I run the command below it says
"item -55 in bucket -54 is not also a reclassified bucket". I see the new
Hi all,
Does this project work with the latest Zipkin APIs?
https://github.com/ceph/babeltrace-zipkin
Also, what do you recommend for tracing requests for RGW and RBD in Ceph?
Thanks.
nd therefore cannot be included in a universally-applicable set
> of tuning recommendations. Also, look again: the title talks about
> all-flash deployments, while the context of the benchmark talks about
> 7200RPM HDDs!
>
> On Wed, Nov 4, 2020 at 12:37 AM Seena Fallah
> wrote:
AM Seena Fallah wrote:
> >
> > Hi all,
> >
> > Does this guid is still valid for a bluestore deployment with nautilus or
> > octopus?
> >
> https://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments
>
> Some of the guidance is of course out
Hi all,
Is this guide still valid for a BlueStore deployment with Nautilus or
Octopus?
https://tracker.ceph.com/projects/ceph/wiki/Tuning_for_All_Flash_Deployments
Thanks.
Hi all,
There is a huge difference between node exporter and ceph exporter
(Prometheus mgr module) data. For example, node exporter shows a 120 MB/s
write on my disk, but ceph exporter says it is 22 MB/s! The same goes for
latency, IOPS, and so on.
Which one is reliable?
Thanks.
Hi,
When I use HAProxy in keep-alive mode to the RGWs, HAProxy gives many
responses like this!
Is there any problem with keep-alive mode in RGW?
Using Nautilus 14.2.9 with the beast frontend.
I used "show errors -1 response" on the HAProxy socket to see these errors,
but nothing was found!
Is there any other way I can see the errors?
On Thu, Oct 15, 2020 at 9:48 PM Seena Fallah wrote:
> Based on this comment is this related to the client and there is no
> problem on the serv
Based on this comment, is this related to the client, with no problem on
the server side?
https://github.com/haproxy/haproxy/blob/master/include/haproxy/channel-t.h#L68
On Wed, Oct 14, 2020 at 3:29 PM Seena Fallah wrote:
> Hi.
>
> I'm facing many response errors from my backe
)
>
>
> Mark
>
>
> On 10/13/20 5:46 PM, Seena Fallah wrote:
> > Hi all,
> >
> > Is TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES configured just for filestore or
> > can be used for bluestore, too?
> > https://github.
Hi.
I'm seeing many response errors from my backends. I have checked the logs,
but there were no 5xx errors for these response errors! It seems I'm hitting
this section of code, and because I use http-server-close it counts
failed_resp!
Hi all,
Is TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES configured just for filestore, or
can it be used for BlueStore too?
https://github.com/ceph/ceph/blob/master/etc/default/ceph#L7
Thanks.
If everything is stable, wouldn't it be good to update this doc?
https://docs.ceph.com/en/latest/start/os-recommendations/
On Mon, Oct 12, 2020 at 12:56 PM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 10/12/20 2:31 AM, Seena Fallah wr
tup with no
> problems that I can attribute to using Ubuntu 20.
>
> Regards
> Robert Ruge
>
>
>
> -Original Message-
> From: Seena Fallah
> Sent: Monday, 12 October 2020 11:35 AM
> To: ceph-users
> Subject: [ceph-users] Re: Ubuntu 20 with octopus
>
> T
The main reason I asked is that I don't see Ubuntu 20 in this doc:
https://docs.ceph.com/en/latest/start/os-recommendations/
On Mon, Oct 12, 2020 at 4:01 AM Seena Fallah wrote:
> Hi all,
>
> Does anyone has any production cluster with ubuntu 20 (focal) or any
> suggestion
Hi all,
Does anyone have a production cluster on Ubuntu 20 (Focal), or any
suggestions, or know of any bugs that prevent deploying Ceph Octopus on
Ubuntu 20?
Thanks.
Hi. Does HAProxy support partial responses from servers?
In nginx there is a parameter named proxy_read_timeout that defines a
timeout for reading a response from the proxied server. The timeout is set
only between two successive read operations, not for the transmission of
the whole response. If
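For comparison, the nginx setting described above reads roughly like this (a sketch; the upstream name is hypothetical):

```
location / {
    proxy_pass http://rgw_backend;
    # timeout between two successive reads from the upstream,
    # not for the transmission of the whole response
    proxy_read_timeout 60s;
}
```

In HAProxy, the closest knob is timeout server, which likewise applies to server-side inactivity between reads rather than to the full transfer.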
m/unitedkingdom/en/ssd/dc1000b-data-center-boot-ssd
>>
>> look good for your purpose.
>>
>>
>>
>> - Original Message -
>> From: "Seena Fallah"
>> To: "Виталий Филиппов"
>> Cc: "Anthony D'Atri" , "ceph-users&qu
capacitors and 970 evo doesn't
> 13 сентября 2020 г. 0:57:43 GMT+03:00, Seena Fallah
> пишет:
>
> Hi. How do you say 883DCT is faster than 970 EVO? I saw the specifications
> and 970 EVO has higher IOPS than 883DCT! Can you please tell why 970 EVO act
> lower than 883DCT?
>
nvme disk in this space! Do you have any recommendations?
On Sun, Sep 13, 2020 at 10:17 PM Виталий Филиппов
wrote:
> Easy, 883 has capacitors and 970 evo doesn't
>
> 13 сентября 2020 г. 0:57:43 GMT+03:00, Seena Fallah
> пишет:
>>
>> Hi. How do you say 883DCT is faste
Hi. Why do you say the 883DCT is faster than the 970 EVO?
I looked at the specifications, and the 970 EVO has higher IOPS than the
883DCT! Can you please explain why the 970 EVO performs worse than the
883DCT?
rgent you probably better to proceed with enabling the new DB
> space management feature.
>
> But please do that eventually, modify 1-2 OSDs at the first stage and test
> them for some period (may be a week or two).
>
>
> Thanks,
>
> Igor
>
>
> On 8/20/2020 5:36 PM, S
and doing it for
a month doesn't look very good!
On Thu, Aug 20, 2020 at 6:52 PM Igor Fedotov wrote:
> Correct.
> On 8/20/2020 5:15 PM, Seena Fallah wrote:
>
> So you won't backport it to nautilus until it gets default to master for a
> while?
>
> On Thu, Aug 20, 2020 at
>
> Hence you can definitely try it but this exposes your cluster(s) to some
> risk as for any new (and incompletely tested) feature
>
>
> Thanks,
>
> Igor
>
>
> On 8/20/2020 4:06 PM, Seena Fallah wrote:
>
> Greate, thanks.
>
> Is it safe to change it manua
setting is invalid. It should be
> 'use_some_extra'. Gonna fix that shortly...
>
>
> Thanks,
>
> Igor
>
>
>
>
> On 8/20/2020 1:44 PM, Seena Fallah wrote:
>
> Hi Igor.
>
> Could you please tell why this config is in LEVEL_DEV (
> https://github.com/cep
Hi Igor.
Could you please tell why this config is in LEVEL_DEV (
https://github.com/ceph/ceph/pull/29687/files#diff-3d7a065928b2852c228ffe669d7633bbR4587)?
As it is documented in Ceph we can't use LEVEL_DEV in production
environments!
Thanks
On Thu, Aug 20, 2020 at 1:58 PM Igor Fedotov wrote:
Hi all.
Are there any docs about the default.rgw.data.root pool? I have this
pool, and there are no objects in the default.rgw.meta pool.
Thanks for your help.
Hi all.
I see this sentence on many sites. Does anyone know why?
> Then turn off print continue. If you have it set to true, you may encounter
> problems with PUT operations
I use nginx in front of my RGW and proxy-pass the Expect header in it.
Thanks.
Hi all.
There is high IOPS on my bucket index pool when there are about 1K PUT
requests/s.
Is there any way I can debug why there is so much IOPS on the bucket
index pool?
Thanks.
Hi all.
Is there an RBD audit log, like ceph.audit.log, that could record which
client runs which command from the rbd client?
Thanks.