> The RBD layer does not know that data has been removed
> and that the blocks are thus no longer needed.
Hello,
I ran the "rbd du /image" command. It shows the usage increasing when I add
data to the image. That looks good. But when I remove data from the image,
it does not show the size decreasing.
Is this expected with "rbd du", or is it not implemented?
NOTE: The expected behavior is the same as the Linux "du" command.
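The behavior follows from RBD being thin-provisioned: writes allocate extents, but a filesystem-level delete never tells the block layer anything, so "rbd du" only shrinks after an explicit discard (for example, running `fstrim` on the mounted filesystem, or mounting with the discard option). A toy sketch of that accounting (an illustration, not Ceph code):

```python
# Toy model of thin-provisioned usage accounting (an illustration,
# not Ceph code): writes allocate extents, filesystem-level deletes
# do not free them, and only an explicit discard releases space.
class ThinImage:
    def __init__(self):
        self.allocated = set()   # offsets of allocated extents

    def write(self, offset):
        self.allocated.add(offset)

    def fs_delete(self, offset):
        # The filesystem drops its reference, but the block layer is
        # never told, so "rbd du"-style accounting does not change.
        pass

    def discard(self, offset):
        self.allocated.discard(offset)

    def used(self):
        return len(self.allocated)

img = ThinImage()
for off in range(10):
    img.write(off)
img.fs_delete(3)
usage_after_delete = img.used()    # unchanged: the delete is invisible
img.discard(3)
usage_after_discard = img.used()   # drops only after discard
```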
> If you need to prevent that I'd recommend putting a proxy in front of
> radosgw and blocking bucket create requests with an explicit placement
> specified.
Hello - I want to have 2 different sets of rgw pools for 2 different
clients. For example:
For client#1 - rgw.data1, rgw.index1, rgw.user1, rgw.metadata1
For client#2 - rgw.data2, rgw.index2, rgw.user2, rgw.metadata2
Is the above possible with ceph radosgw?
Thanks
Swami
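This is what RGW placement targets are for: each client (or user) gets a placement target whose data and index pools are separate. Note that placement targets split only the data/index/data_extra pools; user and metadata pools belong to the zone as a whole, so a fully separate rgw.user1/rgw.user2 split would need separate zones. A sketch of the per-placement part of a zone config, using the hypothetical pool names from the question:

```json
{
  "placement_pools": [
    {
      "key": "client1-placement",
      "val": {
        "index_pool": "rgw.index1",
        "data_pool": "rgw.data1",
        "data_extra_pool": "rgw.data1.non-ec"
      }
    },
    {
      "key": "client2-placement",
      "val": {
        "index_pool": "rgw.index2",
        "data_pool": "rgw.data2",
        "data_extra_pool": "rgw.data2.non-ec"
      }
    }
  ]
}
```

Each placement target is then set as the default placement on the corresponding RGW user, so buckets that user creates land in its own pools.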
Primary OSD crashes with the below assert:
12.2.11/src/osd/ReplicatedBackend.cc:1445 assert(peer_missing.count(fromshard))
==
Here I have 2 OSDs with a bluestore backend and 1 OSD with a filestore backend.
Hello - We are using ceph 12.2.11 (upgraded from Jewel 10.2.12 to 12.2.11).
In this cluster, we have a mix of filestore and bluestore OSD backends.
Recently we have been seeing scrub errors on the rgw buckets.data pool every
day, after the scrub operation is performed by Ceph. If we run the PG
Hello - Recently we upgraded to Luminous 12.2.11. After that we see scrub
errors, only on the object storage pool, on a daily basis. After a repair
they are cleared, but they come back the next day after the PG is scrubbed.
Any known issue with scrub errors in the 12.2.11 version?
Hello, I used the "ceph report" command and the output shows laggy_interval
and laggy_probability values.
Can you please let me know how these numbers are calculated, and what the
expected numbers are for laggy_interval and laggy_probability on a
production Ceph cluster?
Thank you
Swami
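These come from the monitor's bookkeeping of OSDs that were wrongly marked down: each such event is blended into exponentially decayed averages, governed by the `mon_osd_laggy_halflife` and `mon_osd_laggy_weight` options. The sketch below shows the shape of such an update under those options' defaults; it is an assumption about the form of the calculation, not the exact Ceph code. On a healthy production cluster both numbers should sit at or near zero.

```python
# Sketch of an exponentially weighted "laggy" estimate. This is an
# assumption about the shape of the calculation, not the exact Ceph
# code; mon_osd_laggy_halflife and mon_osd_laggy_weight are the real
# config options that govern it.
LAGGY_WEIGHT = 0.3        # mon_osd_laggy_weight default
LAGGY_HALFLIFE = 3600.0   # mon_osd_laggy_halflife default, in seconds

def decayed(value, elapsed):
    """Halve the stored value every LAGGY_HALFLIFE seconds."""
    return value * 0.5 ** (elapsed / LAGGY_HALFLIFE)

def update(prob, interval, elapsed, observed_interval):
    """Blend a new 'wrongly marked down' event into both estimates."""
    prob = decayed(prob, elapsed) * (1 - LAGGY_WEIGHT) + LAGGY_WEIGHT
    interval = (decayed(interval, elapsed) * (1 - LAGGY_WEIGHT)
                + LAGGY_WEIGHT * observed_interval)
    return prob, interval

# One laggy event of 30 seconds on a previously clean OSD:
prob, interval = update(0.0, 0.0, elapsed=0.0, observed_interval=30.0)
```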
Thank you. Do we have a quick document to do this migration?
Thanks
Swami
On Thu, Oct 3, 2019 at 4:38 PM Paul Emmerich wrote:
The below url says: "Switching from a standalone deployment to a multi-site
replicated deployment is not supported."
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html
Please advise.
On Thu, Oct 3, 2019 at 3:28 PM M Ranga Swami Reddy
wrote:
Hi,
I am using 2 ceph clusters in different DCs (500 km apart), both running
ceph 12.2.11.
Now I want to set up rgw multisite using the above 2 ceph clusters.
Is it possible? If yes, please share a good document on how to do it.
Thanks
Swami
The repair took almost 6 hours, but after the repair we still see the scrub errors.
On Wed, Sep 25, 2019 at 5:11 AM Brad Hubbard wrote:
> On Tue, Sep 24, 2019 at 10:51 PM M Ranga Swami Reddy
> wrote:
> >
> > Interestingly - "rados list-inconsistent-obj ${PG} --format=json"
Hi - I am using ceph 12.2.11 and I am getting a few scrub errors. To fix
these scrub errors I ran "ceph pg repair".
But the scrub errors are not going away, and the repair is taking a long
time, like 8-12 hours.
Thanks
Swami
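When a repair does not stick, the usual next step is to look at what kind of inconsistency it actually is, via `rados list-inconsistent-obj <pg> --format=json` (mentioned earlier in the thread). A small sketch that tallies the error types from that JSON; the sample document here is hypothetical and heavily abbreviated, as real output carries many more fields per object:

```python
import json
from collections import Counter

# Hypothetical, heavily abbreviated sample of what
# "rados list-inconsistent-obj <pg> --format=json" returns;
# real output carries many more fields per object and shard.
sample = json.loads("""
{
  "epoch": 1234,
  "inconsistents": [
    {"object": {"name": "obj1"}, "errors": ["data_digest_mismatch"]},
    {"object": {"name": "obj2"}, "errors": ["size_mismatch"]},
    {"object": {"name": "obj3"}, "errors": ["data_digest_mismatch"]}
  ]
}
""")

# Tally error types: recurring digest mismatches on the same pool
# usually point at flaky media or a version-specific checksum issue,
# which is worth ruling out before running repeated repairs.
error_counts = Counter(
    err
    for item in sample["inconsistents"]
    for err in item["errors"]
)
```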
We are using ceph for RBD only.
On Wed, Jul 24, 2019 at 12:55 PM Wido den Hollander wrote:
>
>
> On 7/16/19 6:53 PM, M Ranga Swami Reddy wrote:
> > Thanks for your reply..
> > Here, new pool creations and pg auto scale may cause rebalance..which
> > impact the ceph cluster p
Hi - I can start using 64 PGs, as I have 10 nodes with 18 OSDs per node.
On Tue, Jul 16, 2019 at 9:01 PM Janne Johansson wrote:
Hello - I have created a 10-node ceph cluster with the 14.x version. Can you
please confirm the below:
Q1 - Can I create 100+ pools on the cluster? (The reason: creating a pool
per project.) Is there any limitation on pool creation?
Q2 - In the above pools - I use 128 PG-NUM to start with and enable
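Pool count itself is rarely the limit; the PG total across all pools is what matters. A common rule of thumb (an assumption here, tune for your hardware) targets about 100 PGs per OSD after replication. The arithmetic for a hypothetical 10-node, 18-OSDs-per-node cluster:

```python
# PG budget arithmetic for a hypothetical cluster; the 100-PGs-per-OSD
# target is a common rule of thumb, not a hard Ceph limit.
nodes = 10
osds_per_node = 18
replica = 3
target_pgs_per_osd = 100

total_osds = nodes * osds_per_node                        # 180 OSDs
pg_budget = total_osds * target_pgs_per_osd // replica    # PGs across all pools
pools = 100
pgs_per_pool = pg_budget // pools                         # budget per pool

# 100 pools at pg_num=128 and size=3 would instead give each OSD:
pgs_per_osd_if_128 = pools * 128 * replica / total_osds   # well over target
```

This is why many small pools usually get far fewer PGs each than a single big pool would, and why pg_autoscale is attractive for the pool-per-project layout.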
Thanks Jason.
Btw, we use Ceph with OpenStack Cinder, and Cinder from the Queens release
onward supports multi-attach. Can we use OpenStack Cinder (Queens or later)
with Ceph RBD for multi-attach functionality?
Thanks
Swami
Hello - Does ceph rbd support multi-attach volumes (with the ceph Luminous
version)?
Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> On 25.06.2019 08:19, M Ranga Swami Reddy wrote:
>
> > Thanks for the reply.
> > Btw, one of my customers wants to get the objects based on the last
> > modified date field. How can we achieve this?
> ... getting everything and sorting it :(
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
Hello - Can we list the objects in rgw by last modified date?
For example - I want to list all the objects which were modified on 01 Jun
2019.
Thanks
Swami
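As the reply above says, S3/RGW has no server-side filter on modification time, so the client lists everything and filters. A sketch of the client-side filter; the listing entries here are hypothetical, but with boto3 the same shape of dict comes out of paginated `list_objects_v2` responses (the "Contents" entries):

```python
from datetime import date, datetime

# Hypothetical listing entries; with boto3 these would come from
# paginated list_objects_v2 responses ("Contents" entries).
objects = [
    {"Key": "a.txt", "LastModified": datetime(2019, 6, 1, 9, 30)},
    {"Key": "b.txt", "LastModified": datetime(2019, 5, 28, 14, 0)},
    {"Key": "c.txt", "LastModified": datetime(2019, 6, 1, 23, 59)},
]

def modified_on(objs, day):
    """Client-side filter: keep objects whose LastModified falls on 'day'."""
    return [o["Key"] for o in objs if o["LastModified"].date() == day]

hits = modified_on(objects, date(2019, 6, 1))
```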
Hello - I plan to use bluestore's block.db on SSD (with the data on HDD),
sized at 4% of the HDD size. I have not specified block.wal here; in this
case, where is block.wal placed?
Is it on the HDD (i.e. with the data) or inside block.db on the SSD?
Thanks
Swami
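If no separate block.wal is specified, BlueStore keeps the WAL on the fastest device it has, so here it lives inside the block.db partition on the SSD; a separate block.wal only helps when there is a third, even faster device. The 4% sizing arithmetic as a quick sketch (sizes are hypothetical):

```python
# block.db sizing arithmetic for the 4%-of-HDD rule of thumb mentioned
# above (a guideline, not a hard requirement).
hdd_size_gb = 4000          # hypothetical 4 TB data HDD
db_ratio = 0.04

db_size_gb = hdd_size_gb * db_ratio     # SSD space per OSD for block.db

# An SSD shared by several OSDs needs the sum of their db partitions:
osds_per_ssd = 4
ssd_needed_gb = db_size_gb * osds_per_ssd
```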
-- Forwarded message -
From: M Ranga Swami Reddy
Date: Mon, May 27, 2019 at 5:37 PM
Subject: Luminous OSD: replace block.db partition
To: ceph-devel , ceph-users
Hello - I have created an OSD with a 20G block.db, and now I want to change
the block.db to 100G.
Please let us know if there is a process for this.
PS: Ceph version 12.2.4 with bluestore backend.
Thanks
Swami
> If you have size 3 and also set min_size to 3, the I/O would pause if a
> node or OSD fails. So more information about the cluster would help,
> can you share that?
>
> ceph osd tree
> ceph osd pool ls detail
>
> Were all pools affected or just specific pools?
>
> Regards,
> Eugen
Hello - Recently we had an issue with a storage node's battery failure,
which caused ceph client IO to drop to '0' bytes, meaning the ceph cluster
couldn't perform IO operations until the node was taken out. This is not
expected from Ceph; when some HW fails, the respective OSDs should be
marked as down.
Hello - We have seen that the ceph-mon election takes time or causes some
issues when the mon leader goes down, or during maintenance.
So in this case - especially during maintenance - it would be useful to be
able to set the ceph-mon leader via the CLI before doing maintenance on a
ceph-mon node.
This functionality would be very much appreciated.
Fixed: use only the user id (swamireddy) instead of the full OpenID url.
On Thu, Feb 28, 2019 at 7:04 PM M Ranga Swami Reddy
wrote:
Hi - we are using ceph 12.2.4 and are hitting bug #24445, which caused a 10
min IO pause on the ceph cluster.
Is this bug fixed?
bug: https://tracker.ceph.com/issues/24445/
Thanks
Swami
I tried to login to the ceph tracker - it is failing with the OpenID url.
I tried with my OpenID at:
http://tracker.ceph.com/login
my id: https://code.launchpad.net/~swamireddy
> >> If you're using HDDs for the monitor servers, check their load. It
> >> might be the issue there.
Oops... does this really have an impact? Will change this right away and
test it.
On Fri, Feb 22, 2019 at 5:29 PM Janne Johansson wrote:
>
> Den fre 22 feb. 2019 kl 12:35 skrev M Ranga Swami Reddy
> :
>>
>> Not seeing a CPU limitation, because we are using 4 cores per osd daemon
The ceph-mon disk is a 500G HDD (no journals/SSDs). Yes, the mon uses a
folder on a filesystem on a disk.
On Fri, Feb 22, 2019 at 5:13 PM David Turner wrote:
>
> Mon disks don't have journals, they're just a folder on a filesystem on a
> disk.
>
> On Fri, Feb 22, 2019, 6:40 AM M Ranga Swami
> ... during a recovery, I could see that
> impacting client io. What disks are they running on? CPU? Etc.
>
> On Fri, Feb 22, 2019, 6:01 AM M Ranga Swami Reddy
> wrote:
>>
>> Debug settings are at the defaults, like 1/5 and 0/5 for almost all.
>> Shall I try with 0 for all debug settings?
> Move to bluestore from filestore. This will also
> lower your CPU usage. I'm not sure that it is bluestore itself that does
> it, but I'm seeing lower cpu usage when moving to bluestore + rocksdb
> compared to filestore + leveldb.
>
>
> On Wed, Feb 20, 2019 at 4:27 PM M Ranga Swami Reddy
> > That's expected from Ceph by design.
> You should be able to have an entire rack powered
> down without noticing any major impact on the clients. I regularly take down
> OSDs and nodes for maintenance and upgrades without seeing any problems with
> client IO.
>
> On Tue, Feb 12, 2019 at 5:01 AM M Ranga Swami Reddy
> wrote:
Hi Sage - If the mon data increases, does this impact ceph cluster
performance (i.e. on ceph osd bench, etc.)?
On Fri, Feb 15, 2019 at 3:13 PM M Ranga Swami Reddy
wrote:
Today I again hit the warning with 30G also...
On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote:
Sure, will do this. For now I have increased the size to 30G (from 15G).
On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote:
Hello - Can we put the ceph osd journal disks in RAID#1 to achieve HA for
the journal disks?
Thanks
Swami
Hello - I have a couple of questions on ceph cluster stability, even though
we follow all the recommendations below:
- Having separate replication n/w and data n/w
- RACK is the failure domain
- Using SSDs for journals (1:4 ratio)
Q1 - If one OSD goes down, cluster IO drops drastically and customer Apps
Alternatively, I will increase mon_data_size to 30G (from 15G).
Thanks
Swami
On Thu, Feb 7, 2019 at 8:44 PM Dan van der Ster wrote:
>
> On Thu, Feb 7, 2019 at 4:12 PM M Ranga Swami Reddy
> wrote:
> >
> > >Compaction isn't necessary -- you should only need to
Thanks
Swami
On Thu, Feb 7, 2019 at 6:30 PM Dan van der Ster wrote:
>
> On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy
> wrote:
> >
> > Hi Dan,
> > >During backfilling scenarios, the mons keep old maps and grow quite
> > >quickly. So if you have balan
Hello - We are using ceph osd nodes whose controller has a 1G cache.
Are there any recommendations for splitting the cache between reads and
writes?
Here we are using HDDs with colocated journals.
For an SSD journal - 0% read cache and 100% write.
Thanks
> The limit is somewhat arbitrary, based on cluster sizes we had seen when
> we picked it. In your case it should be perfectly safe to increase it.
>
> sage
>
>
> On Wed, 6 Feb 2019, M Ranga Swami Reddy wrote:
>
> > Hello - Are the any limits for mon_data_size for cluster with 2PB
> > (with 2000+
Dan
>
>
> On Wed, Feb 6, 2019 at 1:26 PM Sage Weil wrote:
> >
> > Hi Swami
> >
> > The limit is somewhat arbitrary, based on cluster sizes we had seen when
> > we picked it. In your case it should be perfectly safe to increase it.
> >
> > sage
> &g
Hello - Are there any limits on mon_data_size for a cluster with 2PB
(and 2000+ OSDs)?
Currently it is set to 15G. What is the logic behind this? Can we increase
it when we get the mon_data_size_warn messages?
I am getting the mon_data_size_warn message even though there is ample
free space on the disk.
Here the user requirement is fewer writes and more reads, so we are not much
worried about performance.
Thanks
Swami
On Thu, Jan 31, 2019 at 1:55 PM Piotr Dałek wrote:
>
> On 2019-01-31 6:05 a.m., M Ranga Swami Reddy wrote:
> > My thought was - Ceph block volume with raid#0 (means I mo
On Wed, Jan 30, 2019, Janne Johansson wrote:
>
> Hard to tell if you mean raid0 over a block volume, or a block volume over
> raid0.
Hello - Can I use the ceph block volume with RAID#0? Are there any
issues with this?
Thanks
Swami
Thanks for the reply.
If the OSD is the primary one for a PG, then all IO will be
stopped, which may lead to application failure.
On Tue, Jan 22, 2019 at 5:32 PM Matthew Vernon wrote:
>
> Hi,
>
> On 22/01/2019 10:02, M Ranga Swami Reddy wrote:
> > Hello - If an
Hello - If an OSD is shown as down but is still in the "in" state, what
will happen with write/read operations on this down OSD?
Thanks
Swami
> There is a lot of
> documentation in the ceph docs as well as a wide history of questions like
> this on the ML.
>
> On Mon, Sep 3, 2018 at 5:24 AM M Ranga Swami Reddy
> wrote:
Hi - I am using the Ceph Luminous release. What OSD journal settings are
needed for the OSDs here?
NOTE: I used SSDs for the journal until the Jewel release.
Thanks
Swami
Hello,
What is the best and simplest way to show that each ceph image is
replicated 3 times (i.e. size=3)?
A few ideas:
- Use "ceph osd map image_id"
If there is any simple way to showcase the above, please share.
Thanks
Swami
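One way: `ceph osd map <pool> <object>` prints the PG's up and acting sets for an object, and with size=3 the acting set should list three OSDs. A sketch that pulls the acting set out of such a line (the sample line below is a hypothetical example of the output format):

```python
import re

# Hypothetical sample of "ceph osd map <pool> <object>" output.
sample = ("osdmap e1234 pool 'rbd' (2) object 'image1' -> "
          "pg 2.5e9ddb7a (2.7a) -> up ([1,4,7], p1) acting ([1,4,7], p1)")

def acting_set(line):
    """Extract the acting OSD ids from a 'ceph osd map' output line."""
    m = re.search(r"acting \(\[([\d,]+)\]", line)
    return [int(x) for x in m.group(1).split(",")] if m else []

osds = acting_set(sample)
replica_count = len(osds)   # should equal the pool's size, e.g. 3
```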
Thanks
Swami
On Thu, Jan 4, 2018 at 6:49 PM, Sergey Malinin <h...@newmail.com> wrote:
> http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
Hello,
In Ceph, is there a way to clean up data before deleting an image?
Meaning: wipe the data with '0' before deleting the image.
Please let me know if you have any suggestions here.
Thanks
Swami
> ... bandwidth, and SSD clusters/pools in general. You should be able to
> find everything you need in the archives.
>
> On Mon, Nov 20, 2017, 12:56 AM M Ranga Swami Reddy <swamire...@gmail.com>
> wrote:
Hello,
We plan to use a ceph cluster with all SSDs. Are there any
recommendations for a Ceph cluster with full SSD disks?
Thanks
Swami
>
> You can check your OSD partition uuid and get particular key as:
>
> # change path to your OSD (not journal) partition path
> OSD_PATH=/dev/sdXN
> OSD_UUID=`blkid -s PARTUUID -o value $OSD_PATH`
>
> /usr/bin/ceph config-key get dm-crypt/osd/$OSD_UUID/luks
>
>
>
>
When I create a dmcrypted journal using the cryptsetup command, it asks
for a passphrase. Can I use an empty passphrase?
On Wed, Sep 6, 2017 at 11:23 PM, M Ranga Swami Reddy
<swamire...@gmail.com> wrote:
> Thank you. I am able to replace the dmcrypt journal successfully.
>
> On Sep 5, 201
> ... then run the journal create
> command for the osd and start the osd.
>
> On Tue, Sep 5, 2017, 2:47 AM M Ranga Swami Reddy <swamire...@gmail.com>
> wrote:
Hello,
How do I replace an OSD's journal created with dmcrypt, moving it from one
drive to another, in case the current journal drive has failed?
Thanks
Swami
On Mon, Aug 21, 2017 at 5:37 PM, Christian Balzer <ch...@gol.com> wrote:
> On Mon, 21 Aug 2017 17:13:10 +0530 M Ranga Swami Reddy wrote:
>
>> Thank you.
>> Here I have NVMes from Intel, but as support for these NVMes is not
>> there from Intel, we decided not t
Christian Balzer <ch...@gol.com> wrote:
>
> Hello,
>
> On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote:
>
>> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage -
>> MZ-75E4T0B/AM | Samsung
>>
> And there's your answer.
SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage -
MZ-75E4T0B/AM | Samsung
On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
<swamire...@gmail.com> wrote:
> ... apples and oranges, this is
> pointless.
>
> Also testing osd bench is the LEAST relevant test you can do, as it only
> deals with local bandwidth, while what people nearly always want/need in
> the end is IOPS and low latency.
> Which you test best from a real client perspective.
>
Yes, it's in production, and we used the pg count as per the pg calculator @ ceph.com.
On Fri, Aug 18, 2017 at 3:30 AM, Mehmet <c...@elchaka.de> wrote:
> Which ssds are used? Are they in production? If so how is your PG Count?
>
> Am 17. August 2017 20:04:25 MESZ schrieb M Ranga Swami R
Hello,
I am using a Ceph cluster with HDDs and SSDs, with a separate pool for each.
Now, when I run "ceph osd bench", the HDD OSDs show around 500 MB/s
and the SSD OSDs show around 280 MB/s.
Ideally, what I expected was that the SSD OSDs would be at least 40% higher
than the HDD OSDs in the bench results.
Dear All -
Please vote (with number #3) for the proposed session to be
presented at the OpenStack Summit 2017 in Sydney:
OpenStack cloud storage - Advanced performance tuning & operational
best practices with Ceph
https://www.openstack.org/summit/sydney-2017/vote-for-speakers/#/19056
Hello,
I have added 5 new Ceph OSD nodes to my ceph cluster. Here, I want
to increase the PG/PGP numbers of the pools based on the new OSD count. At
the same time I need to increase the newly added OSDs' weight from 0 -> 1.
My question is:
Do I increase the PG/PGP numbers first and then reweight the OSDs?
Or
Hello,
I am using the ceph 10.2.5 release. Does this version's cephFS
support hadoop cluster requirements? (Is anyone using the same?)
Thanks
Swami
... deep-scrubbed on Friday 10AM, and if it is not deep-scrubbed by the next
Friday 10AM, then it will be performed on Saturday 10AM (assuming that
deep-scrub is enabled again from Saturday 7AM).
Thanks
Swami
On Mon, Apr 3, 2017 at 6:57 PM, Sage Weil <s...@newdream.net> wrote:
> On Mon, 3 Apr 2017, M Ranga Swami Re
... on Saturday, or will it be done on the next Friday?
Thanks
Swami
On Mon, Feb 27, 2017 at 3:54 PM, M Ranga Swami Reddy <swamire...@gmail.com>
wrote:
Hello,
We use a ceph cluster of 10 nodes/servers with 15 OSDs per node.
Here, I want to use 10 OSDs for block storage (i.e. the volumes pool) and 5
OSDs for object storage (i.e. the rgw pool), and plan to use the "replica"
type for both the block and object pools.
Please advise whether the above is a good setup, or if there are any
bottlenecks
Hello,
I use a ceph cluster and it shows the deep-scrub PG distribution below,
from the "ceph pg dump" command:
2000 Friday
1000 Saturday
4000 Sunday
==
On Friday, I disabled deep-scrub due to some reason. In this case, will
all of Friday's PG deep-scrubs be performed on
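In a simplified model of the scheduler, a PG becomes eligible again once `osd_deep_scrub_interval` (default one week) has elapsed since its last deep-scrub stamp, so PGs skipped on Friday are simply overdue and get picked up at the next opportunity rather than waiting for the following Friday. A sketch of that deadline arithmetic (the real OSD also applies randomization and scrub-load limits):

```python
from datetime import datetime, timedelta

# Simplified model of deep-scrub eligibility: a PG becomes due again
# once osd_deep_scrub_interval has elapsed since its last deep-scrub
# stamp. The real OSD adds randomization and scrub-load limits.
DEEP_SCRUB_INTERVAL = timedelta(days=7)   # osd_deep_scrub_interval default

def is_due(last_deep_scrub, now):
    return now - last_deep_scrub >= DEEP_SCRUB_INTERVAL

last = datetime(2017, 2, 24, 10, 0)                        # Friday 10:00 stamp
due_thursday = is_due(last, datetime(2017, 3, 2, 10, 0))   # not yet due
due_friday = is_due(last, datetime(2017, 3, 3, 10, 0))     # due again
due_saturday = is_due(last, datetime(2017, 3, 4, 10, 0))   # overdue
```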
Hello,
I am looking for ceph performance results for Firefly 0.80.11 and Hammer
0.94.9.
Please share the above if anyone has them.
Thanks
Swami
Hello,
I have a ceph cluster where 25% of the OSDs (50 of the 200 OSDs in the
cluster) are filled above 80% with data. Does this (25% of OSDs filled above
80%) cause ceph cluster slowness (slow write operations)? Any hint will
help.
Thanks
Swami
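Fullness hurts well before the hard limits: by default Ceph warns at the nearfull ratio (0.85) and blocks writes at the full ratio (0.95), and a write to a PG is only as fast as the most-loaded OSD in its acting set, so a minority of very full OSDs can slow a whole pool. A quick sketch of checking a utilization list against those default thresholds (the utilization numbers are hypothetical):

```python
# Check hypothetical OSD utilizations against Ceph's default ratios:
# nearfull warns at 0.85, full blocks writes at 0.95.
NEARFULL = 0.85
FULL = 0.95

utilizations = [0.82, 0.86, 0.91, 0.55, 0.78]   # hypothetical per-OSD usage

nearfull_osds = [u for u in utilizations if u >= NEARFULL]
full_osds = [u for u in utilizations if u >= FULL]

# Even one nearfull OSD matters: any PG whose acting set includes it
# is constrained by its most-loaded member.
worst = max(utilizations)
```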
Hello,
Has anyone run performance tests with and without dmcrypt OSDs?
For example: I plan to build a ceph cluster with 1 PB raw size, and also plan
to use dmcrypt for all OSDs. Can you please clarify whether dmcrypt will
reduce/impact the ceph cluster performance?
Thanks
Swami
Hello,
From the "ceph df" command output, regarding the USED figure:
Does USED refer to the actual usage of the cluster,
or
is it the provisioned usage of the cluster?
Thanks
Swami
No noout or other flags are set...
How do we confirm that the down OSD is out of the cluster?
Thanks
Swami
On Fri, Dec 9, 2016 at 11:18 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
>
>
> On Fri, Dec 9, 2016 at 3:28 PM, M Ranga Swami Reddy <swamire...@gmail.com>
I guess
ceph doesn't do it.
Now I have marked the OSD out (after 5 days in the down state), and recovery
and rebalance are still running... worried about it...
Thanks
Swami
On Thu, Dec 8, 2016 at 6:40 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
>
>
> On Wed, Dec 7, 2016 at 9:11 PM, M Ranga Swami