Hi,
I have created a bug report for an issue affecting our Ceph Hammer
environment, and I was wondering if anybody has input on what we can do
to troubleshoot/fix it:
http://tracker.ceph.com/issues/13764
Thank you,
George
> I wouldn't run with those settings in production. That was a test to
> squeeze too many OSDs into too little RAM.
>
> Check the values from infernalis/master. Those should be safe.
>
> --
> Dan
> On 30 Nov 2015 21:45, "George Mihaiescu" <lmihaie...
Hi,
I've read the recommendation from CERN about the number of OSD maps (
https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf, page
3) and I would like to know if there is any negative impact from these
changes:
[global]
osd map message max = 10
[osd]
osd map cache size = 20
osd
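Following Dan's suggestion to check the values from infernalis/master, the settings a running OSD is actually using can be inspected through the admin socket; a hedged sketch (daemon name is illustrative):

```shell
# Inspect the active osd map tuning on a running OSD via its admin socket
ceph daemon osd.0 config show | grep -E 'osd_map_(cache_size|message_max|max_advance)'
```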
One benefit of separate networks is that you can graph the client vs
replication traffic.
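The client/replication split itself is expressed in ceph.conf roughly like this (subnets are placeholders):

```ini
[global]
public network  = 10.0.1.0/24   ; client traffic
cluster network = 10.0.2.0/24   ; replication/recovery traffic
```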
> On Jun 4, 2016, at 12:12 PM, Nick Fisk wrote:
>
> Yes, this is fine. I currently use 2 bonded 10G nics which have the untagged
> vlan as the public network and a tagged vlan as the
We use the AWS CLI.
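For example (the endpoint URL and bucket name are placeholders for your RGW setup):

```shell
# List buckets through the RGW S3 endpoint (URL is illustrative)
aws --endpoint-url http://rgw.example.com:7480 s3 ls

# Copy an object into a bucket
aws --endpoint-url http://rgw.example.com:7480 s3 cp ./file.txt s3://mybucket/
```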
> On May 25, 2016, at 5:11 PM, Andrey Ptashnik wrote:
>
> Team,
>
> I wanted to ask if some of you are using CLI or GUI based S3 browsers/clients
> with Ceph and what are the best ones?
>
> Regards,
>
> Andrey Ptashnik
>
>
Thank you Greg, much appreciated.
I'll test with crushtool to see if it complains about this new layout.
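Such a check can look roughly like this (rule number and replica count are illustrative):

```shell
# Export the current CRUSH map and test mappings offline
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings
```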
George
On Mon, Feb 22, 2016 at 3:19 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Mon, Feb 22, 2016 at 9:29 AM, George Mihaiescu <lmihaie...@gmail.com>
Hi,
We have a fairly large Ceph cluster (3.2 PB) that we want to expand and we
would like to get your input on this.
The current cluster has around 700 OSDs (4 TB and 6 TB) in three racks, with
the largest pool being rgw, using replica 3.
For non-technical reasons (budgetary, etc) we are
We have three replicas, so we just performed md5sum on all of them in order
to find the correct ones, then we deleted the bad file and ran pg repair.
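The replica comparison can be sketched like this (paths are illustrative; the real object file locations depend on your PG layout under each OSD's current/ directory, and the pg id is a made-up example):

```shell
# Checksum the same object file on each of the three OSDs holding it;
# with replica 3, the checksum that appears twice is the good copy.
md5sum /osd1/obj /osd2/obj /osd3/obj | awk '{print $1}' | sort | uniq -c | sort -rn

# Then remove the bad copy and ask Ceph to repair the PG, e.g.:
# ceph pg repair 20.3f
```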
On 15 Feb 2016 10:42 a.m., "Zoltan Arnold Nagy"
wrote:
> Hi Bryan,
>
> You were right: we’ve modified our PG weights a
Hi Blair,
We use 36-OSD nodes with journals on HDD, in a cluster that is roughly 90%
object storage.
The servers have 128 GB RAM and 40 cores (HT) for the storage nodes with 4
TB SAS drives, and 256 GB and 48 cores for the storage nodes with 6 TB SAS
drives.
We use 2x10 Gb bonded for the client network,
Hi Can,
I gave it a try and I can see my buckets, but I get an error (see attached)
when trying to see the contents of any bucket.
The application is pretty simplistic for now, and it would be great if
support for object and size counts were added. The bucket access type should
be displayed
Look in the cinder db, the volumes table, to find the UUID of the deleted
volume.
If you go through your OSDs and look for the directories for PG index 20, you
might find some fragments from the deleted volume, but it's a long shot...
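The lookup can be sketched as follows (database name, credentials, and the exact schema are assumptions for a Hammer-era OpenStack install):

```shell
# Find the UUID of the deleted volume in the cinder database
mysql cinder -e "SELECT id, display_name, deleted_at FROM volumes WHERE deleted = 1;"
```

The RBD driver names its images volume-&lt;uuid&gt;, which can help when grepping through the PG directories for leftover fragments.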
> On Aug 8, 2016, at 4:39 PM, Georgios Dimitrakakis
Hi,
I need your help with upgrading our cluster from Hammer (last version) to
Jewel 10.2.5 without losing write access to Radosgw.
We have a fairly large cluster (4.3 PB raw) mostly used to store large S3
objects, and we currently have more than 500 TB of data in the
".rgw.buckets" pool, so I'm
SDs, metadata servers and
> object gateways finally.
>
> I would suggest trying the supported upgrade path; if you're still having
> issues *with* the correct upgrade sequence, I would look further into it
>
> Thanks
> Mohammed
>
>> On Jan 25, 2017, at 6:24 PM, George Mi
Hi,
I updated http://tracker.ceph.com/issues/18331 with my own issue, and I am
hoping Orit or Yehuda could give their opinion on what to do next.
What was the purpose of the "orphans find" tool, and how do we actually clean
up these files?
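For reference, the tool is driven along these lines (pool name and job id are illustrative, and behaviour may differ between releases, which is part of the question above):

```shell
# Scan for RGW data objects not referenced by any bucket index
radosgw-admin orphans find --pool=.rgw.buckets --job-id=orphans1

# Inspect previously started scans
radosgw-admin orphans list-jobs
```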
Thank you,
George
On Fri, Jan 13, 2017 at 2:22 PM, Wido den
reak existing
> objects, you can remove the backup pool.
>
> Yehuda
>
> On Fri, Feb 24, 2017 at 8:23 AM, George Mihaiescu <lmihaie...@gmail.com>
> wrote:
> > Hi,
> >
> > I updated http://tracker.ceph.com/issues/18331 with my own issue, and I
> am
> successfully). If you didn't catch that, you should still be able to
> run the same scan (using the same scan id) and retrieve that info
> again.
>
> Yehuda
>
> On Fri, Feb 24, 2017 at 9:48 AM, George Mihaiescu <lmihaie...@gmail.com>
> wrote:
> > Hi Yehuda,
>
Are these problems fixed in the latest version of the Debian packages?
I'm a fairly large user with a lot of existing data stored in .rgw.buckets
pool, and I'm running Hammer.
I just hope that upgrading to Jewel so long after its release will not cause
loss of access to this data for my users,
Make sure the OSD processes on the Jewel node are running. If you didn't change
the ownership to user ceph, they won't start.
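A hedged sketch of what that looks like on one Jewel node (unit names assume a systemd-based install):

```shell
# Stop the daemons, hand /var/lib/ceph to the ceph user that Jewel's
# systemd units run as, then start them again
systemctl stop ceph-osd.target
chown -R ceph:ceph /var/lib/ceph
systemctl start ceph-osd.target
```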
> On Mar 27, 2017, at 11:53, Jaime Ibar wrote:
>
> Hi all,
>
> I'm upgrading the ceph cluster from Hammer 0.94.9 to Jewel 10.2.6.
>
> The ceph
Hi,
We initially upgraded from Hammer to Jewel while keeping the ownership
unchanged, by adding "setuser match path =
/var/lib/ceph/$type/$cluster-$id" in ceph.conf
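As a config fragment, that interim setting looks like:

```ini
[global]
; run each daemon as the owner of its data directory (interim, pre-chown)
setuser match path = /var/lib/ceph/$type/$cluster-$id
```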
Later, we used the following steps to change from running as root to
running as ceph.
On the storage nodes, we ran the following
Hi Patrick,
You could add more RAM to the servers, which will probably not increase the
cost too much.
You could change the swappiness value, or use something like
https://hoytech.com/vmtouch/ to pre-cache inode entries.
You could tarball the smaller files before loading them into Ceph maybe.
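The swappiness and pre-caching ideas can be sketched as (values and paths are illustrative):

```shell
# Make the kernel less eager to evict page cache in favour of swap
sysctl vm.swappiness=10

# Pre-warm the page cache for a directory tree with vmtouch
# (-t touches pages into memory, -v reports progress)
vmtouch -vt /var/lib/ceph/osd/ceph-0/current
```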
One problem that I can see with this setup is that you will fill up the SSDs
holding the primary replica before the HDD ones, if they are much different in
size.
Other than that, it's a very inventive solution to increase read speeds without
using a possibly buggy cache configuration.
> On
Terminate the connections on haproxy, which is great for SSL as well, and use
these instructions to set QoS per connection and data transferred:
http://blog.serverfault.com/2010/08/26/1016491873/
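A minimal haproxy sketch of SSL termination plus a per-source connection cap, in the spirit of the linked post (certificate path, backend addresses, and limits are all illustrative):

```haproxy
frontend rgw_https
    bind *:443 ssl crt /etc/haproxy/rgw.pem
    # track concurrent connections per client IP
    stick-table type ip size 200k expire 30s store conn_cur
    tcp-request connection track-sc0 src
    # reject clients holding more than 20 concurrent connections
    tcp-request connection reject if { sc0_conn_cur gt 20 }
    default_backend rgw

backend rgw
    balance roundrobin
    server rgw1 192.168.0.11:7480 check
    server rgw2 192.168.0.12:7480 check
```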
> On May 4, 2017, at 04:35, hrchu wrote:
>
> Thanks for reply.
>
> tc