Hi,
I'm installing the RADOS Gateway using Jewel 10.2.5 and can't seem to find the
correct documentation.
I used ceph-deploy to start the gateway, but can't seem to restart the process
correctly.
Can someone point me to the correct steps?
Also, how do I bring my RADOS Gateway back up?
This is what I
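For what it's worth, a restart sketch for a ceph-deploy-created gateway on Jewel, assuming a systemd system and that ceph-deploy named the instance after the node's short hostname (the node name "gw1" below is hypothetical):

```shell
# Instance name is an assumption: ceph-deploy names the RGW instance
# "rgw.<short hostname>", so substitute your own node name for "gw1".
sudo systemctl status ceph-radosgw@rgw.gw1     # check the current state
sudo systemctl restart ceph-radosgw@rgw.gw1    # restart the gateway process
sudo systemctl enable ceph-radosgw@rgw.gw1     # come back up after reboot
```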
Hi,
I'm building Ceph 10.2.5 and doing some benchmarking with Erasure Coding.
However, I notice that perf can't find any symbols in the erasure coding
libraries. It seems those have been stripped, whereas most other components
have their symbols intact.
How can I build with symbols or make sure they don't
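A build sketch under the assumption that you are using the Jewel autotools flow: pass debug flags to configure, and if you build .deb packages, use the standard `nostrip` switch so debhelper does not strip the plugin libraries during packaging:

```shell
# Local build keeping debug info and frame pointers for perf (the flags are
# an assumption layered on top of whatever the tree already sets):
./autogen.sh
./configure CFLAGS="-g -fno-omit-frame-pointer" CXXFLAGS="-g -fno-omit-frame-pointer"
make -j"$(nproc)"

# When producing .deb packages, ask debhelper not to strip the binaries:
DEB_BUILD_OPTIONS="nostrip" dpkg-buildpackage -us -uc
```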
Hi,
I just created a new cluster with 0.94.8 and I'm getting this message:
2016-09-29 21:36:47.065642 mon.0 [INF] disallowing boot of OSD osd.35
10.22.21.49:6844/9544 because the osdmap requires CEPH_FEATURE_SERVER_JEWEL but
the osd lacks CEPH_FEATURE_SERVER_JEWEL
This is really bizarre. All
Hi,
Has anyone configured compression in RocksDB for BlueStore? Does it work?
Thanks
Pankaj
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Disregard the last message. I'm still seeing long periods of 0 IOPS.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Thursday, July 14, 2016 10:05 AM
To: Somnath Roy; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Terrible RBD performance with Jewel
= 32
filestore_fd_cache_size = 64
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Wednesday, July 13, 2016 7:05 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: RE: Terrible RBD performance with Jewel
I am not sure whether you need to set the following. What's the point
filestore_fd_cache_size = 64
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Wednesday, July 13, 2016 6:06 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: RE: Terrible RBD performance with Jewel
You should do that first to get stable performance out of filestore.
1M seq write
No I have not.
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Wednesday, July 13, 2016 6:00 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: RE: Terrible RBD performance with Jewel
In fact, I was wrong; I missed that you are running with 12 OSDs (considering
one OSD per SSD
Hi,
I just installed jewel on a small cluster of 3 machines with 4 SSDs each. I
created 8 RBD images, and use a single client with 8 threads to do random
writes (using fio with the RBD engine) on the images (1 thread per image).
The cluster has 3X replication and 10G cluster and client networks.
, 2016 9:01 PM
To: ceph-users@lists.ceph.com
Cc: Garg, Pankaj
Subject: Re: [ceph-users] OSD - Slow Requests
Hello,
On Wed, 4 May 2016 21:08:02 + Garg, Pankaj wrote:
> Hi,
>
> I am getting messages like the following from my Ceph systems.
> Normally this would indicate issues
: Friday, April 29, 2016 9:03 AM
To: Garg, Pankaj; Samuel Just
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] OSD Crashes
Check the system log and search for the corresponding drive. It should have
information about what is failing.
Thanks & Regards
Somnath
-Original Message-
From:
I can see that. But what would that be symptomatic of? How is it happening
on 6 different systems and on multiple OSDs?
-Original Message-
From: Samuel Just [mailto:sj...@redhat.com]
Sent: Friday, April 29, 2016 8:57 AM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re
Hi,
I had a fully functional Ceph cluster with 3 x86 nodes and 3 ARM64 nodes, each
with 12 HDD drives and 2 SSD drives. All of these were initially running Hammer,
and then were successfully updated to Infernalis (9.2.0).
I recently deleted all my OSDs and swapped my drives with new ones on the x86
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: RE: INFARNALIS with 64K Kernel PAGES
Did you recreate the OSDs on this setup, i.e. did you do mkfs with a 64K page
size?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Tuesday, March 01, 2016 9:07 PM
Subject: Re: [ceph-users] Upgrade to INFERNALIS
Hi,
On 02/03/2016 00:12, Garg, Pankaj wrote:
> I have upgraded my cluster from 0.94.4, as recommended, directly to the just
> released Infernalis (9.2.1) update (skipped 9.2.0).
> I installed the packages on each system, manually (.deb files tha
Hi,
Is there a known issue with using 64K Kernel PAGE_SIZE?
I am using ARM64 systems, and I upgraded from 0.94.4 to 9.2.1 today. The system
which was on 4K page size, came up OK and OSDs are all online.
Systems with a 64K page size are all seeing the OSDs crash with the following stack:
Begin dump of
Hi,
I have upgraded my cluster from 0.94.4, as recommended, directly to the just
released Infernalis (9.2.1) update (skipped 9.2.0).
I installed the packages on each system, manually (.deb files that I built).
After that I followed these steps:
Stop ceph-all
chown -R ceph:ceph /var/lib/ceph
start
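The steps above can be sketched as follows, assuming an Upstart system (Ubuntu 14.04), which is where the `ceph-all` job names come from:

```shell
sudo stop ceph-all                       # stop every Ceph daemon on the node
sudo chown -R ceph:ceph /var/lib/ceph    # Infernalis daemons run as user "ceph"
sudo chown -R ceph:ceph /var/log/ceph    # logs too, or the daemons can't write
sudo start ceph-all                      # bring everything back up
```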
Hi,
I'm experiencing READ performance issues in my Cluster. I have 3 x86 servers
each with 2 SSDs and 9 OSDs. SSDs are being used for Journaling.
I seem to get erratic READ performance numbers when using Rados Bench read test.
I ran a test with just a single x86 server, with 2 SSDs, and 9 OSDS.
Hi,
I have been formatting my OSD drives with XFS (using mkfs.xfs) with default
options. Is it recommended for Ceph to choose a bigger block size?
I'd like to understand the impact of block size. Any recommendations?
Thanks
Pankaj
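For context, a sketch (device and mount point are hypothetical): XFS defaults to a 4 KiB data block size, and a filesystem cannot be mounted if its block size exceeds the kernel page size, so on a 4K-page system the default is effectively also the maximum:

```shell
mkfs.xfs -f /dev/sdb1               # defaults include "-b size=4096"
mount /dev/sdb1 /mnt/osd0
xfs_info /mnt/osd0 | grep bsize     # confirm the block size in use
```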
Hi,
I have 5 OSD servers, with a total of 45 OSDs in my cluster. I am trying out
erasure coding with different k and m values.
I always seem to get warnings about degraded and undersized PGs whenever I
create a profile and create a pool based on that profile.
I have profiles with K and M
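One likely cause, sketched under the assumption that the profiles use the default failure domain of host: with 5 OSD servers, CRUSH needs k+m distinct hosts, so any profile with k+m > 5 can never be satisfied and its PGs stay undersized. Keeping k+m <= 5, or relaxing the failure domain to osd, avoids that (profile and pool names below are made up; `ruleset-failure-domain` is the Hammer/Jewel-era option name):

```shell
# k=4, m=2 needs 6 failure domains; with only 5 hosts, fall back to "osd":
ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=osd
ceph osd erasure-code-profile get ec42            # double-check the profile
ceph osd pool create ecpool 128 128 erasure ec42
```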
:01 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: RE: RADOS Bench
Pankaj,
It is the cumulative bandwidth of the Ceph cluster, but you will always be
limited by your single client's bandwidth.
To verify whether you are limited by the single client's 10Gb network, add
another client and see if the number scales.
Hi,
I have a few machines in my Ceph Cluster. I have another machine that I use to
run RADOS Bench to get the performance.
I am now seeing numbers around 1100 MB/s, which is quite close to the
saturation point of the 10Gbps link.
I'd like to understand what the total bandwidth number
: Thursday, May 28, 2015 8:02 AM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-deploy for Hammer
Hi Pankaj,
While there have been times in the past when ARM binaries were hosted on
ceph.com, there is not currently any ARM hardware for builds. I don't think
you will see
Hi,
Does Ceph typically use TCP, UDP, or something else for the data path, both
for client connections and inter-OSD cluster traffic?
Thanks
Pankaj
Hi,
Is there a particular version of ceph-deploy that should be used with the
Hammer release? This is a brand new cluster.
I'm getting the following error when running the command: ceph-deploy mon
create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/cephuser/.cephdeploy.conf
erasure_code_init(jerasure,/usr/lib/aarch64-linux-gnu/ceph/erasure-code): (5)
Input/output error
The lib file exists, so I'm not sure why this is happening. Any help appreciated.
Thanks
Pankaj
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Wednesday, May 27, 2015
on ceph1
[ceph_deploy][ERROR ] KeyNotFoundError: Could not find keyring file:
/etc/ceph/ceph.client.admin.keyring on host ceph1
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Wednesday, May 27, 2015 4:29 PM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: RE: ceph-deploy
Hi,
What block size does Ceph use, and what is the optimal size? I'm assuming it
uses whatever the file system has been formatted with.
Thanks
Pankaj
Hi,
Can I simply do apt-get upgrade on my Firefly cluster and move to Hammer? I'm
assuming monitor nodes should be done first.
Any particular sequence or any other procedures that I need to follow? Any
information is appreciated.
Thanks
Pankaj
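A rough sequence, assuming Ubuntu with Upstart and that the Hammer repository is already configured on each node; monitors go first, then OSDs:

```shell
# On each monitor node, one at a time:
sudo apt-get update && sudo apt-get install ceph   # pulls the Hammer packages
sudo restart ceph-mon-all
# Once all monitors run Hammer, on each OSD node:
sudo apt-get update && sudo apt-get install ceph
sudo restart ceph-osd-all
ceph tell osd.* version                            # verify as you go
```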
Hi,
I would like to use the gf-complete library for erasure coding since it has
some ARMv8-based optimizations. I see that the code is part of my tree, but
I'm not sure whether these libraries are included in the final build.
I only see the libec_jerasure*.so in my libs folder after installation.
Are
or do I have to
select a particular one to take advantage of them?
-Pankaj
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Thursday, April 23, 2015 2:47 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Erasure Coding : gf-Complete
Hi
Hi,
I have a small cluster of 7 machines. Can I just individually upgrade each of
them (using apt-get upgrade) from Firefly to the Hammer release, or is there
more to it than that?
Thanks
Pankaj
Hi,
I am building Ceph Debian packages off of 0.80.9 (the latest Firefly), and on
top of that I am applying an optimization patch.
I am following the standard instructions from the README file and effectively
running commands in this order:
$ ./autogen.sh
$
@lists.ceph.com
Subject: Re: [ceph-users] SSD Journaling
On 03/30/2015 03:01 PM, Garg, Pankaj wrote:
Hi,
I'm benchmarking my small cluster with HDDs vs HDDs with SSD Journaling.
I am using both RADOS bench and Block device (using fio) for testing.
I am seeing significant Write performance
Hi,
I'm benchmarking my small cluster with HDDs vs HDDs with SSD Journaling. I am
using both RADOS bench and Block device (using fio) for testing.
I am seeing significant write performance improvements, as expected. However,
I am seeing reads come out a bit slower on the SSD journaling
Hi,
I have a Ceph cluster with both ARM and x86 based servers in the same cluster.
Is there a way for me to define pools, or some logical separation, that would
allow me to use only one set of machines for a particular test?
That way it is easy for me to run tests either on x86 or ARM and do
Hi,
I have a Ceph cluster that is contained within a rack (1 monitor and 5 OSD
nodes). I kept the same public and private address in the configuration.
I do have 2 NICS and 2 valid IP addresses (one internal only and one external)
for each machine.
Is it possible now, to change the Public Network
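For what it's worth, a sketch of the messy part, with hypothetical monitor name and addresses: the monitor IPs live in the monmap, so changing "public network" in ceph.conf is not enough on its own; the monmap has to be edited and injected back into each stopped monitor:

```shell
ceph mon getmap -o /tmp/monmap                        # export the current monmap
monmaptool --rm mon1 /tmp/monmap                      # drop the old address
monmaptool --add mon1 203.0.113.11:6789 /tmp/monmap   # add the new one
# With the monitor stopped, inject the edited map, then start it again:
ceph-mon -i mon1 --inject-monmap /tmp/monmap
```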
Hi,
I had a cluster that was working correctly with Calamari, and I was able to
see and manage it from the Dashboard.
I had to reinstall the cluster and change IP Addresses etc. so I built my
cluster back up, with same name, but mainly network changes.
When I went to Calamari, it shows some stale
Hi,
I had a successful Ceph cluster that I am rebuilding. I have completely
uninstalled Ceph and removed any remnants, directories, and config files.
While setting up the new cluster, I followed the ceph-deploy documentation as
before. I now get an error (tried many times):
connecting to cluster: ObjectNotFound
Thanks
Pankaj
-Original Message-
From: Travis Rhoden [mailto:trho...@gmail.com]
Sent: Wednesday, February 25, 2015 3:55 PM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph-deploy issues
Hi Pankaj,
I can't say
[mailto:al...@supermicro.com]
Sent: Wednesday, February 25, 2015 4:24 PM
To: Garg, Pankaj; Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Ceph-deploy issues
Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below?
-Original Message-
From: ceph
Hi,
I have a Ceph cluster and I am trying to create a block device. I execute the
following command, and get errors:
sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open
moddep file
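That libkmod error usually means /lib/modules/$(uname -r)/modules.dep is missing (common after installing a custom-built kernel), so rbd map cannot autoload the kernel module. A sketch of one way to recover:

```shell
sudo depmod -a          # rebuild modules.dep for the running kernel
sudo modprobe rbd       # load the rbd kernel client explicitly
lsmod | grep rbd        # confirm it is loaded
sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring
```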
Message-
From: Brad Hubbard [mailto:bhubb...@redhat.com]
Sent: Tuesday, February 17, 2015 5:06 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Block Device
On 02/18/2015 09:56 AM, Garg, Pankaj wrote:
Hi,
I have a Ceph cluster and I am trying to create a block
Hi,
I am trying to get a very minimal Ceph cluster up and running (on ARM), and
I'm wondering what the smallest unit is that I can run rados bench on?
Documentation at (http://ceph.com/docs/next/start/quick-ceph-deploy/) seems to
refer to 4 different nodes: an admin node, a monitor node, and 2 OSD
[mailto:kdre...@redhat.com]
Sent: Monday, January 05, 2015 11:23 AM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Building Ceph
On 01/05/2015 11:26 AM, Garg, Pankaj wrote:
I'm trying to build Ceph on my RHEL (Scientific Linux 7 - Nitrogen),
with 3.10.0.
I am using
Hi,
I'm trying to build Ceph on my RHEL (Scientific Linux 7 - Nitrogen), with
3.10.0.
I am using the configure script and I am now stuck on libkeyutils not found.
I can't seem to find the right library for this. What is the right yum package
name for this library?
Any help appreciated.
Thanks
Hi,
Where can I find ARMv8 binaries of Ceph Firefly for either RHEL or Ubuntu? Or
do we just have to compile from source to produce an installable package?
Thanks
Pankaj