Hello guys,
Is storing several million objects with Ceph (for the RGW use case) still
an issue, or has it been fixed?
Thanks,
Vickey
On Thu, Jan 28, 2016 at 12:55 AM, Krzysztof Księżyk
wrote:
Hello geeks,
Can someone please review and comment on my custom CRUSH map? I would
really appreciate your help.
My setup: 1 rack, 4 chassis, 3 storage nodes per chassis (so 12 storage
nodes in total), pool size = 3.
What I want to achieve is:
- Survive chassis failures, even if I lose 2
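For a setup like this, a CRUSH rule along the following lines would place each of the three replicas in a distinct chassis, so losing any two chassis still leaves one copy. This is a sketch only: it assumes the map already declares `chassis` buckets between `host` and `rack`, and the ruleset number is arbitrary.

```
rule replicated_chassis {
    ruleset 1
    type replicated
    min_size 1
    max_size 3
    step take default
    # pick one leaf (OSD) under each of pool-size distinct chassis
    step chooseleaf firstn 0 type chassis
    step emit
}
```

A pool is then pointed at the rule with something like `ceph osd pool set <pool> crush_ruleset 1` (pre-Luminous option name).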
Adding community for further help on this.
On Tue, Feb 23, 2016 at 10:57 PM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:
…there actually is such
> an object.
> -Greg
>
>
Hello guys,
I am getting weird output from 'ceph osd map'. The object does not exist in
the pool, but osd map still shows the PG and the OSD on which it is stored.
So I have an RBD device coming from pool 'gold'; this image has an object
'rb.0.10f61.238e1f29.2ac5'.
The commands below verify this.
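For what it's worth, the reason `ceph osd map` answers even for objects that were never written is that the mapping is pure computation: the PG is derived from a hash of the object name (and the OSDs from CRUSH), with no lookup of whether the object exists. A toy sketch of that idea, using md5 as a stand-in for Ceph's actual rjenkins hash:

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Map an object name to a placement group purely by hashing.

    Like Ceph, this never consults any object index: the PG (and from it
    the OSDs, via CRUSH) is computed from the name alone, which is why
    'ceph osd map' answers for objects that were never written.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

# The same name always maps to the same PG, whether the object exists or not.
pg = object_to_pg("rb.0.10f61.238e1f29.2ac5", 128)
```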
Hello Community
Happy Valentine's Day ;-)
I need some advice on using the extra RAM on my OSD servers to improve
Ceph's write performance.
I have 20 OSD servers, each with 256 GB RAM and 16 x 6 TB OSDs, so assuming
the cluster is not recovering, most of the time the system will have at
least ~150 GB RAM free.
> patches to fully support Nova+RBD.
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> --
>
> Jason Dillaman
Hello Community
I need some guidance on how I can reduce OpenStack instance boot time
using Ceph.
We are using Ceph storage with OpenStack (Cinder, Glance and Nova). All
OpenStack images and instances are stored on Ceph, in different pools (the
glance and nova pools respectively).
I assume that
Hello Guys
Need help with this, thanks.
- Vickey -
On Tue, Jan 12, 2016 at 12:10 PM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:
Hello Community , wishing you a great new year :)
This is the recommended upgrade path
http://docs.ceph.com/docs/master/install/upgrading-ceph/
Ceph Deploy
Ceph Monitors
Ceph OSD Daemons
Ceph Metadata Servers
Ceph Object Gateways
How about upgrading Ceph clients (in my case, OpenStack compute
A BIG Thanks Dmitry for your HELP.
On Wed, Nov 18, 2015 at 11:47 AM, Дмитрий Глушенок <gl...@jet.msk.su> wrote:
> Hi Vickey,
>
> On 18 Nov 2015, at 11:36, Vickey Singh <vickey.singh22...@gmail.com>
> wrote:
>
> Can anyone please help me understand this.
>
Can anyone please help me understand this.
Thank You
On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:
Hello Community
Need your help in understanding this.
I have the node below, which hosts 60 physical disks, running 1 OSD per
disk, so 60 Ceph OSD daemons in total.
[root@node01 ~]# service ceph status | grep -i osd | grep -i running | wc -l
60
[root@node01 ~]#
However, if I check the OSD
Hello Ceph geeks,
Need your comments on my understanding of straw2.
- Is straw2 better than straw?
- Is straw2 recommended for production use?
I have a production Ceph Firefly cluster that I am going to upgrade to
Ceph Hammer pretty soon. Should I use straw2 for all my Ceph
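If it helps anyone following this thread: switching an existing bucket to straw2 is a one-word change in the decompiled CRUSH map. A sketch (bucket name, id and items are placeholders for your own map):

```
host node01 {
        id -2
        alg straw2      # was: alg straw
        hash 0          # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
```

Note that straw2 requires Hammer-or-newer daemons and clients that understand the new bucket type (recent kernel clients for krbd/CephFS), so it should not be flipped while Firefly clients are still connected.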
On Fri, Sep 18, 2015 at 6:33 PM, Robert LeBlanc
wrote:
>
> Depends on how easy it is to rebuild an OS from scratch. If you have
> something like Puppet or Chef that configure a node completely for
> you, it may not be too
Hello guys,
Doing hardware planning / selection for a new production Ceph cluster.
Just wondering how I should size memory.
I have found two different rules of thumb for selecting memory for Ceph
OSDs (on the Internet / in presentations):
#1: 1 GB per Ceph OSD, or 2 GB per Ceph OSD (for more
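To make the two rules of thumb concrete, here is the arithmetic for a single node (a sketch; 16 OSDs per node is just an example figure, and neither rule accounts for memory spikes during recovery):

```python
def node_ram_gb(num_osds: int, gb_per_osd: float) -> float:
    """Total RAM one node needs under a 'GB per OSD' rule of thumb."""
    return num_osds * gb_per_osd

# A hypothetical node with 16 OSDs, under the two rules from the thread:
low = node_ram_gb(16, 1.0)    # 1 GB per OSD
high = node_ram_gb(16, 2.0)   # 2 GB per OSD
```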
[ IT Crowd: 'Hello IT, have you tried turning it off and on again?' ]
It would be really helpful if someone provides a real solution.
Agreed with Alphe. Ceph Hammer (0.94.2) sucks when it comes to recovery
and rebalancing.
Here is my Ceph Hammer cluster, which has been like this for more than 30
hours.
You might be wondering about the one OSD that is down and not in. That is
intentional; I want to remove that OSD.
I want the cluster
> > [truncated rados bench output: for seconds 404-407, cur MB/s stays at 0
> > and avg MB/s hovers around 320]
Dear experts,
Can someone please help me understand why my cluster is not able to write
data?
See the output below: cur MB/s is 0 and avg MB/s is decreasing.
Ceph Hammer 0.94.2
CentOS 6 (3.10.69-1)
The Ceph status says ops are blocked. I have tried checking everything I
know:
- System resources (CPU
Hello experts,
I want to increase my Ceph cluster's read performance.
I have several OSD nodes with 196 GB RAM each, of which Ceph uses only
15-20 GB.
So, can I instruct Ceph to use the remaining 150 GB+ of RAM as a read
cache, so that it caches data in RAM and serves it to
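One point worth noting here: with FileStore, OSD reads already go through the Linux page cache, so the "free" 150 GB is used as a read cache automatically; what you can tune is how aggressively the kernel keeps it. A possible starting point (the values are illustrative, not tuned recommendations):

```
# /etc/sysctl.d/90-ceph-osd.conf
vm.vfs_cache_pressure = 10     # keep inodes/dentries cached longer
vm.min_free_kbytes = 262144    # keep headroom so allocations rarely stall
```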
Yeah, I did that; iperf shows no problem.
Is there anything else I should check?
Adding ceph-users.
On Mon, Sep 7, 2015 at 11:31 PM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:
>
>
> On Mon, Sep 7, 2015 at 10:04 PM, Udo Lembke <ulem...@polarzone.de> wrote:
>
>> Hi Vickey,
>>
> Thanks for your time in replying to my problem.
…and CPU utilization.
- Vickey -
On Wed, Sep 2, 2015 at 11:28 PM, Vickey Singh <vickey.singh22...@gmail.com>
wrote:
Thank you, Mark; please see my response below.
On Wed, Sep 2, 2015 at 5:23 PM, Mark Nelson <mnel...@redhat.com> wrote:
Hello Ceph experts,
I have a strange problem: when I am reading from or writing to a Ceph
pool, it does not write properly. Please notice Cur MB/s, which keeps
going up and down.
- Ceph Hammer 0.94.2
- CentOS 6, kernel 2.6
- Ceph cluster is healthy
One interesting thing is that whenever I start rados bench
Hello Ceph geeks,
I am planning to develop a Python plugin that pulls out the cluster
recovery IO and client IO metrics, which can then be used with collectd.
For example, I need to extract these values:
recovery io 814 MB/s, 101 objects/s
client io 85475 kB/s rd, 1430 kB/s
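As a starting point for such a plugin, those two lines can be scraped from `ceph -s` text output with regular expressions. A minimal sketch (the sample text reproduces the Hammer-era format quoted above; the metric names are my own invention):

```python
import re

# Sample 'ceph -s' status lines in the Hammer-era format.
SAMPLE = """\
  recovery io 814 MB/s, 101 objects/s
  client io 85475 kB/s rd, 1430 kB/s wr
"""

RECOVERY_RE = re.compile(r"recovery io (\d+) MB/s, (\d+) objects/s")
CLIENT_RE = re.compile(r"client io (\d+) kB/s rd, (\d+) kB/s wr")

def parse_io(status_text: str) -> dict:
    """Extract recovery and client IO figures from 'ceph -s' text."""
    metrics = {}
    m = RECOVERY_RE.search(status_text)
    if m:
        metrics["recovery_mb_s"] = int(m.group(1))
        metrics["recovery_obj_s"] = int(m.group(2))
    m = CLIENT_RE.search(status_text)
    if m:
        metrics["client_rd_kb_s"] = int(m.group(1))
        metrics["client_wr_kb_s"] = int(m.group(2))
    return metrics

metrics = parse_io(SAMPLE)
```

In a real collectd plugin these values would then be dispatched via `collectd.Values()`; running `ceph -s` through `subprocess` is the simplest way to obtain the text.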
…is order=19.
Thanks,
Bill Sanders
Thanks, Nick, for your suggestion.
Can you also tell me how I can reduce the RBD block size to 512K or 1M? Do
I need to put something in the clients' ceph.conf (and what parameter do I
need to set)?
Thanks once again
- Vickey
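To connect Bill's "order=19" above to concrete sizes: the RBD object size is 2^order bytes, and it is set per image at creation time (e.g. `rbd create --order 19 ...`), not in ceph.conf. A quick check of the arithmetic:

```python
import math

def rbd_order(object_size_bytes: int) -> int:
    """RBD 'order' is log2 of the per-object size; the default 4 MB is 22."""
    order = int(math.log2(object_size_bytes))
    if 2 ** order != object_size_bytes:
        raise ValueError("object size must be a power of two")
    return order

assert rbd_order(512 * 1024) == 19       # 512 KB objects -> --order 19
assert rbd_order(1024 * 1024) == 20      # 1 MB objects   -> --order 20
assert rbd_order(4 * 1024 * 1024) == 22  # default 4 MB   -> --order 22
```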
On Wed, Aug 12, 2015 at 4:49 PM, Nick Fisk n...@fisk.me.uk wrote:
Hello Community
I am facing a very weird problem with Ceph socket files.
On all monitor nodes, under /var/run/ceph/ I can see ~160 thousand .asok
files, most of them named ceph-client.admin.*
If I delete these files, they get regenerated very quickly.
Could someone please
Hello Ceph lovers
You may have noticed that Red Hat recently released Red Hat Ceph Storage
1.3:
http://redhatstorage.redhat.com/2015/06/25/announcing-red-hat-ceph-storage-1-3/
My questions are:
- What is the exact version number of open-source Ceph provided with this
product?
- RHCS 1.3
this:
ceph-deploy osd create ceph-node1:/ceph-disk
Your journal would be a file doing it this way.
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
From: Vickey Singh vickey.singh22...@gmail.com
Date: Tuesday, June 9, 2015 at 12:21 AM
Hello Geeks
Need your help and advice with this problem.
- VS -
On Tue, Apr 28, 2015 at 12:48 AM, Vickey Singh vickey.singh22...@gmail.com
wrote:
Hello Alfredo / Craig
First of all, thank you so much for replying and giving your precious time
to this problem.
@Alfredo: I tried version
Any help related to this problem would be highly appreciated.
-VS-
On Sun, Apr 26, 2015 at 6:01 PM, Vickey Singh vickey.singh22...@gmail.com
wrote:
Hello Geeks
I am trying to setup Ceph Radosgw multi site data replication using
official documentation
http://ceph.com/docs/master
of the 1.2.1 release.
I just finished getting 1.2.2 out (try upgrading please). You should no
longer see that
error.
Hope that helps!
-Alfredo
Hello Geeks
I am trying to setup Ceph Radosgw multi site data replication using
official documentation
http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication
Everything seems to work except radosgw-agent sync. Could you please check
the outputs below and help?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hello Cephers
I am trying to set up RGW using ceph-deploy, as described here:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
But unfortunately it doesn't seem to be working.
Is there something I am missing, or do you know a fix for this?
[root@ceph-node1
.
- Travis
testing that out. The Hammer
release's packages have been split up to match the split that happened
in EPEL.
- Ken
…=epel ceph-0.80.7-0.el7.centos -y
2015-04-08 12:40 GMT+03:00 Vickey Singh vickey.singh22...@gmail.com:
Hello everyone,
I also tried setting a higher priority, as suggested by Sam, but no luck.
Please see the full logs here: http://paste.ubuntu.com/10771358/
While installing, yum searches
On 08-04-15 09:32, Vickey Singh wrote:
Hi Ken
As per your suggestion , i tried enabling epel-testing repository but
still no luck.
Please check the below output. I would really appreciate any help here.
# yum install ceph --enablerepo=epel-testing
--- Package python-rbd.x86_64
Any suggestions, geeks?
VS
On Wed, Apr 8, 2015 at 2:15 PM, Vickey Singh vickey.singh22...@gmail.com
wrote:
Hi
The suggestion below also didn't work.
Full logs here: http://paste.ubuntu.com/10771939/
[root@rgw-node1 yum.repos.d]# yum --showduplicates list ceph
Loaded plugins
Community , need help.
-VS-
On Wed, Apr 8, 2015 at 4:36 PM, Vickey Singh vickey.singh22...@gmail.com
wrote:
Hello There
I am trying to install Giant on CentOS 7 using ceph-deploy and encountered
the problem below.
[rgw-node1][DEBUG ] Package python-ceph is obsoleted by python-rados, but
obsoleting package does not provide for requirements
[rgw-node1][DEBUG ] --- Package cups-libs.x86_64 1:1.6.3-17.el7 will
Amazing piece of work, Karan; this was something that had been missing for
a long time. Thanks for filling the gap.
I got my copy of the book today and just finished reading a couple of
pages; it is an excellent introduction to Ceph.
Thanks again; it is well worth purchasing this book.
Best Regards
Vicky
On Fri, Feb 6, 2015
to read secretfile: No such file or directory
Looks like it's trying to mount, but your secretfile is gone.
Chris Armstrong
Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Sat, Oct 25, 2014 at 2:07 PM, Vickey Singh vickey.singh22
Hello Cephers, I need your advice and tips here.
Problem statement: my Ceph RBD device gets unmapped each time I reboot my
server. After every reboot I need to manually map and mount it again.
Setup:
Ceph Firefly 0.80.1
CentOS 6.5, kernel 3.15.0-1
I have tried doing as mentioned in the
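For anyone hitting the same thing: RBD maps do not survive a reboot by themselves; the ceph package ships an `rbdmap` init script that remaps every image listed in `/etc/ceph/rbdmap` at boot. A sketch (pool, image, keyring path and mount point are placeholders for your own setup):

```
# /etc/ceph/rbdmap -- images to map at boot (pool/image plus cephx options)
rbd/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- mount once the device node appears (noauto + _netdev)
/dev/rbd/rbd/myimage  /mnt/rbd  xfs  noauto,_netdev  0 0

# Enable the init script on CentOS 6:
#   chkconfig rbdmap on
```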
Hello Cephers
I have followed the Ceph documentation and my rados gateway setup is
working fine.
# swift -V 1.0 -A http://bmi-pocfe2.scc.fi/auth -U scc:swift -K secretkey
list
Hello-World
bmi-pocfe2
scc
packstack
test
#
# radosgw-admin bucket stats --bucket=scc
{ bucket: scc,
Hello Cephers
I have been following the Ceph documentation to install and configure RGW,
and fortunately everything went fine; RGW is correctly set up.
Next I would like to use RGW with OpenStack, and for this I have followed
http://ceph.com/docs/master/radosgw/keystone/ ; as per the document, I