Hi All,
I upgraded my cluster from Hammer to Jewel and then to Luminous, and changed
the backend from filestore to bluestore.
On a KVM VM with 4 CPUs / 2 GB RAM I attached a 20 GB rbd volume as vdc
and performed the following test.
dd if=/dev/zero of=/dev/vdc bs=4k count=1000 oflag=direct
1000+0 records
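For what it's worth, an equivalent test with fio would look something like the
following (assuming fio is installed on the VM; the parameters are only an
illustration of the same 4k direct-write pattern, not the exact run above):

fio --name=4k-direct-write --filename=/dev/vdc --rw=write --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=1 --numjobs=1 --size=100M

fio reports latency percentiles as well as throughput, which makes it easier
to compare filestore and bluestore for this kind of small synchronous write.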
It was a firewall issue on the controller nodes. After allowing the ceph-mgr
port in iptables, everything is displaying correctly. Thanks to the people on
IRC.
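For reference, the rule needed is something along these lines (assuming the
mgr binds in the default 6800-7300 daemon port range; adjust to whatever port
your ceph-mgr actually listens on):

iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT

and then persist the rules so they survive a reboot.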
Thanks a lot,
Kevin
On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar <kevin.parker...@gmail.com>
wrote:
> accidentally removed mailing list
key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
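For anyone hitting the same missing-mgr-cap problem, the cap can be added with
something along these lines (reconstructed as an example, not the exact command
from my history; note that ceph auth caps replaces all caps for the entity, so
the existing mon/osd/mds caps must be repeated):

ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'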
Regards,
Kevin
On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar <kevin.parker...@gmail.com>
wrote:
> Thanks JC,
> I tried
>
> the update process that was not adding the
> MGR cap to the client.admin user.
>
> JC
>
>
> On Dec 20, 2017, at 10:02, kevin parrikar <kevin.parker...@gmail.com>
> wrote:
>
Hi All,
I have upgraded the cluster from Hammer to Jewel and then to Luminous.
I am able to upload/download glance images, but ceph -s shows 0 kB used and
available, and probably because of that cinder create is failing.
ceph -s
  cluster:
    id:     06c5c906-fc43-499f-8a6f-6c8e21807acf
    health:
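Since Luminous the usage statistics shown by ceph -s come from the ceph-mgr
daemon, so all-zero usage is usually a sign that no mgr is active or reachable.
The usual checks would be something like (generic commands, not specific to
this cluster):

ceph mgr dump
systemctl status ceph-mgr@$(hostname -s)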
p (or use the "setuser
> match path" option) and such after upgrading from Hammer to Jewel? I am not
> sure if that matters here, but it might help if you elaborate on your
> upgrade process a bit.
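The "setuser match path" option mentioned above is a ceph.conf setting from the
Infernalis/Jewel upgrade notes; roughly, it lets the daemons keep running as
whichever user owns the data path instead of requiring everything to be chowned
to ceph:ceph:

setuser match path = /var/lib/ceph/$type/$cluster-$id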
>
> --Lincoln
>
> > On Sep 12, 2017, at 2:22 PM, kevin parrikar <k
Can someone please help me with this? I have no idea how to bring the
cluster back to an operational state.
Thanks,
Kev
On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar <kevin.parker...@gmail.com>
wrote:
Hello All,
I am trying to upgrade a small test setup having one monitor and one OSD
node, which is on the Hammer release.
I updated from Hammer to Jewel using package update commands and things
are working.
However, after updating from Jewel to Luminous, I am facing issues with the
OSDs failing to start.
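The actual error isn't included above, so as generic troubleshooting the first
things worth checking are the unit status and the OSD log (OSD 0 used as an
example), plus the data directory ownership left over from the Hammer days:

systemctl status ceph-osd@0
journalctl -u ceph-osd@0 --no-pager | tail -n 50
tail -n 100 /var/log/ceph/ceph-osd.0.log
ls -l /var/lib/ceph/osd/ceph-0    # should be owned by ceph:ceph unless
                                  # "setuser match path" is configured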
Hello All,
I have 50 compute nodes in my environment which are running virtual
machines. I can add one more 10k RPM SAS disk and one 10G interface to each
server, and thus there would be 50 OSDs running on the 50 compute nodes. It's
not easy to obtain more servers for running Ceph, nor to take away servers
Hello All,
I am trying Ceph Jewel on Ubuntu 16.04 with Kubernetes 1.6.2 and Docker
1.11.2, but for some unknown reason it is not coming up and keeps crashing;
all ceph commands are failing.
From ceph-mon-check:
kubectl logs -n ceph ceph-mon-check-3190136794-21xg4 -f
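Alongside the mon-check log, a few generic kubectl commands show what state the
rest of the Ceph pods are in (the pod name above is from my environment;
substitute your own names):

kubectl -n ceph get pods -o wide
kubectl -n ceph describe pod ceph-mon-check-3190136794-21xg4
kubectl -n ceph logs <crashing-pod-name> --previous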
bought S3500 because last time when we tried Ceph, people were
suggesting this model :) :)
Thanks a lot for your help.
On Sat, Jan 7, 2017 at 6:01 PM, Lionel Bouton <
lionel-subscript...@bouton.name> wrote:
> Hi,
>
> Le 07/01/2017 à 04:48, kevin parrikar a écrit :
>
> i reall
> ---- Original message ----
> From: kevin parrikar <kevin.parker...@gmail.com>
> Date: 07/01/2017 05:48 (GMT+02:00)
> To: Christian Balzer <ch...@gol.com>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Analysing ceph performance with SSD journal,
.
Regards,
Kevin
On Fri, Jan 6, 2017 at 4:42 PM, kevin parrikar <kevin.parker...@gmail.com>
wrote:
Thanks Christian for your valuable comments, each comment is a new learning
for me.
Please see my replies inline.
On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer <ch...@gol.com> wrote:
>
> Hello,
>
> On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote:
>
> > Hello All,
> >
for your
suggestion.
Regards,
Kevin
On Fri, Jan 6, 2017 at 8:56 AM, jiajia zhong <zhong2p...@gmail.com> wrote:
>
>
> 2017-01-06 11:10 GMT+08:00 kevin parrikar <kevin.parker...@gmail.com>:
>
Hello All,
I have set up a Ceph cluster based on the 0.94.6 release on 2 servers, each
with an 80 GB Intel S3510 and 2x3 TB 7.2k SATA disks, 16 CPUs and 24 GB RAM,
connected to a 10G switch, with a replica count of 2 [I will add 3 more
servers to the cluster] and 3 separate monitor nodes which are VMs.
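Assuming the 80 GB S3510 is meant to hold the journals for the two SATA OSDs
(this thread is about SSD journals), the OSDs on a 0.94.x cluster would be
prepared with ceph-disk roughly like this (device names are placeholders for
this layout, not the actual devices here):

ceph-disk prepare /dev/sdb /dev/sda    # data on sdb, journal partition on the SSD (sda)
ceph-disk prepare /dev/sdc /dev/sda
ceph-disk activate /dev/sdb1
ceph-disk activate /dev/sdc1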
I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating
system disk; 3 monitor processes also run on these nodes), with the default
replica count of 3.
Total OSD disks : 16
Total Nodes : 4
How can I calculate the
- Maximum number of disk failures my cluster can handle without any
impact on current data and
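As a rough sketch of the arithmetic (assuming the default CRUSH failure domain
of host and the default min_size of 2 for a size-3 replicated pool):

- 3 replicas spread across 4 hosts means every PG has its copies on 3 different
  hosts, so multiple disk failures inside one host count the same as that one
  host failing.
- Data survives as long as at least one copy remains, so up to 2 whole hosts
  (8 OSDs) can fail without losing data.
- I/O continues only while min_size copies remain, so with min_size=2 the pool
  stays writable after 1 host failure; after a 2nd concurrent host failure some
  PGs drop to a single copy and go inactive until recovery restores a 2nd copy.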
Hi,
I am trying Hammer 0.93 on Ubuntu 14.04.
An rbd device is mapped on the client, which is also Ubuntu 14.04.
When I did a stop ceph-osd-all and then a start, the client machine crashed
and the attached picture was on the console. Not sure if it's related to Ceph.
Thanks
if you want to stick with 14.04.
The workaround is that you need to unmap your rbd devices on the clients before restarting the OSDs.
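In practice that means something like the following (upstart job names as on
14.04; the image name is only an example):

umount /mnt/rbd0            # on the client, if a filesystem is mounted
rbd unmap /dev/rbd0         # on the client
stop ceph-osd-all           # on the OSD node
start ceph-osd-all
rbd map rbd/myimage         # re-map on the client afterwards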
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of kevin parrikar
Sent: Wednesday, March 11, 2015 11:44 AM
To: ceph-users
Hello All,
I just set up a single-node Ceph cluster with no replication to familiarize
myself with Ceph, using 2 Intel S3500 800 GB SSDs, 8 GB RAM and a 16-core CPU.
The OS is Ubuntu 14.04 64-bit, and the rbd kernel module is loaded
(modprobe rbd).
When running bonnie++ against /dev/rbd0,
it shows a seek rate of 892.2/s.
How
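For context, getting a filesystem benchmark like bonnie++ onto an rbd device
involves roughly the following sequence (pool/image names and mount point are
placeholders, not the exact ones used here):

rbd create rbd/bench --size 102400     # 100 GB image
rbd map rbd/bench                      # shows up as /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/bench
bonnie++ -d /mnt/bench -u root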