Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-26 Thread kevin parrikar
It was a firewall issue on the controller nodes. After allowing the ceph-mgr
port through iptables, everything is displaying correctly. Thanks to the people
on IRC.
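
The post doesn't say which port was opened; ceph-mgr, like the other Ceph
daemons, normally binds a port in the 6800-7300 range, so the rule was
presumably something along these lines on each controller/monitor node
(port range and source network assumed, the network taken from the posted
ceph.conf):

# check which port the local ceph-mgr is actually listening on
ss -tlnp | grep ceph-mgr

# allow the standard Ceph daemon port range from the cluster/public network
iptables -I INPUT -p tcp -s 172.16.1.0/24 --dport 6800:7300 -j ACCEPT

# persist the rule with your distribution's usual mechanism (e.g. iptables-save)
# and re-check with: ceph -s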

Thanks a lot,
Kevin

On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar 
wrote:

> accidentally removed the mailing list email
>
> ++ceph-users
>
> Thanks a lot JC for looking into this issue. I am really out of ideas.
>
>
> ceph.conf on the mgr node, which is also a monitor node.
>
> [global]
> fsid = 06c5c906-fc43-499f-8a6f-6c8e21807acf
> mon_initial_members = node-16 node-30 node-31
> mon_host = 172.16.1.9 172.16.1.3 172.16.1.11
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> log_to_syslog_level = info
> log_to_syslog = True
> osd_pool_default_size = 2
> osd_pool_default_min_size = 1
> osd_pool_default_pg_num = 64
> public_network = 172.16.1.0/24
> log_to_syslog_facility = LOG_LOCAL0
> osd_journal_size = 2048
> auth_supported = cephx
> osd_pool_default_pgp_num = 64
> osd_mkfs_type = xfs
> cluster_network = 172.16.1.0/24
> osd_recovery_max_active = 1
> osd_max_backfills = 1
> mon allow pool delete = true
>
> [client]
> rbd_cache_writethrough_until_flush = True
> rbd_cache = True
>
> [client.radosgw.gateway]
> rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator
> keyring = /etc/ceph/keyring.radosgw.gateway
> rgw_frontends = fastcgi socket_port=9000 socket_host=127.0.0.1
> rgw_socket_path = /tmp/radosgw.sock
> rgw_keystone_revocation_interval = 100
> rgw_keystone_url = http://192.168.1.3:35357
> rgw_keystone_admin_token = jaJSmlTNxgsFp1ttq5SuAT1R
> rgw_init_timeout = 36
> host = controller3
> rgw_dns_name = *.sapiennetworks.com
> rgw_print_continue = True
> rgw_keystone_token_cache_size = 10
> rgw_data = /var/lib/ceph/radosgw
> user = www-data
>
>
>
>
> ceph auth list
>
>
> osd.100
> key: AQAtZjpaVZOFBxAAwl0yFLdUOidLzPFjv+HnjA==
> caps: [mgr] allow profile osd
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.101
> key: AQA4ZjpaS4wwGBAABwgoXQRc1J8sav4MUkWceQ==
> caps: [mgr] allow profile osd
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.102
> key: AQBDZjpaBS2tEBAAtFiPKBzh8JGi8Nh3PtAGCg==
> caps: [mgr] allow profile osd
> caps: [mon] allow profile osd
> caps: [osd] allow *
>
> client.admin
> key: AQD0yXFYflnYFxAAEz/2XLHO/6RiRXQ5HXRAnw==
> caps: [mds] allow *
> caps: [mgr] allow *
> caps: [mon] allow *
> caps: [osd] allow *
> client.backups
> key: AQC0y3FY4YQNNhAAs5fludq0yvtp/JJt7RT4HA==
> caps: [mgr] allow r
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=backups, allow rwx pool=volumes
> client.bootstrap-mds
> key: AQD5yXFYyIxiFxAAyoqLPnxxqWmUr+zz7S+qVQ==
> caps: [mgr] allow r
> caps: [mon] allow profile bootstrap-mds
> client.bootstrap-mgr
> key: AQBmOTpaXqHQDhAAyDXoxlPmG9QovfmmUd8gIg==
> caps: [mon] allow profile bootstrap-mgr
> client.bootstrap-osd
> key: AQD0yXFYuGkSIhAAelSb3TCPuXRFoFJTBh7Vdg==
> caps: [mgr] allow r
> caps: [mon] allow profile bootstrap-osd
> client.bootstrap-rbd
> key: AQBnOTpafDS/IRAAnKzuI9AYEF81/6mDVv0QgQ==
> caps: [mon] allow profile bootstrap-rbd
>
> client.bootstrap-rgw
> key: AQD3yXFYxt1mLRAArxOgRvWmmzT9pmsqTLpXKw==
> caps: [mgr] allow r
> caps: [mon] allow profile bootstrap-rgw
> client.compute
> key: AQCbynFYRcNWOBAAPzdAKfP21GvGz1VoHBimGQ==
> caps: [mgr] allow r
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=volumes, allow rx pool=images, allow rwx pool=compute
> client.images
> key: AQCyy3FYSMtlJRAAbJ8/U/R82NXvWBC5LmkPGw==
> caps: [mgr] allow r
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=images
> client.radosgw.gateway
> key: AQA3ynFYAYMSAxAApvfe/booa9KhigpKpLpUOA==
> caps: [mgr] allow r
> caps: [mon] allow rw
> caps: [osd] allow rwx
> client.volumes
> key: AQCzy3FYa3paKBAA9BlYpQ1PTeR770ghVv1jKQ==
> caps: [mgr] allow r
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=volumes, allow rx pool=images
> mgr.controller2
> key: AQAmVTpaA+9vBhAApD3rMs//Qri+SawjUF4U4Q==
> caps: [mds] allow *
> caps: [mgr] allow *
> caps: [mon] allow *
> caps: [osd] allow *
> mgr.controller3
> key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ==
> caps: [mds] allow *
> caps: [mgr] allow *
> caps: [mon] allow *
> caps: [osd] allow *
>
> Regards,
> Kevin
>
> On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar wrote:

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-21 Thread kevin parrikar
accidentally removed the mailing list email

++ceph-users

Thanks a lot JC for looking into this issue. I am really out of ideas.


ceph.conf on the mgr node, which is also a monitor node.

[global]
fsid = 06c5c906-fc43-499f-8a6f-6c8e21807acf
mon_initial_members = node-16 node-30 node-31
mon_host = 172.16.1.9 172.16.1.3 172.16.1.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
log_to_syslog_level = info
log_to_syslog = True
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 64
public_network = 172.16.1.0/24
log_to_syslog_facility = LOG_LOCAL0
osd_journal_size = 2048
auth_supported = cephx
osd_pool_default_pgp_num = 64
osd_mkfs_type = xfs
cluster_network = 172.16.1.0/24
osd_recovery_max_active = 1
osd_max_backfills = 1
mon allow pool delete = true

[client]
rbd_cache_writethrough_until_flush = True
rbd_cache = True

[client.radosgw.gateway]
rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_frontends = fastcgi socket_port=9000 socket_host=127.0.0.1
rgw_socket_path = /tmp/radosgw.sock
rgw_keystone_revocation_interval = 100
rgw_keystone_url = http://192.168.1.3:35357
rgw_keystone_admin_token = jaJSmlTNxgsFp1ttq5SuAT1R
rgw_init_timeout = 36
host = controller3
rgw_dns_name = *.sapiennetworks.com
rgw_print_continue = True
rgw_keystone_token_cache_size = 10
rgw_data = /var/lib/ceph/radosgw
user = www-data




ceph auth list


osd.100
key: AQAtZjpaVZOFBxAAwl0yFLdUOidLzPFjv+HnjA==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.101
key: AQA4ZjpaS4wwGBAABwgoXQRc1J8sav4MUkWceQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.102
key: AQBDZjpaBS2tEBAAtFiPKBzh8JGi8Nh3PtAGCg==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *

client.admin
key: AQD0yXFYflnYFxAAEz/2XLHO/6RiRXQ5HXRAnw==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.backups
key: AQC0y3FY4YQNNhAAs5fludq0yvtp/JJt7RT4HA==
caps: [mgr] allow r
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=backups, allow rwx pool=volumes
client.bootstrap-mds
key: AQD5yXFYyIxiFxAAyoqLPnxxqWmUr+zz7S+qVQ==
caps: [mgr] allow r
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQBmOTpaXqHQDhAAyDXoxlPmG9QovfmmUd8gIg==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQD0yXFYuGkSIhAAelSb3TCPuXRFoFJTBh7Vdg==
caps: [mgr] allow r
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
key: AQBnOTpafDS/IRAAnKzuI9AYEF81/6mDVv0QgQ==
caps: [mon] allow profile bootstrap-rbd

client.bootstrap-rgw
key: AQD3yXFYxt1mLRAArxOgRvWmmzT9pmsqTLpXKw==
caps: [mgr] allow r
caps: [mon] allow profile bootstrap-rgw
client.compute
key: AQCbynFYRcNWOBAAPzdAKfP21GvGz1VoHBimGQ==
caps: [mgr] allow r
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes, allow rx pool=images, allow rwx pool=compute
client.images
key: AQCyy3FYSMtlJRAAbJ8/U/R82NXvWBC5LmkPGw==
caps: [mgr] allow r
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=images
client.radosgw.gateway
key: AQA3ynFYAYMSAxAApvfe/booa9KhigpKpLpUOA==
caps: [mgr] allow r
caps: [mon] allow rw
caps: [osd] allow rwx
client.volumes
key: AQCzy3FYa3paKBAA9BlYpQ1PTeR770ghVv1jKQ==
caps: [mgr] allow r
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes, allow rx pool=images
mgr.controller2
key: AQAmVTpaA+9vBhAApD3rMs//Qri+SawjUF4U4Q==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
mgr.controller3
key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *

Regards,
Kevin

On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar 
wrote:

> Thanks JC,
> I tried
> ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr
> 'allow *'
>
> but the status is still the same; also, mgr.log is being flooded with the errors below.
>
> 2017-12-21 02:39:10.622834 7fb40a22b700  0 Cannot get stat of OSD 140
> 2017-12-21 02:39:10.622835 7fb40a22b700  0 Cannot get stat of OSD 141
> Not sure what's wrong in my setup.
>
> Regards,
> Kevin
>
>
> On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez 
> wrote:
>
>> Hi,
>>
>> make sure client.admin user has an MGR cap 

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Jean-Charles Lopez
Hi Kevin

Looks like the problem comes from the mgr user itself, then.

Can you get me the output of 
- ceph auth list 
- cat /etc/ceph/ceph.conf on your mgr node

Regards

JC

While moving. Excuse unintended typos.

> On Dec 20, 2017, at 18:40, kevin parrikar  wrote:
> 
> Thanks JC,
> I tried 
> ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr 
> 'allow *'
> 
> but the status is still the same; also, mgr.log is being flooded with the errors below.
> 
> 2017-12-21 02:39:10.622834 7fb40a22b700  0 Cannot get stat of OSD 140
> 2017-12-21 02:39:10.622835 7fb40a22b700  0 Cannot get stat of OSD 141
> Not sure what's wrong in my setup.
> 
> Regards,
> Kevin
> 
> 
>> On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez  
>> wrote:
>> Hi,
>> 
>> make sure client.admin user has an MGR cap using ceph auth list. At some 
>> point there was a glitch with the update process that was not adding the MGR 
>> cap to the client.admin user.
>> 
>> JC
>> 
>> 
>>> On Dec 20, 2017, at 10:02, kevin parrikar  wrote:
>>> 
>>> Hi all,
>>> I have upgraded the cluster from Hammer to Jewel and then to Luminous.
>>> 
>>> I am able to upload/download Glance images, but ceph -s shows 0 kB used and 
>>> available, and probably because of that cinder create is failing.
>>> 
>>> 
>>> ceph -s
>>>   cluster:
>>> id: 06c5c906-fc43-499f-8a6f-6c8e21807acf
>>> health: HEALTH_WARN
>>> Reduced data availability: 6176 pgs inactive
>>> Degraded data redundancy: 6176 pgs unclean
>>> 
>>>   services:
>>> mon: 3 daemons, quorum controller3,controller2,controller1
>>> mgr: controller3(active)
>>> osd: 71 osds: 71 up, 71 in
>>> rgw: 1 daemon active
>>> 
>>>   data:
>>> pools:   4 pools, 6176 pgs
>>> objects: 0 objects, 0 bytes
>>> usage:   0 kB used, 0 kB / 0 kB avail
>>> pgs: 100.000% pgs unknown
>>>  6176 unknown
>>> 
>>> 
>>> I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr create; 
>>> it was successful, but for some reason ceph -s is not showing correct 
>>> values.
>>> Can someone help me here, please?
>>> 
>>> Regards,
>>> Kevin
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> 
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread kevin parrikar
Thanks JC,
I tried
ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr
'allow *'

but the status is still the same; also, mgr.log is being flooded with the errors below.

2017-12-21 02:39:10.622834 7fb40a22b700  0 Cannot get stat of OSD 140
2017-12-21 02:39:10.622835 7fb40a22b700  0 Cannot get stat of OSD 141
Not sure what's wrong in my setup.
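
Since the eventual fix (see the 2017-12-26 message at the top of this thread)
was a firewall rule, a quick sanity check for this symptom is whether the OSD
nodes can reach the mgr's port at all. A rough sketch, with hostnames taken
from this thread and the port number only as an example:

# on the mgr node (controller3): find the port ceph-mgr is bound to
ss -tlnp | grep ceph-mgr

# from an OSD or monitor node: test that the port is reachable
nc -zv controller3 6800   # replace 6800 with the port reported above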

Regards,
Kevin


On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez 
wrote:

> Hi,
>
> make sure client.admin user has an MGR cap using ceph auth list. At some
> point there was a glitch with the update process that was not adding the
> MGR cap to the client.admin user.
>
> JC
>
>
> On Dec 20, 2017, at 10:02, kevin parrikar 
> wrote:
>
> Hi all,
> I have upgraded the cluster from Hammer to Jewel and then to Luminous.
>
> I am able to upload/download Glance images, but ceph -s shows 0 kB used and
> available, and probably because of that cinder create is failing.
>
>
> ceph -s
>   cluster:
> id: 06c5c906-fc43-499f-8a6f-6c8e21807acf
> health: HEALTH_WARN
> Reduced data availability: 6176 pgs inactive
> Degraded data redundancy: 6176 pgs unclean
>
>   services:
> mon: 3 daemons, quorum controller3,controller2,controller1
> mgr: controller3(active)
> osd: 71 osds: 71 up, 71 in
> rgw: 1 daemon active
>
>   data:
> pools:   4 pools, 6176 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  6176 unknown
>
>
> I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr
> create; it was successful, but for some reason ceph -s is not showing
> correct values.
> Can someone help me here, please?
>
> Regards,
> Kevin
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Jean-Charles Lopez
Hi,

make sure client.admin user has an MGR cap using ceph auth list. At some point 
there was a glitch with the update process that was not adding the MGR cap to 
the client.admin user.
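
For reference, checking the cap and restoring the full admin cap set looks
roughly like this (the caps command is essentially the same invocation Kevin
tries in his reply):

# show the caps currently assigned to client.admin
ceph auth get client.admin

# if the mgr cap is missing, re-add the full admin cap set
ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'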

JC


> On Dec 20, 2017, at 10:02, kevin parrikar  wrote:
> 
> Hi all,
> I have upgraded the cluster from Hammer to Jewel and then to Luminous.
> 
> I am able to upload/download Glance images, but ceph -s shows 0 kB used and 
> available, and probably because of that cinder create is failing.
> 
> 
> ceph -s
>   cluster:
> id: 06c5c906-fc43-499f-8a6f-6c8e21807acf
> health: HEALTH_WARN
> Reduced data availability: 6176 pgs inactive
> Degraded data redundancy: 6176 pgs unclean
> 
>   services:
> mon: 3 daemons, quorum controller3,controller2,controller1
> mgr: controller3(active)
> osd: 71 osds: 71 up, 71 in
> rgw: 1 daemon active
> 
>   data:
> pools:   4 pools, 6176 pgs
> objects: 0 objects, 0 bytes
> usage:   0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
>  6176 unknown
> 
> 
> I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr create; 
> it was successful, but for some reason ceph -s is not showing correct values.
> Can someone help me here, please?
> 
> Regards,
> Kevin
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Ronny Aasen

On 20.12.2017 19:02, kevin parrikar wrote:

Hi all,
I have upgraded the cluster from Hammer to Jewel and then to Luminous.

I am able to upload/download Glance images, but ceph -s shows 0 kB used
and available, and probably because of that cinder create is failing.



ceph -s
  cluster:
    id: 06c5c906-fc43-499f-8a6f-6c8e21807acf
    health: HEALTH_WARN
    Reduced data availability: 6176 pgs inactive
    Degraded data redundancy: 6176 pgs unclean

  services:
    mon: 3 daemons, quorum controller3,controller2,controller1
    mgr: controller3(active)
    osd: 71 osds: 71 up, 71 in
    rgw: 1 daemon active

  data:
    pools:   4 pools, 6176 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs: 100.000% pgs unknown
 6176 unknown


I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr
create; it was successful, but for some reason ceph -s is not showing
correct values.

Can someone help me here, please?

Regards,
Kevin



Is ceph-mgr actually running? All statistics now require a ceph-mgr to
be running.
Also check the mgr's logfile to see if it is able to authenticate/start
properly.
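
A minimal way to check both points on the active mgr node (controller3 in
this thread), assuming a systemd-based install with the default log paths:

# is the mgr daemon running?
systemctl status ceph-mgr@controller3

# watch the mgr log for authentication or startup errors
tail -f /var/log/ceph/ceph-mgr.controller3.log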


kind regards
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread kevin parrikar
Hi all,
I have upgraded the cluster from Hammer to Jewel and then to Luminous.

I am able to upload/download Glance images, but ceph -s shows 0 kB used and
available, and probably because of that cinder create is failing.


ceph -s
  cluster:
id: 06c5c906-fc43-499f-8a6f-6c8e21807acf
health: HEALTH_WARN
Reduced data availability: 6176 pgs inactive
Degraded data redundancy: 6176 pgs unclean

  services:
mon: 3 daemons, quorum controller3,controller2,controller1
mgr: controller3(active)
osd: 71 osds: 71 up, 71 in
rgw: 1 daemon active

  data:
pools:   4 pools, 6176 pgs
objects: 0 objects, 0 bytes
usage:   0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs unknown
 6176 unknown


I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr create;
it was successful, but for some reason ceph -s is not showing correct
values.
Can someone help me here, please?
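
The deployment described above amounts to roughly the following commands
(node names taken from this cluster; note the actual ceph-deploy subcommand
is spelled gatherkeys):

# collect the cluster keys from a monitor node
ceph-deploy gatherkeys controller1

# create a mgr daemon on the monitor node
ceph-deploy mgr create controller3

# the mgr should then show up as active in the status output
ceph -s | grep mgr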

Regards,
Kevin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com