Re: [ceph-users] Nautilus 14.2.3 packages appearing on the mirrors

2019-09-04 Thread Abhishek Lekshmanan
"Ashley Merrick"  writes:

> I guess so and they just haven't pushed this 
> https://github.com/ceph/ceph/pull/29973 yet
We're still checking on this; please don't install these packages until
the official announcement hits the mailing lists.
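
In the meantime, if you want to make sure an unattended update doesn't
pull these in before the announcement, one option (a sketch for
Debian/Ubuntu hosts; adjust the list to the ceph packages you actually
have installed) is to hold the packages:

  apt-mark hold ceph ceph-common ceph-osd ceph-mon ceph-mgr librados2 librbd1
  # once the release is officially announced:
  apt-mark unhold ceph ceph-common ceph-osd ceph-mon ceph-mgr librados2 librbd1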
>
>
>
>
>  On Wed, 04 Sep 2019 09:41:03 +0800 Alex Litvak 
>  wrote 
>
>
> If it is a release it broke my ansible installation because it is missing 
> librados2 
>  
> https://download.ceph.com/rpm-nautilus/el7/x86_64/librados2-14.2.3-0.el7.x86_64.rpm
>  (404 Not Found). 
>  
> Please fix it one way or another. 
>  
> On 9/3/2019 8:31 PM, Sasha Litvak wrote: 
>> Is there an actual release or an accident? 
>> 
>> ___ 
>> ceph-users mailing list -- ceph-us...@ceph.io 
>> To unsubscribe send an email to ceph-users-le...@ceph.io 
>> 
>  
>  
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Lifecycle and dynamic resharding

2019-08-02 Thread Abhishek Lekshmanan
"Sean Purdy"  writes:

> Hi,
>
> A while back I reported a bug in luminous where lifecycle on a versioned 
> bucket wasn't removing delete markers.
>
> I'm interested in this phrase in the pull request:
>
> "you can't expect lifecycle to work with dynamic resharding enabled."

The luminous backport is still a work in progress; until it is done,
lifecycle with dynamic resharding enabled will have issues on luminous.
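
If you need a stopgap on luminous until the backport lands, one option
(just a sketch, not part of the backport PR; adjust the section name to
your RGW instance) is to switch off dynamic resharding so lifecycle can
run cleanly, and reshard the affected buckets manually later:

  [client.rgw.<name>]
  rgw dynamic resharding = false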
>
> Why not?
>
>
> https://github.com/ceph/ceph/pull/29122
> https://tracker.ceph.com/issues/36512
>
> Sean
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should I use "rgw s3 auth order = local, external"

2019-07-29 Thread Abhishek Lekshmanan
Christian  writes:

> Hi,
>
> I found this (rgw s3 auth order = local, external) on the web:
> https://opendev.org/openstack/charm-ceph-radosgw/commit/3e54b570b1124354704bd5c35c93dce6d260a479
>
> Which is seemingly exactly what I need for circumventing higher
> latency when switching on keystone authentication. In fact it even
> improves performance slightly without enabling keystone authentication
> which strikes me as odd. Which leads me to the conclusion that this is
> disabling some mechanism that usually takes time.

By default RGW tries the external authentication engines before
attempting local auth, so when `rgw s3 auth use keystone` is enabled,
Keystone is tried first. That is the right behaviour if you don't want
users created via radosgw-admin to shadow the actual users in Keystone.
Changing the order makes RGW look the user up locally first, which avoids
that round trip to Keystone for local users.
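
For reference, the relevant ceph.conf bits would look something like the
following (a sketch; the section name and the keystone URL are
placeholders):

  [client.rgw.<name>]
  rgw s3 auth use keystone = true
  rgw keystone url = http://keystone.example.com:5000
  # look up users locally first, then fall back to external engines
  rgw s3 auth order = local, external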

In the case where you disabled Keystone, was `rgw keystone url` empty?

> I could not find any official documentation for this option.
> Does anyone have any experience with this?
>
> Regards,
> Christian
>
> PS: Sorry for the resend, I used the wrong sending address.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v13.2.6 Mimic released

2019-06-04 Thread Abhishek Lekshmanan

We're glad to announce the sixth bugfix release of the Mimic v13.2.x
long term stable release series. We recommend that all Mimic users
upgrade. We thank everyone for contributing towards this release.

Notable Changes
---
* Ceph v13.2.6 now packages python bindings for python3.6 instead of
  python3.4, because EPEL7 recently switched from python3.4 to
  python3.6 as the native python3. See the announcement [1] for more
  details on the background of this change.
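
A quick way to sanity-check the new bindings after upgrading on an EPEL7
host (just an illustrative check, assuming the python3 binding packages
are installed):

  python3.6 -c 'import rados, rbd'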


For a detailed changelog, please refer to the official blog post entry
at https://ceph.com/releases/v13-2-6-mimic-released/


[1]: 
https://lists.fedoraproject.org/archives/list/epel-annou...@lists.fedoraproject.org/message/EGUMKAIMPK2UD5VSHXM53BH2MBDGDWMO

Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.6.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7b695f835b03642f85998b2ae7b6dd093d9fbce4

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Beast frontend and ipv6 options

2019-05-02 Thread Abhishek Lekshmanan
Daniel Gryniewicz  writes:

> After discussing with Casey, I'd like to propose some clarifications to 
> this.
>
> First, we do not treat EAFNOSUPPORT as a non-fatal error.  Any other 
> error binding is fatal, but that one we warn and continue.
>
> Second, we treat "port=" as expanding to "endpoint=0.0.0.0:, 
> endpoint=[::]".
>
> Then, we just process the set of endpoints properly.  This should, I 
> believe, result in simple, straight-forward code, and easily 
> understandable semantics, and should make it simple for orchetrators.

Agreed, this makes a lot of sense. Specifying both a port and an endpoint
is somewhat of a corner case, and I guess for that particular case a
failure to bind is acceptable, with the documentation already mentioning
the port's implicit endpoint behaviour.
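
To spell out the proposed semantics with an example: a frontend
configuration such as

  rgw frontends = beast port=8000

would then behave as if it had been written as

  rgw frontends = beast endpoint=0.0.0.0:8000 endpoint=[::]:8000

while an explicit endpoint list would bind only to the addresses given.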
>
> This would make 1 and 2 below fallout naturally.  3 is modified so that 
> we only use configured endpoints, but port= is now implicit endpoint 
> configuration.



>
> Daniel
>
> On 5/2/19 10:08 AM, Daniel Gryniewicz wrote:
>> Based on past experience with this issue in other projects, I would 
>> propose this:
>> 
>> 1. By default (rgw frontends=beast), we should bind to both IPv4 and 
>> IPv6, if available.
>> 
>> 2. Just specifying port (rgw frontends=beast port=8000) should apply to 
>> both IPv4 and IPv6, if available.
>> 
>> 3. If the user provides endpoint config, we should use only that 
>> endpoint config.  For example, if they provide only v4 addresses, we 
>> should only bind to v4.
>> 
>> This should all be independent of the bindv6only setting; that is, we 
>> should specifically bind our v4 and v6 addresses, and not depend on the 
>> system to automatically bind v4 when binding v6.
>> 
>> In the case of 1 or 2, if the system has disabled either v4 or v6, this 
>> should not be an error, as long as one of the two binds works.  In the 
>> case of 3, we should error out if any configured endpoint cannot be bound.
>> 
>> This should allow an orchestrator to confidently install a system, 
>> knowing what will happen, without needing to know or manipulate the 
>> bindv6only flag.
>> 
>> As for what happens if you specify an endpoint and a port, I don't have 
>> a strong opinion.  I see 2 reasonable possibilites:
>> 
>> a. Make it an error
>> 
>> b. Treat a port in this case as an endpoint of 0.0.0.0:port (v4-only)
>> 
>> Daniel
>> 
>> On 4/26/19 4:49 AM, Abhishek Lekshmanan wrote:
>>>
>>> Currently RGW's beast frontend supports ipv6 via the endpoint
>>> configurable. The port option will bind to ipv4 _only_.
>>>
>>> http://docs.ceph.com/docs/master/radosgw/frontends/#options
>>>
>>> Since most Linux systems default the net.ipv6.bindv6only sysctl to false,
>>> specifying a v6 endpoint will usually bind to both v4 and v6. But this
>>> also means that deployment systems must be
>>> aware of this while configuring depending on whether both v4 and v6
>>> endpoints need to work or not. Specifying both a v4 and v6 endpoint or a
>>> port (v4) and endpoint with the same v6 port will currently lead to a
>>> failure as the system would've already bound the v6 port to both v4 and
>>> v6. This leaves us with a few options.
>>>
>>> 1. Keep the implicit behaviour as it is, document this, as systems are
>>> already aware of sysconfig flags and will expect that at a v6 endpoint
>>> will bind to both v4 and v6.
>>>
>>> 2. Be explicit with endpoints & configuration, Beast itself overrides
>>> the socket option to bind both v4 and v6, which means that v6 endpoint
>>> will bind to v6 *only* and binding to a v4 will need an explicit
>>> specification. (there is a pr in progress for this:
>>> https://github.com/ceph/ceph/pull/27270)
>>>
>>> Any more suggestions on how systems handle this are also welcome.
>>>
>>> -- 
>>> Abhishek
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> 
>
>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW Beast frontend and ipv6 options

2019-04-26 Thread Abhishek Lekshmanan


Currently RGW's beast frontend supports ipv6 via the endpoint
configurable. The port option will bind to ipv4 _only_.

http://docs.ceph.com/docs/master/radosgw/frontends/#options

Since most Linux systems default the net.ipv6.bindv6only sysctl to false,
specifying a v6 endpoint will usually bind to both v4 and v6. But this
also means that deployment systems must be
aware of this while configuring depending on whether both v4 and v6
endpoints need to work or not. Specifying both a v4 and v6 endpoint or a
port (v4) and endpoint with the same v6 port will currently lead to a
failure as the system would've already bound the v6 port to both v4 and
v6. This leaves us with a few options.

1. Keep the implicit behaviour as it is, document this, as systems are
already aware of sysconfig flags and will expect that at a v6 endpoint
will bind to both v4 and v6.

2. Be explicit with endpoints & configuration, Beast itself overrides
the socket option to bind both v4 and v6, which means that v6 endpoint
will bind to v6 *only* and binding to a v4 will need an explicit
specification. (there is a pr in progress for this:
https://github.com/ceph/ceph/pull/27270)
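
For reference, what a given host currently does with a v6 wildcard bind
can be checked with

  sysctl net.ipv6.bindv6only

where 0 means a v6 wildcard socket also accepts v4 and 1 means it is v6
only.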

Any more suggestions on how systems handle this are also welcome.

--
Abhishek 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v12.2.12 Luminous released

2019-04-15 Thread Abhishek Lekshmanan
Paul Emmerich  writes:

> I think the most notable change here is the backport of the new bitmap
> allocator, but that's missing completely from the change log.

Updated the changelog in the docs and the blog. The earlier script was
ignoring entries whose backport tracker issue didn't link back to a
master tracker issue.
>
>
> Paul
>
> -- 
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Apr 12, 2019 at 6:48 PM Abhishek Lekshmanan  wrote:
>>
>>
>> We are happy to announce the next bugfix release for v12.2.x Luminous
>> stable release series. We recommend all luminous users to upgrade to
>> this release. Many thanks to everyone who contributed backports and a
>> special mention to Yuri for the QE efforts put in to this release.
>>
>> Notable Changes
>> ---
>> * In 12.2.11 and earlier releases, keyring caps were not checked for 
>> validity,
>>   so the caps string could be anything. As of 12.2.12, caps strings are
>>   validated and providing a keyring with an invalid caps string to, e.g.,
>>   `ceph auth add` will result in an error.
>>
>> For the complete changelog, please refer to the release blog entry at
>> https://ceph.com/releases/v12-2-12-luminous-released/
>>
>> Getting ceph:
>> 
>> * Git at git://github.com/ceph/ceph.git
>> * Tarball at http://download.ceph.com/tarballs/ceph-12.2.12.tar.gz
>> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
>> * Release git sha1: 1436006594665279fe734b4c15d7e08c13ebd777
>>
>> --
>> Abhishek Lekshmanan
>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>> HRB 21284 (AG Nürnberg)
>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Abhishek Lekshmanan
cts cluster creation and
  bootstrapping, and it does not support listing multiple addresses
  (e.g., both a v2 and v1 protocol address).  We strongly recommend
  the option be removed and instead a single ``mon host`` option be
  specified in the ``[global]`` section to allow daemons and clients
  to discover the monitors (see the sketch after these notes).

* New command ``ceph fs fail`` has been added to quickly bring down a file
  system. This is a single command that unsets the joinable flag on the file
  system and brings down all of its ranks.

* The ``cache drop`` admin socket command has been removed. The ``ceph
  tell mds.X cache drop`` remains.
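
To illustrate the ``mon host`` recommendation, a minimal ``[global]``
sketch (the addresses are placeholders; 3300/6789 are the default v2/v1
monitor ports):

  [global]
  mon host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789],[v2:10.0.0.2:3300,v1:10.0.0.2:6789],[v2:10.0.0.3:3300,v1:10.0.0.3:6789]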

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.0.tar.gz
* For packages, see 
http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 3a54b2b6d167d4a2a19e003a705696d4fe619afc

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v13.2.5 Mimic released

2019-03-13 Thread Abhishek Lekshmanan

We're glad to announce the fifth bug fix release of Mimic v13.2.X stable
release series. We recommend that all users upgrade.

Notable Changes
---

* This release fixes the pg log hard limit bug that was introduced in
  13.2.2, https://tracker.ceph.com/issues/36686. A flag called
  `pglog_hardlimit` has been introduced, which is off by default. Enabling
  this flag will limit the length of the pg log. In order to enable
  that, the flag must be set by running `ceph osd set pglog_hardlimit`
  after completely upgrading to 13.2.2. Once the cluster has this flag
  set, the length of the pg log will be capped by a hard limit. Once set,
  this flag *must not* be unset anymore. In luminous, this feature was
  introduced in 12.2.11. Users who are running 12.2.11, and want to
  continue to use this feature, should upgrade to 13.2.5 or later.

* This release also fixes a CVE on civetweb, CVE-2019-3821 where SSL file
  descriptors were not closed in civetweb in case the initial negotiation fails.

* There have been fixes to RGW dynamic and manual resharding, so that resharding
  no longer leaves behind stale bucket instances that have to be removed
  manually. For finding and cleaning up older instances from earlier reshards,
  the radosgw-admin commands `reshard stale-instances list` and `reshard
  stale-instances rm` should do the necessary cleanup. These commands should
  *not* be used on a multisite setup, as the stale instances there may not
  actually be leftovers from a reshard and removing them can have consequences.
  In the next version the admin CLI will prevent these commands from being run
  on a multisite cluster; for the current release, users are urged not to use
  the delete command on a multisite cluster.
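
On a single-site cluster the cleanup described above would simply be
(shown only as an illustration of the commands named in the note):

  radosgw-admin reshard stale-instances list
  radosgw-admin reshard stale-instances rm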

For a detailed changelog please refer to the official release blog at
https://ceph.com/releases/v13-2-5-mimic-released/

Getting ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: cbff874f9007f1869bfd3821b7e33b2a6ffd4988

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v13.2.4 Mimic released

2019-01-07 Thread Abhishek Lekshmanan

This is the fourth bugfix release of the Mimic v13.2.x long term stable
release series. This release includes two security fixes atop of v13.2.3
We recommend all users upgrade to this version. If you've already
upgraded to v13.2.3, the same restrictions from v13.2.2->v13.2.3 apply
here as well.

Notable Changes
---

* CVE-2018-16846: rgw: enforce bounds on max-keys/max-uploads/max-parts 
(`issue#35994 <http://tracker.ceph.com/issues/35994>`_)
* CVE-2018-14662: mon: limit caps allowed to access the config store

Notable Changes in v13.2.3
---

* The default memory utilization for the mons has been increased
  somewhat.  Rocksdb now uses 512 MB of RAM by default, which should
  be sufficient for small to medium-sized clusters; large clusters
  should tune this up.  Also, the `mon_osd_cache_size` has been
  increased from 10 OSDMaps to 500, which will translate to an
  additional 500 MB to 1 GB of RAM for large clusters, and much less
  for small clusters.

* Ceph v13.2.2 includes a wrong backport, which may cause mds to go into
  'damaged' state when upgrading Ceph cluster from previous version.
  The bug is fixed in v13.2.3. If you are already running v13.2.2,
  upgrading to v13.2.3 does not require special action.

* The bluestore_cache_* options are no longer needed. They are replaced
  by osd_memory_target, defaulting to 4GB. BlueStore will expand
  and contract its cache to attempt to stay within this
  limit. Users upgrading should note this is a higher default
  than the previous bluestore_cache_size of 1GB, so OSDs using
  BlueStore will use more memory by default.
  For more details, see the `BlueStore docs 
<http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#automatic-cache-sizing>`_.

* This version contains an upgrade bug, http://tracker.ceph.com/issues/36686,
  due to which upgrading during recovery/backfill can cause OSDs to fail. This
  bug can be worked around, either by restarting all the OSDs after the upgrade,
  or by upgrading when all PGs are in "active+clean" state. If you have already
  successfully upgraded to 13.2.2, this issue should not impact you. Going
  forward, we are working on a clean upgrade path for this feature.
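
One way to confirm the "all PGs active+clean" precondition before
starting such an upgrade (just an illustrative check) is:

  ceph pg stat

which should report every PG as active+clean.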


For more details please refer to the release blog at
https://ceph.com/releases/13-2-4-mimic-released/

Getting ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b10be4d44915a4d78a8e06aa31919e74927b142e

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mimic 13.2.3?

2019-01-04 Thread Abhishek Lekshmanan
Ashley Merrick  writes:

> If this is another nasty bug like .2? Can’t you remove .3 from being
> available till .4 comes around?

This time there isn't a nasty bug, just a couple more fixes in .4 which
would be better to have. We're building 13.2.4 as we speak.
> Myself will wait for proper confirmation always but others may run an apt
> upgrade for any other reason and end up with .3 packages.

>
> ,Ashley
>
> On Fri, 4 Jan 2019 at 11:21 PM, Abhishek Lekshmanan 
> wrote:
>
>> Ashley Merrick  writes:
>>
>> > Another day and still nothing to say there has been an official
>> release..?!?
>>
>> Sorry just wait for a bit, we're building 13.2.4, don't install
>> 13.2.3 yet.
>> >
>> >
>> > On Fri, 4 Jan 2019 at 2:27 AM, Alex Litvak > >
>> > wrote:
>> >
>> >> It is true for all distros.  It doesn't happen the first time either. I
>> >> think it is a bit dangerous.
>> >>
>> >> On 1/3/19 12:25 AM, Ashley Merrick wrote:
>> >> > Have just run an apt update and have noticed there are some CEPH
>> >> > packages now available for update on my mimic cluster / ubuntu.
>> >> >
>> >> > Have yet to install these yet but it look's like we have the next
>> point
>> >> > release of CEPH Mimic, but not able to see any release note's or
>> >> > official comm's yet?..
>> >> >
>> >> > ___
>> >> > ceph-users mailing list
>> >> > ceph-users@lists.ceph.com
>> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >> >
>> >>
>> >>
>> >> ___
>> >> ceph-users mailing list
>> >> ceph-users@lists.ceph.com
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>> --
>> Abhishek Lekshmanan
>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>> HRB 21284 (AG Nürnberg)
>>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mimic 13.2.3?

2019-01-04 Thread Abhishek Lekshmanan
Ashley Merrick  writes:

> Another day and still nothing to say there has been an official release..?!?

Sorry, just wait for a bit; we're building 13.2.4, so don't install
13.2.3 yet.
>
>
> On Fri, 4 Jan 2019 at 2:27 AM, Alex Litvak 
> wrote:
>
>> It is true for all distros.  It doesn't happen the first time either. I
>> think it is a bit dangerous.
>>
>> On 1/3/19 12:25 AM, Ashley Merrick wrote:
>> > Have just run an apt update and have noticed there are some CEPH
>> > packages now available for update on my mimic cluster / ubuntu.
>> >
>> > Have yet to install these yet but it look's like we have the next point
>> > release of CEPH Mimic, but not able to see any release note's or
>> > official comm's yet?..
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Luminous v12.2.10 released

2018-11-27 Thread Abhishek Lekshmanan

We're happy to announce the tenth bug fix release of the Luminous
v12.2.x long term stable release series. The previous release, v12.2.9,
introduced the PG hard-limit patches which were found to cause an issue
in certain upgrade scenarios, and this release was expedited to revert
those patches. If you already successfully upgraded to v12.2.9, you
should **not** upgrade to v12.2.10, but rather **wait** for a release in
which http://tracker.ceph.com/issues/36686 is addressed. All other users
are encouraged to upgrade to this release.

Notable Changes
---

* This release reverts the PG hard-limit patches added in v12.2.9, in which
  a partial upgrade during recovery/backfill can cause OSDs on the previous
  version to fail with assert(trim_to <= info.last_complete). The
  workaround for users is to upgrade and restart all OSDs to a version with the
  pg hard limit, or only upgrade when all PGs are active+clean.

  See also: http://tracker.ceph.com/issues/36686

  As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
  upgrade to v12.2.10 until the linked tracker issue has been fixed.

* The bluestore_cache_* options are no longer needed. They are replaced
  by osd_memory_target, defaulting to 4GB. BlueStore will expand
  and contract its cache to attempt to stay within this
  limit. Users upgrading should note this is a higher default
  than the previous bluestore_cache_size of 1GB, so OSDs using
  BlueStore will use more memory by default.

  For more details, see BlueStore docs[1]
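
If you want to set the target explicitly, for example to something lower
on memory-constrained nodes, a minimal ceph.conf sketch (the value shown
is just the new 4GB default written out in bytes):

  [osd]
  osd memory target = 4294967296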


For the complete release notes with changelog, please check out the
release blog entry at:
http://ceph.com/releases/v12-2-10-luminous-released

Getting ceph:

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 177915764b752804194937482a39e95e0ca3de94


[1]: 
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#cache-size

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v13.2.2 Mimic released

2018-09-26 Thread Abhishek Lekshmanan
avoid dereferencing invalid complete_to (pr#23951, xie 
xingguo)
* osd: do_sparse_read(): Verify checksum earlier so we will try to repair 
(issue#24875, pr#23378, David Zafman)
* osd: segv in OSDMap::calc_pg_upmaps from balancer (issue#22056, issue#26933, 
pr#23888, Brad Hubbard)
* qa/rgw: patch keystone requirements.txt (issue#26946, issue#23659, pr#23771, 
Casey Bodley)
* qa/suites/rados: move valgrind test to singleton-flat (issue#24992, pr#23744, 
Sage Weil)
* qa/tasks: s3a fix mirror (pr#24038, Vasu Kulkarni)
* qa/tests: update ansible version to 2.5 (pr#24091, Yuri Weinstein)
* qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04 (issue#26956, 
issue#26967, issue#24679, pr#23769, Patrick Donnelly)
* qa: fix ceph-disk suite and add coverage for ceph-detect-init (pr#23337, 
Nathan Cutler)
* rados python bindings use prval from stack (issue#25204, issue#25175, 
pr#23863, Sage Weil)
* rados: not all exceptions accept keyargs (issue#25178, issue#24033, pr#23335, 
Rishabh Dave)
* rbd: improved trash snapshot namespace handling (issue#25121, issue#23398, 
issue#25114, pr#23559, Mykola Golub, Jason Dillaman)
* rgw: Fix log level of gc_iterate_entries (issue#23801, issue#26921, pr#23686, 
iliul)
* rgw: Limit the number of lifecycle rules on one bucket (issue#26845, 
issue#24572, pr#23521, Zhang Shaowen)
* rgw: The delete markers generated by object expiration should have owner 
(issue#24568, issue#26847, pr#23541, Zhang Shaowen)
* rgw: add curl_low_speed_limit and curl_low_speed_time config to avoid 
(issue#25021, pr#23173, Mark Kogan, Zhang Shaowen)
* rgw: change default rgw_thread_pool_size to 512 (issue#25214, issue#25088, 
issue#25218, issue#24544, pr#23383, Douglas Fuller, Casey Bodley)
* rgw: civetweb fails on urls with control characters (issue#26849, 
issue#24158, pr#23855, Abhishek Lekshmanan)
* rgw: civetweb: use poll instead of select while waiting on sockets 
(issue#35954, pr#24058, Abhishek Lekshmanan)
* rgw: do not ignore EEXIST in RGWPutObj::execute (issue#25078, issue#22790, 
pr#23206, Matt Benjamin)
* rgw: fail to recover index from crash mimic backport (issue#24640, 
issue#24629, issue#24280, pr#23118, Tianshan Qu)
* rgw: radosgw-admin: 'sync error trim' loops until complete (issue#24873, 
issue#24984, pr#23140, Casey Bodley)
* rgw_file: deep stat handling (issue#26842, issue#24915, pr#23498, Matt 
Benjamin)
* rpm: should change ceph-mgr package depency from py-bcrypt to python2-bcrypt 
(issue#27212, pr#23868, Konstantin Sakhinov)
* rpm: silence osd block chown (issue#25152, pr#23324, Dan van der Ster)
* run-rbd-unit-tests.sh test fails to finish in jenkin's make check run 
(issue#27060, issue#24910, pr#23858, Mykola Golub)
* scrub livelock (issue#26931, issue#26890, pr#23722, Sage Weil)
* spdk: compile with -march=core2 instead of -march=native (issue#25032, 
pr#23175, Nathan Cutler)
* test: Use pids instead of jobspecs which were wrong (issue#32079, 
issue#27056, pr#23893, David Zafman)
* tests: cluster [WRN] 25 slow requests in powercycle (issue#25119, pr#23886, 
Neha Ojha)
* tools/ceph-detect-init: support RHEL as a platform (issue#18163, pr#23303, 
Nathan Cutler)
* tools: ceph-detect-init: support SLED (issue#18163, pr#23111, Nathan Cutler)
* tools: cephfs-data-scan: print the max used ino (issue#26978, issue#26925, 
pr#23880, "Yan, Zheng")
* qa/tests:  added OBJECT_MISPLACED to the whitelist (pr#23301, Yuri Weinstein)
* qa/tests: added v13.2.1 to the mix (pr#23218, Yuri Weinstein)

Getting ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.2.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 02899bfda814146b021136e9d8e80eba494e1126
-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v12.2.8 Luminous released

2018-09-06 Thread Abhishek Lekshmanan
Adrian Saul  writes:

> Can I confirm if this bluestore compression assert issue is resolved in 
> 12.2.8?
>
> https://tracker.ceph.com/issues/23540

The PR itself from the backport issue is in the release notes, i.e.
pr#22909, which references two tracker issues. Unfortunately, the script
that generates the release notes follows only one tracker issue back to
its original non-backport tracker, which is why only #21480 was mentioned.

Thanks for noticing, I'll fix the script to follow multiple issues. 
>
> I notice that it has a backport that is listed against 12.2.8 but there is no 
> mention of that issue or backport listed in the release notes.
>
>
>> -Original Message-
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of Abhishek Lekshmanan
>> Sent: Wednesday, 5 September 2018 2:30 AM
>> To: ceph-de...@vger.kernel.org; ceph-us...@ceph.com; ceph-
>> maintain...@ceph.com; ceph-annou...@ceph.com
>> Subject: v12.2.8 Luminous released
>>
>>
>> We're glad to announce the next point release in the Luminous v12.2.X stable
>> release series. This release contains a range of bugfixes and stability
>> improvements across all the components of ceph. For detailed release notes
>> with links to tracker issues and pull requests, refer to the blog post at
>> http://ceph.com/releases/v12-2-8-released/
>>
>> Upgrade Notes from previous luminous releases
>> -
>>
>> When upgrading from v12.2.5 or v12.2.6 please note that upgrade caveats
>> from
>> 12.2.5 will apply to any _newer_ luminous version including 12.2.8. Please
>> read the notes at https://ceph.com/releases/12-2-7-luminous-
>> released/#upgrading-from-v12-2-6
>>
>> For the cluster that installed the broken 12.2.6 release, 12.2.7 fixed the
>> regression and introduced a workaround option `osd distrust data digest =
>> true`, but 12.2.7 clusters still generated health warnings like ::
>>
>>   [ERR] 11.288 shard 207: soid
>>   11:1155c332:::rbd_data.207dce238e1f29.0527:head
>> data_digest
>>   0xc8997a5b != data_digest 0x2ca15853
>>
>>
>> 12.2.8 improves the deep scrub code to automatically repair these
>> inconsistencies. Once the entire cluster has been upgraded and then fully
>> deep scrubbed, and all such inconsistencies are resolved; it will be safe to
>> disable the `osd distrust data digest = true` workaround option.
>>
>> Changelog
>> -
>> * bluestore: set correctly shard for existed Collection (issue#24761, 
>> pr#22860,
>> Jianpeng Ma)
>> * build/ops: Boost system library is no longer required to compile and link
>> example librados program (issue#25054, pr#23202, Nathan Cutler)
>> * build/ops: Bring back diff -y for non-FreeBSD (issue#24396, issue#21664,
>> pr#22848, Sage Weil, David Zafman)
>> * build/ops: install-deps.sh fails on newest openSUSE Leap (issue#25064,
>> pr#23179, Kyr Shatskyy)
>> * build/ops: Mimic build fails with -DWITH_RADOSGW=0 (issue#24437,
>> pr#22864, Dan Mick)
>> * build/ops: order rbdmap.service before remote-fs-pre.target
>> (issue#24713, pr#22844, Ilya Dryomov)
>> * build/ops: rpm: silence osd block chown (issue#25152, pr#23313, Dan van
>> der Ster)
>> * cephfs-journal-tool: Fix purging when importing an zero-length journal
>> (issue#24239, pr#22980, yupeng chen, zhongyan gu)
>> * cephfs: MDSMonitor: uncommitted state exposed to clients/mdss
>> (issue#23768, pr#23013, Patrick Donnelly)
>> * ceph-fuse mount failed because no mds (issue#22205, pr#22895, liyan)
>> * ceph-volume add a __release__ string, to help version-conditional calls
>> (issue#25170, pr#23331, Alfredo Deza)
>> * ceph-volume: adds test for `ceph-volume lvm list /dev/sda` (issue#24784,
>> issue#24957, pr#23350, Andrew Schoen)
>> * ceph-volume: do not use stdin in luminous (issue#25173, issue#23260,
>> pr#23367, Alfredo Deza)
>> * ceph-volume enable the ceph-osd during lvm activation (issue#24152,
>> pr#23394, Dan van der Ster, Alfredo Deza)
>> * ceph-volume expand on the LVM API to create multiple LVs at different
>> sizes (issue#24020, pr#23395, Alfredo Deza)
>> * ceph-volume lvm.activate conditional mon-config on prime-osd-dir
>> (issue#25216, pr#23397, Alfredo Deza)
>> * ceph-volume lvm.batch remove non-existent sys_api property
>> (issue#34310, pr#23811, Alfredo Deza)
>> * ceph-volume lvm.listing only include devices if they exist (issue#24952,
>> pr#23150, Alfredo Deza)
>> * ceph-volume: process.call with stdin in Python 3 fix (issue#24993, 
>> pr#23238,
>

[ceph-users] v12.2.8 Luminous released

2018-09-04 Thread Abhishek Lekshmanan
efu Chai)
* librados: fix buffer overflow for aio_exec python binding (issue#23964, 
pr#22708, Aleksei Gutikov)
* librbd: force 'invalid object map' flag on-disk update (issue#24434, 
pr#22753, Mykola Golub)
* librbd: utilize the journal disabled policy when removing images 
(issue#23512, pr#23595, Jason Dillaman)
* mds: don't report slow request for blocked filelock request (issue#22428, 
pr#22782, "Yan, Zheng")
* mds: dump recent events on respawn (issue#24853, pr#23213, Patrick Donnelly)
* mds: handle discontinuous mdsmap (issue#24856, pr#23169, "Yan, Zheng")
* mds: increase debug level for dropped client cap msg (issue#24855, pr#23214, 
Patrick Donnelly)
* mds: low wrlock efficiency due to dirfrags traversal (issue#24467, pr#22885, 
Xuehan Xu)
* mds: print mdsmap processed at low debug level (issue#24852, pr#23212, 
Patrick Donnelly)
* mds: scrub doesn't always return JSON results (issue#23958, pr#23222, Venky 
Shankar)
* mds: unset deleted vars in shutdown_pass (issue#23766, pr#23015, Patrick 
Donnelly)
* mgr: add units to performance counters (issue#22747, pr#23266, Ernesto 
Puerta, Rubab Syed)
* mgr: ceph osd safe-to-destroy crashes the mgr (issue#23249, pr#22806, Sage 
Weil)
* mgr/MgrClient: Protect daemon_health_metrics (issue#23352, pr#23459, Kjetil 
Joergensen, Brad Hubbard)
* mon: Add option to view IP addresses of clients in output of 'ceph features' 
(issue#21315, pr#22773, Paul Emmerich)
* mon/HealthMonitor: do not send MMonHealthChecks to pre-luminous mon 
(issue#24481, pr#22655, Sage Weil)
* os/bluestore: fix flush_commit locking (issue#21480, pr#22904, Sage Weil)
* os/bluestore: fix incomplete faulty range marking when doing compression 
(issue#21480, pr#22909, Igor Fedotov)
* os/bluestore: fix races on SharedBlob::coll in ~SharedBlob (issue#24859, 
pr#23064, Radoslaw Zarzynski)
* osdc: Fix the wrong BufferHead offset (issue#24484, pr#22865, dongdong tao)
* osd: do_sparse_read(): Verify checksum earlier so we will try to repair and 
missed backport (issue#24875, pr#23379, xie xingguo, David Zafman)
* osd: eternal stuck PG in 'unfound_recovery' (issue#24373, pr#22546, Sage Weil)
* osd: may get empty info at recovery (issue#24588, pr#22862, Sage Weil)
* osd/OSDMap: CRUSH_TUNABLES5 added in jewel, not kraken (issue#25057, 
pr#23227, Sage Weil)
* osd/Session: fix invalid iterator dereference in Sessoin::have_backoff() 
(issue#24486, pr#22729, Sage Weil)
* pjd: cd: too many arguments (issue#24307, pr#22883, Neha Ojha)
* PurgeQueue sometimes ignores Journaler errors (issue#24533, pr#22811, John 
Spray)
* pybind: pybind/mgr/mgr_module: make rados handle available to all modules 
(issue#24788, issue#25102, pr#23235, Ernesto Puerta, Sage Weil)
* pybind: Python bindings use iteritems method which is not Python 3 compatible 
(issue#24779, pr#22918, Nathan Cutler, Kefu Chai)
* pybind: rados.pyx: make all exceptions accept keyword arguments (issue#24033, 
pr#22979, Rishabh Dave)
* rbd: fix issues in IEC unit handling (issue#26927, issue#26928, pr#23776, 
Jason Dillaman)
* repeated eviction of idle client until some IO happens (issue#24052, 
pr#22780, "Yan, Zheng")
* rgw: add curl_low_speed_limit and curl_low_speed_time config to avoid the 
thread hangs in data sync (issue#25019, pr#23144, Mark Kogan, Zhang Shaowen)
* rgw: add unit test for cls bi list command (issue#24483, pr#22846, Orit 
Wasserman, Xinying Song)
* rgw: do not ignore EEXIST in RGWPutObj::execute (issue#22790, pr#23207, Matt 
Benjamin)
* rgw: fail to recover index from crash luminous backport (issue#24640, 
issue#24280, pr#23130, Tianshan Qu)
* rgw: fix gc may cause a large number of read traffic (issue#24767, pr#22984, 
Xin Liao)
* rgw: fix the bug of radowgw-admin zonegroup set requires realm (issue#21583, 
pr#22767, lvshanchun)
* rgw: have a configurable authentication order (issue#23089, pr#23501, 
Abhishek Lekshmanan)
* rgw: index complete miss zones_trace set (issue#24590, pr#22820, Tianshan Qu)
* rgw: Invalid Access-Control-Request-Request may bypass 
validate_cors_rule_method (issue#24223, pr#22934, Jeegn Chen)
* rgw: meta and data notify thread miss stop cr manager (issue#24589, pr#22822, 
Tianshan Qu)
* rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR (issue#24603, 
pr#22817, cfanz)
* rgw performance regression for luminous 12.2.4 (issue#23379, pr#22930, Mark 
Kogan)
* rgw: radogw-admin reshard status command should print text for reshar… 
(issue#23257, pr#23019, Orit Wasserman)
* rgw: "radosgw-admin objects expire" always returns ok even if the pro… 
(issue#24592, pr#23000, Zhang Shaowen)
* rgw: require --yes-i-really-mean-it to run radosgw-admin orphans find 
(issue#24146, pr#22985, Matt Benjamin)
* rgw: REST admin metadata API paging failure bucket & bucket.instance: 
InvalidArgument (issue#23099, pr#22932, Matt Benjamin)
* rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine 
(issue#24566, pr#22942, Tianshan Qu)
* spdk: fix ceph-osd crash when

Re: [ceph-users] Packages for debian in Ceph repo

2018-09-03 Thread Abhishek Lekshmanan
arad...@tma-0.net writes:

> Can anyone confirm if the Ceph repos for Debian/Ubuntu contain packages for 
> Debian? I'm not seeing any, but maybe I'm missing something...
>
> I'm seeing ceph-deploy install an older version of ceph on the nodes (from 
> the 
> Debian repo) and then failing when I run "ceph-deploy osd ..." because ceph-
> volume doesn't exist on the nodes.
>
The newer versions of Ceph (from mimic onwards) require compiler
toolchains supporting C++17, which we unfortunately do not have for
stretch/jessie yet.

-
Abhishek 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.2.7 Luminous released

2018-07-17 Thread Abhishek Lekshmanan
 rgw_cache_expiry_interval 
(issue#24346, pr#22369, Casey Bodley, Matt Benjamin)

Notable changes in v12.2.6 Luminous
===

:note: This is a broken release with serious known regressions.  Do not
install it. The release notes below are to help track the changes that
went in 12.2.6 and hence a part of 12.2.7


- *Auth*:

  * In 12.2.4 and earlier releases, keyring caps were not checked for validity,
so the caps string could be anything. As of 12.2.6, caps strings are
validated and providing a keyring with an invalid caps string to, e.g.,
"ceph auth add" will result in an error.
  * CVE 2018-1128: auth: cephx authorizer subject to replay attack 
(issue#24836, Sage Weil)
  * CVE 2018-1129: auth: cephx signature check is weak (issue#24837, Sage Weil)
  * CVE 2018-10861: mon: auth checks not correct for pool ops (issue#24838, 
Jason Dillaman)


- The config-key interface can store arbitrary binary blobs but JSON
  can only express printable strings.  If binary blobs are present,
  the 'ceph config-key dump' command will show them as something like
  ``<<< binary blob of length N >>>``.
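
As an illustration of what that looks like (the key names and length
below are made up):

  ceph config-key dump
  {
      "mgr/dashboard/key": "<<< binary blob of length 1234 >>>",
      "some/other/key": "plain text value"
  }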

The full changelog for 12.2.6 is published in the release blog.

Getting ceph:
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.7.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v10.2.11 Jewel released

2018-07-11 Thread Abhishek Lekshmanan

We're glad to announce the v10.2.11 release of the Jewel stable release
series. This point release brings a number of important bugfixes and a
few important security fixes. This is most likely going to be the
final Jewel release (shine on you crazy diamond). We thank everyone in
the community for contributing towards this release and particularly
want to thank Nathan and Yuri for their relentless efforts in
backporting and testing this release.

We recommend that all Jewel 10.2.x users upgrade.

Notable Changes
---

* CVE 2018-1128: auth: cephx authorizer subject to replay attack (issue#24836 
http://tracker.ceph.com/issues/24836, Sage Weil)

* CVE 2018-1129: auth: cephx signature check is weak (issue#24837 
http://tracker.ceph.com/issues/24837, Sage Weil)

* CVE 2018-10861: mon: auth checks not correct for pool ops (issue#24838 
http://tracker.ceph.com/issues/24838, Jason Dillaman)

* The RBD C API's rbd_discard method and the C++ API's Image::discard method
  now enforce a maximum length of 2GB. This restriction prevents overflow of
  the result code.

* New OSDs will now use rocksdb for omap data by default, rather than
  leveldb. omap is used by RGW bucket indexes and CephFS directories,
  and when a single leveldb grows to 10s of GB with a high write or
  delete workload, it can lead to high latency when leveldb's
  single-threaded compaction cannot keep up. rocksdb supports multiple
  threads for compaction, which avoids this problem.

* The CephFS client now catches failures to clear dentries during startup
  and refuses to start as consistency and untrimmable cache issues may
  develop. The new option client_die_on_failed_dentry_invalidate (default:
  true) may be turned off to allow the client to proceed (dangerous!).

* In 10.2.10 and earlier releases, keyring caps were not checked for validity,
  so the caps string could be anything. As of 10.2.11, caps strings are
  validated and providing a keyring with an invalid caps string to, e.g.,
  "ceph auth add" will result in an error.

The changelog and the full release notes are at the release blog entry
at https://ceph.com/releases/v10-2-11-jewel-released/

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-10.2.11.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: e4b061b47f07f583c92a050d9e84b1813a35671e


Best,
Abhishek

--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [Ceph-announce] Luminous v12.2.4 released

2018-03-01 Thread Abhishek Lekshmanan
Jaroslaw Owsiewski <jaroslaw.owsiew...@allegro.pl> writes:

> What about this: https://tracker.ceph.com/issues/22015#change-105987 ?

Still has to wait for 12.2.5, unfortunately. 12.2.4 only has some
critical build/ceph-disk fixes and whatever PRs had already passed QE
after 12.2.3.
>
> Regards
>
> -- 
> Jarek
>
> 2018-02-28 16:46 GMT+01:00 Abhishek Lekshmanan <abhis...@suse.com>:
>
>>
>> This is the fourth bugfix release of Luminous v12.2.x long term stable
>> release series. This was primarily intended to fix a few build,
>> ceph-volume/ceph-disk issues from 12.2.3 and a few RGW issues. We
>> recommend all the users of 12.2.x series to update. A full changelog is
>> also published at the official release blog at
>> https://ceph.com/releases/v12-2-4-luminous-released/
>>
>> Notable Changes
>> ---
>> * cmake: check bootstrap.sh instead before downloading boost (issue#23071,
>> pr#20515, Kefu Chai)
>> * core: Backport of cache manipulation: issues #22603 and #22604
>> (issue#22604, issue#22603, pr#20353, Adam C. Emerson)
>> * core: Snapset inconsistency is detected with its own error (issue#22996,
>> pr#20501, David Zafman)
>> * tools: ceph-objectstore-tool: "$OBJ get-omaphdr" and "$OBJ list-omap"
>> scan all pgs instead of using specific pg (issue#21327, pr#20283, David
>> Zafman)
>> * ceph-volume: warn on mix of filestore and bluestore flags (issue#23003,
>> pr#20568, Alfredo Deza)
>> * ceph-volume: adds support to zap encrypted devices (issue#22878,
>> pr#20545, Andrew Schoen)
>> * ceph-volume: log the current running command for easier debugging
>> (issue#23004, pr#20597, Andrew Schoen)
>> * core: last-stat-seq returns 0 because osd stats are cleared
>> (issue#23093, pr#20548, Sage Weil, David Zafman)
>> * rgw:  make init env methods return an error (issue#23039, pr#20564,
>> Abhishek Lekshmanan)
>> * rgw: URL-decode S3 and Swift object-copy URLs (issue#22121, issue#22729,
>> pr#20236, Malcolm Lee, Matt Benjamin)
>> * rgw: parse old rgw_obj with namespace correctly (issue#22982, pr#20566,
>> Yehuda Sadeh)
>> * rgw: return valid Location element, CompleteMultipartUpload
>> (issue#22655, pr#20266, Matt Benjamin)
>> * rgw: use explicit index pool placement (issue#22928, pr#20565, Yehuda
>> Sadeh)
>> * tools: ceph-disk: v12.2.2 unable to create bluestore osd using ceph-disk
>> (issue#22354, pr#20563, Kefu Chai)
>>
>> Getting Ceph
>> 
>> * Git at git://github.com/ceph/ceph.git
>> * Tarball at http://download.ceph.com/tarballs/ceph-12.2.4.tar.gz
>> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
>> * For ceph-deploy, see http://docs.ceph.com/docs/
>> master/install/install-ceph-deploy
>> * Release git sha1: 52085d5249a80c5f5121a76d6288429f35e4e77b
>>
>> --
>> Abhishek Lekshmanan
>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>> HRB 21284 (AG Nürnberg)
>> ___
>> Ceph-announce mailing list
>> ceph-annou...@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-announce-ceph.com
>>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Luminous v12.2.4 released

2018-02-28 Thread Abhishek Lekshmanan

This is the fourth bugfix release of Luminous v12.2.x long term stable
release series. This was primarily intended to fix a few build,
ceph-volume/ceph-disk issues from 12.2.3 and a few RGW issues. We
recommend all the users of 12.2.x series to update. A full changelog is
also published at the official release blog at
https://ceph.com/releases/v12-2-4-luminous-released/

Notable Changes
---
* cmake: check bootstrap.sh instead before downloading boost (issue#23071, 
pr#20515, Kefu Chai)
* core: Backport of cache manipulation: issues #22603 and #22604 (issue#22604, 
issue#22603, pr#20353, Adam C. Emerson)
* core: Snapset inconsistency is detected with its own error (issue#22996, 
pr#20501, David Zafman)
* tools: ceph-objectstore-tool: "$OBJ get-omaphdr" and "$OBJ list-omap" scan 
all pgs instead of using specific pg (issue#21327, pr#20283, David Zafman)
* ceph-volume: warn on mix of filestore and bluestore flags (issue#23003, 
pr#20568, Alfredo Deza)
* ceph-volume: adds support to zap encrypted devices (issue#22878, pr#20545, 
Andrew Schoen)
* ceph-volume: log the current running command for easier debugging 
(issue#23004, pr#20597, Andrew Schoen)
* core: last-stat-seq returns 0 because osd stats are cleared (issue#23093, 
pr#20548, Sage Weil, David Zafman)
* rgw:  make init env methods return an error (issue#23039, pr#20564, Abhishek 
Lekshmanan)
* rgw: URL-decode S3 and Swift object-copy URLs (issue#22121, issue#22729, 
pr#20236, Malcolm Lee, Matt Benjamin)
* rgw: parse old rgw_obj with namespace correctly (issue#22982, pr#20566, 
Yehuda Sadeh)
* rgw: return valid Location element, CompleteMultipartUpload (issue#22655, 
pr#20266, Matt Benjamin)
* rgw: use explicit index pool placement (issue#22928, pr#20565, Yehuda Sadeh)
* tools: ceph-disk: v12.2.2 unable to create bluestore osd using ceph-disk 
(issue#22354, pr#20563, Kefu Chai)

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release git sha1: 52085d5249a80c5f5121a76d6288429f35e4e77b

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Luminous v12.2.3 released

2018-02-21 Thread Abhishek Lekshmanan
tion if OSD not in pgmap stats (issue#21707, 
pr#19084, Yanhu Cao)
* osd, pg, mgr: make snap trim queue problems visible (issue#22448, pr#20098, 
Piotr Dałek)
* osd: Pool Compression type option doesn't apply to new OSDs (issue#22419, 
pr#20106, Kefu Chai)
* osd: replica read can trigger cache promotion (issue#20919, pr#19499, Sage 
Weil)
* osd/ReplicatedPG.cc: recover_replicas: object added to missing set for 
backfill, but is not in recovering, error! (issue#21382, issue#14513, 
issue#18162, pr#20081, David Zafman)
* osd: subscribe osdmaps if any pending pgs (issue#22113, pr#19059, Kefu Chai)
* osd: "sudo cp /var/lib/ceph/osd/ceph-0/fsid ..." fails (issue#20736, 
pr#19631, Patrick Donnelly)
* os: fix 0-length zero semantics, test (issue#21712, pr#20049, Sage Weil)
* qa/tests: Applied PR 20053 to stress-split tests (issue#22665, pr#20451, Yuri 
Weinstein)
* rbd: abort in listing mapped nbd devices when running in a container 
(issue#22012, issue#22011, pr#19051, Li Wang)
* rbd: [api] compare-and-write methods not properly advertised (issue#22036, 
pr#18834, Jason Dillaman)
* rbd: class rbd.Image discardOSError: [errno 2147483648] error discarding 
region (issue#21966, pr#19058, Jason Dillaman)
* rbd: cluster resource agent ocf:ceph:rbd - wrong permissions (issue#22362, 
pr#19554, Nathan Cutler)
* rbd: disk usage on empty pool no longer returns an error message 
(issue#22200, pr#19107, Jason Dillaman)
* rbd: fix crash during map (issue#21808, pr#18698, Peter Keresztes Schmidt)
* rbd: [journal] tags are not being expired if no other clients are registered 
(issue#21960, pr#18840, Jason Dillaman)
* rbd: librbd: filter out potential race with image rename (issue#18435, 
pr#19853, Jason Dillaman)
* rbd-mirror: Allow a different data-pool to be used on the secondary cluster 
(issue#21088, pr#19305, Adam Wolfe Gordon)
* rbd-mirror: primary image should register in remote, non-primary image's 
journal (issue#21961, issue#21561, pr#20207, Jason Dillaman)
* rbd-mirror: sync image metadata when transfering remote image (issue#21535, 
pr#19484, Jason Dillaman)
* rbd: Python RBD metadata_get does not work (issue#22306, pr#19479, Mykola 
Golub)
* rbd: rbd ls -l crashes with SIGABRT (issue#21558, pr#19800, Jason Dillaman)
* rbd: [rbd-mirror] new pools might not be detected (issue#22461, pr#19625, 
Jason Dillaman)
* rbd: [rbd-nbd] Fedora does not register resize events (issue#22131, pr#19066, 
Jason Dillaman)
* rbd: [test] UpdateFeatures RPC message should be included in test_notify.py 
(issue#21936, pr#18838, Jason Dillaman)
* Revert " luminous: msg/async: unregister connection failed when racing 
happened" (issue#22231, pr#20247, Sage Weil)
* rgw: 501 is returned When init multipart is using V4 signature and chunk 
encoding (issue#22129, pr#19506, Jeegn Chen)
* rgw: add cors header rule check in cors option request (issue#22002, 
pr#19053, yuliyang)
* rgw: backport beast frontend and boost 1.66 update (issue#22101, issue#20935, 
issue#21831, issue#20048, issue#22600, issue#20971, pr#19848, Casey Bodley, 
Jiaying Ren)
* rgw: bucket index object not deleted after radosgw-admin bucket rm 
--purge-objects --bypass-gc (issue#22122, issue#19959, pr#19085, Aleksei 
Gutikov)
* rgw: bucket policy evaluation logical error (issue#21901, issue#21896, 
pr#19810, Adam C. Emerson)
* rgw: bucket resharding should not update bucket ACL or user stats 
(issue#22742, issue#22124, pr#20327, Orit Wasserman)
* rgw: check going_down() when lifecycle processing (issue#22099, pr#19088, Yao 
Zongyou)
* rgw: Dynamic bucket indexing, resharding and tenants seems to be broken 
(issue#22046, pr#19050, Orit Wasserman)
* rgw: file deadlock on lru evicting (issue#22736, pr#20075, Matt Benjamin)
* rgw: fix chained cache invalidation to prevent cache size growth 
(issue#22410, pr#19785, Mark Kogan)
* rgw: fix for empty query string in beast frontend (issue#22797, pr#20338, 
Casey Bodley)
* rgw: fix GET website response error code (issue#22272, pr#19489, Dmitry 
Plyakin)
* rgw: fix rewrite a versioning object create a new object bug (issue#21984, 
issue#22529, pr#19787, Enming Zhang, Matt Benjamin)
* rgw: Fix swift object expiry not deleting objects (issue#22084, pr#18972, 
Pavan Rallabhandi)
* rgw: Fix swift object expiry not deleting objects (issue#22084, pr#19090, 
Pavan Rallabhandi)
* rgw: librgw: fix shutdown error with resources uncleaned (issue#22296, 
pr#20073, Tao Chen)
* rgw: log keystone errors at a higher level (issue#22151, pr#19077, Abhishek 
Lekshmanan)
* rgw: make HTTP dechunking compatible with Amazon S3 (issue#21015, pr#19500, 
Radoslaw Zarzynski)
* rgw: modify s3 type subuser access permission fail (issue#21983, pr#18766, 
yuliyang)
* rgw: multisite: destination zone does not compress synced objects 
(issue#21895, pr#18867, Casey Bodley)
* rgw: multisite: 'radosgw-admin sync error list' contains temporary EBUSY 
errors (issue#22473, pr#19799, Casey Bodley)
* rgw: null instance mtime incorrect when enable versio

Re: [ceph-users] Luminous 12.2.3 release date?

2018-02-12 Thread Abhishek Lekshmanan
Hans van den Bogert <hansbog...@gmail.com> writes:

> Hi Wido,
>
> Did you ever get an answer? I'm eager to know as well.

We're currently testing 12.2.3; once the QE process completes we can
publish the packages, hopefully by the end of this week.
>
>
> Hans
>
> On Tue, Jan 30, 2018 at 10:35 AM, Wido den Hollander <w...@42on.com> wrote:
>> Hi,
>>
>> Is there a ETA yet for 12.2.3? Looking at the tracker there aren't that many
>> outstanding issues: http://tracker.ceph.com/projects/ceph/roadmap
>>
>> On Github we have more outstanding PRs though for the Luminous milestone:
>> https://github.com/ceph/ceph/milestone/10
>>
>> Are we expecting 12.2.3 in Feb? I'm asking because there are some Mgr
>> related fixes I'm backporting now for a few people which are in 12.2.3.
>>
>> Wido
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Luminous v12.2.2 released

2017-12-01 Thread Abhishek Lekshmanan
ue#21409, pr#17730, xie 
xingguo)
* mon/PGMap: Fix %USED calculation (issue#22247, pr#19230, Xiaoxi Chen)
* mon: update get_store_prefixes implementations (issue#21534, pr#18621, John 
Spray, huanwen ren)
* msgr: messages/MOSDMap: do compat reencode of crush map, too (issue#21882, 
pr#18456, Sage Weil)
* msgr: src/messages/MOSDMap: reencode OSDMap for older clients (issue#21660, 
pr#18140, Sage Weil)
* os/bluestore/BlueFS: fix race with log flush during async log compaction 
(issue#21878, pr#18503, Sage Weil)
* os/bluestore: fix another aio stall/deadlock (issue#21470, pr#18127, Sage 
Weil)
* os/bluestore: fix SharedBlob unregistration (issue#22039, pr#18983, Sage Weil)
* os/bluestore: handle compressed extents in blob unsharing checks 
(issue#21766, pr#18501, Sage Weil)
* os/bluestore: replace 21089 repair with something online (instead of fsck) 
(issue#21089, pr#17734, Sage Weil)
* os/bluestore: set bitmap freelist resolution to min_alloc_size (issue#21408, 
pr#18050, Sage Weil)
* os/blueStore::umount will crash when the BlueStore is opened by 
start_kv_only() (issue#21624, pr#18750, Chang Liu)
* osd: additional protection for out-of-bounds EC reads (issue#21629, pr#18413, 
Jason Dillaman)
* osd: allow recovery preemption (issue#21613, pr#18025, Sage Weil)
* osd: build_past_intervals_parallel: Ignore new partially created PGs 
(issue#21833, pr#18673, David Zafman)
* osd: dump bluestore debug on shutdown if debug option is set (issue#21259, 
pr#18103, Sage Weil)
* osd: make stat_bytes and stat_bytes_used counters PRIO_USEFUL (issue#21981, 
pr#18723, Yao Zongyou)
* osd: make the PG’s SORTBITWISE assert a more generous shutdown (issue#20416, 
pr#18132, Greg Farnum)
* osd: OSD metadata ‘backend_filestore_dev_node’ is unknown even for simple 
deployment (issue#20944, pr#17865, Sage Weil)
* rbd: [cli] mirror getter commands will fail if mirroring has never been 
enabled (issue#21319, pr#17861, Jason Dillaman)
* rbd: cls/journal: fixed possible infinite loop in expire_tags (issue#21956, 
pr#18626, Jason Dillaman)
* rbd: cls/journal: possible infinite loop within tag_list class method 
(issue#21771, pr#18417, Jason Dillaman)
* rbd: [rbd-mirror] asok hook names not updated when image is renamed 
(issue#20860, pr#17860, Mykola Golub)
* rbd: [rbd-mirror] forced promotion can result in incorrect status 
(issue#21559, pr#18337, Jason Dillaman)
* rbd: [rbd-mirror] peer cluster connections should filter out command line 
optionals (issue#21894, pr#18566, Jason Dillaman)
* rgw: add support for Swift’s per storage policy statistics (issue#17932, 
issue#21506, pr#17835, Radoslaw Zarzynski, Casey Bodley)
* rgw: add support for Swift’s reversed account listings (issue#21148, 
pr#17834, Radoslaw Zarzynski)
* rgw: avoid logging keystone revocation failures when no keystone is 
configured (issue#21400, pr#18441, Abhishek Lekshmanan)
* rgw: disable dynamic resharding in multisite enviorment (issue#21725, 
pr#18432, Orit Wasserman)
* rgw: encryption: PutObj response does not include sse-kms headers 
(issue#21576, pr#18442, Casey Bodley)
* rgw: encryption: reject requests that don’t provide all expected headers 
(issue#21581, pr#18429, Enming Zhang)
* rgw: expose --sync-stats via admin api (issues#21301, pr#18439, Nathan 
Johnson)
* rgw: failed CompleteMultipartUpload request does not release lock 
(issue#21596, pr#18430, Matt Benjamin)
* rgw_file: set s->obj_size from bytes_written (issue#21940, pr#18599, Matt 
Benjamin)
* rgw: fix a bug about inconsistent unit of comparison (issue#21590, pr#18438, 
gaosibei)
* rgw: fix bilog entries on multipart complete (issue#21772, pr#18334, Casey 
Bodley)
* rgw: fix error handling in ListBucketIndexesCR (issue#21735, pr#18591, Casey 
Bodley)
* rgw: fix refcnt issues (issue#21819, pr#18539, baixueyu)
* rgw: lc process only schdule the first item of lc objects (issue#21022, 
pr#17859, Shasha Lu)
* rgw: list bucket which enable versioning get wrong result when user marker 
(issue#21500, pr#18569, yuliyang)
* rgw: list_objects() honors end_marker regardless of namespace (issue#18977, 
pr#17832, Radoslaw Zarzynski)
* rgw: Multipart upload may double the quota (issue#21586, pr#18435, Sibei Gao)
* rgw: multisite: Get bucket location which is located in another zonegroup, 
will return 301 Moved Permanently (issue#21125, pr#17857, Shasha Lu)
* rgw: multisite: race between sync of bucket and bucket instance metadata 
(issue#21990, pr#18767, Casey Bodley)
* rgw: policy checks missing from Get/SetRequestPayment operations 
(issue#21389, pr#18440, Adam C. Emerson)
* rgw: radosgw-admin usage show loops indefinitly (issue#21196, pr#18437, Mark 
Kogan)
* rgw: rgw_file: explicit NFSv3 open() emulation (issue#21854, pr#18446, Matt 
Benjamin)
* rgw: rgw_file: fix write error when the write offset overlaps (issue#21455, 
pr#18004, Yao Zongyou)
* rgw: rgw file write error (issue#21455, pr#18433, Yao Zongyou)
* rgw: s3:GetBucketCORS/s3:PutBucketCORS policy fails with 403 (issue#21578, 
pr#18444, Adam C. Eme

Re: [ceph-users] s3 bucket policys

2017-11-07 Thread Abhishek Lekshmanan
Simon Leinen <simon.lei...@switch.ch> writes:

> Simon Leinen writes:
>> Adam C Emerson writes:
>>> On 03/11/2017, Simon Leinen wrote:
>>> [snip]
>>>> Is this supported by the Luminous version of RadosGW?
>
>>> Yes! There's a few bugfixes in master that are making their way into
>>> Luminous, but Luminous has all the features at present.
>
>> Does that mean it should basically work in 10.2.1?
>
> Sorry, I meant to say "in 12.2.1"!!!

Yeah, bucket policies should be usable in 12.2.1.

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] s3 bucket permishions

2017-10-26 Thread Abhishek Lekshmanan
nigel davies <nigdav...@gmail.com> writes:

> I am fallowing a guide at the mo.
>
> But I believe it's RWG users

We have support for AWS-like bucket policies:
http://docs.ceph.com/docs/master/radosgw/bucketpolicy/

Some permissions can also be controlled via ACLs.
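
For example, here is a rough boto3 sketch (the bucket and user names come from
your example below; the endpoint is a placeholder and the exact principal ARN
form depends on tenancy, see the doc above) that lets user_b read objects from
a bucket owned by user_a:

  import json
  import boto3

  # client authenticated as the bucket owner (user_a); endpoint is a placeholder
  s3 = boto3.client('s3', endpoint_url='http://<rgw-host>:<port>',
                    aws_access_key_id='user_a-access-key',
                    aws_secret_access_key='user_a-secret-key')

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": ["arn:aws:iam:::user/user_b"]},
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::bk_a", "arn:aws:s3:::bk_a/*"]
      }]
  }
  s3.put_bucket_policy(Bucket='bk_a', Policy=json.dumps(policy))
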
>
> On 25 Oct 2017 5:29 pm, "David Turner" <drakonst...@gmail.com> wrote:
>
>> Are you talking about RGW buckets with limited permissions for cephx
>> authentication? Or RGW buckets with limited permissions for RGW users?
>>
>> On Wed, Oct 25, 2017 at 12:16 PM nigel davies <nigdav...@gmail.com> wrote:
>>
>>> Hay All
>>>
>>> is it possible to set permissions to buckets
>>>
>>> for example if i have 2 users  (user_a and user_b) and 2 buckets (bk_a
>>> and bk_b)
>>>
>>> i want to set permissions, so user a can only see bk_a and user b can
>>> only see bk_b

This is the default behaviour: a bucket created by user_a is only accessible
to user_a (i.e. the bucket owner) and not to anyone else.
>>>
>>>
>>> I have been looking at cant see what i am after.
>>>
>>> Any advise would be welcome
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v10.2.10 Jewel released

2017-10-06 Thread Abhishek Lekshmanan
fault' 
(issue#19922, pr#15197, weiqiaomiao)
* rgw: replace '+' with "%20" in canonical query string for s3 v4 auth 
(issue#20501, pr#16951, Zhang Shaowen, Matt Benjamin)
* rgw: rgw_common.cc: modify the end check in RGWHTTPArgs::sys_get 
(issue#16072, pr#16268, zhao kun)
* rgw: rgw_file: cannot delete bucket w/uxattrs (issue#20061, issue#20047, 
issue#19214, issue#20045, pr#15459, Matt Benjamin)
* rgw: rgw_file: fix size and (c|m)time unix attrs in write_finish 
(issue#19653, pr#15449, Matt Benjamin)
* rgw: rgw_file:  incorrect lane lock behavior in evict_block() (issue#21141, 
pr#17597, Matt Benjamin)
* rgw: rgw_file: prevent conflict of mkdir between restarts (issue#20275, 
pr#17147, Gui Hecheng)
* rgw: rgw_file:  v3 write timer does not close open handles (issue#19932, 
pr#15456, Matt Benjamin)
* rgw: Segmentation fault when exporting rgw bucket in nfs-ganesha 
(issue#20663, pr#17285, Matt Benjamin)
* rgw: send data-log list infinitely (issue#20951, pr#17287, fang.yuxiang)
* rgw: set latest object's acl failed (issue#18649, pr#15451, Zhang Shaowen)
* rgw: Truncated objects (issue#20107, pr#17166, Yehuda Sadeh)
* rgw: uninitialized memory is accessed during creation of bucket's metadata 
(issue#20774, pr#17280, Radoslaw Zarzynski)
* rgw: usage logging on tenated buckets causes invalid memory reads 
(issue#20779, pr#17279, Radoslaw Zarzynski)
* rgw: user quota did not work well on multipart upload (issue#19285, 
issue#19602, pr#17277, Zhang Shaowen)
* rgw: VersionIdMarker and NextVersionIdMarker are not returned when listing 
object versions (issue#19886, pr#16316, Zhang Shaowen)
* rgw: when uploading objects continuously into a versioned bucket, some 
objects will not sync (issue#18208, pr#15452, lvshuhua)
* tools: ceph cli: Rados object in state configuring race (issue#16477, 
pr#15762, Loic Dachary)
* tools: ceph-disk: dmcrypt cluster must default to ceph (issue#20893, 
pr#16870, Loic Dachary)
* tools: ceph-disk: don't activate suppressed journal devices (issue#19489, 
pr#16703, David Disseldorp)
* tools: ceph-disk: separate ceph-osd --check-needs-\* logs (issue#19888, 
pr#15503, Loic Dachary)
* tools: ceph-disk: systemd unit timesout too quickly (issue#20229, pr#17133, 
Loic Dachary)
* tools: ceph-disk: Use stdin for 'config-key put' command (issue#21059, 
pr#17084, Brad Hubbard, Loic Dachary, Sage Weil)
* tools: libradosstriper processes arbitrary printf placeholders in user input 
(issue#20240, pr#17574, Stan K)

Getting Ceph

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-10.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: 5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.2.0 Luminous released

2017-08-29 Thread Abhishek Lekshmanan
ransactions, however, client-side looping is not
practical, and the methods have been deprecated.  Note that use of
either the IoCtx methods on older librados versions or the
deprecated methods on any version of librados will lead to
incomplete results if/when the new OSD limits are enabled.

  * The original librados rados_objects_list_open (C) and objects_begin
(C++) object listing API, deprecated in Hammer, has finally been
removed.  Users of this interface must update their software to use
either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
the new rados_object_list_begin (C) and object_list_begin (C++) API
before updating the client-side librados library to Luminous.
Object enumeration (via any API) with the latest librados version
and pre-Hammer OSDs is no longer supported.  Note that no in-tree
Ceph services rely on object enumeration via the deprecated APIs, so
only external librados users might be affected.
The newest (and recommended) rados_object_list_begin (C) and
object_list_begin (C++) API is only usable on clusters with the
SORTBITWISE flag enabled (Jewel and later).  (Note that this flag is
required to be set before upgrading beyond Jewel.)

- *CephFS*:

  * When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
available that uses "ceph.=" in the options column, instead
of putting configuration in the device column.  The old style syntax
still works.  See the documentation page "Mount CephFS in your
file systems table" for details.
  * CephFS clients without the 'p' flag in their authentication capability
string will no longer be able to set quotas or any layout fields.  This
flag previously only restricted modification of the pool and namespace
fields in layouts.
  * CephFS will generate a health warning if you have fewer standby daemons
than it thinks you wanted.  By default this will be 1 if you ever had
a standby, and 0 if you did not.  You can customize this using
`ceph fs set <fs name> standby_count_wanted <count>`.  Setting it
to zero will effectively disable the health check.
  * The "ceph mds tell ..." command has been removed.  It is superseded
by "ceph tell mds.<id> ..."
  * The `apply` mode of cephfs-journal-tool has been removed

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.0.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release git sha1: 32ce2a3ae5239ee33d6150705cdb24d43bab910c

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Anybody gotten boto3 and ceph RGW working?

2017-08-23 Thread Abhishek Lekshmanan
Bryan Banister <bbanis...@jumptrading.com> writes:

> Hello,
>
> I have the boto python API working with our ceph cluster but haven't figured 
> out a way to get boto3 to communicate yet to our RGWs.  Anybody have a simple 
> example?

 I just use the client interface as described in
 http://boto3.readthedocs.io/en/latest/reference/services/s3.html

 so something like::

 import boto3

 # point the client at your RGW endpoint (host/port below are placeholders)
 s3 = boto3.client('s3', 'us-east-1',
                   endpoint_url='http://<rgw-host>:<port>',
                   aws_access_key_id='access',
                   aws_secret_access_key='secret')

 s3.create_bucket(Bucket='foobar')
 s3.put_object(Bucket='foobar', Key='foo', Body='foo')
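
 and to sanity-check it you can read the data back with the same client, e.g.:

 print([b['Name'] for b in s3.list_buckets()['Buckets']])
 print(s3.get_object(Bucket='foobar', Key='foo')['Body'].read())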

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v12.1.4 Luminous (RC) released

2017-08-16 Thread Abhishek Lekshmanan
Alfredo Deza  writes:

> On Tue, Aug 15, 2017 at 10:35 PM, Matt Benjamin  wrote:
>> I think we need a v12.1.5 including #17040
We discussed this in the RGW standups today; we may not need one more RC
for the bug above, and should be fine as long as the fix is in 12.2.0.

>
> *I* think that this is getting to a point where we should just have
> nightly development releases.
>
> What is the benefit of waiting for each RC every two weeks (or so) otherwise?

We could consider something of this sort for M maybe?

Abhishek
> On one side we are treating the RC releases somewhat like normal
> releases, with proper announcements, waiting for
> QA suites to complete, and have leads "ack" when their components are
> good enough. But on the other side of things
> we then try to cut releases to include fixes as immediate as possible
> (and as often as that means)
>
> We've had 3 releases already in August and this would mean discussing
> a *fourth*. 
>
>>
>> Matt
>>
>> On Tue, Aug 15, 2017 at 5:16 PM, Gregory Farnum  wrote:
>>> On Tue, Aug 15, 2017 at 2:05 PM, Abhishek  wrote:
 This is the fifth release candidate for Luminous, the next long term
 stable release. We’ve had to do this release as there was a bug in
 the previous RC, which affected upgrades to Luminous.[1]
>>>
>>> In particular, this will fix things for those of you who upgraded from
>>> Jewel or a previous RC and saw OSDs crash instantly on boot. We had an
>>> oversight in dealing with another bug. (Standard disclaimer: this was
>>> a logic error that resulted in no data changes. There were no
>>> durability implications — not that that helps much when you can't read
>>> your data out again.)
>>>
>>> Sorry guys!
>>> -Greg
>>>

 Please note that this is still a *release candidate* and
 not the final release, we're expecting the final Luminous release in
 a week's time, meanwhile, testing and feedback is very much welcom.

 Ceph Luminous (v12.2.0) will be the foundation for the next long-term
 stable release series. There have been major changes since Kraken
 (v11.2.z) and Jewel (v10.2.z), and the upgrade process is non-trivial.
 Please read these release notes carefully. Full details and changelog at
 http://ceph.com/releases/v12-1-4-luminous-rc-released/

 Notable Changes from 12.1.3
 ---
 * core: Wip 20985 divergent handling luminous (issue#20985, pr#17001, Greg
 Farnum)
 * qa/tasks/thrashosds-health.yaml: ignore MON_DOWN (issue#20910, pr#17003,
 Sage Weil)
 * crush, mon: fix weight set vs crush device classes (issue#20939, Sage
 Weil)


 Getting Ceph
 
 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://download.ceph.com/tarballs/ceph-12.1.4.tar.gz
 * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
 * For ceph-deploy, see
 http://docs.ceph.com/docs/master/install/install-ceph-deploy
 * Release sha1: a5f84b37668fc8e03165aaf5cbb380c78e4deba4

 [1]: http://tracker.ceph.com/issues/20985


 Best Regards
 Abhishek

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Jewel -> Luminous on Debian 9.1

2017-08-15 Thread Abhishek Lekshmanan
Dajka Tamás  writes:

> Dear All,
>
>  
>
> I'm trying to upgrade our env. from Jewel to the latest RC. Packages are
> installed (latest 12.1.3), but I'm unable to install the mgr. I've tried the
> following (nodes in cluster are from 03-05, 03 is the admin node):
>
>  
>
> root@stornode03:/etc/ceph# ceph-deploy -v mgr create stornode03 stornode04
> stornode05
>
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
>
> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v mgr
> create stornode03 stornode04 stornode05
>
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>
> [ceph_deploy.cli][INFO  ]  username  : None
>
> [ceph_deploy.cli][INFO  ]  verbose   : True
>
> [ceph_deploy.cli][INFO  ]  mgr   : [('stornode03',
> 'stornode03'), ('stornode04', 'stornode04'), ('stornode05', 'stornode05')]
>
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>
> [ceph_deploy.cli][INFO  ]  subcommand: create
>
> [ceph_deploy.cli][INFO  ]  quiet : False
>
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
>
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>
> [ceph_deploy.cli][INFO  ]  func  :  0x7f07b31712a8>
>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>
> [ceph_deploy.cli][INFO  ]  default_release   : False
>
> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts
> stornode03:stornode03 stornode04:stornode04 stornode05:stornode05
>
> [ceph_deploy][ERROR ] RuntimeError: bootstrap-mgr keyring not found; run
> 'gatherkeys'
>
>  
>
> root@stornode03:/etc/ceph# ceph-deploy -v gatherkeys stornode03 stornode04
> stornode05
>
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
>
> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v
> gatherkeys stornode03 stornode04 stornode05
>
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>
> [ceph_deploy.cli][INFO  ]  username  : None
>
> [ceph_deploy.cli][INFO  ]  verbose   : True
>
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>
> [ceph_deploy.cli][INFO  ]  quiet : False
>
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
>
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>
> [ceph_deploy.cli][INFO  ]  mon   : ['stornode03',
> 'stornode04', 'stornode05']
>
> [ceph_deploy.cli][INFO  ]  func  :  gatherkeys at 0x7fac1c8d0aa0>
>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>
> [ceph_deploy.cli][INFO  ]  default_release   : False
>
> [ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory
> /tmp/tmpQCCwSb
>
> [stornode03][DEBUG ] connected to host: stornode03
>
> [stornode03][DEBUG ] detect platform information from remote host
>
> [ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpQCCwSb
>
> [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: debian
> 9.1
>
>  
>
> root@stornode03:/etc/ceph#
>
This seems to be fixed in ceph-deploy via
https://github.com/ceph/ceph-deploy/pull/447; can you try ceph-deploy
from master?
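
If it helps, one way to try that (just a sketch; assumes pip and git are
available on the admin node) is to install ceph-deploy straight from the git
repo, e.g.:

  pip install git+https://github.com/ceph/ceph-deploy.git
  ceph-deploy --version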

--
Abhishek 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.1.3 Luminous (RC) released

2017-08-11 Thread Abhishek Lekshmanan
en the new OSD limits are enabled.

  * The original librados rados_objects_list_open (C) and objects_begin
(C++) object listing API, deprecated in Hammer, has finally been
removed.  Users of this interface must update their software to use
either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
the new rados_object_list_begin (C) and object_list_begin (C++) API
before updating the client-side librados library to Luminous.

Object enumeration (via any API) with the latest librados version
and pre-Hammer OSDs is no longer supported.  Note that no in-tree
Ceph services rely on object enumeration via the deprecated APIs, so
only external librados users might be affected.

The newest (and recommended) rados_object_list_begin (C) and
object_list_begin (C++) API is only usable on clusters with the
SORTBITWISE flag enabled (Jewel and later).  (Note that this flag is
required to be set before upgrading beyond Jewel.)

- *CephFS*:

  * When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
available that uses "ceph.=" in the options column, instead
of putting configuration in the device column.  The old style syntax
still works.  See the documentation page "Mount CephFS in your
file systems table" for details.

  * CephFS clients without the 'p' flag in their authentication capability
string will no longer be able to set quotas or any layout fields.  This
flag previously only restricted modification of the pool and namespace
fields in layouts.
  * CephFS will generate a health warning if you have fewer standby daemons
than it thinks you wanted.  By default this will be 1 if you ever had
a standby, and 0 if you did not.  You can customize this using
``ceph fs set <fs name> standby_count_wanted <count>``.  Setting it
to zero will effectively disable the health check.
  * The "ceph mds tell ..." command has been removed.  It is superseded
by "ceph tell mds.<id> ..."

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.3.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: c56d9c07b342c08419bbc18dcf2a4c5fae62b9cf

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v11.2.1 Kraken Released

2017-08-10 Thread Abhishek Lekshmanan
g")
* mon: cache tiering: base pool last_force_resend not respected (racing read 
got wrong version) (issue#18366, issue#18403, pr#13116, Sage Weil)
* mon crash on shutdown, lease_ack_timeout event (issue#19928, issue#19825, 
pr#15084, Kefu Chai, Alexey Sheplyakov)
* mon: fail to form large quorum; msg/async busy loop (issue#20230, 
issue#20315, pr#15729, Haomai Wang)
* mon: force_create_pg could leave pg stuck in creating state (issue#19181, 
issue#18298, pr#13790, Adam C. Emerson, Sage Weil)
* mon/MonClient: make get_mon_log_message() atomic (issue#19618, issue#19427, 
pr#14588, Kefu Chai)
* mon: 'osd crush move ...' doesnt work on osds (issue#18682, issue#18587, 
pr#13500, Sage Weil)
* mon: osd crush set crushmap need sanity check (issue#19302, issue#20365, 
pr#16143, Loic Dachary)
* mon: peon wrongly delete routed pg stats op before receive pg stats ack 
(issue#18554, issue#18458, pr#13046, Mingxin Liu)
* mon/PGMap: factor mon_osd_full_ratio into MAX AVAIL calc (issue#18522, 
issue#20035, pr#15237, Sage Weil)
* msg/simple/SimpleMessenger.cc: 239: FAILED assert(!cleared) (issue#15784, 
issue#18378, pr#16133, Sage Weil)
* multisite: rest api fails to decode large period on 'period commit' 
(issue#19505, issue#19616, issue#19614, issue#20244, issue#19488, issue#19776, 
issue#20293, issue#19746, pr#16161, Casey Bodley, Abhishek Lekshmanan)
* objecter: full_try behavior not consistent with osd (issue#19560, 
issue#19430, pr#14732, Sage Weil)
* ojecter: epoch_barrier isn't respected in _op_submit() (issue#19396, 
issue#19496, pr#14331, Ilya Dryomov)
* os/bluestore: deep decode onode value (issue#20366, pr#15792, Sage Weil)
* os/bluestore: fix Allocator::allocate() int truncation (issue#20884, 
issue#18595, pr#13011, Sage Weil)
* osd: allow client throttler to be adjusted on-fly, without restart 
(issue#18791, issue#18793, pr#13216, Piotr Dałek)
* osd: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed 
(issue#20544, issue#16878, issue#19340, issue#19841, issue#20672, pr#16134, 
Sage Weil, David Zafman)
* osd: bogus assert when checking acting set on recovery completion in 
rados/upgrade (issue#18999, pr#13542, Sage Weil)
* osd: calc_clone_subsets misuses try_read_lock vs missing (issue#18610, 
issue#18583, issue#18723, issue#17831, pr#14616, Samuel Just)
* osd: ceph degraded and misplaced status output inaccurate (issue#18619, 
issue#19480, pr#14322, David Zafman)
* osd: condition object_info_t encoding on required (not up) features 
(issue#18842, issue#18831, issue#18814, pr#13485, Ilya Dryomov)
* osd: do not send ENXIO on misdirected op by default (issue#19622, pr#13253, 
Sage Weil)
* osd: FAILED assert(object_contexts.empty()) (live on master only from Jan-Feb 
2017, all other instances are different) (issue#20522, issue#20523, 
issue#18927, issue#18809, pr#16132, Samuel Just)
* osd: --flush-journal: sporadic segfaults on exit (issue#18952, issue#18820, 
pr#13490, Alexey Sheplyakov)
* osd: Give requested scrubs a higher priority (issue#19685, issue#15789, 
pr#14735, David Zafman)
* osd: Implement asynchronous scrub sleep (issue#20033, issue#19986, 
issue#20173, issue#19497, pr#15526, Brad Hubbard)
* osd: leaked MOSDMap (issue#19760, issue#18293, pr#14942, Sage Weil)
* osd: leveldb corruption leads to Operation not permitted not handled and 
assert (issue#18037, issue#18418, pr#12790, Nathan Cutler)
* osd: metadata reports filestore when using bluestore (issue#18677, 
issue#18638, pr#16083, Wido den Hollander)
* osd: New added OSD always down when full flag is set (issue#19485, pr#14321, 
Mingxin Liu)
* osd: Object level shard errors are tracked and used if no auth available 
(issue#20089, pr#15421, David Zafman)
* osd: os/bluestore: fix statfs to not include DB partition in free space 
(issue#18599, issue#18722, pr#13284, Sage Weil)
* osd: osd/PrimaryLogPG: do not call on_shutdown() if (pg.deleting) 
(issue#19902, issue#19916, pr#15066, Kefu Chai)
* osd: pg log split does not rebuild index for parent or child (issue#19315, 
issue#18975, pr#14048, Sage Weil)
* osd: pglog: with config, don't assert in the presence of stale diverg… 
(issue#17916, issue#19702, pr#14646, Greg Farnum)
* osd: publish PG stats when backfill-related states change (issue#18497, 
issue#18369, pr#13295, Sage Weil)
* osd: Revert "PrimaryLogPG::failed_push: update missing as well" (issue#18659, 
pr#13091, David Zafman)
* osd: unlock sdata_op_ordering_lock with sdata_lock hold to avoid missing 
wakeup signal (issue#20443, pr#15962, Alexey Sheplyakov)
* pre-jewel "osd rm" incrementals are misinterpreted (issue#19209, issue#19119, 
pr#13883, Ilya Dryomov)
* rbd: Add missing parameter feedback to 'rbd snap limit' (issue#18601, 
pr#14537, Tang Jin)
* rbd: [api] is_exclusive_lock_owner shouldn't return -EBUSY (issue#20266, 
issue#20182, pr#16187, Jason Dillaman)
* rbd: [api] temporarily restrict (rbd_)mirror_peer_add from adding multiple 
peers (issue#19256, issue#19324, pr#14545, Jason Dillaman)

[ceph-users] v12.1.2 Luminous (RC) released

2017-08-02 Thread Abhishek Lekshmanan
uot; in the documentation.

* ceph-mgr now has a Zabbix plugin. Using zabbix_sender it sends trapper
  events to a Zabbix server containing high-level information of the Ceph
  cluster. This makes it easy to monitor a Ceph cluster's status and send
  out notifications in case of a malfunction.

* The 'mon_warn_osd_usage_min_max_delta' config option has been
  removed and the associated health warning has been disabled because
  it does not address clusters undergoing recovery or CRUSH rules that do
  not target all devices in the cluster.

* Specifying user authorization capabilities for RBD clients has been
  simplified. The general syntax for using RBD capability profiles is
  "mon 'profile rbd' osd 'profile rbd[-read-only][ pool={pool-name}[, ...]]'".
  For more details see "User Management" in the documentation.

* RGW: bucket index resharding now uses the reshard namespace in the log pool
  in upgrade scenarios as well; this is a change in behaviour from RC1, where a
  new pool for resharding was created.

* RGW multisite now supports enabling or disabling sync at a bucket level.

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.2.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: b661348f156f148d764b998b65b90451f096cb27

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.1.1 Luminous RC released

2017-07-18 Thread Abhishek Lekshmanan
l and later).  (Note that this flag is
required to be set before upgrading beyond Jewel.)

- *CephFS*:

  * When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
available that uses "ceph.=" in the options column, instead
of putting configuration in the device column.  The old style syntax
still works.  See the documentation page "Mount CephFS in your
file systems table" for details.

  * CephFS clients without the 'p' flag in their authentication capability
string will no longer be able to set quotas or any layout fields.  This
flag previously only restricted modification of the pool and namespace
fields in layouts.
  * CephFS will generate a health warning if you have fewer standby daemons
than it thinks you wanted.  By default this will be 1 if you ever had
a standby, and 0 if you did not.  You can customize this using
`ceph fs set <fs name> standby_count_wanted <count>`.  Setting it
to zero will effectively disable the health check.
  * The "ceph mds tell ..." command has been removed.  It is superseded
by "ceph tell mds.<id> ..."


Notable Changes since v12.1.0 (RC1)
---

* choose_args encoding has been changed to make it architecture-independent.
  If you deployed Luminous dev releases or 12.1.0 rc release and made use of
  the CRUSH choose_args feature, you need to remove all choose_args mappings
  from your CRUSH map before starting the upgrade.

* The 'ceph health' structured output (JSON or XML) no longer contains
  a 'timechecks' section describing the time sync status.  This
  information is now available via the 'ceph time-sync-status'
  command.

* Certain extra fields in the 'ceph health' structured output that
  used to appear if the mons were low on disk space (which duplicated
  the information in the normal health warning messages) are now gone.

* The "ceph -w" output no longer contains audit log entries by default.
  Add a "--watch-channel=audit" or "--watch-channel=*" to see them.

* The 'apply' mode of cephfs-journal-tool has been removed

* Added new configuration "public bind addr" to support dynamic environments
  like Kubernetes. When set the Ceph MON daemon could bind locally to an IP
  address and advertise a different IP address "public addr" on the network.
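
  As a rough illustration (the addresses are made up; this is only a sketch,
  not part of the release itself), a monitor running inside a container could
  be configured along these lines:

  [mon.a]
      # address advertised to clients and other daemons
      public addr = 10.0.0.11:6789
      # address the daemon actually binds to inside the container
      public bind addr = 172.17.0.2:6789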


For a detailed changelog refer to the blog post entry at
http://ceph.com/releases/v12-1-1-luminous-rc-released/

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.1.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: f3e663a190bf2ed12c7e3cda288b9a159572c800

-- 
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.0.2 Luminous (dev) released

2017-04-24 Thread Abhishek Lekshmanan
This is the third development checkpoint release of Luminous, the next
long term
stable release.

Major changes from v12.0.1
--
* The original librados rados_objects_list_open (C) and objects_begin
  (C++) object listing API, deprecated in Hammer, has finally been
  removed.  Users of this interface must update their software to use
  either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
  the new rados_object_list_begin (C) and object_list_begin (C++) API
  before updating the client-side librados library to Luminous.

  Object enumeration (via any API) with the latest librados version
  and pre-Hammer OSDs is no longer supported.  Note that no in-tree
  Ceph services rely on object enumeration via the deprecated APIs, so
  only external librados users might be affected.

  The newest (and recommended) rados_object_list_begin (C) and
  object_list_begin (C++) API is only usable on clusters with the
  SORTBITWISE flag enabled (Jewel and later).  (Note that this flag is
  required to be set before upgrading beyond Jewel.)

* CephFS clients without the 'p' flag in their authentication capability
  string will no longer be able to set quotas or any layout fields.  This
  flag previously only restricted modification of the pool and namespace
  fields in layouts.

* CephFS directory fragmentation (large directory support) is enabled
  by default on new filesystems.  To enable it on existing filesystems
  use "ceph fs set  allow_dirfrags".

* CephFS will generate a health warning if you have fewer standby daemons
  than it thinks you wanted.  By default this will be 1 if you ever had
  a standby, and 0 if you did not.  You can customize this using
  ``ceph fs set <fs name> standby_count_wanted <count>``.  Setting it
  to zero will effectively disable the health check.

* The "ceph mds tell ..." command has been removed.  It is superseded
  by "ceph tell mds. ..."

* RGW introduces server side encryption of uploaded objects with 3
options for
  the management of encryption keys, automatic encryption (only
recommended for
  test setups), customer provided keys similar to Amazon SSE KMS
specification &
  using a key management service (openstack barbician)

For a more detailed changelog, refer to
http://ceph.com/releases/ceph-v12-0-2-luminous-dev-released/

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.0.2.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: 5a1b6b3269da99a18984c138c23935e5eb96f73e

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v10.2.7 Jewel released

2017-04-11 Thread Abhishek Lekshmanan
This point release fixes several important bugs in RBD mirroring, librbd & RGW.

We recommend that all v10.2.x users upgrade.

For more detailed information, refer to the complete changelog[1] and the 
release notes[2]

Notable Changes
---

* librbd: possible race in ExclusiveLock handle_peer_notification (issue#19368, 
pr#14233, Mykola Golub)
* osd: Increase priority for inactive PGs backfill (issue#18350, pr#13232, 
Bartłomiej Święcki)
* osd: Scrub improvements and other fixes (issue#17857, issue#18114, 
issue#13937, issue#18113, pr#13146, Kefu Chai, David Zafman)
* osd: fix OSD network address in OSD heartbeat_check log message (issue#18657, 
pr#13108, Vikhyat Umrao)
* rbd-mirror: deleting a snapshot during sync can result in read errors 
(issue#18990, pr#13596, Jason Dillaman)
* rgw: 'period update' does not remove short_zone_ids of deleted zones 
(issue#15618, pr#14140, Casey Bodley)
* rgw: DUMPABLE flag is cleared by setuid preventing coredumps (issue#19089, 
pr#13844, Brad Hubbard)
* rgw: clear data_sync_cr if RGWDataSyncControlCR fails (issue#17569, pr#13886, 
Casey Bodley)
* rgw: fix openssl (issue#11239, issue#19098, issue#16535, pr#14215, Marcus 
Watts)
* rgw: fix swift cannot disable object versioning with empty 
X-Versions-Location (issue#18852, pr#13823, Jing Wenjun)
* rgw: librgw: RGWLibFS::setattr fails on directories (issue#18808, pr#13778, 
Matt Benjamin)
* rgw: make sending Content-Length in 204 and 304 controllable (issue#16602, 
pr#13503, Radoslaw Zarzynski, Matt Benjamin)
* rgw: multipart uploads copy part support (issue#12790, pr#13219, Yehuda 
Sadeh, Javier M. Mellid, Matt Benjamin)
* rgw: multisite: RGWMetaSyncShardControlCR gives up on EIO (issue#19019, 
pr#13867, Casey Bodley)
* rgw: radosgw/swift: clean up flush / newline behavior (issue#18473, pr#14100, 
Nathan Cutler, Marcus Watts, Matt Benjamin)
* rgw: radosgw/swift: clean up flush / newline behavior. (issue#18473, 
pr#13143, Marcus Watts, Matt Benjamin)
* rgw: rgw_fh: RGWFileHandle dtor must also cond-unlink from FHCache 
(issue#19112, pr#14231, Matt Benjamin)
* rgw: rgw_file: avoid interning .. in FHCache table and don't ref for them 
(issue#19036, pr#13848, Matt Benjamin)
* rgw: rgw_file: interned RGWFileHandle objects need parent refs (issue#18650, 
pr#13583, Matt Benjamin)
* rgw: rgw_file: restore (corrected) fix for dir partial match (return of 
FLAG_EXACT_MATCH) (issue#19060, issue#18992, issue#19059, pr#13858, Matt 
Benjamin)
* rgw: rgw_file: FHCache residence check should be exhaustive (issue#19111, 
pr#14169, Matt Benjamin)
* rgw: rgw_file: ensure valid_s3_object_name for directories, too (issue#19066, 
pr#13717, Matt Benjamin)
* rgw: rgw_file: fix marker computation (issue#19018, issue#18989, issue#18992, 
issue#18991, pr#13869, Matt Benjamin)
* rgw: rgw_file: wip dir orphan (issue#18992, issue#18989, issue#19018, 
issue#18991, pr#14205, Gui Hecheng, Matt Benjamin)
* rgw: rgw_file: various fixes (pr#14206, Matt Benjamin)
* rgw: rgw_file: expand argv (pr#14230, Matt Benjamin)

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-10.2.7.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release SHA1: 50e863e0f4bc8f4b9e31156de690d765af245185

[1]: http://docs.ceph.com/docs/master/_downloads/v10.2.7.txt
[2]: http://ceph.com/releases/v10-2-7-jewel-released/

-- 

Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v12.0.1 Luminous (dev) released

2017-03-28 Thread Abhishek Lekshmanan

This is the second development checkpoint release of Luminous, the next
long term stable release.

Major changes from 12.0.0
-
* The original librados rados_objects_list_open (C) and objects_begin
  (C++) object listing API, deprecated in Hammer, has finally been
  removed.  Users of this interface must update their software to use
  either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
  the new rados_object_list_begin (C) and object_list_begin (C++) API
  before updating the client-side librados library to Luminous.

  Object enumeration (via any API) with the latest librados version
  and pre-Hammer OSDs is no longer supported.  Note that no in-tree
  Ceph services rely on object enumeration via the deprecated APIs, so
  only external librados users might be affected.

  The newest (and recommended) rados_object_list_begin (C) and
  object_list_begin (C++) API is only usable on clusters with the
  SORTBITWISE flag enabled (Jewel and later).  (Note that this flag is
  required to be set before upgrading beyond Jewel.)

* CephFS clients without the 'p' flag in their authentication capability
  string will no longer be able to set quotas or any layout fields.  This
  flag previously only restricted modification of the pool and namespace
  fields in layouts.

* The rados copy-get-classic operation has been removed since it has not been
  used by the OSD since before hammer.  It is unlikely any librados user is
  using this operation explicitly since there is also the more modern copy-get.

* The RGW API for getting an object torrent has changed its params from
  'get_torrent' to 'torrent' so that it can be compatible with Amazon S3.
  Now the request for an object torrent is like 'GET /ObjectName?torrent'.

See http://ceph.com/releases/v12-0-1-luminous-dev-released/ for a more
detailed changelog on this release, and thank you everyone for
contributing.

While we're fixing a few issues in the build system, the arm64 packages
for centos7 are, unfortunately, not available for this dev release.

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.0.1.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see 
http://docs.ceph.com/docs/master/install/install-ceph-deploy

* Release sha1: 5456408827a1a31690514342624a4ff9b66be1d5

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, 
HRB 21284 (AG Nürnberg)



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Abhishek Lekshmanan



On 15/03/17 18:32, Shinobu Kinjo wrote:

So description of Jewel is wrong?

http://docs.ceph.com/docs/master/releases/
Yeah, we missed updating the Jewel dates as well when we updated the Hammer
ones. Jewel is an LTS and will get more updates. Once Luminous is released,
however, we'll eventually shift focus to bugs that would hinder upgrades
to Luminous itself.


Abhishek

On Thu, Mar 16, 2017 at 2:27 AM, John Spray  wrote:

On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo  wrote:

It may be probably kind of challenge but please consider Kraken (or
later) because Jewel will be retired:

http://docs.ceph.com/docs/master/releases/

Nope, Jewel is LTS, Kraken is not.

Kraken will only receive updates until the next stable release.  Jewel
will receive updates for longer.

John


On Thu, Mar 16, 2017 at 1:48 AM, Shain Miley  wrote:

No this is a production cluster that I have not had a chance to upgrade yet.

We had an is with the OS on a node so I am just trying to reinstall ceph and
hope that the osd data is still in tact.

Once I get things stable again I was planning on upgrading…but the upgrade
is a bit intensive by the looks of it so I need to set aside a decent amount
of time.

Thanks all!

Shain

On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni  wrote:

Just curious, why you still want to deploy new hammer instead of stable
jewel? Is this a test environment? the last .10 release was basically for
bug fixes for 0.94.9.



On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo  wrote:

FYI:
https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3

On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley  wrote:

Hello,
I am trying to deploy ceph to a new server using ceph-deply which I have
done in the past many times without issue.

Right now I am seeing a timeout trying to connect to git.ceph.com:


[hqosd6][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive
apt-get
-q install --assume-yes ca-certificates
[hqosd6][DEBUG ] Reading package lists...
[hqosd6][DEBUG ] Building dependency tree...
[hqosd6][DEBUG ] Reading state information...
[hqosd6][DEBUG ] ca-certificates is already the newest version.
[hqosd6][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 3 not
upgraded.
[hqosd6][INFO  ] Running command: wget -O release.asc
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] --2017-03-15 11:49:16--
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving ceph.com (ceph.com)... 158.69.68.141
[hqosd6][WARNIN] Connecting to ceph.com (ceph.com)|158.69.68.141|:443...
connected.
[hqosd6][WARNIN] HTTP request sent, awaiting response... 301 Moved
Permanently
[hqosd6][WARNIN] Location:
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[following]
[hqosd6][WARNIN] --2017-03-15 11:49:17--
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Resolving git.ceph.com (git.ceph.com)... 8.43.84.132
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.
[hqosd6][WARNIN]
[hqosd6][WARNIN] --2017-03-15 11:51:25--  (try: 2)
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.
[hqosd6][WARNIN]
[hqosd6][WARNIN] --2017-03-15 11:53:34--  (try: 3)
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
[hqosd6][WARNIN] Connecting to git.ceph.com
(git.ceph.com)|8.43.84.132|:443... failed: Connection timed out.
[hqosd6][WARNIN] Retrying.


I am wondering if this is a known issue.

Just an fyi...I am using an older version of ceph-deply (1.5.36) because
in
the past upgrading to a newer version I was not able to install hammer
on
the cluster…so the workaround was to use a slightly older version.

Thanks in advance for any help you may be able to provide.

Shain


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] modify civetweb default port won't work

2017-03-13 Thread Abhishek Lekshmanan



On 03/13/2017 04:06 PM, Yair Magnezi wrote:

Thank you Abhishek

But still ...

root@ceph-rgw-02:/var/log/ceph# ps -ef | grep rgw
ceph  1332 1  1 14:59 ?00:00:00 /usr/bin/radosgw
--cluster=ceph --id *rgw.ceph-rgw-02* -f --setuser ceph --setgroup ceph


root@ceph-rgw-02:/var/log/ceph# cat /etc/ceph/ceph.conf
[global]
fsid = 00c167db-aea1-41b4-903b-69b0c86b6a0f
mon_initial_members = ceph-osd-01 ceph-osd-02
mon_host = 10.83.1.78,10.83.1.79
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 10.83.1.0/24
rbd default features = 3
#debug ms = 1
#debug rgw = 20

[client.radosgw.*rgw.ceph-rgw-02*]
host = ceph-rgw-02
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw_frontends = "civetweb port=8080"



Try client.rgw.ceph-rgw-02 here (and the matching value for the id), i.e.
basically the id you pass to radosgw should match the name of the ceph.conf
section.
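
For example (just a sketch; adjust names and the port to your setup), keeping
the id and the ceph.conf section name in sync would look something like:

  [client.rgw.ceph-rgw-02]
  host = ceph-rgw-02
  keyring = /etc/ceph/ceph.client.radosgw.keyring
  log file = /var/log/radosgw/client.rgw.ceph-rgw-02.log
  rgw_frontends = "civetweb port=8080"

with the daemon started as /usr/bin/radosgw --cluster=ceph --id rgw.ceph-rgw-02
(for systemd that should be the ceph-radosgw@rgw.ceph-rgw-02 unit).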




root@ceph-rgw-02:/var/log/ceph# netstat -an | grep 80
tcp        0      0 0.0.0.0:7480          0.0.0.0:*             LISTEN
tcp        0      0 10.83.1.100:56884     10.83.1.78:6800       ESTABLISHED
tcp        0      0 10.83.1.100:47842     10.83.1.78:6804       TIME_WAIT
tcp        0      0 10.83.1.100:47846     10.83.1.78:6804       ESTABLISHED
tcp        0      0 10.83.1.100:44791     10.83.1.80:6804       ESTABLISHED
tcp        0      0 10.83.1.100:44782     10.83.1.80:6804       TIME_WAIT
tcp        0      0 10.83.1.100:38082     10.83.1.80:6789       ESTABLISHED
tcp        0      0 10.83.1.100:41999     10.83.1.80:6800       ESTABLISHED
tcp        0      0 10.83.1.100:59681     10.83.1.79:6800       ESTABLISHED
tcp        0      0 10.83.1.100:37590     10.83.1.79:6804       ESTABLISHED


2017-03-13 15:05:23.836844 7f5c2fc80900  0 starting handler: civetweb
2017-03-13 15:05:23.838497 7f5c11379700  0 -- 10.83.1.100:0/2130438046 submit_message mon_subscribe({osdmap=48}) v2 remote, 10.83.1.78:6789/0, failed lossy con, dropping message 0x7f5bfc011850
2017-03-13 15:05:23.842769 7f5c11379700  0 monclient: hunting for new mon
2017-03-13 15:05:23.846976 7f5c2fc80900  0 starting handler: fastcgi
2017-03-13 15:05:23.849245 7f5b87a6a700  0 ERROR: no socket server point
defined, cannot start fcgi frontend




Any more ideas

Thanks







On Mon, Mar 13, 2017 at 4:34 PM, Abhishek Lekshmanan <abhis...@suse.com> wrote:



On 03/13/2017 03:26 PM, Yair Magnezi wrote:

Hello Wido

yes , the is my  /etc/cep/ceph.conf

and yes  radosgw.ceph-rgw-02 is the running instance .

root@ceph-rgw-02:/var/log/ceph# ps -ef | grep -i rgw
ceph 17226 1  0 14:02 ?00:00:01 /usr/bin/radosgw
--cluster=ceph --id rgw.ceph-rgw-02 -f --setuser ceph --setgroup
ceph


The ID passed to rgw here is `rgw.ceph-rgw-02`, whereas your conf
has a section named `radosgw.ceph-rgw-02` try running this service
(systemctl start ceph-rado...@radosgw.ceph-rgw-02 maybe?)

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham
Norton, HRB 21284 (AG Nürnberg)


Thanks



Yair Magnezi
Storage & Data Protection TL // Kenshoo
Office +972 7 32862423 // Mobile +972 50 575-2955



On Mon, Mar 13, 2017 at 4:06 PM, Wido den Hollander
<w...@42on.com> wrote:


> On 13 March 2017 at 15:03, Yair Magnezi <yair.magn...@kenshoo.com> wrote:
>
>
> Hello Cephers .
>
> I'm trying to modify the   civetweb default  port to 80
but from some
> reason it insists on listening on the default 7480 port
>
> My configuration is quiet  simple ( experimental  ) 

Re: [ceph-users] modify civetweb default port won't work

2017-03-13 Thread Abhishek Lekshmanan



On 03/13/2017 03:26 PM, Yair Magnezi wrote:

Hello Wido

yes , the is my  /etc/cep/ceph.conf

and yes  radosgw.ceph-rgw-02 is the running instance .

root@ceph-rgw-02:/var/log/ceph# ps -ef | grep -i rgw
ceph 17226 1  0 14:02 ?00:00:01 /usr/bin/radosgw
--cluster=ceph --id rgw.ceph-rgw-02 -f --setuser ceph --setgroup ceph


The ID passed to rgw here is `rgw.ceph-rgw-02`, whereas your conf has a
section named `radosgw.ceph-rgw-02`; try running the matching service instead
(systemctl start ceph-rado...@radosgw.ceph-rgw-02 maybe?)

--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, 
HRB 21284 (AG Nürnberg)


Thanks



Yair Magnezi
Storage & Data Protection TL // Kenshoo
Office +972 7 32862423 // Mobile +972 50 575-2955



On Mon, Mar 13, 2017 at 4:06 PM, Wido den Hollander <w...@42on.com> wrote:


> On 13 March 2017 at 15:03, Yair Magnezi <yair.magn...@kenshoo.com> wrote:
>
>
> Hello Cephers .
>
> I'm trying to modify the   civetweb default  port to 80 but from some
> reason it insists on listening on the default 7480 port
>
> My configuration is quiet  simple ( experimental  ) and looks like this :
>
>
> [global]
> fsid = 00c167db-aea1-41b4-903b-69b0c86b6a0f
> mon_initial_members = ceph-osd-01 ceph-osd-02
> mon_host = 10.83.1.78,10.83.1.79
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public_network = 10.83.1.0/24
> rbd default features = 3
> #debug ms = 1
> #debug rgw = 20
>
> [client.radosgw.ceph-rgw-02]
> host = ceph-rgw-02
> keyring = /etc/ceph/ceph.client.radosgw.keyring
> log file = /var/log/radosgw/client.radosgw.gateway.log
> rgw_frontends = "civetweb port=80"
>
>

Are you sure this is in /etc/ceph/ceph.conf?

In addition, are you also sure the RGW is running as user
'radosgw.ceph-rgw-02' ?

Wido

> after restart still the same :
>
> root@ceph-rgw-02:/var/log/ceph# netstat -an |  grep 80
> tcp        0      0 0.0.0.0:7480          0.0.0.0:*             LISTEN
> tcp        0      0 10.83.1.100:56697     10.83.1.79:6804       ESTABLISHED
> tcp        0      0 10.83.1.100:59482     10.83.1.79:6800       TIME_WAIT
> tcp        0      0 10.83.1.100:33129     10.83.1.78:6804       ESTABLISHED
> tcp        0      0 10.83.1.100:56318     10.83.1.80:6804       TIME_WAIT
> tcp        0      0 10.83.1.100:56324     10.83.1.80:6804       ESTABLISHED
> tcp        0      0 10.83.1.100:60990     10.83.1.78:6800       ESTABLISHED
> tcp        0      0 10.83.1.100:60985     10.83.1.78:6800       TIME_WAIT
> tcp        0      0 10.83.1.100:56691     10.83.1.79:6804       TIME_WAIT
> tcp        0      0 10.83.1.100:33123     10.83.1.78:6804       TIME_WAIT
> tcp        0      0 10.83.1.100:59494     10.83.1.79:6800       ESTABLISHED
> tcp        0      0 10.83.1.100:55924     10.83.1.80:6800       ESTABLISHED
> tcp        0      0 10.83.1.100:57629     10.83.1.80:6789       ESTABLISHED
>
>
> Besides that it also looks like the service tries  to start the
fcgi  (
> besides the civetweb ) is there a reason for that ?  ( fastcgi &
Apache are
> not  installed )  ?
>
>
> 2017-03-13 13:44:35.938897 7f05f3fd7700  1 handle_sigterm set
alarm for 120
> 2017-03-13 13:44:35.938916 7f06692c7900 -1 shutting down
> 2017-03-13 13:44:36.170559 7f06692c7900  1 final shutdown
> 2017-03-13 13:45:13.980814 7fbdb2e6c900  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2017-03-13 13:45:13.980992 7fbdb2e6c90

Re: [ceph-users] RGW listing users' quota and usage painfully slow

2017-03-09 Thread Abhishek Lekshmanan



On 03/09/2017 11:26 AM, Matthew Vernon wrote:

Hi,

I'm using Jewel / 10.2.3-0ubuntu0.16.04.2 . We want to keep track of our
S3 users' quota and usage. Even with a relatively small number of users
(23) it's taking ~23 seconds.

What we do is (in outline):
radosgw-admin metadata list user
for each user X:
  radosgw-admin user info --uid=X  #has quota details
  radosgw-admin user stats --uid=X #has usage details

None of these calls is particularly slow (~0.5s), but the net result is
not very satisfactory.

What am I doing wrong? :)


Is this a single-site or a multisite cluster? If you're only trying to
read info, you could try disabling the cache (this is not recommended
if you're trying to write/modify info), e.g.:


$ radosgw-admin user info --uid=x --rgw-cache-enabled=false

You could also run the info command with higher debug levels (--debug-rgw=20
--debug-ms=1) and paste the output somewhere (it's very verbose) to help
identify where we're slowing down.
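
For what it's worth, here's a rough sketch (assuming radosgw-admin is in the
PATH and can read the usual conf/keyring) of running your loop with the cache
lookup disabled, to see whether that is where the time goes:

  import json, subprocess

  def admin(*args):
      # wrap radosgw-admin and parse its JSON output
      out = subprocess.check_output(('radosgw-admin',) + args)
      return json.loads(out.decode('utf-8'))

  for uid in admin('metadata', 'list', 'user'):
      info  = admin('user', 'info',  '--uid=' + uid, '--rgw-cache-enabled=false')
      stats = admin('user', 'stats', '--uid=' + uid, '--rgw-cache-enabled=false')
      print(uid, info.get('user_quota'), stats.get('stats'))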


Best,
Abhishek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] clarification for rgw installation and conflagration ( jwel )

2017-03-08 Thread Abhishek Lekshmanan


On 03/08/2017 04:55 PM, Yair Magnezi wrote:
> Hello guys,
> 
> I'm new to RGW and need some clarification (I'm running 10.2.5).
> As far as I understand, 'jewel' uses Civetweb instead of Apache and
> FastCGI, but the configuration guide (just the next step in the
> install guide) says "Configuring a Ceph Object Gateway requires a
> running Ceph Storage Cluster, and an Apache web server with the FastCGI
> module":
> 
> http://docs.ceph.com/docs/jewel/radosgw/config/
> 
> 
> Civetweb is not mentioned at all and there are no instructions that
> relate to civetweb.
> I'd like to move on with configuration ('connecting' the rgw to my ceph
> cluster) but don't understand how to do it.
> The section "ADDING A GATEWAY CONFIGURATION TO CEPH" has instructions
> only for Apache.
> Any clarification is much appreciated.
> 

There is some info in the install section:
http://docs.ceph.com/docs/jewel/install/install-ceph-gateway/#change-the-default-port

Essentially, the main configuration needed to run civetweb is the value of
`rgw_frontends` with civetweb and a port, as shown in the example there.
There is also some info in the migration section of the same doc; the docs
could use some love from a willing community member ;)
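
As a rough sketch, the relevant ceph.conf section would look something
like this (the gateway instance name is just a placeholder, and 7480 is
civetweb's default port):

  [client.rgw.gateway-node1]
  rgw frontends = "civetweb port=7480"

After restarting the radosgw service the gateway should answer on that
port directly, with no Apache or FastCGI involved.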

Best,
Abhishek

> Thanks
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v10.2.3 Jewel Released

2016-09-28 Thread Abhishek Lekshmanan
-admin temp command as it was deprecated (#16023, Vikhyat 
Umrao)
* rgw: comparing return code to ERR_NOT_MODIFIED in rgw_rest_s3.cc (needs minus 
sign) (#16327, Nathan Cutler)
* rgw: custom metadata aren't camelcased in Swift's responses (#15902, Radoslaw 
Zarzynski)
* rgw: data sync stops after getting error in all data log sync shards (#16530, 
Yehuda Sadeh)
* rgw: default zone and zonegroup cannot be added to a realm (#16839, Casey 
Bodley)
* rgw: document multi tenancy (#16635, Pete Zaitcev)
* rgw: don't unregister request if request is not connected to manager (#15911, 
Yehuda Sadeh)
* rgw: failed to create bucket after upgrade from hammer to jewel (#16627, Orit 
Wasserman)
* rgw: fix ldap bindpw parsing (#16286, Matt Benjamin)
* rgw: fix multi-delete query param parsing. (#16618, Robin H. Johnson)
* rgw: improve support for Swift's object versioning. (#15925, Radoslaw 
Zarzynski)
* rgw: initial slashes are not properly handled in Swift's BulkDelete (#15948, 
Radoslaw Zarzynski)
* rgw: master: build failures with boost > 1.58 (#16392, #16391, Abhishek 
Lekshmanan)
* rgw: multisite segfault on ~RGWRealmWatcher if realm was deleted (#16817, 
Casey Bodley)
* rgw: multisite sync races with deletes (#16222, #16464, #16220, #16143, 
Yehuda Sadeh, Casey Bodley)
* rgw: multisite: preserve zone's extra pool (#16712, Abhishek Lekshmanan)
* rgw: object expirer's hints might be trimmed without processing in some 
circumstances (#16705, #16684, Radoslaw Zarzynski)
* rgw: radosgw-admin failure for user create after upgrade from hammer to jewel 
(#15937, Orit Wasserman, Abhishek Lekshmanan)
* rgw: radosgw-admin: EEXIST messages for create operations (#15720, Abhishek 
Lekshmanan)
* rgw: radosgw-admin: inconsistency in uid/email handling (#13598, Matt 
Benjamin)
* rgw: realm pull fails when using apache frontend (#15846, Orit Wasserman)
* rgw: retry on bucket sync errors (#16108, Yehuda Sadeh)
* rgw: s3website: x-amz-website-redirect-location header returns malformed HTTP 
response (#15531, Robin H. Johnson)
* rgw: segfault in RGWOp_MDLog_Notify (#1, Casey Bodley)
* rgw: segmentation fault on error_repo in data sync (#16603, Casey Bodley)
* rgw: selinux denials in RGW (#16126, Boris Ranto)
* rgw: support size suffixes for --max-size in radosgw-admin command (#16004, 
Vikhyat Umrao)
* rgw: updating CORS/ACLs might not work in some circumstances (#15976, 
Radoslaw Zarzynski)
* rgw: use zone endpoints instead of zonegroup endpoints (#16834, Casey Bodley)
* tests: improve rbd-mirror test case coverage (#16197, Mykola Golub, Jason 
Dillaman)
* tests: rados/test.sh workunit timesout on OpenStack (#15403, Loic Dachary)
* tools: ceph-disk: Accept bcache devices as data disks (#13278, Peter Sabaini)
* tools: src/script/subman fails with KeyError: 'nband' (#16961, Loic Dachary, 
Ali Maredia)

For more detailed information refer to the complete changelog[1] and the
release notes[2]

Getting Ceph


* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-10.2.3.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy

[1]: http://docs.ceph.com/docs/master/_downloads/v10.2.3.txt
[2]: http://docs.ceph.com/docs/master/release-notes/#v10-2-3-jewel

Regards,
Abhishek
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


signature.asc
Description: PGP signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph keystone integration

2016-08-15 Thread Abhishek Lekshmanan

Niv Azriel writes:

> Hey, I have a few questions regarding ceph integration with openstack
> components such as keystone.
>
> I'm trying to integrate keystone to work with my ceph cluster, I've been
> using this guide http://docs.ceph.com/docs/hammer/radosgw/keystone/
>
> Now, in my openstack environment we decided to ditch the keystone admin
> token due to security issues, but I can't find any other guide that
> shows a proper integration with keystone without the admin token.
>
> In other components I've seen that they use auth_uri to get a token as an
> admin, is there something similar for the ceph.conf?

It is possible to set the keystone admin user/password and tenant and
avoid using the admin token; refer to the master docs:
http://docs.ceph.com/docs/master/radosgw/keystone/
This should be applicable to hammer as well (with the exception that
hammer only supports the keystone v2 API; jewel+ supports keystone v3
as well).
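
As a minimal sketch, the corresponding ceph.conf options would look
something like the following (the URL, credentials and roles are
placeholders; option names are as listed in the keystone doc above):

  [client.rgw.gateway-node1]
  rgw keystone url = http://keystone.example.com:35357
  rgw keystone admin user = rgw
  rgw keystone admin password = secret
  rgw keystone admin tenant = service
  rgw keystone accepted roles = admin, Member
  rgw keystone token cache size = 500

With these set, rgw keystone admin token can be left out of the config
entirely.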
>
> OS: ubuntu 14.04
> OStack: kilo
> Ceph: Hammer
> Git: Sebastian Han automation for ceph
>
> Thank you guys in advance :P
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Abhishek
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Abhishek Lekshmanan

Dan van der Ster writes:

> Hi,
>
> Does anyone know a fast way for S3 users to query their total bucket
> usage? 's3cmd du' takes a long time on large buckets (is it iterating
> over all the objects?). 'radosgw-admin bucket stats' seems to know the
> bucket usage immediately, but I didn't find a way to expose that to
> end users.
>
> Hoping this is an easy one for someone...

If the swift api is enabled, a swift stat on the user's account would
probably be a quicker way.
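
As a rough sketch (the endpoint and credentials are placeholders, and it
assumes the user has a swift subuser key):

  $ swift -A http://rgw.example.com/auth/v1.0 -U tenant:user -K secretkey stat

The Bytes and Objects fields in the output give the account-wide totals.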
>
> Thanks,
>
> Dan
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


signature.asc
Description: PGP signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Could not create user

2016-05-31 Thread Abhishek Lekshmanan
ones": [
> {
> "id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
> "name": "ap-southeast",
> "endpoints": [
> "http:\/\/192.168.1.1:"
> ],
> "log_meta": "true",
> "log_data": "false",
>     "bucket_index_max_shards": 0,
> "read_only": "false"
> }
> ],
>  /// /// ///
> "master_zonegroup": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
> "master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
> "period_config": {
> "bucket_quota": {
> "enabled": false,
> "max_size_kb": -1,
> "max_objects": -1
> },
> "user_quota": {
> "enabled": false,
> "max_size_kb": -1,
> "max_objects": -1
> }
> },
> "realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3",
> "realm_name": "default",
> "realm_epoch": 2
> }
>
> When I used radosgw-admin user create --uid = 1 --display-name = "user1"
> --email=us...@example.com, I get an error "could not create user: unable to
> create user, unable to store user info"

Is the ceph cluster healthy? BTW I don't think radosgw-admin accepts
spacing before & after the equals sign (this would end up creating a uid
called "=" I guess). Are there any other warnings or errors printed out?

You could also try cranking up the debug output by passing
--debug-rgw=20 --debug-ms=1 during user create and looking at the
output; this might indicate the problem better.
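
For reference, a corrected invocation without spaces around the equals
signs would be (email left as a placeholder):

  radosgw-admin user create --uid=1 --display-name="user1" --email=<email>

and with the extra debugging enabled:

  radosgw-admin --debug-rgw=20 --debug-ms=1 user create --uid=1 --display-name="user1"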
>
> I did wrong something? Can somebody please help me out ?
> Thank !
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


signature.asc
Description: PGP signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-26 Thread Abhishek Lekshmanan

Ansgar Jazdzewski writes:

> Hi,
>
> After plaing with the setup i got some output that looks wrong
>
> # radosgw-admin zone get
>
> "placement_pools": [
> {
> "key": "default-placement",
> "val": {
> "index_pool": ".eu-qa.rgw.buckets.inde",
> "data_pool": ".eu-qa.rgw.buckets.dat",
> "data_extra_pool": ".eu-qa.rgw.buckets.non-e",
> "index_type": 0
> }
> }
> ],
>
> I think it should be
>
> index_pool = .eu-qa.rgw.buckets.index.
> data_pool = .eu-qa.rgw.buckets
> data_extra_pool = .eu-qa.rgw.buckets.extra
>
> how can i fix it?

Not sure how it reached this state, but given the zone get json, you can
edit it and set it back using zone set, for example:
# radosgw-admin zone get > zone.json # now edit this file
# radosgw-admin zone set --rgw-zone="eu-qa" < zone.json
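
The edited placement_pools entry in zone.json would then look roughly
like this (pool names taken from your expected values above; double-check
them against the pools that actually exist in the cluster):

  "placement_pools": [
      {
          "key": "default-placement",
          "val": {
              "index_pool": ".eu-qa.rgw.buckets.index",
              "data_pool": ".eu-qa.rgw.buckets",
              "data_extra_pool": ".eu-qa.rgw.buckets.extra",
              "index_type": 0
          }
      }
  ],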
>
> Thanks
> Ansgar
>
> 2016-04-26 13:07 GMT+02:00 Ansgar Jazdzewski <a.jazdzew...@googlemail.com>:
>> Hi all,
>>
>> I got an answer that pointed me to:
>> https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst
>>
>> 2016-04-25 16:02 GMT+02:00 Karol Mroz <km...@suse.com>:
>>> On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote:
>>>> Hi,
>>>>
>>>> we test Jewel in our  QA environment (from Infernalis to Hammer) the
>>>> upgrade went fine but the Radosgw did not start.
>>>>
>>>> the error appears also with radosgw-admin
>>>>
>>>> # radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa
>>>> 2016-04-25 12:13:33.425481 7fc757fad900  0 error in read_id for id  :
>>>> (2) No such file or directory
>>>> 2016-04-25 12:13:33.425494 7fc757fad900  0 failed reading zonegroup
>>>> info: ret -2 (2) No such file or directory
>>>> couldn't init storage provider
>>>>
>>>> do i have to change some settings, also for upgrade of the radosgw?
>>>
>>> Hi,
>>>
>>> Testing a recent master build (with only default region and zone),
>>> I'm able to successfully run the command you specified:
>>>
>>> % ./radosgw-admin user info --uid="testid" --rgw-region=default 
>>> --rgw-zone=default
>>> ...
>>> {
>>> "user_id": "testid",
>>> "display_name": "M. Tester",
>>> ...
>>> }
>>>
>>> Are you certain the region and zone you specified exist?
>>>
>>> What do the following report:
>>>
>>> radosgw-admin zone list
>>> radosgw-admin region list
>>>
>>> --
>>> Regards,
>>> Karol
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com