"Ashley Merrick" writes:
> I guess so and they just haven't pushed this
> https://github.com/ceph/ceph/pull/29973 yet
We're just verifying this; please don't install until the official
announcement hits the mailing lists.
> On Wed, 04 Sep 2019 09:41:03 +0800 Alex Litvak
>
"Sean Purdy" writes:
> Hi,
>
> A while back I reported a bug in luminous where lifecycle on a versioned
> bucket wasn't removing delete markers.
>
> I'm interested in this phrase in the pull request:
>
> "you can't expect lifecycle to work with dynamic resharding enabled."
the luminous backport
Sorry for the resend; I used the wrong sending address.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7b695f835b03642f85998b2ae7b6dd093d9fbce4
--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
r out if any configured endpoint cannot be bound.
>>
>> This should allow an orchestrator to confidently install a system,
>> knowing what will happen, without needing to know or manipulate the
>> bindv6only flag.
>>
>> As for what happens if you specify
We're happy to announce the first bug fix release of Ceph Nautilus
release series.
We recommend all Nautilus users upgrade to this release. When upgrading
from older releases of Ceph, the general guidelines for upgrading to
Nautilus must be followed.
Notable Changes
---
* The default value
270)
Any more suggestions on how systems handle this are also welcome.
--
Abhishek
r following back to
a master tracker.
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri,
/install/get-packages/
* Release git sha1: 1436006594665279fe734b4c15d7e08c13ebd777
protocol address). We strongly recommend
the option be removed and instead a single ``mon host`` option be
specified in the ``[global]`` section to allow daemons and clients
to discover the monitors.
* New command ``ceph fs fail`` has been added to quickly bring down a file
system.
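As a sketch, the recommended layout looks like this (the monitor addresses below are placeholders, not taken from the announcement):

```ini
# ceph.conf -- hypothetical example of the recommended style:
# a single "mon host" option in [global], instead of per-monitor
# address options, so daemons and clients can discover the monitors.
[global]
mon host = 192.168.0.10, 192.168.0.11, 192.168.0.12
```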
3-2-5-mimic-released/
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: cbff874f9007f1869bfd3821b7e33b2a6ffd4988
--
Abhishek Lekshmanan
: 26dc3775efc7bb286a1d6d66faee0ba30ea23eee
Best,
Abhishek
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b10be4d44915a4d78a8e06aa31919e74927b142e
ill wait for proper confirmation always but others may run an apt
> upgrade for any other reason and end up with .3 packages.
>
> ,Ashley
>
> On Fri, 4 Jan 2019 at 11:21 PM, Abhishek Lekshmanan
> wrote:
>
>> Ashley Merrick writes:
>>
>> > Another day a
load.ceph.com/tarballs/ceph-12.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 177915764b752804194937482a39e95e0ca3de94
[1]:
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#cache-size
generated by object expiration should have owner
(issue#24568, issue#26847, pr#23541, Zhang Shaowen)
* rgw: add curl_low_speed_limit and curl_low_speed_time config to avoid
(issue#25021, pr#23173, Mark Kogan, Zhang Shaowen)
* rgw: change default rgw_thread_pool_size to 512 (issue#25214, iss
s no
> mention of that issue or backport listed in the release notes.
>
>
>> -Original Message-
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of Abhishek Lekshmanan
>> Sent: Wednesday, 5 September
ad traffic (issue#24767, pr#22984,
Xin Liao)
* rgw: fix the bug of radosgw-admin zonegroup set requires realm (issue#21583,
pr#22767, lvshanchun)
* rgw: have a configurable authentication order (issue#23089, pr#23501,
Abhishek Lekshmanan)
* rgw: index complete miss zones_trace se
ian repo) and then failing when I run "ceph-deploy osd ..." because ceph-
> volume doesn't exist on the nodes.
>
The newer versions of Ceph (from Mimic onwards) require compiler
toolchains supporting C++17, which we unfortunately do not have for
stretch/jessie yet.
-
Abhishek
at http://download.ceph.com/tarballs/ceph-12.2.7.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5
t-packages/
* Release git sha1: e4b061b47f07f583c92a050d9e84b1813a35671e
Best,
Abhishek
23634,
issue#23568, pr#21352, Patrick Donnelly)
* doc/rgw: add page for http frontend configuration (issue#13523,
issue#22884, pr#20242, Casey Bodley)
* doc: rgw: mention the civetweb support for binding to multiple ports
(issue#20942, issue#23317, pr#20906, Abhishek Lekshmanan)
* docs fix ceph-volum
>
> 2018-02-28 16:46 GMT+01:00 Abhishek Lekshmanan :
>
>>
>> This is the fourth bugfix release of Luminous v12.2.x long term stable
>> release series. This was primarily intended to fix a few build,
>> ceph-volume/ceph-disk issues from 12.2.3 and a few RGW issues
core: last-stat-seq returns 0 because osd stats are cleared (issue#23093,
pr#20548, Sage Weil, David Zafman)
* rgw: make init env methods return an error (issue#23039, pr#20564, Abhishek
Lekshmanan)
* rgw: URL-decode S3 and Swift object-copy URLs (issue#22121, issue#22729,
pr#20236, Malcol
cache invalidation to prevent cache size growth
(issue#22410, pr#19785, Mark Kogan)
* rgw: fix for empty query string in beast frontend (issue#22797, pr#20338,
Casey Bodley)
* rgw: fix GET website response error code (issue#22272, pr#19489, Dmitry
Plyakin)
* rgw: fix rewrite a versioning object cr
ailures when no keystone is
configured (issue#21400, pr#18441, Abhishek Lekshmanan)
* rgw: disable dynamic resharding in multisite environment (issue#21725,
pr#18432, Orit Wasserman)
* rgw: encryption: PutObj response does not include sse-kms headers
(issue#21576, pr#18442, Casey Bodley)
* rgw: encry
wrong? Can I use it also for S3?
Subusers can be used for S3 as well.
Best
Abhishek
Best Regards
2017-11-23 1:04 GMT-02:00 David Turner :
If you create a subuser of the uid, then the subuser can have its own
name and key while being the same user. You can also limit a subuser to
read, write
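A hedged sketch of what that looks like with radosgw-admin (the uid and subuser names are made up; the commands follow the standard admin workflow, not this thread):

```shell
# Create a subuser under an existing user ("johndoe" is hypothetical),
# limited to read access on the parent user's buckets.
radosgw-admin subuser create --uid=johndoe --subuser=johndoe:readonly --access=read

# Generate an S3 access/secret key pair for that subuser, so it can be
# used with S3 clients, not just Swift.
radosgw-admin key create --subuser=johndoe:readonly --key-type=s3 \
    --gen-access-key --gen-secret
```

Both commands need a reachable cluster and admin keyring; the S3 key pair is printed in the user's JSON metadata.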
ng their way into
>>> Luminous, but Luminous has all the features at present.
>
>> Does that mean it should basically work in 10.2.1?
>
> Sorry, I meant to say "in 12.2.1"!!!
Yeah, bucket policies should be usable in 12.2.1.
would be welcome
>>>
asey Bodley)
* rgw: fix race in RGWCompleteMultipart (issue#20861, pr#16767, Abhishek
Varshney, Matt Benjamin)
* rgw: Fix up to 1000 entries at a time in check_bad_index_multipart
(issue#20772, pr#16880, Orit Wasserman, Matt Benjamin)
* rgw: folders starting with _ underscore are not in buck
1, pr#17446, Casey Bodley)
* rgw: need to stream metadata full sync init (issue#18079, pr#17448,
Yehuda Sadeh)
* rgw: object copied from remote src acl permission become full-control
issue (issue#20658, pr#17478, Enming Zhang)
* rgw: put lifecycle configuration fails if Prefix is not set
(issue#
tell mds. ..."
* The `apply` mode of cephfs-journal-tool has been removed
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.2.0.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ce
s3.create_bucket(Bucket='foobar')
s3.put_object(Bucket='foobar', Key='foo', Body='foo')
think that this is getting to a point where we should just have
> nightly development releases.
>
> What is the benefit of waiting for each RC every two weeks (or so) otherwise?
We could consider something of this sort for M maybe?
Abhishek
> On one side we are treating the RC releases
see
http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: a5f84b37668fc8e03165aaf5cbb380c78e4deba4
[1]: http://tracker.ceph.com/issues/20985
Best Regard
tracker ticket has a link
to a set of development packages that should resolve the issue in the
meantime.
We've just started building packages for 12.1.4, so we should be able to
get this out of the door soon
Best,
Abhishek
[1] http://tracker.ceph.com/issues/20985
On Tue, Aug 15, 2017 at
cted to host: stornode03
>
> [stornode03][DEBUG ] detect platform information from remote host
>
> [ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpQCCwSb
>
> [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: debian
> 9.1
>
>
>
> root@stornode03:/etc/ceph#
>
This seems to be fixed in ceph-deploy via
https://github.com/ceph/ceph-deploy/pull/447; can you try ceph-deploy
from master?
--
Abhishek
..." command has been removed. It is superceded
by "ceph tell mds. ..."
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.3.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
il)
* msg/simple/SimpleMessenger.cc: 239: FAILED assert(!cleared) (issue#15784,
issue#18378, pr#16133, Sage Weil)
* multisite: rest api fails to decode large period on 'period commit'
(issue#19505, issue#19616, issue#19614, issue#20244, issue#19488, issue#19776,
issue#20293, issue#19746
ing now uses the reshard namespace in the log pool in upgrade scenarios
as well; this is a change in behaviour from RC1, where a new pool for
resharding was created.
* RGW multisite now supports enabling or disabling sync at the bucket level.
Getting Ceph
* Git at git://github.com/ceph/ceph.g
r" on the network.
For a detailed changelog refer to the blog post entry at
http://ceph.com/releases/v12-1-1-luminous-rc-released/
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.1.tar.gz
* For packages, see http://docs
ase.
We're currently running tests after the fix for MDS was merged.
So when we do announce the release, we'll announce 10.2.9 so that users
can upgrade from 10.2.7 to 10.2.9.
Best,
Abhishek
> 2017-07-12 22:44 GMT+08:00 David Turner :
>> The lack of communication on this makes
git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.1.0.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: 262617c9f16c55e863693258
a42762e3dffbbf
git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-12.0.2.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* For ceph-deploy, see
http://docs.ceph.com/docs/master/install/install-ceph-deploy
* Release sha1: 5a1b6b3269da99a18984c138c23935e
eph.com/docs/master/install/install-ceph-deploy
* Release SHA1: 50e863e0f4bc8f4b9e31156de690d765af245185
[1]: http://docs.ceph.com/docs/master/_downloads/v10.2.7.txt
[2]: http://ceph.com/releases/v10-2-7-jewel-released/
r/install/install-ceph-deploy
* Release sha1: 5456408827a1a31690514342624a4ff9b66be1d5
ually shift focus on bugs that would hinder upgrades
to Luminous itself
Abhishek
On Thu, Mar 16, 2017 at 2:27 AM, John Spray wrote:
On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote:
It may be a bit of a challenge, but please consider Kraken (or
later), because Jewel will be retired:
On 03/13/2017 04:06 PM, Yair Magnezi wrote:
Thank you Abhishek
But still ...
root@ceph-rgw-02:/var/log/ceph# ps -ef | grep rgw
ceph 1332 1 1 14:59 ?00:00:00 /usr/bin/radosgw
--cluster=ceph --id *rgw.ceph-rgw-02* -f --setuser ceph --setgroup ceph
root@ceph-rgw-02:/var/log
rgw.ceph-rgw-02 -f --setuser ceph --setgroup ceph
The ID passed to rgw here is `rgw.ceph-rgw-02`, whereas your conf has a
section named `radosgw.ceph-rgw-02`; try running that service instead
(systemctl start ceph-rado...@radosgw.ceph-rgw-02, maybe?)
--debug-ms=1) and paste that somewhere (it's very verbose) to help
identify where we're slowing down.
Best,
Abhishek
run civetweb would be the value of
`rgw_frontends` with civetweb and the port, as shown in the example. There is
also some info in the migration section of the same doc; the docs
require some love from a willing community member ;)
Best,
Abhishek
> Thanks
issue#18526 , issue#16673 , pr#11497 , Pritha Srivastava, Radoslaw Zarzynski,
Pete Zaitcev, Abhishek Lekshmanan)
* rgw: add support for the prefix parameter in account listing of Swift API
(issue#17931 , pr#12258 , Radoslaw Zarzynski)
* rgw: Add workaround for upgrade issues for older jewel versio
l7/x86_64/.
Are you able to see the packages after following the instructions at
http://docs.ceph.com/docs/master/install/get-packages/ ?
Best,
Abhishek
ush: reset bucket->h.items[i] when removing tree item (issue#16525,
pr#10724, Kefu Chai)
* doc: add "Upgrading to Hammer" section (issue#17386, pr#11372, Kefu Chai)
* doc: add orphan options to radosgw-admin --help and man page (issue#17281,
issue#17280, pr#11140, Abhishek Lekshm
ut rados mon (pr#12662, liuchang0812)
* doc: Fixes radosgw-admin ex: in swift auth section (`issue#16687, pr#12646,
SirishaGuduru)
* doc: fix the librados c api can not compile problem (pr#9396, song baisen)
* doc: mailmap: Michal Koutny affiliation (pr#13036, Nathan Cutler)
* doc: mailmap updates fo
g thank you to everyone for contributing towards this release.
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-11.2.0.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
Best,
Abhishek
#12548, liuchang0812)
* doc: final additions to 11.1.0-rc release notes (pr#12448, Abhishek
Lekshmanan)
* doc: mention corresponding libvirt section in nova.conf (pr#12584,
Marc Koderer)
* fs: add snapshot tests to mds thrashing (pr#1073, Yan, Zheng)
* fs: enable ceph-fuse permission checking for all
11-1-0-kraken-released/
The debian and rpm packages are available at the usual locations at
http://download.ceph.com/debian-kraken/ and
http://download.ceph.com/rpm-kraken respectively. For more details refer
below.
Getting Ceph
* Git at git://github.com/ceph/ceph.git
* Tarball
cs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy
[1]: http://docs.ceph.com/docs/master/_downloads/v10.2.4.txt
[2]: http://docs.ceph.com/docs/master/release-notes/#v10-2-4-jewel
Best,
This can be achieved by some custom nginx rules.
>
> Is this the right approach or Should I just use two different clusters
> instead? Looking forward to your awesome advice.
>
Since Jewel, you can also consider looking into realms, which sort of
provide isolated namespaces within
ph-deploy
6, Matt Benjamin)
* rgw: fix multi-delete query param parsing. (#16618, Robin H. Johnson)
* rgw: improve support for Swift's object versioning. (#15925, Radoslaw
Zarzynski)
* rgw: initial slashes are not properly handled in Swift's BulkDelete (#15948,
Radoslaw Zarzynski)
* rgw: master: bu
--
Abhishek
Hi,
Is the memory issue seen on OSD nodes or on the RGW nodes? We
encountered memory issues on OSD nodes with EC pools. Here is the mail
thread : http://www.spinics.net/lists/ceph-devel/msg30597.html
Hope this helps.
Thanks
Abhishek
On Mon, Jun 20, 2016 at 9:59 PM, Василий Ангапов wrote
],
> "log_meta": "true",
> "log_data": "false",
> "bucket_index_max_shards": 0,
> "read_only": "false"
> }
>
build (with only default region and zone),
>>> I'm able to successfully run the command you specified:
>>>
>>> % ./radosgw-admin user info --uid="testid" --rgw-region=default
>>> --rgw-zone=default
>>> ...
>>> {
>>> "user_id": "testid&
I was once faced with a similar issue. Did you try increasing the rgw
log level to see what's happening? In my case, there was a lot of gc
happening on the rgw cache, which was causing latent operations.
Thanks
Abhishek
On Tue, Mar 1, 2016 at 3:35 PM, Luis Periquito wrote:
> On Mon, Feb 29, 2016
:)
Thanks
Abhishek
On Thu, Oct 1, 2015 at 5:21 PM, Abhishek Varshney
wrote:
> Hi,
>
> I have a ceph cluster running v0.94.2 with civetweb configured on
> radosgw. I suddenly started experiencing error 500 on create_bucket.
>
> get_bucket and put/get objects are working fi
6.455132 7f6320ce7700 10 cache put: name=.rgw+test-bucket-1
2015-10-01 17:13:36.455150 7f6320ce7700 10 moving .rgw+test-bucket-1
to cache LRU end
2015-10-01 17:13:36.455166 7f6320ce7700 0 sending create_bucket
request to master region
2015-10-01 17:13:36.455169 7f6320ce7700 0 ERROR: endpoints not
configured for upstream zone
2015-10-01 17:13:36.455173 7f6320ce7700 0 WARNING: set_req_state_err
err_no=5 resorting to 500
2015-10-01 17:13:36.455249 7f6320ce7700 2 req 27:0.004424:s3:PUT
/test-bucket-1/:create_bucket:http status=500
2015-10-01 17:13:36.455259 7f6320ce7700 1 == req done
req=0x3017890 http_status=500 ==
Any help on this would be much appreciated.
Thanks
Abhishek
On Fri, Sep 18, 2015 at 4:38 AM, Robert Duncan wrote:
>
> Hi
>
>
>
> It seems that radosgw cannot find users in Keystone V3 domains; that is,
>
> when Keystone is configured for domain-specific drivers, radosgw cannot find
> the users in the Keystone users table (as they are not there).
>
> I hav
On Thu, Sep 10, 2015 at 3:27 PM, Shinobu Kinjo wrote:
> Thank you for letting me know your thought, Abhishek!!
>
>
> > The Ceph Object Gateway will query Keystone periodically
> > for a list of revoked tokens. These requests are encoded
> > and signed. Also,
On Thu, Sep 10, 2015 at 2:51 PM, Shinobu Kinjo wrote:
> Thank you for your really really quick reply, Greg.
>
> > Yes. A bunch shouldn't ever be set by users.
>
> Anyhow, this is one of my biggest concern right now -;
>
> rgw_keystone_admin_password =
>
>
> MU
map just includes .rgw as
"domain_root". Is this just an inconsistency in the documentation or am I
missing something? Also, it would be nice to have ceph version as an input
in the PG calculation page to avoid such confusion :)
Thank
--
Abhishek
On Thu, Aug 27, 2015 at 3:01 PM, Wido den Hollander wrote:
> On 08/26/2015 05:17 PM, Yehuda Sadeh-Weinraub wrote:
>> On Wed, Aug 26, 2015 at 6:26 AM, Gregory Farnum wrote:
>>> On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander wrote:
Hi,
It's something which has been 'bugging' me
f94663b3700
>>> -1 log_channel(cluster) log [ERR] : 2.490 deep-scrub 17 errors
>>>
>>> So, how i can solve "expected clone" situation by hand?
>>> Thank in advance!
I've had an inconsistent pg once, but it was a different sort of
error (some sort o
On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin wrote:
> On 2015-08-06 17:18, Wido den Hollander wrote:
>>
>> The amount of PGs is cluster wide and not per pool. So if you have 48
>> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide.
>>
>> Now, with enough memory you can easily have 100
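The rule of thumb quoted above can be written out as a small helper (the function name and the rounding note are mine, not from the thread):

```python
def target_pg_count(num_osds, pgs_per_osd=100, replication=3):
    """Cluster-wide PG target: OSDs * ~100 PGs per OSD, divided by the
    replication factor, since each PG occupies `replication` OSDs."""
    return num_osds * pgs_per_osd // replication

# The 48-OSD example from the thread:
print(target_pg_count(48))  # 1600
```

In practice the result is usually rounded to a nearby power of two before being split across pools.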
Hi Peter and Nigel,
I have tried /etc/hosts and it works perfectly fine! But I am looking for
an alternative (if any) to do away completely with hostnames and just use
IP addresses instead.
Thanks
Abhishek
On 13 July 2015 at 12:40, Nigel Williams wrote:
>
> > On 13 Jul 2015, a
deployment.
Thanks
Abhishek
> Any help is appreciated.
>
> Thanks,
> Steve
--
Abhishek
--
Abhishek
a.
>
> Hi Sage:
>
> You wrote "yet" - should we earmark it for hammer backport?
>
I'm guessing https://github.com/ceph/ceph/pull/4973 is the backport for hammer
(issue http://tracker.ceph.com/issues/11981)
Regards
Abhishek
> -Greg
--
Abhishek
buckets index pool
(.rgw.buckets.index), which had the same number of pgs as a few
other pools which host almost no data, like the gc pool, root pool, etc.
Cheers!
Abhishek
efault_pgp_num 128`
and then starting a fresh radosgw (assuming it's not installed
previously); or creating any pool with rados commands, which will fail
when putting different objects because of the increased pgp count
compared to the pg count.
--
Abhishek
On Tue, May 12, 2015 at 9:13 PM, Abhishek L
wrote:
>
> We've had a hammer (0.94.1) (virtual) 3 node/3 osd cluster with radosgws
> failing to start, failing continously with the following error:
>
> --8<---cut here---start->8---
&g
hing works as expected. We are seeing this issue
intermittently in both firefly & hammer in our test scenarios, where
radosgw is started as soon as ceph is deployed (without waiting for
the cluster to become healthy). Any ideas on the cause &a
onfig/#configure-a-secondary-region
--
Abhishek
e cleanup tool that will address your specific
> issue, and we have a design for it, but it's not there yet.
>
Could you share the design/ideas for making the cleanup tool? After an
initial search I could only find two issues:
[1] http://tracker.ceph.com/issues/10342
[2] http://t
--
Abhishek
Sage Weil writes:
[..]
> Thoughts? Suggestions?
>
[..]
Suggestion:
radosgw should handle injectargs like other ceph clients do?
This is not a major annoyance, but it would be nice to have.
--
Abhishek
eph/blob/giant/src/osd/osd_types.h#L815-820
--
Abhishek
ion 0.80.7?
>
Tried ceph.com/docs/firefly
[..]
--
Abhishek
pragya jain writes:
> hi abhishek and lorieri!
>
> the link you have mentioned has a two-node installation: one is the admin node and
> another is the server node.
> For this installation, as I understand, I need two Ubuntu VMs, one for each
> node.
>
> Am I right?
>
You
Lorieri writes:
> http://ceph.com/docs/dumpling/start/quick-ceph-deploy/
These steps work against the current ceph release (firefly) as well, for
me, as long as the config file has the setting
osd crush chooseleaf type = 0
--
Abhishek L
pgp: 69CF 4838 8EE3 746C 5ED4 1F16 F9F0 641F 1B65 E
rror 1
>>> make[1]: Leaving directory `/home/cubie/Source/ceph'
>>> make: *** [build-stamp] Error 2
>>> dpkg-buildpackage: error: debian/rules build gave error exit status 2
For me (ubuntu trusty) building via dpkg-buildpackage seems to work
perfectly fine.
However the o