On 11/09/2019 12:18, Alfredo Deza wrote:
> On Wed, Sep 11, 2019 at 6:18 AM Matthew Vernon wrote:
>> or
>> ii) allow the bootstrap-osd credential to purge OSDs
>
> I wasn't aware that the bootstrap-osd credentials allowed to
> purge/destroy OSDs, are you sure this is po
Hi,
We keep finding part-made OSDs (they appear not attached to any host,
and down and out; but still counting towards the number of OSDs); we
never saw this with ceph-disk. On investigation, this is because
ceph-volume lvm create makes the OSD (ID and auth at least) too early in
the process and
Hi,
On 02/08/2019 13:23, Lars Marowsky-Bree wrote:
> On 2019-08-01T15:20:19, Matthew Vernon wrote:
>
>> One you don't mention is that multipart uploads break during resharding - so
>> if our users are filling up a bucket with many writers uploading multipart
>> objects,
Hi,
On 31/07/2019 19:02, Paul Emmerich wrote:
Some interesting points here, thanks for raising them :)
From our experience: buckets with tens of million objects work just fine with
no big problems usually. Buckets with hundreds of million objects require some
attention. Buckets with billions
On 29/07/2019 23:24, Brent Kennedy wrote:
Apparently sent my email too quickly. I had to install python-pip on
the mgr nodes and run “pip install requests==2.6.0” to fix the missing
module and then reboot all three monitors. Now the dashboard enables no
issue.
I'm a bit confused as to why
On 24/07/2019 20:06, Paul Emmerich wrote:
+1 on adding them all at the same time.
All these methods that gradually increase the weight aren't really
necessary in newer releases of Ceph.
FWIW, we added a rack-full (9x60 = 540 OSDs) in one go to our production
cluster (then running Jewel)
On 11/07/2019 15:40, Paul Emmerich wrote:
Is there already a tracker issue?
I'm seeing the same problem here. Started deletion of a bucket with a
few hundred million objects a week ago or so and I've now noticed that
it's also leaking memory and probably going to crash.
Going to investigate
Hi,
On 17/06/2019 16:00, Harald Staub wrote:
> There are customers asking for 500 million objects in a single object
> storage bucket (i.e. 5000 shards), but also more. But we found some
> places that say that there is a limit in the number of shards per
> bucket, e.g.
Our largest bucket was
On 14/05/2019 00:36, Tarek Zegar wrote:
> It's not just mimic to nautilus
> I confirmed with luminous to mimic
>
> They are checking for clean pgs with flags set, they should unset flags,
> then check. Set flags again, move on to next osd
I think I'm inclined to agree that "norebalance" is
Hi,
On 02/05/2019 22:00, Aaron Bassett wrote:
> With these caps I'm able to use a python radosgw-admin lib to list
> buckets and acls and users, but not keys. This user is also unable to
> read buckets and/or keys through the normal s3 api. Is there a way to
> create an s3 user that has read
Hi,
On 28/02/2019 17:00, Marc Roos wrote:
Should you not be pasting that as an issue on github collectd-ceph? I
hope you don't mind me asking, I am also using collectd and dumping the
data to influx. Are you downsampling with influx? ( I am not :/ [0])
It might be "ask collectd-ceph authors
Hi,
We monitor our Ceph clusters (production is Jewel, test clusters are on
Luminous) with collectd and its official ceph plugin.
The one thing that's missing is per-pool outputs - the collectd plugin
just talks to the individual daemons, none of which have pool details in
- those are
Hi,
On 31/01/2019 17:11, shubjero wrote:
Has anyone automated the ability to generate S3 keys for OpenStack users
in Ceph? Right now we take in a users request manually (Hey we need an
S3 API key for our OpenStack project 'X', can you help?). We as
cloud/ceph admins just use radosgw-admin to
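The manual step described above is easy to wrap in a script. A minimal sketch; the uid and display name are placeholders, and radosgw-admin is stubbed here so the sketch runs without a cluster (drop the stub on a real rgw node):

```shell
# Stub so the sketch runs anywhere; remove on a real rgw node.
radosgw-admin() { echo "stub: radosgw-admin $*"; }

# Create the user (placeholder uid/display-name), then an S3 key pair for it.
radosgw-admin user create --uid="openstack-projX" --display-name="Project X"
radosgw-admin key create --uid="openstack-projX" --key-type=s3 \
    --gen-access-key --gen-secret
```

On a real cluster the second command returns the generated access/secret keys in the user's JSON, which is what you'd hand back to the requester.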
Hi,
On 31/01/2019 16:06, Will Dennis wrote:
Trying to utilize the ‘ceph-ansible’ project
(https://github.com/ceph/ceph-ansible)
to deploy some Ceph servers in a Vagrant testbed; hitting some issues
with some of the plays – where is the right (best) venue to ask
questions about this?
Hi,
On 30/01/2019 02:39, Albert Yue wrote:
> As the number of OSDs increase in our cluster, we reach a point where
> pg/osd is lower than recommend value and we want to increase it from
> 4096 to 8192.
For an increase that small, I'd just do it in one go (and have done so
on our production
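The one-go increase is just two commands. A sketch ("rbd" is a placeholder pool name, and ceph is stubbed so it runs anywhere):

```shell
# Stub so the sketch runs without a cluster; remove on a mon node.
ceph() { echo "stub: ceph $*"; }

ceph osd pool set rbd pg_num 8192
# pgp_num must follow, otherwise the new PGs exist but no data moves:
ceph osd pool set rbd pgp_num 8192
```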
Hi,
On 23/01/2019 22:28, Ketil Froyn wrote:
How is the commercial support for Ceph? More specifically, I was
recently pointed in the direction of the very interesting combination of
CephFS, Samba and ctdb. Is anyone familiar with companies that provide
commercial support for in-house
Hi,
On our Jewel clusters, the mons keep a log of the cluster status e.g.
2019-01-24 14:00:00.028457 7f7a17bef700 0 log_channel(cluster) log
[INF] : HEALTH_OK
2019-01-24 14:00:00.646719 7f7a46423700 0 log_channel(cluster) log
[INF] : pgmap v66631404: 173696 pgs: 10
Hi,
On 22/01/2019 10:02, M Ranga Swami Reddy wrote:
> Hello - If an OSD shown as down and but its still "in" state..what
> will happen with write/read operations on this down OSD?
It depends ;-)
In a typical 3-way replicated setup with min_size 2, writes to placement
groups on that OSD will
Hi,
The default limit for buckets per user in ceph is 1000, but it is
adjustable via radosgw-admin user modify --max-buckets
One of our users is asking for a significant increase (they're mooting
100,000), and I worry about the impact on RGW performance since, I
think, there's only one
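For reference, the adjustment itself is a single command (the uid is a placeholder; radosgw-admin is stubbed so the sketch runs anywhere):

```shell
# Stub so the sketch runs without a cluster; remove on a real rgw node.
radosgw-admin() { echo "stub: radosgw-admin $*"; }

radosgw-admin user modify --uid=bigtenant --max-buckets=100000
```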
Hi,
On 16/01/2019 09:02, Brian Topping wrote:
> I’m looking at writes to a fragile SSD on a mon node,
> /var/lib/ceph/mon/ceph-{node}/store.db is the big offender at the
> moment.
> Is it required to be on a physical disk or can it be in tempfs? One
> of the log files has paxos strings, so I’m
Hi,
On 08/01/2019 18:58, David Galloway wrote:
> The current distro matrix is:
>
> Luminous: xenial centos7 trusty jessie stretch
> Mimic: bionic xenial centos7
Thanks for clarifying :)
> This may have been different in previous point releases because, as Greg
> mentioned in an earlier post
Dear Greg,
On 04/01/2019 19:22, Gregory Farnum wrote:
> Regarding Ceph releases more generally:
[snip]
> I imagine we will discuss all this in more detail after the release,
> but everybody's patience is appreciated as we work through these
> challenges.
Thanks for this. Could you confirm
Hi,
On 04/01/2019 15:34, Abhishek Lekshmanan wrote:
> Ashley Merrick writes:
>
>> If this is another nasty bug like .2? Can’t you remove .3 from being
>> available till .4 comes around?
>
> This time there isn't a nasty bug, just a couple more fixes in .4
> which would be better to have.
Hi,
Since the "where are the bionic packages for Luminous?" question remains
outstanding, I thought I'd look at the question a little further.
The TL;DR is:
Jewel: built for Ubuntu trusty & xenial ; Debian jessie & stretch
Luminous: built for Ubuntu trusty & xenial ; Debian jessie & stretch
Hi,
On 13/12/2018 16:44, Dietmar Rieder wrote:
So you're saying that there will be no problem when I restart the stopped
OSDs after the rebalancing? I mean, they still have the data on them.
(Sorry, I just don't want to mess something up)
It should be fine[0]; when the OSDs come back in ceph will
Hi,
On 13/12/2018 15:48, Dietmar Rieder wrote:
one of our OSD nodes is experiencing a Disk controller problem/failure
(frequent resetting), so the OSDs on this controller are flapping
(up/down in/out).
Ah, hardware...
I have some simple questions, what are the best steps to take now before
Hi,
Sorry for the slow reply.
On 26/11/2018 17:11, Ken Dreyer wrote:
On Thu, Nov 22, 2018 at 11:47 AM Matthew Vernon wrote:
On 22/11/2018 13:40, Paul Emmerich wrote:
We've encountered the same problem on Debian Buster
It looks to me like this could be fixed simply by building the Bionic
Hi,
I've been benchmarking my Luminous test cluster, the s3 user has deleted
all objects and buckets, and yet the RGW data pool is using 7TiB of data:
default.rgw.buckets.data 11 7.16TiB 3.27 212TiB 1975644
There are no buckets left (radosgw-admin bucket list returns []), and
On 03/12/2018 22:46, Mike Perez wrote:
Also as a reminder, lets try to coordinate our submissions on the CFP
coordination pad:
https://pad.ceph.com/p/cfp-coordination
I see that mentions a Cephalocon in Barcelona in May. Did I miss an
announcement about that?
Regards,
Matthew
On 07/11/2018 23:28, Neha Ojha wrote:
> For those who haven't upgraded to 12.2.9 -
>
> Please avoid this release and wait for 12.2.10.
Any idea when 12.2.10 is going to be here, please?
Regards,
Matthew
--
The Wellcome Sanger Institute is operated by Genome Research
Limited, a charity
On 22/11/2018 13:40, Paul Emmerich wrote:
We've encountered the same problem on Debian Buster
It looks to me like this could be fixed simply by building the Bionic
packages in a Bionic chroot (ditto Buster); maybe that could be done in
future? Given I think the packaging process is being
Hi,
The ceph.com ceph luminous packages for Ubuntu Bionic still depend on
libcurl3 (specifically ceph-common, radosgw and librgw2 all depend on
libcurl3 (>= 7.28.0)).
This means that anything that depends on libcurl4 (which is the default
libcurl in bionic) isn't co-installable with ceph. That
Hi,
[apropos auto-repair for scrub settings]
On 15/11/2018 18:45, Mark Schouten wrote:
As a user, I’m very surprised that this isn’t a default setting.
We've been too cowardly to do it so far; even on a large cluster the
occasional ceph pg repair hasn't taken up too much admin time, and the
Hi,
We currently deploy our filestore OSDs with ceph-disk (via
ceph-ansible), and I was looking at using ceph-volume as we migrate to
bluestore.
Our servers have 60 OSDs and 2 NVME cards; each OSD is made up of a
single hdd, and an NVME partition for journal.
If, however, I do:
ceph-volume lvm
Hi,
On 08/11/2018 22:38, Ken Dreyer wrote:
What's the full apt-get command you're running?
I wasn't using apt-get, because the ceph repository has the broken
12.2.9 packages in it (and I didn't want to install them, obviously); so
I downloaded all the .debs I needed, installed the
On 08/11/2018 16:31, Matthew Vernon wrote:
Hi,
in Jewel, /etc/bash_completion.d/radosgw-admin is in the radosgw package
In Luminous, /etc/bash_completion.d/radosgw-admin is in the ceph-common
package
...so if you try and upgrade, you get:
Unpacking ceph-common (12.2.8-1xenial) over (10.2.9
On 08/11/2018 16:31, Matthew Vernon wrote:
The exact versioning would depend on when the move was made (I presume
either Jewel -> Kraken or Kraken -> Luminous). Does anyone know?
To answer my own question, this went into 12.0.3 via
https://github.com/ceph/ceph/
Hi,
in Jewel, /etc/bash_completion.d/radosgw-admin is in the radosgw package
In Luminous, /etc/bash_completion.d/radosgw-admin is in the ceph-common
package
...so if you try and upgrade, you get:
Unpacking ceph-common (12.2.8-1xenial) over (10.2.9-0ubuntu0.16.04.1) ...
dpkg: error processing
On 08/11/2018 09:17, Marc Roos wrote:
And that is why I don't like ceph-deploy. Unless you have maybe hundreds
of disks, I don’t see why you cannot install it "manually".
...as the recent ceph survey showed, plenty of people have hundreds of
disks! Ceph is meant to be operated at scale,
On 07/11/2018 14:16, Marc Roos wrote:
>
>
> I don't see the problem. I am installing only the ceph updates when
> others have done this and are running several weeks without problems. I
> have noticed this 12.2.9 availability also, did not see any release
> notes, so why install it?
On 07/11/2018 10:59, Konstantin Shalygin wrote:
>> I wonder if there is any release announcement for ceph 12.2.9 that I missed.
>> I just found the new packages on download.ceph.com, is this an official
>> release?
>
> This is because 12.2.9 have a several bugs. You should avoid to use this
>
Hi,
On 10/26/18 2:55 PM, David Turner wrote:
It is indeed adding a placement target and not removing it replacing the
pool. The get/put wouldn't be a rados or even ceph command, you would do
it through an s3 client.
Which is an interesting idea, but presumably there's no way of knowing
which
Hi,
On 26/10/2018 12:38, Alexandru Cucu wrote:
> Have a look at this article:>
> https://ceph.com/geen-categorie/ceph-pool-migration/
Thanks; that all looks pretty hairy especially for a large pool (ceph df
says 1353T / 428,547,935 objects)...
...so something a bit more controlled/gradual and
Hi,
On 25/10/2018 17:57, David Turner wrote:
> There are no tools to migrate in either direction between EC and
> Replica. You can't even migrate an EC pool to a new EC profile.
Oh well :-/
> With RGW you can create a new data pool and new objects will be written
> to the new pool. If your
Hi,
I thought I'd seen that it was possible to migrate a replicated pool to
being erasure-coded (but not the converse); but I'm failing to find
anything that says _how_.
Have I misremembered? Can you migrate a replicated pool to EC? (if so, how?)
...our use case is moving our S3 pool which
On 17/10/18 15:23, Paul Emmerich wrote:
[apropos building Mimic on Debian 9]
apt-get install -y g++ libc6-dbg libc6 -t testing
apt-get install -y git build-essential cmake
I wonder if you could avoid the "need a newer libc" issue by using
backported versions of cmake/g++ ?
Regards,
Hi,
On 15/10/18 11:44, Vincent Godin wrote:
> Does a man exist on ceph-objectstore-tool ? if yes, where can i find it ?
No, but there is some --help output:
root@sto-1-1:~# ceph-objectstore-tool --help
Allowed options:
--help produce help message
--type arg
Hi,
On 24/07/18 06:02, Satish Patel wrote:
> My 5 node ceph cluster is ready for production, now i am looking for
> good monitoring tool (Open source), what majority of folks using in
> their production?
This does come up from time to time, so it's worth checking the list
archives.
We use
Hi,
> One of my server silently shutdown last night, with no explanation
> whatsoever in any logs. According to the existing logs, the shutdown
We have seen similar things with our SuperMicro servers; our current
best theory is that it's related to CPU power management. Disabling it
in BIOS
Hi,
On 21/07/18 04:24, Satish Patel wrote:
> I am using openstack-ansible with ceph-ansible to deploy my Ceph
> custer and here is my config in yml file
You might like to know that there's a dedicated (if quiet!) list for
ceph-ansible - ceph-ansi...@lists.ceph.com
Regards,
Matthew
Hi,
On 19/07/18 17:19, CUZA Frédéric wrote:
> After that we tried to remove the orphans :
>
> radosgw-admin orphans find --pool=default.rgw.buckets.data
> --job-id=ophans_clean
>
> radosgw-admin orphans finish --job-id=ophans_clean
>
> It finds some orphans : 85, but the command finish seems
Hi,
On 17/07/18 01:29, Brad Hubbard wrote:
> Your issue is different since not only do the omap digests of all
> replicas not match the omap digest from the auth object info but they
> are all different to each other.
>
> What is min_size of pool 67 and what can you tell us about the events
>
Hi,
Our cluster is running 10.2.9 (from Ubuntu; on 16.04 LTS), and we have a
pg that's stuck inconsistent; if I repair it, it logs "failed to pick
suitable auth object" (repair log attached, to try and stop my MUA
mangling it).
We then deep-scrubbed that pg, at which point
rados
Hi,
Some of our users have Quite Large buckets (up to 20M objects in a
bucket), and AIUI best practice would be to have sharded indexes for
those buckets (of the order of 1 shard per 100k objects).
On a trivial test case (make a 1M-object bucket, shard index to 10
shards, s3cmd ls s3://bucket
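The 1-shard-per-100k-objects rule of thumb above, rounded up, as a helper for picking a --num-shards value:

```shell
# ~100k objects per index shard, rounded up.
recommended_shards() {  # usage: recommended_shards <object_count>
  echo $(( ($1 + 99999) / 100000 ))
}

recommended_shards 20000000   # a 20M-object bucket -> 200 shards
```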
Hi,
On 14/05/18 17:49, Marc Boisis wrote:
> Currently we have a 294 OSD (21 hosts/3 racks) cluster with RBD clients
> only, 1 single pool (size=3).
That's not a large cluster.
> We want to divide this cluster into several to minimize the risk in case
> of failure/crash.
> For example, a
Hi,
On 04/05/18 08:25, Tracy Reed wrote:
> On Fri, May 04, 2018 at 12:18:15AM PDT, Tracy Reed spake thusly:
>> https://jcftang.github.io/2012/09/06/going-from-replicating-across-osds-to-replicating-across-hosts-in-a-ceph-cluster/
>
>> How can I tell which way mine is configured? I could post the
Hi,
TL;DR there seems to be a problem with quota calculation for rgw in our
Jewel / Ubuntu 16.04 cluster. Our support people suggested we raise it
with upstream directly; before I open a tracker item I'd like to check
I've not missed something obvious :)
Our cluster is running Jewel on
On 04/04/18 10:30, Matthew Vernon wrote:
> Hi,
>
> We have an rgw user who had a bunch of partial multipart uploads in a
> bucket, which they then deleted. radosgw-admin bucket list doesn't show
> the bucket any more, but user stats --sync-stats still has (I think)
> the conte
Hi,
We have an rgw user who had a bunch of partial multipart uploads in a
bucket, which they then deleted. radosgw-admin bucket list doesn't show
the bucket any more, but user stats --sync-stats still has (I think)
the contents of that bucket counted against the users' quota.
So, err, how do I
Hi,
What are people here using to benchmark their S3 service (i.e. the rgw)?
rados bench is great for some things, but doesn't tell me about what
performance I can get from my rgws.
It seems that there used to be rest-bench, but that isn't in Jewel
AFAICT; I had a bit of a look at cosbench but
On 05/02/18 15:54, Wes Dillingham wrote:
> Good data point on not trimming when non active+clean PGs are present.
> So am I reading this correct? It grew to 32GB? Did it end up growing
> beyond that, what was the max?
The largest Mon store size I've seen (in a 3000-OSD cluster) was about 66GB.
On 29/11/17 17:24, Matthew Vernon wrote:
> We have a 3,060 OSD ceph cluster (running Jewel
> 10.2.7-0ubuntu0.16.04.1), and one OSD on one host keeps misbehaving - by
> which I mean it keeps spinning ~100% CPU (cf ~5% for other OSDs on that
> host), and having ops blocking on it f
Hi,
We have a 3,060 OSD ceph cluster (running Jewel
10.2.7-0ubuntu0.16.04.1), and one OSD on one host keeps misbehaving - by
which I mean it keeps spinning ~100% CPU (cf ~5% for other OSDs on that
host), and having ops blocking on it for some time. It will then behave
for a bit, and then go back
Hi,
On 20/11/17 15:00, Gerhard W. Recher wrote:
Just interjecting here because I keep seeing things like this, and
they're often buggy, and there's an easy answer:
> DEVICE=`mount | grep /var/lib/ceph/osd/ceph-$ID| cut -f1 -d"p"`
findmnt(8) is your friend, any time you want to find out about
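the mount table without a fragile grep/cut pipeline. For the case quoted above, a sketch (the OSD id is a placeholder; -n drops the header, -o picks exactly the column you want, and --target walks up to the containing mountpoint):

```shell
ID=12   # placeholder OSD id
findmnt -n -o SOURCE --target "/var/lib/ceph/osd/ceph-$ID"
```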
Hi,
On 09/10/17 16:09, Sage Weil wrote:
To put this in context, the goal here is to kill ceph-disk in mimic.
One proposal is to make it so new OSDs can *only* be deployed with LVM,
and old OSDs with the ceph-disk GPT partitions would be started via
ceph-volume support that can only start (but
Hi,
The recent FOSDEM CFP reminded me to wonder if there's likely to be a
Cephalocon in 2018? It was mentioned as a possibility when the 2017 one
was cancelled...
Regards,
Matthew
--
The Wellcome Trust Sanger Institute is operated by Genome Research
Limited, a charity registered in
On 02/10/17 20:26, Erik McCormick wrote:
> On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon <m...@sanger.ac.uk> wrote:
>> Making a dashboard is rather a matter of personal preference - we plot
>> client and s3 i/o, network, server load & CPU use, and have indicator
On 02/10/17 12:34, Osama Hasebou wrote:
> Hi Everyone,
>
> Is there a guide/tutorial about how to setup Ceph monitoring system
> using collectd / grafana / graphite ? Other suggestions are welcome as
> well !
We just installed the collectd plugin for ceph, and pointed it at our
grahphite server;
Hi,
On 29/09/17 01:00, Brad Hubbard wrote:
> This looks similar to
> https://bugzilla.redhat.com/show_bug.cgi?id=1458007 or one of the
> bugs/trackers attached to that.
Yes, although increasing the timeout still leaves the issue that if the
timeout fires you don't get anything resembling a
Hi,
TL;DR - the timeout setting in ceph-disk@.service is (far) too small -
it needs increasing and/or removing entirely. Should I copy this to
ceph-devel?
On 15/09/17 16:48, Matthew Vernon wrote:
On 14/09/17 16:26, Götz Reinicke wrote:
After that, 10 OSDs did not came up as the others
On 19/09/17 10:40, Wido den Hollander wrote:
>
>> On 19 September 2017 at 10:24, Adrian Saul wrote:
>>
>>
>>> I understand what you mean and it's indeed dangerous, but see:
>>> https://github.com/ceph/ceph/blob/master/systemd/ceph-osd%40.service
>>>
>>>
On 18/09/17 16:37, Matthew Vernon wrote:
> On 13/09/17 15:06, Marc Roos wrote:
>>
>>
>> Am I the only one having these JSON issues with collectd, did I do
>> something wrong in configuration/upgrade?
>
> I also see these, although my dashboard seems to mostl
On 13/09/17 15:06, Marc Roos wrote:
>
>
> Am I the only one having these JSON issues with collectd, did I do
> something wrong in configuration/upgrade?
I also see these, although my dashboard seems to mostly be working. I'd
be interested in knowing what the problem is!
> Sep 13 15:44:15 c01
Hi,
On 14/09/17 16:26, Götz Reinicke wrote:
> maybe someone has a hint: I do have a cephalopod cluster (6 nodes, 144
> OSDs), Cents 7.3 ceph 10.2.7.
>
> I did a kernel update to the recent centos 7.3 one on a node and did a
> reboot.
>
> After that, 10 OSDs did not came up as the others. The
Hi,
On 06/09/17 16:23, Sage Weil wrote:
> Traditionally, we have done a major named "stable" release twice a year,
> and every other such release has been an "LTS" release, with fixes
> backported for 1-2 years.
We use the ceph version that comes with our distribution (Ubuntu LTS);
those come
Hi,
We have a medium-sized (2520 osds, 42 hosts, 88832 pgs, 15PB raw
capacity) Jewel cluster (on Ubuntu), and in normal operation, our mon
store size is around the 1.2G mark. I've noticed, though, that when
doing larger rebalances, they can grow really very large (up to nearly
70G, which is
Hi,
On 18/07/17 05:08, Marcus Furlong wrote:
On 22 March 2017 at 05:51, Dan van der Ster wrote:
Apologies for reviving an old thread, but I figured out what happened
and never documented it, so I thought an update might be useful.
[snip
Hi,
On 07/07/17 13:03, David Turner wrote:
> So many of your questions depends on what your cluster is used for. We
> don't even know rbd or cephfs from what you said and that still isn't
> enough to fully answer your questions. I have a much smaller 3 node
> cluster using Erasure coding for rbds
Hi,
Currently, our ceph cluster is all 3-way replicated, and works very
nicely. We're consider the possibility of adding an erasure-coding pool;
which I understand would require a cache tier in front of it to ensure
decent performance.
I am wondering what sort of spec should we be thinking about
Hi,
On 01/06/17 10:38, Oliver Humpage wrote:
These read errors are all on Samsung 850 Pro 2TB disks (journals are
on separate enterprise SSDs). The SMART status on all of them are
similar and show nothing out of the ordinary.
Has anyone else experienced anything similar? Is this just a curse
Hi,
This has bitten us a couple of times now (such that we're considering
re-building util-linux with the nilfs2 code commented out), so I'm
wondering if anyone else has seen it [and noting the failure mode in
case anyone else is confused in future]
We see this with our setup of rotating media
Hi,
> How many OSD's are we talking about? We're about 500 now, and even
> adding another 2000-3000 is a 5 minute cut/paste job of editing the
> CRUSH map. If you really are adding racks and racks of OSD's every week,
> you should have found the crush location hook a long time ago.
We have 540
d-bucket and ceph osd set ... but that feels more like a
lash-up and less like a properly-engineered solution to what must be a
fairly common problem?
Regards,
Matthew
> On Wed, Apr 12, 2017 at 4:46 PM, Matthew Vernon <m...@sanger.ac.uk
> <mailto:m...@sanger.ac.uk>> wrote:
>
>
Hi,
Our current (jewel) CRUSH map has rack / host / osd (and the default
replication rule does step chooseleaf firstn 0 type rack). We're shortly
going to be adding some new hosts in new racks, and I'm wondering what
the least-painful way of getting the new osds associated with the
correct (new)
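racks. One properly-engineered answer is a crush location hook (set via "crush location hook = /path/to/script" in ceph.conf): ceph runs it when an OSD starts and uses the printed key=value pairs as the OSD's CRUSH position. A sketch; the hostname-to-rack mapping below is invented for illustration:

```shell
# Hypothetical hook: derive the rack from the hostname and print the
# CRUSH location. Takes an optional hostname argument for testing.
crush_location() {
  host="${1:-$(hostname -s)}"
  case "$host" in
    sto-1-*) rack=rack-1 ;;
    sto-2-*) rack=rack-2 ;;
    *)       rack=rack-unknown ;;
  esac
  echo "root=default rack=$rack host=$host"
}

crush_location sto-2-7
```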
Hi,
radosgw-admin user create sometimes seems to misbehave when trying to
create similarly-named accounts with the same email address:
radosgw-admin -n client.rgw.sto-1-2 user create --uid=XXXDELETEME
--display-name=carthago --email=h...@sanger.ac.uk
{
"user_id": "XXXDELETEME",
[...]
On 09/03/17 11:28, Matthew Vernon wrote:
https://drive.google.com/drive/folders/0B4TV1iNptBAdMEdUaGJIa3U1QVE?usp=sharing
[For the avoidance of doubt, I've changed the key associated with that
S3 account :-) ]
Regards,
Matthew
On 09/03/17 10:45, Abhishek Lekshmanan wrote:
On 03/09/2017 11:26 AM, Matthew Vernon wrote:
I'm using Jewel / 10.2.3-0ubuntu0.16.04.2 . We want to keep track of our
S3 users' quota and usage. Even with a relatively small number of users
(23) it's taking ~23 seconds.
What we do is (in outline
Hi,
I'm using Jewel / 10.2.3-0ubuntu0.16.04.2 . We want to keep track of our
S3 users' quota and usage. Even with a relatively small number of users
(23) it's taking ~23 seconds.
What we do is (in outline):
radosgw-admin metadata list user
for each user X:
radosgw-admin user info --uid=X
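The outline above as a runnable sketch. radosgw-admin is stubbed with two fake users so the loop structure can be seen without a cluster; note the real metadata list emits a JSON array, not bare names, so you'd need a little parsing:

```shell
# Stub: the real "metadata list user" returns JSON; this stands in with
# plain names so the loop runs anywhere.
radosgw-admin() {
  case "$1" in
    metadata) printf '%s\n' alice bob ;;
    user)     echo "stub: $*" ;;
  esac
}

for u in $(radosgw-admin metadata list user); do
  radosgw-admin user info --uid="$u"
done
```

One radosgw-admin fork per user is where the ~1s/user cost comes from, which is why this scales so poorly.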
Dear Marc,
On 28/01/17 23:43, Marc Roos wrote:
Is there a doc that describes all the parameters that are published by
collectd-ceph?
The best I've found is the Redhat documentation of the performance
counters (which are what collectd-ceph is querying):
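The same counters can also be dumped live from a daemon's admin socket, which is handy for checking exactly what collectd-ceph will see. ceph is stubbed here; on a storage node the real commands work as shown:

```shell
# Stub so the sketch runs anywhere; remove on a node with a running OSD.
ceph() { echo "stub: ceph $*"; }

ceph daemon osd.0 perf schema   # names and descriptions of every counter
ceph daemon osd.0 perf dump     # current values
```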
One of the freezing nodes is osds-only, another is osds-and-mons. The
soft-lockup node is osds-and-rgw
Regards,
Matthew
> //Tu
> On Mon, Jan 23, 2017 at 8:38 AM Matthew Vernon <m...@sanger.ac.uk
> <mailto:m...@sanger.ac.uk>> wrote:
>
> Hi,
>
> We have a 9-node ceph c
Hi,
We have a 9-node ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu
Xenial). We're seeing both machines freezing (nothing in logs on the
machine, which is entirely unresponsive to anything except the power
button) and suffering soft lockups.
Has anyone seen similar? Googling hasn't found
Hi,
On 19/01/17 13:58, Chris Sarginson wrote:
> You look to have a typo in this line:
>
> rgw_frontends = "civetweb port=8080s
> ssl_certificate=/etc/pki/tls/cephrgw01.crt"
>
> It would seem from the error it should be port=8080, not port=8080s.
I think you are incorrect; port=8080s is what
Hello,
On 15/12/16 10:25, David Disseldorp wrote:
> Are you using the Linux kernel CephFS client (mount.ceph), or the
> userspace ceph-fuse back end? Quota enforcement is performed by the
> client, and is currently only supported by ceph-fuse.
Is server enforcement of quotas planned?
Regards,
Hi,
On 15/11/16 11:55, Craig Chi wrote:
> You can try to manually fix this by adding the
> /lib/systemd/system/ceph-mon.target file, which contains:
> and then execute the following command to tell systemd to start this
> target on bootup
> systemctl enable ceph-mon.target
This worked a
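treat. For anyone hitting the same thing, a sketch of the fix: the unit body is a minimal reconstruction (not a copy of the packaged file), and UNIT_DIR plus the systemctl stub exist only so the sketch can run outside a mon node; on the real host use /lib/systemd/system and the real systemctl.

```shell
UNIT_DIR="${UNIT_DIR:-$(mktemp -d)}"          # /lib/systemd/system on a real node
systemctl() { echo "stub: systemctl $*"; }    # remove on a real node

cat > "$UNIT_DIR/ceph-mon.target" <<'EOF'
[Unit]
Description=ceph target allowing to start/stop all ceph-mon instances at once
PartOf=ceph.target

[Install]
WantedBy=multi-user.target ceph.target
EOF

systemctl enable ceph-mon.target
```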
Hi,
On 15/11/16 01:27, Craig Chi wrote:
> What's your Ceph version?
> I am using Jewel 10.2.3 and systemd seems to work normally. I deployed
> Ceph by ansible, too.
The version in Ubuntu 16.04, which is 10.2.2-0ubuntu0.16.04.2
> You can check whether you have
Hi,
I have a problem that my ceph-mon isn't getting started when my machine
boots; the OSDs start up just fine. Checking logs, there's no sign of
systemd making any attempt to start it, although it is seemingly enabled:
root@sto-1-1:~# systemctl status ceph-mon@sto-1-1
● ceph-mon@sto-1-1.service
Hi,
I'm configuring ceph as the storage for our openstack install. One thing
we might want to do in the future is have a second openstack instance
(e.g. to test the next release of openstack); we might well want to have
this talk to our existing ceph cluster.
I could do this by giving each stack
Hi,
I have a Jewel/Ubuntu 16.04 ceph cluster. I attempted to add some
radosgws, having already made the pools I thought they would need per
http://docs.ceph.com/docs/jewel/radosgw/config-ref/#pools
i.e. .rgw and so on:
.rgw
.rgw.control
.rgw.gc
.log
.intent-log
.usage
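Pre-creating those pools is a simple loop. A sketch (ceph is stubbed so it runs anywhere; the pg count of 8 is an arbitrary placeholder, and the pool list above is truncated in the post, so extend it as needed):

```shell
# Stub so the sketch runs without a cluster; remove on a mon node.
ceph() { echo "stub: ceph $*"; }

for pool in .rgw .rgw.control .rgw.gc .log .intent-log .usage; do
  ceph osd pool create "$pool" 8   # 8 PGs is a placeholder; size for your cluster
done
```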