OSDs on 14 hosts, btw.
Cheers,
Sean M
On 8/01/2020, at 1:32 PM, Sean Matheny <s.math...@auckland.ac.nz> wrote:
We’re adding in a CRUSH hierarchy retrospectively in preparation for a big
expansion. Previously we only had host and osd buckets, and now we’ve added in
rack buckets.
If I also want to change
the failure domain to 'rack', when should I best change this (e.g. after the
rebalancing finishes for moving the hosts to the racks)?
v12.2.2 if it makes a difference.
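(Roughly the sequence I have in mind, for context - the bucket, rule and pool names below are placeholders, not our real layout:)
# create rack buckets and move the hosts into them
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
ceph osd crush move host1 rack=rack1
# later, a rule with rack as the failure domain (Luminous syntax)
ceph osd crush rule create-replicated replicated_racks default rack
# and point the pool(s) at it
ceph osd pool set mypool crush_rule replicated_racks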
Cheers,
Sean M
___
ceph-users mailing list
ceph-users@lists.ceph.com
eph/ceph/pull/29122
https://tracker.ceph.com/issues/36512
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
path.
>
> On Fri, Jun 14, 2019 at 8:27 AM Janne Johansson
> wrote:
> >
> > Den fre 14 juni 2019 kl 13:58 skrev Sean Redmond <
> sean.redmo...@gmail.com>:
> >>
> >> Hi Ceph-Users,
> >> I noticed that Soft Iron now have hardware acceleration
Hi Ceph-Users,
I noticed that SoftIron now have hardware acceleration for Erasure
Coding[1]; this is interesting, as the CPU overhead can be a problem in
addition to the extra disk I/O required for EC pools.
Does anyone know if any other work is ongoing to support generic FPGA
Hardware Acceleration?
Hi,
Will debian packages be released? I don't see them in the nautilus repo. I
thought that Nautilus was going to be debian-friendly, unlike Mimic.
Sean
On Tue, 19 Mar 2019 14:58:41 +0100
Abhishek Lekshmanan wrote:
>
> We're glad to announce the first release of Nautilus
is now available as a regular command.
>
> Wido
I hope so too, especially when bucket lifecycles and versioning are enabled.
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
": true
},
"ID": "Test expiry"
}
]
}
I can't be the only one who wants to use this feature.
Thanks,
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
7
No:
"DeleteMarkers": [
{
"Owner": {
"DisplayName": "static bucket owner",
"ID": "static"
},
"IsLatest": true,
broke the bucket when a reshard happened.
12.2.7 allowed me to remove the regular files but not the delete markers.
There must be a way of removing index files and so forth through rados commands.
Thanks,
Sean
___
ceph-users mailing list
5-9089-cc6e7c5997e7",
"LastModified": "2018-09-17T16:19:58.187Z"
}
]
}
$ aws --profile=owner s3api delete-object --bucket bucket --key
0/0/00fff6df-863d-48b5-9089-cc6e7c5997e7 --version-id
ZB8ty9c3hxjxV5izmIKM1QwDR6fwnsd
returns
I doubt it - Mimic needs gcc v7 I believe, and Trusty's a bit old for that.
Even the Xenial releases aren't straightforward and rely on some backported
packages.
Sean, missing Mimic on debian stretch
On Wed, 19 Sep 2018, Jakub Jaszewski said:
> Hi Cephers,
>
> Any p
On Fri, 7 Sep 2018, Paul Emmerich said:
> Mimic
Unless you run debian, in which case Luminous.
Sean
> 2018-09-07 12:24 GMT+02:00 Vincent Godin :
> > Hello Cephers,
> > if i had to go for production today, which release should i choose :
>
Hi,
We were on 12.2.5 when a bucket with versioning and 100k objects got stuck when
autoreshard kicked in. We could download but not upload files. But after
upgrading to 12.2.7 and running bucket check, bucket limit check now shows
twice as many objects. How do I fix this?
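For reference, this is roughly what I mean by running bucket check (the bucket name is just an example):
radosgw-admin bucket check --bucket=mybucket
radosgw-admin bucket check --bucket=mybucket --check-objects --fix
radosgw-admin bucket limit check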
Sequenc
ects can cause problems."
"If a cluster is unhealthy for an extended period of time (e.g., days or even
weeks), the past interval set can become large enough to require a significant
amount of memory."
Sean
> Aside from the obvious (errors are bad things!), many people have
How should I go about fixing this? The bucket *seems* functional, and I don't
*think* there are extra objects, but the index check thinks there are. How do I
find out what the index actually says, or whether there really are extra files
that need removing?
Thanks for any ideas or pointers.
Sean
[truncated ntpq -p output]
x.ns.gin.ntt.ne    .INIT.          16 u    -   64    0    0.000    0.000    0.000
Make sure that nothing is regularly restarting ntpd. For us, we had puppet
and dhcp regularly fight over the contents of ntp.conf, and it caused a
restart of ntpd.
Sean
On Wed, 15 Aug
radosgw-admin reshard status --bucket test2
[
{
"reshard_status": 0,
"new_bucket_instance_id": "",
"num_shards": -1
},
{
"reshard_status": 0,
"new_bucket_instance_id": "",
"num_shards": -1
}
]
Thanks,
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
erpret this.
--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 ra
Hi,
You can export and import PGs using ceph-objectstore-tool, but if the OSD
won't start you may have trouble exporting a PG.
It may be useful to share the errors you get when trying to start the OSD.
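For the export/import part, a rough sketch of what I mean - the OSD ids, PG id and file path are only examples, and the OSDs involved need to be stopped first:
# on the host with the failing OSD
ceph-objectstore-tool --op export --pgid 2.1f \
  --data-path /var/lib/ceph/osd/ceph-12 \
  --journal-path /var/lib/ceph/osd/ceph-12/journal \
  --file /tmp/2.1f.export
# on a healthy OSD that should receive the PG
ceph-objectstore-tool --op import \
  --data-path /var/lib/ceph/osd/ceph-34 \
  --journal-path /var/lib/ceph/osd/ceph-34/journal \
  --file /tmp/2.1f.export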
Thanks
On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis wrote:
>
>
> Hi
Hi all.
We have an issue with some down+peering PGs (I think); when I try to
mount or access data, the requests are blocked:
114891/7509353 objects degraded (1.530%)
887 stale+active+clean
1 peering
54 active+recovery_wait
19609
Hi,
I also had the same issues and took to disabling this feature.
Thanks
On Mon, Jul 30, 2018 at 8:42 AM, Micha Krause wrote:
> Hi,
>
> I have a Jewel Ceph cluster with RGW index sharding enabled. I've
>> configured the index to have 128 shards. I am upgrading to Luminous. What
>> will h
Hi,
You may need to consider the latency between the AZs; it may make it
difficult to get very high IOPS - I suspect that is the reason EBS is
replicated within a single AZ.
Do you have any data that shows the latency between the AZs?
Thanks
On Sat, 28 Jul 2018, 05:52 Mansoor Ahmed, wrote:
> H
users-boun...@lists.ceph.com] On Behalf Of
Ronny Aasen
Sent: Monday, July 23, 2018 6:13 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Reclaim free space on RBD images that use
Bluestore?
On 23.07.2018 22:18, Sean Bolding wrote:
I have XenServers that connect via iSCSI to Ceph ga
help.
Sean
___
ceph-users mailing list
Hi,
Do you have ongoing resharding? 'radosgw-admin reshard list' should show you
the status.
Do you see the number of objects in the .rgw.bucket.index pool increasing?
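Roughly what I mean - the index pool name depends on your zone configuration, so treat these as examples:
radosgw-admin reshard list
ceph df detail | grep index                      # per-pool object counts
rados -p default.rgw.buckets.index ls | wc -l    # count index objects directly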
I hit a lot of problems trying to use auto resharding in 12.2.5 - I have
disabled it for the moment.
Thanks
[1] https://tracker.cep
Hi Sean,
On Tue, 10 Jul 2018, Sean Redmond said:
> Can you please link me to the tracker 12.2.6 fixes? I have disabled
> resharding in 12.2.5 due to it running endlessly.
http://tracker.ceph.com/issues/22721
Sean
> Thanks
>
> On Tue, Jul 10, 2018 at 9:07 AM, Sean
Hi Sean (Good name btw),
Can you please link me to the tracker 12.2.6 fixes? I have disabled
resharding in 12.2.5 due to it running endlessly.
Thanks
On Tue, Jul 10, 2018 at 9:07 AM, Sean Purdy
wrote:
> While we're at it, is there a release date for 12.2.6? It fixes a
> reshard
While we're at it, is there a release date for 12.2.6? It fixes a
reshard/versioning bug for us.
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
s for the indexes.
>
> HTH,
>
> Matthew
But watch out if you are running Luminous - manual and automatic
resharding breaks if you have versioning or lifecycles on your bucket.
Fix in next stable release 12.2.6 apparently.
http://lists.ceph.com/pipermail/ceph-users-ce
stretch. Is
http://tracker.ceph.com/issues/22365 a fix for this? (12.2.3)
In addition, systemctl start/stop/restart radosgw isn't working and I seem to
have to run the radosgw command and options manually.
Thanks,
Sean Purdy
___
ceph-users ma
Hi,
It sounds like the .rgw.bucket.index pool has grown, maybe due to some
problem with dynamic bucket resharding.
I wonder if the (stale/old/unused) bucket indexes need to be purged
using something like the below:
radosgw-admin bi purge --bucket= --bucket-id=
Not sure how you would find the o
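Something along these lines might help identify them (the bucket name is illustrative) - worth sanity-checking carefully before purging anything:
radosgw-admin bucket stats --bucket=mybucket    # the "id" field is the instance currently in use
radosgw-admin metadata list bucket.instance     # lists all bucket instance ids, including stale ones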
aintained going forwards, and we're a Debian shop. I appreciate Mimic is a
non-LTS release; I hope issues of Debian support are resolved by the time of
the next LTS.
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.co
Hi,
I know the S4600 thread well, as I had over 10 of those drives fail before I
took them all out of production.
Intel did say a firmware fix was on the way, but I could not wait and opted
for the SM863A and never looked back...
I will be sticking with the SM863A for now on further orders.
Thanks
On Thu
iable is a different type or a missing delimiter. womp. I am
definitely out of my depth but now is a great time to learn! Can anyone
shed some more light as to what may be wrong?
On Fri, May 4, 2018 at 7:49 PM, Yan, Zheng wrote:
> On Wed, May 2, 2018 at 7:19 AM, Sean Sullivan wrote:
> >
"max_size_kb": -1,
"max_objects": -1
}
}
I have attempted a bucket index check and fix on this; however, it does not
appear to have made a difference, and no fixes or errors were reported from it.
Does anyone have any advice on how to proceed with removing this content?
At t
You can set a bucket policy with:
$ s3cmd setpolicy policy.json s3://example1/
or similar.
user2 won't see the bucket in their list of buckets, but will be able to read
and list the bucket in this case.
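As a rough illustration (the user name, bucket and actions are placeholders to adapt), policy.json could look something like:
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/user2"]},
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::example1", "arn:aws:s3:::example1/*"]
  }]
}
EOF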
More at
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
Sean
On Tue, 8 May
I believe this isn't a problem with my setup specifically, and anyone else
trying this will have the same issue.
https://tracker.ceph.com/issues/23972
I hope this is the correct path. If anyone can guide me in the right
direction for troubleshooting this further I would be grateful.
On Tue, May 1,
daemons active
mgr dashboard says:
Overall status: HEALTH_WARN
MON_DOWN: 1/3 mons down, quorum box1,box3
I wasn't going to worry too much. I'll check logs and restart an mgr then.
Sean
On Fri, 4 May 2018, John Spray said:
> On Fri, May 4, 2018 at 7:21 AM, Tracy Reed wrote
, May 1, 2018 at 12:09 AM, Patrick Donnelly
wrote:
> Hello Sean,
>
> On Mon, Apr 30, 2018 at 2:32 PM, Sean Sullivan
> wrote:
> > I was creating a new user and mount point. On another hardware node I
> > mounted CephFS as admin to mount as root. I created /aufstest and then
>
r 30, 2018 at 7:24 PM, Sean Sullivan wrote:
> So I think I can reliably reproduce this crash from a ceph client.
>
> ```
> root@kh08-8:~# ceph -s
> cluster:
> id: 9f58ee5a-7c5d-4d68-81ee-debe16322544
> health: HEALTH_OK
>
> services:
> mon: 3 dae
can't seem to get them to start again.
On Mon, Apr 30, 2018 at 5:06 PM, Sean Sullivan wrote:
> I had 2 MDS servers (one active one standby) and both were down. I took a
> dumb chance and marked the active as down (it said it was up but laggy).
> Then started the primary again and now
4:32 PM, Sean Sullivan wrote:
> I was creating a new user and mount point. On another hardware node I
> mounted CephFS as admin to mount as root. I created /aufstest and then
> unmounted. From there it seems that both of my mds nodes crashed for some
> reason and I can't st
I was creating a new user and mount point. On another hardware node I
mounted CephFS as admin to mount as root. I created /aufstest and then
unmounted. From there it seems that both of my mds nodes crashed for some
reason and I can't start them any more.
https://pastebin.com/1ZgkL9fa -- my mds log
Any blog posts to recommend?
It's not a huge cluster, but it does include production data.
Thanks,
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
marker is the latest version. This is available
in AWS for example.
Thanks,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
sure no problem, I posted it here
http://tracker.ceph.com/issues/23839
On Tue, 24 Apr 2018, 16:04 Matt Benjamin, wrote:
> Hi Sean,
>
> Could you create an issue in tracker.ceph.com with this info? That
> would make it easier to iterate on.
>
> thanks and regards,
>
Hi,
We are currently using Jewel 10.2.7, and recently we have been experiencing
some issues with objects being deleted using the gc. After a bucket was
unsuccessfully deleted using --purge-objects (the first error discussed below
occurred), all of the rgw's are occasionally becoming unresponsive and
requ
python-based ones, s3cmd, aws.
Sean
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Just a quick note to say thanks for organising the London Ceph/OpenStack day.
I got a lot out of it, and it was nice to see the community out in force.
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi
We had something similar recently. We had to disable "rgw dns name" in the end.
Sean
On Thu, 29 Mar 2018, Rudenko Aleksandr said:
>
> Hi friends.
>
>
> I'm sorry, maybe it isn't a bug, but I don't know how to solve this problem.
>
> I know that
On Wed, 7 Mar 2018, Wei Jin said:
> Same issue here.
> Will Ceph community support Debian Jessie in the future?
Seems odd to stop it right in the middle of minor point releases. Maybe it was
an oversight? Jessie's still supported in Debian as oldstable and not even in
LTS yet.
S
question really was "are there any performance implications in
deleting large buckets that I should be aware of?". So, not really - it will
just take a while.
The actual cluster is small and balanced with free space. Buckets are not
customer-facing.
Thanks for the advice,
Sean
On T
--bypass-gc
Thanks,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Samsung SM863a 2.5" Enterprise SSD, SATA3 6Gb/s, 2-bit MLC V-NAND
Regards
Sean Redmond
On Wed, Jan 10, 2018 at 11:08 PM, Sean Redmond
wrote:
> Hi David,
>
> Thanks for your email, they are connected inside Dell R730XD (2.5 inch 24
> disk model) in non-RAID mode via a PERC RAID ca
Herselman wrote:
> Hi Sean,
>
>
>
> No, Intel’s feedback has been… Pathetic… I have yet to receive anything
> more than a request to ‘sign’ a non-disclosure agreement, to obtain beta
> firmware. No official answer as to whether or not one can logically unlock
> the drives,
Hi,
I have a case where 3 out of 12 of these Intel S4600 2TB models failed
within a matter of days after being burn-in tested and then placed into
production.
I am interested to know: did you ever get any further feedback from the
vendor on your issue?
Thanks
On Thu, Dec 21, 2017 at 1:38 PM, David
Hi,
Did you see this: http://docs.ceph.com/docs/master/install/get-packages/ ? It
contains details on how to add the apt repos provided by the Ceph project.
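In short, it boils down to something like this - substitute the Ceph release and distro codename you actually want:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-luminous/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph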
You may also want to consider 16.04 if this is a production install as
17.10 has a pretty short life (
https://www.ubuntu.com/info/release-end
Can you share - ceph osd tree / crushmap and `ceph health detail` via
pastebin?
Is recovery stuck or is it ongoing?
On 7 Dec 2017 07:06, "Karun Josy" wrote:
> Hello,
>
> I am seeing health error in our production cluster.
>
> health: HEALTH_ERR
> 1105420/11038158 objects misplaced
Hi Florent,
I have always done mons, osds, rgw, mds, clients.
Packages that don't auto-restart services on update are, IMO, a good thing.
Thanks
On Tue, Dec 5, 2017 at 3:26 PM, Florent B wrote:
> On Debian systems, upgrading packages does not restart services !
>
> On 05/12/2017 16:22, Oscar Sega
ay tuned in future releases for sync plugins that replicate data to (or even
from) cloud storage services like S3!"
But then it looks like you wrote that blog post! I guess I'll stay tuned
Sean
> callbacks that can then act on those changes. For example, the
> metadata se
I can use? I've found Spreadshirt's haproxy fork
which traps requests and updates redis -
https://github.com/spreadshirt/s3gw-haproxy Anybody used that?
Thanks,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
h
Hi,
Is it possible to add new empty osds to your cluster? Or do these also
crash out?
Thanks
On 18 Nov 2017 14:32, "Ashley Merrick" wrote:
> Hello,
>
>
>
> So seems noup does not help.
>
>
>
> Still have the same error :
>
>
>
> 2017-11-18 14:26:40.982827 7fb4446cd700 -1 *** Caught signal (Abo
On freshly installed Ubuntu 16.04 servers with the HWE kernel selected
(4.10), I cannot use ceph-deploy or ceph-disk to provision OSDs.
Whenever I try, I get the following:
ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys
--bluestore --cluster ceph --fs-type xfs -- /dev/s
I am trying to stand up Ceph (Luminous) on three 72-disk Supermicro servers
running Ubuntu 16.04 with HWE enabled (for a 4.10 kernel for CephFS). I am not
sure how this is possible, but even though I am running the following line to
wipe all disks of their partitions, once I run ceph-disk to
partition t
Anybody know how to tweak the plugin to select the stats you want to see? e.g.
monitor paxos stuff doesn't show up either. Perhaps there's a deliberate
limitation somewhere, but it seems odd to show "get" and not "put" request
rates.
(collectd 5.7.1 on debian stre
I have tried using ceph-disk directly and I'm running into all sorts of
trouble, but I'm trying my best. Currently I am using the following
cobbled-together script, which seems to be working:
https://github.com/seapasulli/CephScripts/blob/master/provision_storage.sh
I'm at 11 right now. I hope this works.
___
Are you using radosgw? I found this page useful when I had a similar issue:
http://www.osris.org/performance/rgw.html
Sean
On Wed, 18 Oct 2017, Ольга Ухина said:
> Hi!
>
> I have a problem with ceph luminous 12.2.1. It was upgraded from kraken,
> but I'm not sure if it
I am trying to install Ceph Luminous (ceph version 12.2.1) on 4 Ubuntu 16.04
servers, each with 74 disks, 60 of which are HGST 7200rpm SAS drives:
HGST HUS724040AL sdbv sas
root@kg15-2:~# lsblk --output MODEL,KNAME,TRAN | grep HGST | wc -l
60
I am trying to deploy them all with
a line like th
Hi,
Is there any way that radosgw can ping something when a file is removed or
added to a bucket?
Or use its sync facility to sync files to AWS/Google buckets?
Just thinking about backups. What do people use for backups? Been looking at
rclone.
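(For example, assuming rclone remotes named "rgw" and "aws" are already configured - the names and bucket are placeholders:)
rclone sync rgw:mybucket aws:mybucket-backup --checksum
# or "rclone copy" if you don't want deletions propagated to the backup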
Thanks,
Sean
On Thu, 10 Aug 2017, John Spray said:
> On Thu, Aug 10, 2017 at 4:31 PM, Sean Purdy wrote:
> > Luminous 12.1.1 rc
And 12.2.1 stable
> > We added a new disk and did:
> > That worked, created osd.18, OSD has data.
> >
> > However, mgr output at http://localho
ond
timeout:
from /lib/systemd/system/ceph-disk@.service
Environment=CEPH_DISK_TIMEOUT=1
ExecStart=/bin/sh -c 'timeout $CEPH_DISK_TIMEOUT flock
/var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout
trigger --sync %f'
Sean
> On 15/09/17 16:48, Matthew Ver
bably trying to connect to the 3rd
> monitor, but why? When this monitor is not in quorum.
There's a setting for client timeouts. I forget where.
Sean
> -Original Message-
> From: Sean Purdy [mailto:s.pu...@cv-library.co.uk]
> Sent: donderdag 21 september 2017 12
 94.125.129.7                     3 u  411 1024  377   0.388   -0.331   0.139
*172.16.0.19      158.43.128.33   2 u  289 1024  377   0.282   -0.005   0.103
Sean
> On Wed, Sep 20, 2017 at 2:50 AM Sean Purdy wrote:
>
> >
> > Hi,
> >
> >
> > Luminous 12.2.0
> >
On Wed, 20 Sep 2017, Burkhard Linke said:
> Hi,
>
>
> On 09/20/2017 12:24 PM, Sean Purdy wrote:
> >On Wed, 20 Sep 2017, Burkhard Linke said:
> >>The main reason for having a journal with filestore is having a block device
> >>that supports synchronous
the filestore journal
Our Bluestore disks are hosted on RAID controllers. Should I set cache policy
as WriteThrough for these disks then?
Sean Purdy
> the bluestore wal/rocksdb partitions can be used to allow both faster
> devices (ssd/nvme) and faster sync writes (compared to sp
2.16.0.45:6789/0},
election epoch 378, leader 0 store01, quorum 0,1,2 store01,store02,store03
and everything's happy.
What should I look for/fix? It's a fairly vanilla system.
Thanks in advance,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
2 22:48 18218 s3://test/1486716654.15214271.docx.gpg.99
I have not tried rclone or ACL futzing.
Sean Purdy
> I have opened an issue on s3cmd too
>
> https://github.com/s3tools/s3cmd/issues/919
>
> Thanks for your help
>
> Yoann
>
> > I have a fresh luminou
Datapoint: I have the same issue on 12.1.1, three nodes, 6 disks per node.
On Thu, 31 Aug 2017, Piotr Dzionek said:
> For a last 3 weeks I have been running latest LTS Luminous Ceph release on
> CentOS7. It started with 4th RC and now I have Stable Release.
> Cluster runs fine, however I noticed t
On Wed, 23 Aug 2017, David Turner said:
> This isn't a solution to fix them not starting at boot time, but a fix to
> not having to reboot the node again. `ceph-disk activate-all` should go
> through and start up the rest of your osds without another reboot.
Thanks, will try ne
sd@NN.service" will
work.
What happens at disk detect and mount time? Is there a timeout somewhere I can
extend?
How can I tell udev to have another go at mounting the disks?
If it's in the docs and I've missed it, apologies.
Thanks in advance,
Sean Purdy
__
On Tue, 15 Aug 2017, Sean Purdy said:
> Luminous 12.1.1 rc1
>
> Hi,
>
>
> I have a three node cluster with 6 OSD and 1 mon per node.
>
> I had to turn off one node for rack reasons. While the node was down, the
> cluster was still running and accepting files vi
Hi,
On Thu, 17 Aug 2017, Gregory Farnum said:
> On Wed, Aug 16, 2017 at 4:04 AM Sean Purdy wrote:
>
> > On Tue, 15 Aug 2017, Gregory Farnum said:
> > > On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy
> > wrote:
> > > > I have a three node cluster with 6 OSD an
On Tue, 15 Aug 2017, Gregory Farnum said:
> On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy wrote:
> > I have a three node cluster with 6 OSD and 1 mon per node.
> >
> > I had to turn off one node for rack reasons. While the node was down, the
> > cluster was still runn
quorum.
OSDs had 15 minutes of
ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-9: (2) No such
file or directory
before becoming available.
Advice welcome.
Thanks,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
scrubbing reasons.
Output of related commands below.
Thanks for any help,
Sean Purdy
$ sudo ceph osd tree
ID CLASS WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRI-AFF
-1 32.73651 root default
-3 10.91217 host store01
0 hdd 1.81870 osd.0 up 1.0 1.0
but we're aiming for HA
and redundancy.
Thanks!
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> If you want to get rid of filestore on Btrfs, start a proper deprecation
> process and inform users that support for it it's going to be removed in
> the near future. The documentation must be updated accordingly and it
> must be clearly emph
Hi,
Another newbie question. Do people using radosgw mirror their buckets
to AWS S3 or compatible services as a backup? We're setting up a
small cluster and are thinking of ways to mitigate total disaster.
What do people recommend?
Thanks,
Sean
ceph-mon coexist peacefully with a different zookeeper
already on the same machine?
Thanks,
Sean Purdy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
You should upgrade them all to the latest point release if you don't want
to upgrade to the latest major release.
Start with the mons, then the osds.
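Roughly, on each node in turn - Debian/Ubuntu package commands shown as an example, adjust for your distro and mon id:
# mon nodes, one at a time
apt-get update && apt-get install --only-upgrade ceph-mon ceph-common
systemctl restart ceph-mon@$(hostname -s)
ceph -s   # wait for the mon to rejoin quorum before the next one
# then osd nodes, one at a time
apt-get install --only-upgrade ceph-osd ceph-common
systemctl restart ceph-osd.target
ceph -s   # wait for recovery/HEALTH_OK before moving on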
Thanks
On 3 Mar 2017 18:05, "Curt Beason" wrote:
> Hello,
>
> So this is going to be a noob question probably. I read the
> documentation,
Am I SOL at this point? The cluster isn't production any longer, and while I
don't have months of time, I would really like to recover this cluster just to
see if it is at all possible.
--
- Sean: I wrote this. -
___
ceph-users mailing list
cep
reshold)
max_recent 500
max_new 1000
log_file
--- end dump of recent events ---
Segmentation fault (core dumped)
--
I have tried copying my monitor and admin keyring into the admin.keyring
used to try to r
Hi,
Is the current strange DNS issue with docs.ceph.com related to this also? I
noticed that docs.ceph.com is getting a different A record from
ns4.redhat.com vs ns{1..3}.redhat.com
dig output here > http://pastebin.com/WapDY9e2
Thanks
On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick wrote:
> On 01
Looks like there may be an issue with the ceph.com and tracker.ceph.com
websites at the moment.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
stable the
> technology is in general.
>
>
> Stable. Multiple customers of me run it in production with the kernel
> client and serious load on it. No major problems.
>
> Wido
>
> On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
> wrote:
>
>> What's your use
What's your use case? Do you plan on using kernel or fuse clients?
On 16 Jan 2017 23:03, "Tu Holmes" wrote:
> So what's the consensus on CephFS?
>
> Is it ready for prime time or not?
>
> //Tu
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.c
Is there anything
I can do to figure out where the 500 is coming from or troubleshoot
further?
--
- Sean: I wrote this. -
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
If you need the docs, you can try reading them here:
https://github.com/ceph/ceph/tree/master/doc
On Mon, Jan 2, 2017 at 7:45 PM, Andre Forigato
wrote:
> Hello Marcus,
>
> Yes, it's down. :-(
>
>
> André
>
> - Mensagem original -
> > De: "Marcus Müller"
> > Para: ceph-users@lists.ceph.co
Hi,
Hmm, could you try to dump the crush map, decompile it, modify it to
remove the DNE OSDs, compile it and load it back into Ceph?
http://docs.ceph.com/docs/master/rados/operations/crush-map/#get-a-crush-map
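i.e. something like this (file names are just examples):
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt and delete the DNE osd entries
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new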
Thanks
On Thu, Dec 29, 2016 at 1:01 PM, Łukasz Chrustek wrote:
> Hi,
>
> ]# cep