Re: [ceph-users] CephFS with cache-tier kernel-mount client unable to write (Nautilus)

2020-01-22 Thread Hayashida, Mami
Computing Infrastructure On Tue, Jan 21, 2020 at 2:21 PM Ilya Dryomov wrote: > On Tue, Jan 21, 2020 at 7:51 PM Hayashida, Mami > wrote: > > > > Ilya, > > > > Thank you for your suggestions! > > > > `dmesg` (on the client node) only had `libceph: mon0 10.
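
(For context, the client-side checks discussed in this reply can be reproduced roughly as follows; the client name and pool names are placeholders, not the ones from the thread. One known pitfall with a cache tier is that restrictive client OSD caps which only name the base data pool can block writes once I/O is redirected to the cache pool.)

    dmesg | tail -n 50                   # look for libceph / ceph error lines on the client
    ceph auth get client.cephfs-user     # inspect the kernel client's mon/mds/osd caps
    # If the osd cap only covers the base data pool, granting the cache pool
    # explicitly is one option (hypothetical pool names):
    ceph auth caps client.cephfs-user \
        mds 'allow rw' mon 'allow r' \
        osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_cache'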

Re: [ceph-users] CephFS with cache-tier kernel-mount client unable to write (Nautilus)

2020-01-21 Thread Hayashida, Mami
: > On Tue, Jan 21, 2020 at 6:02 PM Hayashida, Mami > wrote: > > > > I am trying to set up a CephFS with a Cache Tier (for data) on a mini > test cluster, but a kernel-mount CephFS client is unable to write. Cache > tier setup alone seems to be working fine (I tested it with `r

[ceph-users] CephFS with cache-tier kernel-mount client unable to write (Nautilus)

2020-01-21 Thread Hayashida, Mami
I am trying to set up a CephFS with a Cache Tier (for data) on a mini test cluster, but a kernel-mount CephFS client is unable to write. Cache tier setup alone seems to be working fine (I tested it with `rados put` and `osd map` commands to verify on which OSDs the objects are placed) and setting
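
(A minimal cache-tier setup of the kind described here looks roughly like the following; the pool names and test object are placeholders.)

    ceph osd tier add cephfs_data cephfs_cache           # attach the cache pool to the base pool
    ceph osd tier cache-mode cephfs_cache writeback
    ceph osd tier set-overlay cephfs_data cephfs_cache   # redirect client I/O through the cache
    ceph osd pool set cephfs_cache hit_set_type bloom
    # verify on which OSDs a test object lands, as described above:
    rados -p cephfs_data put testobj /etc/hosts
    ceph osd map cephfs_data testobj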

Re: [ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
Please disregard the earlier message. I found the culprit: `osd_crush_update_on_start` was set to false. *Mami Hayashida* *Research Computing Associate* Univ. of Kentucky ITS Research Computing Infrastructure On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami wrote: > I am trying to bu
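
(For reference, a sketch of checking and correcting that setting on a Mimic/Nautilus-era cluster; the OSD id and host name are placeholders.)

    ceph config get osd osd_crush_update_on_start     # normally true
    ceph config set osd osd_crush_update_on_start true
    # or place an already-created OSD manually in the CRUSH map:
    ceph osd crush set osd.0 1.0 host=osd0 root=default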

[ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
I am trying to build a Ceph cluster using ceph-deploy. To add OSDs, I used the following command (which I had successfully used before to build another cluster): ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh osd0 ceph-deploy osd create --block-db=ssd0/db1 --data=/dev/sdi osd0
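
(After each ceph-deploy osd create run, placement and weight can be sanity-checked with, for example:)

    ceph osd tree        # hosts should appear under the root with their OSDs at non-zero weight
    ceph osd df tree     # weight, reweight and utilization per OSD in one view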

[ceph-users] Enabling Dashboard RGW management functionality

2019-02-21 Thread Hayashida, Mami
I followed the documentation (http://docs.ceph.com/docs/mimic/mgr/dashboard/) to enable the dashboard RGW management, but am still getting the 501 error ("Please consult the documentation on how to configure and enable the Object Gateway... "). The dashboard itself is working. 1. create a RGW
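
(For reference, the Mimic dashboard documentation referenced here boils down to roughly the following; the user ID and keys are placeholders.)

    radosgw-admin user create --uid=dashboard --display-name=Dashboard --system
    # copy access_key / secret_key from the JSON output, then:
    ceph dashboard set-rgw-api-access-key <access-key>
    ceph dashboard set-rgw-api-secret-key <secret-key>
    # if the gateway is not auto-detected, point the dashboard at it explicitly:
    ceph dashboard set-rgw-api-host <rgw-host>
    ceph dashboard set-rgw-api-port <rgw-port>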

[ceph-users] Turning RGW data pool into an EC pool

2019-01-17 Thread Hayashida, Mami
I would like to know the simplest and surest way to set up an RGW instance with an EC pool for storing a large quantity of data. 1. I am currently trying to do this on a cluster that is not yet open to users. (i.e. I can mess around with it and, in the worst case, start all over.) 2. I deployed RGW
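
(One common approach, sketched with hypothetical profile and pool names: create the EC data pool up front so RGW does not auto-create a replicated one, or recreate it before any data is written.)

    ceph osd erasure-code-profile set rgw-ec-profile k=4 m=2 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 256 256 erasure rgw-ec-profile
    ceph osd pool application enable default.rgw.buckets.data rgw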

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
ode, I will go back to the previous node and see if I can zap it and start all over again. On Wed, Nov 7, 2018 at 12:21 PM, Hector Martin wrote: > On 11/8/18 2:15 AM, Hayashida, Mami wrote: > > Thank you very much. Yes, I am aware that zapping the SSD and > > converting it to LVM

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
in the past -- so that's probably a good preemptive move. On Wed, Nov 7, 2018 at 10:46 AM, Hector Martin wrote: > On 11/8/18 12:29 AM, Hayashida, Mami wrote: > > Yes, that was indeed a copy-and-paste mistake. I am trying to use > > /dev/sdh (hdd) for data and a part o

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
: > ceph osd destroy 70 --yes-i-really-mean-it > > I am guessing that’s a copy and paste mistake and should say 120. > > Is the SSD @ /dev/sdh fully for OSD 120, or is a partition on this SSD the journal and other partitions for other SSDs? > > On Wed, 7 Nov 2018 at 1

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
--all` command. > > Maybe if you try one by one, capturing each command, throughout the > process, with output. In the filestore-to-bluestore guides we never > advertise `activate --all` for example. > > Something is missing here, and I can't tell what it is. > On Tue, Nov 6, 2018

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
nnot restart any of these 10 daemons (`systemctl start ceph-osd@6[0-9]`). I am wondering if I should zap these 10 osds and start over, although at this point I am afraid even zapping may not be a simple task. On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin wrote: > On 11/7/18 5:27 AM, Hayash

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
, Hayashida, Mami wrote: > Ok. I will go through this this afternoon and let you guys know the > result. Thanks! > > On Tue, Nov 6, 2018 at 11:32 AM, Hector Martin > wrote: > >> On 11/7/18 1:00 AM, Hayashida, Mami wrote: >> > I see. Thank you for clarif

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
Ok. I will go through this this afternoon and let you guys know the result. Thanks! On Tue, Nov 6, 2018 at 11:32 AM, Hector Martin wrote: > On 11/7/18 1:00 AM, Hayashida, Mami wrote: > > I see. Thank you for clarifying lots of things along the way -- this > > has been ex

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
"ceph.crush_device_class": "None", "ceph.db_device": "/dev/ssd0/db60", "ceph.db_uuid": "d32eQz-79GQ-2eJD-4ANB-vr0O-bDpb-fjWSD5", "ceph.encrypted": "0",

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
I see. Thank you for clarifying lots of things along the way -- this has been extremely helpful. Neither "df | grep osd" nor "mount | grep osd" shows ceph-60 through 69. On Tue, Nov 6, 2018 at 10:57 AM, Hector Martin wrote: > > > On 11/7/18 12:48 AM, Hayashida, M

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
not listed. (Should it be?) Neither does mount list that drive. ("df | grep sdh" and "mount | grep sdh" both return nothing) On Tue, Nov 6, 2018 at 10:42 AM, Hector Martin wrote: > > > On 11/7/18 12:30 AM, Hayashida, Mami wrote: > > So, currently

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-06 Thread Hayashida, Mami
9 osd.60 up 1.0 1.0 On Mon, Nov 5, 2018 at 11:23 PM, Hector Martin wrote: > On 11/6/18 6:03 AM, Hayashida, Mami wrote: > > WOW. With you two guiding me through every step, the 10 OSDs in > > question are now added back to the cluster as Bluestore disks!!

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
e I encounter any errors. I am planning on trying to start the OSDs (once they are converted to Bluestore) without the udev rule first. On Mon, Nov 5, 2018 at 4:42 PM, Alfredo Deza wrote: > On Mon, Nov 5, 2018 at 4:21 PM Hayashida, Mami > wrote: > > > > Yes, I still have t

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
Yes, I still have the volume log showing the activation process for ssd0/db60 (and 61-69 as well). I will email it to you directly as an attachment. On Mon, Nov 5, 2018 at 4:14 PM, Alfredo Deza wrote: > On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami > wrote: > > > >

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
11/6/18 3:31 AM, Hayashida, Mami wrote: > > 2018-11-05 12:47:01.075573 7f1f2775ae00 -1 > > bluestore(/var/lib/ceph/osd/ceph-60) > _open_db add block device(/var/lib/ceph/osd/ceph-60/block.db) returned: > (13) Permission denied > > Looks like the permissions on the block.db device ar
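
(A hedged sketch of the fix being discussed, assuming the DB LV is ssd0/db60 as in the quoted log: the block.db device has to be owned by the ceph user before the OSD can open it.)

    ls -l /var/lib/ceph/osd/ceph-60/block.db          # see where the symlink points
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-60/block.db
    chown ceph:ceph /dev/ssd0/db60                    # the LV the symlink resolves to
    systemctl start ceph-osd@60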

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
I already ran the "ceph-volume lvm activate --all" command right after I prepared (using "lvm prepare") those OSDs. Do I need to run the "activate" command again? On Mon, Nov 5, 2018 at 1:24 PM, Alfredo Deza wrote: > On Mon, Nov 5, 2018 at 12:54 PM Ha
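
(For reference, activation can also be run per OSD instead of with --all, which makes it easier to capture output for a single failing OSD; the id and fsid below are placeholders taken from ceph-volume lvm list.)

    ceph-volume lvm activate --all            # activate every prepared OSD on this host
    ceph-volume lvm activate 60 <osd-fsid>    # or just one OSD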

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
: unable to mount object store 2018-11-05 12:47:01.346378 7f1f2775ae00 -1 ** ERROR: osd init failed: (13) Permission denied On Mon, Nov 5, 2018 at 12:34 PM, Hector Martin wrote: > On 11/6/18 2:01 AM, Hayashida, Mami wrote: > > I did find in /etc/fstab entries like this for those

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
I did find in /etc/fstab entries like this for those 10 disks /dev/sdh1 /var/lib/ceph/osd/ceph-60 xfs noatime,nodiratime 0 0 Should I comment all 10 of them out (for osd.{60-69}) and try rebooting again? On Mon, Nov 5, 2018 at 11:54 AM, Hayashida, Mami wrote: > I was just going to wr
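
(Commenting the stale Filestore entries out and re-running mount is the usual approach here; a sketch, assuming the ten entries really are /dev/sd{h..q}1 mounted on /var/lib/ceph/osd/ceph-6[0-9] as shown above.)

    cp /etc/fstab /etc/fstab.bak
    sed -i -E 's|^(/dev/sd[h-q]1[[:space:]]+/var/lib/ceph/osd/ceph-6[0-9])|#\1|' /etc/fstab
    mount -a    # should no longer try to mount the old Filestore partitions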

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
> > On 11/6/18 1:37 AM, Hayashida, Mami wrote: > > Alright. Thanks -- I will try this now. > > > > On Mon, Nov 5, 2018 at 11:36 AM, Alfredo Deza wrote: > > > > On Mon, Nov 5, 2018 at 11:33 AM Hayashida, Mami >

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
Alright. Thanks -- I will try this now. On Mon, Nov 5, 2018 at 11:36 AM, Alfredo Deza wrote: > On Mon, Nov 5, 2018 at 11:33 AM Hayashida, Mami > wrote: > > > > But I still have 50 other Filestore OSDs on the same node, though. > Wouldn't doing it all at once (by not i

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
AM Hayashida, Mami > wrote: > > > > Thank you for all of your replies. Just to clarify... > > > > 1. Hector: I did unmount the file system if what you meant was > unmounting the /var/lib/ceph/osd/ceph-$osd-id for those disks (in my case > osd.60-69) before run

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
the "ln" command (basically getting rid of the symbolic link) for each of those OSDs I have converted? For example ln -sf /dev/null /etc/systemc/system/ceph-disk@60.service Then reboot? On Mon, Nov 5, 2018 at 11:17 AM, Alfredo Deza wrote: > On Mon, Nov 5, 2018 at 10:43 AM Ha

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns nothing.). They don't show up when I run "df", either. On Mon, Nov 5, 2018 at 10:15 AM, Hayashida, Mami wrote: > Well, over the weekend the

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Hayashida, Mami
y: systemd -- Support: http://lists.freedesktop.org/ -- -- Unit dev-sdh1.device has failed. I see this for every single one of the newly-converted Bluestore OSD disks (/dev/sd{h..q}1). -- On Mon, Nov 5, 2018 at 9:57 AM, Alfredo Deza wrote: > On Fri, Nov 2, 2018 at 5:04 PM Hayashida,
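
(The failed device units can be inspected directly, which usually shows why systemd gave up on them; a couple of generic checks:)

    systemctl list-units --failed        # all failed units, including the dev-sd*1.device ones
    systemctl status dev-sdh1.device
    journalctl -b | grep -iE 'sdh1|ceph-disk'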

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-02 Thread Hayashida, Mami
ph-70 . As a Ceph novice, I am totally clueless about the next step at this point. Any help would be appreciated. On Thu, Nov 1, 2018 at 3:16 PM, Hayashida, Mami wrote: > Thank you, both of you. I will try this out very soon. > > On Wed, Oct 31, 2018 at 8:48 AM, Alfredo Deza wrote: &g

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-01 Thread Hayashida, Mami
Thank you, both of you. I will try this out very soon. On Wed, Oct 31, 2018 at 8:48 AM, Alfredo Deza wrote: > On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami > wrote: > > > > Thank you for your replies. So, if I use the method Hector suggested (by > creating PVs, VGs..

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Hayashida, Mami
loy): osd journal size = 40960 On Wed, Oct 31, 2018 at 7:03 AM, Alfredo Deza wrote: > On Wed, Oct 31, 2018 at 5:22 AM Hector Martin > wrote: > > > > On 31/10/2018 05:55, Hayashida, Mami wrote: > > > I am relatively new to Ceph and need some advice on Bluestore > m

[ceph-users] Filestore to Bluestore migration question

2018-10-30 Thread Hayashida, Mami
I am relatively new to Ceph and need some advice on Bluestore migration. I tried migrating a few of our test cluster nodes from Filestore to Bluestore by following this (http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/) as the cluster is currently running 12.2.9. The
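
(For reference, the per-OSD sequence in the linked Luminous bluestore-migration document is roughly the following; the OSD id, data device and DB LV are placeholders matching the layout discussed later in this thread.)

    ID=60
    ceph osd out $ID
    while ! ceph osd safe-to-destroy $ID; do sleep 60; done   # wait for data to drain
    systemctl stop ceph-osd@$ID
    umount /var/lib/ceph/osd/ceph-$ID
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdh                              # old Filestore data device
    ceph-volume lvm create --bluestore --data /dev/sdh --block.db ssd0/db60 --osd-id $ID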

Re: [ceph-users] ceph dashboard ac-* commands not working (Mimic)

2018-10-15 Thread Hayashida, Mami
Ah, ok. Thanks! On Mon, Oct 15, 2018 at 8:52 AM, John Spray wrote: > On Mon, Oct 15, 2018 at 1:47 PM Hayashida, Mami > wrote: > > > > John, > > > > Thanks for your reply. I am glad you clarified the docs URL mystery for > me as that has confused me many t

Re: [ceph-users] ceph dashboard ac-* commands not working (Mimic)

2018-10-15 Thread Hayashida, Mami
for mimic in that URL. > > Hopefully we'll soon have some changes to make this more apparent when > looking at the docs. > > John > > On Fri, 12 Oct 2018, 17:43 Hayashida, Mami, > wrote: > >> I set up a new Mimic cluster recently and have just enabled the >> Das

[ceph-users] ceph dashboard ac-* commands not working (Mimic)

2018-10-12 Thread Hayashida, Mami
I set up a new Mimic cluster recently and have just enabled the Dashboard. I first tried to add a (Dashboard) user with the "ac-user-create" command following this version of documentation (http://docs.ceph.com/docs/master/mgr/dashboard/), but the command did not work. Following the
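
(The reply above points at the version mismatch in the docs URL; for what it's worth, the Mimic dashboard configures its login with a single command, while the ac-* access-control commands belong to Nautilus and later.)

    # Mimic (13.x):
    ceph dashboard set-login-credentials <username> <password>
    # Nautilus (14.x) and later, as documented under docs/master:
    ceph dashboard ac-user-create <username> <password> administrator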