Re: [ceph-users] Future of Filestore?

2019-07-20 Thread Stuart Longland
On 20/7/19 11:53 pm, Marc Roos wrote:
> Reverting back to Filestore is quite a lot of work and time again. Maybe 
> first see whether you can get better results with some tuning of the VMs?

None of the VMs are particularly disk-intensive.  There are two users
accessing the system over a WiFi network for email, and some HTTP/SMTP
traffic coming in via an ADSL2 Internet connection.

If Bluestore can't manage this, then I'd consider it totally worthless
in any enterprise installation -- so clearly something is wrong.

> What you can also try is adding an SSD pool for I/O-intensive VMs.

How well does that work in a cluster with 0 SSD-based OSDs?

For 3 of the nodes, the cases I'm using for the servers can fit two 2.5"
drives.  I have one 120GB SSD for the OS, which leaves one bay spare
for the OSD.  These machines originally had 1TB 5400RPM HDDs fitted
(slower ones than the current drives), and in the beginning I just had
these 3 nodes.  3TB of raw space was getting tight.

I have since added two new nodes: Intel NUCs with M.2 SATA SSDs for the
OS which, like the other nodes, have a single 2.5" drive bay.

This is being done as a hobby and a learning exercise, I might add -- so
while I have spent a lot of money on this, the funds I have to throw at
it are not infinite.

> I moved some Exchange servers onto one and tuned down the logging, 
> because that writes to disk constantly. 
> With such a setup you are at least prepared for the future.

The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a few
OpenBSD VMs for things like routers between virtual networks.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Need to replace OSD. How do I find physical disk

2019-07-20 Thread Pelletier, Robert
Thank you gentlemen. I will give this a shot and reply with what worked.

On Jul 19, 2019, at 11:11 AM, Tarek Zegar <tze...@us.ibm.com> wrote:


On the host with the OSD, run:


 ceph-volume lvm list
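
If it helps, the same command can also be pointed at a single device or
asked for JSON output for scripting (a sketch -- flags as I understand
them for the Luminous ceph-volume; /dev/sdb is just an example device):

 ceph-volume lvm list /dev/sdb
 ceph-volume lvm list --format json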





"☣Adam" ---07/18/2019 03:25:05 PM---The block device can be found 
in /var/lib/ceph/osd/ceph-$ID/block # ls -l /var/lib/ceph/osd/ceph-9/b

From: "☣Adam" <a...@dc949.org>
To: ceph-users@lists.ceph.com
Date: 07/18/2019 03:25 PM
Subject: [EXTERNAL] Re: [ceph-users] Need to replace OSD. How do I find 
physical disk
Sent by: "ceph-users" <ceph-users-boun...@lists.ceph.com>





The block device can be found in /var/lib/ceph/osd/ceph-$ID/block
# ls -l /var/lib/ceph/osd/ceph-9/block

In my case it links to /dev/sdbvg/sdb, which makes it pretty obvious
which drive this is, but the Volume Group and Logical Volume could be
named anything.  To see what physical disk(s) make up this volume group,
use lsblk (as Reed suggested):
# lsblk

If that drive needs to be located in a computer with many drives,
smartctl can be used to pull the make, model, and serial number:
# smartctl -i /dev/sdb
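
To tie those steps together, here's roughly how I'd walk from the OSD ID
down to the physical disk on one host (a sketch; osd.9 and /dev/sdb are
just examples from my setup, the names will differ on yours):

# readlink -f /var/lib/ceph/osd/ceph-9/block   # resolve the symlink to the LV device node
# lvs -o +devices                              # show which physical volume backs each LV
# lsblk -o NAME,TYPE,SIZE,MODEL,SERIAL         # tree view from the disk down to the LV
# smartctl -i /dev/sdb                         # confirm make/model/serial before pulling it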


I was not aware of ceph-volume, or `ceph-disk list` (which is apparently
now deprecated in favor of ceph-volume), so thank you to all in this
thread for teaching me about alternative (arguably more proper) ways of
doing this. :-)

On 7/18/19 12:58 PM, Pelletier, Robert wrote:
> How do I find the physical disk in a Ceph Luminous cluster in order to
> replace it?  osd.9 is down in my cluster; it resides on the ceph-osd1 host.
>
>
>
> If I run lsblk -io KNAME,TYPE,SIZE,MODEL,SERIAL I can get the serial
> numbers of all the physical disks, for example:
>
> sdb   disk   1.8T  ST2000DM001-1CH1  Z1E5VLRG
>
>
>
> But how do I find out which OSD is mapped to sdb, and so on?
>
> When I run df -h I get this:
>
> [root@ceph-osd1 ~]# df -h
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/ceph--osd1-root   19G  1.9G   17G  10% /
> devtmpfs                      48G     0   48G   0% /dev
> tmpfs                         48G     0   48G   0% /dev/shm
> tmpfs                         48G  9.3M   48G   1% /run
> tmpfs                         48G     0   48G   0% /sys/fs/cgroup
> /dev/sda3                    947M  232M  716M  25% /boot
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-2
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-5
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-0
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-8
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-7
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-33
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-10
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-1
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-38
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-4
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-6
> tmpfs                        9.5G     0  9.5G   0% /run/user/0
>
>
>
>
>
> Robert Pelletier, IT and Security Specialist
>
> Eastern Maine Community College
> (207) 974-4782 | 354 Hogan Rd., Bangor, ME 04401
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-20 Thread Ajitha Robert
I have two queries.

1) I have an RBD mirroring setup with a primary and a secondary cluster as
peers, and I have enabled image mode. In this setup I create RBD images with
the journaling feature enabled.

But whenever I enable mirroring on an image, I get errors in osd.log:

Primary OSD log: failed to get omap key, error retrieving image id for
global id

Secondary OSD log: error retrieving image id for global id
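
For reference, these are roughly the steps I am using to enable mirroring
on an image (a sketch of my procedure; the pool and image names are just
examples, and rbd-mirror is running on the secondary site):

rbd feature enable mypool/myimage journaling
rbd mirror pool enable mypool image
rbd mirror image enable mypool/myimage
rbd mirror image status mypool/myimage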

2)
I have deployed Ceph using ceph-ansible. Is it possible to give a SAN
multipath device in the OSD devices list? I am using stable-3.0 and Luminous.

I have given:

devices:
  - /dev/mapper/

but I am getting a symlink issue. How do I rectify it?
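
In case it is relevant, this is how the device appears on the OSD host
(mpatha here is just an example alias; mine comes from my multipath
configuration):

multipath -ll                     # confirm the multipath map and its backing paths
ls -l /dev/mapper/mpatha          # the alias is a symlink to the dm-N node
readlink -f /dev/mapper/mpatha    # e.g. resolves to /dev/dm-0
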
Environment:

OS (e.g. from /etc/os-release): Debian Stretch
Ansible version (e.g. ansible-playbook --version): 2.4.0
ceph-ansible version (e.g. git head or tag or stable branch): stable-3.0
Ceph version (e.g. ceph -v): Luminous (12.2)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how to debug slow requests

2019-07-20 Thread Wei Zhao
Hi Ceph users,
I was doing a write benchmark and found that some I/O gets blocked for a
very long time. The following log shows one such op; it seems to be waiting
for a replica shard to finish. My Ceph version is 12.2.4, and the pool is
3+2 EC. Can anyone give me some advice about how I should debug this next?

{
"ops": [
{
"description": "osd_op(client.17985.0:670679 39.18
39:1a63fc5c:::benchmark_data_SH-IDC1-10-5-37-174_2917453_object670678:head
[set-alloc-hint object_size 1048576 write_size 1048576,write
0~1048576] snapc 0=[] ondisk+write+known_if_redirected e1135)",
"initiated_at": "2019-07-20 23:13:18.725466",
"age": 329.248875,
"duration": 329.248901,
"type_data": {
"flag_point": "waiting for sub ops",
"client_info": {
"client": "client.17985",
"client_addr": "10.5.137.174:0/1544466091",
"tid": 670679
},
"events": [
{
"time": "2019-07-20 23:13:18.725466",
"event": "initiated"
},
{
"time": "2019-07-20 23:13:18.726585",
"event": "queued_for_pg"
},
{
"time": "2019-07-20 23:13:18.726606",
"event": "reached_pg"
},
{
"time": "2019-07-20 23:13:18.726752",
"event": "started"
},
{
"time": "2019-07-20 23:13:18.726842",
"event": "waiting for subops from 4"
},
{
"time": "2019-07-20 23:13:18.743134",
"event": "op_commit"
},
{
"time": "2019-07-20 23:13:18.743137",
"event": "op_applied"
}
]
}
},
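
For what it's worth, this is roughly what I plan to check next (a sketch;
osd.4 here is just the peer named in the "waiting for subops from 4" event
above, and 39.18 is the PG from the op description):

ceph pg map 39.18                        # which OSDs are in the acting set for this PG
ceph daemon osd.4 dump_ops_in_flight     # run on osd.4's host: is the sub-op stuck there?
ceph daemon osd.4 dump_historic_ops      # recently completed ops with per-event timings
ceph osd perf                            # commit/apply latency per OSD
ceph health detail                       # which OSDs the slow requests are counted against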
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Future of Filestore?

2019-07-20 Thread Marc Roos
 

Reverting back to Filestore is quite a lot of work and time again. Maybe 
first see whether you can get better results with some tuning of the VMs? 
What you can also try is adding an SSD pool for I/O-intensive VMs. I moved 
some Exchange servers onto one and tuned down the logging, because that 
writes to disk constantly. 
With such a setup you are at least prepared for the future.
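
For what it is worth, with Luminous device classes an SSD pool is mostly a
matter of one CRUSH rule (a rough sketch -- the rule and pool names are just
examples, and it obviously needs OSDs that report class "ssd"):

ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create vm-ssd 64 64 replicated ssd-rule
ceph osd pool application enable vm-ssd rbd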






-Original Message-
From: Stuart Longland [mailto:stua...@longlandclan.id.au] 
Subject: Re: [ceph-users] Future of Filestore?


> Maybe a bit off topic, just curious: what speeds did you get 
> previously?  Depending on how you test, your native 5400 RPM drive's 
> performance could be similar.  4k random read on my 7200 RPM/5400 RPM 
> drives results in ~60 IOPS at 260 kB/s.

Well, to be honest I never formally tested the performance prior to the 
move to Bluestore.  It was working "acceptably" for my needs, so I never 
had a reason to benchmark it.

It was never a speed demon, but it did well enough.  Had Filestore on 
BTRFS remained an option in Ceph v12, I'd have stayed that way.

> I also wonder why Filestore could be that much faster -- is this not 
> something else? Maybe some unsafe caching was enabled?

My understanding is that Bluestore doesn't use the Linux kernel's page 
cache the way Filestore did; it manages its own cache instead.  On paper, 
Bluestore *should* be faster, but it's hard to know for sure.
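
One thing I might try first is giving Bluestore's own cache more room,
since it can't lean on the page cache (a sketch -- option name and commands
as I understand them for Luminous, and 2 GB is just an example value my
hosts could spare):

ceph daemon osd.0 config get bluestore_cache_size_hdd
ceph tell osd.* injectargs '--bluestore_cache_size_hdd=2147483648'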

Maybe I should try migrating back to Filestore and see if that improves 
things?
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com