Re: [ceph-users] Future of Filestore?

2019-07-25 Thread Stuart Longland
On 25/7/19 9:32 pm, Виталий Филиппов wrote:
> Hi again,
> 
> I reread your initial email - do you also run a nanoceph on some SBCs
> each having one 2.5" 5400rpm HDD plugged into it? What SBCs do you use? :-)

I presently have a 5-node Ceph cluster:

- 3× Supermicro A1SAi-2750F with one 120GB 2.5" SSD for boot (and
originally, journal), and a 2.5" 2TB HDD (WD20SPZX-00U).  One has 32GB
RAM (it was a former compute node), the others have 16GB.
- 2× Intel NUC Core i5 with one 120GB M.2 SSD for boot and a 2.5" 2TB HDD
(WD20SPZX-00U).  Both with 8GB RAM.

For compute (KVM) I have one Supermicro A1SAi-2750F and one
A2SDi-16C-HLN4F, both with 32GB RAM.

https://hackaday.io/project/10529-solar-powered-cloud-computing
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-25 Thread Виталий Филиппов
Hi again,

I reread your initial email - do you also run a nanoceph on some SBCs each 
having one 2.5" 5400rpm HDD plugged into it? What SBCs do you use? :-)
-- 
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Future of Filestore?

2019-07-24 Thread Виталий Филиппов
Cache=writeback is perfectly safe, it's flushed when the guest calls fsync, so 
journaled filesystems and databases don't lose data that's committed to the 
journal.
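
A quick way to see the fsync path in action from inside the guest is a
sync-write fio run (a sketch only; it assumes fio is installed, and the
file name is just an example - delete it afterwards):

fio -name=synctest -ioengine=libaio -direct=1 -bs=4k -rw=write -fsync=1 \
  -size=64M -filename=/root/fio-synctest.tmp

With cache=writeback, each write followed by an fsync is flushed to the
cluster before fio continues, so the IOPS reported here is roughly your
durable-commit rate.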

On 25 July 2019 02:28:26 GMT+03:00, Stuart Longland wrote:
>On 25/7/19 9:01 am, vita...@yourcmc.ru wrote:
>>> 60 millibits per second?  60 bits every 1000 seconds?  Are you serious?
>>>  Or did we get the capitalisation wrong?
>>>
>>> Assuming 60MB/sec (as 60 Mb/sec would still be slower than the 5MB/sec I
>>> was getting), maybe there's some characteristic that Bluestore is
>>> particularly dependent on regarding the HDDs.
>>>
>>> I'll admit right up front the drives I'm using were chosen because they
>>> were all I could get with a 2TB storage capacity for a reasonable price.
>>>
>>> I'm not against moving to Bluestore, however, I think I need to research
>>> it better to understand why the performance I was getting before was so
>>> poor.
>> 
>> It's a nano-ceph! So millibits :) I mean 60 megabytes per second, of 
>> course. My drives are also crap. I just want to say that you probably 
>> miss some option for your VM, for example "cache=writeback".
>
>cache=writeback should have no effect on read performance but could be 
>quite dangerous if the VM host were to go down immediately after a write 
>for any reason.
>
>While 60MB/sec is getting respectable, doing so at the cost of data 
>safety is not something I'm keen on.
>-- 
>Stuart Longland (aka Redhatter, VK4MSL)
>
>I haven't lost my mind...
>   ...it's backed up on a tape somewhere.

-- 
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Future of Filestore?

2019-07-24 Thread Stuart Longland

On 25/7/19 9:01 am, vita...@yourcmc.ru wrote:

60 millibits per second?  60 bits every 1000 seconds?  Are you serious?
 Or did we get the capitalisation wrong?

Assuming 60MB/sec (as 60 Mb/sec would still be slower than the 5MB/sec I
was getting), maybe there's some characteristic that Bluestore is
particularly dependent on regarding the HDDs.

I'll admit right up front the drives I'm using were chosen because they
were all I could get with a 2TB storage capacity for a reasonable price.

I'm not against moving to Bluestore, however, I think I need to research
it better to understand why the performance I was getting before was so
poor.


It's a nano-ceph! So millibits :) I mean 60 megabytes per second, of 
course. My drives are also crap. I just want to say that you probably 
miss some option for your VM, for example "cache=writeback".


cache=writeback should have no effect on read performance but could be 
quite dangerous if the VM host were to go down immediately after a write 
for any reason.


While 60MB/sec is getting respectable, doing so at the cost of data 
safety is not something I'm keen on.

--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-24 Thread vitalif

60 millibits per second?  60 bits every 1000 seconds?  Are you serious?
 Or did we get the capitalisation wrong?

Assuming 60MB/sec (as 60 Mb/sec would still be slower than the 5MB/sec I
was getting), maybe there's some characteristic that Bluestore is
particularly dependent on regarding the HDDs.

I'll admit right up front the drives I'm using were chosen because they
were all I could get with a 2TB storage capacity for a reasonable price.

I'm not against moving to Bluestore, however, I think I need to research
it better to understand why the performance I was getting before was so
poor.


It's a nano-ceph! So millibits :) I mean 60 megabytes per second, of 
course. My drives are also crap. I just want to say that you probably 
miss some option for your VM, for example "cache=writeback".


The exact commandline I used to start my test VM was:

kvm -m 1024 -drive format=rbd,file=rbd:rpool/debian10-1,cache=writeback 
-vnc 0.0.0.0:0 -netdev user,id=mn -device virtio-net,netdev=mn
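
One variant worth trying, as a sketch (this assumes the Debian guest has
virtio drivers, which it should), is attaching the disk as virtio-blk
rather than the default emulated IDE:

kvm -m 1024 -drive format=rbd,file=rbd:rpool/debian10-1,cache=writeback,if=virtio \
  -vnc 0.0.0.0:0 -netdev user,id=mn -device virtio-net,netdev=mn

The emulated IDE device handles one request at a time, which by itself
can cap throughput.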



Re: [ceph-users] Future of Filestore?

2019-07-24 Thread Stuart Longland
On 25/7/19 8:48 am, Vitaliy Filippov wrote:
> I get 60 mb/s inside a VM in my home nano-ceph consisting of 5 HDDs, 4 of
> which are inside one PC and the 5th is plugged into a ROCK64 :)) I use
> Bluestore...

60 millibits per second?  60 bits every 1000 seconds?  Are you serious?
 Or did we get the capitalisation wrong?

Assuming 60MB/sec (as 60 Mb/sec would still be slower than the 5MB/sec I
was getting), maybe there's some characteristic that Bluestore is
particularly dependent on regarding the HDDs.

I'll admit right up front the drives I'm using were chosen because they
were all I could get with a 2TB storage capacity for a reasonable price.

I'm not against moving to Bluestore, however, I think I need to research
it better to understand why the performance I was getting before was so
poor.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-24 Thread Vitaliy Filippov

/dev/vdb:
 Timing cached reads:   2556 MB in  1.99 seconds = 1281.50 MB/sec
 Timing buffered disk reads:  62 MB in  3.03 seconds =  20.48 MB/sec


That is without any special tuning, just migrating back to FileStore…
journal is on the HDD (it wouldn't let me put it on the SSD like it did
last time).

As I say, not going to set the world on fire, but 20MB/sec is quite
usable for my needs.  The 4× speed increase is very welcome!


I get 60 mb/s inside a VM in my home nano-ceph consisting of 5 HDDs, 4 of  
which are inside one PC and the 5th is plugged into a ROCK64 :)) I use  
Bluestore...


--
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Future of Filestore?

2019-07-24 Thread Stuart Longland
On 23/7/19 9:59 pm, Stuart Longland wrote:
> I'll do some proper measurements once the migration is complete.

A starting point (I accept more rigorous disk storage tests exist):
> virtatomos ~ # hdparm -tT /dev/vdb
> 
> /dev/vdb:
>  Timing cached reads:   2556 MB in  1.99 seconds = 1281.50 MB/sec
>  Timing buffered disk reads:  62 MB in  3.03 seconds =  20.48 MB/sec

That is without any special tuning, just migrating back to FileStore…
journal is on the HDD (it wouldn't let me put it on the SSD like it did
last time).

As I say, not going to set the world on fire, but 20MB/sec is quite
usable for my needs.  The 4× speed increase is very welcome!
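
For something more rigorous than hdparm, a direct-I/O fio run inside the
VM is probably the next step; a sketch of what I have in mind, reading
the block device directly as root (safe, since it's read-only):

fio -name=seqread -ioengine=libaio -direct=1 -bs=4M -iodepth=4 -rw=read \
  -runtime=30 -filename=/dev/vdb
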
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-23 Thread Stuart Longland
On 19/7/19 8:21 pm, Stuart Longland wrote:
> I'm now getting about 5MB/sec I/O speeds in my VMs.
> 
> I'm contemplating whether I migrate back to using Filestore (on XFS this
> time, since BTRFS appears to be a rude word despite Ceph v10 docs
> suggesting it as a good option), but I'm not sure what the road map is
> for supporting Filestore long-term.

I'll do some measurements, but migrating 3 of the 5 nodes back to FileStore:

- I'm seeing fewer "time-out" messages sending and receiving emails (as
the mail server here is now more responsive)
- `ceph health -w` reports fewer "slow" requests
- where a slow request is reported, so far I've only seen it call out
BlueStore-based OSDs as the culprits.
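
For anyone following along, a couple of commands make the culprits easy
to identify (standard in Luminous, as far as I know):

ceph health detail
ceph osd metadata <osd-id> | grep osd_objectstore

The first lists the OSDs behind any slow requests; the second reports
whether a given OSD is running BlueStore or FileStore.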

I'll do some proper measurements once the migration is complete.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Vitaliy Filippov

Linear reads, `hdparm -t /dev/vda`.


Check if you have `cache=writeback` enabled in your VM options.

If it's enabled but you still get 5mb/s then try to benchmark your cluster  
with fio -ioengine=rbd from outside a VM.


Like

fio -ioengine=rbd -name=test -bs=4M -iodepth=16 -rw=read -pool=rpool  
-runtime=60 -rbdname=testimg
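
(This assumes your fio is built with the rbd engine and that the test
image exists; something like this creates a throwaway one:

rbd create --size 10G rpool/testimg

and `rbd rm rpool/testimg` removes it afterwards.)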


--
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Stuart Longland
On 22/7/19 7:39 pm, Vitaliy Filippov wrote:
> 5MB/s in what mode?

Linear reads, `hdparm -t /dev/vda`.

> For linear writes, that definitely means some kind of misconfiguration.
> For random writes... there's a handbrake in Bluestore which makes random
> writes run at half speed in HDD-only setups :)
> https://github.com/ceph/ceph/pull/26909

Sounds like BlueStore may be worth a look once I move to Ceph v14 some
time in the future, which is eventually on my TO-DO list.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Stuart Longland
On 22/7/19 7:13 pm, Marc Roos wrote:
> 
>  >> Reverting back to filestore is quite a lot of work and time again. 
>  >> Maybe see first if with some tuning of the vms you can get better 
>  >> results?
>  >
>  >None of the VMs are particularly disk-intensive.  There's two users 
>  >accessing the system over a WiFi network for email, and some HTTP/SMTP 
>  >traffic coming in via an ADSL2 Internet connection.
>  >
>  >If Bluestore can't manage this, then I'd consider it totally worthless 
>  >in any enterprise installation -- so clearly something is wrong.
> 
> I have a cluster mainly intended for backups to cephfs: 4 nodes, SATA 
> disks, mostly 5400rpm. Because the cluster was doing nothing, I 
> decided to put VMs on it. I am running 15 VMs without problems on 
> the hdd pool, and am going to move more to them. One of them is a macOS 
> machine; I once ran a fio test in it and it gave me 917 IOPS at 4k random 
> reads (technically not possible I would say; I have mostly default 
> configurations in libvirt).

Well, that is promising.

I did some measurements of the raw disk performance, I get about
30MB/sec according to `hdparm`, so whilst this isn't going to set the
world on fire, it's "decent" for my needs.

The only thing I can think of is the fact that `hdparm` does a
sequential read, whereas BlueStore operation would be more "random", so
seek times come into play.

I've now migrated two of my nodes to FileStore/XFS, with the journal
on-disk (it won't let me move it to the SSD like I did last time, oddly
enough), and I'm seeing fewer I/O issues now, although things are still
slow (3 nodes are still on BlueStore).

I think the fact that my nodes have plenty of RAM between them (>8GB,
one with 32GB) helps here.

The BlueStore settings are at their defaults, which means it should be
tuning the cache size used for BlueStore … maybe this isn't working as
it should on a cluster as small as this.
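
If I do end up pinning the cache manually, something like this in
ceph.conf is what I have in mind (a sketch only; bluestore_cache_size_hdd
is the Luminous-era knob, and 2 GiB is purely an illustrative value for a
node with RAM to spare):

[osd]
# cap the BlueStore cache per HDD-backed OSD at 2 GiB (illustrative)
bluestore_cache_size_hdd = 2147483648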

>  >
>  >> What you also can try is for io intensive vm's add an ssd pool?
>  >
>  >How well does that work in a cluster with 0 SSD-based OSDs?
>  >
>  >For 3 of the nodes, the cases I'm using for the servers can fit two 2.5"
>  >drives.  I have one 120GB SSD for the OS, that leaves one space spare 
>  >for the OSD.  
> 
> 
> I think this could be your bottleneck: I have 31 drives, so the load is 
> spread across 31 (hopefully). If you have only 3 drives, you have 
> 3×60 IOPS to share amongst your VMs. 
> I am getting the impression that ceph development is not really 
> interested in setups that differ much from the advised standards. I once 
> made an attempt to get things working better for 1Gb adapters[0].

Yeah, unfortunately I'll never be able to cram 31 drives into this
cluster.  I am considering how I might add more, and right now the
immediate thought is to use m.2 SATA SSDs in USB 3 cases.

This gives me something a little bigger than a thumb-drive that is
bus-powered and external to the case, so I don't have the thermal and
space issues of mounting a HDD in there: they're small and light-weight
so they can just dangle from the supplied USB3 cable.

I'll have to do some research though on how mixing SSDs and HDDs would
work.  I need more space than SSDs alone can provide in a cost-effective
manner so going SSD only just isn't an option here, but if I can put
them into the same pool with the HDDs and have them act as a "cache" for
the more commonly read/written objects, that could help.

In this topology though, I may only be using 256GB or 512GB SSDs, so much
less storage on SSDs than the HDDs, which likely won't work that well for
tiering (https://ceph.com/planet/ceph-hybrid-storage-tiers/).  So it'll
need some planning and homework. :-)
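
If I end up with separate pools rather than tiering, my understanding is
that Luminous device classes make that fairly painless; a sketch, with
hypothetical rule and pool names:

ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd pool create fast-pool 64 64 replicated fast-rule

New SSD OSDs are classed automatically at creation, and the rule confines
the pool's data to them.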

FileStore/XFS looks to be improving the situation just a little, so if I
have to hold back on that for a bit, that's fine.  It'll give me time to
work on the next step.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Vitaliy Filippov

5MB/s in what mode?

For linear writes, that definitely means some kind of misconfiguration.  
For random writes... there's a handbrake in Bluestore which makes random  
writes run at half speed in HDD-only setups :)  
https://github.com/ceph/ceph/pull/26909


And if you push that handbrake down you actually get better random writes  
on HDDs with bluestore, too.


--
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Future of Filestore?

2019-07-22 Thread Marc Roos


 >> Reverting back to filestore is quite a lot of work and time again. 
 >> Maybe see first if with some tuning of the vms you can get better 
 >> results?
 >
 >None of the VMs are particularly disk-intensive.  There's two users 
 >accessing the system over a WiFi network for email, and some HTTP/SMTP 
 >traffic coming in via an ADSL2 Internet connection.
 >
 >If Bluestore can't manage this, then I'd consider it totally worthless 
 >in any enterprise installation -- so clearly something is wrong.


I have a cluster mainly intended for backups to cephfs: 4 nodes, SATA 
disks, mostly 5400rpm. Because the cluster was doing nothing, I 
decided to put VMs on it. I am running 15 VMs without problems on 
the hdd pool, and am going to move more to them. One of them is a macOS 
machine; I once ran a fio test in it and it gave me 917 IOPS at 4k random 
reads (technically not possible I would say; I have mostly default 
configurations in libvirt).


 >
 >> What you also can try is for io intensive vm's add an ssd pool?
 >
 >How well does that work in a cluster with 0 SSD-based OSDs?
 >
 >For 3 of the nodes, the cases I'm using for the servers can fit two 2.5"
 >drives.  I have one 120GB SSD for the OS, that leaves one space spare 
 >for the OSD.  


I think this could be your bottleneck: I have 31 drives, so the load is 
spread across 31 (hopefully). If you have only 3 drives, you have 
3×60 IOPS to share amongst your VMs. 
I am getting the impression that ceph development is not really 
interested in setups that differ much from the advised standards. I once 
made an attempt to get things working better for 1Gb adapters[0].

 >
 >I since added two new nodes, which are Intel NUCs with m.2 SATA SSDs 
 >for the OS and like the other nodes have a single 2.5" drive bay.
 >
 >This is being done as a hobby and a learning exercise I might add -- 
 >so while I have spent a lot of money on this, the funds I have to throw 
 >at this are not infinite.


Same here ;) 


 >
 >> I moved
 >> some exchange servers on them. Tuned down the logging, because that is 
 >> writing constantly to disk.
 >> With such setup you are at least secured for the future.
 >
 >The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a 
 >few OpenBSD VMs for things like routers between virtual networks.
 >

[0] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html


Re: [ceph-users] Future of Filestore?

2019-07-20 Thread Stuart Longland
On 20/7/19 11:53 pm, Marc Roos wrote:
> Reverting back to filestore is quite a lot of work and time again. Maybe 
> see first if with some tuning of the vms you can get better results?

None of the VMs are particularly disk-intensive.  There's two users
accessing the system over a WiFi network for email, and some HTTP/SMTP
traffic coming in via an ADSL2 Internet connection.

If Bluestore can't manage this, then I'd consider it totally worthless
in any enterprise installation -- so clearly something is wrong.

> What you also can try is for io intensive vm's add an ssd pool?

How well does that work in a cluster with 0 SSD-based OSDs?

For 3 of the nodes, the cases I'm using for the servers can fit two 2.5"
drives.  I have one 120GB SSD for the OS, that leaves one space spare
for the OSD.  These machines originally had 1TB 5400RPM HDDs fitted
(slower ones than the current drives), and in the beginning I just had
these 3 nodes.  3TB raw space was getting tight.

I since added two new nodes, which are Intel NUCs with m.2 SATA SSDs for
the OS and like the other nodes have a single 2.5" drive bay.

This is being done as a hobby and a learning exercise I might add -- so
while I have spent a lot of money on this, the funds I have to throw at
this are not infinite.

> I moved 
> some exchange servers on them. Tuned down the logging, because that is 
> writing constantly to disk. 
> With such setup you are at least secured for the future.

The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a few
OpenBSD VMs for things like routers between virtual networks.
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.

Re: [ceph-users] Future of Filestore?

2019-07-20 Thread Marc Roos
 

Reverting back to filestore is quite a lot of work and time again. Maybe 
see first if with some tuning of the vms you can get better results? 
What you also can try is for io intensive vm's add an ssd pool? I moved 
some exchange servers on them. Tuned down the logging, because that is 
writing constantly to disk. 
With such setup you are at least secured for the future.

-Original Message-
From: Stuart Longland [mailto:stua...@longlandclan.id.au] 
Subject: Re: [ceph-users] Future of Filestore?


>  
> Maybe a bit off topic, just curious: what speeds did you get previously? 

> Depending on how you test your native drive of 5400rpm, the 
> performance could be similar. 4k random read of my 7200rpm/5400 rpm 
> results in ~60iops at 260kB/s.

Well, to be honest I never formally tested the performance prior to the 
move to Bluestore.  It was working "acceptably" for my needs, thus I 
never had a reason to test it.

It was never a speed demon, but it did well enough for my needs.  Had 
Filestore on BTRFS remained an option in Ceph v12, I'd have stayed that 
way.

> I also wonder why filestore could be that much faster, is this not 
> something else? Maybe some dangerous caching method was on?

My understanding is that Bluestore does not benefit from the Linux 
kernel filesystem cache.  On paper, Bluestore *should* be faster, but 
it's hard to know for sure.

Maybe I should try migrating back to Filestore and see if that improves 
things?
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.




Re: [ceph-users] Future of Filestore?

2019-07-19 Thread Stuart Longland
On 19/7/19 8:43 pm, Marc Roos wrote:
>  
> Maybe a bit off topic, just curious: what speeds did you get previously? 
> Depending on how you test your native drive of 5400rpm, the performance 
> could be similar. 4k random read of my 7200rpm/5400 rpm results in 
> ~60iops at 260kB/s.

Well, to be honest I never formally tested the performance prior to the
move to Bluestore.  It was working "acceptably" for my needs, thus I
never had a reason to test it.

It was never a speed demon, but it did well enough for my needs.  Had
Filestore on BTRFS remained an option in Ceph v12, I'd have stayed that way.

> I also wonder why filestore could be that much faster, is this not 
> something else? Maybe some dangerous caching method was on?

My understanding is that Bluestore does not benefit from the Linux
kernel filesystem cache.  On paper, Bluestore *should* be faster, but
it's hard to know for sure.

Maybe I should try migrating back to Filestore and see if that improves
things?
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


Re: [ceph-users] Future of Filestore?

2019-07-19 Thread Janne Johansson
Den fre 19 juli 2019 kl 12:43 skrev Marc Roos :

>
> Maybe a bit off topic, just curious: what speeds did you get previously?
> Depending on how you test your native drive of 5400rpm, the performance
> could be similar. 4k random read of my 7200rpm/5400 rpm results in
> ~60iops at 260kB/s.
> I also wonder why filestore could be that much faster, is this not
> something else? Maybe some dangerous caching method was on?
>

Then again, filestore will use the OS fs caches normally, which bluestore
will not. So unless you tune your bluestores carefully, it will be far
easier to get at least read caches working in your favor with filestore,
provided you have RAM to spare on your OSD hosts.

-- 
May the most significant bit of your life be positive.


Re: [ceph-users] Future of Filestore?

2019-07-19 Thread Marc Roos
 
Maybe a bit off topic, just curious: what speeds did you get previously? 
Depending on how you test your native drive of 5400rpm, the performance 
could be similar. 4k random read of my 7200rpm/5400 rpm results in 
~60iops at 260kB/s.
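
For reference, this is roughly the sort of run that produces that number
(a sketch; the device path is hypothetical, and randread is
non-destructive):

fio -name=randread -ioengine=libaio -direct=1 -bs=4k -iodepth=1 \
  -rw=randread -runtime=30 -filename=/dev/sdX

60 IOPS × 4kB is about 240kB/s, roughly what a single spindle can seek.
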
I also wonder why filestore could be that much faster, is this not 
something else? Maybe some dangerous caching method was on?



-Original Message-
From: Stuart Longland [mailto:stua...@longlandclan.id.au] 
Sent: Friday 19 July 2019 12:22
To: ceph-users
Subject: [ceph-users] Future of Filestore?

Hi all,

Earlier this year, I did a migration from Ceph 10 to 12.  Previously, I 
was happily running Ceph v10 on Filestore with BTRFS, and getting 
reasonable performance.

Moving to Ceph v12 necessitated a migration away from this set-up, and 
reading the documentation, Bluestore seemed to be "the way", so a hasty 
migration was performed and now my then 3-node cluster moved to 
Bluestore.  I've since added two new nodes to that cluster and replaced 
the disks in all systems, so I have 5 WD20SPZX-00Us storing my data.

I'm now getting about 5MB/sec I/O speeds in my VMs.

I'm contemplating whether I migrate back to using Filestore (on XFS this 
time, since BTRFS appears to be a rude word despite Ceph v10 docs 
suggesting it as a good option), but I'm not sure what the road map is 
for supporting Filestore long-term.

Is Filestore likely to have long term support for the next few years or 
should I persevere with tuning Bluestore to get something that won't be 
outperformed by an early 90s PIO mode 0 IDE HDD?
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.


[ceph-users] Future of Filestore?

2019-07-19 Thread Stuart Longland
Hi all,

Earlier this year, I did a migration from Ceph 10 to 12.  Previously, I
was happily running Ceph v10 on Filestore with BTRFS, and getting
reasonable performance.

Moving to Ceph v12 necessitated a migration away from this set-up, and
reading the documentation, Bluestore seemed to be "the way", so a hasty
migration was performed and now my then 3-node cluster moved to
Bluestore.  I've since added two new nodes to that cluster and replaced
the disks in all systems, so I have 5 WD20SPZX-00Us storing my data.

I'm now getting about 5MB/sec I/O speeds in my VMs.

I'm contemplating whether I migrate back to using Filestore (on XFS this
time, since BTRFS appears to be a rude word despite Ceph v10 docs
suggesting it as a good option), but I'm not sure what the road map is
for supporting Filestore long-term.

Is Filestore likely to have long term support for the next few years or
should I persevere with tuning Bluestore to get something that won't be
outperformed by an early 90s PIO mode 0 IDE HDD?
-- 
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.