Re: [ceph-users] Help: pool not responding

2016-02-14 Thread koukou73gr
Have you tried restarting osd.0?
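
Something along these lines, assuming a Hammer-era sysvinit setup (the
exact invocation may differ under Proxmox, so treat this as a sketch):

  # restart the OSD that holds the blocked requests
  /etc/init.d/ceph restart osd.0

  # then watch the stuck PGs re-peer
  ceph health detail
  ceph pg 0.0 query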

-K.

On 02/14/2016 09:56 PM, Mario Giammarco wrote:
> Hello,
> I am using ceph hammer under proxmox. 
> I have a working cluster that I have been using for several months.
> For reasons yet to discover I am now in this situation:
> 
> HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean; 7 
> requests are blocked > 32 sec; 1 osds have slow requests
> pg 0.0 is stuck inactive for 3541712.92, current state incomplete, last 
> acting [0,1,3]
> pg 0.40 is stuck inactive for 1478467.695684, current state incomplete, 
> last acting [1,0,3]
> pg 0.3f is stuck inactive for 3541852.000546, current state incomplete, 
> last acting [0,3,1]
> pg 0.3b is stuck inactive for 3541865.897979, current state incomplete, 
> last acting [0,3,1]
> pg 0.0 is stuck unclean for 326.301120, current state incomplete, last 
> acting [0,1,3]
> pg 0.40 is stuck unclean for 326.301128, current state incomplete, last 
> acting [1,0,3]
> pg 0.3f is stuck unclean for 345.066879, current state incomplete, last 
> acting [0,3,1]
> pg 0.3b is stuck unclean for 379.201819, current state incomplete, last 
> acting [0,3,1]
> pg 0.40 is incomplete, acting [1,0,3]
> pg 0.3f is incomplete, acting [0,3,1]
> pg 0.3b is incomplete, acting [0,3,1]
> pg 0.0 is incomplete, acting [0,1,3]
> 7 ops are blocked > 2097.15 sec
> 7 ops are blocked > 2097.15 sec on osd.0
> 1 osds have slow requests
> 
> 
> The problem is that when I try to read or write to the pool "rbd" (where I
> have all my virtual machines) ceph starts to log slow requests and the
> system hangs.
> If I create another pool in the same cluster and create an image inside it,
> I can read and write correctly (and fast), so it seems the cluster is
> working and only that one pool is not.
> 
> Can you help me?
> Thanks,
> Mario


Re: [ceph-users] Help: pool not responding

2016-02-14 Thread Ferhat Ozkasgarli
Hello Mario,

This kind of problem usually happens for one of the following reasons:

1-) One of the OSD nodes has a network problem.
2-) Disk failure.
3-) Not enough resources on the OSD nodes.
4-) Slow OSD disks.

This has happened to me before. In my case it was a faulty network cable;
as soon as I replaced the cable, everything was fine and dandy.
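
If it helps, here are a few quick checks for each of those (just
suggestions; adjust device and interface names to your setup):

  ceph osd perf         # per-OSD commit/apply latency; a slow disk stands out
  ethtool eth0          # link speed/duplex; a bad cable often negotiates down
  smartctl -a /dev/sdX  # disk health, reallocated/pending sectors
  iostat -x 2           # per-disk %util and await while the pool is loaded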

On Sun, Feb 14, 2016 at 9:56 PM, Mario Giammarco wrote:

> Hello,
> I am using ceph hammer under proxmox.
> I have a working cluster that I have been using for several months.
> For reasons yet to discover I am now in this situation:
>
> HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean; 7
> requests are blocked > 32 sec; 1 osds have slow requests
> pg 0.0 is stuck inactive for 3541712.92, current state incomplete, last
> acting [0,1,3]
> pg 0.40 is stuck inactive for 1478467.695684, current state incomplete,
> last acting [1,0,3]
> pg 0.3f is stuck inactive for 3541852.000546, current state incomplete,
> last acting [0,3,1]
> pg 0.3b is stuck inactive for 3541865.897979, current state incomplete,
> last acting [0,3,1]
> pg 0.0 is stuck unclean for 326.301120, current state incomplete, last
> acting [0,1,3]
> pg 0.40 is stuck unclean for 326.301128, current state incomplete, last
> acting [1,0,3]
> pg 0.3f is stuck unclean for 345.066879, current state incomplete, last
> acting [0,3,1]
> pg 0.3b is stuck unclean for 379.201819, current state incomplete, last
> acting [0,3,1]
> pg 0.40 is incomplete, acting [1,0,3]
> pg 0.3f is incomplete, acting [0,3,1]
> pg 0.3b is incomplete, acting [0,3,1]
> pg 0.0 is incomplete, acting [0,1,3]
> 7 ops are blocked > 2097.15 sec
> 7 ops are blocked > 2097.15 sec on osd.0
> 1 osds have slow requests
>
>
> The problem is that when I try to read or write to the pool "rbd" (where I
> have all my virtual machines) ceph starts to log slow requests and the
> system hangs.
> If I create another pool in the same cluster and create an image inside it,
> I can read and write correctly (and fast), so it seems the cluster is
> working and only that one pool is not.
>
> Can you help me?
> Thanks,
> Mario


Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Christian Balzer

Hello,

As Somnath writes below, RAM will only indirectly benefit writes.
But with the right tuning to keep dentries and other FS-related caches in
the SLAB, it can help a lot.
The same goes for the really hot objects that get read frequently and still
fit in the page cache of your storage nodes: every read served from cache
avoids a disk access, leaving those IOPS for your writes.
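
For reference, a sketch of what that tuning can look like; the values are
examples only and need testing against your own workload:

  # keep dentry/inode SLAB caches around longer
  sysctl vm.vfs_cache_pressure=10
  # reserve headroom so memory reclaim doesn't stall the OSDs
  sysctl vm.min_free_kbytes=524288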

However, you have to realize that these are "fake" IOPS: once your cluster
gets busy, changes workloads, or runs out of memory to hold all those
entries and objects, you're back to whatever performance the backing
storage of your OSDs can provide.

If your cluster is write-heavy and light on reads, that's a perfect
example of both the benefits and the caveats.
Basically, once you find that deep-scrubs severely impact your cluster
performance (they have to read EACH object on disk, not just the hot ones,
making your disks seek/thrash), it is time to increase I/O capacity,
usually by adding more OSDs.
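
If you just need to take the edge off in the meantime, the usual scrub
knobs apply (a stopgap, not a fix for the capacity problem):

  ceph osd set nodeep-scrub     # temporarily suspend deep-scrubs
  ceph osd unset nodeep-scrub   # re-enable once load allows

  # or throttle them in ceph.conf, e.g.:
  #   osd scrub sleep = 0.1
  #   osd deep scrub interval = 1209600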

Regards,

Christian

On Sun, 14 Feb 2016 17:24:37 + Somnath Roy wrote:

> I doubt it will do much good in case of a 100% write workload. You can
> tweak your VM dirty ratio settings to help buffered writes, but the
> downside is that the more data it has to sync (when it eventually flushes
> the dirty buffers), the more spikiness it will induce. The write behavior
> won't be smooth and the gain won't be much (or none at all). But Ceph does
> xattr reads in the write path, so if you have a very large workload this
> extra RAM will help you hold dentry caches in memory (or tune the
> swappiness setting so dentry caches are not swapped out) and effectively
> save some disk hits. Also, in a mixed read/write scenario this should
> help, as some reads could benefit from it. It all depends on how random
> and how big your workload is.
> 
> 
> Thanks & Regards
> Somnath
> 
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Vickey Singh
> Sent: Sunday, February 14, 2016 1:55 AM
> To: ceph-users@lists.ceph.com; ceph-users
> Subject: [ceph-users] Extra RAM to improve OSD write performance ?
> 
> Hello Community
> 
> Happy Valentines Day ;-)
> 
> I need some advice on using extra RAM on my OSD servers to improve
> Ceph's write performance.
> 
> I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so
> assuming the cluster is not recovering, most of the time the system will
> have at least ~150GB RAM free. And for 20 machines that's a lot: ~3.0 TB
> of RAM.
> 
> Is there any way to use this free RAM to improve the write performance
> of the cluster? Something like the Linux page cache, but for OSD write
> operations.
> 
> I assume that by default the Linux page cache can use free memory to
> improve OSD read performance (please correct me if I am wrong). But what
> about OSD writes? How can those be improved with free RAM?
> 
> PS: My Ceph cluster's workload is just OpenStack Cinder, Glance and Nova
> instance disks.
> 
> - Vickey -
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/


[ceph-users] Help: pool not responding

2016-02-14 Thread Mario Giammarco
Hello,
I am using ceph hammer under proxmox. 
I have a working cluster that I have been using for several months.
For reasons yet to discover I am now in this situation:

HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean; 7 
requests are blocked > 32 sec; 1 osds have slow requests
pg 0.0 is stuck inactive for 3541712.92, current state incomplete, last 
acting [0,1,3]
pg 0.40 is stuck inactive for 1478467.695684, current state incomplete, 
last acting [1,0,3]
pg 0.3f is stuck inactive for 3541852.000546, current state incomplete, 
last acting [0,3,1]
pg 0.3b is stuck inactive for 3541865.897979, current state incomplete, 
last acting [0,3,1]
pg 0.0 is stuck unclean for 326.301120, current state incomplete, last 
acting [0,1,3]
pg 0.40 is stuck unclean for 326.301128, current state incomplete, last 
acting [1,0,3]
pg 0.3f is stuck unclean for 345.066879, current state incomplete, last 
acting [0,3,1]
pg 0.3b is stuck unclean for 379.201819, current state incomplete, last 
acting [0,3,1]
pg 0.40 is incomplete, acting [1,0,3]
pg 0.3f is incomplete, acting [0,3,1]
pg 0.3b is incomplete, acting [0,3,1]
pg 0.0 is incomplete, acting [0,1,3]
7 ops are blocked > 2097.15 sec
7 ops are blocked > 2097.15 sec on osd.0
1 osds have slow requests


The problem is that when I try to read or write to the pool "rbd" (where I
have all my virtual machines) ceph starts to log slow requests and the
system hangs.
If I create another pool in the same cluster and create an image inside it,
I can read and write correctly (and fast), so it seems the cluster is
working and only that one pool is not.

Can you help me?
Thanks,
Mario





Re: [ceph-users] Reducing the impact of OSD restarts (noout ain't uptosnuff)

2016-02-14 Thread Tom Christensen
To be clear, when you restart these osds, how many pgs go into the peering
state? And do they stay there for the full 3 minutes? I've certainly seen
iops drop to zero or near zero when a large number of pgs are peering. It
would be wonderful if we could keep iops flowing even while pgs are
peering. In your case, with such a high pg/osd count, my guess is peering
always takes a long time. As the OSD goes down it has to peer those 564
pgs across the remaining 3 osds, then re-peer them once the OSD comes up
again... Also, because the OSD sits on a RAID6 volume, I'm pretty sure the
IO pattern is going to be bad: all 564 of those threads will issue reads
and writes (the peering process updates metadata in each pg directory on
the OSD) nearly simultaneously. In a RAID6, each non-cached read causes
read IO on at least 5 disks and each write causes write IO on all 7 disks.
With that many threads hitting the volume simultaneously, you get massive
disk head contention and seek times, which absolutely destroys your iops
and makes peering take that much longer. In effect, in the non-cached case
the RAID6 almost entirely negates the distribution of IO load across those
7 disks and makes them behave with performance closer to a single HDD. As
Lionel said earlier, the HW cache is going to be nearly useless in any
sort of recovery scenario in ceph (which this is).
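
(A quick way to answer the first question during a restart, assuming your
ceph CLI supports pgs_brief, is something like:

  watch -n1 'ceph pg dump pgs_brief 2>/dev/null | grep -c peering'

which counts the PGs currently in a peering state, refreshed every second.)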

I hope Robert or someone can come up with a way to continue IO to a pg in
the peering state; that would be wonderful, as I believe this is the
fundamental problem. I'm not "happy" with the amount of work we had to put
in to get our cluster to behave as well as it does now, and it would
certainly be great if things "Just Worked". I'm just trying to relate our
experience and point out what I see as the bottleneck in this particular
setup, based on that experience. I believe the ceph pg calculator and the
recommendations about pg counts are too high, and your setup is 2-3x above
even that. I've been able to easily topple clusters (mostly due to RAM
exhaustion/swapping/OOM killer) with the recommended pg/osd counts and the
recommended RAM (1GB/OSD + 1GB/TB of storage) by causing recovery, for 2
years now, and it's not been improved as far as I can tell. The only
solution I've seen work reliably is to drop the pg/osd ratio. Dropping
that ratio also greatly reduced the peering load and time, and made the
pain of osd restarts almost negligible.

To your question about our data distribution: it is excellent as far as
per-pg size is concerned, less than 3% variance between pgs. We did,
however, see a massive disparity in how many pgs each osd gets. Originally
we had osds with as few as 100 pgs and some with as many as 250, when on
average they should have had about 175 pgs each; that was with the
recommended pg/osd settings. That ratio/variance has stayed the same
regardless of the number of pgs per osd, meaning it started out bad and
stayed bad, but didn't get worse as we added osds. We've had to reweight
osds in our crushmap to get anything close to a sane distribution of pgs.
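
(For anyone wanting to check their own distribution: on hammer and later,
"ceph osd df" prints a per-OSD PGS column, and a crush reweight is just

  ceph osd crush reweight osd.<id> <weight>

with <id> and <weight> as placeholders; nudge overloaded OSDs down in
small steps, since every change triggers data movement.)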

-Tom


On Sat, Feb 13, 2016 at 10:57 PM, Christian Balzer  wrote:

> On Sat, 13 Feb 2016 20:51:19 -0700 Tom Christensen wrote:
>
> > > > Next this :
> > > > ---
> > > > 2016-02-12 01:35:33.915981 7f75be4d57c0  0 osd.2 1788 load_pgs
> > > > 2016-02-12 01:36:32.989709 7f75be4d57c0  0 osd.2 1788 load_pgs opened 564 pgs
> > > > ---
> > > > Another minute to load the PGs.
> > > Same OSD reboot as above : 8 seconds for this.
> >
> > Do you really have 564 pgs on a single OSD?
>
> Yes, the reason is simple, more than a year ago it should have been 8 OSDs
> (halving that number) and now it should be 18 OSDs, which would be a
> perfect fit for the 1024 PGs in the rbd pool.
>
> > I've never had anything like
> > decent performance on an OSD with greater than about 150pgs.  In our
> > production clusters we aim for 25-30 primary pgs per osd, 75-90pgs/osd
> > total (with size set to 3).  When we initially deployed our large cluster
> > with 150-200pgs/osd (total, 50-70 primary pgs/osd, again size 3) we had
> > no end of trouble getting pgs to peer.  The OSDs ate RAM like nobody's
> > business, took forever to do anything, and in general caused problems.
>
> The cluster performs admirably for the stress it is under; the number of
> PGs per OSD never really was an issue when it came to CPU/RAM/network.
> For example the restart increased the OSD process size from 1.3 to 2.8GB,
> but that left 24GB still "free".
> The main reason to have more OSDs (and thus a lower PG count per OSD) is
> to have more IOPS from the underlying storage.
>
> > If you're running 564 pgs/osd in this 4 OSD cluster, I'd look at that
> > first as the potential culprit.  That is a lot of threads inside the OSD
> > process that all need to get CPU/network/disk time in order to peer as
> > they come up.  Especially on firefly I would point to this.  

Re: [ceph-users] OpenStack Developer Summit - Austin

2016-02-14 Thread Danny Al-Gaaf
Hi all,

the presentation voting period for the Austin Summit ends on 17th
February, 11:59 PST (18th February 7:59 UTC / 08:59 CET). Here is a list
of some very interesting Ceph-related presentation proposals waiting
for your vote (the shortened URLs point to the OpenStack voting page)!

I'm sure every vote will be welcome to help get more Ceph talks into the
next summit!

- Disaster recovery and Ceph block storage: Introducing multi-site
  mirroring, Josh Durgin, https://goo.gl/FRbE9f
- CephFS as a service with OpenStack Manila, John Spray,
  https://goo.gl/5VHkFn
- Building a next-gen multiprotocol, tiered, and globally distributed
  storage platform with Ceph, Sage Weil, https://goo.gl/Q33K2e
- From Hardware to Application - NFV@OpenStack and Ceph, Danny Al-Gaaf,
  https://goo.gl/uZZH4K
- Micro Storage Servers at multi-PetaByte scale running Ceph, Joshua
  Johnson/Sage Weil, https://goo.gl/ZehWx4
- Persistent Containers for Transactional Workloads, Sébastien Han,
  https://goo.gl/iGkybe
- Cache Rules Everything Around Me, Kyle Bader/Stephen Blinick,
  https://goo.gl/AIQG12
- CephFS in Jewel: Stable at last, Gregory Farnum, https://goo.gl/9Z42t1
- Userspace only for Ceph: Boost performance from network stack to disk,
  Haomai Wang, https://goo.gl/Nh0eQt
- One for All: Deploying OpenStack, Ceph or Cloud Foundry with a
  Unified Deployment Tool, Nanuk Krinner/Rick Salevsky,
  https://goo.gl/PwOlNb
- Ceph at Scale - Bloomberg Cloud Storage Platform, Chris Jones,
  https://goo.gl/PXUauI
- How-to build out a Ceph Cluster with Chef, Chris Jones,
  https://goo.gl/hKUo8p
- New Ceph Configurations - High Performance Without High Costs,
  Allen Samuels, https://goo.gl/StwYg3
- Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack
  and Ceph, Sean Cohen/Federico Lucifredi/Sébastien Han,
  https://goo.gl/hQ6mvh
- Managing resources in hyper-converged infrastructures, Neil Levine,
  https://goo.gl/pfpv2l
- Ceph performance tuning with feedback from a custom dashboard,
  Chris Holcombe, https://goo.gl/ql4fKb
- Designing for High Performance Ceph at Scale, James Saint-Rossy/John
  Benton, https://goo.gl/5grj80
- Multi-backend Cinder, All Flash Ceph, and More! Year two of block
  storage at TWC, Craig DeLatte/Bryan Stillwell/David Medberry,
  https://goo.gl/Ms1PGf
- Challenges, Opportunities and lessons learned from real life clusters
  in China for open source storage in OpenStack clouds, Jian Zhang/
  Jiangang Duan, https://goo.gl/LrWh7i
- Stop taking database backups and just use a block drive as a dB
  partition in your Openstack Cloud, Swami Reddy M R.
  https://goo.gl/MnMfLa
- Building cost-efficient, millions IOPS all-flash block storage
  backend for your OpenStack cloud with Ceph, Jian Zhang/
  Jack Zhang/xinxin shu, https://goo.gl/P0xEPd
- Deploying OpenStack, Astara and Ceph: from concept to public cloud
  (and hell in the middle), Jonathan LaCour, https://goo.gl/oGYllC
- Performance and stability comparison of Openstack running on CEPH
  cluster with journals on NVMe and HDD, Narendra Trivedi,
  https://goo.gl/qrXydf
- How to Fuel OpenStack Storage, Christian Huebner,
  https://goo.gl/TZI5QT
- Ceph in the Real World: Examples and Advice, Christian Huebner,
  https://goo.gl/B53btG
- Study the performance characteristics of an Openstack installation
  running on a CEPH cluster with highly dense OSD nodes, Narendra
  Trivedi, https://goo.gl/WNe0qo
- All-Flash Ceph, Gunna Marripudi/Brent Compton, https://goo.gl/RfZhqw
- Building OpenStack with All-flash based Ceph Storage, Jaesuk Ahn/
  Jungyeon Yoon, https://goo.gl/d6DzPC
- Towards a Hyper-Converged Cache Solution for OpenStack, Yuan Zhou,
  https://goo.gl/u8kXRs
- Journey to Stability and Performance for Storage Clusters in OpenStack
  Yuming Ma/Robert Kissell, https://goo.gl/IoA9Vt
- OpenStack Security Use Cases, Adam Heczko/Florin Stingaciu,
  https://goo.gl/usluKP
- Monitoring OpenStack Ceph and Astara, David Wahlstrom/Jonathan LaCour,
  https://goo.gl/tQUMXC
- Why are so many people using Ceph with OpenStack?, Jacob Shucart,
  https://goo.gl/DhDOQ3
- CEPH Capacity Expansion Explained, Al Lau/Yuming Ma,
  https://goo.gl/UFLikd
- How to seemlessly migrate CEPH with PB’s of data from one OS to other
  with no impact, Shyam Bollu/Michael DeSimone/Sébastien Han,
  https://goo.gl/wxKars
- CEPH Proactive Monitoring and Alerting, Rushil Chugh,
  https://goo.gl/Gl8MvO
- Ceph on All-Flash Storage - Breaking the performance barriers,
  Venkat Kolli, https://goo.gl/Fpukwm
- Declarative model based Ceph cluster deployment and its integration
  with OpenStack components like Glance/Nova/Cinder, Jyoti Ranjan/
  Unmesh Gurjar, https://goo.gl/J0ELxu

There are many proposals; sorry if I missed one.

Danny


On 22.01.2016 at 18:31, Patrick McGarry wrote:
> Hey cephers,
> 
> Just a reminder that if you are planning to submit a Ceph talk to the
> OpenStack Developer Summit in Austin on 25-29 Apr, that submissions
> close in just over 1 week (01 Feb @ 23:59 PST).  If anyone 

Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Somnath Roy
I doubt it will do much good in case of a 100% write workload. You can tweak
your VM dirty ratio settings to help buffered writes, but the downside is
that the more data it has to sync (when it eventually flushes the dirty
buffers), the more spikiness it will induce. The write behavior won't be
smooth and the gain won't be much (or none at all).
But Ceph does xattr reads in the write path, so if you have a very large
workload this extra RAM will help you hold dentry caches in memory (or tune
the swappiness setting so dentry caches are not swapped out) and effectively
save some disk hits. Also, in a mixed read/write scenario this should help,
as some reads could benefit from it. It all depends on how random and how
big your workload is.
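
For completeness, the knobs I mean are the standard VM sysctls; the values
below are illustrative only, tune and measure for your own workload:

  sysctl vm.dirty_background_ratio=5  # start background flushing earlier
  sysctl vm.dirty_ratio=10            # cap dirty memory before writers block
  sysctl vm.swappiness=1              # strongly prefer dropping cache over swapping
  sysctl vm.vfs_cache_pressure=10     # hang on to dentry/inode caches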


Thanks & Regards
Somnath

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickey 
Singh
Sent: Sunday, February 14, 2016 1:55 AM
To: ceph-users@lists.ceph.com; ceph-users
Subject: [ceph-users] Extra RAM to improve OSD write performance ?

Hello Community

Happy Valentines Day ;-)

I need some advice on using extra RAM on my OSD servers to improve Ceph's
write performance.

I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so assuming
the cluster is not recovering, most of the time the system will have at
least ~150GB RAM free. And for 20 machines that's a lot: ~3.0 TB of RAM.

Is there any way to use this free RAM to improve the write performance of
the cluster? Something like the Linux page cache, but for OSD write
operations.

I assume that by default the Linux page cache can use free memory to improve
OSD read performance (please correct me if I am wrong). But what about OSD
writes? How can those be improved with free RAM?

PS: My Ceph cluster's workload is just OpenStack Cinder, Glance and Nova
instance disks.

- Vickey -





[ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread Vickey Singh
Hello Community

Happy Valentines Day ;-)

I need some advice on using extra RAM on my OSD servers to improve Ceph's
write performance.

I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so assuming
the cluster is not recovering, most of the time the system will have at
least ~150GB RAM free. And for 20 machines that's a lot: ~3.0 TB of RAM.

Is there any way to use this free RAM to improve the write performance of
the cluster? Something like the Linux page cache, but for OSD write
operations.

I assume that by default the Linux page cache can use free memory to improve
OSD read performance (please correct me if I am wrong). But what about OSD
writes? How can those be improved with free RAM?

PS: My Ceph cluster's workload is just OpenStack Cinder, Glance and Nova
instance disks.

- Vickey -


Re: [ceph-users] Extra RAM to improve OSD write performance ?

2016-02-14 Thread ceph
It won't be used for writes: writes are synced (meaning: written to disk, now).
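
You can see the difference the sync makes on any disk with plain dd
(illustrative only; writes 4 MB to a scratch file):

  dd if=/dev/zero of=/tmp/ddtest bs=4k count=1000 oflag=dsync  # synced, slow
  dd if=/dev/zero of=/tmp/ddtest bs=4k count=1000              # buffered, fast

OSD journal writes are of the first kind, which is why extra RAM does not
speed them up.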

On 14/02/2016 10:55, Vickey Singh wrote:
> Hello Community
> 
> Happy Valentines Day ;-)
> 
> I need some advice on using extra RAM on my OSD servers to improve Ceph's
> write performance.
> 
> I have 20 OSD servers, each with 256GB RAM and 16 x 6TB OSDs, so assuming
> the cluster is not recovering, most of the time the system will have at
> least ~150GB RAM free. And for 20 machines that's a lot: ~3.0 TB of RAM.
> 
> Is there any way to use this free RAM to improve the write performance of
> the cluster? Something like the Linux page cache, but for OSD write
> operations.
> 
> I assume that by default the Linux page cache can use free memory to improve
> OSD read performance (please correct me if I am wrong). But what about OSD
> writes? How can those be improved with free RAM?
> 
> PS: My Ceph cluster's workload is just OpenStack Cinder, Glance and Nova
> instance disks.
> 
> - Vickey -