Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Sean Redmond
path. > On Fri, Jun 14, 2019 at 8:27 AM Janne Johansson wrote: > On Fri, 14 June 2019 at 13:58, Sean Redmond <sean.redmo...@gmail.com> wrote: >> Hi Ceph-Users, >> I noticed that Soft Iron now have hardware acceleration

[ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Sean Redmond
Hi Ceph-Users, I noticed that Soft Iron now have hardware acceleration for Erasure Coding[1]. This is interesting, as the CPU overhead can be a problem in addition to the extra disk I/O required for EC pools. Does anyone know if any other work is ongoing to support generic FPGA hardware acceleration?
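For reference, absent FPGA offload the stock erasure-code plugins already give some CPU-side acceleration (the isa plugin uses Intel ISA-L SIMD routines). A minimal sketch of using it - the profile name, k/m values and failure domain below are assumptions, not anything from the thread:

  # sketch only: 4+2 profile with host failure domain
  ceph osd erasure-code-profile set ec-isa-42 plugin=isa k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec-isa-42
  ceph osd pool create ecpool 64 64 erasure ec-isa-42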

Re: [ceph-users] Fwd: down+peering PGs, can I move PGs from one OSD to another

2018-08-03 Thread Sean Redmond
Hi, You can export and import PGs using ceph-objectstore-tool, but if the OSD won't start you may have trouble exporting a PG. It may be useful to share the errors you get when trying to start the OSD. Thanks On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis wrote: > Hi all. > We have an i
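A minimal sketch of the export/import flow - the OSD ids (12 and 7) and PG id (3.1f) are placeholders, and both OSDs must be stopped while the tool runs; filestore OSDs may also need --journal-path:

  # on the failing OSD's host
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op export --pgid 3.1f --file /root/pg.3.1f.export
  # on a healthy OSD's host
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --op import --file /root/pg.3.1f.export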

Re: [ceph-users] Converting to dynamic bucket resharding in Luminous

2018-07-30 Thread Sean Redmond
Hi, I also had the same issues and took to disabling this feature. Thanks On Mon, Jul 30, 2018 at 8:42 AM, Micha Krause wrote: > Hi, > > I have a Jewel Ceph cluster with RGW index sharding enabled. I've >> configured the index to have 128 shards. I am upgrading to Luminous. What >> will h

Re: [ceph-users] Setting up Ceph on EC2 i3 instances

2018-07-28 Thread Sean Redmond
Hi, You may need to consider the latency between the AZs; it may make it difficult to get very high IOPS - I suspect that is the reason EBS is replicated within a single AZ. Do you have any data that shows the latency between the AZs? Thanks On Sat, 28 Jul 2018, 05:52 Mansoor Ahmed, wrote: > H

Re: [ceph-users] [rgw] Very high cache misses with automatic bucket resharding

2018-07-16 Thread Sean Redmond
Hi, Do you have ongoing resharding? 'radosgw-admin reshard list' should show you the status. Do you see the number of objects in the .rgw.bucket.index pool increasing? I hit a lot of problems trying to use auto resharding in 12.2.5 - I have disabled it for the moment. Thanks [1] https://tracker.cep
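A short sketch of those checks - the index pool name is an assumption (on Jewel/Luminous defaults it is often default.rgw.buckets.index):

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket=<bucket>
  # watch whether the index object count keeps climbing
  rados -p default.rgw.buckets.index ls | wc -l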

Re: [ceph-users] Luminous 12.2.6 release date?

2018-07-10 Thread Sean Redmond
Hi Sean (Good name btw), Can you please link me to the tracker issue that 12.2.6 fixes? I have disabled resharding in 12.2.5 due to it running endlessly. Thanks On Tue, Jul 10, 2018 at 9:07 AM, Sean Purdy wrote: > While we're at it, is there a release date for 12.2.6? It fixes a > reshard/versioning bug

Re: [ceph-users] RGW Index rapidly expanding post tunables update (12.2.5)

2018-06-20 Thread Sean Redmond
Hi, It sounds like the .rgw.bucket.index pool has grown, maybe due to some problem with dynamic bucket resharding. I wonder if the (stale/old/not used) bucket indexes need to be purged using something like the below: radosgw-admin bi purge --bucket= --bucket-id= Not sure how you would find the o
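A hedged sketch of how one might locate a stale bucket instance before purging it - the bucket name is a placeholder, and it is worth double-checking the ids before running the purge:

  # the current (live) bucket instance id
  radosgw-admin metadata get bucket:<bucket-name>
  # all instances recorded for that bucket; anything not matching the live id is a candidate
  radosgw-admin metadata list bucket.instance | grep <bucket-name>
  # purge the index of a stale instance only
  radosgw-admin bi purge --bucket=<bucket-name> --bucket-id=<stale-bucket-id>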

Re: [ceph-users] SSD recommendation

2018-05-31 Thread Sean Redmond
Hi, I know the S4600 thread well as I had over 10 of those drives fail before I took them all out of production. Intel did say a firmware fix was on the way but I could not wait and opted for the SM863A and never looked back... I will be sticking with the SM863A for now on further orders. Thanks On Thu

[ceph-users] Bucket reporting content inconsistently

2018-05-11 Thread Sean Redmond
"max_size_kb": -1, "max_objects": -1 } } I have attempted a bucket index check and fix on this; however, it does not appear to have made a difference, and it reported no fixes or errors. Does anyone have any advice on how to proceed with removing this content? At t

Re: [ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Sean Redmond
> Matt > On Tue, Apr 24, 2018 at 10:45 AM, Sean Redmond wrote: > Hi, > We are currently using Jewel 10.2.7 and recently we have been experiencing > some issues with objects being deleted using the gc. After a bucket was > unsuccessfully deleted u

[ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Sean Redmond
Hi, We are currently using Jewel 10.2.7 and recently we have been experiencing some issues with objects being deleted using the gc. After a bucket was unsuccessfully deleted using --purge-objects (the first error discussed below occurred), all of the RGWs are occasionally becoming unresponsive and requ

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-01-12 Thread Sean Redmond
msung SM863a 2.5" Enterprise SSD, SATA3 6Gb/s, 2-bit MLC V-NAND Regards Sean Redmond On Wed, Jan 10, 2018 at 11:08 PM, Sean Redmond wrote: > Hi David, > Thanks for your email, they are connected inside a Dell R730XD (2.5 inch 24 > disk model) in non-RAID mode via a PERC RAID ca

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-01-10 Thread Sean Redmond
have ordered HGST UltraStar SN200 2.5 inch SFF drives with a 3 DWPD > rating. > Regards > David Herselman > From: Sean Redmond [mailto:sean.redmo...@gmail.com] > Sent: Thursday, 11 January 2018 12:45 AM > To: David Herselman

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-01-10 Thread Sean Redmond
Hi, I have a case where 3 out of 12 of these Intel S4600 2TB models failed within a matter of days after being burn-in tested and then placed into production. I am interested to know, did you ever get any further feedback from the vendor on your issue? Thanks On Thu, Dec 21, 2017 at 1:38 PM, David

Re: [ceph-users] Ubuntu 17.10, Luminous - which repository

2017-12-08 Thread Sean Redmond
Hi, Did you see this: http://docs.ceph.com/docs/master/install/get-packages/ It contains details on how to add the apt repos provided by the Ceph project. You may also want to consider 16.04 if this is a production install, as 17.10 has a pretty short life ( https://www.ubuntu.com/info/release-end
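The usual apt setup from that page looks roughly like the below - the release name (luminous) is an assumption, and packages are generally only built for Ubuntu LTS releases, which is another reason to prefer 16.04:

  wget -q -O- https://download.ceph.com/keys/release.asc | sudo apt-key add -
  echo "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt update && sudo apt install ceph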

Re: [ceph-users] HEALTH_ERR : PG_DEGRADED_FULL

2017-12-07 Thread Sean Redmond
Can you share your ceph osd tree / crushmap and `ceph health detail` via pastebin? Is recovery stuck or is it ongoing? On 7 Dec 2017 07:06, "Karun Josy" wrote: > Hello, > I am seeing a health error in our production cluster. > health: HEALTH_ERR > 1105420/11038158 objects misplaced
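For anyone following along, the exact commands to collect that output would be something like:

  ceph osd tree
  ceph health detail
  ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt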

Re: [ceph-users] Luminous v12.2.2 released

2017-12-05 Thread Sean Redmond
Hi Florent, I have always done mons, osds, rgw, mds, then clients. Packages that don't auto-restart services on update are, IMO, a good thing. Thanks On Tue, Dec 5, 2017 at 3:26 PM, Florent B wrote: > On Debian systems, upgrading packages does not restart services! > On 05/12/2017 16:22, Oscar Sega

Re: [ceph-users] OSD Random Failures - Latest Luminous

2017-11-18 Thread Sean Redmond
Hi, Is it possible to add new empty osds to your cluster? Or do these also crash out? Thanks On 18 Nov 2017 14:32, "Ashley Merrick" wrote: > Hello, > > > > So seems noup does not help. > > > > Still have the same error : > > > > 2017-11-18 14:26:40.982827 7fb4446cd700 -1 *** Caught signal (Abo

Re: [ceph-users] Upgrade osd ceph version

2017-03-05 Thread Sean Redmond
Hi, You should upgrade them all to the latest point release if you don't want to upgrade to the latest major release. Start with the mons, then the osds. Thanks On 3 Mar 2017 18:05, "Curt Beason" wrote: > Hello, > > So this is going to be a noob question probably. I read the > documentation,
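A rough sketch of that order on a systemd-based install - the unit names assume a default package layout, and it is worth confirming versions after each stage:

  # monitors first, one at a time
  sudo systemctl restart ceph-mon@$(hostname -s)
  ceph tell mon.<id> version
  # then OSDs, one host or failure domain at a time
  sudo systemctl restart ceph-osd@<id>
  ceph tell osd.* version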

Re: [ceph-users] Problems with http://tracker.ceph.com/?

2017-01-20 Thread Sean Redmond
Hi, Is the current strange DNS issue with docs.ceph.com related to this also? I noticed that docs.ceph.com is getting a different A record from ns4.redhat.com vs ns{1..3}.redhat.com - dig output here: http://pastebin.com/WapDY9e2 Thanks On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick wrote: > On 01
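The comparison can be reproduced with a quick loop such as:

  for ns in ns1 ns2 ns3 ns4; do
      echo "== ${ns}.redhat.com"
      dig @"${ns}.redhat.com" docs.ceph.com A +short
  done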

[ceph-users] Problems with http://tracker.ceph.com/?

2017-01-19 Thread Sean Redmond
Looks like there may be an issue with the ceph.com and tracker.ceph.com websites at the moment.

Re: [ceph-users] CephFS

2017-01-17 Thread Sean Redmond
stable the > technology is in general. > Stable. Multiple customers of mine run it in production with the kernel > client and serious load on it. No major problems. > Wido > On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond > wrote: >> What's your use

Re: [ceph-users] CephFS

2017-01-16 Thread Sean Redmond
What's your use case? Do you plan on using kernel or fuse clients? On 16 Jan 2017 23:03, "Tu Holmes" wrote: > So what's the consensus on CephFS? > > Is it ready for prime time or not? > > //Tu > > ___ > ceph-users mailing list > ceph-users@lists.ceph.c

Re: [ceph-users] docs.ceph.com down?

2017-01-02 Thread Sean Redmond
If you need the docs you can try reading them here: https://github.com/ceph/ceph/tree/master/doc On Mon, Jan 2, 2017 at 7:45 PM, Andre Forigato wrote: > Hello Marcus, > Yes, it's down. :-( > André > - Original message - > From: "Marcus Müller" > To: ceph-users@lists.ceph.co

Re: [ceph-users] osd removal problem

2016-12-29 Thread Sean Redmond
Hi, Hmm, could you try to dump the crush map, decompile it, modify it to remove the DNE OSDs, compile it and load it back into Ceph? http://docs.ceph.com/docs/master/rados/operations/crush-map/#get-a-crush-map Thanks On Thu, Dec 29, 2016 at 1:01 PM, Łukasz Chrustek wrote: > Hi, > ]# cep
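The round trip being described is roughly:

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt and delete the entries for the DNE OSDs
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new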

[ceph-users] CephFS metadata inconsistent PG Repair Problem

2016-12-19 Thread Sean Redmond
Hi Ceph-Users, I have been running into a few issues with CephFS metadata pool corruption over the last few weeks. For background please see tracker.ceph.com/issues/17177 # ceph -v ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) I am currently facing a side effect of this issue tha

Re: [ceph-users] CephFS FAILED assert(dn->get_linkage()->is_null())

2016-12-12 Thread Sean Redmond
tigate this further but thought it was worth sharing. Hopefully the above is useful to you. If you need more information I will do my best to provide it; you can also find me in #ceph (s3an2) if that is helpful. Thanks On Mon, Dec 12, 2016 at 12:17 PM, John Spray wrote: > On Sat, Dec 10, 2016 a

Re: [ceph-users] CephFS FAILED assert(dn->get_linkage()->is_null())

2016-12-10 Thread Sean Redmond
pect that I will face an mds assert of the same type sooner >> or later. Can you please explain a bit further what operations you did >> to clean up the problem? >> Cheers >> Goncalo >> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Rob

Re: [ceph-users] CephFS FAILED assert(dn->get_linkage()->is_null())

2016-12-08 Thread Sean Redmond
How is it possible to identify stray directory fragments? Thanks On Thu, Dec 8, 2016 at 6:30 PM, John Spray wrote: > On Thu, Dec 8, 2016 at 3:45 PM, Sean Redmond > wrote: > Hi, > We had no changes going on with the ceph pools or ceph servers at the > time.
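Two ways to peek at strays, as a hedged sketch only - the metadata pool name is a placeholder, and the 600.*-609.* object naming for rank 0's stray directories is an assumption worth verifying on your own cluster:

  # stray counters on the active MDS
  ceph daemon mds.<name> perf dump | grep -i stray
  # rank 0's stray directory fragments are stored as objects named 600.* .. 609.* in the metadata pool
  rados -p <cephfs-metadata-pool> ls | egrep '^60[0-9]\.'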

Re: [ceph-users] CephFS FAILED assert(dn->get_linkage()->is_null())

2016-12-08 Thread Sean Redmond
Hi, We had no changes going on with the ceph pools or ceph servers at the time. We have, however, been hitting this in the last week and it may be related: http://tracker.ceph.com/issues/17177 Thanks On Thu, Dec 8, 2016 at 3:34 PM, John Spray wrote: > On Thu, Dec 8, 2016 at 3:11 PM, S

[ceph-users] CephFS FAILED assert(dn->get_linkage()->is_null())

2016-12-08 Thread Sean Redmond
Hi, I have a CephFS cluster that is currently unable to start the mds server as it is hitting an assert. The extract from the mds log is below; any pointers are welcome: ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b) 2016-12-08 14:50:18.577038 7f7d9faa3700 1 mds.0.47077 handle_m

[ceph-users] ceph.com Website problems

2016-12-07 Thread Sean Redmond
Looks like the ceph.com, tracker.ceph.com and download.ceph.com websites / repos are having an issue at the moment. I guess it may be related to the below: DreamCompute US-East 2 Cluster - Network connectivity issues

Re: [ceph-users] Ceph Ceilometer Integration

2016-11-30 Thread Sean Redmond
Hi Satheesh, Do you have anything in the ceilometer error logs? Thanks On Wed, Nov 30, 2016 at 6:05 PM, Patrick McGarry wrote: > Hey Satheesh, > > Moving this over to ceph-user where it'll get the appropriate > eyeballs. Might also be worth a visit to the #ceph irc channel on > oftc.net. Thank

Re: [ceph-users] cephfs page cache

2016-11-14 Thread Sean Redmond
ced this issue. Adding the following lines to httpd.conf > can work around this issue. > EnableMMAP off > EnableSendfile off > On Sat, Sep 3, 2016 at 11:07 AM, Yan, Zheng wrote: > On Fri, Sep 2, 2016 at 5:10 PM, Sean Redmond > wrote: >> I have

Re: [ceph-users] ceph 10.2.3 release

2016-11-08 Thread Sean Redmond
Hi, Yes, this is pretty stable; I am running it in production. Thanks On Tue, Nov 8, 2016 at 10:38 AM, M Ranga Swami Reddy wrote: > Hello, > Can you please confirm, if the ceph 10.2.3 is ready for production use. > Thanks > Swami > > ___ > ceph-user

Re: [ceph-users] Feedback wanted: health warning when standby MDS dies?

2016-10-19 Thread Sean Redmond
Hi, I would be interested in the case where an mds in standby-replay fails. Thanks On Wed, Oct 19, 2016 at 4:06 PM, Scottix wrote: > I would take the analogy of a Raid scenario. Basically a standby is > considered like a spare drive. If that spare drive goes down. It is good to > know about the

Re: [ceph-users] ceph on two data centers far away

2016-10-18 Thread Sean Redmond
Maybe this would be an option for you: http://docs.ceph.com/docs/jewel/rbd/rbd-mirroring/ On Tue, Oct 18, 2016 at 8:18 PM, yan cui wrote: > Hi Guys, > >Our company has a use case which needs the support of Ceph across two > data centers (one data center is far away from the other). The exp
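For pool-level mirroring, the setup sketched in those docs is roughly the below - pool, user and peer names are placeholders:

  # on each cluster
  rbd mirror pool enable <pool> pool
  rbd mirror pool peer add <pool> client.<user>@<remote-cluster>
  # run the rbd-mirror daemon on the site that should pull changes
  sudo systemctl enable --now ceph-rbd-mirror@<id>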

Re: [ceph-users] Please help to check CEPH official server inaccessible issue

2016-10-11 Thread Sean Redmond
Hi, Yes, there is a problem at the moment; there is another ML thread with more details. The EU repo mirror, eu.ceph.com, should still be working. Thanks On 11 Oct 2016 3:07 p.m., "wenngong" wrote: > Hi Dear, > I am trying to study and install ceph from the official website. But I cannot > access:

[ceph-users] ceph website problems?

2016-10-11 Thread Sean Redmond
Hi, Looks like the Ceph website and related subdomains have been giving errors for the last few hours. I noticed the below that I use are in scope. http://ceph.com/ http://docs.ceph.com/ http://download.ceph.com/ http://tracker.ceph.com/ Thanks

Re: [ceph-users] Ceph consultants?

2016-10-10 Thread Sean Redmond
Hi, In the end this was tracked back to a switch MTU problem; once that was fixed, any version of ceph-deploy osd prepare/create worked as expected. Thanks On Mon, Oct 10, 2016 at 11:02 AM, Eugen Block wrote: > Did the prepare command succeed? I don't see any output referring to > 'ceph-deploy
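A quick way to catch this class of problem, assuming a 9000-byte MTU is intended (8972 = 9000 minus the IP and ICMP headers):

  ip link show dev <iface> | grep -o 'mtu [0-9]*'
  # send a non-fragmentable payload end to end
  ping -M do -s 8972 -c 3 <peer-ip>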

Re: [ceph-users] I/O freeze while a single node is down.

2016-09-13 Thread Sean Redmond
Hi, The host that is taken down has 12 disks in it? Have a look at the down PGs ('18 pgs down') - I suspect this is what is causing the I/O freeze. Is your crush map set up correctly to split data over different hosts? Thanks On Tue, Sep 13, 2016 at 11:45 AM, Daznis wrote: > No, no errors

Re: [ceph-users] NFS gateway

2016-09-07 Thread Sean Redmond
Have you seen this : https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote: > Hi, > > One of the use-cases I'm currently testing is the possibility to replace > a NFS storage cluster using a Ceph cluster. > > The idea I have is to
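A minimal ganesha.conf export for the CephFS FSAL might look like the below - treat it as a sketch; the paths and export id are placeholders:

  EXPORT {
      Export_Id = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
      }
  }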

Re: [ceph-users] cephfs page cache

2016-09-02 Thread Sean Redmond
37 PM, Gregory Farnum wrote: > On Fri, Sep 2, 2016 at 11:35 AM, Sean Redmond > wrote: > > Hi, > > > > That makes sense, I have worked around this by forcing the sync within > the > > application running under apache and it is working very well now without >

Re: [ceph-users] cephfs page cache

2016-09-02 Thread Sean Redmond
not be exhibiting any of these > issues, although obviously we can't guarantee there are no bugs. > -Greg > > > > > Thanks > > > > On Wed, Aug 31, 2016 at 5:51 PM, Sean Redmond > > wrote: > >> > >> I am not sure how to tell? > >>

Re: [ceph-users] cephfs page cache

2016-09-02 Thread Sean Redmond
s again. This issue could be caused by a stale session. > Could you check the kernel logs of your servers? Are there any ceph-related > kernel messages (such as "ceph: mds0 caps stale")? > Regards > Yan, Zheng > On Thu, Sep 1, 2016 at 11:02 PM, Sean Redmond > w

Re: [ceph-users] cephfs page cache

2016-09-01 Thread Sean Redmond
Hi, It seems to be using the mmap() syscall; from what I read, this indicates it is using memory-mapped I/O. Please see a strace here: http://pastebin.com/6wjhSNrP Thanks On Wed, Aug 31, 2016 at 5:51 PM, Sean Redmond wrote: > I am not sure how to tell? > Server1 and Server2 mount the

Re: [ceph-users] cephfs page cache

2016-08-31 Thread Sean Redmond
Aug 31, 2016 at 12:49 AM, Sean Redmond > wrote: > > Hi, > > > > I have been able to pick through the process a little further and > replicate > > it via the command line. The flow seems looks like this: > > > > 1) The user uploads an image to webserver se

Re: [ceph-users] cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest"

2016-08-31 Thread Sean Redmond
I have updated the tracker with some log extracts, as I seem to be hitting this or a very similar issue. I was unsure of the correct ceph-objectstore-tool syntax to try to extract that information. On Wed, Aug 31, 2016 at 5:56 AM, Brad Hubbard wrote: > On Wed, Aug 31, 2016 at
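For the record, the object-level syntax is roughly the below, run with the OSD stopped - the OSD id and object name are placeholders:

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> '<object>' list-omap
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> '<object>' get-omaphdr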

Re: [ceph-users] cephfs page cache

2016-08-31 Thread Sean Redmond
a problem, but it's not clear to me what the expected behavior is when a cephfs client is trying to read a file's contents that are still being flushed to the file system by the cephfs client that created the file. On Tue, Aug 30, 2016 at 5:49 PM, Sean Redmond wrote: > Hi, > I hav

Re: [ceph-users] cephfs page cache

2016-08-30 Thread Sean Redmond
/1w6UZzNQ It looks like it may be a race between the time it takes the uploader01 server to commit the file to the file system and the fast incoming read request from the visiting user to server1 or server2. Thanks On Tue, Aug 30, 2016 at 10:21 AM, Sean Redmond wrote: > You are correct it only s

Re: [ceph-users] cephfs page cache

2016-08-30 Thread Sean Redmond
You are correct, it only seems to impact recently modified files. On Tue, Aug 30, 2016 at 3:36 AM, Yan, Zheng wrote: > On Tue, Aug 30, 2016 at 2:11 AM, Gregory Farnum > wrote: > On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond > wrote: >> Hi, >>

Re: [ceph-users] cephfs page cache

2016-08-29 Thread Sean Redmond
Hi, Yes the file has no contents until the page cache is flushed. I will give the fuse client a try and report back. Thanks On Mon, Aug 29, 2016 at 7:11 PM, Gregory Farnum wrote: > On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond > wrote: > > Hi, > > > > I am runn

[ceph-users] cephfs page cache

2016-08-29 Thread Sean Redmond
Hi, I am running cephfs (10.2.2) with kernel 4.7.0-1. I have noticed that frequently static files are showing as empty when served via a web server (apache). I have tracked this down further and can see, when running a checksum against the file on the cephfs file system on the node serving the empty

Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Sean Redmond
Hi, This seems pretty quick on a Jewel cluster here, but I guess the key question is how large is large? Is it perhaps a large number of smaller files that is slowing this down? Is the bucket index sharded / on SSD? [root@korn ~]# time s3cmd du s3://seanbackup 1656225129419 29 objects
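On the RGW side the index can also be queried directly, which avoids listing every object from the client:

  radosgw-admin bucket stats --bucket=<bucket>
  # the "usage" -> "rgw.main" section typically shows size_kb and num_objects straight from the index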

Re: [ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread Sean Redmond
Hi, Is this disabled because it's not a stable feature, or just user preference? Thanks On Mon, Jul 18, 2016 at 2:37 PM, Yan, Zheng wrote: > On Mon, Jul 18, 2016 at 9:00 PM, David wrote: > Hi all > Recursive statistics on directories are no longer showing in ls -l output > but ge
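The recursive stats are still reachable via CephFS virtual xattrs even when ls no longer shows them - the mount path below is a placeholder:

  getfattr -n ceph.dir.rbytes   /mnt/cephfs/some/dir
  getfattr -n ceph.dir.rentries /mnt/cephfs/some/dir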

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-15 Thread Sean Redmond
Hi Matt, I too have followed the upgrade from Hammer to Jewel; I think it is pretty well accepted to upgrade between LTS releases (H>J), skipping the 'stable' release (I) in the middle. Thanks On Fri, Jul 15, 2016 at 9:48 AM, Mart van Santen wrote: > Hi Wido, > Thank you, we are currently in th

Re: [ceph-users] Realistic Ceph Client OS

2016-07-12 Thread Sean Redmond
Thanks, Can I ignore this warning then? > > health HEALTH_WARN > crush map has legacy tunables (require bobtail, min is firefly) > > Cheers, > Mike > > On Jul 12, 2016, at 9:57 AM, Sean Redmond wrote: > > Hi, > > Take a look at the docs here ( > http:/

Re: [ceph-users] Realistic Ceph Client OS

2016-07-12 Thread Sean Redmond
00 > How can I set the tunables low enough? And what does that mean for > performance? > Cheers, > Mike > On Jul 12, 2016, at 9:43 AM, Sean Redmond wrote: > Hi, > It should work for you with kernel 3.10 as long as tunables are set low

Re: [ceph-users] Realistic Ceph Client OS

2016-07-12 Thread Sean Redmond
Hi, It should work for you with kernel 3.10 as long as the tunables are set low enough - do you see anything in 'dmesg'? Thanks On Tue, Jul 12, 2016 at 5:37 PM, Mike Jacobacci wrote: > Hi All, > Is mounting rbd only really supported on Ubuntu? All of our servers are > CentOS 7 or RedHat 7 and
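For reference, checking and relaxing the tunables looks like the below; note that changing the profile triggers data movement:

  ceph osd crush show-tunables
  ceph osd crush tunables bobtail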

Re: [ceph-users] cluster failing to recover

2016-07-05 Thread Sean Redmond
Hi, What happened to the missing 2 OSDs? 53 osds: 51 up, 51 in Thanks On Tue, Jul 5, 2016 at 4:04 PM, Matyas Koszik wrote: > Should you be interested, the solution to this was > ceph pg $pg mark_unfound_lost delete > for all pgs that had unfound objects; now the cluster is back in a health
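The workflow referenced, sketched out - the pgid is a placeholder, and since deleting unfound objects is irreversible, revert is worth considering first where it applies:

  ceph health detail | grep unfound
  ceph pg <pgid> list_missing
  ceph pg <pgid> mark_unfound_lost revert   # or: delete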

[ceph-users] RADOSGW buckets via NFS?

2016-07-03 Thread Sean Redmond
Hi, I noticed in the jewel release notes: "You can now access radosgw buckets via NFS (experimental)." Are there any docs that explain the configuration of NFS to access RADOSGW buckets? Thanks

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Sean Redmond
789/0 pipe(0x7f3da0005500 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da00067c0).fault 2016-07-03 09:49:50.205788 7f3da55f8700 0 -- 192.168.0.5:0/2773396901 >> 192.168.0.6:6789/0 pipe(0x7f3da0005500 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f3da0004c40).fault 2016-07-0

Re: [ceph-users] How many nodes/OSD can fail

2016-07-03 Thread Sean Redmond
It would need to be set to 1. On 3 Jul 2016 8:17 a.m., "Willi Fehler" wrote: > Hello David, > so in a 3-node cluster how should I set min_size if I want to be able to tolerate 2 node failures? > Regards - Willi > On 28.06.16 at 13:07, David wrote: > Hi, > This is probably the min_size on your cephfs
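Concretely, for a 3-node cluster with size 3 that should keep serving I/O with two nodes down - the pool name is a placeholder, and min_size 1 trades safety for availability:

  ceph osd pool get <pool> min_size
  ceph osd pool set <pool> min_size 1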

[ceph-users] RADOSGW buckets via NFS?

2016-06-30 Thread Sean Redmond
Hi, I noticed in the jewel release notes: "You can now access radosgw buckets via NFS (experimental)." Are there any docs that explain the configuration of NFS to access RADOSGW buckets? Thanks

Re: [ceph-users] Move RGW bucket index

2016-06-13 Thread Sean Redmond
t 12:11 PM, Василий Ангапов wrote: > Is there any way to move an existing non-sharded bucket index to a sharded > one? Or is there any way (online or offline) to move all objects from a > non-sharded bucket to a sharded one? > 2016-06-13 11:38 GMT+03:00 Sean Redmond: > Hi,

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread Sean Redmond
Hi, You could set the below to create ephemeral disks as RBDs: [libvirt] libvirt_images_type = rbd On Mon, May 2, 2016 at 2:28 PM, yang sheng wrote: > Hi > I am using ceph infernalis. > It works fine with my openstack liberty. > I am trying to test nova evacuate. > All the vms' volum
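A fuller sketch of the nova.conf section as commonly used for RBD-backed ephemeral disks - the pool, user and secret UUID below are assumptions, not values from this thread:

  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt-secret-uuid>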

Re: [ceph-users] Crush Map tunning recommendation and validation

2016-03-24 Thread Sean Redmond
Hi German, For data to be split over the racks you should set the crush rule to 'step chooseleaf firstn 0 type rack' instead of 'step chooseleaf firstn 0 type host'. Thanks On Wed, Mar 23, 2016 at 3:50 PM, German Anders wrote: > Hi all, > I had a question. I'm in the middle of a new ceph
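In a decompiled crush map the rule would look something like the below (the ruleset number is an assumption):

  rule replicated_rack {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type rack
      step emit
  }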

Re: [ceph-users] DSS 7000 for large scale object storage

2016-03-21 Thread Sean Redmond
I used a unit a little like this ( https://www.sgi.com/products/storage/servers/mis_server.html) for a SATA pool in Ceph - rebuilds after the failure of a node can be painful without a fair amount of testing & tuning. I have opted for more units with fewer disks for future builds, using the R730XD. On Mo

Re: [ceph-users] Cluster always scrubbing.

2015-11-24 Thread Sean Redmond
ep scrub` is empty. > But the output of "ceph health" shows "16 pgs active+clean+scrubbing+deep, 2 pgs active+clean+scrubbing". > I have 2 OSDs with slow request warnings. > Is it related? > Best wishes, > Mika > 20

Re: [ceph-users] Cluster always scrubbing.

2015-11-23 Thread Sean Redmond
Hi Mika, Have the scrubs been running for a long time? Can you see what pool they are running on? You can check using `ceph pg dump | grep scrub`. Thanks On Mon, Nov 23, 2015 at 9:32 AM, Mika c wrote: > Hi cephers, > We are facing a scrub issue. Our CEPH cluster is using Trusty / Hammer > 0.
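The check being suggested, spelled out - the pool is the number before the dot in each pgid:

  ceph pg dump | grep -i scrub
  ceph osd lspools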

Re: [ceph-users] SSD journals killed by VMs generating 500 IOPs (4kB) non-stop for a month, seemingly because of a syslog-ng bug

2015-11-23 Thread Sean Redmond
Hi Mart, I agree with Eneko. I had 72 of the Samsung Evo drives in service for journals (4:1) and ended up replacing them all within 9 months with Intel DC S3700s due to a high number of failures and very poor performance resulting in frequent blocked ops. Just stick with the Intel Data Center Grade

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-19 Thread Sean Redmond
our QEMU setup, which may be a single > IO thread. That's also what I think Mike is alluding to. > Warren > From: Sean Redmond <sean.redmo...@gmail.com> > Date: Wednesday, November 18, 2015 at 6:39 AM > To: "ceph-us...@ceph.com" <ceph-us...@ceph.

[ceph-users] All SSD Pool - Odd Performance

2015-11-18 Thread Sean Redmond
Hi, I have a performance question for anyone running an SSD-only pool. Let me detail the setup first. 12 x Dell PowerEdge R630 (2 x E5-2620 v3, 64GB RAM), 8 x Intel DC S3710 800GB, dual-port Solarflare 10Gb/s NIC (one front and one back), Ceph 0.94.5, Ubuntu 14.04 (3.13.0-68-generic). The above is in one

Re: [ceph-users] SSD pool and SATA pool

2015-11-17 Thread Sean Redmond
Hi, The below should help you: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ Thanks On Tue, Nov 17, 2015 at 9:58 PM, Nikola Ciprich wrote: > I'm not a ceph expert, but I needed to use > osd crush update on start = false > in the [osd] config section..
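The approach in that post boils down to separate crush roots plus pinning OSDs in place; a compressed sketch, where the bucket, host, pool names and weights are placeholders:

  # ceph.conf on the OSD hosts, so OSDs stay where you place them
  [osd]
  osd crush update on start = false

  # build an ssd root and a rule that uses it, then point the pool at it
  ceph osd crush add-bucket ssd root
  ceph osd crush add-bucket node1-ssd host
  ceph osd crush move node1-ssd root=ssd
  ceph osd crush create-or-move osd.0 0.8 root=ssd host=node1-ssd
  ceph osd crush rule create-simple ssd-rule ssd host firstn
  ceph osd pool set <ssd-pool> crush_ruleset <rule-id>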

Re: [ceph-users] Same rbd mount from multiple servers

2014-10-20 Thread Sean Redmond
Hi Mihaly, To my understanding you cannot mount an ext4 file system on more than one server at the same time; you would need to look at using a clustered file system. Thanks From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mihály Árva-Tóth Sent: 20 October 2014 09:34 To: