Re: [ceph-users] Change Partition Schema on OSD Possible?

2017-01-16 Thread w...@42on.com
> On 17 Jan 2017 at 05:31, Hauke Homburg wrote the following: > > On 16.01.2017 at 12:24, Wido den Hollander wrote: >>> On 14 January 2017 at 14:58, Hauke Homburg wrote: >>> >>> >>> On 14.01.2017 at 12:59, Wido den

Re: [ceph-users] CephFS

2017-01-16 Thread w...@42on.com
> On 17 Jan 2017 at 03:47, Tu Holmes wrote the following: > > I could use either one. I'm just trying to get a feel for how stable the > technology is in general. Stable. Multiple customers of mine run it in production with the kernel client and serious load

Re: [ceph-users] CephFS

2017-01-16 Thread Tu Holmes
I could use either one. I'm just trying to get a feel for how stable the technology is in general. On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond wrote: > What's your use case? Do you plan on using kernel or fuse clients? > > On 16 Jan 2017 23:03, "Tu Holmes"

Re: [ceph-users] CephFS

2017-01-16 Thread Sean Redmond
What's your use case? Do you plan on using kernel or fuse clients? On 16 Jan 2017 23:03, "Tu Holmes" wrote: > So what's the consensus on CephFS? > > Is it ready for prime time or not? > > //Tu > > ___ > ceph-users mailing list >

Re: [ceph-users] Re: Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-01-16 Thread Gregory Farnum
On Sat, Jan 14, 2017 at 7:54 PM, 许雪寒 wrote: > Thanks for your help:-) > > I checked the source code again, and in read_message, it does hold the > Connection::lock: You're correct of course; I wasn't looking and forgot about this bit. This was added to deal with

[ceph-users] CephFS

2017-01-16 Thread Tu Holmes
So what's the consensus on CephFS? Is it ready for prime time or not? //Tu ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] mkfs.ext4 hang on RBD volume

2017-01-16 Thread Jason Dillaman
Can you ensure that you have the "admin socket" configured for your librbd-backed VM so that you can do the following when you hit that condition: ceph --admin-daemon objecter_requests That will dump out any hung IO requests between librbd and the OSDs. I would also check your librbd logs to
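
For reference, a minimal sketch of the idea, assuming the VM's librbd client has an admin socket enabled in the [client] section of ceph.conf; the socket path and IDs below are examples, not taken from this thread:
```
# ceph.conf on the hypervisor, [client] section (example path):
#   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
# Once the VM is running, query the socket it created to dump in-flight/hung OSD requests:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.67890.asok objecter_requests
```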

Re: [ceph-users] mkfs.ext4 hang on RBD volume

2017-01-16 Thread Vincent Godin
We are using librbd on a host with CentOS 7.2 via virtio-blk. This server hosts the VMs on which we are doing our tests. But we have exactly the same behaviour as #9071. We tried to follow the thread back to bug 8818, but we didn't reproduce the issue with a lot of dd runs. Each time we try with

Re: [ceph-users] ceph.com outages

2017-01-16 Thread McFarland, Bruce
Ignore that last post. After another try or 2 I got to the new site with the updates as described. Looks great! On 1/16/17, 9:12 AM, "ceph-devel-ow...@vger.kernel.org on behalf of McFarland, Bruce" wrote: >Patrick, >I’m

Re: [ceph-users] ceph.com outages

2017-01-16 Thread McFarland, Bruce
Patrick, I’m probably overlooking something, but when I follow the Ceph Days link there are no 2017 events, only past ones, and the Cephalocon link goes to a 404 "page not found". Bruce On 1/16/17, 7:03 AM, "ceph-devel-ow...@vger.kernel.org on behalf of Patrick McGarry"

Re: [ceph-users] mkfs.ext4 hang on RBD volume

2017-01-16 Thread Jason Dillaman
Are you using krbd directly within the VM or librbd via virtio-blk/scsi? Ticket #9071 is against krbd. On Mon, Jan 16, 2017 at 11:34 AM, Vincent Godin wrote: > In fact, we can reproduce the problem from VM with CentOS 6.7, 7.2 or 7.3. > We can reproduce it each time with

Re: [ceph-users] mkfs.ext4 hang on RBD volume

2017-01-16 Thread Vincent Godin
In fact, we can reproduce the problem from VMs with CentOS 6.7, 7.2 or 7.3. We can reproduce it every time with this config: one VM (here on CentOS 6.7) with 16 RBD volumes of 100GB attached. When we run mkfs.ext4 serially on each of these volumes, we always encounter the problem on one of
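
A rough sketch of that reproduction, assuming the 16 attached RBD volumes appear in the guest as /dev/vdb through /dev/vdq (the device names are an assumption, not from the thread):
```
# Format the 16 attached RBD volumes one after another, as described in the report.
for dev in /dev/vd{b..q}; do
    mkfs.ext4 "$dev"
done
```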

[ceph-users] Ceph.com

2017-01-16 Thread Chris Jones
The site looks great! Good job! ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] librbd cache and clone awareness

2017-01-16 Thread Shawn Edwards
On Mon, Jan 16, 2017 at 10:11 AM Jason Dillaman wrote: > On Sun, Jan 15, 2017 at 2:56 PM, Shawn Edwards > wrote: > > If I, say, have 10 rbd attached to the same box using librbd, all 10 of > the > > rbd are clones of the same snapshot, and I have

Re: [ceph-users] librbd cache and clone awareness

2017-01-16 Thread Jason Dillaman
On Sun, Jan 15, 2017 at 2:56 PM, Shawn Edwards wrote: > If I, say, have 10 rbd attached to the same box using librbd, all 10 of the > rbd are clones of the same snapshot, and I have caching turned on, will each > rbd be caching blocks from the parent snapshot individually,

Re: [ceph-users] ceph.com outages

2017-01-16 Thread Loris Cuoghi
Hello, On 16/01/2017 at 16:03, Patrick McGarry wrote: > Ok, the new website should be up and functional. Shout if you see > anything that is still broken. Minor typos: "It replicates and re-balance data within the cluster dynamically—elminating this tedious task" -> re-balances ->

Re: [ceph-users] ceph.com outages

2017-01-16 Thread Patrick McGarry
FYI, our ipv6 is lagging a bit behind ipv4 (and the red hat nameservers may take a bit to catch up), so you may see the old site for just a little bit longer. On Mon, Jan 16, 2017 at 10:03 AM, Patrick McGarry wrote: > Ok, the new website should be up and functional. Shout

Re: [ceph-users] ceph.com outages

2017-01-16 Thread Patrick McGarry
Ok, the new website should be up and functional. Shout if you see anything that is still broken. As for the site itself, I'd like to highlight a few things worth checking out: * Ceph Days -- The first two Ceph Days have been posted, as well as the historical events for all of last year.

Re: [ceph-users] Ceph Monitoring

2017-01-16 Thread Marius Vaitiekunas
On Mon, Jan 16, 2017 at 3:54 PM, Andre Forigato wrote: > Hello Marius Vaitiekunas, Chris Jones, > > Thank you for your contributions. > I was looking for this information. > > I'm starting to use Ceph, and my concern is about monitoring. > > Do you have any scripts for

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Donny Davis
Give this a try: ceph osd set noout On Jan 16, 2017 9:08 AM, "Stéphane Klein" wrote: > I see my mistake: > > ``` > osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs > flags sortbitwise,require_jewel_osds > ``` > >
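
For context, noout is normally set only for the duration of planned maintenance and cleared again afterwards; a minimal sketch:
```
ceph osd set noout      # stop the cluster from marking down OSDs "out" (prevents rebalancing)
# ... restart or work on the OSD host ...
ceph osd unset noout    # let normal out-marking resume once the OSDs are back up
```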

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
I see my mistake: ``` osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs flags sortbitwise,require_jewel_osds ``` ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] ceph.com outages

2017-01-16 Thread Patrick McGarry
Hey cephers, Please bear with us as we migrate ceph.com as there may be some outages. They should be quick and over soon. Thanks! -- Best Regards, Patrick McGarry Director Ceph Community || Red Hat http://ceph.com || http://community.redhat.com @scuttlemonkey || @ceph

Re: [ceph-users] Ceph Monitoring

2017-01-16 Thread Andre Forigato
Hello Marius Vaitiekunas, Chris Jones, Thank you for your contributions. I was looking for this information. I'm starting to use Ceph, and my concern is about monitoring. Do you have any scripts for this monitoring? If you can help me, I will be very grateful. (Excuse me if there is

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
2017-01-16 12:24 GMT+01:00 Loris Cuoghi : > Hello, > > On 16/01/2017 at 11:50, Stéphane Klein wrote: > >> Hi, >> >> I have two OSD and Mon nodes. >> >> I'm going to add a third OSD and Mon to this cluster, but before that I want to >> fix this error: >> > > > > [SNIP SNAP] > >

Re: [ceph-users] All SSD cluster performance

2017-01-16 Thread Maxime Guyot
Hi Kees, Assuming 3 replicas and collocated journals, each RBD write will trigger 6 SSD writes (excluding FS overhead and occasional re-balancing). Intel has 4 tiers of data center SATA SSDs (other manufacturers may have fewer): - S31xx: ~0.1 DWPD (counted over 3 years): very read-intensive - S35xx: ~1
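
As a back-of-the-envelope illustration of that 6x figure, here is a hypothetical sizing sketch; every number below is an assumption for the example, not taken from this thread:
```
CLIENT_GB_PER_DAY=500   # daily client writes into the RBD pool (assumed)
REPLICAS=3              # pool size
JOURNAL_FACTOR=2        # FileStore data write + collocated journal write
NUM_OSDS=12             # SSD OSDs in the cluster (assumed)
SSD_CAPACITY_GB=960     # capacity per SSD (assumed)

TOTAL_SSD_GB=$((CLIENT_GB_PER_DAY * REPLICAS * JOURNAL_FACTOR))
echo "cluster-wide SSD writes/day: ${TOTAL_SSD_GB} GB"
# Per-drive DWPD ~= total SSD writes / (number of OSDs * drive capacity); compare the
# result against the drive's rated endurance (e.g. ~1 DWPD for the S35xx tier).
awk -v t="$TOTAL_SSD_GB" -v n="$NUM_OSDS" -v c="$SSD_CAPACITY_GB" \
    'BEGIN { printf "approx. DWPD per drive: %.2f\n", t / (n * c) }'
```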

Re: [ceph-users] How to update osd pool default size at runtime?

2017-01-16 Thread Stéphane Klein
2017-01-16 12:47 GMT+01:00 Jay Linux : > Hello Stephane, > > Try this: > > $ ceph osd pool get size -- it will print the > osd_pool_default_size > $ ceph osd pool get min_size -- it will print the > osd_pool_default_min_size > > if you want to change

Re: [ceph-users] How to update osd pool default size at runtime?

2017-01-16 Thread Jay Linux
Hello Stephane, Try this: $ ceph osd pool get size -- it will print the osd_pool_default_size $ ceph osd pool get min_size -- it will print the osd_pool_default_min_size If you want to change it at runtime, trigger the commands below: $ ceph osd pool set size $ ceph osd pool set
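
Spelled out with a pool name (here "rbd", purely as an example), the commands look like this; note that osd_pool_default_size only applies when new pools are created, while the per-pool setting is what you change at runtime:
```
ceph osd pool get rbd size        # current replica count for pool "rbd"
ceph osd pool get rbd min_size    # minimum replicas required to serve I/O
ceph osd pool set rbd size 3      # change the replica count at runtime
ceph osd pool set rbd min_size 2
```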

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Loris Cuoghi
Hello, On 16/01/2017 at 11:50, Stéphane Klein wrote: Hi, I have two OSD and Mon nodes. I'm going to add a third OSD and Mon to this cluster, but before that I want to fix this error: > > [SNIP SNAP] You've just created your cluster. With the standard CRUSH rules you need one OSD on three
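
One way to check and work around this on a two-node test cluster; the pool name "rbd" is only an example, and dropping the replica count is sensible for test setups only:
```
ceph osd tree                     # shows how many hosts CRUSH can pick replicas from
ceph osd pool set rbd size 2      # or add a third host so the default size 3 can be satisfied
ceph osd pool set rbd min_size 1
```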

Re: [ceph-users] All SSD cluster performance

2017-01-16 Thread Kees Meijs
Hi Maxime, Given your remark below, what kind of SATA SSD do you recommend for OSD usage? Thanks! Regards, Kees On 15-01-17 21:33, Maxime Guyot wrote: > I don’t have firsthand experience with the S3520, as Christian pointed out > their endurance doesn’t make them suitable for OSDs in most

Re: [ceph-users] unable to do regionmap update

2017-01-16 Thread Marko Stojanovic
Hi Orit, Executing a period update resolved the issue. Thanks for the help. Kind regards, Marko On 1/15/17 08:53, Orit Wasserman wrote: On Wed, Jan 11, 2017 at 2:53 PM, Marko Stojanovic wrote: Hello all, I have an issue with radosgw-admin regionmap update. It doesn't update