> On 17 Jan 2017 at 05:31, Hauke Homburg wrote:
>
> On 16.01.2017 at 12:24, Wido den Hollander wrote:
>>> On 14 January 2017 at 14:58, Hauke Homburg wrote:
>>>
>>>
>>> On 14.01.2017 at 12:59, Wido den
> On 17 Jan 2017 at 03:47, Tu Holmes wrote:
>
> I could use either one. I'm just trying to get a feel for how stable the
> technology is in general.
Stable. Several of my customers run it in production with the kernel client
under serious load.
I could use either one. I'm just trying to get a feel for how stable the
technology is in general.
On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
wrote:
> What's your use case? Do you plan on using kernel or fuse clients?
>
> On 16 Jan 2017 23:03, "Tu Holmes"
What's your use case? Do you plan on using kernel or fuse clients?
On 16 Jan 2017 23:03, "Tu Holmes" wrote:
> So what's the consensus on CephFS?
>
> Is it ready for prime time or not?
>
> //Tu
>
On Sat, Jan 14, 2017 at 7:54 PM, 许雪寒 wrote:
> Thanks for your help:-)
>
> I checked the source code again, and in read_message, it does hold the
> Connection::lock:
You're correct of course; I wasn't looking and forgot about this bit.
This was added to deal with
So what's the consensus on CephFS?
Is it ready for prime time or not?
//Tu
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Can you ensure that you have the "admin socket" configured for your
librbd-backed VM so that you can do the following when you hit that
condition:
ceph --admin-daemon objecter_requests
That will dump out any hung IO requests between librbd and the OSDs. I
would also check your librbd logs to
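The socket path is missing from the command above; as a minimal sketch of what this usually looks like, assuming a default-style path (both the config stanza and the .asok path below are illustrative, not your actual values):

```shell
# In ceph.conf on the hypervisor, under [client], enable the admin socket
# for librbd-backed VMs (the path template is an example):
#
#   [client]
#   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
#
# Then, when IO hangs, dump the in-flight requests between librbd and the
# OSDs through that socket (substitute the real .asok path on your host):
ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok objecter_requests
```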
We are using librbd on a host with CentOS 7.2 via virtio-blk. This server
hosts the VMs on which we are doing our tests, but we see exactly the same
behaviour as in #9071. We tried to follow the thread to bug 8818, but we
couldn't reproduce the issue with a lot of dd runs. Each time we try with
Ignore that last post. After another try or two I got to the new site with
the updates as described. Looks great!
On 1/16/17, 9:12 AM, "ceph-devel-ow...@vger.kernel.org on behalf of
McFarland, Bruce" wrote:
Patrick,
I’m probably overlooking something, but when I follow the Ceph Days link
there are only past events, no 2017 ones. The Cephalocon link goes to a 404
page not found.
Bruce
On 1/16/17, 7:03 AM, "ceph-devel-ow...@vger.kernel.org on behalf of
Patrick McGarry"
Are you using krbd directly within the VM or librbd via
virtio-blk/scsi? Ticket #9071 is against krbd.
On Mon, Jan 16, 2017 at 11:34 AM, Vincent Godin wrote:
In fact, we can reproduce the problem from VMs with CentOS 6.7, 7.2 or 7.3.
We can reproduce it each time with this config: one VM (here on CentOS
6.7) with 16 RBD volumes of 100 GB attached. When we serially run
mkfs.ext4 on each of these volumes, we always encounter the problem on one
of
The site looks great! Good job!
On Mon, Jan 16, 2017 at 10:11 AM Jason Dillaman wrote:
On Sun, Jan 15, 2017 at 2:56 PM, Shawn Edwards wrote:
> If I, say, have 10 RBDs attached to the same box using librbd, all 10 of the
> RBDs are clones of the same snapshot, and I have caching turned on, will each
> RBD be caching blocks from the parent snapshot individually,
Hello,
On 16/01/2017 at 16:03, Patrick McGarry wrote:
> Ok, the new website should be up and functional. Shout if you see
> anything that is still broken.
Minor typos:
"It replicates and re-balance data within the cluster
dynamically—elminating this tedious task"
-> re-balances
-> eliminating
FYI, our ipv6 is lagging a bit behind ipv4 (and the red hat
nameservers may take a bit to catch up), so you may see the old site
for just a little bit longer.
On Mon, Jan 16, 2017 at 10:03 AM, Patrick McGarry wrote:
Ok, the new website should be up and functional. Shout if you see
anything that is still broken.
As for the site itself, I'd like to highlight a few things worth checking out:
* Ceph Days -- The first two Ceph Days have been posted, as well as
the historical events for all of last year.
On Mon, Jan 16, 2017 at 3:54 PM, Andre Forigato
wrote:
> Hello Marius Vaitiekunas, Chris Jones,
>
> Thank you for your contributions.
> I was looking for this information.
>
> I'm starting to use Ceph, and my concern is about monitoring.
>
> Do you have any scripts for
Give this a try:
ceph osd set noout
On Jan 16, 2017 9:08 AM, "Stéphane Klein"
wrote:
I see my mistake:
```
osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs
flags sortbitwise,require_jewel_osds
```
Hey cephers,
Please bear with us as we migrate ceph.com as there may be some
outages. They should be quick and over soon. Thanks!
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
Hello Marius Vaitiekunas, Chris Jones,
Thank you for your contributions.
I was looking for this information.
I'm starting to use Ceph, and my concern is about monitoring.
Do you have any scripts for this monitoring?
If you can help me, I will be very grateful to you.
(Excuse me if there is
2017-01-16 12:24 GMT+01:00 Loris Cuoghi :
> Hello,
>
> On 16/01/2017 at 11:50, Stéphane Klein wrote:
>
>> Hi,
>>
>> I have two OSD and Mon nodes.
>>
>> I'm going to add third osd and mon on this cluster but before I want to
>> fix this error:
>>
> > [SNIP SNAP]
Hi Kees,
Assuming 3 replicas and a collocated journal, each RBD write will trigger 6
SSD writes (excluding FS overhead and occasional re-balancing).
Intel has 4 tiers of data center SATA SSDs (other manufacturers may have fewer):
- S31xx: ~0.1 DWPD (counted over 3 years): very read intensive
- S35xx: ~1
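The 6x figure can be double-checked with quick arithmetic; the 480 GB drive size below is only an illustrative assumption, not a recommendation:

```shell
# Write amplification: 3 replicas, and with a collocated journal each OSD
# writes the data twice (once to the journal, once to the data partition).
replicas=3
writes_per_osd=2
echo "SSD writes per client write: $((replicas * writes_per_osd))"

# Endurance: a 480 GB drive rated at 1 DWPD over 3 years can absorb roughly
# 480 GB/day * 365 days * 3 years ~= 525 TB written in total.
echo "Approx. TBW: $((480 * 365 * 3 / 1000))"
```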
2017-01-16 12:47 GMT+01:00 Jay Linux :
Hello Stephane,
Try this:
$ ceph osd pool get size  -->> it will print the "osd_pool_default_size"
$ ceph osd pool get min_size  -->> it will print the "osd_pool_default_min_size"
If you want to change it at runtime, trigger the commands below:
$ ceph osd pool set size
$ ceph osd pool set
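For reference, these commands also take the pool name as an argument; a minimal sketch assuming a pool named `rbd` (the pool name and the values are placeholders, not recommendations):

```shell
# Query the replication settings of one pool (pool name is an assumption):
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# Change them at runtime (values here are illustrative):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```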
Hello,
On 16/01/2017 at 11:50, Stéphane Klein wrote:
Hi,
I have two OSD and Mon nodes.
I'm going to add a third OSD and Mon to this cluster, but before that I want
to fix this error:
>
> [SNIP SNAP]
You've just created your cluster.
With the standard CRUSH rules you need one OSD on three
Hi Maxime,
Given your remark below, what kind of SATA SSD do you recommend for OSD
usage?
Thanks!
Regards,
Kees
On 15-01-17 21:33, Maxime Guyot wrote:
> I don’t have firsthand experience with the S3520, as Christian pointed out
> their endurance doesn’t make them suitable for OSDs in most
Hi Orit
Executing a period update resolved the issue. Thanks for the help.
Kind regards,
Marko
On 1/15/17 08:53, Orit Wasserman wrote:
On Wed, Jan 11, 2017 at 2:53 PM, Marko Stojanovic wrote:
Hello all,
I have an issue with radosgw-admin regionmap update. It doesn't update