On Mon, Aug 3, 2015 at 5:10 PM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
The problem with this kind of monitoring is that there are so many possible
metrics to watch and so many possible ways to watch them. For myself, I'm
working on implementing a couple of things:
- Watching error
On Mon, Aug 3, 2015 at 2:39 PM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
Another question: When using cephfs with it, we have to use a cache pool on
top of it. But this forms a huge bottleneck for read-only operations. Is it
possible to use the ec pool directly and bypass the cache for
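For the archives: whether reads can skip the cache depends on the cache mode of the tier. A minimal sketch of inspecting and switching it, assuming hypothetical pool names cephfs_cache and cephfs_data_ec, and a release that supports a read-forwarding mode:

```shell
# Show the current cache mode of the tier (pool names are placeholders):
ceph osd dump | grep cephfs_cache
# Switch the tier so reads go to the backing EC pool while writes still
# land in the cache (mode availability depends on your Ceph release):
ceph osd tier cache-mode cephfs_cache readforward
```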
This is handled by the filesystem usually (or not, depending on what filesystem
you use).
When you hit a bad block you should just replace the drive - in case of a
spinning disk the damage is likely going to spread, in case of flash device
this error should have been prevented by firmware in
All of the other things that I would be looking at would show a link speed
failure. In the two cases of network shenanigans I've had that effectively
broke ceph the link speed was always correct. That leads me to distrust
link speed as a reliable source of truth. Also, it's testing a proxy for
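A sketch of the counter-based checks being described, assuming a Linux host and a placeholder interface name eth0:

```shell
# Negotiated speed can look right while the link drops frames; the error
# counters usually give it away:
cat /sys/class/net/eth0/speed     # negotiated speed in Mb/s, e.g. 10000
ip -s link show eth0              # RX/TX errors, dropped, overruns
ethtool -S eth0 | grep -i err     # NIC-specific error counters
```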
Hey John...
First of all, thank you for the nice talks you have been giving.
See the feedback on your suggestions below, plus some additional questions.
However, please note that in my example I am not doing only
deletions but also creating and updating files, which afaiu,
Hello,
There's a number of reasons I can think of why this would happen.
You say default behavior, but looking at your map it's clear that you
probably don't have a default cluster and CRUSH map.
Your ceph.conf may help, too.
Regards,
Christian
On Tue, 4 Aug 2015 13:05:54 +1000 Daniel Manzau
Hi Yan Zheng
Now my questions:
1) I performed all of this testing to understand what would be the minimum
size (reported by df) of a file of 1 char and I am still not able to find a
clear answer. In a regular posix file system, the size of a 1 char (1 byte)
file is actually constrained by
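For comparison, here is what a regular local filesystem reports for a 1-byte file; the logical size is 1 byte but the allocation is one filesystem block (paths here are just examples):

```shell
printf 'x' > /tmp/onechar
stat -c 'size=%s bytes, blocks=%b (512-byte units)' /tmp/onechar
du -h /tmp/onechar   # typically 4.0K on a filesystem with 4K blocks
rm /tmp/onechar
```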
On Tue, Aug 4, 2015 at 9:40 AM, Goncalo Borges
gonc...@physics.usyd.edu.au wrote:
Hey John...
First of all, thank you for the nice talks you have been giving.
See the feedback on your suggestions below, plus some additional questions.
However, please note that in my example I am not
Hello,
On Thu, 30 Jul 2015 11:39:29 +0200 Khalid Ahsein wrote:
Good morning Christian,
thank you for your quick response.
So do I need to upgrade to 64 GB or 96 GB to be more secure?
32GB would be sufficient, 64GB will give you read performance benefits
with hot objects (large pagecache).
Hi Cephers,
We've been testing drive failures and we're just trying to see if the
behaviour of our cluster is normal, or if we've setup something wrong.
In summary; the OSD is down and out, but the PGs are showing as degraded
and don't seem to want to remap. We'd have assumed once the OSD was
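A few checks that usually narrow this down, sketched with a placeholder pool name:

```shell
ceph osd tree                  # is the OSD really marked both down and out?
ceph pg dump_stuck unclean     # which PGs are stuck, and on which OSDs
ceph osd pool get mypool size  # can the surviving hosts still satisfy the
                               # replica count for your CRUSH failure domain?
```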
Hi Max A. Krasilnikov,
Could you please explain why we need 3+ nodes in the case of a replication
factor of 2?
My understanding is that client IO depends on min_size, which is 1 in this case.
Thanks Regards
Somnath
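The settings Somnath refers to can be read per pool (pool "rbd" used as an example). The reason for 3+ nodes is recovery: with only two nodes and size 2, losing a node leaves no second host to re-replicate to, even though min_size 1 keeps client IO running.

```shell
ceph osd pool get rbd size       # replication factor, 2 in this example
ceph osd pool get rbd min_size   # 1: client IO continues on one replica
```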
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]
On Fri, Jul 31, 2015 at 7:21 PM, Jan Schermer j...@schermer.cz wrote:
I remember reading that ScaleIO (I think?) does something like this by
regularly sending reports to a multicast group, thus any node with issues (or
just overload) is reweighted or avoided automatically on the client. OSD
On Mon, Aug 3, 2015 at 5:50 AM, Goncalo Borges gonc...@physics.usyd.edu.au
wrote:
Dear CephFS gurus...
I forgot to mention in my previous email that I do understand the
deletions may take a while to perform since they are completed in the
background by MDS.
The delay in deletions should be
On Fri, Jul 31, 2015 at 5:57 PM, Charley Guan xi...@us.ibm.com wrote:
1) What is the purpose of installing and configuring salt-master and
salt-minion in a Ceph environment?
Is it true that salt-master is installed on the Calamari master machine and
calamari-minion would be configured on the Ceph
Hi, all
I have yet to find a good solution; does anyone have any suggestions?
I think this is a big problem when you use cache tier.
2015-08-03
liukai
From: Kenneth Waegeman kenneth.waege...@ugent.be
Sent: 2015-07-30 17:31
Subject: Re: [ceph-users] A cache tier issue with rate only at
Hello Ceph user peers,
I am trying to setup a Ceph cluster in OpenStack cloud. I remotely created
3 VMs via Horizon (GUI). I successfully manually installed Ceph in all the
nodes and wanted to make *ceph-node-1* as the Monitor (to start with).
The IP addresses for three nodes are:
ceph-node-1
Hi Patrick...
Do you think it is possible to make the talk / slides available? The
link is still not active in the ceph-tech-talks URL
Cheers
Goncalo
On 07/31/2015 01:08 AM, Patrick McGarry wrote:
Hey cephers,
Just sending a friendly reminder that our online CephFS Tech Talk is
happening
The problem with this kind of monitoring is that there are so many possible
metrics to watch and so many possible ways to watch them. For myself, I'm
working on implementing a couple of things:
- Watching error counters on servers
- Watching error counters on switches
- Watching performance
My
On Fri, Jul 31, 2015 at 11:11 AM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
This works when only using 1 host.
Is there a way to run the benchmarks with multiple instances?
You need to give it a unique name with the --run-name switch.
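For the archives, a sketch of multi-client benchmarking with distinct run names (pool and run names are placeholders):

```shell
# On host 1:
rados bench -p bench 60 write --run-name client1 --no-cleanup
# On host 2, concurrently:
rados bench -p bench 60 write --run-name client2 --no-cleanup
# Read back the objects written under a given run name:
rados bench -p bench 60 seq --run-name client1
```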
On Mon, Aug 3, 2015 at 5:29 PM, Wvath p.wv...@gmail.com wrote:
Dear John Spray,
That made it very clear!
If I can get root privileges, is it possible to beat Lustre in the same
environment?
Check out Mark's response on the CephFS vs Lustre performance thread
on this list -- that pretty
On Mon, Aug 3, 2015 at 2:55 PM, Anton Ivanov anton.iva...@ask.com wrote:
Hi all,
I am running Kubernetes on CoreOS and use the rbd binary extracted from the
ceph/ceph-docker/config image to map images to the CoreOS host. Everything works
just fine except when trying to unmap the volume. It doesn't
Hi all,
I am running Kubernetes on CoreOS and use the rbd binary extracted from the
ceph/ceph-docker/config image to map images to the CoreOS host. Everything works
just fine except when trying to unmap the volume. It doesn't unmap and gives the
following error:
$ rbd showmapped
id  pool  image  snap  device
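When unmap refuses, the device is usually still held open somewhere. A hedged checklist, with /dev/rbd0 as a placeholder device:

```shell
rbd showmapped          # confirm which device backs the image
grep rbd /proc/mounts   # still mounted anywhere?
lsof /dev/rbd0          # which process still holds the device open?
rbd unmap /dev/rbd0     # retry once the holders are gone
```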
On Mon, Aug 3, 2015 at 12:30 PM, Stijn De Weirdt
stijn.dewei...@ugent.be wrote:
Like a lot of system monitoring stuff, this is the kind of thing that
in an ideal world we wouldn't have to worry about, but the experience
in practice is that people deploy big distributed storage systems
without
Hi,
I'd like to deploy Cephfs in a cluster, but I need to have a performance
report compared with Lustre and Gluster. Could anyone point me documents /
links for performance between CephFS, Gluster and Lustre?
Thank you.
Kind regards,
- j
Like a lot of system monitoring stuff, this is the kind of thing that
in an ideal world we wouldn't have to worry about, but the experience
in practice is that people deploy big distributed storage systems
without having really good monitoring in place. We (people providing
not to become
On 08/03/2015 06:31 AM, jupiter wrote:
Hi,
I'd like to deploy Cephfs in a cluster, but I need to have a performance
report compared with Lustre and Gluster. Could anyone point me documents
/ links for performance between CephFS, Gluster and Lustre?
Thank you.
Kind regards,
- j
Hi,
I don't
Please follow the quick installation guide here
(http://ceph.com/docs/master/start/quick-start-preflight/) instead of a manual
install.
On Mon, Aug 3, 2015 at 3:48 PM, Jiwan Ninglekhu jiwan.c...@gmail.com
wrote:
Hello Ceph user peers,
I am trying to setup a Ceph cluster in OpenStack cloud. I remotely
Dear Ceph,
I'm wondering how Ceph isolates bad blocks when an EIO occurs.
I looked at the source code and found the deep scrub logic, chunky_scrub() in PG.cc.
And I understood the real recovery logic is in submit_push_data() in
ReplicatedBackend.cc.
It pulled an object from another replica and
Hi,
I read here in the documentation:
http://docs.ceph.com/docs/master/architecture/#erasure-coding
In an erasure coded pool, the primary OSD in the up set receives all
write operations.
I don't find what happens with read operations. Does the client contact
the primary and does this OSD
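One way to see who serves a given object is to ask the cluster for its mapping; the first OSD in the acting set is the primary (pool and object names are placeholders):

```shell
ceph osd map ecpool someobject
# prints the PG, the up set and the acting set for that object; the first
# entry of the acting set is the primary OSD
```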
Summary: I am having problems with inconsistent PG's that the 'ceph pg repair'
command does not fix. Below are the details. Any help would be appreciated.
# Find the inconsistent PG's
~# ceph pg dump | grep inconsistent
dumped all in format plain
2.439  4208  0  0  0  17279507143  3103  3103
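The usual sequence for this situation, using the PG id from the dump above; it is worth checking the primary OSD's log for the scrub errors before repairing:

```shell
ceph pg dump | grep inconsistent   # locate the inconsistent PGs
ceph pg repair 2.439               # ask the primary to repair the PG
ceph -w                            # watch for the scrub/repair result
```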
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to recompile code using
the librados C++ libraries after upgrading the librados library
version. Is that
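In practice the impact would be something like this for anyone building against the C++ bindings (file and target names are hypothetical):

```shell
# Rebuild librados C++ clients with C++11 enabled after upgrading:
g++ -std=c++11 -o myclient myclient.cc -lrados
```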
Yes! Sorry I forgot to publish these since I was still fighting
technical troubles from CDS (which now seem to be mostly resolved).
The Ceph Tech Talk should be up, and the CDS videos should be up
within the next day or two. Thanks for reminding me.
Best Regards,
Patrick McGarry
Director Ceph
Hrm, that's certainly supposed to work. Can you file a bug? Be sure
to note what version you are running (output of ceph-osd -v).
-Sam
On Mon, Aug 3, 2015 at 12:34 PM, Andras Pataki
apat...@simonsfoundation.org wrote:
Summary: I am having problems with inconsistent PG's that the 'ceph pg
Done: http://tracker.ceph.com/issues/12577
BTW, I'm using the latest release 0.94.2 on all machines.
Andras
On 8/3/15, 3:38 PM, Samuel Just sj...@redhat.com wrote:
Hrm, that's certainly supposed to work. Can you file a bug? Be sure
to note what version you are running (output of ceph-osd