Hi Travis,
These binaries are hosted on Canonical servers and are only for Ubuntu. Everything
worked fine up to and including the latest Firefly point release, 0.80.9. I just tried
the Hammer binaries, and they seem to fail to load the erasure coding
libraries.
I have now built my own binaries and I
Hi,
Does Ceph typically use TCP, UDP, or something else for the data path, both for
connections to clients and for inter-OSD cluster traffic?
Thanks
Pankaj
TCP
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, May 28, 2015 at 2:00 PM, Garg, Pankaj wrote:
Hi,
Does Ceph typically use TCP, UDP, or something else for the data path, both for
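For anyone who wants to confirm the answer above on a running node, a quick check (a sketch, assuming the iproute2 ss tool is available and the default ports are in use; monitors listen on 6789/tcp and OSDs bind in the 6800-7300/tcp range):

ss -tlnp | grep ceph-mon    # monitor listening socket on 6789/tcp
ss -tnp  | grep ceph-osd    # established OSD connections, all TCP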
To follow up on the original post,
Further digging indicates this is a problem with RBD image access and is
not related to NFS-RBD interaction as initially suspected. The nfsd is
simply hanging as a result of a hung request to the XFS file system
mounted on our RBD-NFS gateway. This hung XFS
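For reference, one way to see where such a hang is sitting (a sketch, assuming root access on the gateway and a kernel with stack tracing enabled) is to look for the kernel's blocked-task messages and then dump the stack of the stuck thread:

dmesg | grep -A 10 'blocked for more than 120 seconds'
# pick the PID of the hung nfsd or xfs worker thread from the output above, then:
cat /proc/<pid>/stack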
Thanks a million for the feedback, Christian!
I've tried to recreate the issue with 10 RBD volumes mounted on a
single server, without success!
I've issued the mkfs.xfs commands simultaneously (or at least as fast
as I could in different terminals) without noticing any problems. Can you
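In case it helps anyone else trying to reproduce this, a rough sketch of the parallel-format test (the image names testvol1..testvol10 are made up and are assumed to already exist; this destroys any data on them):

for i in $(seq 1 10); do rbd map rbd/testvol$i; done
# format all ten devices in parallel and wait for them to finish
for i in $(seq 1 10); do mkfs.xfs -f /dev/rbd/rbd/testvol$i & done
wait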
On Thu, May 28, 2015 at 1:04 AM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
On 05/27/2015 10:30 PM, Gregory Farnum wrote:
On Wed, May 27, 2015 at 6:49 AM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
We are also running a full backup sync to cephfs, using multiple
distributed
On Thu, May 28, 2015 at 12:59 AM, kefu chai tchai...@gmail.com wrote:
On Wed, May 27, 2015 at 3:36 AM, Patrick McGarry pmcga...@redhat.com wrote:
Due to popular demand we are expanding the Ceph lists to include a
Chinese-language list to allow for direct communications for all of
our friends
Hello,
Google is your friend; this comes up at least every month, if not more
frequently.
Your default size (replica count) is 2, and the default CRUSH rule you quote at the
very end of your mail delineates failure domains at the host level (quite
rightly so).
So with 2 replicas (quite dangerous with
Hi Christian,
Based on your feedback, I modified the CRUSH map:
step chooseleaf firstn 0 type host
to
step chooseleaf firstn 0 type osd
And then I compiled and set it, and voila, health is OK now. Thanks so much!
ceph health
HEALTH_OK
Regards,
Doan
On 05/29/2015 10:53 AM, Christian Balzer
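For the archives, the usual sequence behind "compiled and set" above looks like this (the file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: change "type host" to "type osd" in the chooseleaf step
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph health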
Hi ceph experts,
I just freshly deployed Ceph 0.94.1 with one monitor and one storage
node containing 4 disks. But ceph health shows PGs stuck in degraded,
unclean, and undersized states. Any idea how to resolve this and get to the
active+clean state?
ceph health
HEALTH_WARN 27 pgs degraded; 27 pgs
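For a single-node test cluster like this, besides the chooseleaf change discussed elsewhere in the thread, another option (only sensible for throwaway data) is to drop the replica count so a single host can satisfy it; a sketch, assuming the default rbd pool and repeating for any other pools:

ceph osd pool set rbd size 1
ceph osd pool set rbd min_size 1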
Hi Bruce,
The RHEL 7.0 kernel has many issues in its filesystem submodules, and most of them
are fixed only in RHEL 7.1.
So you should consider going to RHEL 7.1 directly and upgrading to at least kernel
3.10.0-229.1.2.
BR,
Luke
From: ceph-users
Hello,
On Thu, 28 May 2015 12:05:03 +0200 Xavier Serrano wrote:
On Thu May 28 11:22:52 2015, Christian Balzer wrote:
We are testing different scenarios before making our final decision
(cache-tiering, journaling, separate pool,...).
Definitely a good idea to test things out and
Jens-Christian Fischer jens-christian.fischer@... writes:
I think we (i.e. Christian) found the problem:
We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as
he hit all the disks, we started to experience these 120-second timeouts. We
realized that the QEMU process on the
Hello Greg,
On Wed, 27 May 2015 22:53:43 -0700 Gregory Farnum wrote:
The description of the logging abruptly ending and the journal being
bad really sounds like part of the disk is going back in time. I'm not
sure if XFS internally is set up in such a way that something like
losing part of
Hi Greg,
That is really great; thanks for your response, I completely understand what is
going on now. I wasn't thinking about capacity in a per-PG sense.
I have exported a pg dump of the cache pool and calculated some percentages, and
I can see that the data can vary up to around 5% amongst
On Thu, 28 May 2015 10:32:18 +0200 Jan Schermer wrote:
Can you check the capacitor reading on the S3700 with smartctl?
I suppose you mean this?
---
175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail Always - 648 (2 2862)
---
Never mind that these are brand new.
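For anyone else wanting to check, the value above comes straight out of the SMART attribute table (the device name here is just an example):

smartctl -A /dev/sdb | grep -i power_loss
# attribute 175 Power_Loss_Cap_Test: a normalized value well above the
# threshold (010) means the capacitors still pass the self-test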
Hello,
I have been testing NFS over RBD recently. I am trying to build an NFS HA
environment under Ubuntu 14.04 for testing, and the package version
information is as follows:
- Ubuntu 14.04 : 3.13.0-32-generic (Ubuntu 14.04.2 LTS)
- ceph : 0.80.9-0ubuntu0.14.04.2
- ceph-common :
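For context, the basic (non-HA) part of such a setup is just an RBD image exported over NFS; a minimal sketch, with made-up image and export names and without the pacemaker/corosync pieces that provide the HA:

rbd create rbd/nfsvol --size 102400
rbd map rbd/nfsvol
mkfs.xfs /dev/rbd/rbd/nfsvol
mkdir -p /export/nfsvol
mount /dev/rbd/rbd/nfsvol /export/nfsvol
echo '/export/nfsvol *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra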
On 05/27/2015 10:30 PM, Gregory Farnum wrote:
On Wed, May 27, 2015 at 6:49 AM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
We are also running a full backup sync to cephfs, using multiple distributed
rsync streams (with zkrsync), and also ran into this issue today on Hammer
0.94.1.
Can you check the capacitor reading on the S3700 with smartctl? This drive has
a non-volatile cache which *should* get flushed when power is lost; depending on
what the hardware does on reboot, it might get flushed even when rebooting.
I just got this drive for testing yesterday and it’s a beast, but
On Thu May 28 11:22:52 2015, Christian Balzer wrote:
We are testing different scenarios before making our final decision
(cache-tiering, journaling, separate pool,...).
Definitely a good idea to test things out and get an idea of what Ceph and
your hardware can do.
From my experience and
On 28 May 2015, at 10:56, Christian Balzer ch...@gol.com wrote:
On Thu, 28 May 2015 10:32:18 +0200 Jan Schermer wrote:
Can you check the capacitor reading on the S3700 with smartctl?
I suppose you mean this?
---
175 Power_Loss_Cap_Test 0x0033 100 100 010 Pre-fail
On Thu, May 28, 2015 at 12:22 AM, Christian Balzer ch...@gol.com wrote:
Hello Greg,
On Wed, 27 May 2015 22:53:43 -0700 Gregory Farnum wrote:
The description of the logging abruptly ending and the journal being
bad really sounds like part of the disk is going back in time. I'm not
sure if
On Thu, May 28, 2015 at 1:33 AM, wd_hw...@wistron.com wrote:
Hello,
I have been testing NFS over RBD recently. I am trying to build an NFS HA
environment under Ubuntu 14.04 for testing, and the package version
information is as follows:
- Ubuntu 14.04 : 3.13.0-32-generic (Ubuntu 14.04.2 LTS)
-
Hi Pankaj,
While there have been times in the past when ARM binaries were hosted
on ceph.com, there is currently no ARM hardware for builds. I
don't think you will see any ARM binaries in
http://ceph.com/debian-hammer/pool/main/c/ceph/, for example.
Combine that with the fact that
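For what it's worth, one way to get working binaries on such a box is to rebuild the Ubuntu packages locally; a sketch, assuming deb-src entries are enabled in the apt sources (the unpacked directory name depends on the version you pull):

apt-get source ceph
sudo apt-get build-dep ceph
cd ceph-*/
dpkg-buildpackage -us -uc -j4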
Hi all,
I have been testing CephFS with an erasure coded pool and a cache tier. I
have 3 MDS daemons running on the same physical server as the 3 mons. The cluster is
otherwise in an OK state: RBD is working and all PGs are active+clean. I'm
running v0.87.2 Giant on all nodes, on Ubuntu 14.04.2.
The cluster
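For reference, the cache-tier wiring in front of an EC pool described above normally looks something like the following (pool names and PG counts are made up; the CephFS metadata pool still has to be a replicated pool):

ceph osd pool create ecpool 128 128 erasure
ceph osd pool create cachepool 128 128
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool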
I've got some more tests running right now. Once those are done, I'll
find a couple of tests that had extreme differences and gather some
perf data for them.
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2
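The perf data gathering mentioned above is typically just system-wide sampling with call graphs; a minimal sketch, with an arbitrary 30-second window:

sudo perf record -a -g -- sleep 30
sudo perf report --stdio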
(This came in as a reply to the previous MDS crashing thread --
it's better to start new threads with a fresh message.)
On 28/05/2015 16:58, Peter Tiernan wrote:
Hi all,
I have been testing CephFS with an erasure coded pool and a cache tier. I
have 3 MDS daemons running on the same physical server as