Suggestions? Bugs? Comments?
Cheers
Goncalo
--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW 2006
T: +61 2 93511937
afaiu, should have an immediate effect (let us say within a couple of
seconds) on the system. This is not what I am experiencing: sometimes, my
perception is that sizes are never updated until a new operation is triggered.
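As a sketch of how one could probe this, assuming the client exposes the CephFS virtual xattrs (my understanding is that both the kernel client and ceph-fuse do), the recursive statistics can be read straight off a directory; the path here is just an example:
# getfattr -d -m 'ceph.dir.*' /cephfs/some/dir
# getfattr --only-values -n ceph.dir.rbytes /cephfs/some/dir
If ceph.dir.rbytes lags behind the real content, that would match the behaviour described above.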
Cheers
Goncalo
On 08/03/2015 01:20 PM, Goncalo Borges wrote:
Dear CephFS
be in read-only, for example.
TIA
Goncalo
Hey John...
First of all, thank you for the nice talks you have been giving around.
See the feedback on your suggestions below, plus some additional questions.
However, please note that in my example I am not only doing
deletions but also creating and updating files, which afaiu,
space reported by a df command in
this case?
My naive assumption would be that a df should show as used space 512KB x
3. Is this correct?
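To make the assumption concrete, a back-of-the-envelope sketch, assuming a plain 3x replicated data pool and no striping subtleties:
raw used = file size x replica count = 512 KB x 3 = 1536 KB
The pool-level and raw views can be compared with:
# ceph df
# rados df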
Cheers
Goncalo
today at 13:00 EDT (17:00 UTC). Please stop by and hear a
technical deep dive on CephFS and ask any questions you might have.
Thanks!
http://ceph.com/ceph-tech-talks/
direct link to the video conference: https://bluejeans.com/172084437/browser
Hi All...
I am still fighting with this issue. It may be something which is not
properly implemented, and if that is the case, that is fine.
I am still trying to understand what the real space occupied by files
in a /cephfs filesystem is, as reported, for example, by df.
Maybe I did not
pool 'cephfs_dt' (5) object 'thisobjectdoesnotexist' -
pg 5.28aa7f5a (5.35a) - up ([24,21,15], p24) acting ([24,21,15], p24)
Is this expected? Are those PGs actually assigned to something that does
not exist?
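For context, the mapping above is the output of something like the following command; 'ceph osd map' computes the placement from the object name's hash via CRUSH, so it returns a PG mapping for any name you type, whether or not the object exists:
# ceph osd map cephfs_dt thisobjectdoesnotexist
Nothing is stored until data is actually written; the mapping is pure arithmetic.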
Cheers
Goncalo
--monmap /tmp/monmap --keyring
/tmp/ceph.mon.keyring
That simply makes things crazy in 0.94.1.
Once I substituted the FQDN with just the hostname (without the domain),
it worked.
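For reference, a minimal sketch of the monmap generation I mean; hostname, IP and fsid are examples, and the point is that the mon name matches the short hostname rather than the FQDN:
# monmaptool --create --add mon1 192.168.0.10:6789 --fsid $(uuidgen) /tmp/monmap
# ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring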
Cheers
Goncalo
to be opened it will start looking for them...
Jan
On 24 Aug 2015, at 03:07, Goncalo Borges gonc...@physics.usyd.edu.au wrote:
Hi Jan...
Thanks for the reply.
Yes, I did an 'umount -l' but I was sure that no I/O was happening at the time.
So, I was almost 100% sure that there were no real
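As an aside, a hedged way to double-check that nothing still holds a mount before resorting to a lazy unmount, assuming the standard psmisc and lsof tools (the path is an example):
# fuser -vm /cephfs
# lsof +f -- /cephfs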
orrect, the test repeatedly writes data to 8M
files. The cache makes multiple writes coalesce into a single OSD
write
Ugh, of course. I don't see a tracker ticket for that, so I made one:
http://tracker.ceph.com/issues/13569
-Greg
build)
I remember that systemd support was introduced in the latest infernalis
release, and I just wonder if that, somehow, breaks backward
compatibility with older systems.
Cheers
Goncalo
Well... I misinterpreted the error. It is not systemd related but selinux
related. I must be missing some selinux component. Will investigate further.
From: Goncalo Borges [goncalo.bor...@sydney.edu.au]
Sent: 13 November 2015 16:51
To: ceph-users
://github.com/blog/1840-improving-github-for-science
https://guides.github.com/activities/citable-code/
That would provide a unique, standard way to cite the Ceph project everywhere.
Cheers
h by a factor of 4 but I kept seeing the same
behavior.
At this point, I do not have a clear idea why this is happening.
Cheers
Goncalo
On 10/03/2015 04:03 AM, Gregory Farnum wrote:
On Fri, Oct 2, 2015 at 1:57 AM, John Spray <jsp...@redhat.com> wrote:
On Fri, Oct 2, 2015 at 2:42 AM, G
Hi Goncalo,
On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
<gonc...@physics.usyd.edu.au> wrote:
Hi Sage...
I've seen that the rh6 derivatives have been ruled out.
This is a problem in our case since the OS choice in our systems is,
somehow, imposed by CERN. The experiments so
Hi Ken
Here it is:
http://tracker.ceph.com/issues/13470
Cheers
G.
On 10/09/2015 02:58 AM, Ken Dreyer wrote:
On Wed, Sep 30, 2015 at 7:46 PM, Goncalo Borges
<gonc...@physics.usyd.edu.au> wrote:
- Each time logrotate is executed, we receive a daily notice with the
message
ibus
debug packages which I do not really
need.
Cheers
Goncalo
On 11/13/2015 08:27 PM, Goncalo Borges wrote:
Well... I misinterpreted the error. It is not systemd related but selinux
related. I must be missing some selinux component. Will investigate b
The Ceph Day events for Shanghai, Tokyo, and Melbourne should all
still be proceeding as planned, however. Feel free to contact me if
you have any questions about Ceph Days. Thanks.
On 08/26/2015 10:28 AM, nigel.d.willi...@gmail.com wrote:
On 26 Aug 2015, at 9:47 am, Goncalo Borges gonc
' and 'out'
Cheers
Goncalo
On 08/25/2015 01:06 PM, Shinobu wrote:
So what is the situation where you need to do:
# cd /var/lib/ceph/osd/ceph-23/current
# rm -Rf *
# df
(...)
I'm quite sure that is not normal.
Shinobu
On Tue, Aug 25, 2015 at 9:41 AM, Goncalo Borges
gonc...@physics.usyd.edu.au
active+recovery_wait+degraded
8 active+remapped+backfilling
4 active+recovering+degraded
recovery io 521 MB/s, 170 objects/s
Cheers
Goncalo
Error ENOENT: i don't have pgid 2.38b
(...)
--- * ---
6) Create the non-existing PGs
# for pg in `ceph pg dump_stuck stale | grep '^[12]' | awk '{print $1}'`; do
ceph pg force_create_pg $pg; done
ok
pg 1.23 now creating, ok
pg 2.38b now creating, ok
(...)
--- * ---
7) At this point, for the PGs to leave the 'creating' status, I had to
restart all remaining OSDs. Otherwise those PGs were in the creating
one or two?
e./ In what circumstances would we do a reset of the filesystem with
'ceph fs reset cephfs --yes-i-really-mean-it'?
Thank you in Advance
Cheers
ave thoughts on what might be wrong? Or is there
other info I can provide to ease the search for what it might be?
Thanks!
n on that.
Cheers
Quoting Shinobu Kinjo <ski...@redhat.com>:
Anyhow this page would help you:
http://ceph.com/docs/master/cephfs/disaster-recovery/
Shinobu
- Original Message -
From: "Shinobu Kinjo" <ski...@redhat.com>
To: "Goncalo Borges" <gonc.
usr/lib64/liblttng-ust.so.0
(0x00337da0)
- To fix this, I had to set 'export HOME=/root' in
/usr/lib64/ceph/ceph_common.sh
Cheers
Goncalo
the amount of memory they can use?
Cheers
Goncalo
I found a partial answer to some of the questions:
5./ My questions:
- Is there a simple command for me to check which sessions are
active? 'cephfs-table-tool 0 show session' does not seem to work
- Is there a way for me to cross check which sessions belong to
which clients (IPs)?
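A hedged sketch covering both points, assuming access to the MDS admin socket on the MDS host (the daemon name is an example):
# ceph daemon mds.rccephmds session ls
As far as I recall, each session entry carries an 'inst' field with the client address, which gives the IP cross-check.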
.675605 7f8eaa775700 1 mds.0.19
ms_verify_authorizer: cannot decode auth caps bl of length 0
at 2:54 AM, Goncalo Borges
gonc...@physics.usyd.edu.au wrote:
Hey guys...
1./ I have a simple question regarding the appearance of degraded PGs.
First, for reference:
a. I am working with 0.94.2
b. I have 32 OSDs distributed over 4 servers, meaning that I have 8 OSDs per
server.
c. Our cluster is set
ceph-disk:Journal /dev/sdc3 was
not prepared with ceph-disk. Symlinking directly)
[1]
https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
be available in Centos CR
repos. For Centos 7.1.1503, it provides libunwind-1.1-5.el7.x86_64)
http://mirror.centos.org/centos/7.1.1503/cr
Cheers
Goncalo
options" option
ceph-disk: Mounting filesystem failed: Command '['/usr/bin/mount',
'-t', 'xfs', '-o', 'noatime,inode64', '--', '/dev/sdh1',
'/var/lib/ceph/tmp/mnt.0bUl5q']' returned non-zero exit status 1
5./ So there is an inconsistency in this set of instructions.
Is there a way to
Dear Cephfs gurus.
I have two questions regarding ACL support on cephfs.
1) Last time we tried ACLs, we saw that they were only working properly with the
kernel module, and I wonder what the present status of ACL support on
ceph-fuse is. Can you clarify?
2) If ceph-fuse is still not
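As a quick hedged check of whether ACLs behave on a given mount, assuming the standard acl user-space tools (paths and user are examples):
# setfacl -m u:testuser:rwx /coepp/cephfs/testdir
# getfacl /coepp/cephfs/testdir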
I think I've understood how to run it...
ceph-fuse -m MON_IP:6789 -r /syd /coepp/cephfs/syd
does what I want
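For persistence, my understanding of the jewel-era fstab syntax for the fuse.ceph helper is something like the line below; treat it as an unverified sketch, and the id and paths are examples:
id=admin,client_mountpoint=/syd  /coepp/cephfs/syd  fuse.ceph  defaults,_netdev  0 0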
Cheers
Goncalo
On 12/15/2015 12:04 PM, Goncalo Borges wrote:
Dear CephFS experts
Before, it was possible to mount a subtree of a filesystem using
ceph-fuse and the -r option
to properly do it?
TIA
Goncalo
To: Goncalo Borges; Loic Dachary
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe
issues.
I commented out partprobe and everything seems to work just fine.
*If someone has experience with why this is very bad, please advise.
Make sure you know
ake install
Cheers
G.
6_64.rpm
rbd-fuse-10.2.1-0.el7.x86_64.rpm
rbd-mirror-10.2.1-0.el7.x86_64.rpm
rbd-nbd-10.2.1-0.el7.x86_64.rpm
On 05/25/2016 07:45 AM, Gregory Farnum wrote:
On Wed, May 18, 2016 at 6:04 PM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
Dear All...
Our infrastructure is the fo
[Install]
WantedBy=ceph-mon.target
Am I the only one seeing the issue? Is it really an issue?
Cheers
G.
v27 here?
How can I solve this problem?
Regards,
XiuCai.
/2016 01:40 PM, Goncalo Borges wrote:
Hi XiuCai
Shouldn't you have, at least, 2 mons?
Cheers
G.
On 06/28/2016 01:12 PM, 秀才 wrote:
Hi,
there are 1 mon and 7 osds in my cluster now,
but it seems something is wrong, because `rbd -p test create pet --size
1024` does not return,
and the status is always
ds2
mds standby_for_rank = rccephmds
mds standby replay = true
Am I doing something particularly different than what is expected?
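For comparison, a hedged sketch of the jewel-era options as I understand them: following a daemon by name is 'mds standby for name', while 'mds standby for rank' expects a numeric rank, so mixing the two may be the problem. The names are examples:
[mds.rccephmds2]
mds standby for name = rccephmds
mds standby replay = true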
Cheers
G.
Hi Greg.
We are using Ceph and CephFS 9.2.0. CephFS clients are being mounted via
ceph-fuse.
We recently noticed the firewall from certain CephFS clients dropping
connections with OSDs as SRC. This is something which is not systematic but we
noticed happening at least once. Here is an
Hi X
Have you tried to inspect the mds for problematic sessions still connected from
those clients?
To check which sessions are still connected to the MDS, do (in ceph 9.2.0; the
command might be different or even not exist in older versions):
ceph daemon mds.<name> session ls
Cheers
om]
Sent: 03 February 2016 11:31
To: Goncalo Borges
Cc: Mykola Dvornik; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Urgent help needed for ceph storage "mount error 5 =
Input/output error"
I see a lot of sessions. How can I clear these sessions? Since I've rebooted the
cluster already,
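A hedged sketch for clearing a stuck session by hand, assuming a jewel-era MDS admin socket that supports eviction; the id comes from 'session ls', eviction is disruptive for that client, and on versions without the command an MDS restart is the fallback:
# ceph daemon mds.<name> session ls
# ceph daemon mds.<name> session evict <session-id>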
Hi CephFS experts.
1./ We are using Ceph and CephFS 9.2.0 with an active mds and a standby-replay
mds (standard config)
# ceph -s
cluster
health HEALTH_OK
monmap e1: 3 mons at
{mon1=:6789/0,mon2=:6789/0,mon3=:6789/0}
election epoch 98, quorum 0,1,2 mon1,mon3,mon2
Hi...
Seems very similar to
http://tracker.ceph.com/issues/14144
Can you confirm it is the same issue?
Cheers
G.
From: Goncalo Borges
Sent: 02 February 2016 15:30
To: ceph-us...@ceph.com
Cc: rct...@coepp.org.au
Subject: CEPHFS: standby-replay mds crash
Hi
Dear CephFS gurus...
I would like your advice on how to improve performance without compromising
reliability for CephFS clients deployed under a WAN.
Currently, our infrastructure relies on:
- ceph infernalis
- a ceph object cluster, with all core infrastructure components sitting in the
same
Hi Zhang...
If I can add some more info: changing the number of PGs is a heavy operation and, as far
as I know, you should NEVER decrease PGs. From the notes in pgcalc
(http://ceph.com/pgcalc/):
"It's also important to know that the PG count can be increased, but NEVER
decreased without destroying /
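To make the one-way nature concrete, a short sketch of the only direction that works; the pool name and counts are examples, and pgp_num has to follow pg_num for the data to actually rebalance:
# ceph osd pool set mypool pg_num 256
# ceph osd pool set mypool pgp_num 256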
From: Zhang Qiang [dotslash...@gmail.com]
Sent: 23 March 2016 23:17
To: Goncalo Borges
Cc: Oliver Dzombic; ceph-users
Subject: Re: [ceph-users] Need help for PG problem
And here's the osd tree if it matters.
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1
ling disk and was restarted a couple of times
while the disk gave errors. This caused the PG to become incomplete.
I've set debug osd to 20, but I can't really tell what is going wrong on osd.68
which causes it to stall this long.
Any idea what to do here to get this PG up and running again?
W
when all are in the same version?
Thank you for your answers
Cheers
Goncalo
directory)
open("/usr/share/locale/en_US/LC_MESSAGES/attr.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/attr.mo", O_RDONLY) = -1
ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/attr.mo"
Goncalo
is value.
Can someone clarify exactly what is happening here?
Cheers
G.
Hi cephers...
Our production cluster is running Jewel 10.2.2.
We were running a production cluster with 8 servers, each with 8 osds, making a
grand total of 64 osds. Each server also hosts 2 ssds for journals. Each ssd
supports 4 journals.
We had 1/3 of our osds above 80% occupied, and we
on the
10.100.1.0/24 being blocked.
I think I had the firewall disabled when I bootstrapped the osds in the
machines, and that might explain why there was some transfer of data.
Sorry for the entropy.
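For the record, a firewalld sketch of what I would open, assuming the usual defaults of the mon on 6789/tcp and the OSDs in the 6800-7300/tcp range:
# firewall-cmd --permanent --add-port=6789/tcp
# firewall-cmd --permanent --add-port=6800-7300/tcp
# firewall-cmd --reload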
Cheers
G.
On 07/27/2016 08:44 AM, Goncalo Borges wrote:
Hi cephers...
Our production
t_osdmap_epoch 2546
last_pg_scan 2546
full_ratio 0.95
*nearfull_ratio 0.85*
Cheers
G.
On 07/26/2016 12:39 PM, Brad Hubbard wrote:
On Tue, Jul 26, 2016 at 12:16:35PM +1000, Goncalo Borges wrote:
Hi Brad
Thanks for replying.
Answers inline.
I am a bit confused about the 'unchachable' m
?
The compilation takes a while but I will update the issue once I have
finished this last experiment (in the next few days)
Cheers
Goncalo
On 07/12/2016 09:45 PM, Goncalo Borges wrote:
Hi All...
Thank you for continuing to follow this already very long thread.
Pat and Greg are correct
Firewall or communication issues?
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of M Ranga Swami
Reddy [swamire...@gmail.com]
Sent: 28 July 2016 22:00
To: ceph-users
Subject: [ceph-users] osd wrongly maked as down
Hello,
hello - I use
Thanks for the help
Goncalo
From: Christian Balzer [ch...@gol.com]
Sent: 20 July 2016 19:36
To: ceph-us...@ceph.com
Cc: Goncalo Borges
Subject: Re: [ceph-users] pgs stuck unclean after reweight
Hello,
On Wed, 20 Jul 2016 13:42:20 +1000 Goncalo Borges wrote
Hi Kostis
This is a wild guess, but one thing I note is that your pool 179 has a very low
PG number (100).
Maybe the algorithm behind the new tunable needs a higher PG number to actually
proceed with the recovery?
You could try to increase the pgs to 128 (it is always better to use powers of
July 2016 06:54
To: Goncalo Borges
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory
issues
On Wed, Jul 27, 2016 at 6:37 PM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
> Hi Greg
>
> Thanks for replying. Answer inline.
Dear cephers.
I would like to request some clarification on migrating from legacy to optimal
(jewel) tunables.
We have recently migrated from infernalis to Jewel. However, we are still using
legacy tunables.
All our ceph infrastructure (mons, osds and mdss) is running 10.2.2 in Centos
it.
It is also worthwhile to mention that this seems to happen while we are
adding a new storage server to the underlying ceph infrastructure, so
there was some data movement happening in the background.
Any suggestion on how to mitigate it?
Cheers
Goncalo and Sean
wrote:
Try:
ceph pg set_nearfull_ratio 0.9
On 26 Jul 2016 08:16, "Goncalo Borges" <goncalo.bor...@sydney.edu.au> wrote:
Hello...
I do not think that these settings are working properly in jewel.
Maybe someone els
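One hedged way to see whether a new ratio actually took effect is to read it back from the PG map header, which is where these values live in jewel as far as I understand:
# ceph pg dump | grep full_ratio
The grep catches both the full_ratio and nearfull_ratio lines.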
directly from osds, right?!
We understand the performance issues that it might imply, but we
are more concerned with having data coherence on the client.
Thoughts?
Cheers
Thanks.
Daleep Singh Bais
:42
To: Goncalo Borges; ceph-users
Subject: Re: [ceph-users] Cascading failure on a placement group
Hi,
The timezones on all my systems appear to be the same, I just verified
it by running 'date' on all my boxes.
- HP
On Sat, 2016-08-13 at 12:36 +0000, Goncalo Borges wrote:
> The ticket I men
rent even if now is ok.
It would be worthwhile to check if the timezone is/was different.
Cheers
From: Hein-Pieter van Braam [h...@tmm.cx]
Sent: 13 August 2016 22:42
To: Goncalo Borges; ceph-users
Subject: Re: [ceph-users] Cascading failure on a pla
Hi cephers
I have a really simple question: the documentation always refers to the
procedure for substituting failed disks. Currently I have a predicted failure in a
raid 0 osd, and I would like to substitute it before it fails, without having to go
through replicating pgs once the osd is removed from
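The approach I have in mind is to drain the osd first, so the data is re-replicated while the disk is still readable, and only then remove it; a hedged sketch, with the osd id as an example:
# ceph osd crush reweight osd.23 0
(wait for the rebalance to finish and HEALTH_OK)
# ceph osd out 23
# systemctl stop ceph-osd@23
# ceph osd crush remove osd.23
# ceph auth del osd.23
# ceph osd rm 23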
The ticket I mentioned earlier was marked as a duplicate of
http://tracker.ceph.com/issues/9732
Cheers
Goncalo
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Goncalo
Borges [goncalo.bor...@sydney.edu.au]
Sent: 13 August 2016 22:23
To: Hein-Pieter van Braam; ceph-users
Hi Willi
If you are using ceph-fuse, to enable quota you need to pass the "--client-quota"
option in the mount operation.
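Putting it together, a hedged sketch; paths and sizes are examples, and my understanding is that the quota itself is stored as a virtual xattr on the directory:
# setfattr -n ceph.quota.max_bytes -v 100000000000 /coepp/cephfs/syd
# ceph-fuse --client-quota -m MON_IP:6789 -r /syd /coepp/cephfs/syd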
Cheers
Goncalo
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Willi Fehler
[willi.feh...@t-online.de]
Sent: 13
Hi HP.
I am just a site admin, so my opinion should be validated by proper support staff.
Seems really similar to
http://tracker.ceph.com/issues/14399
The ticket speaks about a timezone difference between osds. Maybe it is something
worthwhile to check?
Cheers
Goncalo
ers
Goncalo
From: Gregory Farnum [gfar...@redhat.com]
Sent: 12 July 2016 03:07
To: Goncalo Borges
Cc: John Spray; ceph-users
Subject: Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)
Oh, is this one of your custom-built packages? Are they using
tcmal
for tips, I saw an issue claiming that
'fuse_disable_pagecache' should be set to true in ceph.conf. Can you
briefly explain if this is correct and what the con of not using it is?
(just for me to understand it).
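For reference, the setting would sit in the client section of ceph.conf, something like the sketch below (assuming the option name from the issue I saw):
[client]
fuse_disable_pagecache = true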
Thank you in Advance
Cheers
Goncalo
On 07/15/2016 01:35 PM, Goncalo Borges
",
"history": []
}
},
{
"peer": "60",
"pgid": "5.306",
"last_update": "1005'55174",
"last_complete": "1005'55174",
"log_tail": "1005'
flag during update? If YES, try to unset it. I
faced the same problem when upgrading my ceph cluster from Hammer to
Jewel. Maybe it's yours: http://tracker.ceph.com/issues/16113
Wednesday, 20 July 2016, 8:42 +05:00, from Goncalo Borges
<goncalo.bor...@sydney.edu.au>:
Hi All...
To
e. However, in this new _up_ set, there is
always one osd with the near full message.
Maybe that is why rebalancing is on hold?
Maybe if I increase the threshold for the warning, the rebalance will restart?
Cheers
G.
On 07/20/2016 01:42 PM, Goncalo Borges wrote:
Hi All...
Today we had a warning regard
Hi Swami.
Did not make any difference.
Cheers
G.
On 07/20/2016 03:31 PM, M Ranga Swami Reddy wrote:
can you restart osd.32 and check the status?
Thanks
Swami
On Wed, Jul 20, 2016 at 9:12 AM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
Hi All...
Today we had a warning reg
your patch.
Will report here afterwards.
Thanks for the feedback.
Cheers
Goncalo
On 07/15/2016 01:19 PM, Yan, Zheng wrote:
On Fri, Jul 15, 2016 at 9:35 AM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
Hi All...
I've seen that Zheng, Brad, Pat and Greg already updated or mad
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
1/ 5 compressor
1/ 5 newstore
1/ 5 bluestore
1/ 5 bluefs
1/ 3 bdev
1/ 5 kstore
4/
and
recompile. Is this something safe to do?
Cheers
Goncalo
On 07/05/2016 01:34 PM, Patrick Donnelly wrote:
Hi Goncalo,
I believe this segfault may be the one fixed here:
https://github.com/ceph/ceph/pull/10027
(Sorry for brief top-post. Im on mobile.)
On Jul 4, 2016 9:16 PM, "Goncalo Borges" <goncalo.bor...@sydney.edu.au>
wrote:
Dear All...
We have recently migrated all our ceph infrastructure from 9.2.0 to
10.2.2.
We are currently using ceph-fuse to mount cephfs in a number of clients.
ceph-fuse 10.2.2
lock functions in two specific lines of src/client/Client.cc which, I
imagine, were also not there in 9.2.0 (unless there was a big rewrite of
src/client/Client.cc from 9.2.0 to 10.2.2)
Cheers
Goncalo
On 07/05/2016 02:45 PM, Goncalo Borges wrote:
Hi Brad, Shinobu, Patrick...
Indeed if I run
est practice?
P.S. I've done benchmarking: 3500 can support up to 16 10k-RPM HDD.
My previous email did not go through because of its size. Here goes a
new attempt:
Cheers
Goncalo
--- * ---
Hi Patrick, Brad...
Unfortunately, the other user application breaks ceph-fuse again (it is
a completely different application than in my previous test).
We have tested it in 4
57 python
29312 goncalo 20 0 1594m 83m 19m R 99.9 0.2 1:05.01 python
31979 goncalo 20 0 1595m 82m 19m R 100.2 0.2 1:04.82 python
29333 goncalo 20 0 1594m 82m 19m R 99.5 0.2 1:04.94 python
29609 goncalo 20 0 1594m 82m 19m R 99.9 0.2 1:05.07 pytho
On 07/11/2016 05:04 PM, Goncalo Borges wrote:
Hi John...
Thank you for replying.
Here is the result of the tests you asked but I do not see nothing
abnormal. Actually, your suggestions made me see that:
1) ceph-fuse 9.2.0 is presenting the same behaviour but with less
memory consumption
onn...@redhat.com> wrote:
On Thu, Jul 7, 2016 at 2:01 AM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
Unfortunately, the other user application breaks ceph-fuse again (it is a
completely different application than in my previous test).
We have tested it in 4 machines with 4 cor
chooseleaf_vary_r=5 and
then decrease it slowly to 1?)
- then from firefly to hammer
- then from hammer to jewel
2) or going directly to jewel tunables?
Any advice on how to minimize the data movement?
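One hedged way to estimate the movement before committing, assuming crushtool from the same release: export the crushmap, flip the tunables offline, and diff the test mappings; the rule and replica numbers are examples:
# ceph osd getcrushmap -o crushmap.bin
# crushtool -i crushmap.bin --set-chooseleaf-vary-r 1 -o crushmap.new
# crushtool -i crushmap.bin --test --show-mappings --rule 0 --num-rep 3 > before.txt
# crushtool -i crushmap.new --test --show-mappings --rule 0 --num-rep 3 > after.txt
# diff before.txt after.txt | wc -l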
TIA
Goncalo