> [...] which can take somewhat longer.
>
>
> On 10.10.2018, at 13:57, Daniel Carrasco wrote:
>
> Thanks for your response.
>
> I'll look in that direction.
> I also need fast recovery in case the MDS dies, so are standby MDS
> recommended, or is recovery fast enough without one?
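For reference, on Luminous a standby-replay MDS follows the active MDS's
journal and makes failover noticeably faster than a cold standby. A minimal
sketch, assuming the pre-Nautilus per-daemon options and a hypothetical
daemon name "fs02":

  [mds.fs02]
  mds_standby_replay = true
  mds_standby_for_rank = 0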
[...], Daniel Carrasco wrote:
>
>
>- Which is the best configuration to avoid those MDS problems?
>
> Single active MDS with lots of RAM.
>
>
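A rough sketch of that advice on a Luminous cluster; the filesystem name
"cephfs", the daemon name "fs01" and the 8 GB cache figure are examples, not
from this thread:

  # keep a single active MDS (on Luminous the extra rank may also need
  # "ceph mds deactivate cephfs:1" after lowering max_mds)
  ceph fs set cephfs max_mds 1
  # give the MDS cache plenty of RAM (value is in bytes)
  ceph tell mds.fs01 injectargs '--mds_cache_memory_limit 8589934592'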
On Mon, 8 Oct 2018 at 5:44, Yan, Zheng wrote:
> On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco
> wrote:
> >
> > I've got several problems on 12.2.8 too. All my standby MDS use a lot
> of memory (while the active one uses normal memory), and I'm receiving a
> lot of slow [...]
Run 'ceph mds repaired fs_name:damaged_rank'.
> >
> > Sorry for all the trouble I caused.
> > Yan, Zheng
> >
>
>
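For anyone finding this in the archives, that command takes fs_name:rank, so
with a hypothetical filesystem called "cephfs" and a damaged rank 0 it would
be:

  ceph mds repaired cephfs:0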
[...] has anyone tried it?
Thanks!!
[...] is the
daemon overhead and the memory fragmentation. At least it is not 13-15 GB like
before.
Greetings!!
2018-07-25 23:16 GMT+02:00 Daniel Carrasco :
> I've changed the configuration, adding your line and changing the MDS
> memory limit to 512 MB, and for now it looks stable (it's at about 3-6% and
>
/ceph/ceph-mds.x.profile..heap
>
>
>
>
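In case it helps, that .heap file is produced by the tcmalloc heap profiler;
the usual workflow looks roughly like this (the daemon name "x" and the log
path are placeholders that depend on the local setup):

  ceph tell mds.x heap start_profiler
  # ...reproduce the memory growth...
  ceph tell mds.x heap dump
  ceph tell mds.x heap stop_profiler
  # profiles are written next to the daemon logs, e.g.
  #   /var/log/ceph/mds.x.profile.0001.heap
  google-pprof --text /usr/bin/ceph-mds /var/log/ceph/mds.x.profile.0001.heap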
> On Tue, Jul 24, 2018 at 3:18 PM Daniel Carrasco
> wrote:
> >
> > This is what I get:
> >
> >
> >
> >
> > On Tue, 24 Jul 2018 at 1:00, Gregory Farnum
> wrote:
> >>
> >> On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly
> wrote:
> >>>
> >>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco
> wrote:
> >>> >
On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly
> wrote:
>
>> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco
>> wrote:
>> > Hi, thanks for your response.
>> >
>> > Clients are about 6, and 4 of them are on standby most of the time.
>> Only two
>> > are active [...]
[...] than 1 GB of RAM just now. Of course I've not rebooted the machine, but
maybe if the daemon was killed for high memory usage then the new
configuration is loaded now.
Greetings!
2018-07-23 21:07 GMT+02:00 Daniel Carrasco :
> Thanks!,
>
> It's true that I've seen a continuous memory [...]
2018-07-19 11:35 GMT+02:00 Daniel Carrasco :
> Hello again,
>
> It is still early to say that it is working [...]
"sum": 0.0,
"avgtime": 0.0
}
},
"throttle-objecter_bytes": {
"val": 0,
"max": 104857600,
"get_started": 0,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_suc
[...] cache size on the MDS?
>
>
> Paul
>
> 2018-07-23 13:16 GMT+02:00 Daniel Carrasco :
>
>> Hello,
>>
>> I've created a Ceph cluster of 3 nodes (3 mons, 3 osd, 3 mgr and 3 mds
>> with two active). This cluster is mainly for serving a webpage (small
>> files [...]
[...] the above problem.
My OS is Ubuntu 16.04 x64 with kernel version 4.13.0-45-generic, and the Ceph
server/client version is 12.2.7.
How can I debug that CPU usage?
Thanks!
MALLOC:   8192   Tcmalloc page size
Call ReleaseFreeMemory() to release freelist memory to the OS (via
madvise()).
Bytes released to the OS take up virtual address space but no physical
memory.
Greetings!!
2018-07-19 10:24 GMT+02:00 Daniel Carrasco :
> Thanks again,
>
> I was trying to use the fuse client instead of the Ubuntu 16.04 kernel module
> to see if maybe it is a client-side problem, but the CPU usage of the fuse
> client is very high (100% and even more on a two-core machine), so I had to
> revert to the kernel client [...]
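For comparison, the two mount paths look roughly like this (monitor address,
user name and secret file are placeholders):

  # kernel client
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client
  ceph-fuse -n client.admin -m 10.0.0.1:6789 /mnt/cephfs

The FUSE client does all its work in userspace, which is usually why it burns
noticeably more CPU than the kernel client.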
> On Wed, Jul 18, 2018 at 3:48 PM Daniel Carrasco
> wrote:
>
>> Hello, thanks for your response.
>>
>> This is what I get:
>>
>> # ceph tell mds.kavehome-mgto-pro-fs01 heap stats
>> 2018-07-19 00:43:46.142560 7f5a7a7fc700 0 client.1318388 ms_handle
[...] slightly-broken
> base systems and find that running the "heap release" (or similar
> wording) command will free up a lot of RAM back to the OS!
> -Greg
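A quick example of the commands Greg is referring to (the daemon name is a
placeholder):

  ceph tell mds.fs01 heap stats     # tcmalloc's view of the heap
  ceph tell mds.fs01 heap release   # return freelist memory to the OS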
>
> On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco
> wrote:
> > Hello,
> >
> > I've created a
ceph version [...]d5) luminous (stable).
Thanks!!
[...] repository when there is a commit to the
> repository from elsewhere. This would be on local storage and remove a lot
> of complexity. All front-end servers would update automatically via git.
>
> If something like that doesn't work, it would seem you have a workaround
> that works for you.
>
>
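A minimal sketch of that idea, assuming plain git over ssh (host names and the
deploy path are invented for illustration): a post-receive hook on the central
repository that updates every front end.

  #!/bin/sh
  # hooks/post-receive on the central repository
  for host in web01 web02 web03; do
      ssh "$host" 'cd /var/www/site && git pull --ff-only' &
  done
  wait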
[...] HA website with an LB in front of them.
>
> I'm biased here a bit, but I don't like to use networked filesystems
> unless nothing else can be worked out or the software using it is 3rd party
> and just doesn't support anything else.
>
> On Thu, Mar 1, 2018 at 9:05 AM Daniel Carrasco wrote:
Greetings!!
2018-02-28 17:11 GMT+01:00 Daniel Carrasco <d.carra...@i2tic.com>:
> Hello,
>
> I've created a Ceph cluster with 3 nodes and a FS to serve a webpage. The
> webpage speed is good enough (near to NFS speed), and it has HA if one FS
> node dies.
> My problem comes when I d[...]
[...] the git repository and
everything starts to work very slowly.
Thanks!!
[...] because there
is no quota on this cluster.
This will lower the requests to the MDS and the CPU usage, right?
Greetings!!
2018-02-22 19:34 GMT+01:00 Patrick Donnelly <pdonn...@redhat.com>:
> On Wed, Feb 21, 2018 at 11:17 PM, Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
> >
[...] for a while.
Greetings!!
On 22 Feb 2018 at 3:59, "Patrick Donnelly" <pdonn...@redhat.com> wrote:
> Hello Daniel,
>
> On Wed, Feb 21, 2018 at 10:26 AM, Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
> > Is it possible to make a better distribution on the MDS [...]
2018-02-21 19:26 GMT+01:00 Daniel Carrasco <d.carra...@i2tic.com>:
> Hello,
>
> I've created a Ceph cluster with 3 nodes to serve files to a high-traffic
> webpage. I've configured two MDS as active and one as standby, but after
> adding the new system to production [...]
[...], or if there are
other side effects.
My last question is whether someone can recommend a good client configuration,
like cache size, and maybe something to lower the metadata servers' load.
Thanks!!
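In case it is useful to someone, these are the client-side knobs usually
mentioned for that (the values shown are the defaults; they only affect
ceph-fuse/libcephfs clients and go in the client's ceph.conf):

  [client]
  client_cache_size = 16384     # inode/dentry cache entries
  client_oc_size = 209715200    # object cache size in bytes (200 MB)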
Finally I've disabled the mon_osd_report_timeout option and it seems to work
fine.
Greetings!
2017-10-17 19:02 GMT+02:00 Daniel Carrasco <d.carra...@i2tic.com>:
> Thanks!!
>
> I'll take a look later.
>
> Anyway, all my Ceph daemons are on the same version on all nodes (I've
[...]ceph-users@lists.ceph.com/msg39886.html
-----Original Message-----
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Tuesday, 17 October 2017 17:49
To: ceph-us...@ceph.com
Subject: [ceph-users] OSD are marked as down after jewel -> luminous
upgrade
Hello,
Today I've decided to upgrade my Ceph cluster [...]
For now I've added the nodown flag to keep all OSDs online, and everything is
working fine, but this is not the best way to do it.
Does someone know how to fix this problem? Maybe this release needs new ports
opened on the firewall?
Thanks!!
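For reference, the flag mentioned above and the ports a Luminous cluster
expects to be reachable (adjust to your firewall tooling):

  ceph osd set nodown     # temporary workaround: stop marking OSDs down
  ceph osd unset nodown   # remove it once the real cause is fixed

  # default ports: mon 6789/tcp, osd/mds/mgr 6800-7300/tcp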
[...] at using through the Internet? RGW,
multi-site, multi-datacenter CRUSH maps, etc.?
On Fri, Jun 30, 2017 at 2:28 PM Daniel Carrasco <d.carra...@i2tic.com>
wrote:
> Hello,
>
> My question is about the stream security of connections between Ceph services.
> I've read that the connection is verified [...]
Hello,
My question is about the stream security of connections between Ceph services.
I've read that the connection is verified by private keys and signed packets,
but my question is whether those packets are ciphered in any way to avoid
packet sniffers, because I want to know if it can be used through the Internet [...]
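As far as I know, cephx authenticates the peers and can sign every message,
but on Luminous the traffic itself is not encrypted, so a VPN or IPsec tunnel
is the usual answer for untrusted networks. The signing options look like this
in ceph.conf:

  [global]
  cephx_require_signatures = true
  cephx_sign_messages = true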
2017-06-15 19:04 GMT+02:00 Daniel Carrasco <d.carra...@i2tic.com>:
> Hello, thanks for the info.
>
> I'll give it a try tomorrow. On one of my tests I got the messages that you
> say (wrongfully marked), but I've lowered other options and now it is fine.
> For now the OSDs are not re[...]
You'll want
> to monitor your cluster for OSDs being marked down for a few seconds before
> marking themselves back up. You can see this in the OSD logs where the OSD
> says it was wrongfully marked down in one line and then the next is where
> it tells the mons it is actually up.
>
>
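A rough way to spot and mitigate that pattern (the log path and the grace
value are examples, not from this thread):

  # find OSDs complaining they were wrongly marked down
  grep -i 'wrongly marked me down' /var/log/ceph/ceph-osd.*.log
  # give heartbeats more slack while investigating
  ceph tell osd.* injectargs '--osd_heartbeat_grace 30'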
I forgot to say that after upgrading the machine RAM to 4 GB, the OSD daemons
have started to use only about 5% (about 200 MB). It's like magic, and now I
have about 3.2 GB of free RAM.
Greetings!!
2017-06-15 15:08 GMT+02:00 Daniel Carrasco <d.carra...@i2tic.com>:
> Finally, the problem was W3To
[...] that caused more peering and backfilling, ... which caused more OSDs to be
> killed by the OOM killer.
>
> On Wed, Jun 14, 2017 at 5:01 PM Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
>
>> It is strange because on my test cluster (three nodes) with two nodes with
>> OSDs [...]
What is your full Ceph configuration? There must be
something not quite right in there.
On Wed, Jun 14, 2017 at 4:26 PM Daniel Carrasco <d.carra...@i2tic.com>
wrote:
>
>
> On 14 Jun 2017 at 10:08 p.m., "David Turner" <drakonst...@gmail.com>
> wrote:
>
> Not j
[...] the same node, I'd
> still say that 2GB is low. The Ceph OSD daemon using 1GB of RAM is not
> surprising, even at that size.
>
> When you say you increased the size of the pools to 3, what did you do to
> the min_size? Is that still set to 2?
>
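For anyone following along, those pool settings can be checked and changed
like this (the pool name is a placeholder; lowering min_size trades safety
for availability):

  ceph osd pool get mypool size
  ceph osd pool get mypool min_size
  ceph osd pool set mypool min_size 1   # example only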
> On Wed, Jun 14, 2017 at 3:
[...] stale file handles up
> the wazoo background
>
>
>
> On Mon, Jun 12, 2017 at 10:41 AM, Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
>
>> 2017-06-12 16:10 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>
>>> I have an incredibly light-wei
[...] because I created the
other MDS after mounting; I've done some tests just before sending this
email and now it looks very fast (I've not noticed the downtime).
Greetings!!
2017-06-12 10:49 GMT+02:00 Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de>:
> Hi,
>
>
> On 06/12/2017 10:31 AM, Daniel Carrasco wrote:
>
>> Hello,
>>
>> I'm very new to Ceph, so maybe this is a noob question.
>>
>>
[...] is stable?, because if I have multiple FS to avoid a
SPOF and I can only deploy one MDS, then we have a new SPOF...
This is to know whether maybe I need to use block device pools instead of
filesystem pools.
Thanks!!! and greetings!!
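A quick way to see how many MDS daemons are active and how many are available
as standbys (the filesystem name is a placeholder):

  ceph mds stat
  ceph fs status cephfs   # Luminous and later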