> mount points gave 20x the throughput of 1 mount point). It wasn't until
> we got up to about 100 concurrent mount points that we capped our
> throughput, but our total throughput just kept going up the more
> ceph-fuse mount points we had for CephFS.
>
> On Tue, Dec 12, 2017 at
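For anyone wanting to try the same thing, a minimal sketch of mounting one CephFS at several points via separate ceph-fuse processes (monitor address, client id and paths are placeholders):
  # each ceph-fuse process is its own client session, so spreading I/O
  # across several mount points adds parallelism on the client side
  mkdir -p /mnt/cephfs-1 /mnt/cephfs-2
  ceph-fuse --id admin -m mon-host:6789 /mnt/cephfs-1
  ceph-fuse --id admin -m mon-host:6789 /mnt/cephfs-2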
Hello, everyone!
We have recently started to use CephFS (Luminous, v12.2.1) from a few LXD
containers. We have mounted it on the host servers and then exposed it in
the LXD containers.
Do you have any recommendations (dos and don'ts) on this way of using
CephFS?
Thank you, in advance!
Kind regards,
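In case it helps others with the same setup, a rough sketch of mounting CephFS on the host and exposing it to an LXD container (container name, paths and the secret file are placeholders):
  # on the host: mount CephFS with the kernel client (a ceph-fuse mount works the same way)
  mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # expose the host mount inside the container as a disk device
  lxc config device add my-container cephfs disk source=/mnt/cephfs path=/mnt/cephfs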
>> configured Gentoo server in 5ms. Please don't be an OS bigot while touting
>> that you should learn the proper tool. Redhat is not the only distribution
>> with a large support structure.
>>
>> The OS is a tool, but you should actually figure out and use the proper
>
nse.
> There are over 10k people working at Red Hat to produce a stable OS. I am
> very pleased with the level of knowledge here and with what Red Hat is
> doing in general.
>
> I just have to finish with: you people working on Ceph are doing a great
> job and are working on a great project!
>
>
recommendation. We'll continue to use 4.10, then.
Thanks, a lot!
Kind regards,
Bogdan
On Fri, Oct 27, 2017 at 8:04 PM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Fri, Oct 27, 2017 at 6:33 PM, Bogdan SOLGA <bogdan.so...@gmail.com>
> wrote:
> > Hello, everyone!
> >
>
ally find the kernel I want and then
> disable RBD features until the RBD is compatible to be mapped by that
> kernel.
>
> On Fri, Oct 27, 2017 at 12:34 PM Bogdan SOLGA <bogdan.so...@gmail.com>
> wrote:
>
>> Hello, everyone!
>>
>> We have recently upgraded our Ceph cluster
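For reference, the feature-disable workaround mentioned above looks roughly like this (pool, image and the exact feature list are placeholders; 'rbd map' normally reports which features the kernel rejects):
  rbd info rbd/my-image                  # list the features currently enabled
  rbd feature disable rbd/my-image object-map fast-diff deep-flatten
  rbd map rbd/my-image                   # retry the mapping with the reduced set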
Hello, everyone!
We have recently upgraded our Ceph cluster to the latest Luminous release. On
one of the servers that we used as Ceph clients we had several freeze
issues, which we empirically linked to concurrent I/O
operations - writing in an LXD container (backed by Ceph) while
has been
> > deprecated.
> >
> > On Oct 15, 2017 6:29 PM, "Bogdan SOLGA" <bogdan.so...@gmail.com> wrote:
> >>
> >> Hello, everyone!
> >>
> >> We are trying to create a custom cluster name using the latest
> ceph-deploy
>
Hello, everyone!
We are trying to create a custom cluster name using the latest ceph-deploy
version (1.5.39), but we keep getting the error:
*'ceph-deploy new: error: subnet must have at least 4 numbers separated by
dots like x.x.x.x/xx, but got: cluster_name'*
We tried to run the new command
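In case it helps someone hitting the same error: the cluster name is not a positional argument of 'new'; older ceph-deploy releases took it through the global --cluster option, roughly as below (host name is a placeholder, and note the remark elsewhere in this thread that custom cluster names have been deprecated):
  ceph-deploy --cluster my_cluster new ceph-mon-01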
Hello, everyone!
We are working on a project which uses RBD images (formatted with XFS) as
home folders for the project's users. The access speed and the overall
reliability have been pretty good, so far.
From the architectural perspective, our main focus is on providing a
seamless user
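For context, a minimal sketch of how such a per-user, XFS-formatted RBD home folder can be provisioned (pool, image, size and user names are illustrative only):
  rbd create --size 10240 rbd/home-alice          # 10 GB image for one user
  rbd map rbd/home-alice
  mkfs.xfs /dev/rbd/rbd/home-alice
  mount /dev/rbd/rbd/home-alice /home/alice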
Thank you, Wido! It was indeed the keyring; the connection works after
setting it.
Thanks a lot for your help!
Bogdan
On Tue, Dec 27, 2016 at 3:43 PM, Wido den Hollander <w...@42on.com> wrote:
>
> > On 27 December 2016 at 14:25, Bogdan SOLGA <
> b
, but to no avail.
Any further recommendations are highly welcome.
Thanks,
Bogdan
On Tue, Dec 27, 2016 at 3:11 PM, Wido den Hollander <w...@42on.com> wrote:
>
> > On 26 December 2016 at 19:24, Bogdan SOLGA <
> bogdan.so...@gmail.com> wrote:
> >
> >
>
Hello, everyone!
We have recently set up a Ceph cluster running on the Hammer release
(v0.94.5), and we would like to know which release is advised for
preparing a production-ready cluster - the LTS version (Hammer) or the
latest stable version (Infernalis)?
The cluster works properly (so
> Is this just one machine or RBD image or is there more?
>
> I'd first create a snapshot and then try running fsck on it, it should
> hopefully tell you if there's a problem in setup or a corruption.
>
> If it's not important data and it's just one instance of this problem t
>> is and tries to reach beyond.
>>
>> Is this just one machine or RBD image or is there more?
>>
>> I'd first create a snapshot and then try running fsck on it, it should
>> hopefully tell you if there's a problem in setup or a corruption.
>>
>> If it's no
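A sketch of the snapshot-and-check approach suggested here, assuming the image carries XFS as elsewhere in this thread (image and snapshot names are placeholders; xfs_repair -n is the read-only XFS counterpart of fsck):
  rbd snap create rbd/my-image@fscheck
  rbd map --read-only rbd/my-image@fscheck
  xfs_repair -n /dev/rbd1      # read-only check; the device name depends on mapping order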
> Jan
>
> On 12 Nov 2015, at 21:46, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
>
> By running rbd resize
> <http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/> and then
> 'xfs_growfs -d' on the filesystem.
>
> Is there a better way to resize an RBD image a
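That is essentially it; a compact version with placeholder names and sizes (note that --size is the new total size, not an increment):
  rbd resize --size 20480 rbd/my-image     # grow the image to 20 GB
  xfs_growfs -d /mnt/my-image              # then grow XFS on the mounted filesystem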
Hello everyone!
We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and
today I noticed a lot of 'attempt to access beyond end of device' messages
in the /var/log/syslog file. They are related to a mounted RBD image, and
have the following format:
*Nov 12 21:06:44 ceph-client-01
" tunables.
>
> --
> Adam
>
> On Sun, Nov 8, 2015 at 11:48 PM, Bogdan SOLGA <bogdan.so...@gmail.com>
> wrote:
> > Hello Greg!
> >
> > Thank you for your advice, first of all!
> >
> > I have tried to adjust the Ceph tunables detailed in this pa
se it shouldn't be the EC pool causing trouble; it's the
> CRUSH tunables also mentioned in that thread. Instructions should be
> available in the docs for using older tunables that are compatible with
> kernel 3.13.
> -Greg
>
>
> On Saturday, November 7, 2015, Bogdan SOLGA <bogd
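The tunables change Greg refers to is a single command on the cluster side; a hedged example (check the kernel/tunables compatibility table in the docs first, since an older profile gives up some of the newer placement behaviour):
  # e.g. the bobtail profile is old enough for a 3.13 kernel client;
  # 'ceph osd crush tunables optimal' can restore the defaults after a kernel upgrade
  ceph osd crush tunables bobtail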
Hello, everyone!
I have recently created a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3 and I
have created an erasure coded pool, which has a caching pool in front of it.
When trying to map RBD images, regardless of whether they are created in the rbd or
in the erasure coded pool, the operation fails with
Hello, everyone!
I just tried to create a new Ceph cluster, using 3 LXC containers as
monitors, and the 'ceph-deploy mon create-initial' command fails for each
of the monitors with an 'initctl: Event failed' error, when running the
following command:
[ceph-mon-01][INFO ] Running command: sudo
Hello!
Retrying this question, as I'm still stuck at the install step due to
the old version issue.
Any help is highly appreciated.
Regards,
Bogdan
On Sat, Oct 31, 2015 at 9:22 AM, Bogdan SOLGA <bogdan.so...@gmail.com>
wrote:
> Hello everyone!
>
> I'm struggling to get a n
Hello everyone!
I'm struggling to get a new Ceph cluster installed, and I'm wondering why
I always get version 0.80.10 installed, regardless of whether I'm
running just 'ceph-deploy install' or 'ceph-deploy install --release
hammer'.
Trying a 'ceph-deploy install -h', on the --release command
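In case someone runs into the same thing: one possible cause is firefly (0.80.x) packages already installed from the distribution repositories, so a hedged retry sequence (node name is a placeholder):
  ceph-deploy purge ceph-node-01           # remove any previously installed ceph packages
  ceph-deploy purgedata ceph-node-01
  ceph-deploy install --release hammer ceph-node-01
  ssh ceph-node-01 ceph --version          # verify which version actually landed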
.
Thanks, again!
Kind regards,
Bogdan
On Mon, Mar 23, 2015 at 12:47 PM, John Spray john.sp...@redhat.com wrote:
On 22/03/2015 08:29, Bogdan SOLGA wrote:
Hello, everyone!
I have a few questions related to the CephFS part of Ceph:
* is it production ready?
Like it says at http://ceph.com/docs
Hello, everyone!
I have a few questions related to the CephFS part of Ceph:
- is it production ready?
- can multiple CephFS filesystems be created on the same cluster? The CephFS
creation page (http://docs.ceph.com/docs/master/cephfs/createfs/) describes
how to create a CephFS using (at least)
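For completeness, the two-pool creation that page describes boils down to the following (pool names and PG counts are only an example):
  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data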
Of
Bogdan SOLGA
Sent: 19 March 2015 20:51
To: ceph-users@lists.ceph.com
Subject: [ceph-users] PGs issue
Hello, everyone!
I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
deploy' page, with the following setup:
• 1 x admin / deploy node;
• 3 x OSD and MON
the output of `ceph osd dump` and `ceph osd tree`
Thanks
Sahana
On Fri, Mar 20, 2015 at 11:47 AM, Bogdan SOLGA bogdan.so...@gmail.com
wrote:
Hello, Nick!
Thank you for your reply! I have tested with setting the number of
replicas to both 2 and 3, by setting the 'osd pool default size = (2|3
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
status has changed to '256 active+clean'.
Is this information clearly stated in the documentation and I have missed
it? If it isn't, I think it would be worth adding, as the
issue might be encountered by
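For anyone hitting the same situation on tiny test disks, the re-weighting boils down to something like this (OSD ids and the weight are placeholders; on 8 GB disks the automatically derived CRUSH weight typically rounds down to 0, which is what keeps the PGs from becoming active+clean):
  ceph osd tree                       # check the current CRUSH weights
  ceph osd crush reweight osd.0 0.05  # give each tiny OSD a small non-zero weight
  ceph osd crush reweight osd.1 0.05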
.
Thanks
Sahana
On Fri, Mar 20, 2015 at 2:08 PM, Bogdan SOLGA bogdan.so...@gmail.com
wrote:
Thank you for your suggestion, Nick! I have re-weighted the OSDs and the
status has changed to '256 active+clean'.
Is this information clearly stated in the documentation and I have
missed it? If it isn't
Hello, everyone!
I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
deploy' page (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/),
with the following setup:
- 1 x admin / deploy node;
- 3 x OSD and MON nodes;
- each OSD node has 2 x 8 GB HDDs;
The