Yes, also. The main reason, though, is a temporarily missing connection from
the Ceph nodes to the package repo - this will take some days or weeks to
reconnect.
The client nodes can connect and update.
Thanks,
Lukáš
On Tue, Feb 14, 2017 at 6:56 PM, Shinobu Kinjo wrote:
> On Wed, Feb
Hi all,
We encountered an issue updating our OSD from Jewel (10.2.5) to Kraken
(11.2.0). OS was RHEL derivative. Prior to this we updated all the
mons to Kraken.
After updating ceph packages I restarted the 60 OSDs on the box with
'systemctl restart ceph-osd.target'. Very soon after the system
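A hedged aside, not confirmed as the cause in this truncated thread: on a
box with 60 OSDs, a mass restart can bump into the default kernel pid/thread
limits. A common precaution on dense OSD nodes looks like:

  # Raise the PID/thread ceiling before mass-restarting many OSDs
  sysctl -w kernel.pid_max=4194303
  systemctl restart ceph-osd.target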
Hello,
Debian stretch is almost stable, so I wanted to deploy Ceph Jewel on it,
but with
ceph-deploy new mynode
I have this error
[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported:
debian 9.0
I know I can cheat by changing /etc/debian_version to 8.0, but I'm sure
there is a
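For reference, a sketch of the "cheat" mentioned above, assuming nothing
beyond what the author describes (back the file up first and restore it
afterwards):

  # Temporarily report a release that ceph-deploy recognizes
  cp /etc/debian_version /etc/debian_version.bak
  echo "8.0" > /etc/debian_version
  ceph-deploy new mynode
  mv /etc/debian_version.bak /etc/debian_version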
On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote:
> Hi,
> I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when
> libvirt mounted RBD disks suspend I/O during snapshot creation until hard
> reboot.
>
> My Ceph cluster (monitors and OSDs) is running
Hi Liuchang0812,
Thank you for replying to the thread.
I have corrected this issue. It was due to incorrect ownership of
/var/lib/ceph: it was owned by root, and I changed it to the ceph user
to resolve this.
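For anyone hitting the same thing, a minimal sketch of that ownership fix
(Jewel runs the daemons as the ceph user):

  # Hand the Ceph state directory back to the ceph user/group
  chown -R ceph:ceph /var/lib/ceph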
However, I am seeing a new error while preparing the OSDs. Any idea
about
On Tue, Feb 14, 2017 at 11:38 AM, Florent B wrote:
> Hi everyone,
>
> I use Ceph-fuse on a Jewel cluster.
>
> I would like to set stripe_unit to 8192 on a directory but it seems not
> possible:
>
> # setfattr -n ceph.dir.layout.stripe_unit -v "8192" maildata1
> setfattr:
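A hedged note on the failure above: CephFS enforces a minimum stripe unit
of 64 KiB (CEPH_MIN_STRIPE_UNIT = 65536), and object_size must be a multiple
of stripe_unit, so 8192 is rejected. A sketch of a value that should be
accepted:

  # 65536 is the smallest valid stripe_unit
  setfattr -n ceph.dir.layout.stripe_unit -v 65536 maildata1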
Hi Sage and all,
We are going to use SSDs for caching in Ceph, but I am not sure which
one is the best solution: bcache, flashcache, or cache tiering?
I found there are some CAUTIONs on ceph.com about cache tiering. Is cache
tiering already production-ready, especially for RBD?
thanx in
Hardware failures are just one possible cause. If you value your data you will
have a backup, preferably going to some sort of removable media that can be
taken offsite, like those things that everybody keeps saying are dead…..what
are they called….oh yeah tapes. :-) An online copy of your data
On Tue, Feb 14, 2017 at 9:33 AM, Oliver Schulz wrote:
> Dear Ceph Experts,
>
> after upgrading our Ceph cluster from Hammer to Jewel,
> the MDS (after a few days) found some metadata damage:
>
> # ceph status
> [...]
> health HEALTH_ERR
>   mds0: Metadata
Hi Cephers,
Although I might be stating an obvious fact: altering the parameter
works as advertised.
The only issue I encountered was that lowering the parameter too much at
once results in some slow requests because the cache pool is "full".
So in short: it works when lowering the parameter bit by
Dear Ceph Experts,
after upgrading our Ceph cluster from Hammer to Jewel,
the MDS (after a few days) found some metadata damage:
# ceph status
[...]
health HEALTH_ERR
mds0: Metadata damage detected
[...]
The output of
# ceph tell mds.0 damage ls
is:
[
{
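A hedged sketch of what typically comes next once the damaged metadata has
been investigated and repaired (<id> is a placeholder for an entry id from
the "damage ls" output, not a value from this thread):

  # Inspect the damage table, then clear an entry once it is resolved
  ceph tell mds.0 damage ls
  ceph tell mds.0 damage rm <id>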
Hi Brad,
I'll be doing so later in the day.
Thanks,
George
From: Brad Hubbard [bhubb...@redhat.com]
Sent: 13 February 2017 22:03
To: Vasilakakos, George (STFC,RAL,SC); Ceph Users
Subject: Re: [ceph-users] PG stuck peering after host reboot
I'd suggest
Hi.
We use Ceph RADOS Gateway with S3, and we are very happy :).
Each administrator is responsible for their own service.
We use the following S3 clients:
Linux - s3cmd, duply;
Windows - cloudberry.
P.S. 500 TB of data, 3x replication, 3 datacenters.
Best regards, Fasikhov Irek Nurgayazovich
Mobile: +79229045757
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dongsheng Yang
> Sent: 14 February 2017 09:01
> To: Sage Weil
> Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
> Subject: [ceph-users] bcache vs
Hello.
Where do monitors keep their keys? I can't see them in 'ceph auth
list'. Are they in that list but I don't have permission to see them (as
admin), or are they stored somewhere else? How can I see that list?
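A hedged answer sketch, based on the usual layout: the mon. secret is not
kept in the cluster's auth database at all, but in each monitor's local
keyring file, so 'ceph auth list' never shows it. Assuming default paths
and cluster name:

  # On a monitor host (default path; adjust for your mon id)
  cat /var/lib/ceph/mon/ceph-$(hostname -s)/keyring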
Hi Cephers,
At University of Zurich we are using Ceph as a storage back-end for our
OpenStack installation. Since we recently reached 70% occupancy
(mostly caused by the cinder pool served by 16384 PGs) we are in the
phase of extending the cluster with additional storage nodes of the same
type
On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason wrote:
> Any idea on how we could increase performances ? as this really impact our
> openstack MOS9.0 Mitaka infrastructure, VM spawning can take up to 15
> minutes...
Have you configured Glance RBD store properly? The
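For reference, a hedged sketch of the usual settings (file and option names
are the common Mitaka-era defaults, not confirmed from this thread); with
show_image_direct_url enabled, volumes can be cloned copy-on-write from
Glance images instead of being downloaded, which is what usually turns
15-minute spawns into seconds:

  # /etc/glance/glance-api.conf
  [glance_store]
  stores = rbd
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf

  [DEFAULT]
  show_image_direct_url = True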
Hi,
> On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason
> wrote:
> > Any idea on how we could increase performances ? as this really impact
> > our openstack MOS9.0 Mitaka infrastructure, VM spawning can take up to
> > 15 minutes...
>
> Have you configured Glance RBD store
Dear list,
I was looking into how to change the owner of a bucket. There is a lack
of documentation on that point (even the man page is not clear), but I
found out how with the help of Orit:
> radosgw-admin metadata get bucket:
> radosgw-admin bucket link --uid= --bucket=
> --bucket-id=
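A filled-in sketch of the same two steps (user and bucket names here are
placeholders, not values from the thread):

  # 1. Read the bucket's current metadata and note its "bucket_id"
  radosgw-admin metadata get bucket:mybucket
  # 2. Relink the bucket to the new owner using that id
  radosgw-admin bucket link --uid=newuser --bucket=mybucket \
      --bucket-id=<bucket_id_from_step_1>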
> Op 14 februari 2017 om 11:14 schreef Nick Fisk :
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Dongsheng Yang
> > Sent: 14 February 2017 09:01
> > To: Sage Weil
> > Cc:
Thanks everyone for the suggestions; playing with all three of the
tuning knobs mentioned has greatly increased the number of client
connections an instance can deal with. We're still experimenting to
find the max values to saturate our hardware.
With values as below we'd see something around 50
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: 14 February 2017 16:25
> To: Dongsheng Yang ; n...@fisk.me.uk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] bcache vs flashcache vs cache tiering
>
>
> > Op 14
We are running flashcache in production for RBD behind OSDs for over two
years now. We had a few issues with it:
• one rare kernel livelock between XFS and flashcache that took some effort to
track down and fix (we could release patched flashcache if there is interest)
• careful tuning of skip
Hi,
I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when
libvirt mounted RBD disks suspend I/O during snapshot creation until hard
reboot.
My Ceph cluster (monitors and OSDs) is running v0.94.3, while clients
(OpenStack/KVM computes) run v0.94.5. Can I still update the client
Hi,
according to the Kraken release notes and documentation, AsyncMessenger now
also supports RDMA and DPDK.
Is anyone already using async-ms with RDMA or DPDK and might be able to
tell us something about real-world performance gains and stability?
Best, Bastian
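For reference, a hedged sketch of how the experimental RDMA backend is
enabled per the Kraken-era options (the device name is a placeholder for
your HCA):

  # ceph.conf
  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx4_0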
On Tue, Feb 14, 2017 at 5:27 AM, Tyanko Aleksiev
wrote:
> Hi Cephers,
>
> At University of Zurich we are using Ceph as a storage back-end for our
> OpenStack installation. Since we recently reached 70% of occupancy
> (mostly caused by the cinder pool served by 16384PGs)
On Tue, Feb 14, 2017 at 11:38 AM, Benjeman Meekhof wrote:
> Hi all,
>
> We encountered an issue updating our OSD from Jewel (10.2.5) to Kraken
> (11.2.0). OS was RHEL derivative. Prior to this we updated all the
> mons to Kraken.
>
> After updating ceph packages I restarted
On Tue, Feb 14, 2017 at 8:25 AM, Wido den Hollander wrote:
>
>> Op 14 februari 2017 om 11:14 schreef Nick Fisk :
>>
>>
>> > -Original Message-
>> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> > Dongsheng Yang
>> > Sent: 14
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 14 February 2017 21:05
> To: Wido den Hollander
> Cc: Dongsheng Yang ; Nick Fisk
> ; Ceph Users
> Subject: Re:
On Wed, 15 Feb 2017 02:56:22 +0900 Shinobu Kinjo wrote:
> On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote:
> > Hi,
> > I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when
> > libvirt mounted RBD disks suspend I/O during snapshot creation until hard
Hi Khang,
What file system do you use on your OSD nodes?
XFS always uses memory for caching data before writing to disk,
so don't worry; it will always hold as much memory in your system as possible.
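A small illustration of the point above, assuming a typical procps: the
"available" column already discounts reclaimable page cache, so it, not
"free", is the number to watch:

  free -hw
  # For testing only: force the kernel to drop clean caches
  sync; echo 3 > /proc/sys/vm/drop_caches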
2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật
:
> Hi all,
> My ceph
On Tue, Feb 14, 2017 at 11:44 PM, Bastian Rosner
wrote:
>
> Hi,
>
> according to kraken release-notes and documentation, AsyncMessenger now also
> supports RDMA and DPDK.
>
> Is anyone already using async-ms with RDMA or DPDK and might be able to tell
> us
Hi Sam,
Thanks for your reply. I use the BTRFS file system on my OSDs.
Here is the result of "free -hw":

              total   used   free  shared  buffers  cache  available
Mem:           125G    58G    31G    1.2M     3.7M    36G        60G

and "ceph df":
On Tue, 14 Feb 2017 22:42:21 - Nick Fisk wrote:
> > -Original Message-
> > From: Gregory Farnum [mailto:gfar...@redhat.com]
> > Sent: 14 February 2017 21:05
> > To: Wido den Hollander
> > Cc: Dongsheng Yang ; Nick Fisk
> >
Hi all,
My Ceph OSDs are running on Fedora Server 24 with this config:
128GB DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs (8TB
per OSD). My cluster uses the Ceph Object Gateway with the S3 API. Now it
contains 500GB of data but is already using > 50GB of RAM. I'm worried my
OSDs will die if I
I've been testing flashcache, bcache, dm-cache and even dm-writeboost in
production ceph clusters.
The only one that is working fine and gives the speed we need is bcache. All
others failed with slow speeds or high latencies.
Stefan
Excuse my typo sent from my mobile phone.
> Am 15.02.2017 um