Hi, has anyone tried this stack? Maybe someone can provide some
feedback about it?
Thanks.
P.S.
AFAIK Ceph RBD + LIO currently lacks iSCSI HA support, so I'm thinking about NFS.
UPD1:
I did some tests and got strange behavior:
Every few minutes, I/O from the NFS client to the NFS proxy just stops, no
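For context, a minimal sketch of the kind of RBD-backed NFS proxy being
discussed; the pool, image, and export names below are made up for
illustration:

  # On the proxy host: map an RBD image and export it over NFS
  rbd create mypool/nfsvol --size 102400   # 100 GB image, hypothetical pool/image
  rbd map mypool/nfsvol                    # appears as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /export/nfsvol
  echo '/export/nfsvol *(rw,sync,no_subtree_check)' >> /etc/exports
  exportfs -ra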
Hello,
On Mon, 13 Mar 2017 21:32:45 +0000 James Okken wrote:
> Hi all,
>
> I have a 3 storage node openstack setup using CEPH.
> I believe that means I have 3 OSDs, as each storage node has one of 3 fiber
> channel storage locations mounted.
You use "believe" a lot, so I'm assuming you're
Hello,
On Mon, 13 Mar 2017 11:25:15 -0400 Ben Erridge wrote:
> On Sun, Mar 12, 2017 at 8:24 PM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Sun, 12 Mar 2017 19:37:16 -0400 Ben Erridge wrote:
> >
> > > I am testing attached volume storage on our openstack cluster which
On Mon, Mar 13, 2017 at 8:15 PM, Andras Pataki wrote:
> Dear Cephers,
>
> We're using the ceph file system with the fuse client, and lately some of
> our processes are getting stuck seemingly waiting for fuse operations. At
> the same time, the cluster is healthy,
Hi all,
I have a 3 storage node openstack setup using CEPH.
I believe that means I have 3 OSDs, as each storage node has one of 3 fiber
channel storage locations mounted.
The storage media behind each node is actually a single 7TB HP fiber channel MSA
array.
The best performance configuration
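A quick way to confirm the OSD count instead of guessing, using standard
commands:

  ceph osd tree   # lists every OSD and the host it lives on
  ceph -s         # health summary, including e.g. "3 osds: 3 up, 3 in"
  ceph df         # raw and per-pool capacity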
On Mon, Mar 13, 2017 at 3:28 AM, Dan van der Ster wrote:
> Hi John,
>
> Last week we updated our prod CephFS cluster to 10.2.6 (clients and
> server side), and for the first time today we've got an object info
> size mismatch:
>
> I found this ticket you created in the
On 03/13/2017 11:54 AM, John Spray wrote:
On Mon, Mar 13, 2017 at 2:13 PM, Kent Borg wrote:
We have a CephFS cluster stuck in read-only mode and it looks like we'll be
following the Disaster Recovery steps. Is it a good idea to first make a RADOS
snapshot of the CephFS pools? Or are there
Dear Cephers,
We're using the ceph file system with the fuse client, and lately some
of our processes are getting stuck seemingly waiting for fuse
operations. At the same time, the cluster is healthy, no slow requests,
all OSDs up and running, and both the MDS and the fuse client think that
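A few admin-socket commands that are often useful for seeing where such
requests are stuck (a sketch; the MDS name and client socket path below are
the usual defaults and may differ on your systems):

  # On the MDS host: operations stuck in flight
  ceph daemon mds.$(hostname -s) dump_ops_in_flight
  # On the client host: outstanding requests from ceph-fuse
  ceph daemon /var/run/ceph/ceph-client.admin.asok mds_requests
  ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests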
Of course!
[root@cephrgw01 ~]# ps -ef | grep rgw
root       766     1  0 mar09 ?        00:00:00 /sbin/dhclient -H cephrgw01
-q -lf /var/lib/dhclient/dhclient--eth0.lease -pf
/var/run/dhclient-eth0.pid eth0
ceph       895     1  0 mar09 ?        00:14:39 /usr/bin/radosgw -f
--cluster ceph --name
Thank you Iban.
Can you please also send me the output of: ps -ef | grep rgw
Many Thanks.
On Mar 13, 2017 7:32 PM, "Iban Cabrillo" wrote:
> HI Yair,
> This is my conf:
>
> [client.rgw.cephrgw]
> host = cephrgw01
> rgw_frontends = "civetweb port=8080s
HI Yair,
This is my conf:
[client.rgw.cephrgw]
host = cephrgw01
rgw_frontends = "civetweb port=8080s
ssl_certificate=/etc/pki/tls/cephrgw01.crt"
rgw_zone = RegionOne
keyring = /etc/ceph/ceph.client.rgw.cephrgw.keyring
log_file = /var/log/ceph/client.rgw.cephrgw.log
rgw_keystone_url =
On Mon, Mar 13, 2017 at 9:10 AM Ken Dreyer wrote:
> At a general level, is there any way we could update the documentation
> automatically whenever src/common/config_opts.h changes?
GitHub PR hooks that block any change to the file which doesn't include a
documentation
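As a rough sketch of what such a hook could check (nothing official, just the
idea):

  #!/bin/sh
  # Hypothetical pre-merge check: a change to config_opts.h must come
  # with a change somewhere under doc/.
  changed=$(git diff --name-only origin/master...HEAD)
  if echo "$changed" | grep -q 'src/common/config_opts.h'; then
    echo "$changed" | grep -q '^doc/' || {
      echo "config_opts.h changed without a matching doc/ update" >&2
      exit 1
    }
  fi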
But per the doc, the client stanza should include
client.radosgw.instance_name
[client.rgw.ceph-rgw-02]
host = ceph-rgw-02
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw_frontends = "civetweb port=8080"
"For example, if your node name
At a general level, is there any way we could update the documentation
automatically whenever src/common/config_opts.h changes?
- Ken
On Tue, Mar 7, 2017 at 2:44 AM, Nick Fisk wrote:
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
On Mon, Mar 13, 2017 at 2:13 PM, Kent Borg wrote:
> We have a CephFS cluster stuck in read-only mode and it looks like we'll be
> following the Disaster Recovery steps. Is it a good idea to first make a RADOS
> snapshot of the CephFS pools? Or are there ways that could make matters worse?
On 03/13/2017 04:06 PM, Yair Magnezi wrote:
Thank you Abhishek
But still ...
root@ceph-rgw-02:/var/log/ceph# ps -ef | grep rgw
ceph      1332     1  1 14:59 ?        00:00:00 /usr/bin/radosgw
--cluster=ceph --id *rgw.ceph-rgw-02* -f --setuser ceph --setgroup ceph
On Sun, Mar 12, 2017 at 8:24 PM, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 12 Mar 2017 19:37:16 -0400 Ben Erridge wrote:
>
> > I am testing attached volume storage on our openstack cluster which uses
> > ceph for block storage.
> > our Ceph nodes have large SSDs for their
Thank you Abhishek
But still ...
root@ceph-rgw-02:/var/log/ceph# ps -ef | grep rgw
ceph      1332     1  1 14:59 ?        00:00:00 /usr/bin/radosgw
--cluster=ceph --id *rgw.ceph-rgw-02* -f --setuser ceph --setgroup ceph
root@ceph-rgw-02:/var/log/ceph# cat /etc/ceph/ceph.conf
[global]
fsid =
On Mon, Mar 13, 2017 at 1:35 PM, John Spray wrote:
> On Mon, Mar 13, 2017 at 10:28 AM, Dan van der Ster
> wrote:
>> Hi John,
>>
>> Last week we updated our prod CephFS cluster to 10.2.6 (clients and
>> server side), and for the first time today we've got
On 03/13/2017 03:26 PM, Yair Magnezi wrote:
Hello Wido
yes, this is my /etc/ceph/ceph.conf
and yes, radosgw.ceph-rgw-02 is the running instance.
root@ceph-rgw-02:/var/log/ceph# ps -ef | grep -i rgw
ceph     17226     1  0 14:02 ?        00:00:01 /usr/bin/radosgw
--cluster=ceph --id
Hello Wido
yes, this is my /etc/ceph/ceph.conf
and yes, radosgw.ceph-rgw-02 is the running instance.
root@ceph-rgw-02:/var/log/ceph# ps -ef | grep -i rgw
ceph     17226     1  0 14:02 ?        00:00:01 /usr/bin/radosgw
--cluster=ceph --id rgw.ceph-rgw-02 -f --setuser ceph --setgroup ceph
We have a CephFS cluster stuck in read-only mode and it looks like we'll be
following the Disaster Recovery steps. Is it a good idea to first make a
RADOS snapshot of the CephFS pools? Or are there ways that could make matters
worse?
Thanks,
-kb
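For what it's worth, taking the pool snapshot itself is a one-liner; whether
it is safe or useful on CephFS pools is exactly the question being asked here
(pool names below are the common defaults, adjust to yours):

  ceph osd pool mksnap cephfs_metadata before-dr   # snapshot name is arbitrary
  ceph osd pool mksnap cephfs_data before-dr
  # later, to drop them:
  ceph osd pool rmsnap cephfs_metadata before-dr
  ceph osd pool rmsnap cephfs_data before-dr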
> On 13 March 2017 at 15:03, Yair Magnezi wrote:
>
>
> Hello Cephers.
>
> I'm trying to modify the civetweb default port to 80, but for some
> reason it insists on listening on the default 7480 port.
>
> My configuration is quite simple (experimental) and
Hello Cephers.
I'm trying to modify the civetweb default port to 80, but for some
reason it insists on listening on the default 7480 port.
My configuration is quite simple (experimental) and looks like this:
[global]
fsid = 00c167db-aea1-41b4-903b-69b0c86b6a0f
mon_initial_members =
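For comparison, a stanza that does move civetweb off 7480, assuming the
daemon really is started with --id rgw.ceph-rgw-02 (and note that binding to
a port below 1024 needs root or the right capability):

  [client.rgw.ceph-rgw-02]
  host = ceph-rgw-02
  rgw_frontends = "civetweb port=80"
  # This section is only read if its name matches the daemon's --id
  # (radosgw --cluster=ceph --id rgw.ceph-rgw-02); otherwise civetweb
  # stays on the default 7480.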
Thanks for the detailed upgrade report.
We have another scenario: We have allready upgraded to jewel 10.2.6 but
we are still running all our monitors and osd daemons as root using the
setuser match path directive.
What would be the recommended way to have all daemons running as ceph:ceph user
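The usual recipe, sketched per the Jewel release notes (default paths
assumed; the chown can take a long time on large OSDs, so go one node at a
time):

  ceph osd set noout
  systemctl stop ceph-osd@<id>
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id> /var/log/ceph
  # remove the "setuser match path" line from ceph.conf, then:
  systemctl start ceph-osd@<id>
  # repeat per OSD/node; same chown idea for /var/lib/ceph/mon on monitors
  ceph osd unset noout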
On Mon, Mar 13, 2017 at 10:28 AM, Dan van der Ster wrote:
> Hi John,
>
> Last week we updated our prod CephFS cluster to 10.2.6 (clients and
> server side), and for the first time today we've got an object info
> size mismatch:
>
> I found this ticket you created in the
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Florian Haas
> Sent: 13 March 2017 10:09
> To: Dan van der Ster
> Cc: ceph-users
> Subject: Re: [ceph-users] osd_disk_thread_ioprio_priority
On 03/13/2017 11:07 AM, Dan van der Ster wrote:
On Sat, Mar 11, 2017 at 12:21 PM, wrote:
The next and biggest problem we encountered had to do with the CRC errors on
the OSD map. On every map update, the OSDs that were not upgraded yet got that
CRC error and
Hi John,
Last week we updated our prod CephFS cluster to 10.2.6 (clients and
server side), and for the first time today we've got an object info
size mismatch:
I found this ticket you created in the tracker, which is why I've
emailed you: http://tracker.ceph.com/issues/18240
Here's the detail
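For reference, the commands typically used on Jewel to dig into such a
mismatch (the PG id below is a placeholder):

  ceph health detail                                      # names the inconsistent PG
  rados list-inconsistent-obj 1.23 --format=json-pretty   # 1.23 = affected PG
  # only once the mismatch is understood (or per advice on the tracker):
  ceph pg repair 1.23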
On 08.03.17 at 02:47, Christian Balzer wrote:
>
> Hello,
>
> as Adrian pointed out, this is not really Ceph specific.
>
> That being said, there are literally dozens of threads in this ML about
> this issue and speeding up things in general, use your google-foo.
Yeah, about that: the search
On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster wrote:
>> I'm sorry, I may have worded that in a manner that's easy to
>> misunderstand. I generally *never* suggest that people use CFQ on
>> reasonably decent I/O hardware, and thus have never come across any
>> need to set
On Sat, Mar 11, 2017 at 12:21 PM, wrote:
>
> The next and biggest problem we encountered had to do with the CRC errors on
> the OSD map. On every map update, the OSDs that were not upgraded yet got
> that CRC error and asked the monitor for a full OSD map instead of
On Mon, Mar 13, 2017 at 10:35 AM, Florian Haas wrote:
> On Sun, Mar 12, 2017 at 9:07 PM, Laszlo Budai wrote:
>> Hi Florian,
>>
>> thank you for your answer.
>>
>> We have already set the IO scheduler to cfq in order to be able to lower the
>>
On Sun, Mar 12, 2017 at 9:07 PM, Laszlo Budai wrote:
> Hi Florian,
>
> thank you for your answer.
>
> We have already set the IO scheduler to cfq in order to be able to lower the
> priority of the scrub operations.
> My problem is that I've found different values set for
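For reference, the combination being discussed looks roughly like this
(pre-Luminous option names; sda is a placeholder for the OSD data disk):

  # cfq must be the active scheduler for io priorities to take effect
  echo cfq > /sys/block/sda/queue/scheduler
  # then lower the priority of the OSD disk thread (scrubbing etc.)
  ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'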
Hi,
>> Currently I have the noout and nodown flags set while doing the maintenance
>> work.
You only need noout to avoid rebalancing; see the documentation:
http://docs.ceph.com/docs/kraken/rados/troubleshooting/troubleshooting-osd/
"STOPPING W/OUT REBALANCING".
Your clients are hanging because of
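For the record, the minimal flag dance for planned maintenance:

  ceph osd set noout    # stopped OSDs are marked down but not out, no rebalancing
  # ... do the maintenance work ...
  ceph osd unset noout  # afterwards, so failures are handled normally again
  # leave nodown unset: with nodown, ops aimed at a stopped OSD are not
  # redirected, which is how clients end up hanging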