Dear Cephers,
I've run into a problem:
my cluster has 3 servers; each server runs 1 mon and 1 osd:
node0 (mon0+osd0), node1 (mon1+osd1), node2 (mon2+osd2)
3 mons (mon0 192.168.202.35/24, mon1 192.168.202.36/24, mon2 192.168.202.37/24)
public network 192.168.2.*/24
cluster network 172.16.2.*/24
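For reference, a minimal [global] section matching the layout above might look like the following (the fsid is a placeholder). Note that the mon addresses listed above sit in 192.168.202.0/24, which is not inside the stated 192.168.2.0/24 public network; that mismatch alone can prevent mons and OSDs from binding or peering:

```ini
[global]
fsid = <cluster-uuid>
mon_initial_members = node0, node1, node2
mon_host = 192.168.202.35,192.168.202.36,192.168.202.37
public_network = 192.168.2.0/24
cluster_network = 172.16.2.0/24
```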
operation:
Has anyone got any suggestions on how to circumvent this problem?
On Fri, 12 May 2017, Harald Hannelius wrote:
I am unable to perform ceph-deploy install {node} on Debian Wheezy.
[server][DEBUG ] Hit http://security.debian.org wheezy/updates/main
Translation-en
[server][DEBUG ] Hit
On Thu, May 18, 2017 at 3:11 AM, Christian Balzer wrote:
> On Wed, 17 May 2017 18:02:06 -0700 Ben Hines wrote:
>
>> Well, ceph journals are of course going away with the imminent bluestore.
> Not really, in many senses.
>
But we should expect far fewer writes to pass through the
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dan
> van der Ster
> Sent: 18 May 2017 09:30
> To: Christian Balzer
> Cc: ceph-users
> Subject: Re: [ceph-users] Changing SSD Landscape
>
> On Thu,
Hi list,
Does it make sense to split an SSD into two parts, one for a writeback
cache-mode pool and the other for a readproxy-mode pool, in order to
benefit from both modes?
Thanks
--
*Guillaume Comte*
06 25 85 02 02 | guillaume.co...@blade-group.com
90 avenue des
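Worth noting: a base pool can carry only one overlay tier, so the two modes can't be layered on the same pool; you'd need two base pools, each with its own cache pool carved from the SSD. A rough sketch with hypothetical pool names (not a tested setup):

```shell
# two SSD partitions -> two cache pools, one per mode (pool names made up)
ceph osd tier add rbd-wb ssd-cache-wb
ceph osd tier cache-mode ssd-cache-wb writeback
ceph osd tier set-overlay rbd-wb ssd-cache-wb

ceph osd tier add rbd-rp ssd-cache-rp
ceph osd tier cache-mode ssd-cache-rp readproxy
ceph osd tier set-overlay rbd-rp ssd-cache-rp
```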
Hmmm, after crashing every 30 seconds for a few days, it's apparently
running normally again. Weird. I was thinking, since it's looking for a
snapshot object, maybe re-enabling snaptrimming and removing all the
snapshots in the pool would remove that object (and the problem)? Never
got to that point.
Let me explore the code for my needs. Thanks, Chris.
Regards,
Shambhu
From: Bitskrieg [mailto:bitskr...@bitskrieg.net]
Sent: Thursday, May 18, 2017 6:40 PM
To: Shambhu Rajak; wes_dilling...@harvard.edu
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Available tools for deploying ceph cluster
Hi,
On 05/18/2017 02:28 PM, Shambhu Rajak wrote:
> I want to deploy ceph-cluster as a backend storage for openstack, so I
> am trying to find the best tool available for deploying ceph cluster.
>
> Few are in my mind:
>
> https://github.com/ceph/ceph-ansible
>
>
> BTW, you asked about Samsung parts earlier. We are running these
> SM863's in a block storage cluster:
>
> Model Family: Samsung based SSDs
> Device Model: SAMSUNG MZ7KM240HAGR-0E005
> Firmware Version: GXM1003Q
>
>
> 177 Wear_Leveling_Count 0x0013   094   094   005   Pre-fail
>
On Thu, May 18, 2017 at 5:53 AM, Harald Hannelius wrote:
>
>
> Has anyone got any suggestions on how to either circumvent this problem?
It is not clear what version of ceph you want to install and on what
server. A full paste of the log (this looks like ceph-deploy output?)
would help.
I'm unfortunately out of ideas at the moment. I think the best chance
of figuring out what is wrong is to repeat it while logs are enabled.
On Wed, May 17, 2017 at 4:51 PM, Stefan Priebe - Profihost AG
wrote:
> No, I can't reproduce it with active logs. Any further ideas?
>
Hi Wes,
Since I want a production deployment, full-fledged management would be
necessary for administration and maintenance; could you suggest something
along these lines?
Thanks,
Shambhu
From: Wes Dillingham [mailto:wes_dilling...@harvard.edu]
Sent: Thursday, May 18, 2017 6:08 PM
To: Shambhu Rajak
Cc:
Hello,
Env: Bluestore, EC 4+1, v11.2.0, RHEL 7.3, 16383 PGs
We did our resiliency testing and found the OSDs keep flapping and the
cluster went into an error state.
What we did:
1. We have a 5-node cluster.
2. Powered off / stopped ceph.target on the last node and waited for
everything to return to normal.
3.
"340 osds: 101 up, 112 in" This is going to be your culprit. Your CRUSH
map is in a really weird state. How many OSDs do you have in this
cluster? When OSDs go down, secondary OSDs take over for it, but when OSDs
get marked out, the cluster re-balances to distribute the data according to
how
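If the goal is planned maintenance rather than a real failure, the usual way to keep down OSDs from being marked out (and so avoid the rebalance described above) is the noout flag; a standard sequence, sketched here:

```shell
# prevent the monitors from marking down OSDs "out" during maintenance
ceph osd set noout
# ... stop ceph.target on the node, do the work, bring it back ...
ceph osd unset noout
```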
Hello,
I'm using cache tiering with CephFS on the latest Ceph Jewel release.
For my use case, I want new writes to go "directly" to the cache pool,
and to use different logic for promotion on reads, e.g. promote only
after 2 reads.
I see that the following settings are available:
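The message is cut off here, but for jewel the relevant pool settings include the hit-set and recency knobs. A hedged sketch, with a hypothetical cache pool name and illustrative values:

```shell
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool hit_set_count 8
ceph osd pool set cachepool hit_set_period 3600
# promote writes on first touch, reads only after the object shows up
# in 2 recent hit sets
ceph osd pool set cachepool min_write_recency_for_promote 0
ceph osd pool set cachepool min_read_recency_for_promote 2
```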
Hi ceph-users,
I want to deploy a ceph cluster as backend storage for OpenStack, so I am
trying to find the best tool available for deploying a ceph cluster.
Few are in my mind:
https://github.com/ceph/ceph-ansible
https://github.com/01org/virtual-storage-manager/wiki/Getting-Started-with-VSM
Is
Hi,
ceph-deploy is a nice tool; it does a lot of the work for you and is not
very hard to understand if you know the basics of ceph.
http://docs.ceph.com/docs/master/rados/deployment/
Regards,
Eugen
Quoting Shambhu Rajak:
Hi ceph-users,
I want to deploy
If you don't want a full-fledged configuration-management approach,
ceph-deploy is your best bet.
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-new/
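For a sense of scale, the whole ceph-deploy flow is only a handful of commands; a sketch with placeholder hostnames and disks (adjust to your nodes):

```shell
ceph-deploy new mon1
ceph-deploy install mon1 osd1 osd2
ceph-deploy mon create-initial
ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb
ceph-deploy admin mon1 osd1 osd2
```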
On Thu, May 18, 2017 at 8:28 AM, Shambhu Rajak wrote:
> Hi ceph-users,
>
>
>
> I want to deploy ceph-cluster