On Wed, Mar 23, 2016 at 01:22:45AM +0100, Loic Dachary wrote:
> On 23/03/2016 01:12, Chris Dunlop wrote:
>> On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
>>> On 23/03/2016 00:39, Chris Dunlop wrote:
"The old OS'es" that were being supported up to v0.94.5 includes debian
Hi, everyone,
In my ceph cluster, I first deployed Ceph using ceph-deploy as the root user and
didn't set up anything else after the deployment.
To my surprise, the cluster auto-starts after my host reboots: everything is
fine, the mon is running, and the OSD devices are mounted automatically and also
running properly.
Hello Gonçalo,
Thanks for the reminder. I was just setting up the cluster for testing, so don't
worry, I can just remove the pool. I've also learnt that since the replication
count and the number of pools are related to pg_num, I'll consider them carefully
before deploying any data.
> On Mar 23, 2016,
Hi Zhang,
From the ceph health detail output, I suggest the NTP servers should be synchronized.
Can you share the crush map output?
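For reference, a minimal sketch of the commands usually involved (assuming ntpd
is the time daemon in use):

  ntpq -p                 # check offset/jitter against the NTP peers on every node
  ceph osd tree           # quick view of the crush hierarchy and OSD up/in state
  ceph osd crush dump     # full crush map as JSON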
2016-03-22 18:28 GMT+08:00 Zhang Qiang :
> Hi Reddy,
> It's over a thousand lines, I pasted it on gist:
>
Hello,
On Tue, 22 Mar 2016 12:28:22 -0400 Maran wrote:
> Hey guys,
>
>> I'm trying to wrap my head around Ceph Cache Tiering to figure out whether
>> what I want is achievable.
>>
>> My cluster consists of 6 OSD nodes with normal HDDs and one cache tier of
>> SSDs.
>
One cache tier being what, one node?
On 23/03/2016 01:12, Chris Dunlop wrote:
> Hi Loïc,
>
> On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
>> On 23/03/2016 00:39, Chris Dunlop wrote:
>>> "The old OS'es" that were being supported up to v0.94.5 includes debian
>>> wheezy. It would be quite surprising and unexpected
Hi Loïc,
On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
> On 23/03/2016 00:39, Chris Dunlop wrote:
>> "The old OS'es" that were being supported up to v0.94.5 includes debian
>> wheezy. It would be quite surprising and unexpected to drop support for an
>> OS in the middle of a
Hi Chris,
On 23/03/2016 00:39, Chris Dunlop wrote:
> Hi Loïc,
>
> On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote:
>> On 22/03/2016 23:49, Chris Dunlop wrote:
>>> Hi Stable Release Team for v0.94,
>>>
>>> Let's try again... Any news on a release of v0.94.6 for debian wheezy
>>>
Hi Loïc,
On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote:
> On 22/03/2016 23:49, Chris Dunlop wrote:
>> Hi Stable Release Team for v0.94,
>>
>> Let's try again... Any news on a release of v0.94.6 for debian wheezy
>> (bpo70)?
>
> I don't think publishing a debian wheezy backport
On 22/03/2016 23:49, Chris Dunlop wrote:
> Hi Stable Release Team for v0.94,
>
> Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)?
I don't think publishing a debian wheezy backport for v0.94.6 is planned. Maybe
it's a good opportunity to initiate a community
Hi Zhang...
If I can add some more info: changing the number of PGs is a heavy operation, and
as far as I know, you should NEVER decrease the PG count. From the notes in pgcalc
(http://ceph.com/pgcalc/):
"It's also important to know that the PG count can be increased, but NEVER
decreased without destroying /
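For illustration only, an increase (never a decrease) would look roughly like
this, assuming a pool named 'rbd':

  ceph osd pool set rbd pg_num 512     # raise the placement group count
  ceph osd pool set rbd pgp_num 512    # then raise pgp_num to match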
Hi Stable Release Team for v0.94,
Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)?
Cheers,
Chris
On Thu, Mar 17, 2016 at 12:43:15PM +1100, Chris Dunlop wrote:
> Hi Chen,
>
> On Thu, Mar 17, 2016 at 12:40:28AM +, Chen, Xiaoxi wrote:
>> It’s already there, in
I was able to get this back to HEALTH_OK by doing the following:
1. Allow ceph-objectstore-tool to run over a weekend attempting to export
the PG. Looking at timestamps, it took approximately 6 hours to complete
successfully.
2. Import the PG into an unused OSD and start it up+out.
3. Allow the cluster
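For anyone following along, the export/import steps above look roughly like this
with ceph-objectstore-tool; the paths, OSD ids and pgid are placeholders, and the
OSD in question must be stopped first:

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --pgid 1.28 --op export --file /tmp/pg.1.28.export

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-45 \
      --journal-path /var/lib/ceph/osd/ceph-45/journal \
      --op import --file /tmp/pg.1.28.export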
On Tue, Mar 22, 2016 at 9:37 AM, John Spray wrote:
> On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote:
>> Hello All,
>>
>> I have experience using Lustre but I am new to the Ceph world, I have some
>> questions to the Ceph users out there.
>>
>> I am
I got it: the suggested pg_num is the total, so I need to divide it by the
number of replicas.
Thanks Oliver, your answer is very thorough and helpful!
On 23 March 2016 at 02:19, Oliver Dzombic wrote:
> Hi Zhang,
>
> yeah, I saw your answer already.
>
> First of all,
On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov wrote:
> Hello!
>
> I have a 3-node cluster running ceph version 0.94.6
> (e832001feaf8c176593e0325c8298e3f16dfb403)
> on Ubuntu 14.04. When scrubbing I get this error:
>
> -9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op
Hi Zhang,
yeah, I saw your answer already.
First of all, you should make sure that there is no clock skew.
This can cause some side effects.
According to
http://docs.ceph.com/docs/master/rados/operations/placement-groups/
you have to calculate:
Total PGs = (OSDs * 100) / pool size
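A quick worked example with the numbers from this thread (20 OSDs, pool size 2),
just as a sanity check:

  echo $(( 20 * 100 / 2 ))    # = 1000, then round up to the next power of two: 1024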
Hello All,
I’m experiencing some issues installing Teuthology on CentOS 6.5.
I’ve tried installing it in a number of ways:
* Within a python virtual environment
* Using "pip install teuthology" directly
The installation fails in both cases.
a) In a python virtual environment (using
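For comparison, the virtualenv route would normally look roughly like this (the
CentOS 6 package names below are my assumption, not from the original report):

  sudo yum install -y python-virtualenv gcc python-devel libffi-devel openssl-devel
  virtualenv ./teuthology-venv
  source ./teuthology-venv/bin/activate
  pip install teuthology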
On Tue, Mar 22, 2016 at 1:12 PM, Xusangdi wrote:
> Hi Matt & Cephers,
>
> I am looking for advice on setting up a file system based on Ceph. As CephFS
> is not yet production ready (or have I missed some breakthroughs?), the new
> NFS on RadosGW should be a promising alternative,
On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote:
> Hello All,
>
> I have experience using Lustre but I am new to the Ceph world, I have some
> questions to the Ceph users out there.
>
> I am thinking about deploying a Ceph storage cluster that lives in multiple
> locations
On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman wrote:
>> Hi Jason,
>>
>> Le 22/03/2016 14:12, Jason Dillaman a écrit :
>> >
>> > We actually recommend that OpenStack be configured to use writeback cache
>> > [1]. If the guest OS is properly issuing flush requests, the cache
> Hi Jason,
>
> Le 22/03/2016 14:12, Jason Dillaman a écrit :
> >
> > We actually recommend that OpenStack be configured to use writeback cache
> > [1]. If the guest OS is properly issuing flush requests, the cache will
> > still provide crash-consistency. By default, the cache will
Hi Jason,
Le 22/03/2016 14:12, Jason Dillaman a écrit :
We actually recommend that OpenStack be configured to use writeback cache [1].
If the guest OS is properly issuing flush requests, the cache will still
provide crash-consistency. By default, the cache will automatically start up
in
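For reference, the client-side options usually involved look roughly like this in
ceph.conf (a sketch; see the RBD cache documentation referenced as [1]):

  [client]
      rbd cache = true
      rbd cache writethrough until flush = true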
Hello All,
I have experience using Lustre but I am new to the Ceph world, I have some
questions to the Ceph users out there.
I am thinking about deploying a Ceph storage cluster that lives in multiple
locations, "Building A" and "Building B". This cluster will be comprised of two
Dell servers
Hi Xusangdi,
NFS on RGW is not intended as an alternative to CephFS. The basic idea is to
expose the S3 namespace using Amazon's prefix+delimiter convention (delimiter
currently limited to '/'). We use opens for atomicity, which implies NFSv4 (or
4.1). In addition to limitations by design,
> > I've been looking on the internet regarding two settings which might
> > influence
> > performance with librbd.
> >
> > When attaching a disk with Qemu you can set a few things:
> > - cache
> > - aio
> >
> > The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
> > 'none'.
Hi Wido,
Le 22/03/2016 13:52, Wido den Hollander a écrit :
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
'none'. Is that still the recommended value
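For context, in libvirt these two knobs end up as attributes on the disk's driver
element; a rough sketch only (monitor hosts and auth details omitted, pool and
image names are placeholders):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback' io='threads'/>
    <source protocol='rbd' name='libvirt-pool/my-image'/>
    <target dev='vda' bus='virtio'/>
  </disk>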
Hi Matt & Cephers,
I am looking for advice on setting up a file system based on Ceph. As CephFS is
not yet production ready (or have I missed some breakthroughs?), the new NFS on
RadosGW should be a promising alternative, especially for large files, which is
what we are most interested in. However,
Hi Desmond,
this seems like a lot of work for 90 OSDs, and possibly prone to a few typing
mistakes.
Every disk change needs extra editing too.
This weighting was done automatically in former versions.
Do you know why and where this changed, or did I go wrong at some point?
Markus
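For what it's worth, the manual step being discussed is roughly this (osd id and
weight are placeholders; the weight is conventionally the disk size in TB), and,
if I recall correctly, the 'osd crush update on start' config option controls the
automatic behaviour:

  ceph osd crush reweight osd.12 1.82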
On 21.03.2016 at
Hi Reddy,
It's over a thousand lines, I pasted it on gist:
https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy
wrote:
> Hi,
> Can you please share the "ceph health detail" output?
>
> Thanks
> Swami
>
> On Tue, Mar 22,
Hi Zhang,
are you sure that all your 20 OSDs are up and in?
Please provide the complete output of ceph -s, or better, with the detail flag.
Thank you :-)
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG (
Hi,
Can you please share the "ceph health detail" output?
Thanks
Swami
On Tue, Mar 22, 2016 at 3:32 PM, Zhang Qiang wrote:
> Hi all,
>
> I have 20 OSDs and 1 pool, and, as recommended by the
> doc(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
>
Hi all,
I have 20 OSDs and 1 pool, and, as recommended by the doc
(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
configured pg_num and pgp_num to 4096, size 2, min size 1.
But ceph -s shows:
HEALTH_WARN
534 pgs degraded
551 pgs stuck unclean
534 pgs undersized
too many
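To see the values currently in effect (the pool name is a placeholder):

  ceph osd pool get <pool> pg_num
  ceph osd pool get <pool> size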
Hi Cephers,
I didn't notice that the user had already changed from root to ceph. After
changing the directory permissions, the problem is fixed. Thank you all.
Best wishes,
Mika
2016-03-22 16:50 GMT+08:00 Mika c :
> Hi Cephers,
> Setting of "rgw frontends =
>
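Regarding the directory permissions fix mentioned above: on systems where the
daemons run as the ceph user, the usual adjustment is something like the
following (assuming the default data path):

  chown -R ceph:ceph /var/lib/ceph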
Hello!
I have a 3-node cluster running ceph version 0.94.6
(e832001feaf8c176593e0325c8298e3f16dfb403)
on Ubuntu 14.04. When scrubbing I get this error:
-9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op tracker -- seq: 48045,
time: 2016-03-21 17:36:09.046984, event: all_read, op:
On 20-Mar-16 23:23, Schlacta, Christ wrote:
What do you use as an interconnect between your OSDs and your clients?
Two Mellanox 10Gb SFP NICs, dual port each = 4 x 10Gbit/s ports on each
server.
On the servers, each pair of ports is bonded, so we have 2 bonds: one for the
Cluster net and one for the Storage net.
Client servers