On 01/23/2014 01:23 AM, Karol Kozubal wrote:
This works correctly when I mount at run time. However, I am running into
issues doing this at boot time through fstab with the following command; as
per the documentation on ceph.com, I am passing the cephfs root as part of
the first argument in fstab:
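For illustration only (the client id, cephfs subdirectory, and mount point below are placeholders rather than the poster's actual values, and the option names are worth checking against the fuse.ceph docs for the installed release), an fstab entry for ceph-fuse generally takes a form like this, with _netdev delaying the mount until networking is up:

    # illustrative ceph-fuse fstab entry; id and paths are placeholders
    id=admin,client_mountpoint=/shared/files  /root/shared  fuse.ceph  defaults,_netdev  0  0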
On Thu, Jan 23, 2014 at 8:36 PM, David Francheski (dfranche)
wrote:
> Thanks Yehuda,
>
> I've attached both the apache2 access/error logs, as well as the radosgw
> log file.
> It doesn't look like /var/www/s3gw.fcgi is even being called.
> I put a "touch /tmp/radosgw-started-flag" command in /var/
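For reference (the client name client.radosgw.gateway below is an assumption and has to match the radosgw section in ceph.conf), the s3gw.fcgi wrapper that Apache/FastCGI invokes is normally just a tiny executable shell script along these lines:

    #!/bin/sh
    # minimal s3gw.fcgi sketch: exec the radosgw daemon as the configured gateway client
    exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

If the script is not marked executable (chmod +x), Apache will never run it, which would match the symptom of it apparently not being called.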
>>But there is an official libleveldb version from ceph for wheezy:
>>
>>http://gitbuilder.ceph.com/leveldb-deb-x86_64/
>>
>>http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
>>
>>
>>and
>>
>>http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb-dev_1.9.0-1~bpo70+1_amd64.deb
So I just have a few more questions that are coming to mind. Firstly, I
have OSDs whose underlying filesystems can be... dun dun dun... resized!
If I choose to expand my allocation to Ceph, I can in theory do so by
expanding the quota on the OSDs (I'm using ZFS). Similarly, if the OSD is
und
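As a minimal sketch of that resize idea (the dataset name and quota size here are made up), growing the space available to a ZFS-backed OSD is just a quota change:

    # illustrative only: raise the quota on a hypothetical dataset backing osd.0
    zfs set quota=2T tank/ceph/osd.0
    zfs get quota tank/ceph/osd.0    # confirm the new limit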
On Thu, Jan 23, 2014 at 5:24 PM, Stuart Longland wrote:
> Hi all,
>
> I'm in the process of setting up a storage cluster for production use.
> At the moment I have it in development and am testing the robustness of
> the cluster. One key thing I'm conscious of is single points of
> failure. Thus
On 24/01/14 11:24, Stuart Longland wrote:
> The set up at present is 3 identical nodes:
> - Ubuntu 12.04 LTS AMD64
> - Intel Core i3 3570T CPUs
> - 8GB RAM
> - Dual Gigabit Ethernet (one interface public, one cluster)
> - 60GB Intel 520S SSD
> - 2× Seagate SV35 3TB HDD for OSDs
Ohh, and I should h
Hi all,
I'm in the process of setting up a storage cluster for production use.
At the moment I have it in development and am testing the robustness of
the cluster. One key thing I'm conscious of is single points of
failure. Thus, I'm testing the cluster by simulating node outages (hard
powering-
Hello!
I have a great deal of interest in the ability to version objects in
buckets via the S3 API. Where is this on the roadmap for Ceph?
This is a pretty useful feature during failover scenarios between zones in
a region. For instance, take the example where you have a region with two
zones:
u
On Thu, Jan 23, 2014 at 2:21 PM, Schlacta, Christ wrote:
> What guarantees does ceph place on data integrity? ZFS uses a Merkle tree to
> guarantee the integrity of all data and metadata on disk and will ultimately
> refuse to return "duff" data to an end user consumer.
>
> I know ceph provides so
What guarantees does ceph place on data integrity? ZFS uses a Merkle tree
to guarantee the integrity of all data and metadata on disk and will
ultimately refuse to return "duff" data to an end user consumer.
I know ceph provides some integrity mechanisms and has a scrub feature.
Does it provide fu
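On the scrub feature mentioned above: scrubs can also be triggered by hand. A few illustrative commands (the OSD and PG ids are placeholders):

    ceph pg scrub 2.1f          # scrub a single placement group (metadata consistency)
    ceph pg deep-scrub 2.1f     # deep scrub: read back and verify object data as well
    ceph osd deep-scrub 0       # deep scrub the placement groups on osd.0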
Hi all-
I'm creating some scripted performance testing for my Ceph cluster. The part
relevant to my questions works like this (a rough command sketch follows the list):
1. Create some pools
2. Create and map some RBDs
3. Write-in the RBDs using DD or FIO
4. Run FIO testing on the RBDs (small block random and
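A rough sketch of those steps, with made-up pool and image names, sizes, and fio parameters:

    # 1-2: create a pool, then create and map an RBD image
    ceph osd pool create perfpool 128
    rbd create perfpool/test0 --size 10240
    rbd map perfpool/test0

    # 3: write-in (pre-fill) the image so later reads hit allocated data
    dd if=/dev/zero of=/dev/rbd/perfpool/test0 bs=4M oflag=direct

    # 4: small-block random I/O with fio
    fio --name=randrw --filename=/dev/rbd/perfpool/test0 --direct=1 \
        --ioengine=libaio --rw=randrw --bs=4k --iodepth=32 --runtime=60 --time_based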
Hi,
have a look at
http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes
best regards,
Kurt
Alessandro Brega wrote:
> Good day.
>
> I have a running ceph cluster and would like to change the setting
> "mon osd down out interval = 3600". Is there a way to do this without
>
You can inject settings into a running cluster. To set the mon osd
down out interval for all the OSDs it would be:
ceph tell osd.* injectargs '--mon-osd-down-out-interval 3600'
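Since the option is prefixed with mon, it is primarily read by the monitors, so it may also need to be injected there; an illustrative variant, assuming a monitor named mon.a:

    ceph tell mon.a injectargs '--mon-osd-down-out-interval 3600'

Note that injected values do not survive a daemon restart, so the setting should also be added to ceph.conf to make it permanent.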
On Thu, Jan 23, 2014 at 12:52 PM, Alessandro Brega
wrote:
> Good day.
>
> I have a running ceph cluster and would like
Good day.
I have a running ceph cluster and would like to change the setting "mon osd
down out interval = 3600". Is there a way to do this without having to
restart any ceph services?
Kind regards
Alessandro Brega
But there is an official libleveldb version from ceph for wheezy:
http://gitbuilder.ceph.com/leveldb-deb-x86_64/
http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
and
http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb-dev_1.9.0-1~bpo70+1_amd64.deb
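A minimal sketch of pulling in the backported runtime package on a wheezy host (assuming it can reach gitbuilder.ceph.com directly):

    wget http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
    sudo dpkg -i libleveldb1_1.9.0-1~bpo70+1_amd64.deb   # the -dev package is only needed when rebuilding ceph itself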
On 2
On Thu, Jan 23, 2014 at 6:27 PM, Alexandre DERUMIER wrote:
> Thanks.
>
> Do I need to rebuild the whole set of ceph packages with libleveldb-dev?
>
> Or can I simply backport libleveldb1 and use the ceph packages from the Inktank
> repository?
I had to rebuild ceph because the old one is a static library o
On Thu, Jan 23, 2014 at 8:24 AM, David Francheski (dfranche)
wrote:
> Hi,
>
> I'm using the latest Emperor Ceph release, and trying to bring up the S3
> Object Gateway.
> I have a Ceph cluster deployed on an Ubuntu 13.10 based distribution.
>
> When I attempt to create an S3 bucket using the "boto"
Thanks.
Do I need to rebuild the whole set of ceph packages with libleveldb-dev?
Or can I simply backport libleveldb1 and use the ceph packages from the Inktank
repository?
----- Original Message -----
From: "Sylvain Munaut"
To: "Alexandre DERUMIER"
Cc: "Mark Nelson" , ceph-users@lists.ceph.com
Sent: Thu
Hi,
I'm using the latest Emperor Ceph release, and trying to bring up the S3 Object
Gateway.
I have a Ceph cluster deployed on an Ubuntu 13.10 based distribution.
When I attempt to create an S3 bucket using the "boto" python module, I get the
following error:
boto.exception.S3ResponseError: S
On Thu, Jan 23, 2014 at 8:07 AM, Arne Wiebalck wrote:
>
> On Jan 23, 2014, at 4:18 PM, Gregory Farnum
> wrote:
>
>> On Wed, Jan 22, 2014 at 3:23 PM, Karol Kozubal
>> wrote:
>>> Hi Everyone,
>>>
>>> I have a few questions concerning mounting cephfs with ceph-fuse in fstab at
>>> boot. I am curr
Hi,
> because debian wheezy libleveldb1 is also quite old
> http://packages.debian.org/wheezy/libleveldb1
> libleveldb1 (0+20120530.gitdd0d562-1)
Yes, that version is "buggy" and was causing the issue.
I took the source deb from debian sid and rebuilt it for precise in my case:
http://packages.
On Jan 23, 2014, at 4:18 PM, Gregory Farnum
wrote:
> On Wed, Jan 22, 2014 at 3:23 PM, Karol Kozubal
> wrote:
>> Hi Everyone,
>>
>> I have a few questions concerning mounting cephfs with ceph-fuse in fstab at
>> boot. I am currently successfully mounting cephfs using ceph-fuse on 6
>> client
There are some seldom-used files (namely install ISOs) that I want to throw
in ceph to keep them widely available, but throughput and response times
aren't critical for them, nor is redundancy. Is it possible to throw them
into OSDs on cheap, bulk offline storage, and more importantly, will idle
O
[ Returning list to thread. ]
On Wed, Jan 22, 2014 at 11:37 PM, Dmitry Lysenko wrote:
> On 22.01.2014 13:01, Gregory Farnum wrote:
>
>
>> On Wed, Jan 22, 2014 at 3:23 AM, Dmitry Lysenko wrote:
>> > Good day.
>> >
>> > Some time ago I changed pg_num like this
>> > http://www.sebastien-han.fr/blog/20
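For reference, the pg_num change being discussed is normally a two-step pool setting (the pool name and counts here are placeholders):

    ceph osd pool set rbd pg_num 512     # create the new placement groups
    ceph osd pool set rbd pgp_num 512    # then let data rebalance onto them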
Hi,
Is there already an SELinux policy module for CephFS available?
My understanding is that such a policy should either come with the RPM that needs
it, in this case "ceph" (which is only partly true, as you can mount CephFS
without having the ceph RPM installed), or, probably better, go into a separate RPM
Hi,
I'm trying to deploy Ceph 0.72.2 on Fedora 20, but having some
issues.
I have tried compiling Ceph myself as well as installing RPMs from
http://gitbuilder.ceph.com/ceph-rpm-fedora20-x86_64-basic/ref/emperor/
with the same result: my OSDs are dying 15 minutes after
On Thu, Jan 23, 2014 at 3:35 AM, bf wrote:
> Gregory Farnum writes:
>
>
>> Yes, Ceph does all the heavy lifting. Multiple PGs with the same OSDs
>> can happen (eg, if you only have two OSDs, all PGs will be on both),
>> but it behaves about as well as is possible within the configuration
>> you g
On Wed, Jan 22, 2014 at 10:43 PM, Schlacta, Christ wrote:
> can ceph handle a configuration where a cluster node is not "always on", but
> rather gets booted periodically to sync to the cluster, and is also
> sometimes up full time as demand requires? I ask because I want to put an
> OSD on each o
Hi,
so, the packages for debian wheezy from the Inktank repository are using the leveldb
package from the debian repo?
because debian wheezy libleveldb1 is also quite old
http://packages.debian.org/wheezy/libleveldb1
libleveldb1 (0+20120530.gitdd0d562-1)
----- Original Message -----
From: "Mark Nelson"
On Wed, Jan 22, 2014 at 3:23 PM, Karol Kozubal wrote:
> Hi Everyone,
>
> I have a few questions concerning mounting cephfs with ceph-fuse in fstab at
> boot. I am currently successfully mounting cephfs using ceph-fuse on 6
> clients. I use the following command, where the ip is my mon address:
>
Hi Everyone,
I have a few questions concerning mounting cephfs with ceph-fuse in fstab at
boot. I am currently successfully mounting cephfs using ceph-fuse on 6
clients. I use the following command, where the ip is my mon address:
> ceph-fuse -m 192.168.0.2:6789 -r /shared/files /root/shared
> HEALTH_WARN 1 pgs down; 3 pgs incomplete; 3 pgs stuck inactive; 3 pgs
stuck unclean; 7 requests are blocked > 32 sec; 3 osds have slow requests;
pool cloudstack has too few pgs; pool .rgw.buckets has too few pgs
> pg 14.0 is stuck inactive since forever, current state incomplete, last
acting [5,0
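A few commands commonly used to dig into a stuck/incomplete PG like the one in the output above (the PG id 14.0 is taken from that output):

    ceph health detail            # list the problem PGs and the blocked requests
    ceph pg 14.0 query            # show the PG's peering state and which OSDs it is waiting on
    ceph pg dump_stuck inactive   # summarize all stuck-inactive PGs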
On Thu, 23 Jan 2014, Guang wrote:
> Hi Joao,
> Thanks for your reply!
>
> I captured the log after seeing the 'noin' keyword and the log is attached.
>
> Meanwhile, while checking the monitor logs, I see it runs an election every few
> seconds and the election process can take several seconds, so
On 01/23/2014 06:39 AM, Sylvain Munaut wrote:
Hi,
I have a cluster that contains 16 OSDs spread over 4 physical
machines. Each machine runs 4 OSD processes.
Among those, one is periodically using 100% of the CPU.
I finally tracked this down. CPU usage was mostly from leveldb calls
Andrey K
Hi,
we have a lot of pools which contain rbd volumes containing VM images.
I want to know which volume causes most traffic. It's easy to determine which
pool causes traffic, for example:
# ceph osd pool stats
[...]
pool abaswebtest id 56
client io 3372 kB/s rd, 10557 kB/s wr, 1303 op/s
How to
Hi,
> I have a cluster that contains 16 OSDs spread over 4 physical
> machines. Each machine runs 4 OSD processes.
>
> Among those, one is periodically using 100% of the CPU.
I finally tracked this down. CPU usage was mostly from leveldb calls
Andrey Korolyov (xdeller on IRC) pointed out they h
Gregory Farnum writes:
> Yes, Ceph does all the heavy lifting. Multiple PGs with the same OSDs
> can happen (eg, if you only have two OSDs, all PGs will be on both),
> but it behaves about as well as is possible within the configuration
> you give it.
> -Greg
> Software Engineer #42 http://in
On 01/22/2014 05:47 PM, alistair.whit...@barclays.com wrote:
All,
Having failed to successfully add new monitors using ceph-deploy, I
tried the documented manual approach.
The platform:
OS: RHEL 6.4
Ceph: Emperor
Ceph-deploy: 1.3.4-0
When following the procedure on an existing node in a
Once I figure out how to get my cluster healthy again after the monitor problem
discussed below, I will try ceph-deploy again and send you the output.
I have been trying to re-inject the last healthy monmap into all the nodes;
however, this has proved unsuccessful thus far.
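For what it's worth, a rough sketch of the monmap re-injection procedure on a sysvinit host (the monitor id and the path to the saved map are assumptions):

    # stop the monitor before touching its store
    service ceph stop mon.a
    # inject the previously saved, known-good monmap into this monitor's store
    ceph-mon -i a --inject-monmap /tmp/last-good-monmap
    service ceph start mon.a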
-Original Me