On 03-08-15 22:25, Samuel Just wrote:
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to recompile code using
the librados C++ libraries after upgrading the
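For binding users the practical upshot would be a rebuild with the new standard, roughly along these lines (file name and flags are illustrative):
$ g++ -std=c++11 -o myapp myapp.cc -lrados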
Hi,
I've encountered some problems accessing files on CephFS:
$ ls -al syntenyPlot.png
-rw-r----- 1 edgar edgar 9329 Jun 11 2014 syntenyPlot.png
$ groups
... edgar ...
$ cat syntenyPlot.png
cat: syntenyPlot.png: Permission denied
CephFS is mounted via ceph-fuse:
ceph-fuse on /ceph type
Hello,
On Tue, 4 Aug 2015 20:33:58 +1000 Daniel Manzau wrote:
Hi Christian,
True it's not exactly out of the box. Here is the ceph.conf.
Crush rule file and a description (are those 4 hosts or are the HDD and
SSD shared on the same HW as your pool size suggests), etc etc.
My guess is
Hi Christian,
True it's not exactly out of the box. Here is the ceph.conf.
Could it be the osd crush update on start = false stopping the
remapping of a disk on failure?
[global]
fsid = bfb7e666-f66d-45c0-b4fc-b98182fed666
mon_initial_members = ceph-store1, ceph-store2, ceph-admin1
mon_host =
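As a quick sanity check, the admin socket shows what the running OSD actually has for that option (OSD id and socket path are illustrative):
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_crush_update_on_start
{ "osd_crush_update_on_start": "false" }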
Hi Mark,
Thanks for the comments; those are the same concerns people raise about
CephFS performance here. But one thing I like about Ceph is that it is
capable of running everything, including replication, directly on XFS on
commodity hardware disks. I am not clear whether Lustre can do that as
well, or did you
Hi Ilya,
Please see the info you asked for attached below. Thanks!
$ cat /sys/module/rbd/parameters/single_major
N
# ltrace rbd unmap /dev/rbd1
__libc_start_main([ "rbd", "unmap", "/dev/rbd1" ] <unfinished ...>
_ZNSt8ios_base4InitC1Ev(0x630918, 0x7ffd4a4a39c8, 0x7ffd4a4a39e8, 512) = 118
Hi,
Yes we have been following Sebastien's SSD HDD mix blog which seems to be
working ok. So 2 hosts with SSD and HDD on each.
We aren't setting osd pool default min size and it's currently reporting
as 0
ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok config show | grep osd_pool_default_min
Also I can confirm that doing echo dev-id > /sys/bus/rbd/remove does unmap
the device, though not rbd unmap
-----Original Message-----
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: 03 August 2015 19:12
To: Ivanov, Anton
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] rbd on CoreOS
On
On Mon, Aug 3, 2015 at 4:05 PM, 乔建峰 scaleq...@gmail.com wrote:
[Including ceph-users alias]
2015-08-03 16:01 GMT+08:00 乔建峰 scaleq...@gmail.com:
Hi Cephers,
Currently, I'm experiencing an issue that has been troubling me a lot, so
I'm writing to ask for your comments/help/suggestions. More details
On Tue, Aug 4, 2015 at 12:42 PM, Goncalo Borges
gonc...@physics.usyd.edu.au wrote:
Hi Yan Zheng
Now my questions:
1) I performed all of this testing to understand what would be the minimum
size (reported by df) of a file of 1 char, and I am still not able to find a
clear answer. In a
Hi Robert,
It works. Thanks.
-Regards,
Mallikarjun
---------- Forwarded message ----------
From: Robert LeBlanc rob...@leblancnet.us
Date: Fri, Jul 31, 2015 at 10:39 PM
Subject: Re: [ceph-users] OSD removal is not cleaning entry from osd listing
To: Mallikarjun Biradar
Everything is working fine now on the CI builder. Those apt-get errors
are still there, but everything seems to be installing just fine.
On Fri, Jul 24, 2015 at 5:19 PM, Travis Rhoden trho...@gmail.com wrote:
Hi Noah,
It does look like the two things are unrelated. But you are right,
Hi all,
I am running Kubernetes on CoreOS and use the rbd binary extracted from the
ceph/ceph-docker/config image to map images to the CoreOS host. Everything works
just fine except when trying to unmap the volume. It doesn't unmap and gives the
following error:
$ rbd showmapped
id pool image snap device
Hi Cephers,
This is a greeting from Jevon. Currently, I'm experiencing an issue that has
been troubling me a lot, so I'm writing to ask for your comments/help/suggestions.
More details are provided below.
Issue:
I set up a cluster having 24 OSDs and created one pool with 1024 placement
groups on it for a
On 31/07/2015, Mariusz Gronczewski wrote:
Well, CentOS 6 will be supported until 2020, and CentOS 7 was released a
year ago, so I'd imagine a lot of people haven't migrated yet, and the
migration process is nontrivial if you already made some modifications
to C6 (read: fixed broken as fuck init scripts
Hello,
I'm using Proxmox VE 3.4. After installing the latest updates, the ceph tools
started segfaulting, while Ceph itself seems to be working well:
# ceph
Segmentation fault
# ceph -s
Segmentation fault
# rados ls
*** Caught signal (Segmentation fault) **
in thread 7f1c7d99f760
Hello
I was doing upgrades on my ceph cluster, and because the hammer version
didn't work well, I tried to downgrade to the previous version.
In that process I removed all the ceph configuration and got my ceph
cluster destroyed. I have the OSDs intact, but I don't know if it is
possible to
Dear, Ceph.
I'm wondering how Ceph isolates bad blocks when EIO occurs.
I read the source code and found the deep scrub logic, chunky_scrub() in PG.cc.
And I understood the real recovery logic is in submit_push_data() in
ReplicatedBackend.cc.
It pulls an object from another replica and
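On the operational side, the usual flow when a deep scrub turns up a damaged copy looks roughly like this (pgid invented):
$ ceph health detail | grep inconsistent
pg 2.1f is active+clean+inconsistent, acting [12,3,7]
$ ceph pg repair 2.1f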
Hi,
Before I start with my question, here are some references:
1) I want to keep track of creating and closing ceph connections properly,
so I am using the singleton pattern for getting an instance of ceph.
2) The steps are roughly:
a) creating a rados cluster
b) connecting to
In Hadoop 2.6 the namenode must be up, but in Hadoop 1.1.2 it is OK for the
namenode to be down.
Below is the configuration:
<property>
  <name>fs.default.name</name>
  <value>ceph://ceph0:6789/</value>
</property>
In Hadoop 2.6 it raises exceptions like this:
15/07/28 11:26:55 INFO mapreduce.Cluster:
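For what it's worth, Hadoop 2.x deprecates fs.default.name in favor of fs.defaultFS, and the cephfs-hadoop bindings also want the filesystem class registered; a sketch along those lines, unverified, with property names from my reading of the cephfs-hadoop docs:
<property>
  <name>fs.defaultFS</name>
  <value>ceph://ceph0:6789/</value>
</property>
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>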
[Including ceph-users alias]
2015-08-03 16:01 GMT+08:00 乔建峰 scaleq...@gmail.com:
Hi Cephers,
Currently, I'm experiencing an issue that has been troubling me a lot, so I'm
writing to ask for your comments/help/suggestions. More details are
provided below.
Issue:
I set up a cluster having 24 OSDs
On Tue, Aug 4, 2015 at 3:30 PM, Anton Ivanov anton.iva...@ask.com wrote:
Also I can confirm that doing echo dev-id > /sys/bus/rbd/remove does unmap
the device, though not rbd unmap
Yeah, that much was clear from the beginning. I'll look into it.
Thanks,
Ilya
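For anyone else hitting this, the sysfs workaround in full, with an invented mapping (the id column comes from showmapped):
$ rbd showmapped
id pool image snap device
1  rbd  test  -    /dev/rbd1
$ echo 1 > /sys/bus/rbd/remove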
Hey cephers,
The long-awaited Ceph Developer Summit videos from CDS Jewel are now
posted on YouTube [0] and linked from the CDS page [1]. My apologies
for this taking so long, we had some technical problems with the
recordings which I believe have now all been sorted out. The only
exception is
Hi all,
I accidentally deleted a ceph pool while there was still a rados block device
mapped on a client. If I try to unmap the device with "rbd unmap" the command
simply hangs. I can't get rid of the device...
We are on:
Ubuntu 14.04
Client ceph software version is 0.80.9
Ceph cluster software
On 04-08-15 16:39, Daniel Marks wrote:
Hi all,
I accidentally deleted a ceph pool while there was still a rados block device
mapped on a client. If I try to unmap the device with "rbd unmap" the command
simply hangs. I can't get rid of the device...
We are on:
Ubuntu 14.04
Client
On Tue, 4 Aug 2015, Wido den Hollander wrote:
On 03-08-15 22:25, Samuel Just wrote:
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to recompile code using
Hello Somnath,
Thanks for the quick response! I checked the versions on the client and the
cluster machine. They actually had different versions of ceph installed: 0.94.2
on the cluster and 0.80.9 on the client. I made changes so that both of them
have the same version of Ceph now. As for the
Hello Somnath,
I tried that and it seems to be loading librados.so.2.0.0 in both cases (client
and cluster machine).
Thanks,
Aakanksha
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: Tuesday, August 04, 2015 6:00 PM
To: Aakanksha Pudipeddi-SSI; ceph-us...@ceph.com
Subject: RE: Error
I have my first ceph cluster up and running and am currently testing cephfs
for file access. It turns out I am not getting great write performance on my
cluster via cephfs (kernel driver), and would like to try moving my
cephfs_metadata pool to SSD.
To quickly describe the cluster:
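For reference, a rough sketch of the usual approach, assuming a CRUSH root named ssd already exists and hammer-era syntax (rule name and ruleset id are illustrative):
$ ceph osd crush rule create-simple ssd-rule ssd host
$ ceph osd crush rule dump                            # note the new rule's ruleset id
$ ceph osd pool set cephfs_metadata crush_ruleset 1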
Hi,
I am trying to create a Ceph block device on a ceph cluster machine itself for
experimental purposes. I used to be able to do that earlier but it gives me a
segmentation fault right now:
*** Caught signal (Segmentation fault) **
in thread 7f49628f6840
ceph version 0.94.2
There is probably a binary version mismatch (?)... Make sure the rbd command is
loading the proper librbd/librados binaries...
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Aakanksha Pudipeddi-SSI
Sent: Tuesday, August 04, 2015 5:06 PM
To:
Bob,
Those numbers would seem to indicate some other problem. One of the
biggest culprits behind that kind of poor performance is network issues.
In the last few months, there have been several reported performance issues
that have turned out to be network-related. Not all, but most.
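When in doubt, a quick pairwise bandwidth test between OSD hosts usually surfaces this (hostnames invented):
$ iperf -s              # on one OSD host
$ iperf -c ceph-store1  # from each of the others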
We've done the splitting several times. The most important thing is to
run a ceph version which does not have the linger ops bug.
That means the latest dumpling release, giant, or hammer. The latest firefly
release still has this bug, which results in wrong watchers and no
working snapshots.
Stefan
Am
I have done this not that long ago. My original PG estimates were wrong and I
had to increase them.
After increasing the PG numbers Ceph rebalanced, and that took a while. To
be honest, in my case the slowdown wasn't really visible.
My strong suggestion to you
Sage Weil wrote:
On Tue, 4 Aug 2015, Wido den Hollander wrote:
On 03-08-15 22:25, Samuel Just wrote:
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to
I think I wrote about my experience with this about 3 months ago, including
what techniques I used to minimize the impact on production.
Basically we had to:
1) increase pg_num in small increments only, because creating the placement
groups themselves caused slow requests on OSDs
2) increase pgp_num in
It will cause a large amount of data movement. Each new pg after the
split will relocate. It might be ok if you do it slowly. Experiment
on a test cluster.
-Sam
On Mon, Aug 3, 2015 at 12:57 AM, 乔建峰 scaleq...@gmail.com wrote:
Hi Cephers,
This is a greeting from Jevon. Currently, I'm
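For the archives, a minimal sketch of the slow approach (pool name and numbers invented):
$ ceph osd pool set rbd pg_num 1088    # a small step above the current value
$ ceph -s                              # wait for the new PGs to finish creating
$ ceph osd pool set rbd pgp_num 1088   # then let rebalancing start
Repeat until the target pg_num is reached.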
I'll be more of a third-party person here and try to be factual. =)
I wouldn't write off Gluster too fast yet.
Besides what you described with the object and disk storage, it uses the
Amazon Dynamo paper's eventually-consistent approach to organizing data.
Gluster has different features so I would
Hi,
On 04.08.2015 at 21:16, Ketor D wrote:
Hi Stefan,
Could you describe more about the linger ops bug?
I'm running Firefly, which as you say still has this bug.
It will be fixed in the next firefly release. See:
http://tracker.ceph.com/issues/9806
Stefan
Thanks!
On Wed, Aug 5, 2015
On 08/01/2015 07:52 PM, pixelfairy wrote:
I'd like to look at a read-only copy of running virtual machines for
compliance and potentially malware checks that the VMs are unaware of.
The first note on http://ceph.com/docs/master/rbd/rbd-snapshot/ warns
that the filesystem has to be in a
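One commonly suggested way to do the read-only inspection, sketched with invented names (the consistency caveat from the docs still applies):
$ rbd snap create rbd/vm-disk@audit
$ rbd map rbd/vm-disk@audit --read-only
$ mount -o ro /dev/rbd2 /mnt/audit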
Just make sure, with a tool like 'lsof', that the ceph commands are loading the
proper librados/librbd binaries, by running say 'ceph -w'...
It should be loading the hammer version of librbd/librados.
From: Aakanksha Pudipeddi-SSI [mailto:aakanksha...@ssi.samsung.com]
Sent: Tuesday, August 04, 2015
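Something along these lines, with output abridged and the path illustrative:
$ ceph -w &
$ lsof -c ceph | grep librados
ceph  12345  root  mem  REG ... /usr/lib/librados.so.2.0.0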
I am thinking of having ceph journal on a RAID1 SSD.
Kindly advise me on this: does a RAID1 SSD for the journal make sense?