Re: [ceph-users] ceph Cluster attempt to access beyond end of device

2017-08-15 Thread ZHOU Yuan
Hi Hauke,

It's possibly the XFS issue discussed in the previous thread. I also saw
this issue in some JBOD setups running RHEL 7.3.


Sincerely, Yuan

On Tue, Aug 15, 2017 at 7:38 PM, Hauke Homburg 
wrote:

> Hello,
>
>
> I found some errors in the cluster with dmesg -T:
>
> attempt to access beyond end of device
>
> I found the following Post:
>
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39101.html
>
> Is this a problem with the size of the filesystem itself or "only" a
> driver bug? I ask because we have 8 HDDs in each node, running in a
> hardware RAID 6. On this RAID we have the XFS partition.
>
> Also, we have one big filesystem in one OSD per server, instead of one
> filesystem per HDD (8 HDDs per server).
>
> greetings
>
> Hauke
>
>
> --
> www.w3-creative.de
>
> www.westchat.de
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Qemu with customized librbd/librados

2016-07-14 Thread ZHOU Yuan
Hi list,

I ran into some issues customizing librbd (linking it with jemalloc) for the
stock qemu on Ubuntu Trusty.
Stock qemu depends on librbd1 and librados2 (0.80.x). These two libraries are
installed at /usr/lib/x86_64-linux-gnu/lib{rbd,rados}.so, and that path is
listed in /etc/ld.so.conf.d/x86_64-linux-gnu.conf. This seems to be for
Ubuntu's multi-arch support.
I found that when building the local Ceph, the linker picks up the existing
/usr/lib/x86_64-linux-gnu/librbd.so instead of the newly built local librbd.
So I simply removed those /usr/lib/x86_64-linux-gnu/lib{rbd,rados}.so files
and installed my customized libraries into /usr/local/lib/.

Is this the right way to build a customized Ceph, or should I build with
--prefix=/usr/lib/x86_64-linux-gnu?
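As a quick diagnostic, one way to see which copy of the library the loader will resolve first (the qemu binary name below is an assumption, adjust for your package):

```shell
# List every librbd/librados candidate in the dynamic linker cache,
# in the order the loader will consider them:
ldconfig -p | grep -E 'librbd|librados' || echo "no librbd/librados in linker cache"

# Check which libraries the stock qemu binary actually resolves to
# (binary name is an assumption -- adjust for your package):
# ldd "$(command -v qemu-system-x86_64)" | grep -E 'librbd|librados'

# After installing a custom build into /usr/local/lib, refresh the cache:
# sudo ldconfig
```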

Sincerely, Yuan


Re: [ceph-users] [Ceph-cn] librados: Objecter returned from getxattrs r=-2

2015-11-02 Thread Zhou, Yuan
Hi,

So in RGW there are no hive* objects now; could you please check whether any 
still exist from the S3 perspective? That is, check the object listing of 
bucket 'olla' via the S3 API (boto or s3cmd can do the job).

I've met a similar issue with Hadoop over SwiftFS before: some OSDs were down 
in the Ceph cluster, and then the file listings in Hadoop and Swift did not 
match. I don't know the detailed failure mode, though. I was simply running 
some benchmarks, so the data wasn't important; manually deleting the 
objects/buckets and regenerating the data fixed the issue.

hope this can help. 

thanks, -yuan

-Original Message-
From: 张绍文 [mailto:zhangshao...@btte.net] 
Sent: Tuesday, November 3, 2015 1:45 PM
To: Zhou, Yuan
Cc: ceph...@lists.ceph.com; ceph-us...@ceph.com
Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2

On Tue, 3 Nov 2015 05:32:27 +
"Zhou, Yuan" <yuan.z...@intel.com> wrote:

> Hi,
> 
> The directory there should be a simulated hierarchical structure, 
> using '/' in the object names. Do you mind checking the remaining 
> objects in the ceph pool .rgw.buckets?
> 
> $ rados ls -p .rgw.buckets | grep default.157931.5_hive
> 
> If objects still come out, you might try to delete them from the 
> 'olla' bucket with the S3 API. (Note: I'm not sure how your Hive data 
> was generated, so please back up first if it's important.)
> 

Thanks for your reply. I dumped object list yesterday:

# rados -p .rgw.buckets ls >obj-list
# ls -lh obj-list
-rw-r--r-- 1 root root 1.2G Nov  2 15:51 obj-list
# grep default.157931.5_hive obj-list
#

There's no such object.

> 
> -Original Message-
> From: Ceph-cn [mailto:ceph-cn-boun...@lists.ceph.com] On Behalf Of ???
> Sent: Tuesday, November 3, 2015 12:22 PM
> To: ceph...@lists.ceph.com; ceph-us...@ceph.com
> Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2
> 
> With debug_objecter = 20/0 I get this. I guess the thing is: the 
> object has been removed, but the "directory" info still exists.
> 
> 2015-11-03 12:07:22.264704 7f03c42f3700 10 client.214496.objecter 
> ms_dispatch 0x2c18840 osd_op_reply(81 
> default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_37977797652
> 10222008-1/_tmp.-ext-1/ [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No 
> such file or directory)) v6
> 
> So, how can I safely remove the "directory" info?
> 
> On Tue, 3 Nov 2015 10:10:26 +0800
> 张绍文 <zhangshao...@btte.net> wrote:
> 
> > On Mon, 2 Nov 2015 16:47:11 +0800
> > 张绍文 <zhangshao...@btte.net> wrote:
> >   
> > > On Mon, 2 Nov 2015 16:36:57 +0800
> > > 张绍文 <zhangshao...@btte.net> wrote:
> > > 
> > > > Hi, all:
> > > > 
> > > > I'm using hive via s3a, but it's not usable after I removed some 
> > > > temp files with:
> > > > 
> > > > /opt/hadoop/bin/hdfs dfs -rm -r -f s3a://olla/hive/
> > > > 
> > > > With debug_radosgw = 10/0, I got these messages repeatedly:
> > > > 
> > > > 2015-11-02 14:30:44.547271 7f08ef7fe700 10 librados: Objecter 
> > > > returned from getxattrs r=-2 2015-11-02 14:30:44.549117
> > > > 7f08ef7fe700 10 librados: getxattrs
> > > > oid=default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_3
> > > > 79 7779765210222008-1/_tmp.-ext-1/
> > > > nspace=
> > > > 
> > > > I dumped the whole object list, and there's no object whose name 
> > > > starts with hive/..., and hive is not usable now. Please help.
> > > >   
> > > 
> > > Sorry, I forgot this:
> > > 
> > > # ceph -v
> > > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> > > 
> > > Other known "directories" under the same bucket are readable.
> > > 
> > > 
> > 
> > This also happened to others on the ceph-users mailing list, seemingly unresolved:
> > 
> > http://article.gmane.org/gmane.comp.file-systems.ceph.user/7653/matc
> > h=
> > objecter+returned+getxattrs
> > 
> >   
> 
> 
> 
> --
> 张绍文



--
张绍文


Re: [ceph-users] [Ceph-cn] librados: Objecter returned from getxattrs r=-2

2015-11-02 Thread Zhou, Yuan
Hi,

The directory there should be a simulated hierarchical structure, using '/' in 
the object names. Do you mind checking the remaining objects in the ceph pool 
.rgw.buckets?

$ rados ls -p .rgw.buckets | grep default.157931.5_hive

If objects still come out, you might try to delete them from the 'olla' 
bucket with the S3 API.
(Note: I'm not sure how your Hive data was generated, so please back up first 
if it's important.)
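To make the simulated hierarchy concrete, here is a small sketch that rebuilds an S3-style top-level listing from raw RADOS object names (the bucket-prefix format is illustrative, not the exact RGW naming scheme):

```python
# Sketch: RGW "directories" are just '/' characters inside object names.
# Rebuild an S3-style top-level listing from raw RADOS object names.
def s3_listing(rados_objects, bucket_prefix, delimiter="/"):
    entries = set()
    for name in rados_objects:
        if not name.startswith(bucket_prefix):
            continue
        key = name[len(bucket_prefix):]
        head, sep, _rest = key.partition(delimiter)
        entries.add(head + sep)  # 'dir/' for nested keys, plain name otherwise
    return sorted(entries)

objs = [
    "default.157931.5_hive/warehouse/t1/part-0",
    "default.157931.5_logs/2015/11/02.gz",
    "default.157931.5_README",
]
print(s3_listing(objs, "default.157931.5_"))  # ['README', 'hive/', 'logs/']
```

This is why a leftover object with a trailing '/' in its name can still show up as a "directory" even after the files under it are gone.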

thanks, -yuan

-Original Message-
From: Ceph-cn [mailto:ceph-cn-boun...@lists.ceph.com] On Behalf Of ???
Sent: Tuesday, November 3, 2015 12:22 PM
To: ceph...@lists.ceph.com; ceph-us...@ceph.com
Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2

With debug_objecter = 20/0 I get this. I guess the thing is: the object has 
been removed, but the "directory" info still exists.

2015-11-03 12:07:22.264704 7f03c42f3700 10 client.214496.objecter ms_dispatch 
0x2c18840 osd_op_reply(81 
default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_3797779765210222008-1/_tmp.-ext-1/
[getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6

So, how can I safely remove the "directory" info?

On Tue, 3 Nov 2015 10:10:26 +0800
张绍文  wrote:

> On Mon, 2 Nov 2015 16:47:11 +0800
> 张绍文  wrote:
> 
> > On Mon, 2 Nov 2015 16:36:57 +0800
> > 张绍文  wrote:
> >   
> > > Hi, all:
> > > 
> > > I'm using hive via s3a, but it's not usable after I removed some 
> > > temp files with:
> > > 
> > > /opt/hadoop/bin/hdfs dfs -rm -r -f s3a://olla/hive/
> > > 
> > > With debug_radosgw = 10/0, I got these messages repeatedly:
> > > 
> > > 2015-11-02 14:30:44.547271 7f08ef7fe700 10 librados: Objecter 
> > > returned from getxattrs r=-2 2015-11-02 14:30:44.549117
> > > 7f08ef7fe700 10 librados: getxattrs 
> > > oid=default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_379
> > > 7779765210222008-1/_tmp.-ext-1/
> > > nspace=
> > > 
> > > I dumped the whole object list, and there's no object whose name 
> > > starts with hive/..., and hive is not usable now. Please help.
> > > 
> > 
> > Sorry, I forgot this:
> > 
> > # ceph -v
> > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> > 
> > Other known "directories" under the same bucket are readable.
> > 
> >   
> 
> This also happened to others on the ceph-users mailing list, seemingly unresolved:
> 
> http://article.gmane.org/gmane.comp.file-systems.ceph.user/7653/match=
> objecter+returned+getxattrs
> 
> 



--
张绍文
___
Ceph-cn mailing list
ceph...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-cn-ceph.com


Re: [ceph-users] cephfs replace hdfs problem

2015-10-12 Thread ZHOU Yuan
Hi,

From the docs it looks like Hadoop 2.x is not supported by the default
cephfs-hadoop driver yet. You may need to get a newer hadoop-cephfs.jar
if you want to use YARN:

http://docs.ceph.com/docs/master/cephfs/hadoop/

https://github.com/GregBowyer/cephfs-hadoop
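For reference, the usual core-site.xml wiring for the cephfs-hadoop plugin looks roughly like this (property names as given in the Ceph Hadoop docs; the monitor address and paths are placeholders):

```xml
<!-- Sketch of core-site.xml for the cephfs-hadoop plugin; the monitor
     address and ceph.conf path below are placeholders. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>ceph://mon-host:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
</configuration>
```

The ClassNotFoundException below usually means this fs.ceph.impl class is not on the classpath of the process that runs the job.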

Sincerely, Yuan


On Mon, Oct 12, 2015 at 1:58 PM, Fulin Sun  wrote:
> Thanks so much for the kind advice. This was my fault.
>
> I resolved the problem; the root cause is that I misconfigured
> HADOOP_CLASSPATH. Sorry for the confusion and trouble.
>
> But now I am trying to use Hadoop YARN to run a terasort benchmark on
> cephfs. The new exception message is below:
>
> Does this mean that I cannot use this ceph-hadoop plugin with this Hadoop
> version? Hadoop version: 2.7.1 release; Ceph version: 0.94.3
>
> Thanks again for moving this thread.
>
> Best,
> Sun.
>
> 15/10/12 11:08:35 INFO client.RMProxy: Connecting to ResourceManager at
> /172.16.33.18:8032
> 15/10/12 11:08:35 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
> java.lang.NoSuchMethodException:
> org.apache.hadoop.fs.ceph.CephFS.(java.net.URI,
> org.apache.hadoop.conf.Configuration)
> java.io.IOException: Cannot initialize Cluster. Please check your
> configuration for mapreduce.framework.name and the correspond server
> addresses.
> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:301)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:305)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> 
> 
>
>
> From: Paul Evans
> Date: 2015-10-12 11:10
> To: Fulin Sun
> Subject: Re: [ceph-users] cephfs replace hdfs problem
> I don’t think there are many of us that have attempted what you are trying
> to do… that’s the most likely reason the list is quiet.
> You may need to be patient, and possibly provide updates (if you have any)
> to keep the issue in front of people.
> Best of luck...
> --
> Paul
>
> On Oct 11, 2015, at 7:03 PM, Fulin Sun  wrote:
>
sigh...
I have to say that I have not received any response from this mailing list...
>
> 
> 
>
> From: Fulin Sun
> Date: 2015-10-10 17:27
> To: ceph-users
> Subject: [ceph-users] cephfs replace hdfs problem
> Hi there,
>
> I configured the hadoop-cephfs plugin and tried to use cephfs as a
> replacement. I had successfully configured hadoop-env.sh, setting
> HADOOP_CLASSPATH for hadoop-cephfs.jar.
>
> But when I run hadoop fs -ls /, I get the following exception. It looks
> like it cannot find the actual jars for both hadoop-cephfs.jar and
> libcephfs-java.jar. I placed these two in the /usr/local/hadoop/lib
> directory and edited the Hadoop classpath in hadoop-env.sh.
>
> What could cause this issue?
>
> Thanks anyone for kind response.
>
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.fs.ceph.CephFileSystem not found
> at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
> at 

Re: [ceph-users] CDS Jewel Wed/Thurs

2015-07-01 Thread Zhou, Yuan
Hey Patrick, 

Looks like the GMT+8 time for the 1st day is wrong; it should be 10:00 pm - 
7:30 am?

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Patrick McGarry
Sent: Tuesday, June 30, 2015 11:28 PM
To: Ceph Devel; Ceph-User
Subject: CDS Jewel Wed/Thurs

Hey cephers,

Just a friendly reminder that our Ceph Developer Summit for Jewel planning is 
set to run tomorrow and Thursday. The schedule and dial in information is 
available on the new wiki:

http://tracker.ceph.com/projects/ceph/wiki/CDS_Jewel

Please let me know if you have any questions. Thanks!


-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com @scuttlemonkey || @ceph


Re: [ceph-users] Accessing Ceph from Spark

2015-06-17 Thread ZHOU Yuan
Hi Milan,

We've done some tests here, and our Hadoop can talk to RGW successfully
with this SwiftFS plugin, but we haven't tried Spark yet. One thing to
note is the data locality feature: it requires some special
configuration of the Swift proxy-server, so RGW is not able to achieve
data locality there.

Could you please kindly share some deployment considerations for running
Spark on Swift/Ceph? Tachyon seems more promising...


Sincerely, Yuan


On Wed, Jun 17, 2015 at 9:58 PM, Milan Sladky milan.sla...@outlook.com wrote:
 Is it possible to access Ceph from Spark as it is mentioned here for
 Openstack Swift?

 https://spark.apache.org/docs/latest/storage-openstack-swift.html

 Thanks for help.

 Milan Sladky




Re: [ceph-users] Cache data consistency among multiple RGW instances

2015-01-21 Thread ZHOU Yuan
Thanks Greg, that's an awesome feature I missed. I found some
explanation of the watch-notify mechanism:
http://www.slideshare.net/Inktank_Ceph/sweil-librados

Just to confirm: it looks like I need to list all the RGW
instances in ceph.conf, and then these RGW instances will
automatically do the cache invalidation when necessary?


Sincerely, Yuan


On Mon, Jan 19, 2015 at 10:58 PM, Gregory Farnum g...@gregs42.com wrote:
 On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan dunk...@gmail.com wrote:
 Hi list,

 I'm trying to understand the RGW cache consistency model. My Ceph
 cluster has multiple RGW instances with HAProxy as the load balancer.
 HAProxy would choose one RGW instance to serve the request(with
 round-robin).
 The question is if RGW cache was enabled, which is the default
 behavior, there seem to be some cache inconsistency issue. e.g.,
 object0 was cached in RGW-0 and RGW-1 at the same time. Sometime later
 it was updated from RGW-0. In this case if the next read was issued to
 RGW-1, the outdated cache would be served out then since RGW-1 wasn't
 aware of the updates. Thus the data would be inconsistent. Is this
 behavior expected or is there anything I missed?

 The RGW instances make use of the watch-notify primitive to keep their
 caches consistent. It shouldn't be a problem.
 -Greg
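Greg's point can be illustrated with a toy sketch of the watch-notify pattern (this is not the real librados API, just the shape of the mechanism: each cache registers as a watcher, and an update triggers a notify that invalidates the stale copies elsewhere):

```python
# Toy model of watch-notify cache invalidation across RGW-like instances.
# Class and method names are illustrative, not librados calls.
class FakeCluster:
    def __init__(self):
        self.caches = []

    def register(self, cache):
        # "watch": the cache asks to be told about updates
        self.caches.append(cache)

    def notify(self, origin, key):
        # "notify": every watcher except the updater drops its stale copy
        for cache in self.caches:
            if cache is not origin:
                cache.invalidate(key)

class RgwCache:
    def __init__(self, cluster):
        self.cluster = cluster
        self.data = {}
        cluster.register(self)

    def put(self, key, value):
        self.data[key] = value
        self.cluster.notify(self, key)  # tell peers their copy is stale

    def invalidate(self, key):
        self.data.pop(key, None)

cluster = FakeCluster()
rgw0, rgw1 = RgwCache(cluster), RgwCache(cluster)
rgw0.put("object0", "v1")
rgw1.data["object0"] = "v1"       # rgw1 has it cached too
rgw0.put("object0", "v2")         # update through rgw0 ...
print("object0" in rgw1.data)     # False: rgw1's stale copy was dropped
```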


Re: [ceph-users] Cache data consistency among multiple RGW instances

2015-01-21 Thread ZHOU Yuan
Greg, Thanks a lot for the education!

Sincerely, Yuan


On Tue, Jan 20, 2015 at 2:37 PM, Gregory Farnum g...@gregs42.com wrote:
 You don't need to list them anywhere for this to work. They set up the
 necessary communication on their own by making use of watch-notify.

 On Mon, Jan 19, 2015 at 6:55 PM ZHOU Yuan dunk...@gmail.com wrote:

 Thanks Greg, that's an awesome feature I missed. I find some
 explanation on the watch-notify thing:
 http://www.slideshare.net/Inktank_Ceph/sweil-librados.

 Just want to confirm, it looks like I need to list all the RGW
 instances in ceph.conf, and then these RGW instances will
 automatically do the cache invalidation if necessary?


 Sincerely, Yuan


 On Mon, Jan 19, 2015 at 10:58 PM, Gregory Farnum g...@gregs42.com wrote:
  On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan dunk...@gmail.com wrote:
  Hi list,
 
  I'm trying to understand the RGW cache consistency model. My Ceph
  cluster has multiple RGW instances with HAProxy as the load balancer.
  HAProxy would choose one RGW instance to serve the request(with
  round-robin).
  The question is if RGW cache was enabled, which is the default
  behavior, there seem to be some cache inconsistency issue. e.g.,
  object0 was cached in RGW-0 and RGW-1 at the same time. Sometime later
  it was updated from RGW-0. In this case if the next read was issued to
  RGW-1, the outdated cache would be served out then since RGW-1 wasn't
  aware of the updates. Thus the data would be inconsistent. Is this
  behavior expected or is there anything I missed?
 
  The RGW instances make use of the watch-notify primitive to keep their
  caches consistent. It shouldn't be a problem.
  -Greg


[ceph-users] Cache data consistency among multiple RGW instances

2015-01-18 Thread ZHOU Yuan
Hi list,

I'm trying to understand the RGW cache consistency model. My Ceph
cluster has multiple RGW instances with HAProxy as the load balancer;
HAProxy chooses one RGW instance to serve each request (round-robin).
The question is: if the RGW cache is enabled, which is the default
behavior, there seems to be a cache inconsistency issue. E.g., object0
is cached in RGW-0 and RGW-1 at the same time, and some time later it
is updated through RGW-0. If the next read is then issued to RGW-1, the
outdated cached copy would be served, since RGW-1 isn't aware of the
update. Thus the data would be inconsistent. Is this behavior expected,
or is there anything I missed?

Sincerely, Yuan


[ceph-users] Volume level quota in cache tiering

2014-12-07 Thread ZHOU Yuan
Hi lists,

I was trying to run some tests on cache tiering. Currently I have an
SSD-backed pool t0 (1440G) and an HDD-backed pool t1 (8T). If I create
10 RBDs on the tier, it looks like all these volumes share t0 without
any per-volume size limit.
Is it possible to set a per-RBD quota in the cache tier? For example,
limiting each of these 10 RBDs to use at most 100G of t0.


Sincerely, Yuan


Re: [ceph-users] Erasure coding parameters change

2014-11-09 Thread ZHOU Yuan
Hi Loic,

On Mon, Nov 10, 2014 at 6:44 AM, Loic Dachary l...@dachary.org wrote:


 On 05/11/2014 13:57, Jan Pekař wrote: Hi,

 is there any possibility to change erasure coding pool parameters ie k and m 
 values on the fly? I want to add more disks to existing erasure pool and 
 change redundancy level. I cannot find it in docs.

 Hi,

 It is not possible to change k/m on the fly.

I'm a little confused. Does this mean even if the pool is reported to
be using the updated profile, the one it is actually using is still
the old profile?


 Changing erasure-code-profile is not working so I assume that is only 
 template for newly created pools.
 If it is not possible now is it planned in the feature (when)?

 Changing these parameters require a re-encoding of all objects. The interim 
 solution is to create another pool and move all the objects from the first 
 pool to the other.

Is there an easy way to do this transition other than copying these
objects one by one?

If disk space is an issue, a solution might be to define the old pool
as a cache tier and evict its content to the storage tier. I did not
test this though and there may be problems I do not yet see.
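For the interim copy-everything approach, a minimal sketch with the rados CLI might look like this (pool names are placeholders; note that rados get/put copies object data only, not omap entries or xattrs):

```shell
# Sketch: stream every object from one pool to another with the rados CLI,
# preserving object names. Assumes 'rados' accepts '-' for stdin/stdout.
migrate_pool() {
    src=$1
    dst=$2
    rados ls -p "$src" | while read -r obj; do
        # get to stdout, put from stdin, same object name in the new pool
        rados -p "$src" get "$obj" - | rados -p "$dst" put "$obj" -
    done
}

# Example invocation (needs a live cluster, so commented out):
# migrate_pool old-ec-pool new-ec-pool
```

This is only a data copy; anything relying on omap/xattr metadata would need extra handling.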

 Cheers

 Thank you
 With regards
 Jan Pekar, Imatic

 --
 Loïc Dachary, Artisan Logiciel Libre




Re: [ceph-users] crush choose firstn vs. indep

2014-01-13 Thread ZHOU Yuan
Hi Loic, thanks for the education!

I'm also trying to understand the new 'indep' mode. Is this new mode
designed only for Ceph EC? It seems that all replicas in a 3-copy system
are equivalent, so this new algorithm should work there as well?


Sincerely, Yuan


On Mon, Jan 13, 2014 at 7:37 AM, Loic Dachary l...@dachary.org wrote:



 On 12/01/2014 15:55, Dietmar Maurer wrote:
  From the docs:
 
 
 
  step [choose|chooseleaf] [firstn|indep] N bucket-type
 
 
 
  What exactly is the difference between ‘firstn’ and ‘indep’?
 
 Hi,

 For Ceph releases up to Emperor[1], firstn is used and I'm not aware of a
 use case requiring indep. As part of the effort to implement erasure coded
 pools, firstn[2] and indep[3] were separated in two functions. The firstn
 method is best suited for replicated pools. The indep method tries to
 minimize the position changes in case an OSD becomes unavailable. For
 instance, if indep finds

   [1,2,3,4]

 and after a while 3 become unavailable, it is very likely to replace it
 with

   [1,2,5,4]

 It matters to erasure coded pools because

   [4,5,2,1]

 (i.e. the same OSDs but in different positions), implies more I/O. Another
 difference is that in the case of a mapping failure (i.e. unable to find
 the required number of OSDs), firstn will return a short list ( for
 instance [1,2,3] when 4 are required ) and indep will return a list with a
 placeholder at the missing position ( for instance [1,2,CRUSH_ITEM_NONE,4]
 ).

 Cheers

 [1] implementation in releases up to Emperor
 https://github.com/ceph/ceph/blob/v0.72/src/crush/mapper.c#L295
 [2] firstn https://github.com/ceph/ceph/blob/v0.74/src/crush/mapper.c#L295
 [3] indep https://github.com/ceph/ceph/blob/v0.74/src/crush/mapper.c#L459
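Loic's example above can be sketched as follows; this is illustrative Python, not the real CRUSH code:

```python
# Shape of the two behaviors when OSD 3 fails and OSD 5 is the replacement.
CRUSH_ITEM_NONE = None

def firstn_remap(mapping, down, spares):
    # firstn-style: failed OSDs drop out and replacements are appended,
    # so surviving OSDs can shift position (costly for EC chunk placement).
    alive = [osd for osd in mapping if osd not in down]
    return alive + spares[: len(mapping) - len(alive)]

def indep_remap(mapping, down, spares):
    # indep-style: each failed slot is replaced in place (or left as
    # CRUSH_ITEM_NONE), so surviving OSDs keep their positions.
    it = iter(spares)
    return [next(it, CRUSH_ITEM_NONE) if osd in down else osd
            for osd in mapping]

print(firstn_remap([1, 2, 3, 4], {3}, [5]))  # [1, 2, 4, 5] -- 4 moved
print(indep_remap([1, 2, 3, 4], {3}, [5]))   # [1, 2, 5, 4] -- positions kept
```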

 --
 Loïc Dachary, Artisan Logiciel Libre



