Re: [ceph-users] EC configuration questions...

2015-03-03 Thread Don Doerner
Loic,

Thank you, I got it created.  One of these days, I am going to have to try to 
understand some of the crush map details...  Anyway, on to the next step!
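
For anyone who wants to dig into those crush map details, a minimal sketch of how to pull and read the map (the file names "crush.bin"/"crush.txt" are arbitrary):

    ceph osd crush dump                    # JSON dump of buckets and rules from the monitors
    ceph osd getcrushmap -o crush.bin      # fetch the compiled map
    crushtool -d crush.bin -o crush.txt    # decompile it into a readable text form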

-don-



Re: [ceph-users] EC configuration questions...

2015-03-02 Thread Don Doerner
Update: the attempt to define a traditional replicated pool was successful;
it's online and ready to go.  So the cluster basics appear sound...
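
A quick sanity check on a freshly created pool might look like this (a sketch using standard commands; the pool's name isn't given above):

    ceph osd lspools                          # pool ids and names
    ceph osd dump | grep 'replicated size'    # per-pool size, min_size and crush ruleset
    ceph pg stat                              # all PGs should reach active+clean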

-don-

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don 
Doerner
Sent: 02 March, 2015 16:18
To: ceph-users@lists.ceph.com
Subject: [ceph-users] EC configuration questions...
Sensitivity: Personal

Hello,

I am trying to set up to measure erasure coding performance and overhead.  My
Ceph "cluster-of-one" has 27 disks, hence 27 OSDs, all empty.  I have lots of
memory, and I am using "osd crush chooseleaf type = 0" in my config file, so my
OSDs should be able to peer with others on the same host, right?
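
As a sketch, the relevant ceph.conf fragment would look something like this (section placement can vary):

    [global]
        # failure-domain type 0 = osd: lets the default CRUSH rule place
        # replicas on different OSDs within a single host (single-node clusters)
        osd crush chooseleaf type = 0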

I look at the EC profiles defined, and see only "default" which has k=2,m=1.
Wanting to set up a more realistic test, I defined a new profile "k8m3",
similar to default, but with k=8,m=3.

Checked with "ceph osd erasure-code-profile get k8m3", all looks good.
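
Roughly, the profile steps described above (a sketch; only k and m are set, everything else stays at plugin defaults):

    ceph osd erasure-code-profile set k8m3 k=8 m=3
    ceph osd erasure-code-profile ls           # should list "default" and "k8m3"
    ceph osd erasure-code-profile get k8m3     # k=8, m=3, plus the jerasure plugin defaults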

I then go to define my pool: "ceph osd pool create ecpool 256 256 erasure k8m3"
apparently succeeds.

  * Sidebar: my math on the pgnum stuff was (27 OSDs * 100)/11 = ~246,
    round up to 256.
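
Spelled out, the arithmetic and the create command (a sketch; 11 is k+m for the k8m3 profile):

    # target PGs ~= (27 OSDs * 100) / (k + m) = 2700 / 11 ~= 246
    # round up to the next power of two -> 256
    ceph osd pool create ecpool 256 256 erasure k8m3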

Now I ask "ceph health", and get:
HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean;
too few pgs per osd (9 < min 20)

Digging in to this a bit ("ceph health detail"), I see the magic OSD number
(2147483647) that says that there weren't enough OSDs to assign to a placement
group, for all placement groups.  And at the same time, it is warning me that I
have too few PGs per OSD.
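
That magic number is 2^31 - 1, the placeholder CRUSH reports when it cannot find an OSD for a slot in a PG's acting set. Commands along these lines expose it (the PG id below is just an illustrative placeholder):

    ceph health detail              # lists the incomplete/inactive PGs by id
    ceph pg dump_stuck inactive     # stuck PGs with their up/acting sets
    ceph pg 1.45 query              # "1.45" stands in for a real PG id taken from the dump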

At the moment, I am defining a traditional replicated pool (3X) to see if that 
will work...  Anyone have any guess as to what I may be doing incorrectly with 
my erasure coded pool?  Or what I should do next to get a clue?
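
The 3X pool would be created with something like the following (the pool name "rep3test" is just a placeholder; size 3 may already be the default):

    ceph osd pool create rep3test 256 256 replicated
    ceph osd pool set rep3test size 3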

Regards,

-don-




Re: [ceph-users] EC configuration questions...

2015-03-02 Thread Loic Dachary
Hi Don,

On 03/03/2015 01:18, Don Doerner wrote:
> Hello,
>
> I am trying to set up to measure erasure coding performance and overhead.  My
> Ceph “cluster-of-one” has 27 disks, hence 27 OSDs, all empty.  I have lots of
> memory, and I am using “osd crush chooseleaf type = 0” in my config file, so
> my OSDs should be able to peer with others on the same host, right?
>
> I look at the EC profiles defined, and see only “default” which has k=2,m=1.
> Wanting to set up a more realistic test, I defined a new profile “k8m3”,
> similar to default, but with k=8,m=3.
>
> Checked with “ceph osd erasure-code-profile get k8m3”, all looks good.

When you create the erasure-code-profile you also need to set the failure
domain (see ruleset-failure-domain in
http://ceph.com/docs/master/rados/operations/erasure-code-jerasure/). It will
not use the "osd crush chooseleaf type = 0" setting from your configuration
file. You can verify the details of the ruleset used by the erasure coded pool
with the command "ceph osd crush rule dump".
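
Following that suggestion, one way to rebuild the profile and pool (a sketch; the existing pool is recreated because its ruleset was generated from the old profile):

    ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
    ceph osd erasure-code-profile rm k8m3
    ceph osd erasure-code-profile set k8m3 k=8 m=3 ruleset-failure-domain=osd
    ceph osd pool create ecpool 256 256 erasure k8m3
    ceph osd crush rule dump     # the new rule should choose leaves of type "osd", not "host"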

Cheers


-- 
Loïc Dachary, Artisan Logiciel Libre




