Thanks for the help. I can trigger the OSD to use 1 replica (which I
assume is just the primary) using './ceph osd pool set metadata size
1'. Unfortunately, this seems to prevent the client from mounting
properly.
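
For reference, the full sequence is just the pool-size change followed by a dump to confirm (assuming the default pool names set up by vstart.sh):

```shell
# Set the metadata pool to a single replica (primary only),
# then dump the osdmap to confirm the new rep size took effect.
./ceph osd pool set metadata size 1
./ceph osd dump -o -
```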

Here is the output I get. It seems that once the replication level is
changed, the client has trouble opening a session with the MDS.

googoo-10 59> ./ceph osd pool set metadata size 1
09.09.02 14:02:17.313654 mon1 <- [osd,pool,set,metadata,size,1]
09.09.02 14:02:18.345679 mon0 -> 'set pool 1 size to 1' (0)

googoo-10 60> ./ceph osd dump -o -
09.09.02 14:02:32.293234 mon1 <- [osd,dump]
09.09.02 14:02:32.294275 mon1 -> 'dumped osdmap epoch 3' (0)
epoch 3
fsid 9a900213-f3e3-9cf2-b2ca-e209b866c57a
created 09.09.02 14:02:02.880448
modifed 09.09.02 14:02:18.316561

pg_pool 0 'data' pg_pool(rep size 2 ruleset 0 pg_num 16 pgp_num 16  
lpg_num 1 lpgp_num 1 last_change 1)
pg_pool 1 'metadata' pg_pool(rep size 1 ruleset 1 pg_num 16 pgp_num 16  
lpg_num 1 lpgp_num 1 last_change 3)
pg_pool 2 'casdata' pg_pool(rep size 2 ruleset 2 pg_num 16 pgp_num 16  
lpg_num 1 lpgp_num 1 last_change 1)

max_osd 4
osd0 in weight 1 up   (up_from 2 up_thru 0 down_at 0 last_clean 0-0)  
127.0.0.1:6800/15206/0
osd1 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd2 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd3 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)


09.09.02 14:02:32.294348 wrote 721 byte payload to -

googoo-10 61> ./csyn --syn trace sometracefile 1 --debug_client 15
starting csyn at 0.0.0.0:6802/15311/0
mounting and starting 1 syn client(s)
09.09.02 14:02:41.495397 client-1
waiting for client(s) to finish
09.09.02 14:02:41.495701 client-1 initing
09.09.02 14:02:41.495883 client-1 mounting
09.09.02 14:02:41.519017 3065953168 client0 mounted: have osdmap 0 and  
mdsmap 0
09.09.02 14:02:41.519146 3065953168 client0 random mds-1
09.09.02 14:02:41.519171 3065953168 client0 chose target mds0 based on  
hierarchy
09.09.02 14:02:41.519192 3065953168 client0 no address for mds0,  
requesting new mdsmap
09.09.02 14:02:41.524293 3055463312 client0 handle_mds_map epoch 5
09.09.02 14:02:41.524492 3065953168 client0 opening session to mds0
09.09.02 14:02:41.524761 3065953168 client0 waiting for session to  
mds0 to open
09.09.02 14:02:42.496894 3044973456 client0 renew_caps()
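
In case it's useful, here is what I was planning to check next (hedged: I'm assuming the mds/pg dump subcommands take the same '-o -' form as osd dump in this release):

```shell
# Confirm the MDS is still up and active after the pool change
./ceph mds dump -o -
# Look for stuck or degraded PGs in the metadata pool
./ceph pg dump -o -
# Watch the live cluster log, per the 'ceph -w' suggestion below
./ceph -w
```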

Thanks again.
Andrew

On Aug 28, 2009, at 3:12 PM, Yehuda Sadeh Weinraub wrote:

> On Fri, Aug 28, 2009 at 1:09 PM, Andrew Leung <ale...@soe.ucsc.edu>  
> wrote:
>>
>> I am doing some basic Ceph testing using the most recent release
>> (v0.13). I'm currently using the simple vstart.sh script to get a
>> client/mds/mon/osd up and running on a single machine.
>>
>> I'm wondering how I can toggle the replication level (e.g., turn on  
>> 2-
>> way replication, turn off replication). I'm particularly interested  
>> in
>> doing so for metadata replication. Is there a flag I can input or
>> modify in the vstart script or ceph.conf file?
>>
>> Thanks.
>> Andrew
>>
>
> According to Sage it goes like this:
>
> $  ./ceph osd pool set metadata size 1
> $  ./ceph pg dump -o -
>
> You can figure out current replication size from this:
> $  ./ceph osd dump -o -
>
> Then you can run:
> $  ./ceph osd pool set metadata size 2
>
> And then you can watch the replication happening by running 'ceph -w'.


_______________________________________________
Ceph-devel mailing list
Ceph-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ceph-devel