On Thu, Aug 11, 2016 at 10:23:19AM +0700, agung Laksono wrote:
> Thank you Brad,
> 
> I am able to run ceph with 3 MONs, 3 OSDs and 1 MDS now.
> 
> However, I still don't get the workflow of Ceph using these steps.
> I might need to add prints somewhere, inject a crash by killing one node, etc.
> 
> Is this also possible with this method?
> 

Sure, something like this?

$ ps uwwx|grep ceph-
brad     26160  0.6  0.1 389424 25436 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i a -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     26175  0.3  0.1 385316 23604 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i b -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     26194  0.3  0.1 383264 22932 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i c -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     27230  1.3  0.1 831920 26908 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-osd -i 0 -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     27553  1.4  0.1 832812 27924 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-osd -i 1 -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     27895  1.4  0.1 831792 27472 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-osd -i 2 -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     28294  0.1  0.0 410648 15652 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mds -i a -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     28914  0.0  0.0 118496   944 pts/2    S+   13:38   0:00 grep --color 
ceph-

$ kill -SIGSEGV 27553
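
If you'd rather not look the PID up by hand, something like this should also
work, since pkill -f matches against the daemon's full command line (just a
sketch; adjust the pattern to the daemon you want to crash):

$ pkill -SEGV -f 'ceph-osd -i 1'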

$ ps uwwx|grep ceph-
brad     26160  0.5  0.1 390448 26720 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i a -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     26175  0.2  0.1 386852 24752 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i b -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     26194  0.2  0.1 384800 24160 pts/2    Sl   13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mon -i c -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     27230  0.5  0.1 833976 27012 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-osd -i 0 -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     27895  0.4  0.1 831792 27616 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-osd -i 2 -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     28294  0.0  0.0 410648 15620 ?        Ssl  13:37   0:00 
/home/brad/working/src/ceph/build/bin/ceph-mds -i a -c 
/home/brad/working/src/ceph/build/ceph.conf
brad     30635  0.0  0.0 118496   900 pts/2    S+   13:38   0:00 grep --color 
ceph-
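
At this point the cluster should notice that osd.1 went down. A quick sanity
check (assuming you are still in the build directory from the vstart steps
quoted below) would be something like:

$ bin/ceph -s
$ bin/ceph osd tree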

$ egrep -C10 '^2.*Segmentation fault' out/osd.1.log
2016-08-11 13:38:22.837492 7fa0c74ae700  1 -- 127.0.0.1:0/27551 --> 
127.0.0.1:6811/27893 -- osd_ping(ping e10 stamp 2016-08-11 13:38:22.837435) v2 
-- ?+0 0xba0f600 con 0xb92b320
2016-08-11 13:38:22.837725 7fa0c36a1700  1 -- 127.0.0.1:0/27551 <== osd.0 
127.0.0.1:6803/27228 8 ==== osd_ping(ping_reply e10 stamp 2016-08-11 
13:38:22.837435) v2 ==== 47+0+0 (617510928 0 0) 0xb7efa00 con 0xb6560a0
2016-08-11 13:38:22.837737 7fa0c2994700  1 -- 127.0.0.1:0/27551 <== osd.2 
127.0.0.1:6811/27893 8 ==== osd_ping(ping_reply e10 stamp 2016-08-11 
13:38:22.837435) v2 ==== 47+0+0 (617510928 0 0) 0xb7ee400 con 0xb92b320
2016-08-11 13:38:22.837791 7fa0c2893700  1 -- 127.0.0.1:0/27551 <== osd.2 
127.0.0.1:6810/27893 8 ==== osd_ping(ping_reply e10 stamp 2016-08-11 
13:38:22.837435) v2 ==== 47+0+0 (617510928 0 0) 0xb7ef600 con 0xb92b200
2016-08-11 13:38:22.837800 7fa0c37a2700  1 -- 127.0.0.1:0/27551 <== osd.0 
127.0.0.1:6802/27228 8 ==== osd_ping(ping_reply e10 stamp 2016-08-11 
13:38:22.837435) v2 ==== 47+0+0 (617510928 0 0) 0xb7ef000 con 0xb655b00
2016-08-11 13:38:23.871496 7fa0c319c700  1 -- 127.0.0.1:6806/27551 <== osd.2 
127.0.0.1:0/27893 9 ==== osd_ping(ping e10 stamp 2016-08-11 13:38:23.871366) v2 
==== 47+0+0 (3526151526 0 0) 0xb90d000 con 0xb657060
2016-08-11 13:38:23.871497 7fa0c309b700  1 -- 127.0.0.1:6807/27551 <== osd.2 
127.0.0.1:0/27893 9 ==== osd_ping(ping e10 stamp 2016-08-11 13:38:23.871366) v2 
==== 47+0+0 (3526151526 0 0) 0xb90b400 con 0xb6572a0
2016-08-11 13:38:23.871540 7fa0c319c700  1 -- 127.0.0.1:6806/27551 --> 
127.0.0.1:0/27893 -- osd_ping(ping_reply e10 stamp 2016-08-11 13:38:23.871366) 
v2 -- ?+0 0xb7ede00 con 0xb657060
2016-08-11 13:38:23.871574 7fa0c309b700  1 -- 127.0.0.1:6807/27551 --> 
127.0.0.1:0/27893 -- osd_ping(ping_reply e10 stamp 2016-08-11 13:38:23.871366) 
v2 -- ?+0 0xb7ef400 con 0xb6572a0
2016-08-11 13:38:24.039347 7fa0dd331700  1 -- 127.0.0.1:6804/27551 --> 
127.0.0.1:6790/0 -- pg_stats(0 pgs tid 6 v 0) v1 -- ?+0 0xb6c4680 con 0xb654fc0
2016-08-11 13:38:24.381589 7fa0eb2d58c0 -1 *** Caught signal (Segmentation 
fault) **
 in thread 7fa0eb2d58c0 thread_name:ceph-osd

 ceph version v11.0.0-798-g62e8a97 (62e8a97bebb8581318d5484391ec0b131e6f7c71)
 1: /home/brad/working/src/ceph/build/bin/ceph-osd() [0xc1f87e]
 2: (()+0x10c30) [0x7fa0e7546c30]
 3: (pthread_join()+0xad) [0x7fa0e753e6bd]
 4: (Thread::join(void**)+0x2c) [0xe8622c]
 5: (DispatchQueue::wait()+0x12) [0xf47002]
 6: (SimpleMessenger::wait()+0xb59) [0xe21389]
 7: (main()+0x2f00) [0x6b3010]
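
For the "add prints somewhere, recompile, re-run" part of your workflow,
something like this should work from the build directory (just a rough
sketch; adjust the make target to whichever binary you modified):

$ ../src/stop.sh
$ make ceph-osd
$ OSD=3 MON=3 MDS=1 ../src/vstart.sh -n -x -l

Note that stop.sh tears down everything vstart started, and -n creates the
cluster from scratch again.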

-- 
HTH,
Brad

> 
> 
> 
> On Thu, Aug 11, 2016 at 4:17 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
> 
> > On Thu, Aug 11, 2016 at 12:45 AM, agung Laksono <agung.sma...@gmail.com>
> > wrote:
> > > I've seen the Ansible option before, but not in detail.
> > > I have also tried to run the quick guide for development.
> > > It did not work on the VM where I already installed Ceph.
> > >
> > > The error is:
> > >
> > >  agung@arrasyid:~/ceph/ceph/src$ ./vstart.sh -d -n -x
> > > ** going verbose **
> > > [./fetch_config /tmp/fetched.ceph.conf.3818]
> > > ./init-ceph: failed to fetch config with './fetch_config
> > > /tmp/fetched.ceph.conf.3818'
> > >
> > >
> > > Do I need to use a vanilla Ceph checkout to make vstart.sh work?
> > >
> > > When I learn a cloud system, I usually compile
> > > the source code, run it in pseudo-distributed mode, modify the code,
> > > add prints somewhere, then recompile and re-run the system.
> > > Might this method work for exploring Ceph?
> >
> > It should, sure.
> >
> > Try this.
> >
> > 1) Clone a fresh copy of the repo.
> > 2) ./do_cmake.sh
> > 3) cd build
> > 4) make
> > 5) OSD=3 MON=3 MDS=1 ../src/vstart.sh -n -x -l
> > 6) bin/ceph -s
> >
> > That should give you a working cluster with 3 MONs, 3 OSDs and 1 MDS.
> >
> > --
> > Cheers,
> > Brad
> >
> > >
> > >
> > > On Wed, Aug 10, 2016 at 9:14 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
> > >>
> > >> On Wed, Aug 10, 2016 at 12:26 AM, agung Laksono <agung.sma...@gmail.com>
> > >> wrote:
> > >> >
> > >> > Hi Ceph users,
> > >> >
> > >> > I am new to Ceph. I've succeeded in installing Ceph in 4 VMs using the
> > >> > Quick installation guide in the Ceph documentation.
> > >> >
> > >> > I've also managed to compile
> > >> > Ceph from source code, and build and install it in a single VM.
> > >> >
> > >> > What I want to do next is to run Ceph with multiple nodes in a cluster,
> > >> > but only inside a single machine. I need this because I will
> > >> > study the Ceph code, modify some of it, recompile and
> > >> > redeploy on the node/VM. For my study, I also have to be able to run/kill
> > >> > a particular node.
> > >> >
> > >> > Does somebody know how to configure a single VM to run multiple Ceph
> > >> > OSDs and monitors?
> > >> >
> > >> > Advice and comments are very much appreciated. Thanks.
> > >>
> > >> Hi,
> > >>
> > >> Did you see this?
> > >>
> > >>
> > >> http://docs.ceph.com/docs/hammer/dev/quick_guide/#running-a-development-deployment
> > >>
> > >> Also take a look at the AIO (all in one) options in ceph-ansible.
> > >>
> > >> HTH,
> > >> Brad
> > >
> > >
> > >
> > >
> > > --
> > > Cheers,
> > >
> > > Agung Laksono
> > >
> >
> >
> >
> 
> 
> -- 
> Cheers,
> 
> Agung Laksono