Sorry, somehow my copy/paste keeps excluding this part:

Sep 26 13:28:38 dev03 ntpd[8930]: synchronized to 10.99.1.5, stratum 3
Sep 26 13:52:25 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=0: Trying to acquire journal lock...
Sep 26 13:52:25 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=0: Busy
Sep 26 13:52:37 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=1: Trying to acquire journal lock...
Sep 26 13:52:37 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=1: Busy
*Sep 26 13:52:37 dev03 gfs_controld[14620]: gfs_sdb1 finish: needs recovery jid 1 nodeid 1 status 1*

On Fri, Sep 26, 2008 at 2:38 PM, Alan A <[EMAIL PROTECTED]> wrote:

> Node3
>
> /var/log/messages
>
> Sep 26 13:28:13 dev03 clvmd: Cluster LVM daemon started - connected to CMAN
> Sep 26 13:28:17 dev03 kernel: GFS 0.1.23-5.el5_2.2 (built Aug 14 2008 17:08:35) installed
> Sep 26 13:28:17 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_fs1"
> Sep 26 13:28:17 dev03 kernel: Joined cluster. Now mounting FS...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Trying to acquire journal lock...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Looking at journal...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Done
> Sep 26 13:28:18 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_sdb1"
> Sep 26 13:28:18 dev03 kernel: Joined cluster. Now mounting FS...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Trying to acquire journal lock...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Looking at journal...
> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Done
>
> On Fri, Sep 26, 2008 at 2:06 PM, Alan A <[EMAIL PROTECTED]> wrote:
>
>> I have been able to recreate the problem with gfs_grow. Here is the output
>> of the test command, and the actual command, with the /var/log/messages -
>> all from node3.
>> I am opening a ticket with RH, and will give you the
>> ticket number afterwards.
>>
>> [EMAIL PROTECTED] /]# gfs_grow -v -T /lvm_test2
>> FS: Mount Point: /lvm_test2
>> FS: Device: /dev/mapper/gfs_sdb1-gfs_sdb1
>> FS: Options: rw,hostdata=jid=2:id=262146:first=0
>> FS: Size: 1572864
>>
>> RGRP: Current Resource Group List:
>> RI: Addr 1328945, RgLen 15, Start 1328960, DataLen 243904, BmapLen 60976
>> RI: Addr 1310720, RgLen 2, Start 1310722, DataLen 18220, BmapLen 4555
>> RI: Addr 1250100, RgLen 4, Start 1250104, DataLen 60616, BmapLen 15154
>> RI: Addr 1189480, RgLen 4, Start 1189484, DataLen 60616, BmapLen 15154
>> RI: Addr 1128860, RgLen 4, Start 1128864, DataLen 60616, BmapLen 15154
>> RI: Addr 1068240, RgLen 4, Start 1068244, DataLen 60616, BmapLen 15154
>> RI: Addr 1007620, RgLen 4, Start 1007624, DataLen 60616, BmapLen 15154
>> RI: Addr 947000, RgLen 4, Start 947004, DataLen 60616, BmapLen 15154
>> RI: Addr 886380, RgLen 4, Start 886384, DataLen 60616, BmapLen 15154
>> RI: Addr 825760, RgLen 4, Start 825764, DataLen 60616, BmapLen 15154
>> RI: Addr 765140, RgLen 4, Start 765144, DataLen 60616, BmapLen 15154
>> RI: Addr 704512, RgLen 4, Start 704516, DataLen 60624, BmapLen 15156
>> RI: Addr 545589, RgLen 4, Start 545593, DataLen 60612, BmapLen 15153
>> RI: Addr 484970, RgLen 4, Start 484974, DataLen 60612, BmapLen 15153
>> RI: Addr 424351, RgLen 4, Start 424355, DataLen 60612, BmapLen 15153
>> RI: Addr 363732, RgLen 4, Start 363736, DataLen 60612, BmapLen 15153
>> RI: Addr 303113, RgLen 4, Start 303117, DataLen 60612, BmapLen 15153
>> RI: Addr 242494, RgLen 4, Start 242498, DataLen 60612, BmapLen 15153
>> RI: Addr 181875, RgLen 4, Start 181879, DataLen 60612, BmapLen 15153
>> RI: Addr 121256, RgLen 4, Start 121260, DataLen 60612, BmapLen 15153
>> RI: Addr 60637, RgLen 4, Start 60641, DataLen 60612, BmapLen 15153
>> RI: Addr 17, RgLen 4, Start 21, DataLen 60616, BmapLen 15154
>> RGRP: 22 Resource groups in total
>>
>> JRNL: Current Journal List:
>> JI: Addr 671744 NumSeg 2048 SegSize 16
>> JI: Addr 638976 NumSeg 2048 SegSize 16
>> JI: Addr 606208 NumSeg 2048 SegSize 16
>> JRNL: 3 Journals in total
>>
>> DEV: Size: 1703936
>>
>> RGRP: New Resource Group List:
>> RI: Addr 1572864, RgLen 9, Start 1572873, DataLen 131060, BmapLen 32765
>> RGRP: 1 Resource groups in total
>>
>> [EMAIL PROTECTED] /]# gfs_grow -v /lvm_test2
>> FS: Mount Point: /lvm_test2
>> FS: Device: /dev/mapper/gfs_sdb1-gfs_sdb1
>> FS: Options: rw,hostdata=jid=2:id=262146:first=0
>> FS: Size: 1572864
>>
>> RGRP: Current Resource Group List:
>> RI: Addr 1328945, RgLen 15, Start 1328960, DataLen 243904, BmapLen 60976
>> RI: Addr 1310720, RgLen 2, Start 1310722, DataLen 18220, BmapLen 4555
>> RI: Addr 1250100, RgLen 4, Start 1250104, DataLen 60616, BmapLen 15154
>> RI: Addr 1189480, RgLen 4, Start 1189484, DataLen 60616, BmapLen 15154
>> RI: Addr 1128860, RgLen 4, Start 1128864, DataLen 60616, BmapLen 15154
>> RI: Addr 1068240, RgLen 4, Start 1068244, DataLen 60616, BmapLen 15154
>> RI: Addr 1007620, RgLen 4, Start 1007624, DataLen 60616, BmapLen 15154
>> RI: Addr 947000, RgLen 4, Start 947004, DataLen 60616, BmapLen 15154
>> RI: Addr 886380, RgLen 4, Start 886384, DataLen 60616, BmapLen 15154
>> RI: Addr 825760, RgLen 4, Start 825764, DataLen 60616, BmapLen 15154
>> RI: Addr 765140, RgLen 4, Start 765144, DataLen 60616, BmapLen 15154
>> RI: Addr 704512, RgLen 4, Start 704516, DataLen 60624, BmapLen 15156
>> RI: Addr 545589, RgLen 4, Start 545593, DataLen 60612, BmapLen 15153
>> RI: Addr 484970, RgLen 4, Start 484974, DataLen 60612, BmapLen 15153
>> RI: Addr 424351, RgLen 4, Start 424355, DataLen 60612, BmapLen 15153
>> RI: Addr 363732, RgLen 4, Start 363736, DataLen 60612, BmapLen 15153
>> RI: Addr 303113, RgLen 4, Start 303117, DataLen 60612, BmapLen 15153
>> RI: Addr 242494, RgLen 4, Start 242498, DataLen 60612, BmapLen 15153
>> RI: Addr 181875, RgLen 4, Start 181879, DataLen 60612, BmapLen 15153
>> RI: Addr 121256, RgLen 4, Start 121260, DataLen 60612, BmapLen 15153
>> RI: Addr 60637, RgLen 4, Start 60641, DataLen 60612, BmapLen 15153
>> RI: Addr 17, RgLen 4, Start 21, DataLen 60616, BmapLen 15154
>> RGRP: 22 Resource groups in total
>>
>> JRNL: Current Journal List:
>> JI: Addr 671744 NumSeg 2048 SegSize 16
>> JI: Addr 638976 NumSeg 2048 SegSize 16
>> JI: Addr 606208 NumSeg 2048 SegSize 16
>> JRNL: 3 Journals in total
>>
>> DEV: Size: 1703936
>>
>> RGRP: New Resource Group List:
>> RI: Addr 1572864, RgLen 9, Start 1572873, DataLen 131060, BmapLen 32765
>> RGRP: 1 Resource groups in total
>>
>> Preparing to write new FS information...
>> Done.
>>
>> Node3
>>
>> /var/log/messages
>>
>> Sep 26 13:28:13 dev03 clvmd: Cluster LVM daemon started - connected to CMAN
>> Sep 26 13:28:17 dev03 kernel: GFS 0.1.23-5.el5_2.2 (built Aug 14 2008 17:08:35) installed
>> Sep 26 13:28:17 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_fs1"
>> Sep 26 13:28:17 dev03 kernel: Joined cluster. Now mounting FS...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Trying to acquire journal lock...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Looking at journal...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_fs1.2: jid=2: Done
>> Sep 26 13:28:18 dev03 kernel: Trying to join cluster "lock_dlm", "test1_cluster:gfs_sdb1"
>> Sep 26 13:28:18 dev03 kernel: Joined cluster. Now mounting FS...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Trying to acquire journal lock...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Looking at journal...
>> Sep 26 13:28:18 dev03 kernel: GFS: fsid=test1_cluster:gfs_sdb1.2: jid=2: Done
>>
>> On Fri, Sep 26, 2008 at 1:59 PM, Bob Peterson <[EMAIL PROTECTED]> wrote:
>>
>>> ----- "Alan A" <[EMAIL PROTECTED]> wrote:
>>> | Again, thanks for the fast and prompt response, Bob.
>>> |
>>> | I will try to reproduce the problem with gfs_grow.
>>> |
>>> | One more question regarding GFS - what steps would you recommend (if any)
>>> | for growing and shrinking an active GFS volume?
>>>
>>> Hi Alan,
>>>
>>> Neither GFS nor GFS2 volumes can be shrunk. Eventually I
>>> need to start working on a gfs2_shrink tool for gfs2, but I
>>> don't think GFS will ever be able to shrink.
>>>
>>> As for growing, it sounds like you're already familiar with
>>> that. You just do something like:
>>>
>>> lvresize or lvextend the logical volume
>>> mount the gfs volume to a mount point
>>> gfs_grow /your/mount/point
>>>
>>> It's probably safest to run gfs_grow when there is not a lot of
>>> system activity - for example, at night when the system is not
>>> being beaten up by lots of I/O.
>>>
>>> Regards,
>>>
>>> Bob Peterson
>>> Red Hat Clustering & GFS
>>>
>>> --
>>> Linux-cluster mailing list
>>> [email protected]
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>> --
>> Alan A.
>
> --
> Alan A.

--
Alan A.
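Bob's grow procedure above can be sketched as a short shell script. The device and mount-point names below are taken from the transcript but should be treated as placeholders for your own; on a cluster, run this on exactly one node. By default it only echoes the commands (RUN=echo), so nothing is executed until you clear RUN:

```shell
#!/bin/sh
# Sketch of the grow steps Bob outlines - not a drop-in script.
# LV and MP are placeholders; substitute your own volume and mount point.
# RUN=echo keeps this a dry run; set RUN="" to execute for real.
RUN=${RUN:-echo}
LV=/dev/mapper/gfs_sdb1-gfs_sdb1   # clustered logical volume backing GFS
MP=/lvm_test2                      # GFS must already be mounted here

$RUN lvextend -l +100%FREE "$LV"   # 1. grow the logical volume first
$RUN gfs_grow -v -T "$MP"          # 2. -T = test mode: print the plan only
$RUN gfs_grow -v "$MP"             # 3. real run: writes new FS information
```

Running the -T form first, as Alan does in the transcript, shows the new resource groups gfs_grow would add without modifying the filesystem.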
