Re: [Ocfs2-users] OCFS2 and db_block_size

2011-11-14 Thread Sunil Mushran
We talk about this in the user's guide.
1. Always use 4K blocksize.
2. Never set the cluster size less than the database block size.

Having a smaller cluster size could mean that a db block may not be contiguous
on disk, and you don't want that, for performance and other reasons. Using a
larger cluster size is an easy way to ensure the files are contiguous.
Contiguity can only help performance.
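The cluster-size argument above can be sketched with a quick calculation. The helper below is purely illustrative (not an OCFS2 tool): a db block larger than the cluster size always touches more than one cluster, and those clusters need not be adjacent on disk.

```python
# Sketch: why the OCFS2 cluster size should be >= db_block_size.
# A db block that fits within one cluster is always physically
# contiguous; one that is larger may straddle clusters that the
# allocator placed far apart.

def clusters_spanned(db_block_size, cluster_size, offset=0):
    """Number of clusters a db block starting at byte `offset`
    within a file can touch."""
    first = offset // cluster_size
    last = (offset + db_block_size - 1) // cluster_size
    return last - first + 1

# 8K db block on a 4K-cluster volume: always spans 2 clusters,
# which need not be adjacent on disk.
print(clusters_spanned(8192, 4096))   # 2
# 8K db block on an 8K-cluster volume (aligned I/O): 1 cluster.
print(clusters_spanned(8192, 8192))   # 1
```

With an 8K cluster size, every aligned 8K db block lands in exactly one cluster, so contiguity is guaranteed regardless of how fragmented the free space is.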

On 11/14/2011 03:35 PM, Pravin K Patil wrote:
> Hi All,
> Is there a benchmark study done on different block sizes of ocfs2 and the 
> corresponding db_block_size and its impact on read / write?
> Similarly, is there any study done for the cluster size of ocfs2 and the 
> corresponding db_block_size and its impact on read / write?
> For example, if the db_block_size is 8K and we have an ocfs2 cluster size of 
> 4K, will it have any performance impact? In other words, if we make the 
> cluster size of the file systems on which the data files are located 8K, 
> will it improve performance? If so, is it for read or write?
> Looking for actual experience on the correlation between the ocfs2 block 
> size, cluster size, and db_block_size.
>
> Regards,
> Pravin
>


___
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users


[Ocfs2-users] OCFS2 and db_block_size

2011-11-14 Thread Pravin K Patil
Hi All,

Is there a benchmark study done on different block sizes of ocfs2 and the
corresponding db_block_size and its impact on read / write?
Similarly, is there any study done for the cluster size of ocfs2 and the
corresponding db_block_size and its impact on read / write?

For example, if the db_block_size is 8K and we have an ocfs2 cluster size of
4K, will it have any performance impact? In other words, if we make the
cluster size of the file systems on which the data files are located 8K,
will it improve performance? If so, is it for read or write?

Looking for actual experience on the correlation between the ocfs2 block
size, cluster size, and db_block_size.

Regards,
Pravin

Re: [Ocfs2-users] dlm locking

2011-11-14 Thread Sunil Mushran
o2image is only useful for debugging. It allows us to get a copy of the file
system on which we can test fsck in-house. The files in lost+found have to be
resolved manually. If they are junk, delete them. If useful, move them to
another directory.
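As a rough illustration of that manual triage: recovered inodes land in lost+found under their inode numbers (e.g. "#96784"), so sniffing a few leading bytes helps decide keep vs. delete. The mount point and the content checks below are assumptions for the sketch, not part of any OCFS2 tool.

```python
# Sketch: triaging files fsck.ocfs2 moved into lost+found.
# Checks a few leading bytes to guess at content type; the WAV
# check mirrors the .wav file recovered in the fsck run below.
import os

def classify(path):
    """Very rough content sniff: 'wav', 'text', or 'binary'."""
    with open(path, "rb") as f:
        head = f.read(12)
    if head.startswith(b"RIFF") and head[8:12] == b"WAVE":
        return "wav"
    if head and all(32 <= b < 127 or b in (9, 10, 13) for b in head):
        return "text"
    return "binary"

# Hypothetical usage on a mounted volume (path is an assumption):
# lf = "/mnt/ocfs2vol/lost+found"
# for name in os.listdir(lf):
#     print(name, classify(os.path.join(lf, name)))
```

Anything identified as useful gets moved back to a sensible directory; unrecognizable junk gets deleted.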

On 11/11/2011 05:36 PM, Nick Khamis wrote:
> All Fixed!
>
> Just a few questions. Is there any documentation on howto diagnose on
> ocfs2 filesystem:
> * How to transfer an image file for testing onto a different machine.
> As you did with "o2image.out"
> * Does "fsck.ocfs2 -fy /dev/loop0" pretty much fix all the common problems
> * What can I do with the files in lost+found
>
> Thanks Again,
>
> Nick.
>
> On Fri, Nov 11, 2011 at 8:02 PM, Sunil Mushran  
> wrote:
>> So it detected one cluster that was doubly allocated. It fixed it.
>> Details below. The other fixes could be because the o2image was
>> taken on a live volume.
>>
>> As to how this could happen... I would look at the storage.
>>
>>
>> # fsck.ocfs2 -fy /dev/loop0
>> fsck.ocfs2 1.6.3
>> Checking OCFS2 filesystem in /dev/loop0:
>>   Label:  AsteriskServer
>>   UUID:   3A791AB36DED41008E58CEF52EBEEFD3
>>   Number of blocks:   592384
>>   Block size: 4096
>>   Number of clusters: 592384
>>   Cluster size:   4096
>>   Number of slots:2
>>
>> /dev/loop0 was run with -f, check forced.
>> Pass 0a: Checking cluster allocation chains
>> Pass 0b: Checking inode allocation chains
>> Pass 0c: Checking extent block allocation chains
>> Pass 1: Checking inodes and blocks.
>> Duplicate clusters detected.  Pass 1b will be run
>> Running additional passes to resolve clusters claimed by more than one
>> inode...
>> Pass 1b: Determining ownership of multiply-claimed clusters
>> Pass 1c: Determining the names of inodes owning multiply-claimed clusters
>> Pass 1d: Reconciling multiply-claimed clusters
>> Cluster 161335 is claimed by the following inodes:
>>   /asterisk/extensions.conf
>>   /moh/macroform-cold_day.wav
>> [DUP_CLUSTERS_CLONE] Inode "/asterisk/extensions.conf" may be cloned or
>> deleted to break the claim it has on its clusters. Clone inode
>> "/asterisk/extensions.conf" to break claims on clusters it shares with other
>> inodes? y
>> [DUP_CLUSTERS_CLONE] Inode "/moh/macroform-cold_day.wav" may be cloned or
>> deleted to break the claim it has on its clusters. Clone inode
>> "/moh/macroform-cold_day.wav" to break claims on clusters it shares with
>> other inodes? y
>> Pass 2: Checking directory entries.
>> [DIRENT_INODE_FREE] Directory entry 'musiconhold.conf' refers to inode
>> number 35348 which isn't allocated, clear the entry? y
>> Pass 3: Checking directory connectivity.
>> [LOSTFOUND_MISSING] /lost+found does not exist.  Create it so that we can
>> possibly fill it with orphaned inodes? y
>> Pass 4a: checking for orphaned inodes
>> Pass 4b: Checking inodes link counts.
>> [INODE_COUNT] Inode 96783 has a link count of 1 on disk but directory entry
>> references come to 2. Update the count on disk to match? y
>> [INODE_NOT_CONNECTED] Inode 96784 isn't referenced by any directory entries.
>>   Move it to lost+found? y
>> [INODE_NOT_CONNECTED] Inode 96785 isn't referenced by any directory entries.
>>   Move it to lost+found? y
>> [INODE_NOT_CONNECTED] Inode 96794 isn't referenced by any directory entries.
>>   Move it to lost+found? y
>> [INODE_NOT_CONNECTED] Inode 96796 isn't referenced by any directory entries.
>>   Move it to lost+found? y
>> All passes succeeded.
>> Slot 0's journal dirty flag removed
>> Slot 1's journal dirty flag removed
>>
>>
>> [root@ca-test92 ocfs2]# fsck.ocfs2 -fy /dev/loop0
>> fsck.ocfs2 1.6.3
>> Checking OCFS2 filesystem in /dev/loop0:
>>   Label:  AsteriskServer
>>   UUID:   3A791AB36DED41008E58CEF52EBEEFD3
>>   Number of blocks:   592384
>>   Block size: 4096
>>   Number of clusters: 592384
>>   Cluster size:   4096
>>   Number of slots:2
>>
>> /dev/loop0 was run with -f, check forced.
>> Pass 0a: Checking cluster allocation chains
>> Pass 0b: Checking inode allocation chains
>> Pass 0c: Checking extent block allocation chains
>> Pass 1: Checking inodes and blocks.
>> Pass 2: Checking directory entries.
>> Pass 3: Checking directory connectivity.
>> Pass 4a: checking for orphaned inodes
>> Pass 4b: Checking inodes link counts.
>> All passes succeeded.

