If you are dealing with creating/removing lots of small files, a large journal
will help. Currently the only way to size it is trial and error; we'll look
into making this easier.
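
For example, you can ask for a bigger journal at mkfs time with -J. The size
below is just an illustration, not a tested recommendation, and the device
path is a placeholder:

  # request a 256M journal instead of the default
  mkfs.ocfs2 -J size=256M -L photos /dev/sdX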

Pick the largest timeout among all the I/O subcomponents (iSCSI, multipath,
etc.) when computing the heartbeat timeout.
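
The formula from the FAQ is O2CB_HEARTBEAT_THRESHOLD = (timeout in secs / 2)
+ 1. As a sketch, assuming your 180-second iSCSI timeout is the largest one:

  # /etc/sysconfig/o2cb (the path may differ on your distribution)
  # (180 / 2) + 1 = 91
  O2CB_HEARTBEAT_THRESHOLD=91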

Fabio Corazza wrote:
Sorry for my laziness; I just read the mkfs.ocfs2 man page and answered
some of my questions myself.

If you can still give me some hints about the block-size and cluster-size
values for the filesystem I'm going to create, I'd appreciate it. I'm also
a little curious about the journal size, and how and why it should be
tuned.

I'd also have another question: the FAQ states that I should set
O2CB_HEARTBEAT_THRESHOLD to a value calculated with a specific formula
from the I/O layer timeout. Where can I look to obtain that timeout value?

I'm using the iSCSI Linux initiator with the parameter ConnFailTimeout=180;
I don't know whether that is the I/O layer timeout in question. I'm also
using multipath-tools.
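
In case it matters, I set the timeout like this (assuming the old
linux-iscsi style configuration file, which is what I mean by "iSCSI Linux
initiator"):

  # /etc/iscsi.conf -- assumed location; ConnFailTimeout is in seconds
  ConnFailTimeout=180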


Thanks,
Fabio

Fabio Corazza wrote:
OK, so basically the filesystem keeps relying on the EVMS devices even if
ocfs2console or ocfs2-tools detect other devices. Please confirm that this
is correct.

Also, regarding the options available when creating an ocfs2 volume, which
would you suggest for a volume that _only_ stores a LOT of small files
(images, 3MB maximum each) and a lot of directories? I will have 2 nodes
mounting read/write and a third node that only reads (it's the backup
server).

[-b block-size] [-C cluster-size] [-N number-of-node-slots] [-T
filesystem-type] [-L volume-label] [-J journal-options] [-HFqvV] device
[blocks-count]

Basically: block-size, cluster-size.
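
Just to make my question concrete, I imagine the invocation would look
something like this (the values are pure guesses on my side, which is
exactly what I'm asking about):

  # small block/cluster sizes guessed for many small files; not advice
  mkfs.ocfs2 -b 4K -C 8K -N 4 -L photos /dev/sdX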

Also, what does number-of-node-slots mean? The maximum number of nodes the
filesystem can be accessed from? I've seen that it defaults to 4; can it be
expanded after filesystem creation, or does it have to be set in advance?
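
(From a quick look at the tools it seems tunefs.ocfs2 might be able to add
slots later; a sketch, assuming an unmounted volume and that slots can only
be increased:)

  # grow the slot count from 4 to 8; -N is tunefs.ocfs2's node-slots option
  tunefs.ocfs2 -N 8 /dev/sdX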

Also, what about journal-options?


Thanks for your attention, highly appreciated.


Fabio


Sunil Mushran wrote:
Well, mounted.ocfs2 is dumb... as in, it just scans /proc/partitions.
We have to teach it new tricks. :)
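
So a quick way to see what it is chewing on is to look at that file
directly; a stale dm entry there will keep showing up in its output:

  # list the device-mapper entries that mounted.ocfs2 will scan
  grep dm- /proc/partitions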

Fabio Corazza wrote:
Hi there,
 I've just set up an EVMS cluster with Heartbeat 2.0.7 and OCFS2.

Everything seems to be working fine except this:

[EMAIL PROTECTED] photos]# mounted.ocfs2 -d
Device                FS     UUID                                  Label
/dev/dm-6             ocfs2  c1a56afe-3d4b-4b88-919c-b9454b1ec708  cache
/dev/dm-7             ocfs2  c1a56afe-3d4b-4b88-919c-b9454b1ec708  cache
/dev/dm-8             ocfs2  0663bfeb-60ad-400a-8c1a-61156772eebc  photos
/dev/dm-14            ocfs2  e2533760-1c3f-4f7a-886f-8769e73f1088  photos
/dev/dm-15            ocfs2  e2533760-1c3f-4f7a-886f-8769e73f1088  photos

[EMAIL PROTECTED] photos]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/dm-6             ocfs2  mybbook-as01, mybbook-as02
/dev/dm-7             ocfs2  mybbook-as01, mybbook-as02
/dev/dm-8             ocfs2  Unknown: OCFS2 directory corrupted
/dev/dm-14            ocfs2  mybbook-as01, mybbook-as02
/dev/dm-15            ocfs2  mybbook-as01, mybbook-as02
[EMAIL PROTECTED] photos]#

The same in the other node.


I tried rebooting, running dmsetup remove_all, and restarting EVMS; nothing
helps. That dm-8 entry is still there. Everything else is fine; what could
it be? The filesystems themselves seem to work correctly.
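
The only other thing I can think of trying is removing just that one map
instead of all of them (assuming nothing still holds it open):

  # list all device-mapper maps and find which one backs dm-8
  dmsetup info
  # then drop only that map by name
  dmsetup remove <map-name-for-dm-8>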



Regards,



