Robin Humble wrote:
now onto the next problem :)

setup is a single ~1TB software raid0 OST on 'xe', an MDT/MGS on 'x19'
with SATA disk, and 16 other client nodes mounting the Lustre filesystem
and doing bonnie++ over o2ib. all standard Lustre rpms, x86_64 CentOS4.4.

IIRC the variable number of threads is a new feature in 1.5.97?
looks like there's a small bug in it...

it ends up with 512 ll_ost_io_* threads, and the last is hung in a D state
with no i/o on the filesystem, i.e.:
  % ps auxw
  ...
  root      7488  0.0  0.0     0    0 ?        S    19:20   0:00 [ll_ost_io_511]
  root      7489  0.0  0.0     0    0 ?        D    19:20   0:00 [ll_ost_io_512]
  ...

after the i/o:
 num threads   name
 -----------   ----
      5      ll_log_comt_*
      8      ldlm_bl_*
      8      ldlm_cb_*
     15      ldlm_cn_*
    304      ll_ost_*
    512      ll_ost_io_*

the /tmp/lustre-log.* files are kinda huge. they're at
  http://www.cita.utoronto.ca/~rjh/lustre/

there's no LBUG if I restrict the number of threads with, e.g.:
  options ost oss_num_threads=300

let me know if you'd like more info or to try patches etc.

cheers,
robin

Feb 10 19:20:41 xe kernel: LustreError: 7489:0:(ost_handler.c:1555:ost_thread_init()) 
ASSERTION(thread->t_id < OSS_THREADS_MAX) failed
Looks like it's just a slightly bad assert.

Index: ost_handler.c
===================================================================
RCS file: /cvsroot/cfs/lustre-core/ost/ost_handler.c,v
retrieving revision 1.181
diff -u -p -r1.181 ost_handler.c
--- ost_handler.c       10 Feb 2007 06:32:49 -0000      1.181
+++ ost_handler.c       13 Feb 2007 20:54:19 -0000
@@ -1586,7 +1586,7 @@ static int ost_thread_init(struct ptlrpc

        LASSERT(thread != NULL);
        LASSERT(thread->t_data == NULL);
-        LASSERT(thread->t_id < OSS_THREADS_MAX);
+        LASSERT(thread->t_id <= OSS_THREADS_MAX);

        OBD_ALLOC_PTR(tls);
        if (tls != NULL) {

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
