Kevin-

I applied the patch to the v2.8.2 source. The server still doesn't start, but now I get more output, which I've attached. I'm downloading the latest source from fisheye now and will let you know how it goes with that.

-crispy


On 10/21/2010 12:12 PM, Kevin Harms wrote:
Chris,

   sorry about this. Bad coding on my part. Try this patch. I think it will at 
least get you farther.

kevin

--- src/io/trove/trove-migrate.c        2 Sep 2009 21:10:40 -0000       1.6
+++ src/io/trove/trove-migrate.c        21 Oct 2010 16:10:40 -0000
@@ -267,7 +267,7 @@ int trove_migrate (TROVE_method_id metho
      s = wtime();
  #endif

-    count          = 10;
+    count          = 1;
      pos            = TROVE_ITERATE_START;
      name.buffer    = malloc(PATH_MAX);
      name.buffer_sz = PATH_MAX;


On Oct 21, 2010, at 10:32 AM, Chris Poultney wrote:

Phil, thanks for this. I wasn't sure if it was me or PVFS as I haven't set up 
PVFS before. For now it's working fine with two separate servers, config files, 
etc. I'm curious to see what Becky comes up with for multiple ports for one 
alias - it may provide an interim solution that's closer to the desired behavior.

-crispy


On 10/21/2010 11:25 AM, Phil Carns wrote:
Oh, by the way, I tried my example with PVFS trunk rather than one of
the official releases, which might explain why my failure behavior was a
little different than yours. They probably both have the same root
problem, though.

-Phil

On 10/21/2010 11:24 AM, Phil Carns wrote:
Hi Chris,

I believe that your configuration file is correct. A pvfs2-server
config file can have two <Filesystem> sections, and the only things
that need to be different between the two are the "ID" and the "Name".
Everything else can be completely identical (there is no need to
change the RootHandle or Ranges, though doing so as you have in your
example won't hurt anything).

This will result in two subdirectories in your storage space
(/pvfs2-storage-multi), one for each file system. Both will be
serviced on the same port; you just select which to mount via:

mount -t pvfs2 tcp://wongburger:3334/fs1 /dir1
or
mount -t pvfs2 tcp://wongburger:3334/fs2 /dir2
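
For reference, a rough sketch of what the two <Filesystem> sections might
look like (illustrative only: the layout follows the general shape of
pvfs2-genconfig output for 2.8.x, and the Name, ID, and handle values here
are invented):

    <Filesystem>
        Name fs1
        ID 1095186915
        RootHandle 1048576
        <MetaHandleRanges>
            Range wongburger 4-536870914
        </MetaHandleRanges>
        <DataHandleRanges>
            Range wongburger 536870915-1073741825
        </DataHandleRanges>
    </Filesystem>
    <Filesystem>
        Name fs2
        ID 1095186916
        RootHandle 1048576
        <MetaHandleRanges>
            Range wongburger 4-536870914
        </MetaHandleRanges>
        <DataHandleRanges>
            Range wongburger 536870915-1073741825
        </DataHandleRanges>
    </Filesystem>

Note that only Name and ID differ; the handle ranges can be identical
between the two sections.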

Unfortunately, although you did everything right, it looks like there
is a bug in PVFS. I tried on my laptop with the attached configuration
and my server crashed as well. Mine generated some log messages which
are also attached. I don't think anyone has tried this type of
configuration in a while.

Until this is resolved, you always have the option of simply running
two sets of servers on the same nodes, with completely separate config
files. If you go that route, then you have to change not only the ID
and Name, but also the port, storage space, and log file to keep them
from conflicting.
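
As an illustration of that workaround, the two config files might differ
only in spots like these (a sketch, not tested: directive names follow the
general shape of pvfs2-genconfig output, and the paths, ports, and alias
names are made up):

    # In the first server's config file:
    <Defaults>
        ...
        StorageSpace /pvfs2-storage-a
        LogFile /tmp/pvfs2-server-a.log
    </Defaults>
    <Aliases>
        Alias wongburger_a tcp://wongburger:3334
    </Aliases>

    # In the second server's config file:
    <Defaults>
        ...
        StorageSpace /pvfs2-storage-b
        LogFile /tmp/pvfs2-server-b.log
    </Defaults>
    <Aliases>
        Alias wongburger_b tcp://wongburger:3335
    </Aliases>

plus a distinct Name and ID in each file's <Filesystem> section.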

-Phil



On 10/20/2010 02:38 PM, Chris Poultney wrote:
I'd like to set up two shares to be managed by the same PVFS server.
I've read in the docs that I can have multiple filesystem entries in
pvfs2-fs.conf, but I haven't been able to successfully do this. Does
someone have an example config file with multiple filesystem entries
that they can share?

More details:

I've successfully set up a single share based on the quick start
guide. It's a four-node network: one metadata server, four I/O nodes.
The nodes are identical (each with six dual-core Xeons), connected by LAN,
running Ubuntu Lucid. To generate the two-filesystem config file, I ran
pvfs2-genconfig twice, then pasted the filesystem section from the
second file into the first, making sure the Name and ID fields were
unique. This was based on section 3.9 of the PVFS2 FAQ, "Can I mount more
than one PVFS file system on the same client?"

Using the two-filesystem config file, pvfs2-server -f works, but then
pvfs2-server (with or without -d) ends immediately with a segfault.
Nothing is written to the console or logfiles. I tried a different
version of pvfs2-fs.conf with non-overlapping MetaHandleRange and
DataHandleRange values for the two filesystems, but as I'm not
exactly clear on what these do I'm not sure if that's the right
approach. I've attached the conf file for a single-server test of two
filesystems, with separate Range values.

Thanks,
-crispy


_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users






[S 10/21 13:41] PVFS2 Server on node wongburger version 2.8.2 starting...
TROVE:DBPF:Berkeley DB: illegal record number size
[E 10/21 13:41] src/io/trove/trove-dbpf/dbpf-mgmt.c line 1531: 
dbpf_collection_iterate_op_svc: Unknown error: -1073742095
[E 10/21 13:41]         [bt] pvfs2-server(dbpf_collection_iterate+0x2cf) 
[0x4606af]
[E 10/21 13:41]         [bt] pvfs2-server(trove_collection_iterate+0x3a) 
[0x4168ba]
[E 10/21 13:41]         [bt] pvfs2-server(trove_migrate+0x91) [0x4178d1]
[E 10/21 13:41]         [bt] pvfs2-server() [0x414112]
[E 10/21 13:41]         [bt] pvfs2-server() [0x415264]
[E 10/21 13:41]         [bt] pvfs2-server(main+0xd89) [0x4163c9]
[E 10/21 13:41]         [bt] /lib/libc.so.6(__libc_start_main+0xfd) 
[0x7f8493822c4d]
[E 10/21 13:41]         [bt] pvfs2-server() [0x412ab9]
[E 10/21 13:41] trove_collection_iterate failed: ret=-1073742095 method=1 pos=1 
name=0x7fff6afcf210 coll=831114092 count=0 op=140207390731840
[E 10/21 13:41] trove_migrate failed: ret=-1073742095
[E 10/21 13:41] Error: Could not initialize server interfaces; aborting.
[E 10/21 13:41] Error: Could not initialize server; aborting.
shmat: id 1671191: unable to attach to shared system memory region: Invalid 
argument
shmat: id 1671191: unable to attach to shared system memory region: Invalid 
argument
Segmentation fault
