Chris, sorry about this. Bad coding on my part. Try this patch. I think it will at least get you farther.
kevin
--- src/io/trove/trove-migrate.c 2 Sep 2009 21:10:40 -0000 1.6
+++ src/io/trove/trove-migrate.c 21 Oct 2010 16:10:40 -0000
@@ -267,7 +267,7 @@ int trove_migrate (TROVE_method_id metho
     s = wtime();
 #endif
-    count = 10;
+    count = 1;
     pos = TROVE_ITERATE_START;
     name.buffer = malloc(PATH_MAX);
     name.buffer_sz = PATH_MAX;
On Oct 21, 2010, at 10:32 AM, Chris Poultney wrote:
> Phil, thanks for this. I wasn't sure if it was me or PVFS, as I haven't set up
> PVFS before. For now it's working fine with two separate servers, config
> files, etc. I'm curious to see what Becky comes up with for multiple ports for
> one alias - it may provide an interim solution that's closer to the desired
> behavior.
>
> -crispy
>
>
> On 10/21/2010 11:25 AM, Phil Carns wrote:
>> Oh, by the way, I tried my example with PVFS trunk rather than one of
>> the official releases, which might explain why my failure behavior was a
>> little different from yours. They probably both have the same root
>> problem, though.
>>
>> -Phil
>>
>> On 10/21/2010 11:24 AM, Phil Carns wrote:
>>> Hi Chris,
>>>
>>> I believe that your configuration file is correct. A pvfs2-server
>>> config file can have two <Filesystem> sections, and the only things
>>> that need to be different between the two are the "ID" and the "Name".
>>> Everything else can be completely identical (there is no need to
>>> change the RootHandle or Ranges, though doing so as you have in your
>>> example won't hurt anything).
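A sketch of the shape Phil describes — two <FileSystem> sections that differ only in Name and ID. The exact option names and values here are illustrative, not copied from a real pvfs2-genconfig output, and each section carries many more options (elided with "...") that pvfs2-genconfig fills in:

```
<FileSystem>
    Name fs1
    ID 9
    ...
</FileSystem>
<FileSystem>
    Name fs2
    ID 10
    ...
</FileSystem>
```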
>>>
>>> This will result in two subdirectories in your storage space
>>> (/pvfs2-storage-multi), one for each file system. Both will be
>>> serviced on the same port; you just select which to mount via:
>>>
>>> mount -t pvfs2 tcp://wongburger:3334/fs1 /dir1
>>> or
>>> mount -t pvfs2 tcp://wongburger:3334/fs2 /dir2
>>>
>>> Unfortunately, although you did everything right, it looks like there
>>> is a bug in PVFS. I tried on my laptop with the attached configuration
>>> and my server crashed as well. Mine generated some log messages which
>>> are also attached. I don't think anyone has tried this type of
>>> configuration in a while.
>>>
>>> Until this is resolved, you always have the option of simply running
>>> two sets of servers on the same nodes, with completely separate config
>>> files. If you go that route, then you have to change not only the ID
>>> and Name, but also the port, storage space, and log file to keep them
>>> from conflicting.
>>>
>>> -Phil
>>>
>>>
>>>
>>> On 10/20/2010 02:38 PM, Chris Poultney wrote:
>>>> I'd like to set up two shares to be managed by the same PVFS server.
>>>> I've read in the docs that I can have multiple filesystem entries in
>>>> pvfs2-fs.conf, but I haven't been able to successfully do this. Does
>>>> someone have an example config file with multiple filesystem entries
>>>> that they can share?
>>>>
>>>> More details:
>>>>
>>>> I've successfully set up a single share based on the quick start
>>>> guide. It's a four-node network: one metadata server, four i/o nodes.
>>>> Nodes are identical, each 6x dual-core xeon, LAN connection, running
>>>> ubuntu lucid. To generate the two-filesystem config file, I ran
>>>> pvfs2-genconfig twice, then pasted the filesystem section from the
>>>> second file into the first, making sure the Name and ID fields were
>>>> unique. This was based on 3.9 in the PVFS2 FAQ, "Can I mount more
>>>> than one PVFS file system on the same client?"
>>>>
>>>> Using the two-filesystem config file, pvfs2-server -f works, but then
>>>> pvfs2-server (with or without -d) ends immediately with a segfault.
>>>> Nothing is written to the console or logfiles. I tried a different
>>>> version of pvfs2-fs.conf with non-overlapping MetaHandleRange and
>>>> DataHandleRange values for the two filesystems, but as I'm not
>>>> exactly clear on what these do, I'm not sure that's the right
>>>> approach. I've attached the conf file for a single-server test of two
>>>> filesystems, with separate Range values.
>>>>
>>>> Thanks,
>>>> -crispy
>>>>
>>>>
>>>> _______________________________________________
>>>> Pvfs2-users mailing list
>>>> [email protected]
>>>> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>>>
