Hi Sam,

Output of getfattr is as follows:

[EMAIL PROTECTED]:/mnt/pvfs2> getfattr -n "user.pvfs2.dfile_count" ./dirdf6/
# file: dirdf6
user.pvfs2.dfile_count="6"

I am running SUSE 10.0, but I have compiled kernel 2.6.12.6 with the
PAPI and Lustre patches.

Please let me know if any additional information is required.

Regards,

-Suneet
On 2/8/07, Sam Lang <[EMAIL PROTECTED]> wrote:


Suneet,

What does getfattr tell you the extended attribute is set to?

getfattr -n "user.pvfs2.dfile_count" ./dirdf6/

Also, what OS version are you running?

-sam

On Feb 8, 2007, at 12:36 AM, Suneet Chandok wrote:

> Hi Sam,
>
> Currently my configuration is 8 nodes running 8 I/O servers and
> 1 metadata server; shark24 is the one acting as both I/O and
> metadata server.
> I mounted the client on one of the nodes using:
> mount -t pvfs2 tcp://shark24:3334/pvfs2-fs /mnt/pvfs2
>
> Now, in /mnt/pvfs2 I did the following to stripe files over 6 nodes
> in the dirdf6 directory:
>
> mkdir dirdf6
> setfattr -n "user.pvfs2.dfile_count" -v 6 ./dirdf6/
>
> I am creating a 2GB file with my process in /mnt/pvfs2/dirdf6/. I
> checked with pvfs2-viewdist and I still see the file being striped
> over 8 I/O servers instead of 6. Output of pvfs2-viewdist:
>
> >/opt/pvfs2-2.6.2/bin/pvfs2-viewdist -f /mnt/pvfs2/dirdf6/test
>
> dist_name = simple_stripe
> strip_size = 65536
> Number of datafiles/servers = 8
> Server 0 - tcp://shark18:3334, handle: 1431655730 (55555532.bstream)
> Server 1 - tcp://shark19:3334, handle: 1908874318 (71c71c4e.bstream)
> Server 2 - tcp://shark20:3334, handle: 2386092906 (8e38e36a.bstream)
> Server 3 - tcp://shark21:3334, handle: 2863311494 (aaaaaa86.bstream)
> Server 4 - tcp://shark22:3334, handle: 3340530082 (c71c71a2.bstream)
> Server 5 - tcp://shark23:3334, handle: 3817748670 (e38e38be.bstream)
> Server 6 - tcp://shark24:3334, handle: 4294967258 (ffffffda.bstream)
> Server 7 - tcp://shark17:3334, handle: 954437142 (38e38e16.bstream)
>
> Can you suggest whether I am missing something in my configuration?
> Is any additional information required, or should I recompile with
> pvfs2-xattr.patch?
>
> Regards
> -Suneet
>
> On 2/7/07, Sam Lang <[EMAIL PROTECTED]> wrote:
> On Feb 7, 2007, at 10:27 AM, Suneet Chandok wrote:
>
> > Hi,
> >
> > I have a question related to I/O servers.  I have a 24-node
> > cluster with I/O servers running on 23 nodes and 1 metadata
> > server. Is there a way to configure the system so that only a
> > limited number of I/O servers are active; let's say 2, 4, 8, or
> > 16? Do I need to generate and distribute new configuration files
> > each time I want to increase or reduce the number of I/O servers,
> > and also restart the servers on all the nodes? I need this for
> > testing OpenMPI with different numbers of I/O servers.
> >
> > Please let me know if I am not clear.
> >
>
> Hi Suneet,
>
> There is an extended attribute that you can use to get the behavior
> you're looking for.  If you create a directory and set
> user.pvfs2.dfile_count to the number of IO servers you want, all
> files in that directory will use that number of servers.  So if you
> do:
>
> mkdir dirdf6
> setfattr -n "user.pvfs2.dfile_count" -v 6 ./dirdf6/
>
> Then files in the dirdf6 directory will use 6 of the IO servers in
> your system.  Note that the servers are chosen in a round-robin
> fashion, starting randomly, so you won't be able to specifically set
> which servers get used.  There are ways to do this in pvfs, but it's
> not as easy as setting the extended attribute described above.
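>
> For example, to double-check that the attribute took effect (a quick
> sketch; dd just stands in for whatever actually creates your files):
>
> getfattr -n "user.pvfs2.dfile_count" ./dirdf6/
> dd if=/dev/zero of=./dirdf6/test bs=1M count=16
> pvfs2-viewdist -f ./dirdf6/test
>
> getfattr should report "6", and pvfs2-viewdist should show "Number
> of datafiles/servers = 6" for the new file.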
>
> Also note that in the case above the directory is on a pvfs-mounted
> volume.  If you don't want to go over the VFS (or don't have setfattr
> installed or something), you can use the pvfs2-setxattr tool, but
> you'll need the attached patch to get that to work properly on
> directories.
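>
> Something like the following should work (a sketch only -- I'm
> assuming pvfs2-setxattr takes the key and value the same way
> setfattr does; check its usage output for the exact flags):
>
> pvfs2-setxattr -k user.pvfs2.dfile_count -v 6 /mnt/pvfs2/dirdf6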
>
> You can also use the pvfs2-viewdist tool to see which IO servers a
> file will use by mapping the datafile handles to the IO servers from
> the config file.
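>
> If you want to cross-check by hand, the server names and addresses
> are in the Alias entries of the fs config file (the path below is
> just an example; use whichever config your servers were started
> with):
>
> grep Alias /etc/pvfs2-fs.conf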
>
> -sam
>




--
Suneet Chandok
Research Assistant
Parallel Software Technologies Laboratory
Department of Computer Science
University of Houston

Cell: 704-248-0718
Email: [EMAIL PROTECTED]
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
