Yes, we did review it and thought that with your workload the overhead of
small I/O to PanFS would not impact you too much, and it seemed OK in your
testing.   I personally did not think through the deletes when we talked
before.   On a create, if the files are large enough, you don't see a large
percentage of the time lost.

There is a cache setting you can add to the config file to increase the db
cache, but that mainly helps on lookups (it does not help much with deletes
or creates).   I think we have sent you a sample config that has it set; if
not, let us know how much memory each server node has and we can help out.
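
For reference, a minimal sketch of what that looks like in the server
config file; the option name below follows the pvfs2 config format, but
treat both it and the 1 GB size as assumptions to check against the sample
config rather than a tuned recommendation:

    <Defaults>
        # Size in bytes of the BerkeleyDB cache used for metadata
        # lookups (1 GB shown here as an assumed example value).
        DBCacheSizeBytes 1073741824
    </Defaults>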

If the above does not get the performance to acceptable levels, then yes,
DRBD on the volumes is how you would do HA.
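
For example, a DRBD resource mirroring a local metadata volume between two
server nodes might look like the sketch below; the resource name, hosts,
disks, and addresses are all placeholders, not a tested configuration:

    resource orangefs-meta {
        on orangefs01 {
            device    /dev/drbd0;      # replicated device for metadata
            disk      /dev/sdb1;       # assumed local disk partition
            address   10.0.0.1:7789;   # placeholder address/port
            meta-disk internal;
        }
        on orangefs02 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }

The metadata storage path would then live on the filesystem carried by
/dev/drbd0 and fail over with the rest of your HA stack.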

Again, sorry we missed this piece earlier.

-Boyd



On Tuesday, May 28, 2013, Michael Robbert wrote:

> That is disappointing to hear. I thought that we had cleared this design
> with Omnibond on a conference call to establish support. Can you explain
> more fully what the problem is? Panasas has told us in the past that their
> file system should perform well under small random access, which is what I
> thought metadata would be.
> We have local disk on the servers, but moving the metadata there would be
> a problem for our HA setup. Would you suggest DRBD if we need to go that
> route?
>
> Thanks,
> Mike
>
>
> On 5/25/13 4:26 AM, Boyd Wilson wrote:
>
>> If the MD files are on PanFS, that is probably the issue; PanFS is not
>> great for db performance.  Do you have local disks in the servers?   If
>> you can reconfigure and point the metadata there, performance should get
>> better.
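>>
>> For instance, assuming your build supports separate data and metadata
>> storage paths, the change would look roughly like this in the server
>> config (option names follow the OrangeFS 2.8.x config format, and the
>> paths and server alias are placeholders, so verify against your own
>> file):
>>
>>     <ServerOptions>
>>         Server orangefs01
>>         # Keep bulk data on PanFS, move the BerkeleyDB metadata
>>         # files to local disk (hypothetical paths):
>>         DataStorageSpace /panfs/orangefs/data
>>         MetadataStorageSpace /local/orangefs/meta
>>     </ServerOptions>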
>>
>> -Boyd
>>
>> On Friday, May 24, 2013, Michael Robbert wrote:
>>
>>     I believe so. Last week I was rsyncing some files to the file system.
>>     Yesterday I needed to delete a bunch of them, and that is when I
>>     noticed the problem. On closer inspection it looks like rsync still
>>     writes quickly with large files (100 MB/s), but bonnie++ is quite a
>>     bit slower (20 MB/s). So for now I'm just concerned with the MD
>>     performance. It is stored on the same PanFS systems as the data.
>>
>>     Mike
>>
>>     On 5/24/13 5:47 PM, Boyd Wilson wrote:
>>
>>         Michael,
>>         You said slowdown; was it performing better before and then
>>         slowed down?
>>
>>         Also, where are your MD files stored?
>>
>>         -b
>>
>>
>>         On Fri, May 24, 2013 at 6:06 PM, Michael Robbert
>>         <[email protected]> wrote:
>>
>>              We recently noticed a performance problem with our OrangeFS
>>         server.
>>
>>              Here are the server stats:
>>              3 servers, built identically with identical hardware
>>
>>              [root@orangefs02 ~]# /usr/sbin/pvfs2-server --version
>>              2.8.7-orangefs (mode: aio-threaded)
>>
>>              [root@orangefs02 ~]# uname -r
>>              2.6.18-308.16.1.el5.584g0000
>>
>>              4 core E5603 1.60GHz
>>              12GB of RAM
>>
>>              OrangeFS is being served to clients using bmi_tcp over DDR
>>         InfiniBand.
>>              Backend storage is PanFS with 2x10Gig connections on the
>>         servers.
>>              Performance to the backend looks fine using bonnie++:
>>              >100 MB/s write and ~250 MB/s read to each stack, and
>>              ~300 creates/sec.
>>
>>              The OrangeFS clients are running kernel version
>>              2.6.18-238.19.1.el5.
>>
>>              The biggest problem I have right now is that deletes are
>>              taking a long time, almost 1 second per file:
>>
>>              [root@fatcompute-11-32 L_10_V0.2_eta0.3_wRes_____truncerr1e-11]# find N2/ | wc -l
>>              137
>>              [root@fatcompute-11-32 L_10_V0.2_eta0.3_wRes_____truncerr1e-11]# time rm -rf N2
>>
>>              real    1m31.096s
>>              user    0m0.000s
>>              sys     0m0.015s
>>
>>              Similar results for file creates; the 50 touches below
>>              take 48 seconds, roughly 1 second per create:
>>
>>              [root@fatcompute-11-32 ]# date; for i in `seq 1 50`; do touch file${i}; done; date
>>              Fri May 24 16:04:17 MDT 2013
>>              Fri May 24 16:05:05 MDT 2013
>>
>>              What else do you need to know? Which debug flags? What
>>              should we be looking at?
>>              I don't see any load on the servers, and I have already
>>              restarted the server processes and rebooted the server
>>              nodes.
>>
>>              Thanks for any pointers,
>>              Mike Robbert
>>              Colorado School of Mines
>>
>>
>
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
