Re: [Lustre-discuss] One or two OSS, no difference?

2010-03-05 Thread Andreas Dilger
On 2010-03-04, at 14:18, Jeffrey Bennett wrote:
 I just noticed that sequential performance is OK, but random IO
 (which is what I am measuring) is not. Is there any way to increase
 random IO performance on Lustre? We have LUNs that can provide
 around 250,000 random-read 4 KB IOPS, but we are only seeing 3,000 to
 10,000 on Lustre.

There is work currently underway to improve the SMP scaling  
performance for the RPC handling layer in Lustre.  Currently that  
limits the delivered RPC rate to 10-15k/sec or so.
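
A rough sanity check of that ceiling (a sketch only, assuming one 4 KB page
per read RPC, as in the scenario quoted above):

  # 10,000-15,000 RPCs/sec x 4 KB per read RPC caps random-read
  # throughput at roughly 40-60 MB/s, regardless of backend IOPS.
  echo "$((10000 * 4)) to $((15000 * 4)) KB/s"   # prints "40000 to 60000 KB/s"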

 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Thursday, March 04, 2010 12:49 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

   This is pretty strange. Are there any differences in network
 topology that can explain this?
   If you remove the first client, does the second one show performance
 at the level of the first, and does the second client's performance
 drop again as soon as you restart the load on the first?

 Bye,
Oleg
 On Mar 4, 2010, at 1:45 PM, Jeffrey Bennett wrote:

 Hi Oleg, thanks for your reply

 I was actually testing with only one client. When I add a second
 client using a different file, one client gets all the performance
 and the other gets very little. Any recommendations?

 Thanks in advance

 jab


 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Wednesday, March 03, 2010 5:20 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

 On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
 We are building a very small Lustre cluster with 32 clients  
 (patchless) and two OSS servers. Each OSS server has 1 OST with 1  
 TB of Solid State Drives. All is connected using dual-port DDR IB.

 For testing purposes, I am enabling/disabling one of the OSS/OST  
 by using the lfs setstripe command. I am running XDD and vdbench  
 benchmarks.

 Does anybody have an idea why there is no difference in MB/sec or  
 random IOPS when using one OSS or two OSS? A quick test with dd  
 also shows the same MB/sec when using one or two OSTs.

 I wonder if you are just not saturating even one OST (both backend SSD
 and IB interconnect) with this number of clients. Does the total
 throughput decrease as you reduce the number of active clients and
 increase as you add even more?
 Increasing the maximum number of in-flight RPCs might help in that case.
 Also, are all of your clients writing to the same file, or does each
 client do IO to a separate file (I hope)?

 Bye,
   Oleg



Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



Re: [Lustre-discuss] One or two OSS, no difference?

2010-03-05 Thread Jeffrey Bennett
Andreas, if we are using 4 KB blocks, I understand we only transfer one page per
RPC call, so are we limited to 10-15K RPCs per second, which is to say
10,000-15,000 IOPS?

jab


-Original Message-
From: andreas.dil...@sun.com [mailto:andreas.dil...@sun.com] On Behalf Of 
Andreas Dilger
Sent: Friday, March 05, 2010 2:05 AM
To: Jeffrey Bennett
Cc: oleg.dro...@sun.com; lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] One or two OSS, no difference?

On 2010-03-04, at 14:18, Jeffrey Bennett wrote:
 I just noticed that sequential performance is OK, but random IO
 (which is what I am measuring) is not. Is there any way to increase
 random IO performance on Lustre? We have LUNs that can provide
 around 250,000 random-read 4 KB IOPS, but we are only seeing 3,000 to
 10,000 on Lustre.

There is work currently underway to improve the SMP scaling  
performance for the RPC handling layer in Lustre.  Currently that  
limits the delivered RPC rate to 10-15k/sec or so.

 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Thursday, March 04, 2010 12:49 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

   This is pretty strange. Are there any differences in network
 topology that can explain this?
   If you remove the first client, does the second one show performance
 at the level of the first, and does the second client's performance
 drop again as soon as you restart the load on the first?

 Bye,
Oleg
 On Mar 4, 2010, at 1:45 PM, Jeffrey Bennett wrote:

 Hi Oleg, thanks for your reply

 I was actually testing with only one client. When I add a second
 client using a different file, one client gets all the performance
 and the other gets very little. Any recommendations?

 Thanks in advance

 jab


 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Wednesday, March 03, 2010 5:20 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

 On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
 We are building a very small Lustre cluster with 32 clients  
 (patchless) and two OSS servers. Each OSS server has 1 OST with 1  
 TB of Solid State Drives. All is connected using dual-port DDR IB.

 For testing purposes, I am enabling/disabling one of the OSS/OST  
 by using the lfs setstripe command. I am running XDD and vdbench  
 benchmarks.

 Does anybody have an idea why there is no difference in MB/sec or  
 random IOPS when using one OSS or two OSS? A quick test with dd  
 also shows the same MB/sec when using one or two OSTs.

 I wonder if you are just not saturating even one OST (both backend SSD
 and IB interconnect) with this number of clients. Does the total
 throughput decrease as you reduce the number of active clients and
 increase as you add even more?
 Increasing the maximum number of in-flight RPCs might help in that case.
 Also, are all of your clients writing to the same file, or does each
 client do IO to a separate file (I hope)?

 Bye,
   Oleg



Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



Re: [Lustre-discuss] One or two OSS, no difference?

2010-03-05 Thread Andreas Dilger
On 2010-03-05, at 14:53, Jeffrey Bennett wrote:
 Andreas, if we are using 4 KB blocks, I understand we only transfer one
 page per RPC call, so are we limited to 10-15K RPCs per second, which is
 to say 10,000-15,000 IOPS?

That depends on whether you are doing read or write requests, whether
the data is in the client cache, etc.  Random read requests would
definitely fall under the RPC limit, while random write requests can
benefit from aggregation on the client, assuming you aren't doing
O_DIRECT or O_SYNC IO operations.
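
As an illustration (a sketch only; the mount point /mnt/lustre and the sizes
are assumed, not from the thread): buffered 4 KB writes can be merged by the
client cache into large bulk RPCs, while O_DIRECT forces a separate small RPC
per write:

  # Buffered small writes: the client cache can aggregate them into 1 MB RPCs.
  dd if=/dev/zero of=/mnt/lustre/testfile bs=4k count=25600

  # O_DIRECT small writes: each 4 KB write goes out as its own RPC,
  # so the RPC rate rather than the SSDs becomes the bound.
  dd if=/dev/zero of=/mnt/lustre/testfile bs=4k count=25600 oflag=direct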

Increasing max_rpcs_in_flight and max_dirty_mb on the clients can
improve IOPS, assuming the bottleneck is that the clients are not keeping
the servers busy with enough concurrent requests.  Check the RPC
req_waittime, req_qdepth, and ost_{read,write} service times via:

 lctl get_param ost.OSS.ost_io.stats

to see whether the servers are saturated or idle.  CPU usage may also
be a factor.
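
A sketch of checking and raising those client-side tunables (the values 32
and 64 below are examples only, not recommendations):

  # On each client: inspect and raise the per-OST RPC and dirty-cache limits.
  lctl get_param osc.*.max_rpcs_in_flight osc.*.max_dirty_mb
  lctl set_param osc.*.max_rpcs_in_flight=32
  lctl set_param osc.*.max_dirty_mb=64

  # On each OSS: watch req_waittime, req_qdepth and the ost_read/ost_write
  # service times to judge whether the servers are saturated or idle.
  lctl get_param ost.OSS.ost_io.stats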

There was also bug 22074 fixed recently (post 1.8.2) that addresses a  
performance problem with lots of small IOs to different files (NOT  
related to small IOs to a single file).

 -Original Message-
 From: andreas.dil...@sun.com [mailto:andreas.dil...@sun.com] On  
 Behalf Of Andreas Dilger
 Sent: Friday, March 05, 2010 2:05 AM
 To: Jeffrey Bennett
 Cc: oleg.dro...@sun.com; lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 On 2010-03-04, at 14:18, Jeffrey Bennett wrote:
 I just noticed that sequential performance is OK, but random IO
 (which is what I am measuring) is not. Is there any way to increase
 random IO performance on Lustre? We have LUNs that can provide
 around 250,000 random-read 4 KB IOPS, but we are only seeing 3,000 to
 10,000 on Lustre.

 There is work currently underway to improve the SMP scaling
 performance for the RPC handling layer in Lustre.  Currently that
 limits the delivered RPC rate to 10-15k/sec or so.

 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Thursday, March 04, 2010 12:49 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

   This is pretty strange. Are there any differences in network
 topology that can explain this?
   If you remove the first client, does the second one show performance
 at the level of the first, and does the second client's performance
 drop again as soon as you restart the load on the first?

 Bye,
   Oleg
 On Mar 4, 2010, at 1:45 PM, Jeffrey Bennett wrote:

 Hi Oleg, thanks for your reply

 I was actually testing with only one client. When I add a second
 client using a different file, one client gets all the performance
 and the other gets very little. Any recommendations?

 Thanks in advance

 jab


 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com]
 Sent: Wednesday, March 03, 2010 5:20 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?

 Hello!

 On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
 We are building a very small Lustre cluster with 32 clients
 (patchless) and two OSS servers. Each OSS server has 1 OST with 1
 TB of Solid State Drives. All is connected using dual-port DDR IB.

 For testing purposes, I am enabling/disabling one of the OSS/OST
 by using the lfs setstripe command. I am running XDD and vdbench
 benchmarks.

 Does anybody have an idea why there is no difference in MB/sec or
 random IOPS when using one OSS or two OSS? A quick test with dd
 also shows the same MB/sec when using one or two OSTs.

 I wonder if you are just not saturating even one OST (both backend SSD
 and IB interconnect) with this number of clients. Does the total
 throughput decrease as you reduce the number of active clients and
 increase as you add even more?
 Increasing the maximum number of in-flight RPCs might help in that case.
 Also, are all of your clients writing to the same file, or does each
 client do IO to a separate file (I hope)?

 Bye,
  Oleg



 Cheers, Andreas
 --
 Andreas Dilger
 Sr. Staff Engineer, Lustre Group
 Sun Microsystems of Canada, Inc.



Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



Re: [Lustre-discuss] One or two OSS, no difference?

2010-03-04 Thread Jeffrey Bennett
Hi Oleg,

I just noticed that sequential performance is OK, but random IO (which is
what I am measuring) is not. Is there any way to increase random IO performance
on Lustre? We have LUNs that can provide around 250,000 random-read 4 KB IOPS,
but we are only seeing 3,000 to 10,000 on Lustre.

jab


-Original Message-
From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com] 
Sent: Thursday, March 04, 2010 12:49 PM
To: Jeffrey Bennett
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] One or two OSS, no difference?

Hello!

   This is pretty strange. Are there any differences in network topology that
can explain this?
   If you remove the first client, does the second one show performance at the
level of the first, and does the second client's performance drop again as
soon as you restart the load on the first?

Bye,
Oleg
On Mar 4, 2010, at 1:45 PM, Jeffrey Bennett wrote:

 Hi Oleg, thanks for your reply
 
 I was actually testing with only one client. When I add a second client
 using a different file, one client gets all the performance and the other
 gets very little. Any recommendations?
 
 Thanks in advance
 
 jab
 
 
 -Original Message-
 From: oleg.dro...@sun.com [mailto:oleg.dro...@sun.com] 
 Sent: Wednesday, March 03, 2010 5:20 PM
 To: Jeffrey Bennett
 Cc: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] One or two OSS, no difference?
 
 Hello!
 
 On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
 We are building a very small Lustre cluster with 32 clients (patchless) and 
 two OSS servers. Each OSS server has 1 OST with 1 TB of Solid State Drives. 
 All is connected using dual-port DDR IB.
 
 For testing purposes, I am enabling/disabling one of the OSS/OST by using 
 the lfs setstripe command. I am running XDD and vdbench benchmarks.
 
 Does anybody have an idea why there is no difference in MB/sec or random 
 IOPS when using one OSS or two OSS? A quick test with dd also shows the 
 same MB/sec when using one or two OSTs.
 
 I wonder if you are just not saturating even one OST (both backend SSD and IB
 interconnect) with this number of clients. Does the total throughput
 decrease as you reduce the number of active clients and increase as you add
 even more?
 Increasing the maximum number of in-flight RPCs might help in that case.
 Also, are all of your clients writing to the same file, or does each client
 do IO to a separate file (I hope)?
 
 Bye,
Oleg



Re: [Lustre-discuss] One or two OSS, no difference?

2010-03-03 Thread Oleg Drokin
Hello!

On Mar 3, 2010, at 6:35 PM, Jeffrey Bennett wrote:
 We are building a very small Lustre cluster with 32 clients (patchless) and 
 two OSS servers. Each OSS server has 1 OST with 1 TB of Solid State Drives. 
 All is connected using dual-port DDR IB.
  
 For testing purposes, I am enabling/disabling one of the OSS/OST by using the 
 “lfs setstripe” command. I am running XDD and vdbench benchmarks.
  
 Does anybody have an idea why there is no difference in MB/sec or random IOPS 
 when using one OSS or two OSS? A quick test with “dd” also shows the same 
 MB/sec when using one or two OSTs.

I wonder if you are just not saturating even one OST (both backend SSD and IB
interconnect) with this number of clients. Does the total throughput decrease
as you reduce the number of active clients and increase as you add even more?
Increasing the maximum number of in-flight RPCs might help in that case.
Also, are all of your clients writing to the same file, or does each client do
IO to a separate file (I hope)?

Bye,
Oleg
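
For reference, a sketch of the single-OST vs. two-OST comparison described in
the original post, plus the in-flight RPC tweak suggested here (directory
names and the value 32 are illustrative only):

  # New files under these directories are striped over one OST or both OSTs.
  lfs setstripe -c 1 /mnt/lustre/one_ost
  lfs setstripe -c 2 /mnt/lustre/two_ost

  # Allow more concurrent RPCs per client/OST pair (the default is 8).
  lctl set_param osc.*.max_rpcs_in_flight=32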