[lustre-discuss] MDS/MGS has a block storage device mounted and it does not have any permissions (no read, no write, no execute)

2019-02-05 Thread Pinkesh Valdria
[root@lustre-client-1 opc]# modprobe lustre
[root@lustre-client-1 opc]# mount -t lustre 10.0.2.3@tcp:/lustrewt /mnt   (this fails with the error below)
mount.lustre: mount 10.0.2.4@tcp:/lustrewt at /mnt failed: Input/output error
Is the MGS running?
[root@lustre-client-1 opc]#
Thanks, Pinkesh Valdria OCI –
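A minimal troubleshooting sketch (not from the thread; the MGS NID 10.0.2.3@tcp is only an assumption based on the mount command above): verify the MGS is reachable over LNet and that the MGT is actually mounted on the MDS/MGS node.
# From the client: check LNet reachability of the MGS NID
lctl ping 10.0.2.3@tcp
# On the MDS/MGS node: confirm the Lustre targets are mounted
mount -t lustre
# and that an MGS device appears in the local device list
lctl dl | grep -i mgs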

Re: [lustre-discuss] lnet_selftest - fails for me

2019-08-12 Thread Pinkesh Valdria
Figured out the issue. I forgot to load the module on the server side.
Solution: load the lnet_selftest module on all nodes involved in the test.
[root@lustre-oss-server-nic0-1 ~]# modprobe lnet_selftest
[root@lustre-oss-server-nic0-1 ~]#
From: lustre-discuss on behalf of Pinkesh Valdria Date
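For reference, a minimal lnet_selftest session looks roughly like the sketch below (the NIDs 10.0.2.4@tcp and 10.0.2.6@tcp are placeholders, and lnet_selftest must already be loaded on every node involved):
export LST_SESSION=$$
lst new_session read_test
lst add_group clients 10.0.2.4@tcp
lst add_group servers 10.0.2.6@tcp
lst add_batch bulk_read
lst add_test --batch bulk_read --from clients --to servers brw read size=1M
lst run bulk_read
lst stat clients servers & sleep 30; kill $!
lst end_session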

[lustre-discuss] lnet_selftest - fails for me

2019-08-12 Thread Pinkesh Valdria
Hello, Does anyone know why this simple lnet_selftest run is failing? I am able to use the Lustre file system without any problem. I looked at /var/log/messages on the client and server nodes and there are no errors. Googling for the error was not helpful. The script: 

[lustre-discuss] LNET tunables and LND tunables

2019-08-11 Thread Pinkesh Valdria
Hello, I have a Lustre cluster on a 25 Gbps Ethernet network (no InfiniBand). I see a lot of examples online for InfiniBand and which tunables to use for it, but I am struggling to find recommendations for Ethernet networks. Appreciate if someone can share their
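As a starting point only (a sketch; the interface name and values are illustrative, not recommendations), the TCP LND tunables are usually set through module options and inspected with lnetctl:
# /etc/modprobe.d/lnet.conf
options lnet networks="tcp(eth0)"
options ksocklnd credits=256 peer_credits=8
# after loading lnet, inspect the current LND tunables
lnetctl net show -v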

[lustre-discuss] max_pages_per_rpc=4096 fails on the client nodes

2019-08-14 Thread Pinkesh Valdria
I want to enable large RPC size. I followed the steps in Lustre manual section 33.9.2 Usage (http://doc.lustre.org/lustre_manual.xhtml), but I get the error below when I try to update the client. Updated the OSS server: [root@lustre-oss-server-nic0-1 test]# lctl

Re: [lustre-discuss] max_pages_per_rpc=4096 fails on the client nodes

2019-08-14 Thread Pinkesh Valdria
For others, in case they face this issue. Solution: I had to unmount and remount the client for the command to work. From: Pinkesh Valdria Date: Wednesday, August 14, 2019 at 9:25 AM To: "lustre-discuss@lists.lustre.org" Subject: max_pages_per_rpc=4096 fails on the client nodes I want
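For context, the sequence from manual section 33.9.2 is roughly the sketch below (<fsname> and <mgsnid> are placeholders); the client-side set_param only succeeds after the client is remounted against the larger brw_size:
# On the OSS: raise the maximum bulk RPC size (value in MB)
lctl set_param obdfilter.<fsname>-OST*.brw_size=16
# On the client: unmount, remount, then raise the RPC size
umount /mnt
mount -t lustre <mgsnid>:/<fsname> /mnt
lctl set_param osc.<fsname>-OST*.max_pages_per_rpc=4096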

Re: [lustre-discuss] lctl set_param obdfilter.*.readcache_max_filesize=2M fails

2019-08-08 Thread Pinkesh Valdria
some get command From: Shaun Tancheff Date: Thursday, August 8, 2019 at 9:50 AM To: Pinkesh Valdria , "lustre-discuss@lists.lustre.org" Subject: Re: [lustre-discuss] lctl set_param obdfilter.*.readcache_max_filesize=2M fails I think the parameter has changed:

[lustre-discuss] lctl set_param obdfilter.*.readcache_max_filesize=2M fails

2019-08-08 Thread Pinkesh Valdria
Hello Lustre experts, I am fairly new to Lustre and deployed it on Oracle Public Cloud using the instructions on the Whamcloud wiki pages. I am now trying to set some parameters for better performance and need help understanding why I am getting this error: On OSS servers: 

Re: [lustre-discuss] lctl set_param obdfilter.*.readcache_max_filesize=2M fails

2019-08-08 Thread Pinkesh Valdria
.readcache_max_filesize=2M
osd-ldiskfs.lfsbv-OST0008.readcache_max_filesize=2M
osd-ldiskfs.lfsbv-OST0009.readcache_max_filesize=2M
[root@lustre-oss-server-nic0-1 ~]#
From: Chris Horn Date: Thursday, August 8, 2019 at 11:11 AM To: Pinkesh Valdria , Shaun Tancheff , "lustre-discuss@lists.lustr
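So on this release the setting appears to live under the osd-ldiskfs namespace rather than obdfilter; a minimal sketch on each OSS:
lctl set_param osd-ldiskfs.*.readcache_max_filesize=2M
lctl get_param osd-ldiskfs.*.readcache_max_filesize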

[lustre-discuss] Lustre tuning - help

2019-08-09 Thread Pinkesh Valdria
in, appreciate any guidance or help you can provide, or you can point me to docs or articles that would be helpful. Thanks, Pinkesh Valdria Principal Solutions Architect – Big Data & HPC Oracle Cloud Infrastructure – Seattle +1-206-234-4314.

Re: [lustre-discuss] Lnet Self Test

2019-12-04 Thread Pinkesh Valdria
be changed, and also whether it really helps or not. # Several packets in a rapid sequence can be coalesced into one interrupt passed up to the CPU, providing more CPU time for application processing. Thanks, Pinkesh Valdria Oracle Cloud From: Jongwoo Han Date: Wednesday, December 4, 2019 at 8
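Interrupt coalescing itself is normally adjusted with ethtool; a hedged sketch (the interface name and values are illustrative only):
# Show current coalescing settings
ethtool -c eth0
# Enable adaptive RX coalescing, or set an explicit interrupt delay
ethtool -C eth0 adaptive-rx on
ethtool -C eth0 rx-usecs 64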

[lustre-discuss] Lemur Lustre - make rpm fails

2019-12-09 Thread Pinkesh Valdria
I am trying to install Lemur on CentOS 7.6 (7.6.1810) to integrate with object storage, but the install fails. I used the instructions on the page below to install. I already had the Lustre client (2.12.3) installed on the machine, so I started with the steps for Lemur.

Re: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

2019-12-13 Thread Pinkesh Valdria
From: "Moreno Diego (ID SIS)" Date: Friday, December 13, 2019 at 2:55 AM To: Pinkesh Valdria , "lustre-discuss@lists.lustre.org" Subject: Re: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC) >From what I can see they exist on my 2.12.3 client

Re: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

2019-12-13 Thread Pinkesh Valdria
llite  >  llite_parameters.txt There are other parameters under llite.   I attached the complete list. From: "Moreno Diego (ID SIS)" Date: Friday, December 13, 2019 at 8:36 AM To: Pinkesh Valdria , "lustre-discuss@lists.lustre.org" Subject: Re: [lustre-discuss] De

Re: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

2019-12-11 Thread Pinkesh Valdria
m_R]$ Works. On client nodes:
lctl get_param llite.*.statahead_agl
llite.lfsbv-98231c3bc000.statahead_agl=1
llite.lfsnvme-98232c30e000.statahead_agl=1
[opc@lustre-client-1 lctl_list_param_R]$
From: "Moreno Diego (ID SIS)" Date: Tuesday, December 10, 2019 at 2:06 AM To: Pi

Re: [lustre-discuss] Lemur Lustre - make rpm fails

2019-12-11 Thread Pinkesh Valdria
/lemur'
error: Bad exit status from /var/tmp/rpm-tmp.cPPeEL (%install)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.cPPeEL (%install)
make[1]: *** [rpm] Error 1
make[1]: Leaving directory `/root/lemur/packaging/rpm'
make: *** [local-rpm] Error 2
[root@lustre-client-4 lemur]#

Re: [lustre-discuss] Lnet Self Test

2019-12-07 Thread Pinkesh Valdria
ot  20   0   0  0  0 S   2.0  0.0  30:56.70 socknal_sd00_00
60861 root  20   0   0  0  0 S   2.0  0.0  30:54.97 socknal_sd00_02
60862 root  20   0   0  0  0 S   2.0  0.0  30:56.06 socknal_sd00_03
60863 root  20   0   0

[lustre-discuss] Lnet Self Test

2019-11-26 Thread Pinkesh Valdria
nt_huge_pages=never
[sysctl]
kernel.sched_min_granularity_ns = 1000
kernel.sched_wakeup_granularity_ns = 1500
vm.dirty_ratio = 30
vm.dirty_background_ratio = 10
vm.swappiness=30
" > lustre-performance/tuned.conf
tuned-adm profile lustre-perform

Re: [lustre-discuss] Lnet Self Test

2019-11-27 Thread Pinkesh Valdria
with Lnet? Thanks, Pinkesh Valdria Oracle Cloud Infrastructure From: Andreas Dilger Date: Wednesday, November 27, 2019 at 1:25 AM To: Pinkesh Valdria Cc: "lustre-discuss@lists.lustre.org" Subject: Re: [lustre-discuss] Lnet Self Test The first thing to note i

[lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

2019-12-10 Thread Pinkesh Valdria
I was expecting better or the same read performance with Large Bulk IO (16MB RPC), but I see a degradation in performance. Do I need to tune any other parameters to benefit from Large Bulk IO? Appreciate any pointers to troubleshoot further. Throughput before Read:  2563 MB/s
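A few client-side read-path tunables that are commonly checked alongside large RPCs (a sketch only; the values are illustrative, not recommendations):
lctl get_param osc.*.max_pages_per_rpc osc.*.max_rpcs_in_flight osc.*.max_dirty_mb
lctl get_param llite.*.max_read_ahead_mb llite.*.max_read_ahead_per_file_mb
# example: allow more readahead per file when 16MB RPCs are in use
lctl set_param llite.*.max_read_ahead_per_file_mb=256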

Re: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

2020-01-22 Thread Pinkesh Valdria
-quickstart/oci-lustre/tree/master/scripts As a next step, I plan to test deployment of Lustre on a 100 Gbps RoCEv2 RDMA network (Mellanox CX5). Thanks, Pinkesh Valdria Oracle Cloud – Principal Solutions Architect https://blogs.oracle.com/cloud-infrastructure/lustre-file-system

[lustre-discuss] Lustre with 100 Gbps Mellanox CX5 card

2020-01-22 Thread Pinkesh Valdria
Hello Lustre Community, I am trying to configure Lustre for a 100 Gbps Mellanox CX5 card. I tried version 2.12.3 first, but it failed when I ran lnetctl net add --net o2ib0 --if enp94s0f0, so I started looking at the Lustre binaries and found the repos below for IB. Is
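For reference, the usual o2ib bring-up sequence is the sketch below; it assumes the installed Lustre/ko2iblnd was built against the same OFED/MOFED stack as the running kernel:
modprobe lnet
lnetctl lnet configure
lnetctl net add --net o2ib0 --if enp94s0f0
lnetctl net show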

Re: [lustre-discuss] Lemur Lustre - make rpm fails

2020-01-08 Thread Pinkesh Valdria
Hello Nathaniel, As a workaround, is there an older Lemur RPM version or an older Lustre version I should use to unblock myself? https://github.com/whamcloud/lemur/issues/7 https://github.com/whamcloud/lemur/issues/8 Thanks, Pinkesh Valdria On 12/11/19, 6:31 AM, "Pinkesh Valdria"

[lustre-discuss] NFS Client Attributes caching - equivalent feature/config in Lustre

2020-04-21 Thread Pinkesh Valdria
locks extensively. Appreciate any guidance. Thanks, pinkesh valdria

[lustre-discuss] Complete list of rules for PCC

2020-08-25 Thread Pinkesh Valdria
I am looking for the various policy rules that can be applied to Lustre Persistent Client Cache (PCC). In the docs, I see the example below using projid, fname and uid. Where can I find a complete list of supported rules? Also, is there a way for PCC to only cache the contents of a few folders
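For illustration, the manual's rule syntax combines conditions such as projid, fname and uid in one --param string; a sketch (paths, IDs and the archive number are placeholders):
lctl pcc add /mnt/lustre /mnt/pcc \
    --param "projid={500}&fname={*.h5},uid={1001} rwid=2"
lctl pcc list /mnt/lustre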

[lustre-discuss] Bulk Attach/Detach - Lustre PCC (Persistent Client Cache)

2020-08-24 Thread Pinkesh Valdria
tach ”, or is there another command that I missed in the docs? Thanks, Pinkesh Valdria
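Attach and detach operate per file through lfs, so a bulk operation is usually just a loop over lfs find output; a sketch (the directory and archive ID are placeholders):
# attach every regular file under a directory to PCC archive 2
lfs find /mnt/lustre/projdir -type f | while read -r f; do lfs pcc attach -i 2 "$f"; done
# detach them again
lfs find /mnt/lustre/projdir -type f | while read -r f; do lfs pcc detach "$f"; done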

[lustre-discuss] Lustre (latest) access via NFSv4

2020-06-04 Thread Pinkesh Valdria
Can Lustre be accessed via NFSv4? I know we can use NFSv3, but wanted to ask about NFSv4 support.
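For what it's worth, NFS re-export is normally done by exporting an already-mounted Lustre client through knfsd; a hedged sketch (paths, network and options are illustrative, and NFSv4 behaviour should be verified in your own environment):
# /etc/exports on a Lustre client acting as the NFS gateway
/mnt/lustre  10.0.2.0/24(rw,no_root_squash,fsid=1)
exportfs -ra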

[lustre-discuss] MOFED & Lustre 2.14.51 - install fails with dependency failure related to ksym/MOFED

2021-05-21 Thread Pinkesh Valdria via lustre-discuss
re-client-2.14.51-1.el7.x86_64.rpm
* lustre-client-debuginfo-2.14.51-1.el7.x86_64.rpm
* lustre-client-devel-2.14.51-1.el7.x86_64.rpm
* lustre-client-tests-2.14.51-1.el7.x86_64.rpm
* lustre-iokit-2.14.51-1.el7.x86_64.rpm
Thanks, Pinkesh Valdria Principal Solutions Architect – HPC Oracle Clo
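A common way around ksym dependency mismatches is to rebuild the client packages against the installed MOFED tree instead of using RPMs built for in-kernel OFED; a sketch (the o2ib path depends on the MOFED installation):
cd lustre-release
sh autogen.sh
./configure --disable-server --with-o2ib=/usr/src/ofa_kernel/default
make rpms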

Re: [lustre-discuss] MOFED & Lustre 2.14.51 - install fails with dependency failure related to ksym/MOFED

2021-05-25 Thread Pinkesh Valdria via lustre-discuss
node,large_dir,flex_bg” Thanks, Pinkesh Valdria From: Pinkesh Valdria Date: Friday, May 21, 2021 at 6:04 PM To: "lustre-discuss@lists.lustre.org" Subject: MOFED & Lustre 2.14.51 - install fails with dependency failure related to ksym/MOFED Sorry for a long email, wanted to

[lustre-discuss] Lustre using RDMA (RoCEv2)

2021-07-08 Thread Pinkesh Valdria via lustre-discuss
is, still same error.
echo 'options lnet networks="o2ib(ens800f0)" ' > /etc/modprobe.d/lustre.conf
echo 'options lnet networks="o2ib(ens800f0)" ' > /etc/modprobe.d/lnet.conf
Thanks, Pinkesh Valdria Principal Solutions Architect – HPC

[lustre-discuss] Lustre Client compile on Ubuntu18.04 failing

2021-02-25 Thread Pinkesh Valdria via lustre-discuss
enerated/autoconf.h... no
checking for /root/linux-oracle/include/linux/autoconf.h... no
configure: error: Run make config in /root/linux-oracle.
root@lustre-client-2-12-4-ubuntu1804:~/lustre-release#
Thanks, Pinkesh Valdria Principal Solutions Architect – HPC Oracle Cloud Infrastructure +65-8932
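The error suggests configure was pointed at an unconfigured kernel source tree; on Ubuntu, pointing it at the installed headers for the running kernel usually avoids that (a sketch, not the thread's resolution):
apt-get install -y linux-headers-$(uname -r)
cd ~/lustre-release
sh autogen.sh
./configure --disable-server --with-linux=/usr/src/linux-headers-$(uname -r)
make debs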

Re: [lustre-discuss] [External] : lustre-discuss Digest, Vol 179, Issue 26

2021-02-26 Thread Pinkesh Valdria via lustre-discuss
Lustre for Oracle Linux UEK kernels, if that's okay. Thanks, Pinkesh Valdria Principal Solutions Architect – HPC Oracle Cloud Infrastructure +65-8932-3639 (m) - Singapore +1-425-205-7834 (m) - USA On 2/26/21, 12:46 AM, "lustre-discuss on behalf of lustre-discuss-requ...@lists.lustr

[lustre-discuss] How to make OSTs active again

2021-09-30 Thread Pinkesh Valdria via lustre-discuss
-MDT0001-mdtlov_UUID 4
21 UP osp lustrefs-OST0001-osc-MDT0001 lustrefs-MDT0001-mdtlov_UUID 4
22 UP lwp lustrefs-MDT-lwp-MDT0001 lustrefs-MDT-lwp-MDT0001_UUID 4
Thanks, Pinkesh Valdria Oracle Cloud Infrastructure +65-8932-3639 (m) - Singapore +1-425-205-7834 (m) - USA https://blogs.oracle.com
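For reference, an OST marked inactive on the MDS can usually be re-enabled with something like the sketch below (device names follow the lctl dl output above; adjust to your own configuration):
# On the MDS: re-activate the OSP device for the OST
lctl set_param osp.lustrefs-OST0001-osc-MDT*.active=1
# Or persistently, from the MGS:
lctl conf_param lustrefs-OST0001.osc.active=1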