Re: [lustre-discuss] lustre 2.15.3

2024-03-28 Thread Jongwoo Han via lustre-discuss
> ... kernel-4.18.0-513.18.1.el8_9.x86_64. Or must we upgrade our lustre package to 2.15.4? > Thanks, > Khoi

Re: [lustre-discuss] Dependency issue with Lustre+ZFS support

2022-04-07 Thread Jongwoo Han via lustre-discuss
> ... (zfs)
>     libzfs.so.2()(64bit)
>   Available: libzfs2-0.8.6-1.el7.x86_64 (zfs)
>     libzfs.so.2()(64bit)
> Error: Package: lustre-osd-zfs-mount-2.12.8_6_g5457c37-1.el7.x86_64 (lustre-server)
>   Requires: libnvpair.so.1()(64bit)
>   Available: libnvpair1-0.7.13-1.el7.x86_64 (lustre-server)
>     libnvpair.so.1()(64bit)
>   Available: libnvpair1-0.8.5-1.el7.x86_64 (zfs)
>     libnvpair.so.1()(64bit)
>   Available: libnvpair1-0.8.6-1.el7.x86_64 (zfs)
>     libnvpair.so.1()(64bit)
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest

Re: [lustre-discuss] Best ways to backup a Lustre file system?

2021-10-28 Thread Jongwoo Han via lustre-discuss
... customer asks me about backup, I recommend putting files into AWS Glacier. Thanks, Jongwoo Han. On Sun, Oct 17, 2021 at 9:06 AM, Sid Young via lustre-discuss <lustre-discuss@lists.lustre.org> wrote: > G'Day all, > Apart from rsync'ing all the data on a mounted lustre filesystem
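A minimal sketch of one way to do the archival copy suggested above, assuming the AWS CLI is installed and configured; the bucket name and paths are placeholders:

    # Copy a directory tree from a mounted Lustre client into S3,
    # writing the objects directly into the Glacier storage class.
    aws s3 sync /mnt/lustre/project s3://my-archive-bucket/project --storage-class GLACIER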

Re: [lustre-discuss] LNet supports IB HDR(200Gb) network?

2021-10-28 Thread Jongwoo Han via lustre-discuss
... OST configurations available today cannot endure such workloads. Yet when 200Gb throughput must be provided to an OSS, there is an alternative such as dynamic load balancing over a multi-rail configuration (using 2 x 100Gb port links per OSS). Regards, Jongwoo Han. On Tue, Oct 19, 2021 at 7:31 PM, 홍재기 via
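A rough sketch of the multi-rail idea mentioned above, assuming a Lustre release with LNet Multi-Rail support (2.10 or later); the interface names ib0 and ib1 are placeholders:

    # Configure one LNet network that spans two 100Gb IB ports on the OSS,
    # so LNet can balance traffic across both rails.
    lnetctl lnet configure
    lnetctl net add --net o2ib0 --if ib0,ib1

    # Verify that both interfaces are listed under the o2ib0 network.
    lnetctl net show --net o2ib0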

Re: [lustre-discuss] Failed to mount MDT

2021-10-27 Thread Jongwoo Han via lustre-discuss
It seems the CONFIG directory on your MDT is corrupt. I suggest mounting the MDT as a plain ext2 filesystem ( # mount /dev/mapper/mpatha /mnt/tmp ), backing up the CONFIG directory, erasing the contents of the CONFIG directory, and then mounting it again as a Lustre filesystem. Regards, Jongwoo Han. On Thu, Oct 28, 2021 at 2:08 AM, Рачко Антон
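A minimal sketch of that sequence, using the device path /dev/mapper/mpatha from the reply; the mount points are placeholders, the on-disk directory is typically named CONFIGS on an ldiskfs target, and this should only be attempted with a verified backup of the MDT:

    # Mount the MDT device as a plain local filesystem (not as Lustre).
    mkdir -p /mnt/tmp
    mount /dev/mapper/mpatha /mnt/tmp

    # Back up the configuration directory, then clear its contents as the reply suggests.
    cp -a /mnt/tmp/CONFIGS /root/CONFIGS.backup
    rm -f /mnt/tmp/CONFIGS/*
    umount /mnt/tmp

    # Remount as Lustre; /mnt/mdt is a placeholder mount point.
    mount -t lustre /dev/mapper/mpatha /mnt/mdt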

Re: [lustre-discuss] Is there a ceiling of lustre filesystem a client can mount

2020-07-15 Thread Jongwoo Han
I think your question is ambiguous. What ceiling do you mean? Total storage capacity? Number of disks? Number of clients? Number of filesystems? Please be clearer about it. Regards, Jongwoo Han. On Wed, Jul 15, 2020 at 3:29 PM, 肖正刚 wrote: > Hi, all > Is there a ceiling for a Lustre filesyste

Re: [lustre-discuss] Lnet Self Test

2019-12-04 Thread Jongwoo Han
Have you tried MTU >= 9000 bytes (AKA jumbo frames) on the 25G Ethernet NICs and the switch? If it is set to 1500 bytes, the Ethernet + IP + TCP headers take up quite a large share of each packet, reducing the bandwidth available for data. Jongwoo Han. On Thu, Nov 28, 2019 at 3:44 AM, Pinkesh Valdria wrote: > Thanks A
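A small sketch of checking and enabling jumbo frames on a Linux node; the interface name eth0 and the peer address 10.0.0.2 are placeholders, and the switch ports must also allow MTU 9000:

    # Show the current MTU of the interface.
    ip link show dev eth0

    # Raise the MTU to 9000 (must also be configured on the switch and the peers).
    ip link set dev eth0 mtu 9000

    # Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header),
    # with fragmentation disabled so an undersized path fails loudly.
    ping -M do -s 8972 -c 3 10.0.0.2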

Re: [lustre-discuss] one ost down

2019-11-15 Thread Jongwoo Han
... better do first. If it happens again after the rebuild, another thing to try is to shut down the OSSes by completely powering off the entire shelves. Regards, Jongwoo Han. On Fri, Nov 15, 2019 at 6:01 PM, Einar Næss Jensen wrote: > Hello dear lustre community. > We have a lustre file system, w

Re: [lustre-discuss] [SPAMMY (6.924)] Lustre in HA-LVM Cluster issue

2019-08-28 Thread Jongwoo Han
>> 30: lustre-OST001e_UUID INACTIVE
>> 33: lustre-OST0021_UUID INACTIVE
>> 40: lustre-OST0028_UUID INACTIVE
>> 44: lustre-OST002c_UUID INACTIVE
>> 50: lustre-OST0032_UUID INACTIVE
>> 55: lustre-OST0037_UUID INACTIVE
>> 60: lustre-OST003c_UUID INACTIVE

Re: [lustre-discuss] Upgrading CentOS / Lustre....

2019-08-28 Thread Jongwoo Han
> ... be upgraded first before the clients. > Cheers. > Phill.

Re: [lustre-discuss] Frequency vs Cores for OSS/MDS processors

2019-07-05 Thread Jongwoo Han

Re: [lustre-discuss] lustre 2.7 panic when mounting

2019-06-10 Thread Jongwoo Han

Re: [lustre-discuss] ZFS and multipathing for OSTs

2019-04-26 Thread Jongwoo Han
... zpool with zpool replace. Try this in your test environment and tell us if you find anything interesting in the syslog. In my case, replacing a single disk in a multipathd+ZFS pool triggered a massive udevd partition scan. Thanks, Jongwoo Han. On Fri, Apr 26, 2019 at 3:44 AM, Kurt Strosahl wrote: > G
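A minimal sketch of the disk replacement being discussed; the pool name ostpool and the multipath device names are placeholders:

    # Check which device is faulted or offline.
    zpool status ostpool

    # Replace the failed multipath device with the new one; ZFS then
    # resilvers onto the replacement.
    zpool replace ostpool /dev/mapper/mpathc /dev/mapper/mpathz

    # Watch the resilver progress and check syslog for udev activity.
    zpool status -v ostpool
    tail -f /var/log/messages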

Re: [lustre-discuss] file system mounting as read only

2019-03-23 Thread Jongwoo Han

Re: [lustre-discuss] Suspended jobs and rebooting lustre servers

2019-02-28 Thread Jongwoo Han
> We do have vendor support and have engaged them. I wanted to ask the community and get some feedback. > Thanks, -Raj

Re: [lustre-discuss] Draining and replacing OSTs with larger volumes

2019-02-28 Thread Jongwoo Han
... forced, a rolling migrate and upgrade should be planned carefully. It would be better to set up a correct procedure checklist by practicing in a virtual environment with identical versions. > Cheers > Scott

Re: [lustre-discuss] index is already in use problem

2019-01-23 Thread Jongwoo Han

Re: [lustre-discuss] Lustre Sizing

2019-01-03 Thread Jongwoo Han
https://github.com/ewwhite/zfs-ha/wiki. When the import/export from the failed server to the live server is done, it is straightforward to mount a ZFS-backed Lustre OST with the "mount -t lustre" command. This can be integrated into the above ZFS heartbeat script. On Fri, Jan 4, 2019 at 2:42 PM ANS wrote: > Thank yo
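A rough sketch of that failover sequence; the pool name ostpool, the OST dataset ostpool/ost0, and the mount point /mnt/ost0 are all placeholders:

    # On the failed (or fenced) server, release the pool if it is still reachable.
    zpool export ostpool

    # On the surviving server, import the pool, forcing if the old host
    # never exported it cleanly.
    zpool import -f ostpool

    # Mount the ZFS-backed OST as Lustre; an HA/heartbeat script can run
    # these same steps automatically after fencing.
    mount -t lustre ostpool/ost0 /mnt/ost0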

Re: [lustre-discuss] Lustre Sizing

2019-01-02 Thread Jongwoo Han
>> ... regarding this. Also, from a performance perspective, what are the ZFS and Lustre parameters to be tuned? >> Thanks, >> ANS.

Re: [lustre-discuss] zpool features, recordsize, post-upgrade?

2018-10-09 Thread Jongwoo Han
Hi Marion, Enabling PFL and relocating files will double space consumption, so the safe way is not to enable the new features. Among other things, compression may be a good idea to try, but remember it will increase OSS CPU utilization. Regards, Jongwoo Han. On Tue, Oct 9, 2018 at 9:00 AM Marion Hakanson
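A small sketch of trying compression on a ZFS-backed OST; the dataset name ostpool/ost0 is a placeholder, only newly written data gets compressed, and OSS CPU usage should be watched afterwards:

    # Enable lz4 compression on the OST dataset (applies to new writes only).
    zfs set compression=lz4 ostpool/ost0

    # Confirm the property and watch the achieved ratio over time.
    zfs get compression,compressratio ostpool/ost0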