> ...kernel-4.18.0-513.18.1.el8_9.x86_64. Or must we upgrade our lustre package
> to 2.15.4?
>
>
> Thanks,
>
> Khoi
>
>
> ...Available: libzfs2-0.8.5-1.el7.x86_64 (zfs)
>        libzfs.so.2()(64bit)
>    Available: libzfs2-0.8.6-1.el7.x86_64 (zfs)
>        libzfs.so.2()(64bit)
> Error: Package: lustre-osd-zfs-mount-2.12.8_6_g5457c37-1.el7.x86_64
> (lustre-server)
>    Requires: libnvpair.so.1()(64bit)
>    Available: libnvpair1-0.7.13-1.el7.x86_64 (lustre-server)
>        libnvpair.so.1()(64bit)
>    Available: libnvpair1-0.8.5-1.el7.x86_64 (zfs)
>        libnvpair.so.1()(64bit)
>    Available: libnvpair1-0.8.6-1.el7.x86_64 (zfs)
>        libnvpair.so.1()(64bit)
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest
--
Jongwoo Han
+82-505-227-6108
When a customer asks me about backup, I recommend putting the files into AWS Glacier.
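Not verbatim from the thread, but a minimal sketch of that approach with the
AWS CLI (the bucket name and paths are placeholders):

  # bundle a directory from the mounted Lustre filesystem
  tar czf /tmp/project1.tar.gz -C /lustre project1

  # upload directly into the Glacier storage class
  aws s3 cp /tmp/project1.tar.gz \
      s3://my-backup-bucket/lustre/project1.tar.gz \
      --storage-class GLACIER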
Thanks,
Jongwoo Han
On Sun, Oct 17, 2021 at 9:06 AM Sid Young via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:
> G'Day all,
>
> Apart from rsync'ing all the data on a mounted lustre filesystem
...OST configurations available today cannot endure such workloads. Yet when
200Gb/s of throughput must be provided to an OSS, there is an alternative:
dynamic load balancing over an LNet multi-rail configuration (using 2 x 100Gb
ports per OSS).
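A minimal sketch of such a setup with lnetctl, assuming a TCP LNet and
placeholder interface names (eth0/eth1):

  # configure LNet multi-rail on the OSS: one network, two 100GbE ports
  lnetctl lnet configure
  lnetctl net add --net tcp --if eth0,eth1

  # verify both NIDs appear
  lnetctl net show --net tcp

With multi-rail-capable peers, LNet can then balance traffic across both
interfaces.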
regards,
Jongwoo Han
On Tue, Oct 19, 2021 at 7:31 PM 홍재기 via
It seems the CONFIGS directory on your MDT is corrupt.
I suggest mounting the MDT directly as a plain local filesystem (# mount
/dev/mapper/mpatha /mnt/tmp), backing up the CONFIGS directory, erasing the
contents of the CONFIGS directory, and then mounting it as a Lustre filesystem
again.
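A rough sketch of that sequence, assuming an ldiskfs-backed MDT (ldiskfs
targets are typically mounted with -t ldiskfs; the device name is from the
original post, the mount points are placeholders):

  # with Lustre stopped on the MDS, mount the MDT's backing filesystem
  mount -t ldiskfs /dev/mapper/mpatha /mnt/tmp

  # back up the on-disk configuration logs, then clear them
  cp -a /mnt/tmp/CONFIGS /root/CONFIGS.backup
  rm -f /mnt/tmp/CONFIGS/*

  umount /mnt/tmp

  # remount the device as a Lustre target (mount point is a placeholder)
  mount -t lustre /dev/mapper/mpatha /mnt/mdt

Practice this on a scratch system first; clearing CONFIGS is destructive.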
Regards,
Jongwoo Han
On Thu, Oct 28, 2021 at 2:08 AM Рачко Антон
I think your question is ambiguous.
Which ceiling do you mean? Total storage capacity? Number of disks? Number of
clients? Number of filesystems?
Please be more specific about it.
Regards,
Jongwoo Han
On Wed, Jul 15, 2020 at 3:29 PM 肖正刚 wrote:
> Hi, all
> Is there a ceiling for a Lustre filesystem
Have you tried an MTU >= 9000 bytes (a.k.a. jumbo frames) on the 25G Ethernet
NICs and the switch?
If it is set to 1500 bytes, the Ethernet + IP + TCP headers consume a
noticeable fraction of every packet, reducing the bandwidth available for
data.
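A quick sketch of checking and raising it (the interface name is a
placeholder; the switch ports must be configured to match):

  # show the current MTU
  ip link show eth0

  # raise it to 9000 for jumbo frames
  ip link set dev eth0 mtu 9000

  # confirm a full-size frame passes unfragmented
  # (8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000)
  ping -M do -s 8972 <peer-ip>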
Jongwoo Han
On Thu, Nov 28, 2019 at 3:44 AM Pinkesh Valdria wrote:
> Thanks A
...better do first.
If it happens again after the rebuild, another thing to try is to shut down
the OSSes by completely powering off the entire shelves.
Regards,
Jongwoo Han
On Fri, Nov 15, 2019 at 6:01 PM Einar Næss Jensen wrote:
>
> Hello dear lustre community.
>
>
> We have a lustre file system, w
>> 30: lustre-OST001e_UUID INACTIVE
>>
>> 33: lustre-OST0021_UUID INACTIVE
>>
>> 40: lustre-OST0028_UUID INACTIVE
>>
>> 44: lustre-OST002c_UUID INACTIVE
>>
>> 50: lustre-OST0032_UUID INACTIVE
>>
>> 55: lustre-OST0037_UUID INACTIVE
>>
>> 60: lustre-OST003c_UUID INACTIVE
> be upgraded first before the clients.
>
> Cheers.
>
> Phill.
...zpool with zpool replace.
Try this in your test environment and tell us if you find anything
interesting in the syslog.
In my case, replacing a single disk in a multipathd+ZFS pool triggered a
massive udevd partition scan.
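A minimal sketch of that test, with placeholder pool and device names:

  # replace the failed disk in the pool
  zpool replace tank /dev/mapper/mpathb /dev/mapper/mpathc

  # watch resilver progress
  zpool status tank

  # watch for udev partition-scan storms while the resilver runs
  journalctl -f -u systemd-udevd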
Thanks
Jongwoo Han
On Fri, Apr 26, 2019 at 3:44 AM Kurt Strosahl wrote:
> G
> We do have vendor support and have engaged them. I wanted to ask the
> community and get some feedback.
> >>
> >> Thanks,
> >> -Raj
...s forced, a rolling migration and upgrade should be planned carefully. It
is better to set up a correct procedure checklist by practicing in a virtual
environment with identical versions.
> Cheers
> Scott
> Thanks,
>
> BR,
> Jae-Hyuck
>
>
https://github.com/ewwhite/zfs-ha/wiki
Once the import/export from the failed server to the live server is done, it
is straightforward to mount a ZFS-backed Lustre OST with "mount -t lustre
<pool>/<dataset> <mountpoint>". This can be integrated into the ZFS heartbeat
script above.
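A rough sketch of that failover step (pool, dataset, and mount point names
are placeholders):

  # on the surviving node: force-import the pool from the failed server
  zpool import -f ostpool

  # mount the ZFS-backed OST as a Lustre target
  mkdir -p /mnt/ost0
  mount -t lustre ostpool/ost0 /mnt/ost0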
On Fri, Jan 4, 2019 at 2:42 PM ANS wrote:
> Thank yo
...arding this.
>>>>>>
>>>>>> Also, from a performance perspective, what are the ZFS and Lustre
>>>>>> parameters to be tuned?
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> ANS.
>>>>> --
>>>>> --
>>>>> Jeff Johnson
>>>>> Co-Founder
>>>>> Aeon Computing
>>>>>
>>>>> jeff.john...@aeoncomputing.com
>>>>> www.aeoncomputing.com
>>>>> t: 858-412-3810 x1001 f: 858-412-3845
>>>>> m: 619-204-9061
>>>>>
>>>>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>>>>
>>>>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>>>>
Hi Marion,
Enabling PFL and relocating the files will double space consumption, so the
safe way is not to enable the new features. Among other things, compression
may be a good idea to try, but remember it will increase OSS CPU utilization.
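For reference, if you do try PFL later, a composite layout is set per
directory with lfs setstripe; a minimal sketch with a placeholder mount point
and extent boundaries:

  # new files: first 1 GiB on one stripe, the remainder striped over 4 OSTs
  lfs setstripe -E 1G -c 1 -E -1 -c 4 /mnt/lustre/dir

  # inspect the resulting layout
  lfs getstripe /mnt/lustre/dir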
Regards,
Jongwoo Han
On Tue, Oct 9, 2018 at 9:00 AM Marion Hakanson