Re: [lustre-discuss] Best practice to backup MDT zfs based offline
update: I tried using this:

    zfs send -R mdtpool/mdt0@august | gzip > mdt-aug.gz

Zee

On Wed, Aug 21, 2019 at 10:16 PM Zeeshan Ali Shah wrote:
> Dear All, what are best practices to back up a ZFS-based MDT offline? One
> option is zfs send/receive to a remote machine. Any other option for a
> weekly backup?
>
> What about tar?
>
> Please advise
>
> /Zee
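For context, the compressed stream above can be turned back into a dataset with zfs receive. Below is a minimal sketch of one weekly cycle (snapshot, send, restore); the snapshot name, the backuphost, and the restore/target dataset names are illustrative placeholders, not anything from the thread:

    # take a named snapshot of the (unmounted) MDT dataset
    zfs snapshot -r mdtpool/mdt0@weekly-2019-08-21

    # stream it out, compressed, to a file -- or straight to a remote pool
    zfs send -R mdtpool/mdt0@weekly-2019-08-21 | gzip > mdt-weekly.gz
    # or: zfs send -R mdtpool/mdt0@weekly-2019-08-21 | ssh backuphost zfs receive -F backuppool/mdt0

    # restore the compressed stream into a dataset
    gunzip -c mdt-weekly.gz | zfs receive -F mdtpool/mdt0-restore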
[lustre-discuss] Best practice to backup MDT zfs based offline
Dear All, what are best practices to back up a ZFS-based MDT offline? One
option is zfs send/receive to a remote machine. Any other option for a weekly
backup?

What about tar?

Please advise

/Zee
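As a point of comparison, a file-level tar backup would have to preserve the Lustre extended attributes stored on the MDT. A rough sketch follows, assuming the dataset can be mounted plain-ZFS and that GNU tar's xattr options capture what Lustre needs (the mount point and archive path are placeholders; check the backup chapter of the Lustre manual before relying on this):

    # mount the MDT dataset directly (not as Lustre); pool/dataset and path are placeholders
    mount -t zfs mdtpool/mdt0 /mnt/mdt

    # archive it with extended attributes included
    tar --xattrs --xattrs-include='*' -czf /backup/mdt-weekly.tgz -C /mnt/mdt .

    umount /mnt/mdt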
Re: [lustre-discuss] Wanted: multipath.conf for dell ME4 series arrays
Hi Jeff,

On Wed, 21 Aug 2019 at 17:34, Jeff Johnson wrote:
> What underlying Lustre target filesystem? (assuming ldiskfs with a hardware
> RAID array)

correct - ldiskfs, using 8x RAID6 LUNs per ME4084

> What does your current multipath.conf look like?

We just had the blacklist, WWNs and mappings; we were missing any ME4-specific
device {} settings. However, I've since found the magic incantation from
https://downloads.dell.com/manuals/common/powervault-me4-series-linux-dell-emc-2018-3924-bp-l_wp_en-us.pdf,
notably:

    device {
        vendor "DellEMC"
        product "ME4"
        path_grouping_policy "group_by_prio"
        path_checker "tur"
        hardware_handler "1 alua"
        prio "alua"
        failback immediate
        rr_weight "uniform"
        path_selector "service-time 0"
    }

and it seems to be working a whole lot better :-)
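After dropping a stanza like that into /etc/multipath.conf, one way to pick it up and confirm the ALUA grouping is along these lines (a sketch, not something from the thread):

    # re-read the configuration without rebooting
    systemctl reload multipathd

    # confirm each LUN now shows its paths split into priority groups (ALUA),
    # with the active/optimised paths in the higher-priority group
    multipath -ll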
Re: [lustre-discuss] Wanted: multipath.conf for dell ME4 series arrays
Andrew,

The ME4084 is a dual-controller active/active hardware RAID array. Disclosing
some config data could be helpful.

1. What underlying Lustre target filesystem? (assuming ldiskfs with a hardware
   RAID array)
2. What does your current multipath.conf look like?

--Jeff

On Tue, Aug 20, 2019 at 11:47 PM Andrew Elwell wrote:
> Hi folks,
>
> we're seeing MMP reluctance to hand over the (unmounted) OSTs to the
> partner pair on our shiny new ME4084 arrays,
>
> Does anyone have the device {} settings they'd be willing to share?
> My gut feel is we've not defined path failover properly and some
> timeouts need tweaking
>
> (4x ME4084s per pair of 740 servers with SAS cabling, Lustre 2.10.8 and
> CentOS 7.x)
>
> Many thanks
>
> Andrew

--
Jeff Johnson
Co-Founder
Aeon Computing
jeff.johnson at aeoncomputing dot com
www.aeoncomputing.com
4170 Morena Boulevard, Suite C - San Diego, CA 92117
High-Performance Computing / Lustre Filesystems / Scale-out Storage
[lustre-discuss] Wanted: multipath.conf for dell ME4 series arrays
Hi folks,

we're seeing MMP reluctance to hand over the (unmounted) OSTs to the partner
pair on our shiny new ME4084 arrays.

Does anyone have the device {} settings they'd be willing to share? My gut
feel is we've not defined path failover properly and some timeouts need
tweaking.

(4x ME4084s per pair of 740 servers with SAS cabling, Lustre 2.10.8 and CentOS
7.x)

Many thanks

Andrew
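For what it's worth, when MMP is what refuses the failover mount, the MMP settings baked into the ldiskfs target can be inspected from either server; a sketch, with the device path as a placeholder:

    # show the MMP settings of an ldiskfs (ext4) target; device path is a placeholder
    dumpe2fs -h /dev/mapper/ost0001 | grep -i mmp

    # look for "MMP block number" and "MMP update interval" (seconds); the
    # importing node has to see that block go stale before it will mount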