Re: [Lustre-discuss] df running slow

2014-11-26 Thread Alexander I Kulyavtsev
Try /usr/sbin/lctl set_param osc.lustre-OST0001-*.active=0 as a workaround on the client host, with the proper filesystem and OST names for all retired OSTs. We had 'df' hanging on a client after we retired some OSTs on a 1.8.9 system, and now keep this mantra in rc.local. What client version do y
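
The workaround above can be sketched as a small loop. The filesystem name "lustre" and the OST indices below are placeholders for your own retired OSTs:

```shell
# print_deactivate FSNAME IDX... - emit the lctl commands that mark the
# client-side OSC for each retired OST inactive (dry run; pipe to sh as
# root, or paste the output into rc.local, to actually apply it).
print_deactivate() {
    fs=$1; shift
    for idx in "$@"; do
        printf 'lctl set_param osc.%s-OST%s-*.active=0\n' "$fs" "$idx"
    done
}

print_deactivate lustre 0001 0002 0003
```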

Re: [lustre-discuss] MDT partition getting full

2015-04-22 Thread Alexander I Kulyavtsev
Before you remounted as ldiskfs, what is the output of: mount -t lustre ; lfs df -hi ; lfs df -h — the first command is to verify the fs is actually mounted as lustre. Alex. On Apr 22, 2015, at 4:23 PM, Colin Faber mailto:cfa...@gmail.com>> wrote: You could look at your MDT partition directly, eithe
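
For reference, the three checks spelled out separately (output will of course vary per system):

```shell
mount -t lustre   # lists only lustre-type mounts; empty output means the
                  # target is not mounted as lustre (e.g. it was remounted
                  # as plain ldiskfs)
lfs df -hi        # per-target inode usage, human readable
lfs df -h         # per-target space usage, human readable
```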

Re: [lustre-discuss] OST partition sizes

2015-04-29 Thread Alexander I Kulyavtsev
What range of record sizes did you use for IOR? This is more important than file size. 100MB is small; the overall data size (# of files) should be twice the memory. I ran a series of tests for small record sizes on raidz2 10+2; I will re-run some tests after upgrading to 0.6.4.1 . Single file performance

Re: [lustre-discuss] OST partition sizes

2015-04-29 Thread Alexander I Kulyavtsev
ot like a client writing or reading 2 files. I didn't bother looking at 1 thread. Later I just started doing 100MB tests since it's a very common file size for us. Plus I didn't see real big difference once size gets bigger than that. Scott On 4/29/2015 10:24 AM, Alexander I

Re: [lustre-discuss] zfs -- mds/mdt -- ssd model / type recommendation

2015-05-05 Thread Alexander I Kulyavtsev
How much space is used per inode on the MDT in a production installation? What is the recommended size of the MDT? I'm presently at about 10 KB/inode, which seems too high compared with ldiskfs. I ran out of inodes on a zfs mdt in my tests and zfs got "locked": the MDT zpool got all its space used. We have zpool creat

Re: [lustre-discuss] zfs -- mds/mdt -- ssd model / type recommendation

2015-05-05 Thread Alexander I Kulyavtsev
I checked a lustre 1.8.8 ldiskfs MDT: 106*10^6 inodes take 610GB on the MDT, or 3.5 KB/inode. I thought it was less. So the MDT size is just a factor of three more compared to old ldiskfs. How many files do you plan to have? Alex. On May 5, 2015, at 12:16 PM, Alexander I Kulyavtsev mailt

Re: [lustre-discuss] OpenSFS / EOFS Presentations index

2015-05-11 Thread Alexander I Kulyavtsev
DocDB can be handy to manage documents. http://docdb-v.sourceforge.net/ Check "public" instance here to see examples: https://cd-docdb.fnal.gov/ Alex. On May 11, 2015, at 8:46 PM, Scott Nolin mailto:scott.no...@ssec.wisc.edu>> wrote: It would be really convenient if all the presentations for v

Re: [lustre-discuss] "tag" feature of ltop

2015-05-21 Thread Alexander I Kulyavtsev
It may make sense to keep tagging. I marked OSS dslustre15 and then switched to the OST view. I have all OSTs on the marked OSS highlighted: 005c F dslustre13 10160 1 0 16503001 95 80 005d F dslustre14 10160 0 0 07472001 95 79 00

Re: [lustre-discuss] "tag" feature of ltop

2015-05-21 Thread Alexander I Kulyavtsev
, Olaf P. mailto:faala...@llnl.gov>> wrote: Alexander, Thanks for your reply. ltop also lets you sort by OSS, so that the OSTs sharing an OSS are all next to each other. Do you find tagging more helpful than that? Olaf P. Faaland LLNL From: Alexander I Kuly

[lustre-discuss] lustre manual formatting error

2015-06-18 Thread Alexander I Kulyavtsev
I believe path in /proc/fs/lustre/obdfilter/*/brw_stats got broken in this manual subsection: https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml > 25.3.4.2. Visualizing Results > > ... skip ... > > It is also useful to monitor and record

Re: [lustre-discuss] lustre 2.5.3 ost not draining

2015-07-10 Thread Alexander I Kulyavtsev
Hi Kurt, to keep traffic away from an almost-full OST we usually set the ost in degraded mode as described in the manual: > Handling Degraded OST RAID Arrays > To mark the OST as degraded, use: > lctl set_param obdfilter.{OST_name}.degraded=1 Alex. On Jul 10, 2015, at 10:13 AM, Kurt Strosahl wrote: > No,
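
Spelled out for a concrete (made-up) target name, the manual's recipe looks like this, run on the OSS:

```shell
# mark OST lfs-OST0012 degraded so the MDS avoids placing new objects
# on it; set it back to 0 once the OST has drained:
lctl set_param obdfilter.lfs-OST0012.degraded=1
lctl get_param obdfilter.lfs-OST0012.degraded   # verify
```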

Re: [lustre-discuss] lustre 2.5.3 ost not draining

2015-07-10 Thread Alexander I Kulyavtsev
I think so, try it. We do set the ost degraded on 1.8 when an ost nears 95% and we migrate data to another ost. On 1.8, lfs_migrate uses 'rm' and the objects are indeed deallocated. Alex On Jul 10, 2015, at 10:55 AM, Kurt Strosahl wrote: > Will that let deletes happen against it? > > w/r, > Kurt > > --

[lustre-discuss] zfs ? Re: interrupted tar archive of an mdt ldiskfs

2015-07-13 Thread Alexander I Kulyavtsev
What about zfs MDT backup/restore in lustre 2.5.3? I took a look at the referenced manual pages - they tell nothing about zfs MDT backup. I believe we just use zfs send/receive in this case. Do I need to fix the OI / FID mapping? Shall I run offline lfsck and wait? Alex. On Jul 13, 2015, at 2

Re: [lustre-discuss] lustre 2.5.3 ost not draining

2015-07-13 Thread Alexander I Kulyavtsev
Hi Kurt, The situation with "mount/unmount is necessary to trigger the cleanup" is similar to the one described in zfs bug 1548: https://github.com/zfsonlinux/zfs/issues/1548 Reportedly it was fixed in zfs 0.6.3; the update to 0.6.4.1 is recommended, and 0.6.4.2 was recently released. The bug is

Re: [lustre-discuss] lustre 2.5.3 ost not draining

2015-07-14 Thread Alexander I Kulyavtsev
Since zfs on linux 0.6.4:
[root@lfsa ~]# zpool get fragmentation,leaked zpla
NAME PROPERTY VALUE SOURCE
zpla fragmentation 0% -
zpla leaked 0 default
or do "get all ..." and look for the fragmentation entry. Alex. On Jul 14, 2015, at 7:23 AM, Kurt Strosahl wrote: >

Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Alexander I Kulyavtsev
Hi Oleg, does ZFS-based lustre support FIEMAP? We have lustre 2.5 with zfs installed. Otherwise we will need to set up a separate test system with ldiskfs. But: please review my separate reply; I think this can be addressed through multirail, NRS, and file striping. Best regards, Alex. On Aug 24, 2015

Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Alexander I Kulyavtsev
Wenji, you may take a look at 1.3. Lustre File System Storage and I/O and 1.3.1. Lustre File System and Striping Commands lfs getstripe lfs setstripe Lustre Network Request Scheduler https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/a
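
A minimal striping example with the two commands named above (mount point and values are illustrative):

```shell
lfs setstripe -c 4 -S 1M /mnt/lfs/outdir   # new files in outdir: 4 OSTs, 1MiB stripes
lfs getstripe /mnt/lfs/outdir/file.dat     # inspect the layout actually used
```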

Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Alexander I Kulyavtsev
Same question here. 6TB/65TB is 11%. In our case about the same fraction was "missing." My speculation was: it may happen if at some point between zpool and linux a value reported in TB is interpreted as TiB and then converted to TB. Or an unneeded conversion from MB to MiB is done twice, etc. He

Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Alexander I Kulyavtsev
estion if I knew his drive count > and drive size. > > Chris > > On 08/24/2015 02:12 PM, Alexander I Kulyavtsev wrote: >> Same question here. >> >> 6TB/65TB is 11% . In our case about the same fraction was "missing." >> >> My speculation was, It

Re: [lustre-discuss] zfs and luster 2.5.3.90

2016-01-15 Thread Alexander I Kulyavtsev
Frederick, thanks for the patch list! It is nice to know the patch set(s) actually running in production. We have been at zfs/spl 0.6.4.1 in production for the last six months with the 2.5.3 last GA release (Sept'14). Is tag 2.5.3.90 considered stable? I was cautious to use 2.5.3.90 as ther

Re: [lustre-discuss] strange lustre issues following removal of an OST

2016-01-19 Thread Alexander I Kulyavtsev
LU-642 has a similar assert message. It also reports: Lustre: setting import flintfs-OST_UUID INACTIVE by administrator request Do you have deactivated OSTs? Alex. On Jan 19, 2016, at 4:09 PM, Kurt Strosahl mailto:stros...@jlab.org>> wrote: All, On Monday morning we had to remove an OST

Re: [lustre-discuss] Inactivated ost still showing up on the mds

2016-01-26 Thread Alexander I Kulyavtsev
Hi Kurt, probably too late if you unlinked the files: did you take a zfs snapshot on the MDT and the damaged OST before removing the files? If so, it may be possible to mount the ost zfs as a regular zfs and pull out the objects corresponding to the files, using an mdt zfs snapshot to get the fids. Alex. On Jan 22, 2016, at 7:39 AM, Kurt

[lustre-discuss] lustre 1.8.9 client with LLNL server 2.5.3 LBUG

2016-01-27 Thread Alexander I Kulyavtsev
Does anyone have experience running lustre 1.8.9 client with LLNL server 2.5.3 (zfs)? I was almost instantly getting LBUG related to IGIF FID assertion after the mount: dsg0515 kernel: LustreError: 30899:0:(mdc_fid.c:334:fid_le_to_cpu()) ASSERTION(fid_is_igif(dst) || fid_ver(dst) == 0) failed:

Re: [lustre-discuss] Questions about migrate OSTs from ldiskfs to zfs

2016-03-01 Thread Alexander I Kulyavtsev
Please see inlined. On Feb 26, 2016, at 6:36 PM, Dilger, Andreas mailto:andreas.dil...@intel.com>> wrote: On Feb 23, 2016, at 05:24, Fernando Perez mailto:fpe...@icm.csic.es>> wrote: Hi all. ... snip... - Do you recommend doing a lustre update before replacing the OSTs with the new zfs OSTs? L

Re: [lustre-discuss] Questions about migrate OSTs from ldiskfs to zfs

2016-03-01 Thread Alexander I Kulyavtsev
016, at 4:14 PM, Christopher J. Morrone wrote: On 03/01/2016 09:18 AM, Alexander I Kulyavtsev wrote: is tag 2.5.3.90 considered stable? No. Generally speaking you do not want to use anything with number 50 or greater for the fourth number unless you are helping out with testing during the dev

Re: [lustre-discuss] Error on a zpool underlying an OST

2016-03-11 Thread Alexander I Kulyavtsev
You lost one "file" only: 0x2c90f. I would take a zfs snapshot on the ost, mount it as zfs, and try to find the lustre FID of the file. If that does not work, I guess zdb with a high verbosity level can help to pinpoint the broken zfs object, as in "zdb: Examining ZFS At Point-Blank Range," and see what it is
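
If the object id really is 0x2c90f, its location inside the OST namespace can be computed. The pool/dataset names below are assumptions, and the O/&lt;seq&gt;/d&lt;N&gt; layout (32 subdirectories) is the usual OST object namespace:

```shell
# OST objects of sequence 0 live under O/0/d$((objid % 32))/objid
objid=$((0x2c90f))        # 182543 decimal
subdir=$((objid % 32))    # 15
echo "O/0/d${subdir}/${objid}"

# with a snapshot mounted read-only (names are examples):
#   zfs snapshot ostpool/ost0@debug
#   mount -t zfs ostpool/ost0@debug /mnt/snap
#   ls -l /mnt/snap/O/0/d${subdir}/${objid}
```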

Re: [lustre-discuss] Lustre 2.8.0 released

2016-03-19 Thread Alexander I Kulyavtsev
You do not need to rebuild the kernel for a "pure" zfs system. The few server kernel patches are for ldiskfs optimizations. You still need to rebuild zfs, the lustre server and/or the lustre client. Client nodes may have different versions of kernels. You need to rebuild the client for the specific kernel version of

Re: [lustre-discuss] Rebuild server

2016-03-19 Thread Alexander I Kulyavtsev
Here are configuration files we keep in git in addition to scripts for lustre install and mdt/ost formatting. /etc/ldev.conf -- this file is common for all servers /etc/sysconfig/lustre -- common for all servers; changed zfs mountpoint /etc/modprob

Re: [lustre-discuss] Issue with installing zfs on lustre

2016-05-03 Thread Alexander I Kulyavtsev
Install zfs and spl from zfsonlinux.org Alex On May 4, 2016, at 12:40 AM, sohamm mailto:soh...@gmail.com>> wrote: Downloading packages: kmod-zfs-3.10.0-327.13.1.el7_l FAILED http://build.hpdd.intel.com/job/lustre-master/arch=x86_64%2Cbuild_type=server%2Cdistro=el7%2Cib_st

Re: [lustre-discuss] stripe count recommendation, and proposal for auto-stripe tool

2016-05-19 Thread Alexander I Kulyavtsev
"1) More even space usage across OSTs (mostly relevant for *really* big files, ..." When OSTs are almost full and a user writes a large file, it can overfill the OSTs. Having files striped over several OSTs somewhat mitigates this issue. 2) bandwidth ... It is better to benchmark the applicat

Re: [lustre-discuss] lnet router lustre rpm compatibility

2016-06-20 Thread Alexander I Kulyavtsev
On Jun 20, 2016, at 4:00 PM, Jessica Otey wrote: > All, > I am in the process of preparing to upgrade a production lustre system > running 1.8.9 to 2.4.3. I would like to know router compatibility matrix too and have it published together with client/server/IB compatibility matrix. It will be n

Re: [lustre-discuss] lnet router lustre rpm compatibility

2016-06-22 Thread Alexander I Kulyavtsev
Servers are upgraded first, or together with the clients. Alex. On Jun 22, 2016, at 11:01 AM, E.S. Rosenberg mailto:esr+lus...@mail.hebrew.edu>> wrote: I always understood the recommendation was to update the clients (and LNET Routers) before the servers and not the other way around? ___

Re: [lustre-discuss] ZFS backed OSS out of memory

2016-06-23 Thread Alexander I Kulyavtsev
1) https://github.com/zfsonlinux/zfs/issues/2581 suggests a few things to monitor in /proc . Searching for OOM at https://github.com/zfsonlinux/zfs/issues gives more hints on where to look. I guess the OOM is not necessarily caused by zfs/spl. Do you have lustre mounted on the OSS and some process writing to

Re: [lustre-discuss] MDT 100% full

2016-07-26 Thread Alexander I Kulyavtsev
Brian, do you have zfs 'frozen'? It can lock up when you have zero bytes left, and you cannot do much with zfs after that. You will need to remove a file on zfs itself or remove a snapshot to unfreeze zfs, so do not wait until it fills completely. To avoid deleting mdt objects after zfs locks up, I create f
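
One way to keep such an emergency reserve is a small reserved dataset on the MDT pool (the name below is an example); destroying it later frees space and "unfreezes" a 100%-full pool:

```shell
# hold back 1G on the pool; drop it with "zfs destroy mdtpool/reserved"
# if the MDT ever fills up:
zfs create -o refreservation=1G mdtpool/reserved
```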

Re: [lustre-discuss] client server communication half-lost, read-out?

2016-08-01 Thread Alexander I Kulyavtsev
I used this:
# lctl get_param printk
lnet.printk=warning error emerg console
# lctl set_param printk=+neterror (debug)
# lctl set_param printk=-neterror
Take a look at the "Diagnostic and Debugging Tools" chapter in the lustre manual. # lctl debug_list subs Subsystems: all_subs, undefined, mdc, md

[lustre-discuss] two lustre fs on same lnet was: Re: lustre clients cannot access different OSS groups on TCP and infiniband at same time

2016-08-03 Thread Alexander I Kulyavtsev
Hi Andreas, > the network names need to be unique if the same clients are connecting to both filesystems. What are the complications of having two lustre filesystems on the same lnet on the same IB? Does it have a performance impact (broadcasts, credits, buffers)? We have two (three) lustre fs facing cluste

Re: [lustre-discuss] ZFS not freeing disk space

2016-08-10 Thread Alexander I Kulyavtsev
It can be a zfs snapshot holding space on the ost. Or it can be a zfs issue: zfs not releasing space until reboot. Check the zfs bugs on the zfs wiki. Lustre shows the change in OST used space right away. zfs 0.6.3 is pretty old. We are using 0.6.4.1 with lustre 2.5.3 (there is zfs 0.6.4.2). You may need to pat

Re: [lustre-discuss] ZFS not freeing disk space

2016-08-10 Thread Alexander I Kulyavtsev
3 the possible workaround can be to set zfs xattr=sa, but that does not help with existing files on the OST. Alex. P.S. Apparently there's more than one way to leak the space. On Aug 10, 2016, at 6:07 PM, Alexander I Kulyavtsev mailto:a...@fnal.gov>> wrote: It can be zfs snapshot holdi
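
The xattr=sa workaround, spelled out for an example dataset name:

```shell
# store xattrs in the dnode rather than in separate spill blocks;
# affects only files created after the change:
zfs set xattr=sa ostpool/ost0
zfs get xattr ostpool/ost0   # verify
```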

[lustre-discuss] how to get objid in zfs?

2016-09-16 Thread Alexander I Kulyavtsev
What is the simple way to find a corrupted file reported in a lustre error, like: Sep 12 19:12:29 lfs7 kernel: LustreError: 10704:0:(ldlm_resource.c:1188:ldlm_resource_get()) lfs-OST0012: lvbo_init failed for resource 0x51b94b:0x0: rc = -2 0x51b94b looks like an objid; how do I find the fid or the file corr

Re: [lustre-discuss] migrating from a stand-alone mgs to a merged mgs

2016-09-23 Thread Alexander I Kulyavtsev
Do you want to put the mgs and mdt on the same node, or on the same partition? You may have the mgs and mdt on the same node on different partitions. It may be easier to do. The mgs partition is small. Alex. On Sep 23, 2016, at 3:07 PM, John White wrote: > Good Afternoon, > I’m trying to figure out how to ta

Re: [lustre-discuss] LustreError on ZFS volumes

2016-12-13 Thread Alexander I Kulyavtsev
It may be worth taking a zfs snapshot on the ost before mass changes happen on the ost, to investigate the original issue and just in case things get worse if the underlying zfs metadata are broken. Did you scrub the pool (snapshot/clone) before migrating files out? It will not fix the data, may fix metadata and

Re: [lustre-discuss] Building against kmod spl/zfs

2017-01-13 Thread Alexander I Kulyavtsev
Hi Brian, do you use an rpm-based system or something else? I do not yet use kmod zfs lustre (I am using dkms), but I do use kmod zfs on another zfs appliance. In case of an rpm-based system you need to install the zfs-release-1-5 rpm to configure yum. Yum may use prebuilt dkms modules. RHEL based systems have kA

Re: [lustre-discuss] Odd quota behavior with Lustre/ZFS

2017-02-09 Thread Alexander I Kulyavtsev
Yes, in lustre 2.5.3 after doing chgrp on a large subtree. IIRC, for three groups; the counts were small, different "negative" numbers, not 21. I can get more details tomorrow. Alex > On Feb 9, 2017, at 5:14 PM, Mohr Jr, Richard Frank (Rick Mohr) > wrote: > > Has anyone else encountered this “off b

Re: [lustre-discuss] Odd quota behavior with Lustre/ZFS

2017-02-16 Thread Alexander I Kulyavtsev
It is lustre 2.5.3 GA with a few patches. zfs-0.6.4.1 Alex. From: Mohr Jr, Richard Frank (Rick Mohr) Sent: Thursday, February 16, 2017 3:02 PM To: Alexander I Kulyavtsev Cc: Lustre discussion Subject: Re: [lustre-discuss] Odd quota behavior with Lustre/ZFS Alex, Were you

Re: [lustre-discuss] set OSTs read only ?

2017-07-12 Thread Alexander I Kulyavtsev
You may find advice from Andreas on this list (also attached below). I did not try setting fail_loc myself. In 2.9 there is the setting osp.*.max_create_count=0, described in LUDOC-305. We used to set the OST degraded as described in the lustre manual. It works most of the time, but at some point I saw lustr
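
For a concrete (hypothetical) OST index, the 2.9+ knob is set on the MDS via the corresponding OSP device:

```shell
# stop allocation of new objects on OST0012 from MDT0000; existing
# objects stay readable and can still be deleted, so the OST drains:
lctl set_param osp.lfs-OST0012-osc-MDT0000.max_create_count=0
```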

Re: [lustre-discuss] Interesting disk usage of tar of many small files on zfs-based Lustre 2.10

2017-08-03 Thread Alexander I Kulyavtsev
Lustre IO size is 1MB; you have a zfs record of 4MB. Do you see the IO rate change when the tar record size is set to 4 MB (tar -b 8192)? How many data disks do you have in the raidz2? zfs may write a few extra empty blocks to reduce fragmentation; IIRC this patch is on by default in zfs 0.7 to improve io rates f
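
tar's -b factor counts 512-byte blocks, so matching a 4 MiB zfs recordsize works out as:

```shell
record=$((4 * 1024 * 1024))   # desired tar record size in bytes
blocks=$((record / 512))      # 8192
echo "tar -b ${blocks} -cf archive.tar smalldir/"
```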

Re: [lustre-discuss] nodes crash during ior test

2017-08-07 Thread Alexander I Kulyavtsev
The Lustre wiki has sidebars on Testing and Monitoring; you may start with Benchmarking. There was a Benchmarking Working Group in OpenSFS. wiki: http://wiki.opensfs.org/Benchmarking_Working_Group mail list: http://lists.opensfs.org/listinfo.cgi/openbenchmark-opensfs.org It is actually a question to the list what is

Re: [lustre-discuss] Recompiling client from the source doesnot contain lnetctl

2018-01-23 Thread Alexander I Kulyavtsev
Andreas, It would be extremely helpful to have "rpmbuild --rebuild" build a lustre [client] rpm with the same content and functionality as the lustre[-client] rpm from the distro. The client rpm is rebuilt most often for different kinds of worker nodes, and I see several mails on this list reporting lnet

[lustre-discuss] ltop for lustre 2.10.2 ?

2018-02-01 Thread Alexander I Kulyavtsev
Is there a version of ltop that works with lustre 2.10.2? lmtmetric does not work on the mds; it is OK checking an ost on an oss.
# lmtmetric -m mdt
lmtmetric: error reading lustre MDS uuid from proc: Invalid argument
lmtmetric: mdt metric: Invalid argument
I suspect it is due to lmtmetric constantly reading

Re: [lustre-discuss] ltop for lustre 2.10.2 ?

2018-02-02 Thread Alexander I Kulyavtsev
tale, Giuseppe wrote: > > Hi Alex, > > We are aware of the problem. I'm going to try and develop a patch to handle > the move to /sys/ in the next week or so. > > Giuseppe > From: lustre-discuss on behalf of > Alexander I Kulyavtsev > Sent: Thursday, February

Re: [lustre-discuss] File locking errors.

2018-02-15 Thread Alexander I Kulyavtsev
Do you have the flock option in fstab for the lustre mount, or in the command you use to mount lustre on the client? Search for flock on the lustre wiki http://wiki.lustre.org/Mounting_a_Lustre_File_System_on_Client_Nodes or the lustre manual http://doc.lustre.org/lustre_manual.pdf Here are links where to start learning
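
A sketch of both variants (server NID and mount point are examples):

```shell
# one-off mount with flock enabled:
mount -t lustre -o flock mgs@o2ib:/lfs /mnt/lfs

# or the equivalent /etc/fstab entry:
#   mgs@o2ib:/lfs  /mnt/lfs  lustre  defaults,flock  0 0
```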

[lustre-discuss] Lustre LTS 2.10 on el6.x ... was Re: File locking errors.

2018-02-15 Thread Alexander I Kulyavtsev
Eli, since lustre version 2.10.1 the intel build server has server rpms for el6.9 with in-kernel ofed (but not on the download server), e.g. 2.10.3 GA https://build.hpdd.intel.com/job/lustre-b2_10/69/ It means lustre 2.10.x at least builds on el6.9, and I guess it shall be easier with a patchless server (zf

Re: [lustre-discuss] Lustre client 2.10.3 build/install problem on Centos 6.7

2018-03-23 Thread Alexander I Kulyavtsev
There are detailed build instructions on the lustre.org wiki: http://wiki.lustre.org/Compiling_Lustre I built and installed lustre client 2.10.3, and it works on SLF 6.x as below: > $ cat /etc/redhat-release > Scientific Linux Fermi release 6.9 (Ramsey) > $ uname -r > 2.6.32-696.1.1.el6.x86_64 >

Re: [lustre-discuss] bad performance with Lustre/ZFS on NVMe SSD

2018-04-10 Thread Alexander I Kulyavtsev
Ricardo, It can be helpful to see the output of these commands on the zfs pool host while you read files through a lustre client, and directly through zfs:
# zpool iostat -lq -y zpool_name 1
# zpool iostat -w -y zpool_name 5
# zpool iostat -r -y zpool_name 5
-q queue statistics -l Latency statistics -r Request

Re: [lustre-discuss] luster 2.10.3 lnetctl configurations not persisting through reboot

2018-04-17 Thread Alexander I Kulyavtsev
File /etc/lnet.conf is described on lustre wiki: http://wiki.lustre.org/Dynamic_LNet_Configuration_and_lnetctl Alex. On 4/17/18, 3:37 PM, "lustre-discuss on behalf of Kurt Strosahl" wrote: I configured an lnet router today with luster 2.10.3 as the lustre software. I then connfigured

Re: [lustre-discuss] Lustre 2.11 lnet troubleshooting

2018-04-17 Thread Alexander I Kulyavtsev
To the original question: lnetctl on the router node shows 'enable: 1'
# lnetctl routing show
routing:
- cpt[0]: …snip…
- enable: 1
Lustre 2.10.3-1.el6 Alex. On 4/17/18, 7:05 PM, "lustre-discuss on behalf of Faaland, Olaf P." wrote: Update: Joe pointed out "lnetctl set routin

Re: [lustre-discuss] lustre project quotas

2018-05-10 Thread Alexander I Kulyavtsev
Do you use zfs or ldiskfs on the OST? Zfs does not have project quotas yet. Alex. From: lustre-discuss on behalf of Einar Næss Jensen Date: Thursday, May 10, 2018 at 7:47 AM To: "lustre-discuss@lists.lustre.org" Subject: Re: [lustre-discuss] lustre project quotas ​Lustre server is 2.10.1 lustre

Re: [lustre-discuss] LUG 2018

2018-06-20 Thread Alexander I Kulyavtsev
Slides at: http://opensfs.org/lug-2018-agenda/ -A. From: lustre-discuss on behalf of "E.S. Rosenberg" Date: Wednesday, June 20, 2018 at 12:20 PM To: Lustre discussion Subject: [lustre-discuss] LUG 2018 Hi all, Are the talks online yet? Thanks, Eli ___

Re: [lustre-discuss] MOFED 4.4-1.0.0.0

2018-08-06 Thread Alexander I Kulyavtsev
Hi Megan, A standard lustre build works with the in-kernel ofed. To use Mellanox ofed you have to rebuild lustre: http://wiki.lustre.org/Compiling_Lustre Apparently there is a version of lustre 2.10.4 built with mlx ofed in the whamcloud download area: https://downloads.whamcloud.com/public/l

Re: [lustre-discuss] lustre vs. lustre-client

2018-08-10 Thread Alexander I Kulyavtsev
What about the lustre client in the upstream kernel? I guess lustre-common and lustre-client shall be packaged in a way that these rpms can be a drop-in replacement for the lustre client functionality in the upstream kernel, like today we have lustre with in-kernel IB or custom IB. Also there was a discussion to s

[lustre-discuss] lustre-monitoring email list

2018-08-23 Thread Alexander I Kulyavtsev
Hi Ken, all. Ken, could you please create a “lustre-monitoring” email list at http://lists.lustre.org ? The purpose of the list is to discuss the development of, and share experiences with, lustre monitoring solutions. Specifically, I would like to bring up a discussion of experiences using influ

Re: [lustre-discuss] separate SSD only filesystem including HDD

2018-08-31 Thread Alexander I Kulyavtsev
Thanks, Andreas! I’m looking at a similar configuration. Performance-wise, is zfs or ldiskfs recommended on NVMe OSTs? We are comfortable with zfs on the current HDD system; how much penalty will we pay for ldiskfs on NVMe? The zfs overhead can be different for high IOPS with NVMe; are there numbers? Alex

Re: [lustre-discuss] slow write performance from client to server

2018-10-15 Thread Alexander I Kulyavtsev
You can do a quick check with the 2.10.5 client by mounting lustre on the MDS if you do not have a free node to install the 2.10.5 client on. Do you have lnet configured with IB or 10GE? LNet defaults to tcp if not set. Can it be that you are connected through a slow management network? Alex. On 10/15/18, 6:41 PM, "
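
A sketch of that quick check (NIDs, paths, and sizes are examples):

```shell
# on the MDS: mount the fs with the locally installed client and time a write
mount -t lustre mds1@o2ib:/lfs /mnt/test
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=1024

# check which lnet networks (o2ib vs tcp) are actually configured:
lnetctl net show
```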

Re: [lustre-discuss] Command line tool to monitor Lustre I/O ?

2018-12-20 Thread Alexander I Kulyavtsev
1) cerebro + ltop still work. 2) telegraf + influxdb (collector, time series DB). Telegraf has input plugins for lustre ("lustre2"), zfs, and many others. Grafana to plot live data from the DB. Also, influxDB integrates with Prometheus. Basically, each component can feed data to different outpu

Re: [lustre-discuss] Migrating files doesn't free space on the OST

2019-01-17 Thread Alexander I Kulyavtsev
- you can re-run the command to find files residing on the ost, to see if the files are new or old. - zfs may have snapshots if you ever did snapshots; they take space. - removing data or snapshots has some lag before the blocks are released (tens of minutes), but I guess that is completed by now. - there can be
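
The first check can be done with lfs find (mount point and OST name are examples):

```shell
# list files that still have objects on the given OST; --ost accepts
# the target UUID (or index, depending on version):
lfs find /mnt/lfs --ost lfs-OST0012_UUID
```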

Re: [Lustre-discuss] GIT corrupted on lustre

2012-12-23 Thread Alexander I Kulyavtsev
Stackoverflow has a thread http://stackoverflow.com/questions/4254389/git-corrupt-loose-object with a reference to an article by Linus on how to recover: http://git.kernel.org/?p=git/git.git;a=blob;f=Documentation/howto/recover-corrupted-blob-object.txt;h=323b513ed0e0ce8b749672f589a375073a050b97;hb=HEAD Ale

Re: [lustre-discuss] Stop writes for users

2019-05-14 Thread Alexander I Kulyavtsev
There was a feature request, and there were corresponding LUs: LU-5703 - Lustre quiesce LU-7236 - connections on demand Alex. From: lustre-discuss on behalf of Robert Redl Sent: Tuesday, May 14, 2019 10:36 AM To: lustre-discuss@lists.lustre.org Subject: Re: [l

Re: [lustre-discuss] Lustre v2.12.3 Availability

2019-07-12 Thread Alexander I Kulyavtsev
Do you plan to have zfs 0.8.x in lustre 2.12.3 ? Or better to ask, do you test lustre 2.12.y with zfs 0.8.x ? Alex. From: lustre-discuss on behalf of Andreas Dilger Sent: Friday, July 12, 2019 4:02:34 PM To: Peter Jones Cc: lustre-discuss@lists.lustre.org; Tau

Re: [lustre-discuss] which zfs for lustre 2.12.3?

2019-09-27 Thread Alexander I Kulyavtsev
What are the plans for supporting zfs 0.8.x in lustre 2.12.x? 2.13 is the feature branch and 2.12.x is LTS. IIRC there was a note about a possibility to enable zfs 0.8 for tests in lustre. How do I enable 0.8.2 (when it is released) in 2.12.3? I guess I will need to rebuild lustre pointing to the proper z