Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Götz Waschk
Dear All,

I'm sorry, I cannot provide verbose zpool information anymore. I was in a
bit of a hurry to put the file system into production, which is why I have
reformatted the servers with ldiskfs.

On Tue, Aug 25, 2015 at 5:54 AM, Alexander I Kulyavtsev  wrote:
> I was assuming the question was about total space, as I struggled for some
> time to understand why I have 99 TB of total available space per OSS after
> installing ZFS Lustre, while the ldiskfs OSTs have 120 TB on the same
> hardware. The 20% difference was partially (10%) accounted for by the
> different raid6 / raidz2 configuration, but I was not able to explain the
> other 10%.

> For the question in the original post, I cannot make 24 TB from the
> "available" field of the df output:
> 207693094400 KiB "available" on his ZFS Lustre, 198082192080 KiB on ldiskfs
> Lustre.
> At the same time, the difference of the total space is
> 233548424256 - 207693153280 = 25855270976 KiB ≈ 24.08 TiB.

> Götz, could you please tell us what you meant by "available"?


I was comparing the Lustre file system sizes of the two configurations,
i.e. the space available for user data. I expected it to be the same,
that is, 218T for both file systems.

I understand that you have the same issue.

Regards, Götz Waschk


Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Alexander I Kulyavtsev
Hmm,
I was assuming the question was about total space, as I struggled for some time
to understand why I have 99 TB of total available space per OSS after
installing ZFS Lustre, while the ldiskfs OSTs have 120 TB on the same hardware.
The 20% difference was partially (10%) accounted for by the different raid6 /
raidz2 configuration, but I was not able to explain the other 10%.

For the question in the original post, I cannot make 24 TB from the "available"
field of the df output:
207693094400 KiB "available" on his ZFS Lustre, 198082192080 KiB on ldiskfs
Lustre.
At the same time, the difference of the total space is
233548424256 - 207693153280 = 25855270976 KiB ≈ 24.08 TiB.
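
For reference, a quick way to redo that arithmetic with bc (a sketch; the
1K-block totals are the ones from the two lfs df outputs quoted in this thread):

  echo "233548424256 - 207693153280" | bc                      # 25855270976 KiB
  echo "(233548424256 - 207693153280) / 1024^3" | bc -l        # ~24.08 TiB
  echo "(233548424256 - 207693153280) * 1024 / 10^12" | bc -l  # ~26.48 decimal TB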

Götz, could you please tell us what you meant by "available"?

Also,
in my case the output of Linux df on the OSS for the ZFS pool looks strange:
the zpool size is reported as 25T (why?), while the formatted OST, which takes
all the space in this pool, shows 33T:

[root@lfs1 ~]# df -h  /zpla-  /mnt/OST
Filesystem Size  Used Avail Use% Mounted on
zpla-   25T  256K   25T   1% /zpla-
zpla-/OST   33T  8.3T   25T  26% /mnt/OST
[root@lfs1 ~]# 

in bytes:

[root@lfs1 ~]# df --block-size=1  /zpla-  /mnt/OST
Filesystem 1B-blocks  Used  Available Use% Mounted on
zpla-      26769344561152      262144  26769344299008   1% /zpla-
zpla-/OST 35582552834048 9093386076160 26489164660736  26% /mnt/OST
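
To pull the exact byte counts straight from ZFS (a sketch; <pool> and
<ost-dataset> stand in for the truncated pool and dataset names above):

  # parsable, byte-exact space accounting for the pool root and the OST dataset
  zfs get -p used,available,referenced <pool> <pool>/<ost-dataset>
  # what the pool itself reports (raw, including parity)
  zpool get -p size,allocated,free <pool>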

The same OST as reported by Lustre:
[root@lfsa scripts]# lfs df 
UUID             1K-blocks        Used    Available Use% Mounted on
lfs-MDT_UUID   974961920  275328   974684544   0% /mnt/lfsa[MDT:0]
lfs-OST_UUID 34748586752  8880259840 25868324736  26% /mnt/lfsa[OST:0]
...

Compare:

[root@lfs1 ~]# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpla-      43.5T  10.9T  32.6T         -   16%    24%  1.00x  ONLINE  -
zpla-0001  43.5T  11.0T  32.5T         -   17%    25%  1.00x  ONLINE  -
zpla-0002  43.5T  10.8T  32.7T         -   17%    24%  1.00x  ONLINE  -
I realize ZFS reports raw disk space including parity blocks (48 TB = 43.5 TiB),
and everything else (like metadata and space for xattr inodes).

I cannot explain the difference between 40 TB (decimal) of data space
(10 * 4 TB drives) and the 35,582,552,834,048 bytes shown by df for the OST.

Best regards, Alex.

On Aug 24, 2015, at 7:52 PM, Christopher J. Morrone  wrote:

> I could be wrong, but I don't think that the original poster was asking 
> why the SIZE field of zpool list was wrong, but rather why the AVAIL 
> space in zfs list was lower than he expected.
> 
> I would find it easier to answer the question if I knew his drive count 
> and drive size.
> 
> Chris
> 
> On 08/24/2015 02:12 PM, Alexander I Kulyavtsev wrote:
>> Same question here.
>> 
>> 6 TB / 65 TB is about 11%. In our case about the same fraction was "missing."
>> 
>> My speculation was that it may happen if, at some point between zpool and
>> Linux, a value reported in TB is interpreted as TiB and then converted to TB,
>> or an unneeded MB-to-MiB conversion is done twice, etc.
>> 
>> Here are my numbers:
>> We have 12 * 4 TB drives per pool, which is 48 TB (decimal).
>> The zpool was created as raidz2 10+2.
>> zpool reports 43.5T.
>> The pool size should be 48T = 4T * 12, or 40T = 4T * 10 (depending on whether
>> zpool shows the space before or after RAID parity).
>> According to the Oracle ZFS documentation, "zpool list" returns the total
>> space without overheads, thus 48 TB should be reported by zpool instead of
>> 43.5 TB.
>> 
>> In my case, it looked like a conversion/interpretation error between TB
>> and TiB:
>> 
>> 48*1000*1000*1000*1000/1024/1024/1024/1024 = 43.65574568510055541992
>> 
>> 
>> At disk level:
>> 
>> ~/sas2ircu 0 display
>> 
>> Device is a Hard disk
>>   Enclosure # : 2
>>   Slot #  : 12
>>   SAS Address : 5003048-0-015a-a918
>>   State   : Ready (RDY)
>>   Size (in MB)/(in sectors)   : 3815447/7814037167
>>   Manufacturer: ATA
>>   Model Number: HGST HUS724040AL
>>   Firmware Revision   : AA70
>>   Serial No   : PN2334PBJPW14T
>>   GUID: 5000cca23de6204b
>>   Protocol: SATA
>>   Drive Type  : SATA_HDD
>> 
>> One disk size is about 4 TB (decimal):
>> 
>> 3815447*1024*1024 = 4000786153472
>> 7814037167*512  = 4000787029504
>> 
>> The vdev presents the whole disk to the zpool. There is some overhead; some
>> space is left in sdq9.
>> 
>> [root@lfs1 scripts]# head -4 /etc/zfs/vdev_id.conf
>> alias s0  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90c-lun-0
>> alias s1  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90d-lun-0
>> alias s2  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90e-lun-0
>> alias s3  /dev/disk/by-path/pci-:03:00.0-sas-

Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Christopher J. Morrone
I could be wrong, but I don't think that the original poster was asking 
why the SIZE field of zpool list was wrong, but rather why the AVAIL 
space in zfs list was lower than he expected.


I would find it easier to answer the question if I knew his drive count 
and drive size.


Chris

On 08/24/2015 02:12 PM, Alexander I Kulyavtsev wrote:

Same question here.

6 TB / 65 TB is about 11%. In our case about the same fraction was "missing."

My speculation was that it may happen if, at some point between zpool and
Linux, a value reported in TB is interpreted as TiB and then converted to TB,
or an unneeded MB-to-MiB conversion is done twice, etc.

Here are my numbers:
We have 12 * 4 TB drives per pool, which is 48 TB (decimal).
The zpool was created as raidz2 10+2.
zpool reports 43.5T.
The pool size should be 48T = 4T * 12, or 40T = 4T * 10 (depending on whether
zpool shows the space before or after RAID parity).

According to the Oracle ZFS documentation, "zpool list" returns the total space
without overheads, thus 48 TB should be reported by zpool instead of 43.5 TB.

In my case, it looked like a conversion/interpretation error between TB and
TiB:

48*1000*1000*1000*1000/1024/1024/1024/1024 = 43.65574568510055541992


At disk level:

~/sas2ircu 0 display

Device is a Hard disk
   Enclosure # : 2
   Slot #  : 12
   SAS Address : 5003048-0-015a-a918
   State   : Ready (RDY)
   Size (in MB)/(in sectors)   : 3815447/7814037167
   Manufacturer: ATA
   Model Number: HGST HUS724040AL
   Firmware Revision   : AA70
   Serial No   : PN2334PBJPW14T
   GUID: 5000cca23de6204b
   Protocol: SATA
   Drive Type  : SATA_HDD

One disk size is about 4 TB (decimal):

3815447*1024*1024 = 4000786153472
7814037167*512  = 4000787029504

The vdev presents the whole disk to the zpool. There is some overhead; some
space is left in sdq9.

[root@lfs1 scripts]# head -4 /etc/zfs/vdev_id.conf
alias s0  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90c-lun-0
alias s1  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90d-lun-0
alias s2  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90e-lun-0
alias s3  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90f-lun-0
...
alias s12  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa918-lun-0
...

[root@lfs1 scripts]# ls -l  /dev/disk/by-path/
...
lrwxrwxrwx 1 root root  9 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0 -> ../../sdq
lrwxrwxrwx 1 root root 10 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0-part1 -> ../../sdq1
lrwxrwxrwx 1 root root 10 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0-part9 -> ../../sdq9

Pool report:

[root@lfs1 scripts]# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpla-      43.5T  10.9T  32.6T         -   16%    24%  1.00x  ONLINE  -
zpla-0001  43.5T  11.0T  32.5T         -   17%    25%  1.00x  ONLINE  -
zpla-0002  43.5T  10.8T  32.7T         -   17%    24%  1.00x  ONLINE  -
[root@lfs1 scripts]#

[root@lfs1 ~]# zpool list -v zpla-0001
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpla-0001  43.5T  11.0T  32.5T         -   17%    25%  1.00x  ONLINE  -
  raidz2   43.5T  11.0T  32.5T         -   17%    25%
 s12  -  -  - -  -  -
 s13  -  -  - -  -  -
 s14  -  -  - -  -  -
 s15  -  -  - -  -  -
 s16  -  -  - -  -  -
 s17  -  -  - -  -  -
 s18  -  -  - -  -  -
 s19  -  -  - -  -  -
 s20  -  -  - -  -  -
 s21  -  -  - -  -  -
 s22  -  -  - -  -  -
 s23  -  -  - -  -  -
[root@lfs1 ~]#

[root@lfs1 ~]# zpool get all zpla-0001
NAME   PROPERTYVALUE   SOURCE
zpla-0001  size43.5T   -
zpla-0001  capacity25% -
zpla-0001  altroot -   default
zpla-0001  health  ONLINE  -
zpla-0001  guid547290297520142 default
zpla-0001  version -   default
zpla-0001  bootfs  -   default
zpla-0001  delegation  on  default
zpla-0001  autoreplace off default
zpla-0001  cachefile   -   

Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Dilger, Andreas
On 2015/08/24, 1:22 PM, "lustre-discuss on behalf of Alexander I
Kulyavtsev"  wrote:

>Hi Oleg,
>does ZFS-based Lustre support FIEMAP?
>
>We have Lustre 2.5 with ZFS installed. Otherwise we will need to set up a
>separate test system with ldiskfs.

The ZFS back-end code does not itself support FIEMAP, unlike ldiskfs.
There is an open ticket for this if someone is interested in implementing
FIEMAP support for ZFS:

https://jira.hpdd.intel.com/browse/LU-1941

but until that is done, the client-side FIEMAP can only get information
from ldiskfs.

Cheers, Andreas

>
>On Aug 24, 2015, at 11:06 AM, Drokin, Oleg  wrote:
>
>> Hello!
>> 
>> On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
>> 
>>> Hello, everybody,
>>> 
>>> I understand that ext2/3/4 support FIEMAP to get file extent mapping.
>>> 
>>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client
>>> get FIEMAP-like information on a Lustre file system?
>> 
>> Yes, Lustre does support fiemap.
>> You can look at the patched ext4progs; the filefrag included there works on
>> top of Lustre too, as an example.
>> 
>> lustre/tests/checkfiemap.c in the lustre source tree is another example
>> user of this functionality that you can consult.
>> 
>> Bye,
>>Oleg


-- 
Andreas Dilger

Lustre Software Architect
Intel High Performance Data Division




Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Dilger, Andreas
Of historical interest is that FIEMAP was actually developed for Lustre first, 
and was then added to the other filesystems when we pushed it upstream.  The 
older FIBMAP interface just wasn't scalable enough to handle the large files 
that Lustre is using.

Unfortunately, because the Lustre client was not in the kernel at the time, 
some of the FIEMAP features that Lustre uses are not included in the upstream 
FIEMAP (e.g. device reporting and device-ordered output).

You can use filefrag from any Lustre-patched e2fsprogs to output the layout for 
multi-striped files, or vanilla filefrag for single-striped files.
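
For example, a minimal session (a sketch; the mount point, file name, and sizes
are made up, and the -v output format depends on your e2fsprogs version):

  # write a test file on a Lustre mount, then dump its extents via FIEMAP
  dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=64
  filefrag -v /mnt/lustre/testfile   # one line per extent: logical/physical offset, length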

Cheers, Andreas
--
Andreas Dilger
Lustre Software Architect
Intel High Performance Data Division

On 2015/08/24, 9:57 AM, "lustre-discuss on behalf of Wenji Wu"
<lustre-discuss-boun...@lists.lustre.org on behalf of we...@fnal.gov> wrote:

Hello, everybody,

I understand that ext2/3/4 support FIEMAP to get file extent mapping.

Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
FIEMAP-like information on a Lustre file system?

Thanks,

wenji


Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Alexander I Kulyavtsev
Same question here.

6 TB / 65 TB is about 11%. In our case about the same fraction was "missing."

My speculation was that it may happen if, at some point between zpool and
Linux, a value reported in TB is interpreted as TiB and then converted to TB,
or an unneeded MB-to-MiB conversion is done twice, etc.

Here are my numbers:
We have 12 * 4 TB drives per pool, which is 48 TB (decimal).
The zpool was created as raidz2 10+2.
zpool reports 43.5T.
The pool size should be 48T = 4T * 12, or 40T = 4T * 10 (depending on whether
zpool shows the space before or after RAID parity).
According to the Oracle ZFS documentation, "zpool list" returns the total space
without overheads, thus 48 TB should be reported by zpool instead of 43.5 TB.

In my case, it looked like a conversion/interpretation error between TB and
TiB:

48*1000*1000*1000*1000/1024/1024/1024/1024 = 43.65574568510055541992
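
The same check with bc, including the data-only capacity of a 10+2 raidz2 (a
sketch):

  echo "12 * 4 * 10^12 / 1024^4" | bc -l   # raw 48 TB  -> ~43.66 TiB (what zpool list shows)
  echo "10 * 4 * 10^12 / 1024^4" | bc -l   # data 40 TB -> ~36.38 TiB (before ZFS overhead)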


At disk level:

~/sas2ircu 0 display

Device is a Hard disk
  Enclosure # : 2
  Slot #  : 12
  SAS Address : 5003048-0-015a-a918
  State   : Ready (RDY)
  Size (in MB)/(in sectors)   : 3815447/7814037167
  Manufacturer: ATA 
  Model Number: HGST HUS724040AL
  Firmware Revision   : AA70
  Serial No   : PN2334PBJPW14T
  GUID: 5000cca23de6204b
  Protocol: SATA
  Drive Type  : SATA_HDD

One disk size is about 4 TB (decimal):

3815447*1024*1024 = 4000786153472
7814037167*512  = 4000787029504
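
This can be checked straight from the kernel as well (a sketch; sdq per the
by-path mapping below):

  blockdev --getsize64 /dev/sdq   # whole-disk size in bytes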

The vdev presents the whole disk to the zpool. There is some overhead; some
space is left in sdq9.
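
To see where that space goes (a sketch; the partition layout is the standard
one ZFS creates when given a whole disk):

  # partition 1 is the data vdev; partition 9 is a small reserved partition
  # (8 MiB on ZFS on Linux)
  lsblk -b /dev/sdq   # byte-exact sizes for sdq, sdq1 and sdq9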

[root@lfs1 scripts]# head -4 /etc/zfs/vdev_id.conf
alias s0  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90c-lun-0
alias s1  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90d-lun-0
alias s2  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90e-lun-0
alias s3  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa90f-lun-0
...
alias s12  /dev/disk/by-path/pci-:03:00.0-sas-0x50030480015aa918-lun-0
...

[root@lfs1 scripts]# ls -l  /dev/disk/by-path/
...
lrwxrwxrwx 1 root root  9 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0 -> ../../sdq
lrwxrwxrwx 1 root root 10 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0-part1 -> ../../sdq1
lrwxrwxrwx 1 root root 10 Jul 23 16:27 
pci-:03:00.0-sas-0x50030480015aa918-lun-0-part9 -> ../../sdq9

Pool report:

[root@lfs1 scripts]# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpla-      43.5T  10.9T  32.6T         -   16%    24%  1.00x  ONLINE  -
zpla-0001  43.5T  11.0T  32.5T         -   17%    25%  1.00x  ONLINE  -
zpla-0002  43.5T  10.8T  32.7T         -   17%    24%  1.00x  ONLINE  -
[root@lfs1 scripts]# 

[root@lfs1 ~]# zpool list -v zpla-0001
NAME        SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpla-0001  43.5T  11.0T  32.5T         -   17%    25%  1.00x  ONLINE  -
  raidz2   43.5T  11.0T  32.5T         -   17%    25%
s12  -  -  - -  -  -
s13  -  -  - -  -  -
s14  -  -  - -  -  -
s15  -  -  - -  -  -
s16  -  -  - -  -  -
s17  -  -  - -  -  -
s18  -  -  - -  -  -
s19  -  -  - -  -  -
s20  -  -  - -  -  -
s21  -  -  - -  -  -
s22  -  -  - -  -  -
s23  -  -  - -  -  -
[root@lfs1 ~]# 

[root@lfs1 ~]# zpool get all zpla-0001
NAME   PROPERTYVALUE   SOURCE
zpla-0001  size43.5T   -
zpla-0001  capacity25% -
zpla-0001  altroot -   default
zpla-0001  health  ONLINE  -
zpla-0001  guid547290297520142 default
zpla-0001  version -   default
zpla-0001  bootfs  -   default
zpla-0001  delegation  on  default
zpla-0001  autoreplace off default
zpla-0001  cachefile   -   default
zpla-0001  failmodewaitdefault
zpla-0001  listsnapshots   off default
zpla-0001  autoexpand  off default
zpla-0001  dedupditto  0   default
zpla-0001  dedupratio  1.0

Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Wenji Wu
Alex,

Thanks,

I understand the Lustre stripe concept. I am just trying to understand whether
Lustre supports FIEMAP between the Lustre client and server.

wenji

On 8/24/15, 2:34 PM, "Alexander I Kulyavtsev"  wrote:

>Wenji,
>you may take a look at
>   1.3.  Lustre File System Storage and I/O
>and 
>   1.3.1.  Lustre File System and Striping
>Commands 
>   lfs getstripe
>   lfs setstripe
>
>Lustre Network Request Scheduler
>   
> https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.nrstuning
>
>Lustre multirail:
>   
> http://cdn.opensfs.org/wp-content/uploads/2013/04/LUG13-Presentation-ihara-final-rev4.pdf
> http://cdn.opensfs.org/wp-content/uploads/2012/12/900-930_Diego_Moreno_LUG_Bull_2011.pdf
>These are actually server-side. IIRC you are looking at the client side.
>
>Best regards, Alex.
>
>On Aug 24, 2015, at 11:06 AM, Drokin, Oleg  wrote:
>
>> Hello!
>> 
>> On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
>> 
>>> Hello, everybody,
>>> 
>>> I understand that ext2/3/4 support FIEMAP to get file extent mapping.
>>> 
>>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client
>>> get FIEMAP-like information on a Lustre file system?
>> 
>> Yes, Lustre does support fiemap.
>> You can look at the patched ext4progs; the filefrag included there works on
>> top of Lustre too, as an example.
>> 
>> lustre/tests/checkfiemap.c in the lustre source tree is another example
>> user of this functionality that you can consult.
>> 
>> Bye,
>>Oleg



Re: [lustre-discuss] free space on ldiskfs vs. zfs

2015-08-24 Thread Christopher J. Morrone
If you provide the "zpool list -v" output it might give us a little 
clearer view of what you have going on.


Chris

On 08/19/2015 06:18 AM, Götz Waschk wrote:

Dear Lustre experts,

I have configured two different Lustre instances, both using Lustre
2.5.3, one with ldiskfs on hardware RAID-6 and one using ZFS and
RAID-Z2, using the same type of hardware. I was wondering why I have
24 TB less space available, when I should have the same amount of parity
used:

  # lfs df
UUID                  1K-blocks        Used    Available Use% Mounted on
fs19-MDT_UUID          50322916      472696     46494784   1% /testlustre/fs19[MDT:0]
fs19-OST_UUID       51923288320       12672  51923273600   0% /testlustre/fs19[OST:0]
fs19-OST0001_UUID   51923288320       12672  51923273600   0% /testlustre/fs19[OST:1]
fs19-OST0002_UUID   51923288320       12672  51923273600   0% /testlustre/fs19[OST:2]
fs19-OST0003_UUID   51923288320       12672  51923273600   0% /testlustre/fs19[OST:3]
filesystem summary: 207693153280       50688 207693094400   0% /testlustre/fs19
UUID                  1K-blocks        Used    Available Use% Mounted on
fs18-MDT_UUID          47177700      482152     43550028   1% /lustre/fs18[MDT:0]
fs18-OST_UUID       58387106064  6014088200  49452733560  11% /lustre/fs18[OST:0]
fs18-OST0001_UUID   58387106064  5919753028  49547068928  11% /lustre/fs18[OST:1]
fs18-OST0002_UUID   58387106064  5944542316  49522279640  11% /lustre/fs18[OST:2]
fs18-OST0003_UUID   58387106064  5906712004  49560109952  11% /lustre/fs18[OST:3]
filesystem summary: 233548424256 23785095548 198082192080  11% /lustre/fs18

fs18 is using ldiskfs, while fs19 is ZFS:
# zpool list
NAME          SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
lustre-ost1    65T  18,1M  65,0T    0%  1.00x  ONLINE  -
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
lustre-ost1   13,6M  48,7T   311K  /lustre-ost1
lustre-ost1/ost1  12,4M  48,7T  12,4M  /lustre-ost1/ost1


Any idea where my 6 TB per OST went?

Regards, Götz Waschk





Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Drokin, Oleg
Hello!

   fiemap is not implemented on ZFS backends at this time.

Bye,
Oleg
On Aug 24, 2015, at 3:22 PM, Alexander I Kulyavtsev wrote:

> Hi Oleg,
> does ZFS-based Lustre support FIEMAP?
> 
> We have Lustre 2.5 with ZFS installed. Otherwise we will need to set up a
> separate test system with ldiskfs.
> 
> But: please review my separate reply; I think this can be addressed through
> multirail, NRS, and file striping.
> 
> Best regards, Alex.
> 
> On Aug 24, 2015, at 11:06 AM, Drokin, Oleg  wrote:
> 
>> Hello!
>> 
>> On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
>> 
>>> Hello, everybody,
>>> 
>>> I understand that ext2/3/4 support FIEMAP to get file extent mapping. 
>>> 
>>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
>>> FIEMAP-like information on a Lustre file system?
>> 
>> Yes, Lustre does support fiemap.
>> You can look at the patched ext4progs; the filefrag included there works on
>> top of Lustre too, as an example.
>> 
>> lustre/tests/checkfiemap.c in the lustre source tree is another example user 
>> of this functionality that you can consult.
>> 
>> Bye,
>>   Oleg



Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Alexander I Kulyavtsev
Wenji,
you may take a look at 
1.3.  Lustre File System Storage and I/O 
and 
1.3.1.  Lustre File System and Striping
Commands 
lfs getstripe
lfs setstripe

Lustre Network Request Scheduler

https://build.hpdd.intel.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.nrstuning

Lustre multirail:

http://cdn.opensfs.org/wp-content/uploads/2013/04/LUG13-Presentation-ihara-final-rev4.pdf

http://cdn.opensfs.org/wp-content/uploads/2012/12/900-930_Diego_Moreno_LUG_Bull_2011.pdf
These are actually server-side. IIRC you are looking at the client side.
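
As a quick illustration of the striping commands (a sketch; the mount point,
stripe count, and file names are made up):

  # stripe new files in this directory across 4 OSTs with a 1 MiB stripe size
  lfs setstripe -c 4 -S 1M /mnt/lustre/mydir
  # show the layout (stripe count, stripe size, OST objects) of a file in it
  lfs getstripe /mnt/lustre/mydir/somefile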

Best regards, Alex.

On Aug 24, 2015, at 11:06 AM, Drokin, Oleg  wrote:

> Hello!
> 
> On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
> 
>> Hello, everybody,
>> 
>> I understand that ext2/3/4 support FIEMAP to get file extent mapping. 
>> 
>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
>> FIEMAP-like information on a Lustre file system?
> 
> Yes, Lustre does support fiemap.
> You can look at the patched ext4progs; the filefrag included there works on
> top of Lustre too, as an example.
> 
> lustre/tests/checkfiemap.c in the lustre source tree is another example user 
> of this functionality that you can consult.
> 
> Bye,
>Oleg



[lustre-discuss] lightweight RPC monitoring for portal RPC daemon?

2015-08-24 Thread Wahl, Edward
Anyone have a handy way to do some lightweight RPC monitoring for the portal
RPC daemon (ptlrpcd)? I'm hoping someone has rigged something up for debugging
before and can share. We're seeing some odd evictions / 'stuck CPU until
crash' issues that we'd like to take a closer look at. This is Lustre 2.5.x.

I'm hoping to find something that stands out as to what causes the
reconnects/evictions until we can recreate it. I expect to find a user doing
something degenerate but low-bandwidth that hits a new LBUG.
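
For concreteness, the kind of lightweight tracing I have in mind (a sketch
against the stock lctl interface; the buffer size and log path are arbitrary):

  # add RPC tracing to the Lustre debug mask and enlarge the debug buffer
  lctl set_param debug=+rpctrace
  lctl set_param debug_mb=256
  # ... reproduce or wait for an eviction ...
  # dump the kernel debug buffer to a file for offline inspection
  lctl dk /tmp/lustre-rpc-debug.log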

Ed Wahl
OSC



Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Alexander I Kulyavtsev
Hi Oleg,
does ZFS-based Lustre support FIEMAP?

We have Lustre 2.5 with ZFS installed. Otherwise we will need to set up a
separate test system with ldiskfs.

But: please review my separate reply; I think this can be addressed through
multirail, NRS, and file striping.

Best regards, Alex.

On Aug 24, 2015, at 11:06 AM, Drokin, Oleg  wrote:

> Hello!
> 
> On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
> 
>> Hello, everybody,
>> 
>> I understand that ext2/3/4 support FIEMAP to get file extent mapping. 
>> 
>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
>> FIEMAP-like information on a Lustre file system?
> 
> Yes, Lustre does support fiemap.
> You can look at the patched ext4progs; the filefrag included there works on
> top of Lustre too, as an example.
> 
> lustre/tests/checkfiemap.c in the lustre source tree is another example user 
> of this functionality that you can consult.
> 
> Bye,
>Oleg



Re: [lustre-discuss] Lustre on mac os 10.10 and windows 7

2015-08-24 Thread E.S. Rosenberg
What's wrong with plain ext4?
Or XFS, Btrfs, etc.?

If you want good support on Windows you're stuck with Windows file systems
((ex)FAT, NTFS). There are tools to mount extX file systems on Windows, but
they aren't all that stable as far as I know (though I haven't looked at that
for years, so things may have changed).

There are decent OSS implementations of FAT/NTFS, but with them you have the
problem that you can't preserve Linux file properties (owner, etc.); you could
get around that by creating tar archives, as sketched below.
In the end, if compatibility with Windows is needed and you aren't interested
in the hassle of getting a non-MS file system supported, you are limited to
their file systems.
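
A minimal sketch of that tar workaround (the paths are illustrative):

  # pack a tree, keeping permissions and numeric owner ids, into one archive
  # that a FAT/NTFS disk can hold
  tar --numeric-owner -cpf /mnt/windisk/backup.tar /home/me/data
  # later, restore on a Linux machine
  tar --numeric-owner -xpf /mnt/windisk/backup.tar -C /restore/target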

Regards,
Eli

On Mon, Aug 24, 2015 at 8:33 PM, Michael Parchet 
wrote:

> Hello,
>
> Thanks for your answer.
> I understand that Lustre was not the right choice for my objective.
> I'm looking for a free and open source file system that would allow me to
> back up and read data over the long term, ideally compatible with Linux,
> Mac OS 10.10, and Windows 7.
>
> What would you suggest to fulfill this objective ?
>
> Thanks for your support
>
> Best regards
>
> mparchet
>
>
>
> On 16. 08. 15 18:46, E.S. Rosenberg wrote:
>
> I am puzzled: why would you format an external hard disk as Lustre? You
> realize Lustre is not really aimed at being a single-disk file system?
>
> Lustre is supposed to be spread over a set of disks (and servers) where
> you have disks/servers specifically in charge of storing objects and
> disks/servers in charge of maintaining the metadata which actually tells
> you what those objects are...
>
> Even if you would get fuse lustre support running you would be using it to
> mount a filesystem being served by one or more remote hosts and not to
> mount a single external disk, the fuse driver would be a client driver and
> not the whole server package which is capable of understanding the disk
> provided it has the metadata.
>
> (Theoretically you should be able to mount the disk as ext4 or ZFS
> depending on the backend filesystem you chose, but even then, lacking the
> metadata, objects are just objects without descriptive names and may lack
> parts that were striped to other disks, afaik.)
>
> What is the goal you are trying to accomplish (beyond mounting your
> external disk)?
>
> Regards,
> Eli
>
> On Fri, Aug 14, 2015 at 8:19 PM, Michael Parchet 
> wrote:
>
>> Hello,
>>
>> I have downloaded FUSE for OS X, but OS X 10.10 cannot read the WD Elements
>> portable drive that I formatted with the Lustre file system.
>>
>> Could you help me please ?
>>
>> Thanks for your support
>>
>> Best regards
>>
>> mparchet
>>
>>
>> On 05/08/15 15:41, Stu Midgley wrote:
>>
>>> In some sense there has already been a port of Lustre to Mac OS X and
>>> Windows.  I ported it using FUSE about 10 years ago, using liblustre.
>>> It was about a 10-minute exercise...
>>>
>>> I have absolutely no idea where liblustre is at now...
>>>
>>> On Wed, Aug 5, 2015 at 8:36 PM, Michaël Parchet 
>>> wrote:
>>>
 Hello,

 Is it technically possible to develop an implementation of Lustre for Mac
 and Windows?

 Thanks for your answer

 Best regards

 mparchet

 On 05. 08. 15 14:17, Ben Evans wrote:

 Lustre is a linux-only project.  Only linux can act as a client and a
> server for Lustre.
>
> -Ben Evans
>
> -Original Message-
> From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org]
> On
> Behalf Of Michaël Parchet
> Sent: Wednesday, August 05, 2015 6:16 AM
> To: lustre-discuss@lists.lustre.org
> Subject: [lustre-discuss] Lustre on mac os 10.10 and windows 7
>
> Hello,
>
> I have formatted a Western Digital Elements hard drive with the Lustre
> format. The drive has very good performance, but Mac OS 10.10 couldn't
> read it.
>
> When I look at the partition type with GParted I get ext4, but even after
> installing osxfuse my Lustre drive isn't recognized by Mac OS X.
>
> Why ?
>
> Could windows 7 read the lustre drive ?
>
> Thanks for your support
>
> Best regards
>
> mparchet



Re: [lustre-discuss] Lustre on mac os 10.10 and windows 7

2015-08-24 Thread Michael Parchet
Hello,

Thanks for your answer. 
I understand that Lustre was not the right choice for my objective.
I'm looking for a free and open source file system that would allow me to
back up and read data over the long term, ideally compatible with Linux,
Mac OS 10.10, and Windows 7.

What would you suggest to fulfill this objective ?

Thanks for your support

Best regards

mparchet



> On 16. 08. 15 18:46, E.S. Rosenberg wrote:
> I am puzzled: why would you format an external hard disk as Lustre? You
> realize Lustre is not really aimed at being a single-disk file system?
> 
> Lustre is supposed to be spread over a set of disks (and servers) where you 
> have disks/servers specifically in charge of storing objects and 
> disks/servers in charge of maintaining the metadata which actually tells you 
> what those objects are...
> 
> Even if you would get fuse lustre support running you would be using it to 
> mount a filesystem being served by one or more remote hosts and not to mount 
> a single external disk, the fuse driver would be a client driver and not the 
> whole server package which is capable of understanding the disk provided it 
> has the metadata.
> 
> (Theoretically you should be able to mount the disk as ext4 or ZFS depending
> on the backend filesystem you chose, but even then, lacking the metadata,
> objects are just objects without descriptive names and may lack parts that
> were striped to other disks, afaik.)
> 
> What is the goal you are trying to accomplish (beyond mounting your external 
> disk)?
> 
> Regards,
> Eli
> 
>> On Fri, Aug 14, 2015 at 8:19 PM, Michael Parchet  wrote:
>> Hello,
>> 
>> I have downloaded FUSE for OS X, but OS X 10.10 cannot read the WD Elements
>> portable drive that I formatted with the Lustre file system.
>> 
>> Could you help me please ?
>> 
>> Thanks for your support
>> 
>> Best regards
>> 
>> mparchet
>> 
>> 
>> On 05/08/15 15:41, Stu Midgley wrote:
>> 
>>> In some sense there has already been a port of Lustre to Mac OS X and
>>> Windows.  I ported it using FUSE about 10 years ago, using liblustre.
>>> It was about a 10-minute exercise...
>>> 
>>> I have absolutely no idea where liblustre is at now...
>>> 
 On Wed, Aug 5, 2015 at 8:36 PM, Michaël Parchet  
 wrote:
 Hello,
 
 Is it technically possible to develop an implementation of Lustre for Mac
 and Windows?
 
 Thanks for your answer
 
 Best regards
 
 mparchet
 
 On 05. 08. 15 14:17, Ben Evans wrote:
 
> Lustre is a linux-only project.  Only linux can act as a client and a
> server for Lustre.
> 
> -Ben Evans
> 
> -Original Message-
> From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On
> Behalf Of Michaël Parchet
> Sent: Wednesday, August 05, 2015 6:16 AM
> To: lustre-discuss@lists.lustre.org
> Subject: [lustre-discuss] Lustre on mac os 10.10 and windows 7
> 
> Hello,
> 
> I have formatted a Western Digital Elements hard drive with the Lustre
> format. The drive has very good performance, but Mac OS 10.10 couldn't
> read it.
> 
> When I look at the partition type with GParted I get ext4, but even after
> installing osxfuse my Lustre drive isn't recognized by Mac OS X.
> 
> Why ?
> 
> Could windows 7 read the lustre drive ?
> 
> Thanks for your support
> 
> Best regards
> 
> mparchet


Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Patrick Farrell
Wenji,

You should definitely read up on the ticket linked by Frank Zago; it
highlights a notable bug in Lustre's FIEMAP implementation. Whether or not
it affects your intended use, you'll have to decide.

- Patrick

From: lustre-discuss [lustre-discuss-boun...@lists.lustre.org] on behalf of 
Wenji Wu [we...@fnal.gov]
Sent: Monday, August 24, 2015 12:32 PM
To: Drokin, Oleg
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] FIEMAP support for Lustre

Oleg,

Thanks a lot.

Which version of Lustre should I take a look at?

Thanks

wenji

On 8/24/15, 11:06 AM, "Drokin, Oleg"  wrote:

>Hello!
>
>On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
>
>> Hello, everybody,
>>
>> I understand that ext2/3/4 support FIEMAP to get file extent mapping.
>>
>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client
>> get FIEMAP-like information on a Lustre file system?
>
>Yes, Lustre does support fiemap.
>You can look at the patched ext4progs; the filefrag included there works on
>top of Lustre too, as an example.
>
>lustre/tests/checkfiemap.c in the lustre source tree is another example
>user of this functionality that you can consult.
>
>Bye,
>Oleg



Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Wenji Wu
Oleg,

Thanks a lot.

Which version of Lustre should I take a look at?

Thanks 

wenji

On 8/24/15, 11:06 AM, "Drokin, Oleg"  wrote:

>Hello!
>
>On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:
>
>> Hello, everybody,
>> 
>> I understand that ext2/3/4 support FIEMAP to get file extent mapping.
>> 
>> Does Lustre support a feature similar to FIEMAP? Can a Lustre client
>> get FIEMAP-like information on a Lustre file system?
>
>Yes, Lustre does support fiemap.
>You can look at the patched ext4progs; the filefrag included there works on
>top of Lustre too, as an example.
>
>lustre/tests/checkfiemap.c in the lustre source tree is another example
>user of this functionality that you can consult.
>
>Bye,
>Oleg



Re: [lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Drokin, Oleg
Hello!

On Aug 24, 2015, at 11:57 AM, Wenji Wu wrote:

> Hello, everybody,
> 
> I understand that ext2/3/4 support FIEMAP to get file extent mapping. 
> 
> Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
> FIEMAP-like information on a Lustre file system?

Yes, Lustre does support fiemap.
You can look at the patched ext4progs; the filefrag included there works on
top of Lustre too, as an example.

lustre/tests/checkfiemap.c in the lustre source tree is another example user of 
this functionality that you can consult.

Bye,
Oleg


[lustre-discuss] FIEMAP support for Lustre

2015-08-24 Thread Wenji Wu
Hello, everybody,

I understand that ext2/3/4 support FIEMAP to get file extent mapping.

Does Lustre support a feature similar to FIEMAP? Can a Lustre client get
FIEMAP-like information on a Lustre file system?

Thanks,

wenji