Re: [lustre-discuss] Mount lustre client with MDS/MGS backup

2016-09-22 Thread Pardo Diaz, Alfonso
Machines are running lustre version 2.8.0 (clients and servers)
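
In case it helps with diagnosis, the running version can be confirmed on each 
node; a quick sketch (the package query assumes an RPM-based install):

lctl get_param version          # reports the loaded Lustre version
rpm -qa | grep -i lustre        # lists the installed Lustre packages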


> On Sep 20, 2016, at 17:32, Mohr Jr, Richard Frank (Rick Mohr) 
> <rm...@utk.edu> wrote:
> 
> 
>> On Sep 19, 2016, at 2:40 AM, Pardo Diaz, Alfonso <alfonso.pa...@ciemat.es> 
>> wrote:
>> 
>> I am still having the same problem on my system. New clients get stuck on 
>> the primary MDS, which is down, and do not fall back to the backup (service 
>> MDS) when they try to connect for the first time.
>> As I said in previous messages, a client that connected while the primary 
>> was up can keep using the service MDS without problems.
>> 
>> Any suggestions?
> 
> Unfortunately, no.  Did you ever mention which Lustre version you are 
> running?  I don’t recall seeing that.
> 
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
> http://www.nics.tennessee.edu
> 

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Mount lustre client with MDS/MGS backup

2016-09-19 Thread Pardo Diaz, Alfonso
Hello Richard,

I am still having the same problem on my system. New clients get stuck on the 
primary MDS, which is down, and do not fall back to the backup (service MDS) 
when they try to connect for the first time.
As I said in previous messages, a client that connected while the primary was 
up can keep using the service MDS without problems.

Any suggestions?
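
For what it's worth, one quick check when a fresh mount hangs on the primary is 
whether this client can even reach the backup MGS/MDS NID over LNet; a minimal 
sketch, using the NIDs from earlier in this thread:

lctl list_nids                  # NIDs configured on this client
lctl ping 192.168.8.9@o2ib      # primary MDS/MGS (expected to fail while it is down)
lctl ping 192.168.8.10@o2ib     # backup MDS/MGS (should answer if LNet is healthy)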


> On Sep 15, 2016, at 6:46, Mohr Jr, Richard Frank (Rick Mohr) 
> <rm...@utk.edu> wrote:
> 
> Alfonso,
> 
> Are you still having problems with this, or were you able to get it resolved?
> 
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
> http://www.nics.tennessee.edu
> 
> 
>> On Sep 1, 2016, at 12:43 PM, Pardo Diaz, Alfonso <alfonso.pa...@ciemat.es> 
>> wrote:
>> 
>> Hi!
>> 
>> I am using a combined MDS/MGS. This is my config:
>> 
>> Checking for existing Lustre data: found
>> Reading CONFIGS/mountdata
>> 
>>  Read previous values:
>> Target: fs-MDT
>> Index:  0
>> Lustre FS:  fs
>> Mount type: ldiskfs
>> Flags:  0x1005
>> (MDT MGS no_primnode )
>> Persistent mount opts: user_xattr,errors=remount-ro
>> Parameters:  failover.node=192.168.8.9@o2ib:192.168.8.10@o2ib 
>> mdt.identity_upcall=NONE
>> 
>> 
>> 
>> 
>> Alfonso Pardo Diaz
>> System Administrator / Researcher
>> c/ Sola nº 1; 10200 Trujillo, ESPAÑA
>> Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
>> 
>> 
>> 
>> 
>> From: Ben Evans [bev...@cray.com]
>> Sent: Thursday, September 1, 2016 15:25
>> To: Pardo Diaz, Alfonso; Mohr Jr, Richard Frank (Rick Mohr)
>> Cc: lustre-discuss@lists.lustre.org
>> Subject: Re: [lustre-discuss] Mount lustre client with MDS/MGS backup
>> 
>> Where is the MGS mounted, and how is it configured?
>> 
>> -Ben Evans
>> 
>> On 9/1/16, 2:16 AM, "lustre-discuss on behalf of Pardo Diaz, Alfonso"
>> <lustre-discuss-boun...@lists.lustre.org on behalf of
>> alfonso.pa...@ciemat.es> wrote:
>> 
>>> Oops, damn copy and paste!
>>> 
>>> I am pasting the correct output, with the same result. If the MDT is mounted
>>> on the backup MDS (192.168.8.10), already-mounted clients work OK, but new
>>> clients throw the following error:
>>> 
>>> mount -v -t lustre 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs /mnt/fs
>>> arg[0] = /sbin/mount.lustre
>>> arg[1] = -v
>>> arg[2] = -o
>>> arg[3] = rw
>>> arg[4] = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>>> arg[5] = /mnt/fs
>>> source = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>>> (192.168.8.9@o2ib:192.168.8.10@o2ib:/fs), target = /mnt/fs
>>> options = rw
>>> mounting device 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs,
>>> flags=0x100 options=device=192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>>> mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs
>>> failed: Input/output error retries left: 0
>>> mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs
>>> failed: Input/output error
>>> Is the MGS running?
>>> 
>>> 
>>> 
>>> 
>>>> On Aug 31, 2016, at 15:32, Mohr Jr, Richard Frank (Rick Mohr)
>>>> <rm...@utk.edu> wrote:
>>>> 
>>>> 
>>>>> On Aug 31, 2016, at 8:12 AM, Pardo Diaz, Alfonso
>>>>> <alfonso.pa...@ciemat.es> wrote:
>>>>> 
>>>>> I mount my clients: mount -t lustre mds1@o2ib:mds2@o2ib:/fs /mnt/fs
>>>>> 
>>>>> 1) When both MDSs are up I can mount without problems
>>>>> 2) If MDS1 is down and my clients already have Lustre mounted, they use
>>>>> MDS2 without problems
>>>>> 3) If MDS1 is down and I try to mount a new client, it can't mount
>>>>> Lustre and fails with the following error:
>>>>> 
>>>>> 
>>>> 
>>>>> arg[4] = 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs
>>>> 
>>>> The client is resolving both hostnames (mds1 and mds2) to the same IP
>>>> address.  I am guessing that this corresponds to mds1, so when it is
>>>> down, there is no second host for the client to try.  Try specifying IP
>>>> addresses instead of hostnames and see if that makes a difference.
>>>> 
>>>> --
>>>> Rick Mohr
>>>> Senior HPC System Administrator
>>>> National Institute for Computational Sciences
>>>> http://www.nics.tennessee.edu
>>>> 
>>> 
>>> ___
>>> lustre-discuss mailing list
>>> lustre-discuss@lists.lustre.org
>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
> 
> 

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Mount lustre client with MDS/MGS backup

2016-09-01 Thread Pardo Diaz, Alfonso
Hi!

I am using a combined MDS/MGS. This is my config:

Checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target: fs-MDT
Index:  0
Lustre FS:  fs
Mount type: ldiskfs
Flags:  0x1005
  (MDT MGS no_primnode )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters:  failover.node=192.168.8.9@o2ib:192.168.8.10@o2ib 
mdt.identity_upcall=NONE
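
For anyone reproducing this kind of dump, it matches what a read-only 
inspection of the target prints; a sketch (the device path is hypothetical):

tunefs.lustre --dryrun /dev/mapper/mdt_dev    # prints "Read previous values" without changing anything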




Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37




From: Ben Evans [bev...@cray.com]
Sent: Thursday, September 1, 2016 15:25
To: Pardo Diaz, Alfonso; Mohr Jr, Richard Frank (Rick Mohr)
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Mount lustre client with MDS/MGS backup

Where is the MGS mounted, and how is it configured?

-Ben Evans

On 9/1/16, 2:16 AM, "lustre-discuss on behalf of Pardo Diaz, Alfonso"
<lustre-discuss-boun...@lists.lustre.org on behalf of
alfonso.pa...@ciemat.es> wrote:

>Oops, damn copy and paste!
>
>I am pasting the correct output, with the same result. If the MDT is mounted
>on the backup MDS (192.168.8.10), already-mounted clients work OK, but new
>clients throw the following error:
>
>mount -v -t lustre 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs /mnt/fs
>arg[0] = /sbin/mount.lustre
>arg[1] = -v
>arg[2] = -o
>arg[3] = rw
>arg[4] = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>arg[5] = /mnt/fs
>source = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>(192.168.8.9@o2ib:192.168.8.10@o2ib:/fs), target = /mnt/fs
>options = rw
>mounting device 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs,
>flags=0x100 options=device=192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
>mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs
>failed: Input/output error retries left: 0
>mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs
>failed: Input/output error
>Is the MGS running?
>
>
>
>
>> On Aug 31, 2016, at 15:32, Mohr Jr, Richard Frank (Rick Mohr)
>><rm...@utk.edu> wrote:
>>
>>
>>> On Aug 31, 2016, at 8:12 AM, Pardo Diaz, Alfonso
>>><alfonso.pa...@ciemat.es> wrote:
>>>
>>> I mount my clients: mount -t lustre mds1@o2ib:mds2@o2ib:/fs /mnt/fs
>>>
>>> 1) When both MDSs are up I can mount without problems
>>> 2) If MDS1 is down and my clients already have Lustre mounted, they use
>>>MDS2 without problems
>>> 3) If MDS1 is down and I try to mount a new client, it can't mount
>>>Lustre and fails with the following error:
>>>
>>>
>> 
>>> arg[4] = 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs
>>
>> The client is resolving both hostnames (mds1 and mds2) to the same IP
>>address.  I am guessing that this corresponds to mds1, so when it is
>>down, there is no second host for the client to try.  Try specifying IP
>>addresses instead of hostnames and see if that makes a difference.
>>
>> --
>> Rick Mohr
>> Senior HPC System Administrator
>> National Institute for Computational Sciences
>> http://www.nics.tennessee.edu
>>
>
>___
>lustre-discuss mailing list
>lustre-discuss@lists.lustre.org
>http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Mount lustre client with MDS/MGS backup

2016-09-01 Thread Pardo Diaz, Alfonso
Oops, damn copy and paste!

I am pasting the correct output, with the same result. If the MDT is mounted on 
the backup MDS (192.168.8.10), already-mounted clients work OK, but new clients 
throw the following error:

mount -v -t lustre 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs /mnt/fs
arg[0] = /sbin/mount.lustre
arg[1] = -v
arg[2] = -o
arg[3] = rw
arg[4] = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
arg[5] = /mnt/fs
source = 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs 
(192.168.8.9@o2ib:192.168.8.10@o2ib:/fs), target = /mnt/fs
options = rw
mounting device 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs, 
flags=0x100 options=device=192.168.8.9@o2ib:192.168.8.10@o2ib:/fs
mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs failed: 
Input/output error retries left: 0
mount.lustre: mount 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs at /mnt/fs failed: 
Input/output error
Is the MGS running?
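
One way to narrow this down (a sketch, not a fix) is to point a new client at 
the backup NID alone, so the mount cannot stall on the primary:

mount -v -t lustre 192.168.8.10@o2ib:/fs /mnt/fs

If that succeeds, the backup MGS is serving the configuration and the problem 
is likely in how the first NID is retried.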




> On Aug 31, 2016, at 15:32, Mohr Jr, Richard Frank (Rick Mohr) 
> <rm...@utk.edu> wrote:
> 
> 
>> On Aug 31, 2016, at 8:12 AM, Pardo Diaz, Alfonso <alfonso.pa...@ciemat.es> 
>> wrote:
>> 
>> I mount my clients: mount -t lustre mds1@o2ib:mds2@o2ib:/fs /mnt/fs
>> 
>> 1) When both MDSs are up I can mount without problems
>> 2) If MDS1 is down and my clients already have Lustre mounted, they use MDS2 
>> without problems
>> 3) If MDS1 is down and I try to mount a new client, it can't mount 
>> Lustre and fails with the following error:
>> 
>> 
> 
>> arg[4] = 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs
> 
> The client is resolving both hostnames (mds1 and mds2) to the same IP 
> address.  I am guessing that this corresponds to mds1, so when it is down, 
> there is no second host for the client to try.  Try specifying IP addresses 
> instead of hostnames and see if that makes a difference.
> 
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
> http://www.nics.tennessee.edu
> 

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Mount lustre client with MDS/MGS backup

2016-08-31 Thread Pardo Diaz, Alfonso
Hello,


I have trouble mounting my Lustre clients with an MDS/MGS high-availability 
setup. This is my scenario:

MDS1: primary MDS/MGS
MDS2: backup MDS/MGS
OSS1
OSS2
...
OSS8

I mount my clients: mount -t lustre mds1@o2ib:mds2@o2ib:/fs /mnt/fs

1) When both MDSs are up I can mount without problems
2) If MDS1 is down and my clients already have Lustre mounted, they use MDS2 
without problems
3) If MDS1 is down and I try to mount a new client, it can't mount Lustre 
and fails with the following error:

arg[0] = /sbin/mount.lustre
arg[1] = -v
arg[2] = -o
arg[3] = rw
arg[4] = 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs
arg[5] = /mnt/fs
source = 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs 
(192.168.8.9@o2ib:192.168.8.9@o2ib:/fs), target = /mnt/fs
options = rw
mounting device 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs at /mnt/fs, 
flags=0x100 options=device=192.168.8.9@o2ib:192.168.8.9@o2ib:/fs
mount.lustre: mount 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs at /mnt/fs failed: 
Input/output error retries left: 0
mount.lustre: mount 192.168.8.9@o2ib:192.168.8.9@o2ib:/fs at /mnt/fs failed: 
Input/output error
Is the MGS running?


I suspect that a new client tries to mount using MDS1, but when MDS1 is down it 
does not go on to try the next MDS.



Any suggestions?
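
Since arg[4] above shows the same NID twice, it is worth confirming what mds1 
and mds2 resolve to on this client before mounting; a minimal sketch (hostnames 
as used in the mount command):

getent hosts mds1 mds2    # the two names should resolve to different addresses
mount -t lustre 192.168.8.9@o2ib:192.168.8.10@o2ib:/fs /mnt/fs    # explicit NIDs instead of hostnames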


-
Alfonso Pardo Díaz
Researcher / System Administrator CETA-Ciemat
C/Sola Nº1. 10200 - TRUJILLO, SPAIN
Phone: +34 927659317 (ext. 214) Fax: +34 927323237






___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] Date release of 2.7.1

2016-01-21 Thread Pardo Diaz, Alfonso
Hi,


We want to upgrade our Lustre environment from CentOS 6.5 with Lustre 2.5.2 to 
CentOS 7 with Lustre 2.7. I see in the “Lustre Support Matrix” that servers on 
CentOS 7 are only supported by Lustre 2.7.1.

What is the release date of version 2.7.1?



Regards,




Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37






___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[Lustre-discuss] Lustre error syncing data on lock cancel

2015-02-03 Thread Pardo Diaz, Alfonso
Hello everybody,


I am getting some lustre errors:

RIT - 6 CRIT messages (Last worst: Feb 3 15:14:20 sa-d3-01 kernel: LustreError: 
29982:0:(ost_handler.c:1775:ost_blocking_ast()) Error -2 syncing data on lock 
cancel)


These errors are sporadic, but I get them every day on all OSSs.
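
For reference, Lustre return codes are negative errnos, so the "-2" here is 
ENOENT ("No such file or directory"); one way to confirm the mapping on a 
Linux box (header path may vary by distribution):

grep -w ENOENT /usr/include/asm-generic/errno-base.h
#define ENOENT           2      /* No such file or directory */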


Any suggestions?

Thanks in advance






___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] MDS kernel panic

2014-05-30 Thread Pardo Diaz, Alfonso
Hello, 

Since I updated my Lustre 2.2 to 2.5.1 (CentOS 6.5) and copied the MDT to a new 
SSD disk, I have been getting random kernel panics on the MDS (both HA pairs). 
The last kernel panic produced this log:

4Lustre: MGS: non-config logname received: params
3LustreError: 11-0: cetafs-MDT-lwp-MDT: Communicating with 0@lo, 
operation mds_connect failed with -11.
4Lustre: MGS: non-config logname received: params
4Lustre: cetafs-MDT: Will be in recovery for at least 5:00, or until 102 
clients reconnect
4Lustre: MGS: non-config logname received: params
4Lustre: MGS: non-config logname received: params
4Lustre: Skipped 5 previous similar messages
4Lustre: MGS: non-config logname received: params
4Lustre: Skipped 9 previous similar messages
4Lustre: MGS: non-config logname received: params
4Lustre: Skipped 2 previous similar messages
4Lustre: MGS: non-config logname received: params
4Lustre: Skipped 23 previous similar messages
4Lustre: MGS: non-config logname received: params
4Lustre: Skipped 8 previous similar messages
3LustreError: 3461:0:(ldlm_lib.c:1751:check_for_next_transno()) 
cetafs-MDT: waking for gap in transno, VBR is OFF (skip: 17188113481, ql: 
1, comp: 101, conn: 102, next: 17188113493, last_committed: 17188113480)
6Lustre: cetafs-MDT: Recovery over after 1:13, of 102 clients 102 
recovered and 0 were evicted.
1BUG: unable to handle kernel NULL pointer dereference at (null)
1IP: [a0c3b6a0] __iam_path_lookup+0x70/0x1f0 [osd_ldiskfs]
4PGD 106c0bf067 PUD 106c0be067 PMD 0 
4Oops: 0002 [#1] SMP 
4last sysfs file: /sys/devices/system/cpu/online
4CPU 0 
4Modules linked in: osp(U) mdd(U) lfsck(U) lod(U) mdt(U) mgs(U) mgc(U) 
fsfilt_ldiskfs(U) osd_ldiskfs(U) lquota(U) ldiskfs(U) lustre(U) lov(U) osc(U) 
mdc(U) fid(U) fld(U) ksocklnd(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) 
lvfs(U) sha512_generic sha256_generic crc32c_intel libcfs(U) ipmi_devintf 
cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm 
ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ipv6 dm_multipath microcode 
iTCO_wdt iTCO_vendor_support sb_edac edac_core lpc_ich mfd_core i2c_i801 igb 
i2c_algo_bit i2c_core ptp pps_core ioatdma dca mlx4_ib ib_sa ib_mad ib_core 
mlx4_en mlx4_core sg ext4 jbd2 mbcache sd_mod crc_t10dif ahci isci libsas 
mpt2sas scsi_transport_sas raid_class megaraid_sas dm_mirror dm_region_hash 
dm_log dm_mod [last unloaded: scsi_wait_scan]
4
4Pid: 3362, comm: mdt00_001 Not tainted 2.6.32-431.5.1.el6_lustre.x86_64 #1 
Bull SAS bullx/X9DRH-7TF/7F/iTF/iF
4RIP: 0010:[a0c3b6a0]  [a0c3b6a0] 
__iam_path_lookup+0x70/0x1f0 [osd_ldiskfs]
4RSP: 0018:88085e2754b0  EFLAGS: 00010246
4RAX: fffb RBX: 88085e275600 RCX: 0009c93c
4RDX:  RSI: 0009c93b RDI: 88106bcc32f0
4RBP: 88085e275500 R08:  R09: 
4R10:  R11:  R12: 88085e2755c8
4R13: 5250 R14: 8810569bf308 R15: 0001
4FS:  () GS:88002820() knlGS:
4CS:  0010 DS: 0018 ES: 0018 CR0: 8005003b
4CR2:  CR3: 00106dd9b000 CR4: 000407f0
4DR0:  DR1:  DR2: 
4DR3:  DR6: 0ff0 DR7: 0400
4Process mdt00_001 (pid: 3362, threadinfo 88085e274000, task 
88085f55c080)
4Stack:
4  88085e2755d8 8810569bf288 a00fd2c4
4d 88085e275660 88085e2755c8 88085e2756c8 
4d  88085db2a480 88085e275530 a0c3ba6c
4Call Trace:
4 [a00fd2c4] ? do_get_write_access+0x3b4/0x520 [jbd2]
4 [a0c3ba6c] iam_lookup_lock+0x7c/0xb0 [osd_ldiskfs]
4 [a0c3bad4] __iam_it_get+0x34/0x160 [osd_ldiskfs]
4 [a0c3be1e] iam_it_get+0x2e/0x150 [osd_ldiskfs]
4 [a0c3bf4e] iam_it_get_exact+0xe/0x30 [osd_ldiskfs]
4 [a0c3d47f] iam_insert+0x4f/0xb0 [osd_ldiskfs]
4 [a0c366ea] osd_oi_iam_refresh+0x18a/0x330 [osd_ldiskfs]
4 [a0c3ea40] ? iam_lfix_ipd_alloc+0x0/0x20 [osd_ldiskfs]
4 [a0c386dd] osd_oi_insert+0x11d/0x480 [osd_ldiskfs]
4 [811ae522] ? generic_setxattr+0xa2/0xb0
4 [a0c25021] ? osd_ea_fid_set+0xf1/0x410 [osd_ldiskfs]
4 [a0c33595] osd_object_ea_create+0x5b5/0x700 [osd_ldiskfs]
4 [a0e173bf] lod_object_create+0x13f/0x260 [lod]
4 [a0e756c0] mdd_object_create_internal+0xa0/0x1c0 [mdd]
4 [a0e86428] mdd_create+0xa38/0x1730 [mdd]
4 [a0c2af37] ? osd_xattr_get+0x97/0x2e0 [osd_ldiskfs]
4 [a0e14770] ? lod_index_lookup+0x0/0x30 [lod]
4 [a0d50358] mdo_create+0x18/0x50 [mdt]
4 [a0d5a64c] mdt_reint_open+0x13ac/0x21a0 [mdt]
4 [a065983c] ? lustre_msg_add_version+0x6c/0xc0 [ptlrpc]
4 [a04f4600] ? lu_ucred_key_init+0x160/0x1a0 [obdclass]
4 [a0d431f1] mdt_reint_rec+0x41/0xe0 [mdt]
4 [a0d2add3] mdt_reint_internal+0x4c3/0x780 [mdt]
4 
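
For anyone copying an MDT to new storage at the file level: Lustre keeps its 
metadata (striping, FIDs, link information) in extended attributes, so a copy 
that drops xattrs will leave the MDT in a bad state. A rough sketch along the 
lines of the manual's file-level backup procedure (device and paths are 
hypothetical; check the procedure for your exact Lustre and tar versions):

mount -t ldiskfs /dev/old_mdt_dev /mnt/mdt_src
cd /mnt/mdt_src
getfattr -R -d -m '.*' -e hex -P . > /backup/mdt_ea.bak   # save extended attributes
tar czf /backup/mdt.tgz --sparse .                        # save file data and names
# restore onto the new, freshly formatted MDT:
#   tar xzpf /backup/mdt.tgz -C /mnt/mdt_new
#   setfattr --restore=/backup/mdt_ea.bak    (run from inside /mnt/mdt_new)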

[Lustre-discuss] Remove filesystem directories from MDT

2014-05-26 Thread Pardo Diaz, Alfonso
Hello,


I have a problem in my filesystem. When I browse the filesystem from a client, 
a specific directory has subdirectories that contain the same directories over 
and over; in other words:

LustreFS- dir A - dir B - dir B - dir B - dir B - dir B…

This directory, and its children, has the same obdidx/objid:


[root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw

vm-106-disk-1.raw/vm-106-disk-1.raw
lmm_stripe_count:   1
lmm_stripe_size:1048576
lmm_pattern:1
lmm_layout_gen: 0
lmm_stripe_offset:  19
obdidx   objid     objid      group
    19   7329413   0x6fd685   0
[root@client vm-106-disk-1.raw]# cd vm-106-disk-1.raw/
[root@client vm-106-disk-1.raw]# pwd
/mnt/data/106/GNUSparseFile.2227/vm-106-disk-1.raw/vm-106-disk-1.raw/vm-106-disk-1.raw
[root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw

vm-106-disk-1.raw/vm-106-disk-1.raw
lmm_stripe_count:   1
lmm_stripe_size:1048576
lmm_pattern:1
lmm_layout_gen: 0
lmm_stripe_offset:  19
obdidx   objid     objid      group
    19   7329413   0x6fd685   0

[root@client vm-106-disk-1.raw]# cd vm-106-disk-1.raw/
[root@client vm-106-disk-1.raw]# pwd
/mnt/data/106/GNUSparseFile.2227/vm-106-disk-1.raw/vm-106-disk-1.raw/vm-106-disk-1.raw/vm-106-disk-1.raw
[root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw

vm-106-disk-1.raw/vm-106-disk-1.raw
lmm_stripe_count:   1
lmm_stripe_size:1048576
lmm_pattern:1
lmm_layout_gen: 0
lmm_stripe_offset:  19
obdidx   objid     objid      group
    19   7329413   0x6fd685   0

But when I mount the MDT with ldiskfs I can see the filesystem correctly in the 
MDT's "ROOT" directory.

I would like to remove this looped directory (dir B in the example) from the 
client, but when I try to remove it I get a kernel panic on the MDS/MGS.

Is it a good idea to remove the subdirectory directly on the MDT (mounted as 
ldiskfs) under the "ROOT" directory, or will I end up with orphan objects on 
the OSTs if I remove this directory on the MDT?
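
One way to check whether those nested entries really are the same object (and 
not just identically named) is to compare their FIDs from a client; a sketch 
using the paths from the example above:

lfs path2fid /mnt/data/106/GNUSparseFile.2227/vm-106-disk-1.raw/vm-106-disk-1.raw
lfs path2fid /mnt/data/106/GNUSparseFile.2227/vm-106-disk-1.raw/vm-106-disk-1.raw/vm-106-disk-1.raw
# identical FIDs would mean both directory entries point at the same MDT object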



Thanks!!!




Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37







___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] [HPDD-discuss] Same performance Infiniband and Ethernet

2014-05-21 Thread Pardo Diaz, Alfonso
 ost on 
 different servers.  You can increase/decrease the number of threads as needed 
 to see if the aggregate performance gets better/worse.  On my clients with 
 QDR IB, I typically see aggregate write speeds in the range of 2.5-3.0 GB/s.
 
 You are probably already aware of this, but just in case, make sure that the 
 IB clients you use for testing don't also have ethernet connections to your 
 OSS servers.  If the client has an ethernet and an IB path to the same 
 server, it will choose one of the paths to use.  It could end up choosing 
 ethernet instead of IB and mess up your results.
 
 -- 
 Rick Mohr
 Senior HPC System Administrator
 National Institute for Computational Sciences
 http://www.nics.tennessee.edu
 
 
 On May 19, 2014, at 6:33 AM, Pardo Diaz, Alfonso alfonso.pa...@ciemat.es
 wrote:
 
 Hi,
 
 I have migrated my Lustre 2.2 to 2.5.1 and equipped my OSS/MDS and 
 clients with InfiniBand QDR interfaces.
 I have compiled Lustre with OFED 3.2 and configured the lnet module with:
 
 options lnet networks="o2ib(ib0),tcp(eth0)"
 
 
 But when I try to compare the Lustre performance across InfiniBand (o2ib), I 
 get the same performance as across Ethernet (tcp):
 
 INFINIBAND TEST:
 dd if=/dev/zero of=test.dat bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s
 
 ETHERNET TEST:
 dd if=/dev/zero of=test.dat bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s
 
 
 And this is my scenario:
 
 - 1 MDs with SSD RAID10 MDT
 - 10 OSS with 2 OST per OSS
 - Infiniband interface in connected mode
 - Centos 6.5
 - Lustre 2.5.1
 - Striped filesystem: lfs setstripe -s 1M -c 10
 
 
 I know my InfiniBand is running correctly, because if I use iperf3 between 
 client and servers I get 40 Gb/s over InfiniBand and 1 Gb/s over Ethernet 
 connections.
 
 
 
 Could you help me?
 
 
 Regards,
 
 
 
 
 
 Alfonso Pardo Diaz
 System Administrator / Researcher
 c/ Sola nº 1; 10200 Trujillo, ESPAÑA
 Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
 
 
 
 
 
 
 
 ___
 HPDD-discuss mailing list
 hpdd-disc...@lists.01.org
 https://lists.01.org/mailman/listinfo/hpdd-discuss
 
 
 

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] non-config log name received

2014-05-21 Thread Pardo Diaz, Alfonso
Hello,

I had an MDS and an MGS on Lustre 2.2. During my update to 2.5.1, I merged the 
MDS and MGS onto the same node. Everything works OK, but when a client mounts 
the filesystem, I get the following message on the MDS (/var/log/messages):

kernel: Lustre: MGS: non-config logname received: params



Any idea what this log message means?



Thanks in advance





Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37







___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Same performance Infiniband and Ethernet

2014-05-19 Thread Pardo Diaz, Alfonso
Hi,

I have migrated my Lustre 2.2 to 2.5.1 and equipped my OSS/MDS and clients 
with InfiniBand QDR interfaces.
I have compiled Lustre with OFED 3.2 and configured the lnet module with:

options lnet networks="o2ib(ib0),tcp(eth0)"


But when I try to compare the Lustre performance across InfiniBand (o2ib), I 
get the same performance as across Ethernet (tcp):

INFINIBAND TEST:
dd if=/dev/zero of=test.dat bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s

ETHERNET TEST:
dd if=/dev/zero of=test.dat bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s


And this is my scenario:

- 1 MDs with SSD RAID10 MDT
- 10 OSS with 2 OST per OSS
- Infiniband interface in connected mode
- Centos 6.5
- Lustre 2.5.1
- Striped filesystem: lfs setstripe -s 1M -c 10


I know my InfiniBand is running correctly, because if I use iperf3 between 
client and servers I get 40 Gb/s over InfiniBand and 1 Gb/s over Ethernet.
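
It may also be worth confirming that the Lustre traffic really leaves over 
o2ib rather than tcp; a minimal sketch from a client (the server NID is a 
placeholder, and the exact import fields vary by version):

lctl list_nids                           # should show both an @o2ib and a @tcp NID
lctl ping <oss_nid>@o2ib                 # LNet-level reachability of a server over InfiniBand
lctl get_param osc.*.import | grep -i connection   # which NID each OST import is using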



Could you help me?


Regards,





Alfonso Pardo Diaz
System Administrator / Researcher
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17 Fax: +34 927 32 32 37







___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Same performance Infiniband and Ethernet

2014-05-19 Thread Pardo Diaz, Alfonso
Thanks for your ideas,


I have measured the OST RAID performance, and there isn't a bottleneck in the 
RAID disks. If I write directly to the RAID I get:

dd if=/dev/zero of=test.dat bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 1,34852 s, 778 MB/s

And if I use /dev/urandom as the input file, I again get the same performance 
for the InfiniBand and Ethernet connections.

How can I write directly, bypassing the cache?
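
On the cache question: dd can request O_DIRECT itself, and the page cache can 
be dropped between runs; a small sketch (the file path and size are 
illustrative, and the file should be much larger than client RAM):

sync && echo 3 > /proc/sys/vm/drop_caches                            # as root: drop clean caches
dd if=/dev/zero of=/mnt/fs/test.dat bs=1M count=20000 oflag=direct   # write ~20 GB with O_DIRECT
dd if=/mnt/fs/test.dat of=/dev/null bs=1M iflag=direct               # direct read back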


Thanks again!




On 19/05/2014, at 13:24, Hammitt, Charles Allen chamm...@email.unc.edu 
wrote:

 Two things:
 
 1)  Linux write cache is likely getting in the way; you'd be better off 
 trying to write directly forgoing cache
 2)  you need to write a much bigger file than 1GB; try 50GB
 
 
 Then as the previous poster said, maybe your disks aren't up to snuff or are 
 misconfigured.  
 Also, it is very interesting, and impossible, to get 154MB/s out of a single GbE 
 link [128MB/s].  It should be more like 100-115.  Unless this is 10/40GbE... if 
 so... again, start at #1 and #2.
 
 
 
 
 Regards,
 
 Charles
 
 
 
 
 -- 
 ===
 Charles Hammitt
 Storage Systems Specialist
 ITS Research Computing @ 
 The University of North Carolina-CH
 211 Manning Drive
 Campus Box # 3420, ITS Manning, Room 2504  
 Chapel Hill, NC 27599
 ===
 
 
 
 
 -Original Message-
 From: lustre-discuss-boun...@lists.lustre.org 
 [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Vsevolod 
 Nikonorov
 Sent: Monday, May 19, 2014 6:54 AM
 To: lustre-discuss@lists.lustre.org
 Subject: Re: [Lustre-discuss] Same performance Infiniband and Ethernet
 
 What disks do your OSTs have? Maybe you have reached your disk performance 
 limit, so Infiniband gives some speedup, but very small. Did you try to 
 enable striping on your Lustre filesystem? For instance, you can type 
 something like this: lfs setstripe -c <count of stripes> 
 /mnt/lustre/somefolder and then copy a file into that folder.
 
 Also, there's an opinion that a sequence of zeros is not a good way to test 
 performance, so maybe you should try using /dev/urandom (which is rather 
 slow, so it's better to have a pre-generated urandom file in /ram, /dev/shm, 
 or wherever your memory filesystem is mounted, and copy that file to the 
 Lustre filesystem as a test).
 
 
 
 Pardo Diaz, Alfonso wrote on 2014-05-19 14:33:
 Hi,
 
 I have migrated my Lustre 2.2 to 2.5.1 and equipped my OSS/MDS 
 and clients with InfiniBand QDR interfaces.
 I have compiled Lustre with OFED 3.2 and configured the lnet module 
 with:
 
 options lnet networks="o2ib(ib0),tcp(eth0)"
 
 
 But when I try to compare the Lustre performance across InfiniBand 
 (o2ib), I get the same performance as across Ethernet (tcp):
 
 INFINIBAND TEST:
 dd if=/dev/zero of=test.dat bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s
 
 ETHERNET TEST:
 dd if=/dev/zero of=test.dat bs=1M count=1000
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s
 
 
 And this is my scenario:
 
 - 1 MDs with SSD RAID10 MDT
 - 10 OSS with 2 OST per OSS
 - Infiniband interface in connected mode
 - Centos 6.5
 - Lustre 2.5.1
 - Striped filesystem: lfs setstripe -s 1M -c 10
 
 
 I know my InfiniBand is running correctly, because if I use iperf3 
 between client and servers I get 40 Gb/s over InfiniBand and 1 Gb/s over 
 Ethernet connections.
 
 
 
 Could you help me?
 
 
 
 Regards,
 
 
 
 
 
 Alfonso Pardo Diaz
 System Administrator / Researcher
 c/ Sola nº 1; 10200 Trujillo, ESPAÑA
 Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
 
 
 
 
 
 
 
 ___
 Lustre-discuss mailing list
 Lustre-discuss@lists.lustre.org
 http://lists.lustre.org/mailman/listinfo/lustre-discuss
 
 
 --
 Никоноров Всеволод Дмитриевич, ОИТТиС, НИКИЭТ
 
 Vsevolod D. Nikonorov, JSC NIKET
 

[Lustre-discuss] Lustre develop documentation

2012-07-20 Thread Pardo Diaz, Alfonso
Hi everyone!


I would like to develop some ideas for Lustre. I am looking at the Lustre 
source code, but it is impossible to interpret the code without documentation. 
Do you know where I can find some documentation?


Thanks in advance




___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre develop documentation

2012-07-20 Thread Pardo Diaz, Alfonso

Thanks!


I think that is enough to get started.




-Original Message-
From: lustre-discuss-boun...@lists.lustre.org on behalf of Artem Blagodarenko
Sent: Fri 20/07/2012 10:19
To: lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] Lustre develop documentation
 
Hello!

This is all I know:
1) http://wiki.lustre.org
2)  Understanding Lustre Filesystem Internals, 2009
3) The Lustre Storage Architecture, 2005
4) The main part of the Lustre code is commented with Doxygen. 
Online documentation is here:
http://wiki.lustre.org/doxygen/
You can build this documentation offline:
 To build all the documentation, in the top-level lustre directory, run:
 doxygen build/doxyfile.api
 doxygen build/doxyfile.ref
I believe this is not a full documentation list.

On 20.07.2012, at 11:02, Pardo Diaz, Alfonso wrote:

 Hi everyone!
 
 
 I would like to develop some ideas for Lustre. I am looking at the Lustre 
 source code, but it is impossible to interpret the code without documentation. 
 Do you know where I can find some documentation?
 
 
 Thanks in advance
 
 ___
 Lustre-discuss mailing list
 Lustre-discuss@lists.lustre.org
 http://lists.lustre.org/mailman/listinfo/lustre-discuss


___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss