hi, thanks for the quick response

****************************************************************************
ssh [EMAIL PROTECTED]
Scientific Linux CERN SLC release 4.6 (Beryllium)
[EMAIL PROTECTED]'s password:
Last login: Sun Dec 23 18:02:33 2007 from x-mathr11.tau.ac.il
[EMAIL PROTECTED] ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[EMAIL PROTECTED] ~]#
***************************************************************************
ssh [EMAIL PROTECTED]
[EMAIL PROTECTED] ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere            icmp any
ACCEPT     ipv6-crypt--  anywhere        anywhere
ACCEPT     ipv6-auth--  anywhere         anywhere
ACCEPT     udp  --  anywhere             224.0.0.251         udp dpt:5353
ACCEPT     udp  --  anywhere             anywhere            udp dpt:ipp
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpts:30000:30101
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:afs3-callback
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited
[EMAIL PROTECTED] ~]#
**********************************************************************************************
What do you mean by "is this network dedicated to Lustre?" I run only Lustre on these machines, since I would like to have a working configuration. Can you recommend an interface?

thanks!, Avi

On Dec 23, 2007 9:27 PM, Aaron Knister <[EMAIL PROTECTED]> wrote:
> Can you check the firewall on each of those machines ( iptables -L ) and
> paste that here. Also, is this network dedicated to Lustre? Lustre can
> easily saturate a network interface under load to the point it becomes
> difficult to login to a node if it only has one interface. I'd recommend
> using a different interface if you can.
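[An aside on the iptables output above: Lustre's TCP LND listens on port 988 by default, and the RH-Firewall-1-INPUT chain on the second node ends in a blanket REJECT with no rule for that port. A minimal sketch of a fix, assuming the default Lustre port and the chain name shown in the listing:]

```shell
# Sketch: insert an ACCEPT rule for Lustre's default socklnd port (TCP 988)
# ahead of the chain's final REJECT rule. Run as root on the node whose
# iptables -L output shows the REJECT line.
iptables -I RH-Firewall-1-INPUT -p tcp --dport 988 -m state --state NEW -j ACCEPT

# Quick (temporary) way to confirm the firewall is the culprit:
service iptables stop
```

[If stopping iptables makes `lctl ping` and the OST mount succeed, re-enable the firewall with the port-988 rule in place rather than leaving it off.]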
> 
> On Dec 23, 2007, at 11:03 AM, Avi Gershon wrote:
> 
> node 1 *132.66.176.212*
> *node 2 132.66.176.215*
> 
> [EMAIL PROTECTED] ~]# ssh 132.66.176.215
> [EMAIL PROTECTED]'s password:
> ssh(21957) Permission denied, please try again.
> [EMAIL PROTECTED]'s password:
> Last login: Sun Dec 23 14:32:51 2007 from x-math20.tau.ac.il
> [EMAIL PROTECTED] ~]# lctl ping [EMAIL PROTECTED]
> failed to ping [EMAIL PROTECTED]: Input/output error
> [EMAIL PROTECTED] ~]# lctl list_nids
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] ~]# ssh 132.66.176.212
> The authenticity of host '132.66.176.212 (132.66.176.212)' can't be
> established.
> RSA1 key fingerprint is 85:2a:c1:47:84:b7:b5:a6:cd:c4:57:86:af:ce:7e:74.
> Are you sure you want to continue connecting (yes/no)? yes
> ssh(11526) Warning: Permanently added '132.66.176.212' (RSA1) to the list
> of known hosts.
> [EMAIL PROTECTED]'s password:
> Last login: Sun Dec 23 15:24:41 2007 from x-math20.tau.ac.il
> [EMAIL PROTECTED] ~]# lctl ping [EMAIL PROTECTED]
> failed to ping [EMAIL PROTECTED]: Input/output error
> [EMAIL PROTECTED] ~]# lctl list_nids
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] ~]#
> 
> thanks for helping!!
> Avi
> 
> On Dec 23, 2007 5:32 PM, Aaron Knister <[EMAIL PROTECTED]> wrote:
> >
> > On the OSS, can you ping the MDS/MGS using this command:
> >
> > lctl ping [EMAIL PROTECTED]
> >
> > If it doesn't ping, list the NIDs on each node by running
> >
> > lctl list_nids
> >
> > and tell me what comes back.
> >
> > -Aaron
> >
> >
> > On Dec 23, 2007, at 9:22 AM, Avi Gershon wrote:
> >
> > Hi, I could use some help.
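[For reference, the failing check above can be written out as follows; the literal NIDs were mangled by the list's address scrubber, so this sketch assumes the default tcp0 LNET network with NID syntax `<ip>@<network>`:]

```shell
# On the OSS: list the local NIDs, then ping the MDS/MGS NID.
lctl list_nids
lctl ping 132.66.176.212@tcp0   # tcp0 network assumed; adjust to match list_nids

# "Input/output error" from lctl ping usually means the request never
# reached the peer (firewall, wrong interface, or LNET misconfiguration),
# not a fault in Lustre itself.
```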
> > I installed Lustre on 3 computers
> > mdt/mgs :
> >
> > ************************************************************************************
> > [EMAIL PROTECTED] ~]# mkfs.lustre --reformat --fsname spfs --mdt --mgs
> > /dev/hdb
> >
> >    Permanent disk data:
> > Target:     spfs-MDTffff
> > Index:      unassigned
> > Lustre FS:  spfs
> > Mount type: ldiskfs
> > Flags:      0x75
> >             (MDT MGS needs_index first_time update )
> > Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
> > Parameters:
> >
> > device size = 19092MB
> > formatting backing filesystem ldiskfs on /dev/hdb
> >         target name  spfs-MDTffff
> >         4k blocks    0
> >         options      -J size=400 -i 4096 -I 512 -q -O dir_index -F
> > mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-MDTffff -J size=400 -i 4096 -I
> > 512 -q -O dir_index -F /dev/hdb
> > Writing CONFIGS/mountdata
> > [EMAIL PROTECTED] ~]# df
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/hda1             19228276   4855244  13396284  27% /
> > none                    127432         0    127432   0% /dev/shm
> > /dev/hdb              17105436    455152  15672728   3% /mnt/test/mdt
> > [EMAIL PROTECTED] ~]# cat /proc/fs/lustre/devices
> >   0 UP mgs MGS MGS 5
> >   1 UP mgc [EMAIL PROTECTED] 5f5ba729-6412-3843-2229-1310a0b48f71 5
> >   2 UP mdt MDS MDS_uuid 3
> >   3 UP lov spfs-mdtlov spfs-mdtlov_UUID 4
> >   4 UP mds spfs-MDT0000 spfs-MDT0000_UUID 3
> > [EMAIL PROTECTED] ~]#
> > *************************************************************end
> > mdt******************************
> > so you can see that the MGS is up,
> > and on the OSTs I get an error!! Please help...
> >
> > ost:
> > **********************************************************************
> > [EMAIL PROTECTED] ~]# mkfs.lustre --reformat --fsname spfs --ost
> > --mgsnode=132.66.
> > [EMAIL PROTECTED] /dev/hdb1
> >
> >    Permanent disk data:
> > Target:     spfs-OSTffff
> > Index:      unassigned
> > Lustre FS:  spfs
> > Mount type: ldiskfs
> > Flags:      0x72
> >             (OST needs_index first_time update )
> > Persistent mount opts: errors=remount-ro,extents,mballoc
> > Parameters: [EMAIL PROTECTED]
> >
> > device size = 19594MB
> > formatting backing filesystem ldiskfs on /dev/hdb1
> >         target name  spfs-OSTffff
> >         4k blocks    0
> >         options      -J size=400 -i 16384 -I 256 -q -O dir_index -F
> > mkfs_cmd = mkfs.ext2 -j -b 4096 -L spfs-OSTffff -J size=400 -i 16384 -I
> > 256 -q -O dir_index -F /dev/hdb1
> > Writing CONFIGS/mountdata
> > [EMAIL PROTECTED] ~]# /CONFIGS/mountdata
> > -bash: /CONFIGS/mountdata: No such file or directory
> > [EMAIL PROTECTED] ~]# mount -t lustre /dev/hdb1 /mnt/test/ost1
> > mount.lustre: mount /dev/hdb1 at /mnt/test/ost1 failed: Input/output
> > error
> > Is the MGS running?
> > ***********************************************end
> > ost********************************
> >
> > can anyone point out the problem?
> > thanks, Avi.
> >
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > [email protected]
> > https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
> >
> >
> > Aaron Knister
> > Associate Systems Administrator/Web Designer
> > Center for Research on Environment and Water
> >
> > (301) 595-7001
> > [EMAIL PROTECTED]
> >
> >
> >
> 
> Aaron Knister
> Associate Systems Administrator/Web Designer
> Center for Research on Environment and Water
> 
> (301) 595-7001
> [EMAIL PROTECTED]
> 
> 
> 
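[A note on the quoted OST failure: mount.lustre's "Is the MGS running?" hint usually means the OSS never reached the MGS over the network, which fits the firewall output earlier in the thread. A quick reachability probe, assuming the MGS is at 132.66.176.212 and Lustre's default TCP port 988:]

```shell
# From the OSS, probe the MGS's Lustre port before retrying the mount.
telnet 132.66.176.212 988   # should connect if the MGS is reachable

# Once the port answers, retry the OST mount:
mount -t lustre /dev/hdb1 /mnt/test/ost1
```

[If the probe is refused or times out, fix the firewall/interface first; re-running the mount will keep failing with the same I/O error until then.]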
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
