On 28/05/2020 17:40, Leonardo Saavedra wrote:
[...]
Remove the 2.9.0 lustre packages, then install
lustre-client-2.12.4-1.el7.x86_64.rpm and
kmod-lustre-client-2.12.4-1.el7.x86_64.rpm
Cheers to you, and to Aurelien Degremont, who replied earlier saying the
same; that seemed to fix it.
Phill.
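The upgrade described above amounts to roughly the following (a sketch;
the exact names of the old 2.9.0 packages depend on what rpm -qa reports
on your node, so adjust the remove step accordingly):

```shell
# See which Lustre client packages are currently installed
rpm -qa | grep -i lustre

# Remove the old 2.9.0 client packages (example names; match your rpm -qa output)
yum remove -y lustre-client kmod-lustre-client

# Install the 2.12.4 client packages downloaded from downloads.whamcloud.com
yum localinstall -y \
    kmod-lustre-client-2.12.4-1.el7.x86_64.rpm \
    lustre-client-2.12.4-1.el7.x86_64.rpm
```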
On 27/05/2020 19:26, Leonardo Saavedra wrote:
On 5/26/20 5:47 PM, Phill Harvey-Smith wrote:
echo "%_topdir $HOME/rpmbuild" >> .rpmmacros
wget -c
https://downloads.whamcloud.com/public/lustre/lustre-2.12.4/el7/client/SRPMS/lustre-2.12.4-1.src.rpm
rpmbuild --clean --r
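The quoted rpmbuild command is cut off above; a typical complete rebuild
sequence for the client SRPM looks something like this (a sketch — the
"--without servers" option is an assumption based on the conditionals in
the Lustre spec file, for building on a plain client node):

```shell
# Keep rpmbuild's working tree in the home directory
echo "%_topdir $HOME/rpmbuild" >> ~/.rpmmacros

# Fetch the client source RPM
wget -c https://downloads.whamcloud.com/public/lustre/lustre-2.12.4/el7/client/SRPMS/lustre-2.12.4-1.src.rpm

# Rebuild binary RPMs against the running kernel's headers
rpmbuild --rebuild --without servers lustre-2.12.4-1.src.rpm

# The resulting packages land under ~/rpmbuild/RPMS/x86_64/
```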
Hi all,
Can anyone tell me where to download the Lustre client modules for
CentOS 7.8 please ?
# uname -a
Linux exec3r420 3.10.0-1127.8.2.el7.x86_64 #1 SMP Tue May 12 16:57:42
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
Servers
Real name: Mr Phill
Harvey-Smith
[2016/12/12 14:50:32.140577, 3] smbd/password.c:308(register_existing_vuid)
register_existing_vuid: UNIX uid 1091 is UNIX user stsxab, and will
be vuid 100
[2016/12/12 14:50:32.140734, 3] smbd/password.c:238(register_homes_share)
Adding homes service fo
On 15/12/2016 14:21, Hanley, Jesse A. wrote:
I forgot: You should also be able to use lshowmount.
Humm, that works on the old server, but I can't find the command on the
new CentOS 7.2 server, which I installed from RPMs. I suspect there is
one that I have not installed :)
Cheers.
Phill.
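For reference, checking which clients still have the old servers mounted
can be done from the server side along these lines (a sketch; lshowmount
ships with the server-side Lustre packages, and the lctl parameter paths
may vary slightly between versions):

```shell
# On the old MGS/MDS/OSS: list the NIDs of clients the servers still know about
lshowmount

# Alternatively, query the export lists directly via lctl:
# on the MDS -- one UUID entry per connected client
lctl get_param mdt.*.exports.*.uuid
# on an OSS
lctl get_param obdfilter.*.exports.*.uuid
```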
Hi all,
I've now hopefully completed my migration to our new lustre filestore.
Is there any way I can check whether any clients still have the lustre
filesystems mounted from the old MDS/OSS? I do have one test machine
that I know does, but I want to make sure none of the others do before
On 12/12/2016 17:27, Patrick Farrell wrote:
Perhaps more expansively:
Is the new MDS configured to be able to authenticate these users? Using
/etc/passwd synchronization to do network auth is nasty. It's just
asking for weird troubles if you don't get it exactly right. LDAP or
similar is the
Hi All,
I'm in the final step of upgrading our storage servers to lustre 2.8.
The MDS/OSS are running on CentOS 7.2; the clients are Ubuntu 12.04,
though I also have a virtual machine running on Centos 7.2 as a client,
both seem to exhibit the same problem.
Our old environment was a 2.4
On 02/11/2016 17:54, Dilger, Andreas wrote:
There is a "make debs" target, but I don't know how often this is
tested. That would be the best thing to use for Ubuntu, and if it isn't
working then please feel free to report to the list and/or Jira.
Just got back to this,
make debs gets further
On 02/11/2016 12:22, Patrick Farrell wrote:
Phil,
Phill :)
I feel you must be looking for more, but the answer is "yes". I think
just "make rpms" should get you there, from the same spot you did make
install.
Humm, that seems to fall over; the following are the last few lines of output
Hi all,
As a stopgap to migrating our cluster from Ubuntu to Centos, I need to
distribute an updated Lustre client to all the compute nodes, which are
all running Ubuntu 12.04.
I have the 2.8 client compiled on a test node and it seems to be working
without problems on this node, as I have
Hi all,
I'm just in the process of replacing our lustre servers to new hardware.
The old servers mds0/oss0 were running on Ubuntu 10.04, with a Red Hat
kernel and lustre 2.1.0.
The new servers mds1/oss1 are running on Centos 7.2 with lustre 2.8.0.
The filesystems on the new server are the
Hi all,
Having tested a simple setup for lustre / zfs, I'd like to try and
replicate on the test system what we currently have on the production
system, which uses a much older version of lustre (2.0 IIRC).
Currently we have a combined mgs / mds node and a single oss node. We
have 3
Hi all,
Now I have my test setup working on CentOS 7.2, with Lustre 2.8 backed
by ZFS, I have a question related to this setup.
Looking on the net, the advice seems to be that if you are going to run
ZFS you should turn off hardware RAID and let the ZFS pool handle it;
would this advice still
Hi all
I'm still having problems trying to set up Lustre 2.8 on CentOS 7.2. I
think part of the problem is that the documentation at
http://lustre.org/documentation/ seems to be way out of date when it
comes to setting up Lustre with ZFS.
Also chapter 9 of the documentation talks about
Hi all,
I've hopefully setup a test MGT/MGS on one of our test servers however
when I try to start lustre with "service lustre start" it reports [OK]
but the following errors are logged in /var/log/messages
Sep 21 09:44:29 oric kernel: osd_zfs: disagrees about version of symbol
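A "disagrees about version of symbol" error usually means the Lustre
kmods were built against a different kernel or ZFS/SPL release than the
ones actually loaded. One way to diagnose it (a sketch; package names
assume the usual CentOS lustre/zfs RPM layout):

```shell
# The running kernel the modules must match
uname -r

# Kernel version the lustre osd_zfs module was built for, and what it depends on
modinfo osd_zfs | grep -E 'vermagic|depends'

# Kernel version the zfs module was built for
modinfo zfs | grep vermagic

# Cross-check that the installed lustre, zfs and spl packages agree
rpm -qa | grep -E 'lustre|zfs|spl'
```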
On 13/09/2016 14:59, Crowe, Tom wrote:
Your mkfs.lustre will look something like this:
mkfs.lustre --ost --mgsnode=NID_OF_MGS --fsname=FS_NAME --index=DECIMAL_NUM
--backfstype=zfs --network=NET_TYPE testpool/ost-name-here
The last argument is the zfs vdev device you want mkfs.lustre to create
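Filling in the template above with concrete (hypothetical) values, a
minimal ZFS-backed OST setup might look like this — the pool name, disk
devices, MGS NID and mount point here are all made up for illustration:

```shell
# Create a mirrored ZFS pool for the OST (hypothetical devices)
zpool create ostpool mirror /dev/sdb /dev/sdc

# Format a Lustre OST dataset inside the pool (hypothetical MGS NID and fsname)
mkfs.lustre --ost --mgsnode=10.0.0.1@tcp --fsname=testfs --index=0 \
    --backfstype=zfs ostpool/ost0

# Bring the OST online by mounting the dataset with type lustre
mkdir -p /mnt/ost0
mount -t lustre ostpool/ost0 /mnt/ost0
```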
Hi all,
As we will soon be replacing our lustre servers, I am currently trying
to setup a test filesystem to practice the process of setting it all up.
I've installed Centos 7.2 on a couple of old server nodes to use as my
test mds/mdt and oss.
I have been following the setup guide from :
Hi all,
We will shortly be replacing the hardware that our Lustre servers run on
as they are both over 5 years old and are due for retirement.
Our current setup is this :
Note that since this was largely setup before I joined and since it's in
service at the moment I can't really take the
On 14/08/2016 03:09, Stephane Thiell wrote:
Hi Phil,
Phill :)
I understand that you’re running master on your clients (tag v2_8_56
was created 4 days ago) and 2.1 on the servers? Running master in
production is already a challenge. Also Lustre has never been good for
cross-version
On 11/08/2016 16:10, Colin Faber wrote:
First glance indicates you're having network connectivity problems,
(possibly driver issue with your NIC?)
I don't seem to have had any problems with any other services running on
the cluster, and there are no messages in the journal or the /var/log
Hi all,
I have a (fairly urgent) problem.
I have been updating our cluster to Ubuntu 16.04, which has on the whole
gone well; however, in the last part I have run across a rather serious
error.
Our frontend node has an instance of samba that shares out the home
directories, we have found
Hi all,
After working through the "Odd behavior with matlab" thread and
applying the patch I can do :
lctl set_param llite.home-.create_no_open_optimization=0
Which will allow matlab to run.
The problem is that this setting needs to be set every time the system
is booted, and also the
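One way to make such a tunable survive reboots is to set it permanently
from the MGS with lctl set_param -P (available in Lustre 2.5 and later);
the wildcarded parameter name below is an assumption based on the
truncated one quoted above:

```shell
# Run on the MGS node: the setting is stored there and pushed to all
# clients, now and after every remount, instead of being lost on reboot.
lctl set_param -P llite.*.create_no_open_optimization=0
```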
On 03/06/2016 12:29, Patrick Farrell wrote:
Right,
I've now managed to get the LU-4185 patch to integrate into a copy of
the lustre tree checked out on 31st May. I did have to make some minor
changes (documented below), and could not get it to compile in a more
recent checkout, but since I
On 02/06/2016 17:45, E.S. Rosenberg wrote:
My fault, I took the discussion OT and also forgot to set the From so
that it would be on list...
No, you did set it correctly; I put the list back in assuming you'd made
a mistake, as this list is one of those where reply goes back to the
sender and
On 02/06/2016 14:32, E.S. Rosenberg wrote:
What is the output of:
cat /sys/fs/lustre/version
Ok /proc/fs/lustre/version :
On the 16.04 system which barfs on matlab :
lustre: 2.8.53_51_g3680fa1
kernel: patchless_client
build: 2.8.53_51_g3680fa1
On the 12.04 system where matlab works :
On 02/06/2016 14:32, E.S. Rosenberg wrote:
What is the output of:
cat /sys/fs/lustre/version
Humm, I don't have a /sys/fs/lustre. I have the modules loaded and they
are listed when I do an lsmod.
Actually the same is true for the Ubuntu 12.04 nodes which have an older
version of the lustre
On 10/05/2016 21:28, Christopher J. Morrone wrote:
Do you need infiniband support in Lustre? If not, you can avoid this
problem by disabling compilation of the o2ib lnd. Add "--with-o2ib=no"
to the configure command line.
Excellent thanks, that seems to have done the trick.
Cheers.
Phill.
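Putting the suggestion above together with the usual client build steps,
the configure stage would look something like this (a sketch for a
patchless client build without InfiniBand support):

```shell
# Regenerate the build scripts from a git checkout
sh autogen.sh

# Client-only build, skipping the o2ib LND so the IB headers are not needed
./configure --disable-server --with-o2ib=no

make
```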
Hi all,
We are currently running the lustre client on our departmental linux
cluster which is running Ubuntu 12.04 LTS (x64). It is our plan to
upgrade our compute nodes over the summer to Ubuntu 16.04 LTS. However
to do this we will need to compile the Lustre client to run on these.
I have
Hi all,
One of my users is reporting a massive size difference between the
figures reported by du and quota.
doing a du -hs on his directory reports :
du -hs .
529G.
doing a lfs quota -u username /storage reports
Filesystem kbytes quota limit grace files quota limit grace
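Worth remembering when comparing the two numbers: du only counts the
directory tree it is pointed at, while lfs quota counts every file the
user owns anywhere on the filesystem. A quick comparison (the path and
username are placeholders):

```shell
# Space used by this one directory tree only
du -sh /storage/username

# Space charged to the user across the whole filesystem
lfs quota -u username /storage

# Verbose mode breaks the usage down per OST, which helps localise a gap
lfs quota -v -u username /storage
```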
Hi All,
We have one of our Lustre filesystems that we export via NFS to a
machine that we don't currently have the Lustre client modules built for.
The remote machine can mount the filesystems; however, as soon as it
tries to access a file on the NFS mounted filesystem, it causes the NFS server
Hi all,
Are there any gotchas that I need to be aware of in upgrading our OSS
and MDS servers to the latest 2.5 Lustre version? Will all the clients
need to be updated at the same time?
Cheers.
Phill.
___
Lustre-discuss mailing list
Hi all,
Our OSS has started panicking in the last couple of days; it seems to be
related to nfs4, but I'm not sure, so I'm asking the group for pointers.
Firstly, a couple of screen grabs are at:
http://penguin.stats.warwick.ac.uk/~stsxab/Lustre/
The OSS server is currently running Ubuntu 10.04 LTS with
Just to let everyone know I have got the Lustre client building now by
doing a fresh install, it looks like the node I was trying to build on
had a screwed / bad build environment.
Cheers for the help.
Phill.
On 30/07/2013 16:29, Diep, Minh wrote:
Hi Phill,
Please try
sh autogen.sh
./configure --disable-server
Doh! that is what I tried, but not what I wrote in my email :(
I have found the required .config file however and put it in
/lib/modules/3.8.0-27-generic/build/.config (it's installed in
On 30/07/2013 20:49, Dilger, Andreas wrote:
On 2013/07/30 10:43 AM, Diep, Minh minh.d...@intel.com wrote:
We are not actively building Ubuntu 12.04. I would look into this error
checking if Linux was built with CONFIG_CRYPTO in or as module… no
I don't think we should be depending on