Hi Lance,
We are doing it with BeeGFS (mdadm and NVMe drives in the same HW). For
GPFS, have you updated the nsddevices sample script to look at the mdadm
devices and put it in /var/mmfs/etc?
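As a reference point, here is a minimal sketch of the sort of nsddevices script I mean, adapted from the shipped sample (/usr/lpp/mmfs/samples/nsddevices.sample); the device-name pattern and the exit-code convention are assumptions, so check the comments in the sample on your release:

#!/bin/bash
# /var/mmfs/etc/nsddevices -- hypothetical sketch, not a tested script.
# Print one "deviceName deviceType" line per md array so that GPFS
# device discovery will consider the mdadm devices as NSDs.
for dev in /dev/md*; do
    [ -b "$dev" ] || continue          # skip the /dev/md directory and non-block entries
    echo "${dev#/dev/} generic"        # e.g. "md0 generic"
done
# The exit status controls whether GPFS also runs its built-in discovery;
# the shipped sample documents the exact convention for your version.
exit 0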
BTW I'm interested to see how you go with that configuration.
Cheers,
Greg
-Original Message-
That last little bit “not available today” gives me hope. It would be nice to
get there “one day.”
Our situation is that we are using NFS for access to the images that VMs run
from. An outage means shutting down a lot of guests. An NFS outage of even
short duration would result in the system disks of those guests being affected.
In theory it only affects SMB, but in practice if NFS depends on winbind for
authorisation then it is affected too. I can understand the need for changes
to happen every so often, and that outages may be required then, but I would
like to see some effort made to avoid this.
We run the GPFS client software on SLES 12 SP2, which has a 4.4 kernel. GPFS
itself is only at 4.2.3-1 at present.
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of
z@imperial.ac.uk
Sent: Monday, 4 December 2017 8:38 PM
To:
I personally don't think the lack of a migration tool is a problem. I do think
that two format changes in such quick succession are a problem. I am willing to
migrate occasionally, but then the amount of data we have in GPFS is still
small. I do value my data, so I'd trust a manual migration using
I guess I may as well ask about SLES 12 SP3 as well! TIA.
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Kenneth Waegeman
Sent: Wednesday, 27 September 2017 6:17 PM
To: gpfsug-discuss@spectrumscale.org
I am interested too, so maybe keep it on list?
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of John Hearns
Sent: Saturday, 16 September 2017 1:37 AM
To: gpfsug main discussion list
Subject:
+1. We are interested in SLES 12 SP3 too.
BTW, has anybody done any comparisons of the SLES 12 SP2 (4.4) kernel vs RHEL
7.3 in terms of GPFS IO performance? I would think the 4.4 kernel might give it
an edge. I'll probably get around to comparing them myself one day, but if
anyone else has some numbers already, I'd be interested.
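If anyone does take it on, the gpfsperf tool shipped in /usr/lpp/mmfs/samples/perf would keep the workload identical on both kernels; the file name, sizes and thread count below are just placeholders, so treat this as a sketch rather than a recipe:

# Build the sample benchmark once per node (needs make and a C compiler)
cd /usr/lpp/mmfs/samples/perf && make
# Sequential write, then sequential read, of a scratch file on the filesystem under test
./gpfsperf create seq /gpfs/fs0/perftest -n 8g -r 1m -th 4
./gpfsperf read   seq /gpfs/fs0/perftest -n 8g -r 1m -th 4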
It would be nice to know why you cannot use ganesha or mmsmb.
You don't have to use protocols or CES. We are migrating to CES from doing our
own thing with NFS and samba on Debian. Debian does not have support for CES,
so we had to roll our own. We did not use CNFS either. To get to CES we had
I asked Mellanox about this nearly 2 years ago and was told that around the
500-node mark there would be a tipping point and that datagram mode would be
more useful after that. Memory utilisation was the issue. I've also seen
references to smaller node counts more recently, as well as generic
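Assuming this is the usual IPoIB connected-versus-datagram question, the mode is set per interface and is easy to check; ib0 below is just an example name:

# Show whether the interface is in IPoIB connected or datagram mode
cat /sys/class/net/ib0/mode
# Switch to datagram mode (bring the interface down first on most distros)
echo datagram > /sys/class/net/ib0/mode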
Are you using InfiniBand or Ethernet? I'm wondering if IBM have solved the
gratuitous ARP issue which we see with our non-protocols NFS implementation.
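For what it's worth, the manual workaround in that situation is usually to force a gratuitous ARP from the node that takes over the floating address; the interface and address below are placeholders:

# Send a few unsolicited (gratuitous) ARP replies for the floating address
# (iputils arping; -U = unsolicited, -I = interface, -c = count)
arping -U -I eth0 -c 3 192.0.2.50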
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Simon
I was going to keep mmdf in mind, not gpfs.snap. I will now also keep in mind
that mmdf can have an impact, as at present we have spinning disk for metadata.
The system I am playing around on is not in production yet, so I am safe for
the moment.
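As an aside, if the worry is the cost of a full scan, the query can be narrowed; the filesystem name below is a placeholder and the flags are as I understand them, so check the mmdf man page:

# Full space report with human-readable sizes
mmdf gpfs0 --block-size auto
# Inode counts only, a much narrower report
mmdf gpfs0 -F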
Thanks again.
From:
Thanks. I don't have a snap. I'll keep that in mind for next time I do this.
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Knister, Aaron
S. (GSFC-606.2)[COMPUTER SCIENCE CORP]
Sent: Thursday, 30 March 2017 9:45 AM
To: gpfsug main
Thanks Yuri. These were great. I'm not trying to be impertinent, but I have one
suggestion: if you can find the time, add some diagrams to help readers
visualise the various data structures and layouts. I am thinking along the
lines of what was in "The Magic Garden Explained" and "The Design
The SRP work was done a few years ago now. We use the same SRP code for both
physical and virtual, so I am guessing it has nothing to do with the SR-IOV
side of things. Somebody else did the work, so I will try and get an answer for
you. I agree performance and stability are good with physical
We use KVM running on a Debian host, with CentOS guests. Storage is zoned from
our DDN InfiniBand array to the host and then passed through to the guests. We
would like to zone it directly to the guests' SR-IOV IB HCAs, but SRP seems to
be a bit of a dead code tree. We had to do a bit of work to get
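For anyone curious, the manual SRP flow we started from is roughly the following; the sysfs path and the ibsrpdm output format are from memory, so treat it as a sketch:

# Load the SRP initiator and list targets visible through the HCA
modprobe ib_srp
ibsrpdm -c
# Each line printed (id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=...)
# can be written to add_target to log in to that target
echo "id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=..." \
    > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target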
We hit something like this due to a bug in GSKit. We all thought it was
networking at first and it took me a fair bit of time to check all that. We
have 7 NSD servers and around 400 clients running 4.2.0.4. We are just trying a
workaround now that looks promising. The bug will be fixed at some point.
Are there any presentations available online that provide diagrams of the
directory/file creation process and modifications, in terms of how the
blocks/inodes and indirect blocks etc. are used? I would guess there are a few
different cases that would need to be shown.
This is the sort of thing I am after.
I am wondering what people use to produce a file size distribution report for
their filesystems. Has everyone rolled their own, or is there some go-to app to
use?
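For reference, a crude histogram can be knocked up with GNU find and awk along the lines of the sketch below (the mount point is a placeholder); for a big filesystem a policy scan with mmapplypolicy and a LIST rule showing FILE_SIZE is presumably the more scalable route, hence the question:

# Count files in power-of-1024 size buckets (B, KiB, MiB, GiB, TiB, PiB)
find /gpfs/fs0 -type f -printf '%s\n' 2>/dev/null | \
awk '{
    b = 0; s = $1
    while (s >= 1024) { s /= 1024; b++ }   # b: 0=bytes, 1=KiB, 2=MiB, 3=GiB, ...
    count[b]++
}
END {
    split("B KiB MiB GiB TiB PiB", unit, " ")
    for (b = 0; b < 6; b++)
        printf "%10d files in the %s range\n", count[b], unit[b+1]
}'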
Cheers,
Greg
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of
I agree with an RFE.
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Bryan Banister
Sent: Friday, 26 August 2016 2:47 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] mmcessmbchconfig
I see only 4 PDFs now, with slightly different titles to the previous 5 PDFs
available with 4.2.0. Just checking: are there only supposed to be 4 now?
Greg
The GID selection rules for account creation are specific to the Linux
distribution. It sounds like you are familiar with Red Hat, where I think this
idea of GID=UID started.
sles12sp1-brc:/dev/disk/by-uuid # useradd testout
sles12sp1-brc:/dev/disk/by-uuid # grep testout /etc/passwd
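To illustrate the difference (the passwd lines below are made up, and the knob involved, as I understand it, is USERGROUPS_ENAB in /etc/login.defs plus the useradd defaults):

# SLES default: new users join the existing "users" group (GID 100), e.g.
#   testout:x:1001:100::/home/testout:/bin/bash
# Red Hat style: useradd creates a per-user group, so GID == UID, e.g.
#   testout:x:1001:1001::/home/testout:/bin/bash
# The per-user-group behaviour is controlled by this setting:
grep -i '^USERGROUPS_ENAB' /etc/login.defs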
You are right. An IBMer cleared it up for me.
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Simon Thompson
(Research Computing - IT Services)
Sent: Tuesday, 19 July 2016 6:00 PM
To: gpfsug main discussion list
Hi All,
Given the issues with supporting RHEL 7.2, I am wondering about the latest
SLES release and support. Is anybody actually running it on SLES 12 SP1? I've
seen reference to a kernel version that is in SLES 12 SP1, but I'm not sure I
trust it, as the same document also