Best Regards,
-Kums
----- Original message -----
From: "Truong Vu"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Cc:
Subject: Re: [gpfsug-discuss] LROC
Yes, older versions of GPFS don't recognize /dev/nvme*, so you would need the
/var/mmfs/etc/nsddevices user exit. On newer GPFS versions, the NVMe devices
are also generic, so it is good that you are using the same NSD sub-type.
Cheers,
Tru.
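For reference, a minimal sketch of such an nsddevices user exit, assuming the documented convention for /var/mmfs/etc/nsddevices (echo "name sub-type" pairs with names relative to /dev; the NVMe naming and the "generic" sub-type follow this thread, so adjust to your hardware):

#!/bin/bash
# /var/mmfs/etc/nsddevices -- sketch of a user exit that adds NVMe namespaces
# to GPFS device discovery (this file is sourced by the GPFS discovery logic)
for dev in /dev/nvme*n1; do
    [ -b "$dev" ] || continue
    # output format: "<name relative to /dev> <device sub-type>"
    echo "${dev#/dev/} generic"
done
# return 0 = use only the devices echoed above (skip built-in discovery),
# non-zero = also let the normal GPFS device discovery run
return 1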
From: gpfsug-discuss-requ...@spectrumscale.org
I'm wary of spending a lot of money on LROC devices when I don't know what
return I will get. That said, I think the main bottleneck for any SMB
installation is Samba itself, not the disks, so I remain largely unconvinced
that LROC will help much.
From: gpfsug-discuss-boun...@spectrumscale.org
100% utilized are bursts above 200,000 IOs. Any way to tell ganesha.nfsd to
cache more?
On 1/25/17 3:51 PM, Matt Weil wrote:
[ces1,ces2,ces3]
maxStatCache 8
worker1Threads 2000
maxFilesToCache 50
pagepool 100G
maxStatCache 8
lrocData no
378G system memory.
On 1/25/17 3:29 PM, Sven Oehme wrote:
have you tried to just leave lrocInodes and lrocDirectories on and turn data
off ?
Yes, data I just turned off.
Also, did you increase maxStatCache so LROC actually has some compact
objects to use?
If you send the values for maxFilesToCache, maxStatCache, worker1Threads and
the available memory of the node, I can provide a starting point.
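For anyone wanting to gather those numbers, a minimal sketch using standard Scale and Linux commands (the /usr/lpp/mmfs/bin path is the usual install location; adjust as needed):

/usr/lpp/mmfs/bin/mmlsconfig maxFilesToCache
/usr/lpp/mmfs/bin/mmlsconfig maxStatCache
/usr/lpp/mmfs/bin/mmlsconfig worker1Threads
/usr/lpp/mmfs/bin/mmlsconfig pagepool
free -g    # total and available memory on the node, in GiB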
On 1/25/17 3:00 PM, Sven Oehme wrote:
Matt,
The assumption was that the remote devices are slower than LROC. There are
some attempts in the code to not schedule more than a maximum number of
outstanding I/Os to the LROC device, but this doesn't help in all cases and
depends on what kernel-level parameters are set for the device.
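Those kernel-level limits live in sysfs on the block device itself. A generic Linux sketch for inspecting, and if needed capping, the queue depth on the LROC device (the nvme0n1 name is an assumption taken from this thread, not a recommendation):

cat /sys/block/nvme0n1/queue/nr_requests   # how many requests the block layer will queue
cat /sys/block/nvme0n1/queue/scheduler     # which I/O scheduler is active
# lowering nr_requests caps how many I/Os can be outstanding to the device
echo 128 > /sys/block/nvme0n1/queue/nr_requests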
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] LROC Zimon sensors
Richard,
There are no exposures of LROC counters in the Scale GUI. You need to use the
Grafana bridge to get graphs, or the command-line tools to query the data in
text format.
Sven
On Wed, Jan 25, 2017 at 5:08 PM Sobey, Rich
First, good that the problem at least is solved. It would be great if you
could open a PMR so this gets properly fixed; the daemon shouldn't
segfault, but rather print a message that the device is too big.
On the caching: it only gets used when you run out of pagepool or when you
run out of full file objects (maxFilesToCache).
after restart. still doesn't seem to be in use.
> [root@ces1 ~]# mmdiag --lroc
>
> === mmdiag: lroc ===
> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
> Cache inodes 1 dirs 1 data 1 Config: maxFile 1073741824 stubFile
> 1073741824
> Max capacity: 1526184 MB, currently in use:
wow that was it.
> mmdiag --lroc
>
> === mmdiag: lroc ===
> LROC Device(s): '0A6403AA5865389E#/dev/nvme0n1;' status Running
> Cache inodes 1 dirs 1 data 1 Config: maxFile 1073741824 stubFile
> 1073741824
> Max capacity: 1526184 MB, currently in use: 0 MB
> Statistics from: Thu Dec 29 10:08:58
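A small sketch for watching whether the cache keeps filling from here, polling the same mmdiag output (the 60-second interval is arbitrary):

# poll the device status and capacity lines from mmdiag --lroc
watch -n 60 "/usr/lpp/mmfs/bin/mmdiag --lroc | grep -E 'status|currently in use'"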
On 12/29/16 10:02 AM, Aaron Knister wrote:
> Interesting. Thanks Matt. I admit I'm somewhat grasping at straws here.
>
> That's a *really* long device path (and nested too), I wonder if
> that's causing issues.
was thinking of trying just /dev/sdxx
>
> What does a "tspreparedisk -S" show on that
I agree that is a very long name. Given this is an NVMe device, it should
show up as /dev/nvmeXYZ; I suggest reporting exactly that in nsddevices and
retrying. I vaguely remember we have some fixed-length device name
limitation, but I don't remember what the length is, so this would be my
first guess.
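A quick generic-Linux way to find the short kernel name behind a long nested alias before putting it into nsddevices (the alias below is a made-up placeholder, not the actual path from this thread):

# resolve a long udev alias to the short kernel device name
readlink -f /dev/disk/by-id/nvme-SOME-LONG-ALIAS
# typically prints the short name, e.g. /dev/nvme0n1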
Interesting. Thanks Matt. I admit I'm somewhat grasping at straws here.
That's a *really* long device path (and nested too), I wonder if that's
causing issues.
What does a "tspreparedisk -S" show on that node?
Also, what does your nsddevices script look like? I'm wondering if you
could have
"well there's yer problem". Are you perhaps running a version of GPFS 4.1
older than 4.1.1.9? Looks like there was an LROC related assert fixed in
4.1.1.9 but I can't find details on it.
*From:*Matt Weil
*Sent:* 12/28/16, 5:21 PM
*To:* gpfsug main discussion list
*Subject:* Re: [gpfsug-discuss] LROC
yes
> Wed Dec 28 16:17:07.507 2016: [X] *** Assert exp(ssd->state !=
> ssdActive) in line 427 of file
> /project/sprelbmd1/build/rbmd11027d/src/avs/fs/mmfs/ts/flea/fs_agent_gpfs.C
> Wed Dec 28 16:17:07.508 2016: [E] *** Traceback:
> Wed Dec 28 16:17:07.509 2016: [E] 2:0x7FF1604F39B5
avage "well there's yer problem". Are you
> perhaps running a version of GPFS 4.1 older than 4.1.1.9? Looks like
> there was an LROC related assert fixed in 4.1.1.9 but I can't find
> details on it.
>
>
>
> *From:*Matt Weil
> *Sent:* 12/28/16, 5:21 PM
> *To:
ssion list
Subject: Re: [gpfsug-discuss] LROC
yes
> Wed Dec 28 16:17:07.507 2016: [X] *** Assert exp(ssd->state !=
> ssdActive) in line 427 of file
> /project/sprelbmd1/build/rbmd11027d/src/avs/fs/mmfs/ts/flea/fs_agent_gpfs.C
> Wed Dec 28 16:17:07.508 2016: [E] *** Traceback:
> Wed
yes
> Wed Dec 28 16:17:07.507 2016: [X] *** Assert exp(ssd->state !=
> ssdActive) in line 427 of file
> /project/sprelbmd1/build/rbmd11027d/src/avs/fs/mmfs/ts/flea/fs_agent_gpfs.C
> Wed Dec 28 16:17:07.508 2016: [E] *** Traceback:
> Wed Dec 28 16:17:07.509 2016: [E] 2:0x7FF1604F39B5
>
t;
> From: gpfsug-discuss-boun...@spectrumscale.org
> <mailto:gpfsug-discuss-boun...@spectrumscale.org>
> <gpfsug-discuss-boun...@spectrumscale.org
> <mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of Sven Oehme
> <oeh.
From: gpfsug-discuss-boun...@spectrumscale.org
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Sven Oehme
<oeh...@gmail.com>
Sent: Wednesday, December 21, 2016 11:37:46 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] LROC
StatCache is not useful on Linux, that hasn't ch
Ooh, LROC sensors for Zimon… must look into that.
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Sven Oehme
Sent: 21 December 2016 09:23
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] LROC
LROC only needs a StatCache object, as it 'compacts' a full open File object
(maxFilesToCache) to a StatCache object when it moves the content to the
LROC device.
Therefore the only thing you really need to increase is maxStatCache on the
LROC node, but you still need maxFilesToCache objects, so leave
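A sketch of what that advice translates to on the command line; the node class name and the figures are placeholders, not values recommended in this thread:

# raise maxStatCache on the LROC-equipped nodes so compacted objects have
# somewhere to live, and leave maxFilesToCache at a sane value
/usr/lpp/mmfs/bin/mmchconfig maxStatCache=1000000 -N cesNodes
/usr/lpp/mmfs/bin/mmchconfig maxFilesToCache=128000 -N cesNodes
# these two normally need an mmfsd restart on the affected nodes to take effect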
as many as possible and both
have maxFilesToCache 128000
and maxStatCache 4
do these affect what sits on the LROC as well? Are those too small?
1 million seemed excessive.
On 12/20/16 11:03 AM, Sven Oehme wrote:
> how many files do you want to cache?
> and do you only want to cache
all depends on workload. We will publish a paper comparing mixed workloads
with and without LROC pretty soon.
Most numbers I have seen show anywhere between 30% and 1000% (very rare case)
improvements, so it's for sure worth a test.
Sven
On Fri, Oct 14, 2016 at 6:31 AM Sobey, Richard A