Re: nvd0 lockup while compiling ports

2017-10-04 Thread Warner Losh
On Wed, Oct 4, 2017 at 6:03 AM, Gerhard Schmidt  wrote:

> Hi,
>
> I got a new workstation last week with the main hard disk on an M.2 card.
>
> FreeBSD recognizes the card as nvd0
> nvd0:  NVMe namespace
> nvd0: 488386MB (1000215216 512 byte sectors)
>
> When compiling some ports (in this example VirtualBox-ose), I am
> experiencing lockups on the hard disk when many files are deleted.
>
> Here are the entries gstat reports:
>
> dT: 1.064s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>  25281769  0  00.0  0  00.0  100.0| nvd0
>  25279769  0  00.0  0  00.0  100.0| nvd0p4
>  25279769  0  00.0  0  00.0  100.0| ufs/USR
>
> Here is the right part of gstat -d -o:
>
> dT: 1.003s  w: 1.000s
> d/s   kBps   ms/d    o/s   ms/o   %busy Name
> 770  24641  31965  00.0  100.1| nvd0
> 770  24641  31965  00.0  100.1| nvd0p4
> 770  24641  31965  00.0  100.1| ufs/USR
>
> The numbers under L(q) go up to about 16, and as long as these
> operations are not finished, no other file operation is possible on
> this filesystem.
>
> The number of ops/s and d/s is approximately constant at 770.
>
> Is there a way to speed up delete operations or limit the queue length?
>

Try disabling TRIM.
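
For a UFS filesystem that is typically a tunefs(8) flag. A rough sketch,
using the partition name from your gstat output (tunefs wants the
filesystem unmounted or mounted read-only, so do this from single-user
mode or similar):

  # tunefs -t disable /dev/nvd0p4
  # tunefs -p /dev/nvd0p4    (check that the TRIM flag now shows "disabled")

That should stop UFS from issuing a TRIM (delete) request to the drive
for every unlinked block.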

Warner


Re: my build time impact of clang 5.0

2017-10-04 Thread Matt Smith

On Oct 04 13:03, krad wrote:

have you tried meta builds and pkgbase?


I was going to say this. Because my build times have increased so 
massively on my underpowered server, I've switched to doing incremental 
builds. Set WITH_META_MODE=yes in /etc/src-env.conf, add 
kld_list="filemon" to /etc/rc.conf, and load the module right away with 
kldload filemon.
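
In practice that boils down to something like this (just a sketch; 
sysrc(8) is used here for convenience, editing the files by hand works 
just as well):

  # echo 'WITH_META_MODE=yes' >> /etc/src-env.conf
  # sysrc kld_list+="filemon"
  # kldload filemon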


Doing this makes builds take a couple of minutes on average rather than 
12 hours.


As for pkgbase, I have been monitoring the mailing list, but I haven't 
actually tried it yet. The problems with /etc merging make me twitchy.


--
Matt


Re: my build time impact of clang 5.0

2017-10-04 Thread krad
have you tried meta builds and pkgbase?


On 3 October 2017 at 16:38, Dan Mack  wrote:

> Jakub Lach  writes:
>
> > On the other hand, I'm seeing tremendous increases in Unixbench scores
> > (about 40%) compared to 11-STABLE in April (same machine, clang 4 then,
> > clang 5 now).
> >
> > I have never seen anything like that, and I've been running Unixbench
> > on -STABLE since 2008.
>
> Agree; clang/llvm and friends have added a lot of value.  It's worth it
> I think.
>
> It is however getting harder to continue with a source-based update
> model, which I prefer even though most people just use package managers
> today.
>
> I still like to read the commits and understand what's changing, why,
> and select the version I am comfortable with given the nuances of my
> configuration(s).  I think that's why, knock on wood, I've been able to
> track mostly CURRENT and/or STABLE without any outages since about 1998
> on production systems :-)
>


nvd0 lockup while compiling ports

2017-10-04 Thread Gerhard Schmidt
Hi,

I got a new workstation last week with the main hard disk on an M.2 card.

FreeBSD recognizes the card as nvd0
nvd0:  NVMe namespace
nvd0: 488386MB (1000215216 512 byte sectors)

When compiling some ports (in this example VirtualBox-ose), I am
experiencing lockups on the hard disk when many files are deleted.

Here are the entries gstat reports:

dT: 1.064s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
 25281769  0  00.0  0  00.0  100.0| nvd0
 25279769  0  00.0  0  00.0  100.0| nvd0p4
 25279769  0  00.0  0  00.0  100.0| ufs/USR

Here is the right part of gstat -d -o:

dT: 1.003s  w: 1.000s
d/s   kBps   ms/d    o/s   ms/o   %busy Name
770  24641  31965  00.0  100.1| nvd0
770  24641  31965  00.0  100.1| nvd0p4
770  24641  31965  00.0  100.1| ufs/USR

The numbers under L(q) go up to about 16, and as long as these
operations are not finished, no other file operation is possible on
this filesystem.

The number of ops/s and d/s is approximately constant at 770.

Is there a way to speed up delete operations or limit the queue length?

Regards
   Estartu




iSCSI: LUN modification error: LUN XXX is not managed by the block backend and LUN device confusion

2017-10-04 Thread Eugene M. Zheganin

Hi,


I've run into one more problem while dealing with iSCSI targets in 
production (yes, I'm boring and stubborn). The environment is the same 
as in my previous questions (a production site, hundreds of VMs and 
hundreds of disks). I've encountered this issue before, but this time I 
decided to ask whether the cause could be something I'm doing wrong.


We have two types of disks in production: one is called 
"userdata[number]" and the other is called "games[something+number]". 
The iSCSI targets are named accordingly, and the userdata disks 
additionally have the scsiname[number] option. The number simply 
indicates the VM the disk should be attached to. But sometimes some 
weird confusion happens; let me show it using one LUN as an example 
(in reality I currently have 6 LUNs like this).


So from now on we will consider 310 as the VM tag, and two disks: the 
userdata310 disk and the games disk.


So imagine a piece of ctl.conf like this:

===Cut===

#
# worker310
#

target iqn.2016-04.net.playkey.iscsi:userdata-worker310 {
    initiator-portal 10.0.3.142/32
    portal-group playkey
    auth-type none

    lun 0 {
        option scsiname userdata310
        path /dev/zvol/data/userdata/worker310
    }
}

#
# worker310
#

target iqn.2016-04.net.playkey.iscsi:gamestop-worker310 {
    initiator-portal 10.0.3.142/32
    portal-group playkey
    auth-type none

    lun 0 {
        path /dev/zvol/data/reference-ver13_1233-worker310
    }
}

===Cut===

When the issue happens, I get the following lines in the log:

Oct  4 12:00:55 san1 ctld[777]: LUN modification error: LUN 547 is not 
managed by the block backend
Oct  4 12:00:55 san1 ctld[777]: failed to modify lun 
"iqn.2016-04.net.playkey.iscsi:userdata-worker310,lun,0", CTL lun 547



In the "ctladm devlist -v" output I see this for LUN 547:

547 block   10737418240  512 MYSERIAL 738 MYDEVID 738
  lun_type=0
  num_threads=14
  file=/dev/zvol/data/reference-ver13_1233-worker228
  ctld_name=iqn.2016-04.net.playkey.iscsi:gamestop-worker228,lun,0
  scsiname=userdata310


So notice that the userdata disk for VM 310 is backed by the device of 
a completely different VM (judging by the names). Weird! One might 
think that this is simply a misconfiguration and that the games disk 
for the worker228 VM has an erroneous scsiname option. But no, it 
doesn't:



#
# worker228
#

target iqn.2016-04.net.playkey.iscsi:gamestop-worker228 {
    initiator-portal  [...obfuscated...]/32
    portal-group playkey
    auth-type none

    lun 0 {
        path /dev/zvol/data/reference-ver13_1233-worker228
    }
}


The workaround is simply to comment out the troublesome LUNs/targets 
in ctl.conf, reload, uncomment them, and reload again.
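
In shell terms the workaround looks roughly like this (assuming the 
stock ctld rc script; the editing steps are manual edits of 
/etc/ctl.conf):

  (comment out the affected target blocks in /etc/ctl.conf)
  # service ctld reload
  (uncomment the same blocks again)
  # service ctld reload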


Am I doing something wrong?


Thanks.

Eugene.



Re: ctld: only 579 iSCSI targets can be created

2017-10-04 Thread Eugene M. Zheganin

Hi.

On 02.10.2017 15:03, Edward Napierala wrote:

Thanks for the packet trace.  What happens there is that the Windows
initiator logs in, requests Discovery ("SendTargets=All"), receives the
list of targets, as expected, and then... sends "SendTargets=All" again,
instead of logging off.  This results in ctld(8) dropping the session.
The initiator then starts the Discovery session again, but this time it
only logs in and then out, without actually requesting the target list.

Perhaps you could work around this by using "discovery-filter",
as documented in ctl.conf(5)?

Thanks a lot, that did it. It seems the Microsoft initiator has some 
limitation once the number of targets exceeds 512; in my case it kicks 
in somewhere near 573.


When discovery is portal-filtered, everything seems to work just fine.
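
For reference, this is roughly what the change looks like in ctl.conf 
(a sketch only; the listen address below is a placeholder for our 
portal address, and ctl.conf(5) describes the other discovery-filter 
values):

===Cut===

portal-group playkey {
    listen 10.0.3.1
    discovery-filter portal
}

===Cut===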


Eugene.
