Re: make kernel ignore broken SATA disk

2020-04-12 Thread Steven Hartland

On 12/04/2020 18:08, Stefan Bethke wrote:

On 12.04.2020 at 19:03, Slawa Olhovchenkov wrote:

Now I can't boot into single user mode anymore, ZFS just waits forever, and the 
kernel is printing an endless chain of SATA error messages.

I really need a way to remove the broken disk before ZFS tries to access it, or 
a way to stop ZFS from try to access the disk.



Might be a silly suggestion but unplug it?


Re: Slow zfs destroy

2019-11-28 Thread Steven Hartland

It may well depend on the extent of the deletes occurring.

Have you tried disabling TRIM to see if it eliminates the delay?
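
For example (a sketch; on this release vfs.zfs.trim.enabled may be a
loader-only tunable, so set it in /boot/loader.conf and reboot if the
runtime sysctl is read-only):

sysctl vfs.zfs.trim.enabled        # 1 = ZFS TRIM currently active
# /boot/loader.conf
vfs.zfs.trim.enabled=0             # disable TRIM temporarily to compare destroy times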

    Regards
    Steve

On 28/11/2019 09:59, Eugene Grosbein wrote:

28.11.2019 14:26, Steven Hartland wrote:


As you mentioned it’s on SSD, you could be suffering from poor TRIM performance 
from your devices; if you run gstat -pd you’ll be able to get an indication if this is the case.

Yes, this box does have problems with poor TRIM performance.
But isn't "zfs destroy" supposed to perform actual removal in background?
"feature@async_destroy" is "enabled" for this pool.




Re: Slow zfs destroy

2019-11-27 Thread Steven Hartland
As you mentioned it’s on SSD, you could be suffering from poor TRIM
performance from your devices; if you run gstat -pd you’ll be able to get an
indication if this is the case.
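
For example, something like the following will show the delete (TRIM) load
per device while the destroy is running; sustained high %busy alongside the
delete columns would point at TRIM:

gstat -p -d -I 1s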

On Thu, 28 Nov 2019 at 06:50, Eugene Grosbein  wrote:

> Hi!
>
> Is it normal that "zfs destroy" for one ZVOL with attribute "used" equal
> to 2112939808 bytes (~2GB)
> takes over two minutes waiting on "tx_sync_done_cv"? The pool is RAID1
> over five SSDs encrypted with GELI
> having ZIL and Cache on distinct unencrypted SSD.
>
> 11.3-STABLE/amd64 r354667. System has 360G RAM and vfs.zfs.arc_max=160g.


Re: ZFS...

2019-06-07 Thread Steven Hartland
Great to hear you got your data back even after all the terrible luck you
suffered!

  Regards
  Steve

On Fri, 7 Jun 2019 at 00:49, Michelle Sullivan  wrote:

> Michelle Sullivan wrote:
> >> On 02 May 2019, at 03:39, Steven Hartland 
> wrote:
> >>
> >>
> >>
> >>> On 01/05/2019 15:53, Michelle Sullivan wrote:
> >>> Paul Mather wrote:
> >>>>> On Apr 30, 2019, at 11:17 PM, Michelle Sullivan 
> wrote:
> >>>>>
> >>>>> Been there done that though with ext2 rather than UFS..  still got
> all my data back... even though it was a nightmare..
> >>>>
> >>>> Is that an implication that had all your data been on UFS (or ext2:)
> this time around you would have got it all back?  (I've got that impression
> through this thread from things you've written.) That sort of makes it
> sound like UFS is bulletproof to me.
> >>> It's definitely not (and far from it) bullet proof - however when the
> data on disk is not corrupt I have managed to recover it - even if it has
> been a nightmare - no structure - all files in lost+found etc... or even
> resorting to r-studio in the event of lost raid information etc..
> >> Yes but you seem to have done this with ZFS too, just not in this
> particularly bad case.
> >>
> > There is no r-studio for zfs or I would have turned to it as soon as
> this issue hit.
> >
> >
> >
> So as an update, this Company: http://www.klennet.com/ produce a ZFS
> recovery tool: https://www.klennet.com/zfs-recovery/default.aspx and
> following several code changes due to my case being an 'edge case' the
> entire volume (including the zvol - which I previously recovered as it
> wasn't suffering from the metadata corruption) and all 34 million files
> is being recovered intact with the entire directory structure.  Its only
> drawback is it's a windows only tool, so I built 'windows on a stick'
> and it's running from that.  The only thing I had to do was physically
> pull the 'spare' out as the spare already had data on it from being
> previously swapped in and it confused the hell out of the algorithm that
> detects the drive order.
>
> Regards,
>
> Michelle
>
> --
> Michelle Sullivan
> http://www.mhix.org/
>
>


Re: FreeBSD flood of 8 breakage announcements in 3 mins.

2019-05-15 Thread Steven Hartland
I disagree; having them batched causes us less work, not more. As others
have said, it is one update not many, which results in one outage of systems
that need patching, not many.

   Regards
   Steve

On Wed, 15 May 2019 at 16:48, Julian H. Stacey  wrote:

> Hi, Reference:
> > From: Alan Somers 
> > Date: Wed, 15 May 2019 08:32:26 -0600
>
> Alan Somers wrote:
> > On Wed, May 15, 2019 at 8:26 AM Julian H. Stacey 
> wrote:
> > >
> > > Hi core@,
> > > cc hackers@ & stable@
> > >
> > > PR headline : "FreeBSD flood of 8 breakage announcements in 3 mins."
> > >
> > >
> https://lists.freebsd.org/pipermail/freebsd-announce/2019-May/date.html
> > >
> > > Volunteers who contribute actual fixes are very much appreciated;
> > > But those styled as 'management' who delay announcements to batch
> floods
> > > damage us. As they've previously refused to stop, it's time to sack
> them.
> > >
> > > Just send each announcement out when ready, no delays to batch them.
> > > No sys admins can deal with 8 in 3 mins:
> > >   Especially on multiple systems & releases.  Recipients start
> > >   mitigating, then more flood in, & need review which are
> > >   most urgent to interrupt to;  While also avoiding sudden upgrades
> > >   to many servers & releases, to minimise disturbing server users,
> > >   bosses & customers.
> > >
> > > Cheers,
> > > Julian
> > > --
> > > Julian Stacey, Consultant Systems Engineer, BSD Linux Unix, Munich
> Aachen Kent
> > >  http://stolenvotes.uk  Brexit ref. stole votes from 700,000 Brits in
> EU.
> > >  Lies bought; Groups fined; 1.9 M young had no vote, 1.3 M old leavers
> died.
> >
> > I disagree, Julian.  I think SAs are easier to deal with when they're
> > batched.  True, I can't fix the first one in less than 3 minutes.  But
> > then I probably wouldn't even notice it that fast.  Batching them all
> > together means fewer updates and reboots.
>
> Batching also means some of these vulnerabilities could have been
> fixed earlier & less of a surge of demand on recipient admins time.
>
> An admin can find time to ameliorate 1 bug, not 8 suddenly together.
> Avoidance is called planning ahead. Giving warning of a workload.
> Like an admin plans ahead & announces an outage schedule for planned
> upgrade.
>
> Suddenly dumping 8 on admins causes overload on admin manpower.
> 8 reasons for users to approach admin in parallel & say
> "FreeBSD seems riddled, how long will all the sudden unplanned
>  outages take ?  Should we just dump it ?"
> Don't want negative PR & lack of management.
>
> Cheers,
> Julian
> --
> Julian Stacey, Consultant Systems Engineer, BSD Linux Unix, Munich Aachen
> Kent
>  http://stolenvotes.uk  Brexit ref. stole votes from 700,000 Brits in EU.
>  Lies bought; Groups fined; 1.9 M young had no vote, 1.3 M old leavers
> died.


Re: ZFS...

2019-05-01 Thread Steven Hartland



On 01/05/2019 15:53, Michelle Sullivan wrote:

Paul Mather wrote:
On Apr 30, 2019, at 11:17 PM, Michelle Sullivan  
wrote:


Been there done that though with ext2 rather than UFS..  still got 
all my data back... even though it was a nightmare..



Is that an implication that had all your data been on UFS (or ext2:) 
this time around you would have got it all back?  (I've got that 
impression through this thread from things you've written.) That sort 
of makes it sound like UFS is bulletproof to me.


It's definitely not (and far from it) bullet proof - however when the 
data on disk is not corrupt I have managed to recover it - even if it 
has been a nightmare - no structure - all files in lost+found etc... 
or even resorting to r-studio in the event of lost raid information etc..
Yes but you seem to have done this with ZFS too, just not in this 
particularly bad case.


If you imagine that the in memory update for the metadata was corrupted 
and then written out to disk, which is what you seem to have experienced 
with your ZFS pool, then you'd be in much the same position.


This case - from what my limited knowledge has managed to fathom - is a 
spacemap that has become corrupt due to a partial write during the hard 
power failure. This was the second hard outage during the resilver 
process following a drive platter failure (on a RAIDZ2 - so a single 
platter failure should be completely recoverable in all cases, except 
hba failure or other corruption, which does not appear to be the case). 
The spacemap fails checksum (no surprises there, being that it was part 
written), however it cannot be repaired (for whatever reason)...

I get that this is an interesting case... one cannot just assume 
anything about the corrupt spacemap... it could be complete and just 
the checksum is wrong, or it could be completely corrupt and ignorable. 
But from what I understand of ZFS (and please, watchers, chime in if 
I'm wrong) the spacemap is just the freespace map. If it is corrupt or 
missing one cannot just 'fix it', because there is a very good chance 
that the fix would corrupt something that is actually allocated, and 
therefore the best solution (to "fix it") would be to consider it 100% 
full and therefore 'dead space'.. but zfs doesn't do that - probably 
a good thing - the result being that a drive that is supposed to be 
good (and zdb reports some +36m objects there) becomes completely 
unreadable...

My thought (desire/want) on a 'walk' tool would be a last-resort tool 
that could walk the datasets and send them elsewhere (like zfs send), 
so that I could create a new pool elsewhere, send the data it knows 
about to that pool, and then blow away the original - if there are 
corruptions or data missing, that's my problem; it's a last resort. But 
in the case where the critical structures become corrupt it means a 
local recovery option is enabled: if the data is all there and the 
corruption is just a spacemap, one can transfer the entire drive/data 
to a new pool whilst the original host is rebuilt. This would 
*significantly* help most people with large pools that have to blow 
them away and re-create the pools because of errors/corruptions etc., 
and with the addition of 'rsync'-style checksumming of files it would 
be trivial to just 'fix' the data corrupted or missing from a mirror 
host rather than transferring the entire pool from (possibly) offsite.


From what I've read that's not a partial write issue, as in that case 
the pool would have just rolled back. It sounds more like the write was 
successful but the data in that write was trashed due to your power 
incident and that was replicated across ALL drives.


To be clear, this may or may not be what you're seeing, as you don't seem to 
have covered any of the details of the issues you're seeing, or, in detail, 
what steps you have tried to recover with.


I'm not saying this is the case but all may not be lost depending on the 
exact nature of the corruption.


For more information on space maps see:
https://www.delphix.com/blog/delphix-engineering/openzfs-code-walk-metaslabs-and-space-maps
https://sdimitro.github.io/post/zfs-lsm-flushing/

A similar behavior turned out to be a bug:
https://www.reddit.com/r/zfs/comments/97czae/zfs_zdb_space_map_errors_on_unmountable_zpool/

    Regards
    Steve


Re: Concern: ZFS Mirror issues (12.STABLE and firmware 19 .v. 20)

2019-04-20 Thread Steven Hartland
Thanks for the extra info; the next question would be: have you eliminated 
the possibility that corruption exists before the disk is removed?


Would be interesting to add a zpool scrub to confirm this isn't the case 
before the disk removal is attempted.
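
Something like the following (assuming the pool is called backup, as
elsewhere in this thread):

zpool scrub backup
zpool status backup    # wait until it reports "scrub repaired ... with 0 errors"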


    Regards
    Steve

On 20/04/2019 18:35, Karl Denninger wrote:


On 4/20/2019 10:50, Steven Hartland wrote:

Have you eliminated geli as a possible source?
No; I could conceivably do so by re-creating another backup volume set 
without geli-encrypting the drives, but I do not have an extra set of 
drives of the capacity required laying around to do that. I would have 
to do it with lower-capacity disks, which I can attempt if you think 
it would help.  I *do* have open slots in the drive backplane to set 
up a second "test" unit of this sort.  For reasons below it will take 
at least a couple of weeks to get good data on whether the problem 
exists without geli, however.


I've just set up an old server which has an LSI 2008 running an old FW 
(11.0), so I was going to have a go at reproducing this.


Apart from the disconnect steps below is there anything else needed 
e.g. read / write workload during disconnect?


Yes.  An attempt to recreate this on my sandbox machine using smaller 
disks (WD RE-320s) and a decent amount of read/write activity (tens to 
~100 gigabytes) on a root mirror of three disks with one taken offline 
did not succeed.  It *reliably* appears, however, on my backup volumes 
with every drive swap. The sandbox machine is physically identical 
other than the physical disks; both are Xeons with ECC RAM in them.


The only operational difference is that the backup volume sets have a 
*lot* of data written to them via zfs send|zfs recv over the 
intervening period where with "ordinary" activity from I/O (which was 
the case on my sandbox) the I/O pattern is materially different.  The 
root pool on the sandbox where I tried to reproduce it synthetically 
*is* using geli (in fact it boots native-encrypted.)


The "ordinary" resilver on a disk swap typically covers ~2-3Tb and is 
a ~6-8 hour process.


The usual process for the backup pool looks like this:

Have 2 of the 3 physical disks mounted; the third is in the bank vault.

Over the space of a week, the backup script is run daily.  It first 
imports the pool and then for each zfs filesystem it is backing up 
(which is not all of them; I have a few volatile ones that I don't 
care if I lose, such as object directories for builds and such, plus 
some that are R/O data sets that are backed up separately) it does:


If there is no "...@zfs-base": zfs snapshot -r ...@zfs-base; zfs send 
-R ...@zfs-base | zfs receive -Fuvd $BACKUP


else

zfs rename -r ...@zfs-base ...@zfs-old
zfs snapshot -r ...@zfs-base

zfs send -RI ...@zfs-old ...@zfs-base |zfs recv -Fudv $BACKUP

 if ok then zfs destroy -vr ...@zfs-old otherwise print a 
complaint and stop.
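
(A runnable sketch of that per-filesystem step; the dataset and pool names
below are hypothetical stand-ins for the elided ones above:)

#!/bin/sh
FS="zroot/data"     # hypothetical filesystem being backed up
BACKUP="backup"     # hypothetical receiving pool

if ! zfs list -t snapshot "${FS}@zfs-base" >/dev/null 2>&1; then
        zfs snapshot -r "${FS}@zfs-base"
        zfs send -R "${FS}@zfs-base" | zfs receive -Fuvd "${BACKUP}"
else
        zfs rename -r "${FS}@zfs-base" "${FS}@zfs-old"
        zfs snapshot -r "${FS}@zfs-base"
        zfs send -RI "${FS}@zfs-old" "${FS}@zfs-base" | zfs receive -Fudv "${BACKUP}" \
            && zfs destroy -vr "${FS}@zfs-old" \
            || echo "incremental send failed; keeping ${FS}@zfs-old" >&2
fi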


When all are complete it then does a "zpool export backup" to detach 
the pool in order to reduce the risk of "stupid root user" (me) accidents.


In short I send an incremental of the changes since the last backup, 
which in many cases includes a bunch of automatic snapshots that are 
taken on frequent basis out of the cron. Typically there are a week's 
worth of these that accumulate between swaps of the disk to the vault, 
and the offline'd disk remains that way for a week.  I also wait for 
the zfs destroy on each of the targets to drain before continuing, 
as not doing so back in the 9 and 10.x days was a good way to 
stimulate an instant panic on re-import the next day due to kernel 
stack page exhaustion if the previous operation destroyed hundreds of 
gigabytes of snapshots (which does routinely happen as part of the 
backed up data is Macrium images from PCs, so when a new month comes 
around the PC's backup routine removes a huge amount of old data from 
the filesystem.)


Trying to simulate the checksum errors in a few hours' time thus far 
has failed.  But every time I swap the disks on a weekly basis I get a 
handful of checksum errors on the scrub. If I export and re-import the 
backup mirror after that the counters are zeroed -- the checksum error 
count does *not* remain across an export/import cycle although the 
"scrub repaired" line remains.


For example after the scrub completed this morning I exported the pool 
(the script expects the pool exported before it begins) and ran the 
backup.  When it was complete:


root@NewFS:~/backup-zfs # zpool status backup
  pool: backup
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 188K in 0 days 09:40:18 with 0 errors on Sat Apr 20 08:45:09 2019

config:


Re: Concern: ZFS Mirror issues (12.STABLE and firmware 19 .v. 20)

2019-04-20 Thread Steven Hartland

Have you eliminated geli as a possible source?

I've just set up an old server which has an LSI 2008 running an old FW 
(11.0), so I was going to have a go at reproducing this.


Apart from the disconnect steps below is there anything else needed e.g. 
read / write workload during disconnect?


mps0:  port 0xe000-0xe0ff mem 
0xfaf3c000-0xfaf3,0xfaf4-0xfaf7 irq 26 at device 0.0 on pci3

mps0: Firmware: 11.00.00.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 
185c


    Regards
    Steve

On 20/04/2019 15:39, Karl Denninger wrote:

I can confirm that 20.00.07.00 does *not* stop this.
The previous write/scrub on this device was on 20.00.07.00.  It was
swapped back in from the vault yesterday, resilvered without incident,
but a scrub says

root@NewFS:/home/karl # zpool status backup
   pool: backup
  state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
     attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
     using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://illumos.org/msg/ZFS-8000-9P
   scan: scrub repaired 188K in 0 days 09:40:18 with 0 errors on Sat Apr
20 08:45:09 2019
config:

        NAME                      STATE     READ WRITE CKSUM
        backup                    DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            gpt/backup61.eli      ONLINE       0     0     0
            gpt/backup62-1.eli    ONLINE       0     0    47
            13282812295755460479  OFFLINE      0     0     0  was /dev/gpt/backup62-2.eli

errors: No known data errors

So this is firmware-invariant (at least between 19.00.00.00 and
20.00.07.00); the issue persists.

Again, in my instance these devices are never removed "unsolicited" so
there can't be (or at least shouldn't be able to) unflushed data in the
device or kernel cache.  The procedure is and remains:

zpool offline .
geli detach .
camcontrol standby ...

Wait a few seconds for the spindle to spin down.

Remove disk.

Then of course on the other side after insertion and the kernel has
reported "finding" the device:

geli attach ...
zpool online 

Wait...
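
(A concrete sketch using the names from the status output above; the
underlying disk da2 and the geli attach options are assumptions:)

zpool offline backup gpt/backup62-2.eli
geli detach gpt/backup62-2.eli
camcontrol standby da2            # spin the spindle down before pulling it
# ...swap the disk; once the kernel reports the new device:
geli attach gpt/backup62-2        # add -k/-j options as appropriate
zpool online backup gpt/backup62-2.eli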

If this is a boogered TXG that's held in the metadata for the
"offline"'d device (maybe "off by one"?) that's potentially bad in that
if there is an unknown failure in the other mirror component the
resilver will complete but data has been irrevocably destroyed.

Granted, this is a very low probability scenario (the area where the bad
checksums are has to be where the corruption hits, and it has to happen
between the resilver and access to that data.)  Those are long odds but
nonetheless a window of "you're hosed" does appear to exist.





Re: CARP stopped working after upgrade from 11 to 12

2019-01-18 Thread Steven Hartland

On 18/01/2019 10:34, Thomas Steen Rasmussen wrote:

On 1/16/19 8:16 PM, Thomas Steen Rasmussen wrote:


On 1/16/19 6:56 PM, Steven Hartland wrote:



PS: are you going to file a PR ?



Yes here https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235005



Hello all,

A quick follow up for the archives:

Steven Hartland smh@ found the issue and created 
https://reviews.freebsd.org/D18882 which resulted in this commit 
https://svnweb.freebsd.org/changeset/base/343130 from kp@ with a fix 
for the issue.


All is well in carp/pfsync land again. Thank you Steven, kp@ and Pete 
French for your help.


Best regards & have a nice weekend,

Thomas Steen Rasmussen


Glad to help, thanks for the bug report!

    Regards
    Steve


Re: CARP stopped working after upgrade from 11 to 12

2019-01-16 Thread Steven Hartland



On 16/01/2019 17:33, Pete French wrote:

I have confirmed that pfsync is the culprit. Read on for details.

Excellent work. I'm home now, so won't get a chance to put this into
practice until tomorrow unfortunately, but it's brilliant that you have
confirmed it.


I tried disabling pfsync and rebooting both nodes, they came up as
MASTER/SLAVE then.

This is very useful to know - I will probably try tomorrow running my
firewalls back up with pfsync disabled to see if it works for me too.


Then I tried enabling pfsync and starting it, and on the SLAVE node I
immediately got:

That kind of confirms it really, doesn't it?

So, is it possible to get r342051 backed out of STABLE for now? This
is a big 'gotcha' for anyone running a firewall pair with CARP after all.

-pete.

PS: are you going to file a PR ?
You could also try setting net.pfsync.pfsync_buckets="1" in 
/boot/loader.conf which, reading the code, should ensure all items are 
processed in a single bucket; so if it's the bucket split that has the issue 
then this will fix it. If the issue is more ingrained then it won't.
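
i.e. (diagnostic only, not a tuning recommendation):

# /boot/loader.conf
net.pfsync.pfsync_buckets="1"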


    Regards
    Steve


Re: CARP stopped working after upgrade from 11 to 12

2019-01-16 Thread Steven Hartland
I can't see how any of those would impact carp unless pf is now 
incorrectly blocking carp packets, which seems unlikely from that commit.


Questions (example commands below):

 * Are you running a firewall?
 * What does sysctl net.inet.carp report?
 * What exactly does ifconfig report about your carp on both hosts?
 * Have you tried enabling more detailed carp logging using sysctl
   net.inet.carp.log?
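
For example (the interface name here is just a placeholder for whichever
interface carries your carp vhids):

sysctl net.inet.carp               # overall carp settings
ifconfig em0                       # look at the carp: lines for state and advskew
sysctl net.inet.carp.log=2         # higher values log more state-change detail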

    Regards
    Steve


On 16/01/2019 14:31, Thomas Steen Rasmussen wrote:

On 1/16/19 3:14 PM, Pete French wrote:

I just upgraded my pair of firewalls from 11 to 12, and am now in the
situation where CARP no longer works between them to faiilover the
virtual addresse. Both machines come up thinking that they
are the master. If I manually set the advskew on the interfaces to
a high number on what should be passive then it briefly goes to backup
mode, but then goes back to master with the message:

BACKUP -> MASTER (preempting a slower master)

This is kind of a big problem!


Indeed. I am seeing the same thing. Which revision of 12 are you running?

I am currently (yesterday and today) bisecting revisions to find the 
commit which broke this, because it worked in 12-BETA2 but doesn't 
work on latest 12-STABLE.


I have narrowed it down to somewhere between 12-STABLE-342037 which 
works, and 12-STABLE-342055 which does not.


Only 4 commits touch 12-STABLE branch in that range:


r342038 | eugen | 2018-12-13 10:52:40 + (Thu, 13 Dec 2018) | 5 lines

MFC r340394: ipfw.8: Fix part of the SYNOPSIS documenting
LIST OF RULES AND PREPROCESSING that is still referred
as last section of the SYNOPSIS later but was erroneously situated
in the section IN-KERNEL NAT.


r342047 | markj | 2018-12-13 15:51:07 + (Thu, 13 Dec 2018) | 3 lines

MFC r341638:
Let kern.trap_enotcap be set as a tunable.


r342048 | markj | 2018-12-13 16:07:35 + (Thu, 13 Dec 2018) | 3 lines

MFC r340405:
Add accounting to per-domain UMA full bucket caches.


r342051 | kp | 2018-12-13 20:00:11 + (Thu, 13 Dec 2018) | 20 lines

pfsync: Performance improvement

pfsync code is called for every new state, state update and state
deletion in pf. While pf itself can operate on multiple states at the
same time (on different cores, assuming the states hash to a different
hashrow), pfsync only had a single lock.
This greatly reduced throughput on multicore systems.

Address this by splitting the pfsync queues into buckets, based on the
state id. This ensures that updates for a given connection always end up
in the same bucket, which allows pfsync to still collapse multiple
updates into one, while allowing multiple cores to proceed at the same
time.

The number of buckets is tunable, but defaults to 2 x number of cpus.
Benchmarking has shown improvement, depending on hardware and setup, 
from ~30%

to ~100%.

Sponsored by:   Orange Business Services



Of these I thought r342051 sounded most likely, so I am currently 
building r342050.


I will write again in a few hours when I have isolated the commit.

Best regards,

Thomas Steen Rasmussen




Re: ZFS and Megaraid SAS MFI controllers

2018-09-18 Thread Steven Hartland

What triggered the issue? Was it a reboot, and how many disks are involved?

On 18/09/2018 08:59, The Doctor via freebsd-stable wrote:

This is about the 3rd time within 3 weeks that I have had to
rebuild a server (backups are great, but long restores).

What is the 'formula' for the following:

zfs i/o on your screen:

You boot with a Live CD  / USB .

You import and yet you do not see boot or etc directories on the
boot device?

3 times is costing me time and money.





Re: zfs boot size

2018-08-16 Thread Steven Hartland

The recommended size for a boot partition has been 512K for a while.

We always put swap directly after it, so if a resize is needed it's easy 
without any resilvering.


If your pool is made up of partitions which are only 34 blocks smaller 
than your zfs partition, you're likely going to need to dump and restore 
the entire pool, as it won't accept vdevs smaller than the original.
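
A sketch of that layout on a hypothetical fresh disk (ada2 here, sizes
illustrative):

gpart create -s gpt ada2
gpart add -t freebsd-boot -s 512k ada2
gpart add -t freebsd-swap -s 8g ada2
gpart add -t freebsd-zfs ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2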


    Regards
    Steve

On 16/08/2018 23:07, Randy Bush wrote:

so the number of blocks one must reserve for zfs boot has gone from 34
to 40.  is one supposed to, one at a time, drop each disk out of the
pool, repartition, re-add, and resilver?  luckily, there are only 16
drives, and resilvering a drive only takes a couple of days.  so we
might be done with it this calendar year.  and what is the likelihood we
make it through this without some sort of disaster?

clue bat, please?

randy


Re: TRIM, iSCSI and %busy waves

2018-04-06 Thread Steven Hartland
That is very hw and use case dependent.

The reason we originally sponsored the project to add TRIM to ZFS was that
in our case without TRIM the performance got so bad that we had to secure
erase disks every couple of weeks as they simply became so slow they were
unusable.

Now admittedly that was a fair few years ago and hw has moved on since
then, but the point remains that it’s not totally true that just not
TRIMing is an option.

On Fri, 6 Apr 2018 at 09:10, Borja Marcos  wrote:

>
> > On 5 Apr 2018, at 17:00, Warner Losh  wrote:
> >
> > I'm working on trim shaping in -current right now. It's focused on NVMe,
> > but since I'm doing the bulk of it in cam_iosched.c, it will eventually
> be
> > available for ada and da. The notion is to measure how long the TRIMs
> take,
> > and only send them at 80% of that rate when there's other traffic in the
> > queue (so if trims are taking 100ms, send them no faster than 8/s). While
> > this will allow for better read/write traffic, it does slow the TRIMs
> down
> > which slows down whatever they may be blocking in the upper layers. Can't
> > speak to ZFS much, but for UFS that's freeing of blocks so things like
> new
> > block allocation may be delayed if we're almost out of disk (which we
> have
> > no signal for, so there's no way for the lower layers to prioritize trims
> > or not).
>
> Have you considered "hard" shaping including discarding TRIMs when needed?
> Remember that a TRIM is not a write, which is subject to a contract with
> the application,
> but a better-if-you-do-it operation.
>
> Otherwise, as you say, you might be blocking other operations in the upper
> layers.
> I am assuming here that with many devices doing TRIMs is better than not
> doing them.
> And in case of queue congestion doing *some* TRIMs should be better than
> doing
> no TRIMs at all.
>
> Yep, not the first time I propose something of the sort, but my queue of
> suggestions
> to eventually discard TRIMs doesn’t implement the same method ;)
>
>
> Borja.
>
>


Re: TRIM, iSCSI and %busy waves

2018-04-05 Thread Steven Hartland

You can indeed tune things; here are the relevant sysctls:
sysctl -a | grep trim |grep -v kstat
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1
vfs.zfs.vdev.trim_max_active: 64
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_on_init: 1
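
For example (illustrative values only, and whether each is runtime-writable
or loader-only depends on the release; test against your own workload):

sysctl vfs.zfs.vdev.trim_max_active=8    # fewer concurrent TRIMs per vdev (64 above)
sysctl vfs.zfs.trim.txg_delay=64         # hold TRIMs back further behind the freeing txg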

    Regards
    Steve

On 05/04/2018 15:08, Eugene M. Zheganin wrote:

Hi,

I have a production iSCSI system (on zfs of course) with 15 ssd disks 
and it's often suffering from TRIMs.


Well, I know what TRIM is for, and I know it's a good thing, but 
sometimes (actually often) I'm seeing my disks in gstat are 
overwhelmed by the TRIM waves, this looks like a "wave" of 20K 
100%busy delete operations starting on first pool disk, then reaching 
second, then third,... - at the time it reaches the 15th disk the 
first one if freed from TRIM operations, and in 20-40 seconds this 
wave begins again.


I'm also having a couple of iSCSI issues that I'm dealing through 
bounty with, so may be this is related somehow. Or may be not. Due to 
some issues in iSCSI stack my system sometimes reboots, and then these 
"waves" are stopped for some time.


So, my question is: can I fine-tune TRIM operations so they don't 
consume the whole disk at 100%? I see several sysctl OIDs, but they 
aren't well documented.


P.S. This is 11.x, disks are Toshibas, and they are attached via LSI HBA.


Thanks.

Eugene.



Re: FreeBSD 11.x fails to boot w/ SAS3 4Kn HDD and LSISAS3008 on SuperMicro X10DRH-iT

2018-03-13 Thread Steven Hartland
Have you tried setting dumpdev to AUTO in rc.conf to see if you can obtain
a panic dump? You could also try disabling reboot on panic using the
sysctl.
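
i.e. something like:

# /etc/rc.conf
dumpdev="AUTO"    # savecore(8) will then write the dump to /var/crash after the next panic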

On Mon, 12 Mar 2018 at 22:18, Rick Miller  wrote:

> Hi all,
>
> Thanks in advance to anyone that might be able to help. I subscribed to
> freebsd-geom@ so that the list did not need to "reply-all". Having trouble
> getting FreeBSD 11-STABLE @ r329011 to boot from SAS3 4Kn HDDs via LSISAS
> 3008 HBA on a SuperMicro X10DRH-iT motherboard after an apparent
> installation. All internal media (including all other disks attached to the
> HBA) were removed to eliminate other storage being the reason the system
> won't boot. This occurs specifically in CSM mode, but the preference is to
> boot via UEFI mode instead.
>
> Anyway...booting the machine via the memstick image demonstrates the LSISAS
> 3008 controller attaching via mpr(4) (whose manpage describes the
> controller being supported[1]):
>
> mpr0@pci0:3:0:0: class=0x010700 card=0x080815d9 chip=0x00971000 rev=0x02
> hdr=0x00
>   vendor   = 'LSI Logic / Symbios Logic'
>   device   = 'SAS3008 PCI-Express Fusion-MPT SAS-3'
>   class = mass storage
>   subclass = SAS
>
> The only inserted disk attaches as da0 as illustrated by dmesg:
>
> ses0: pass0,da0: Element descriptor: 'Slot00'
> da0 at mpr0 bus 0 scbus0 target 8 lun 0
> ses0: pass0,da0: SAS Device Slot Element: 1 Phys at Slot 0
> ses0:  phy 0: SAS device type 1 id 0
> ses0:  phy 0: protocols: initiator( None ) Target( SSP )
> ses0:  phy 0: parent 500304801e870bff addr 5000c500a012814d
> da0:   Fixed Direct Access SPC-4 SCSI device
> da0:  Serial Number $serial_number
> da0: 1200.000MB/s transfers
> da0: Command queueing enabled
> da0: 1907729MB (488378648 4096 byte sectors)
>
> The original goal was to boot via zfs root, but when that failed,
> subsequent installations used the "Auto (UFS) option" to partition the
> disk. For example, the first installation gpart'd the disk as:
>
> # gpart show da0
> =>         6  488378635  da0  GPT  (1.8T)
>            6        128    1  freebsd-boot  (512K)
>          134  487325568    2  freebsd-ufs  (1.8T)
>    487325702    1048576    3  freebsd-swap  (4.0G)
>    488374278       4363       - free -  (17M)
>
> The result was a reboot loop. When the system reached the point of reading
> the disk, it just rebooted and continued doing so. There was no loader or
> beastie menu. Thus, thinking that it could be the partition layout
> requirements of the 4Kn disks, it was gpart'd like the below[2][3]. This
> was done by exiting to the shell during the partition phase of bsdinstall
> and manually gpart'ing the disk according to the below, mounting da0p2 at
> /mnt and placing an fstab at /tmp/bsdinstall_etc/fstab that included mount
> entries for /dev/da0p2 at / and /dev/da0p3 as swap.
>
> # gpart show da0
> =>         6  488378635  da0  GPT  (1.8T)
>            6         34       - free -  (136K)
>           40        512    1  freebsd-boot  (2.0M)
>          552  419430400    2  freebsd-ufs  (1.6T)
>    419430952    1048576    3  freebsd-swap  (4.0G)
>    420479528   67899113       - free -  (259G)
>
> When configured as such, the system rebooted at the completion of the
> install and appeared to roll through the boot order, which specifies the
> HDD first, then CD/DVD, then network. It did attempt to boot via network,
> but is irrelevant here.
>
> All the hardware is alleged to be supported by FreeBSD as best I can tell
> and OS installation apparently works. I'm at a loss as to why the OS won't
> boot. Does someone have feedback or input that may expose why it doesn't
> boot?
>
> FWIW, a RHEL7 install was also attempted, which also does not boot.
>
> [1]
>
> https://www.freebsd.org/cgi/man.cgi?query=mpr=0=4=FreeBSD+11.1-RELEASE=default=html
> [2] https://lists.freebsd.org/pipermail/freebsd-hardware/2013-September/007380.html
> [3] http://www.wonkity.com/~wblock/docs/html/disksetup.html
>
> --
> Take care
> Rick Miller


Re: disk errors: CAM status: Uncorrectable parity/CRC error

2017-09-24 Thread Steven Hartland
Try reducing the disk connection speed down to see if that helps.
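
For example, a sketch using the ahci(4) loader hints (channel 1 taken from
the ahcich1 in the errors below):

# /boot/loader.conf
hint.ahcich.1.sata_rev="2"    # cap the ada1 link at SATA II / 3.0Gbps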

On Sun, 24 Sep 2017 at 06:49, Graham Menhennitt 
wrote:

> G'day all,
>
> I'm setting up a machine running 11-Stable on a PC Engines APU2C board.
> It has a 16Gb SSD as its first disk (ada0), and a Seagate 2Tb SATA-3
> disk as its second (ada1). I'm getting lots of read errors on the second
> disk. They appear on the console as:
>
> (ada1:ahcich1:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 00 6b 02 40 00 00 00
> 01 00 00
> (ada1:ahcich1:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada1:ahcich1:0:0:0): Retrying command
>
> dmesg:
>
>  ACS-2 ATA SATA 3.x device
> ada1: Serial Number 
> ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
> ada1: Command Queueing enabled
> ada1: 1907729MB (3907029168 512 byte sectors)
> ada1: quirks=0x1<4K>
>
> I've replaced the disk and the cable without improvement. Unfortunately,
> I don't have a second CPU board to try.
>
> Is it possible that the board can't keep up with the data coming from
> the disk? If so, can I try slowing it down somehow?
>
> Any other suggestions, please?
>
> Thanks,
>
>  Graham
>


Re: The system works unstable

2017-09-14 Thread Steven Hartland
If you have remote console via IPMI it might be an idea to leave an 
active top session so you can see the last point at which the machine 
stopped. It may provide a pointer to a bad process, e.g. one eating all the 
machine's RAM.
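
For example, left running on the IPMI console (flags are just a suggestion):

top -S -s 5 -o res    # include kernel processes, refresh every 5s, sort by resident memory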


    Regards
    Steve

On 14/09/2017 11:30, wishmaster wrote:

Hi!

My 11.0-STABLE system had been working stably for the last three years, but 
became unstable a week ago. The system just freezes about once every 2 days (at 
night) and is not responsive via the network; nothing in the console.
Before the first freeze I had updated applications in the base system 
and the jails.
Options VIMAGE and Linux compat. are enabled.

The first thought was bad sectors on the HDD, but dd if=/dev/ada0 of=/dev/null 
bs=128k ruled this one out.

How can I debug this situation?

Thank,
Vit

Re: 11.1 coredumping in sendfile, as used by a uwsgi process

2017-09-12 Thread Steven Hartland

Could you post the decoded crash info from /var/crash/...

I would also create a bug report:
https://bugs.freebsd.org/bugzilla/enter_bug.cgi?product=Base%20System
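
For example (a sketch; crashinfo normally runs at boot when dumps are
enabled, but it can be re-run by hand, and kgdb can be used if installed):

crashinfo -d /var/crash               # writes a readable core.txt.N next to the vmcore
kgdb /boot/kernel/kernel /var/crash/vmcore.4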

    Regards
    Steve
On 12/09/2017 14:40, Mark Martinec wrote:

A couple of days ago I have upgraded an Intel box from FreeBSD 10.3 to
11.1-RELEASE-p1, and reinstalled all the packages, built on the same 
OS version.

This host is running nginx web server with an uwsgi as a backend.
The file system is ZFS (recent as of 10.3, zpool not yet upgraded
to new 11.1 features).

Ever since the upgrade, this host is crashing/rebooting two or three 
times
per day. The reported crash location is always the same: it is in a 
sendfile

function (same addresses each time), the running process is always uwsgi:


Sep 12 15:03:12 xxx syslogd: kernel boot file is /boot/kernel/kernel
Sep 12 15:03:12 xxx kernel: [22677]
Sep 12 15:03:12 xxx kernel: [22677]
Sep 12 15:03:12 xxx kernel: [22677] Fatal trap 12: page fault while in 
kernel mode

Sep 12 15:03:12 xxx kernel: [22677] cpuid = 7; apic id = 07
Sep 12 15:03:12 xxx kernel: [22677] fault virtual address = 0xe8
Sep 12 15:03:12 xxx kernel: [22677] fault code    = 
supervisor write data, page not present
Sep 12 15:03:12 xxx kernel: [22677] instruction pointer   = 
0x20:0x80afefb2
Sep 12 15:03:12 xxx kernel: [22677] stack pointer = 
0x28:0xfe02397da5a0
Sep 12 15:03:12 xxx kernel: [22677] frame pointer = 
0x28:0xfe02397da5e0
Sep 12 15:03:12 xxx kernel: [22677] code segment  = base 
0x0, limit 0xf, type 0x1b
Sep 12 15:03:12 xxx kernel: [22677]   = DPL 0, pres 1, 
long 1, def32 0, gran 1
Sep 12 15:03:12 xxx kernel: [22677] processor eflags  = interrupt 
enabled, resume, IOPL = 0
Sep 12 15:03:12 xxx kernel: [22677] current process   = 34504 
(uwsgi)

Sep 12 15:03:12 xxx kernel: [22677] trap number   = 12
Sep 12 15:03:12 xxx kernel: [22677] panic: page fault
Sep 12 15:03:12 xxx kernel: [22677] cpuid = 7
Sep 12 15:03:12 xxx kernel: [22677] KDB: stack backtrace:
Sep 12 15:03:12 xxx kernel: [22677] #0 0x80aada97 at 
kdb_backtrace+0x67

Sep 12 15:03:12 xxx kernel: [22677] #1 0x80a6bb76 at vpanic+0x186
Sep 12 15:03:12 xxx kernel: [22677] #2 0x80a6b9e3 at panic+0x43
Sep 12 15:03:12 xxx kernel: [22677] #3 0x80edf832 at 
trap_fatal+0x322
Sep 12 15:03:12 xxx kernel: [22677] #4 0x80edf889 at 
trap_pfault+0x49

Sep 12 15:03:12 xxx kernel: [22677] #5 0x80edf0c6 at trap+0x286
Sep 12 15:03:12 xxx kernel: [22677] #6 0x80ec3641 at calltrap+0x8
Sep 12 15:03:12 xxx kernel: [22677] #7 0x80a6a2af at 
sendfile_iodone+0xbf
Sep 12 15:03:12 xxx kernel: [22677] #8 0x80a69eae at 
vn_sendfile+0x124e
Sep 12 15:03:12 xxx kernel: [22677] #9 0x80a6a4dd at 
sendfile+0x13d
Sep 12 15:03:12 xxx kernel: [22677] #10 0x80ee0394 at 
amd64_syscall+0x6c4
Sep 12 15:03:12 xxx kernel: [22677] #11 0x80ec392b at 
Xfast_syscall+0xfb

Sep 12 15:03:12 xxx kernel: [22677] Uptime: 6h17m57s
Sep 12 15:03:12 xxx kernel: [22677] Dumping 983 out of 8129 
MB:..2%..12%..22%..31%..41%..51%..61%..72%..82%..92%Copyright (c) 
1992-2017 The FreeBSD Project.
Sep 12 15:03:12 xxx kernel: Copyright (c) 1979, 1980, 1983, 1986, 
1988, 1989, 1991, 1992, 1993, 1994
Sep 12 15:03:12 xxx kernel: The Regents of the University of 
California. All rights reserved.
Sep 12 15:03:12 xxx kernel: FreeBSD is a registered trademark of The 
FreeBSD Foundation.
Sep 12 15:03:12 xxx kernel: FreeBSD 11.1-RELEASE-p1 #0: Wed Aug  9 
11:55:48 UTC 2017

[...]
Sep 12 15:03:12 xxx savecore: reboot after panic: page fault
Sep 12 15:03:12 xxx savecore: writing core to /var/crash/vmcore.4


This host with the same services was very stable under 10.3, same ZFS 
pool.


We have several other hosts running 11.1 with no incidents, running 
various

services (but admittedly no other host has a comparably busy web server).
Interestingly the nginx has a sendfile feature enabled too, but this does
not cause a crash (on this or other hosts), only the sendfile as used
by uwsgi seems to be the problem.

For the time being I have disabled the use of sendfile in uwsgi; we'll see 
if this avoids the trouble.

Suggestions?

  Mark

Re: The 11.1-RC3 can only boot and attach disks in "Safe mode", otherwise gets stuck attaching

2017-07-24 Thread Steven Hartland
Based on your boot info you're using mps, so this could be related to the 
mps fix committed to stable/11 today by ken@:

https://svnweb.freebsd.org/changeset/base/321415

re@ cc'ed as this could cause hangs for others too on 11.1-RELEASE if 
this is the case.


Regards
Steve

On 24/07/2017 15:55, Mark Martinec wrote:

Thanks! Tried it, and the message (or a backtrace) does not show
during a boot of a generic (patched) kernel, at least not in
the last 40-lines screen before the hang occurs.
(It also does not show during a "Safe mode" successful boot.)


Btw (may or may not be relevant): after the above experiment
I have rebooted the machine in "Safe mode" (generic kernel,
EARLY_AP_STARTUP enabled by default) - and spent some time
doing non-intensive interactive work on this host (web browsing,
editor, shell, all under KDE) - and after about an hour the
machine froze: clock display not updating, keyboard unresponsive,
console virtual terminals inaccessible) - so had to reboot.
According to fans speed the machine was idle.
The /var/log/messages does not show anything of interest
before the freeze. All disks are under ZFS.

Can EARLY_AP_STARTUP have an effect also _after_ booting?
This host never hung during normal work when EARLY_AP_STARTUP
was disabled (or with 11.0 and earlier).

  Mark


Re: Kernel panic

2017-06-21 Thread Steven Hartland
Given how old 9.1 is, even if you did investigate, it's unlikely it would 
get fixed.


I'd recommend updating to 11.0-RELEASE and see if the panic still happens.
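
A sketch of the binary upgrade path (take a backup first; a jump that large
may need an intermediate supported release in between):

freebsd-update -r 11.0-RELEASE upgrade
freebsd-update install
shutdown -r now
# after the reboot, re-run "freebsd-update install" until it reports it is finished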

On 21/06/2017 17:35, Efraín Déctor wrote:

Hello.

Today one of my servers crashed with a kernel panic. I got this message:

panic: cancel_mkdir_dotdot: Lost inodedep

I was just moving some files using WinSCP and then the server crashed.

How can I trace the root of the problem? I checked and the RAID seems 
to be OK:


mfi0 Volumes:
  Id     Size     Level   Stripe  State    Cache     Name
 mfid0 (  278G)   RAID-1     64k  OPTIMAL  Disabled
 mfid1 ( 1905G)   RAID-1     64k  OPTIMAL  Disabled

Thanks in advance.

uname -a
FreeBSD server 9.1-RELEASE-p22 FreeBSD 9.1-RELEASE-p22 #0: Mon Nov  3 
18:22:10 UTC 2014 
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64



Re: Formatting a 4k disk on FreeBSD with ext2

2017-05-08 Thread Steven Hartland
That looks like it's trying to do an erase of the sectors, which is 
likely failing due to the device being a HW RAID. Have you tried with 
nodiscard set?
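
i.e. something like:

./mkfs.ext3 -E nodiscard /dev/da0p7    # skip the discard/erase pass the RAID device rejects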


On 08/05/2017 16:42, HSR Hackspace wrote:

Hi folks;

I'm trying to format a 300 GB partition on an x86_64 box running
BSD 10.1 with a HW RAID configuration. All my attempts so far have
failed. Below are the logs for same.

Logs:

1. pod1208-wsa07:rtestuser 106] ./mkfs.ext3  /dev/da0p7
mke2fs 1.43.4 (31-Jan-2017)
Warning: could not erase sector 2: Invalid argument->
Creating filesystem with 78643200 4k blocks and 19660800 inodes
Filesystem UUID: c31fab56-f690-4313-a09c-9a585224caea
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 2048, 23887872, 71663616

Allocating group tables: done
Warning: could not read block 0: Invalid argument --->
Warning: could not erase sector 0: Invalid argument
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information:0/2400
Warning, had trouble writing out superblocks.
pod1208-wsa07:rtestuser 107]

2.
pod1208-wsa07:rtestuser 32] ./fsck.ext2 -v  -b 4096 /dev/da0p7
e2fsck 1.43.4 (31-Jan-2017)
./fsck.ext2: Invalid argument while trying to open /dev/da0p7

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
 e2fsck -b 8193 
  or
 e2fsck -b 32768 

pod1208-wsa07:rtestuser 33]

+++

3.
pod1208-wsa07:rtestuser 43] ./dumpe2fs /dev/da0p7
dumpe2fs 1.43.4 (31-Jan-2017)
./dumpe2fs: Invalid argument while trying to open /dev/da0p7
Couldn't find valid filesystem superblock.
pod1208-wsa07:rtestuser 44]
+++

4. +++

pod1208-wsa07:rtestuser 29] ./mkfs.ext2 -b 4096 /dev/da0p7
mke2fs 1.43.4 (31-Jan-2017)
Warning: could not erase sector 2: Invalid argument
Creating filesystem with 78643200 4k blocks and 19660800 inodes
Filesystem UUID: 015a85a4-7db9-4767-869c-7bab11c9074e
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624, 11239424, 2048, 23887872, 71663616

Allocating group tables: done
Warning: could not read block 0: Invalid argument
Warning: could not erase sector 0: Invalid argument
Writing inode tables: done
Writing superblocks and filesystem accounting information:0/2400
Warning, had trouble writing out superblocks.
pod1208-wsa07:rtestuser 30]


5.
pod1208-wsa07:rtestuser 31] camcontrol identify da0
camcontrol: ATA ATAPI_IDENTIFY via pass_16 failed
pod1208-wsa07:rtestuser 32]

6.

pod1208-wsa07:rtestuser 24] diskinfo -c da0
da0
 4096# sectorsize
 1197995982848   # mediasize in bytes (1.1T)
 292479488   # mediasize in sectors
 0   # stripesize
 0   # stripeoffset
 18206   # Cylinders according to firmware.
 255 # Heads according to firmware.
 63  # Sectors according to firmware.
 00ff273941fd086a1ec0fbb015e7a68d# Disk ident.

I/O command overhead:
 time to read 10MB block  0.004811 sec   =0.000 msec/sector
 time to read 20480 sectors   0.491882 sec   =0.024 msec/sector
 calculated command overhead =0.024 msec/sector

pod1208-wsa07:rtestuser 25] diskinfo -c da0p7
da0p7
 4096# sectorsize
 322122547200# mediasize in bytes (300G)
 78643200# mediasize in sectors
 0   # stripesize
 1493762048  # stripeoffset
 4895# Cylinders according to firmware.
 255 # Heads according to firmware.
 63  # Sectors according to firmware.
 00ff273941fd086a1ec0fbb015e7a68d# Disk ident.

I/O command overhead:
 time to read 10MB block  0.004860 sec   =0.000 msec/sector
 time to read 20480 sectors   0.495921 sec   =0.024 msec/sector
 calculated command overhead =0.024 msec/sector

7.
pod1208-wsa07:rtestuser 21] gpart show -l
=>       6  292479477  da0  GPT  (1.1T)
         6         10       - free -  (40K)
        16        128    1  (null)  (512K)
       144     262144    2  efi  (1.0G)
    262288    1048576    3  rootfs  (4.0G)
   1310864    2097152    4  swap  (8.0G)
   3408016    1048576    5  nextroot  (4.0G)
   4456592     102400    6  var  (400M)
   4558992   78643200    7  raw  

Re: 10.3-stable random server reboots - finding the reason

2017-04-18 Thread Steven Hartland
It's not an external vulnerability in the DRAC, is it? That seems to be
more and more common these days.
On Tue, 18 Apr 2017 at 15:10, tech-lists  wrote:

> On 18/04/2017 13:34, Kurt Jaeger wrote:
>
> > 1)
> > echo '-Dh -S115200' > /boot.config
> > 2)
> > vi /boot/loader.conf
> > 
> > console="comconsole"
> > comconsole_speed=115200
> > 
> >
> > Then: connect it to some kermit session from a different box,
> > and write a session log. This might capture some last gasp.
> >
>
> A bit more detail:
>
> Problem seems to be it's DRAC is broken. The server is a Dell R710 in a
> datacentre so I've not got physical access. I'd normally admin the
> hardware via the DRAC but it's non-functional now.
>
> The system won't boot without someone there plugging in a keyboard and
> pressing F2 to continue... arrrgh! "idrac failed press F2". It does
> boot, though. The DRAC normally shows fan speed and voltage amongst
> other things. Right now I'm looking for an in-band method of doing that
> on FreeBSD as obv the out-of-band method has failed. cpu temps look
> fine, cool.
>
> AIUI the DRAC is a chip on the motherboard and not a module that can be
> whipped out and replaced. For all I know, the DRAC might be causing the
> reboot if it's only intermittently dead. I don't know this though. It
> might be CPU or RAM, both of these can be swapped out, just need to
> capture the info...
>
> arrgh!
> --
> J.


Re: Is it known problem, that zfs.ko could not be built with system compiler (clang 3.9.1) without optimization?

2017-02-22 Thread Steven Hartland
I seem to remember this happening when I tried it too; likely it blows 
the stack. What's your panic?


When doing similar tracing before I've flagged the relevant methods with 
__noinline.


On 22/02/2017 20:47, Lev Serebryakov wrote:

Hello Freebsd-stable,

Now if you build zfs.ko with -O0 it panics on boot.

If you use the default optimization level, a lot of fbt DTrace probes are
   missing.





Re: Building Kernel and World with -j

2017-01-23 Thread Steven Hartland

On 23/01/2017 07:24, Sergei Akhmatdinov wrote:

On Sun, 22 Jan 2017 22:57:46 -0800
Walter Parker  wrote:

For decades there has always been a warning not to do parallel builds of
the kernel or the world (Linux kernel builds also suggest not to do this).

Every once in a while, I see people post about 5 minutes. The only way I
can see this happening is by doing a parallel build (-j 16 on a Xeon
Monster box).

Are parallel builds safe? If not, what are actual risk factors and can they
be mitigated?

Not only do I use -j, I also use ccache.

Another option is to use WITH_META_MODE=YES, that's where most of the 5-minute
reports come from, I imagine. I haven't used it myself.

My kernel takes 10 minutes with world taking about two hours. I generally just
leave them building overnight.

The risks of parallel builds are mostly in the past, concurrency was still just
coming out and there were chances that something would get compiled before its
dependency, breaking your compile and wasting all of those hours.

Cheers,

We always use -j for both kernel and world for years.

While there have been a few niggles if the clock is out and it's a rebuild, 
they have been few and far between.


Current cut down kernel build time is 1m and world build time is 22m 
here for FreeBSD 11.0-RELEASE on a dual E5-2640.
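
i.e. the kind of build being discussed here is simply:

cd /usr/src
make -j16 buildworld buildkernel    # pick -j to roughly match your core count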


Regards
Steve


Re: zpool asize problem on 11.0

2017-01-12 Thread Steven Hartland

On 12/01/2017 22:57, Stefan Bethke wrote:

On 12.01.2017 at 23:29, Stefan Bethke wrote:

I’ve just created two pools on a freshly partitioned disk, using 11.0 amd64, 
and the shift appears to be 9:

# zpool status -v host
  pool: host
state: ONLINE
status: One or more devices are configured to use a non-native block size.
Expect reduced performance.
action: Replace affected devices with devices that support the
configured block size, or migrate data to a properly configured
pool.
  scan: none requested
config:

NAME STATE READ WRITE CKSUM
host ONLINE   0 0 0
  gpt/host0  ONLINE   0 0 0  block size: 512B configured, 
4096B native

errors: No known data errors

# zdb host | grep ashift
ashift: 9
ashift: 9

But:
# sysctl vfs.zfs.min_auto_ashift
vfs.zfs.min_auto_ashift: 12

Of course, I’ve noticed this only after restoring all the backups, and getting 
ready to put the box back into production.

Is this expected behaviour?  I guess there’s no simple fix, and I have to start 
over from scratch?

I had falsely assumed that vfs.zfs.min_auto_ashift would be 12 in all 
circumstances.  It appears when running FreeBSD 11.0p2 in VirtualBox, it can be 
9.  And my target disk was attached to the host and mapped into the VM as a 
„native disk image“, but the 4k native sector size apparently got lost in that 
abstraction.

The output above is with the disk installed in the target system with a native 
AHCI connection, and the system booted from that disk.

I’ve certainly learned to double check the ashift property on creating pools.

The default value for vfs.zfs.min_auto_ashift is 9, so unless you 
specifically set it to 12 you will get the behaviour you described.
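
For example, a minimal sketch using the names from your output; the
tunable only affects vdevs added after it is set, so run it before
creating the pool:

# sysctl vfs.zfs.min_auto_ashift=12
# echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
# zpool create host gpt/host0
# zdb host | grep ashift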


Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: Can't boot on ZFS -- /boot/zfsloader not found

2017-01-12 Thread Steven Hartland

On 12/01/2017 21:12, Jeremie Le Hen wrote:

Hey Steven,

(Please cc: me on reply)

On Thu, Jan 12, 2017 at 1:32 AM, Steven Hartland

The reason I'd recommend 512k for boot is to provide room for expansion
moving forward, as repartitioning to upgrade is a scary / hard thing to do.
Remember it wasn't long ago when it was well under 64k and that's what was
recommended; it's not like with disk sizes these days you'll miss the extra
384k ;-)

Yeah, that's wise you're right.


Boot to a live cd, I'd recommend mfsbsd, and make sure the boot loader was
written to ALL boot disks correctly e.g.
if you have a mirrored pool with ada0 and ada1:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

If this doesn't help the output from gpart show, uname -a and zpool status
would also be helpful.

This is all assuming standard BIOS mode and not UEFI which is done
differently.

I just use the installation media on an USB key and then drop to the
shell.  This is a full FreeBSD running, so that's fine.

% # gpart show ada0
% =>   40  312581728  ada0  GPT  (149G)
% 40   1024 1  freebsd-boot  (512K)
%   10648387840 2  freebsd-swap  (4.0G)
%8388904  304192864 3  freebsd-zfs  (145G)
%
% # uname -a
% FreeBSD  11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep
29 01:43:23 UTC 2016 % %
r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
%
% # zpool status
%  pool: zroot
% state: ONLINE
%  scan: none requested
% config:
%
%NAME  STATE READ
WRITE CKSUM
%zroot ONLINE   0
0 0
%  gptid/1c387d3b-d892-11e6-944b-f44d30620eeb  ONLINE   0
0 0
%
% errors: No known data errors

Here are the steps to write the bootloader:

% # gpart bootcode -b /boot/pmbr -p  /boot/gptzfsboot -i 1 ada0
% partcode written to ada0p1
% bootcode written to ada0
% # zpool get bootfs zroot
% NAME   PROPERTY  VALUE   SOURCE
% zroot  bootfszroot   local

Two things spring to mind

Idea 1:
Is your root fs actually your direct pool, or is it actually /root off
your pool?

If so you want to run:
zpool set bootfs=zroot/root zroot

Idea 2:
You mentioned in your original post that you used zfs send / recv to
restore the pool, so I wonder if your cache file is out of date.


Try the following:
zpool export zroot
zpool import -R /mnt -o cachefile=/boot/zfs/zpool.cache zroot
cp /boot/zfs/zpool.cache /mnt/boot/zfs/zpool.cache
zpool set bootfs=zroot/root zroot

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Can't boot on ZFS -- /boot/zfsloader not found

2017-01-11 Thread Steven Hartland

On 11/01/2017 22:58, Jeremie Le Hen wrote:

(Sorry I had to copy-paste this email from the archives to a new
thread, because I'm not subscribed to -stable@. Would you mind cc:ing
me next time please?)


As you're not at the boot loader stage yet, for the keyboard, enabling legacy
USB keyboard / mouse support in the BIOS may help.

That worked, thanks for the tip.



If you see issues with keyboard after the kernel takes over setting
hint.ehci.0.disabled=1 may help.

If you're installing 11.x then the guide's boot partition size is out of
date and you should use 512K for your freebsd-boot partition.

Oh, really?  I'll update it once I've figured it all out.  That's
weird though, because /boot/gptzfsboot is less than 90 KB.

11.x
du -h /boot/gptzfsboot
 97K/boot/gptzfsboot

12.x
du -h /boot/gptzfsboot
113K/boot/gptzfsboot

The reason I'd recommend 512k for boot is to provide room for expansion
moving forward, as repartitioning to upgrade is a scary / hard thing to
do. Remember it wasn't long ago when it was well under 64k and that's
what was recommended; it's not like with disk sizes these days you'll miss
the extra 384k ;-)
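
For example, a rough sketch of laying out a fresh disk with a 512K boot
partition (the device name is just an example, adjust to suit):

gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k -a 4k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0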




For the GPT version (I've never used the MBR one) what actual error do you get?

When I just start up the NUC, I get:
"""
\Can't find /boot/zfsloader

FreeBSD/x86 boot
Default: zroot:/boot/kernel/kernel
boot:
\
Can't find /boot/kernel/kernel
"""

I thought it was the first stage boot loader, but it may be the second stage.
Boot to a live cd, I'd recommend mfsbsd, and make sure the boot loader 
was written to ALL boot disks correctly e.g.

if you have a mirrored pool with ada0 and ada1:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

If this doesn't help the output from gpart show, uname -a and zpool 
status would also be helpful.


This is all assuming standard BIOS mode and not UEFI which is done 
differently.




-jlh


On 11/01/2017 19:22, Jeremie Le Hen wrote:

Hi,

I'm in the process of transferring my home server to an Intel NUC. I've zfs
send/receive'd the whole pool to the new drive and first followed
[ZfsOnGpt] but it fails in the first stage boot loader. The USB keyboard
doesn't seem to be recognized so I can't really debug anything.

Then I tried both [ZfsInMbrSlice]
but it complains that it can't find /boot/zfsloader. Again USB keyboard
doesn't work.

Both times I've been very careful to install the bootcodes at the right
place.

1. Any idea why the USB keyboard doesn't work?
2. Any idea what can be wrong in the setups I tried? How can I debug this?

[ZfsOnGpt] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot
[ZfsInMbrSlice] https://wiki.freebsd.org/RootOnZFS/ZFSBootPartition

-jlh




___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Can't boot on ZFS -- /boot/zfsloader not found

2017-01-11 Thread Steven Hartland
As you're not at the boot loader stage yet, for the keyboard, enabling legacy
USB keyboard / mouse support in the BIOS may help.


If you see issues with keyboard after the kernel takes over setting 
hint.ehci.0.disabled=1 may help.


If you're installing 11.x then the guide's boot partition size is out of
date and you should use 512K for your freebsd-boot partition.


For the GPT version (I've never used the MBR one) what actual error do you get?

On 11/01/2017 19:22, Jeremie Le Hen wrote:

Hi,

I'm in the process of transferring my home server to an Intel NUC. I've zfs
send/receive'd the whole pool to the new drive and first followed
[ZfsOnGpt] but it fails in the first stage boot loader. The USB keyboard
doesn't seem to be recognized so I can't really debug anything.

Then I tried both [ZfsInMbrSlice]
but it complains that it can't find /boot/zfsloader. Again USB keyboard
doesn't work.

Both times I've been very careful to install the bootcodes at the right
place.

1. Any idea why the USB keyboard doesn't work?
2. Any idea what can be wrong in the setups I tried? How can I debug this?

[ZfsOnGpt] https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot
[ZfsInMbrSlice] https://wiki.freebsd.org/RootOnZFS/ZFSBootPartition

-jlh


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: [ZFS] files in a weird situtation

2016-12-18 Thread Steven Hartland
find also has -delete, which avoids the exec overhead; not much of an impact
here, but worth noting if you're removing lots of files.
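
For example, both of these remove the same file (inode number taken from
the quoted message below); the second lets find unlink it itself rather
than spawning rm:

find . -inum 10552574 -exec rm {} \;
find . -inum 10552574 -delete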

On 18 December 2016 at 00:38, Adam Vande More  wrote:

> On Sat, Dec 17, 2016 at 3:01 PM, David Marec 
> wrote:
>
> > [I had first posted onto the Forum about this issue]
> >
> > Two months ago,
> >
> > - next to a call to `delete-old-libs` or `install world`, I don't
> > really know -
> >
> > my box that is following FreeBSD-11 Stable ran into a weird situation.
> >
> > A set of files, especially `/lib/libjail.so.1` are in both states
> > `existing` and `not existing`:
> >
> > I mean:
> >
> > david:~>cp ~david/libjail.so.1 /lib
> > cp: /lib/libjail.so.1: File exists
> >
> > But:
> >
> > david:~>ls /lib/libjail.so.1
> > ls: /lib/libjail.so.1: No such file or directory
> > david:~>find /lib -name "libjail.so.1" -print
> > /lib/libjail.so.1
> > david:~>find /lib -name "libjail.so.1" -ls
> > find: /lib/libjail.so.1: No such file or directory
> >
> > With deeper investigation, the file is in fact mapped to an `inode`:
> >
> > root@dmarec:~ # ls -di /lib
> > 13 /lib
> > root@dmarec:~ # zdb - zroot/ 13 | grep libjail.so.1
> > libjail.so.1 = 10552574 (type: Regular File)
> >
> > Which fails with `zdb` on:
> >
> > root@dmarec:~ # zdb - zroot/ 10552574
> > Dataset zroot [ZPL], ID 21, cr_txg 1, 114G, 2570002 objects,
> rootbp
> > DVA[0]=<0:b97d6ea00:200> DVA[1]=<0:1c212b0400:200> [L0 DMU
> objset]
> > fletcher4 lz4 LE contiguous unique double size=800L/200P
> > birth=3852240L/3852240P fill=2570002
> > cksum=17b78fb7e4:7c87a526a07:16251edfaae60:2ce0c5734ccf2f
> >
> > Object  lvl   iblk   dblk  dsize  lsize   %full type
> > zdb: dmu_bonus_hold(10552574) failed, errno 2
> >
> >
> > `stat (2)` returns ENOENT when checking for the file:
> >  david:~>truss stat -L /lib/libjail.so.1
> > ...
> > stat("/lib/libjail.so.1",0x7fffe7e8) ERR#2 'No such file or directory'
> >
> > A pass with `zfs scrub` didn't help.
> >
> > Any clue is welcome. What does that `dmu_bonus_hold` stand for?
> >
>
> I am unable to understand what your intent is here.  If you wish to delete
> it, you can do:
>
> find . -inum 10552574 -exec rm {} \;
>
> --
> Adam
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: vfs.zfs.vdev.bio_delete_disable - crash while changing on the fly

2016-12-09 Thread Steven Hartland
Obviously this shouldn't happen, but we would need a stack trace to identify
the cause.


If you want to disable TRIM on ZFS you should really use:
vfs.zfs.trim.enabled=0
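
If I remember correctly that one is a loader tunable rather than a
runtime sysctl, so it goes in /boot/loader.conf followed by a reboot:

vfs.zfs.trim.enabled=0

You can confirm the current value with: sysctl vfs.zfs.trim.enabled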

On 09/12/2016 13:13, Eugene M. Zheganin wrote:

Hi.

Recently I've encountered the issue with "slow TRIM" and Sandisk SSDs,
so I was told to try to disable TRIM and see what happens (thanks a lot
by the way, that did it). But changing
vfs.zfs.vdev.bio_delete_disable on the fly can lead to a system crash
with a probability of about 50%. Is it just me or is this already known? If
it's known, why isn't this OID in a read-only list?

Thanks.

P.S. My box tried to dump a core, but after a reboot savecore got
nothing, so you just have to believe me. ;)

Eugene.


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zpool get all: Assertion failed / Abort (core dumped)

2016-12-01 Thread Steven Hartland

Are you sure your kernel and world are in sync?

On 01/12/2016 23:53, Miroslav Lachman wrote:
There is a minor problem with the "zpool get all" command if one of the two
pools is unavailable:


# zpool get all
Assertion failed: (nvlist_lookup_nvlist(config, "feature_stats", 
) == 0), file 
/usr/src/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_config.c, 
line 250.

Abort (core dumped)


# zpool list
NAMESIZE  ALLOC  FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH ALTROOT
ssdtank1  -  -  - - - -  -  UNAVAIL  -
tank0  1.80T  1.08G  1.80T-0%0%  1.00x  ONLINE   -


I understand that it cannot work properly, but I think it should not
core dump.



Pool details:

# zpool status
  pool: ssdtank1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

NAME  STATE READ WRITE CKSUM
ssdtank1  UNAVAIL  0 0 0
  mirror-0UNAVAIL  0 0 0
17880119912861428605  UNAVAIL  0 0 0  was 
/dev/gpt/ssd0tank1
17492271345202510424  UNAVAIL  0 0 0  was 
/dev/gpt/ssd1tank1


  pool: tank0
 state: ONLINE
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
tank0   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
gpt/disk0tank0  ONLINE   0 0 0
gpt/disk1tank0  ONLINE   0 0 0

errors: No known data errors


# uname -srmi
FreeBSD 10.3-RELEASE-p12 amd64 GENERIC


Miroslav Lachman


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Sandisk CloudSpeed Gen. II Eco Channel SSD vs ZFS = we're in hell

2016-11-29 Thread Steven Hartland

On 29/11/2016 12:30, Eugene M. Zheganin wrote:

Hi.

On 28.11.2016 23:07, Steven Hartland wrote:

Check your gstat with -dp so you also see deletes; it may be that your
drives have a very slow TRIM.


Indeed, I see a bunch of delete operations, and with TRIM disabled my
engineers report that performance increases greatly. Is this it?


Yep sounds like the TRIM on the disk is particularly poor.

You can download, compile and run 
http://blog.multiplay.co.uk/dropzone/freebsd/ioctl-delete.c


You can run with: ioctl-delete   

This will time the delete requests and give you performance info about
them.


It uses raw BIO_DELETE commands to the device, so it eliminates filesystem
latency (don't run it on a device that's in use).


You'll likely want to put some data on there first though, as firmware often
eliminates the delete if it would have no effect, e.g.

dd if=/dev/random of= bs=1m count=20480

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Sandisk CloudSpeed Gen. II Eco Channel SSD vs ZFS = we're in hell

2016-11-28 Thread Steven Hartland
Check your gstat with -dp so you also see deletes; it may be that your
drives have a very slow TRIM.


On 28/11/2016 17:54, Eugene M. Zheganin wrote:

Hi,

recently we bought a bunch of "Sandisk CloudSpeed Gen. II Eco Channel"
disks (the model name by itself should already have made me suspicious)
for use with a zfs SAN on FreeBSD. We plugged them into the LSI SAS3008
and now we are experiencing performance that I would call
"literally awful". I'm already using some zfs SANs on FreeBSD
with Intel/Samsung SSD drives, including the LSI SAS3008 controller,
but never saw anything like this (and yes, these are all SSDs):


dT: 1.004s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/w   %busy Name
   75472 78367  104.4 12   1530   94.8  113.4| da0
   75475 81482   79.2 12   1530   94.5  113.1| da1
   69490 96626  106.9 12   1530  124.9  149.4| da2
   75400 72382   51.5 10   1275   93.7   93.4| da3
0  0  0  00.0  0  00.00.0| da4
   75400 72382   55.0 10   1275   93.9   93.7| da5
2   3975   3975  240200.3  0  00.0   21.0| da6
0   3967   3967  241440.3  0  00.0   21.4| da7
1   3929   3929  242590.3  0  00.0   21.6| da8
0   3998   3998  239330.3  0  00.0   21.2| da9
0  0  0  00.0  0  00.00.0| da10
0   4037   4037  237100.2  0  00.0   21.3| da11
0  0  0  00.0  0  00.00.0| da12
0  0  0  00.0  0  00.00.0| da13
0  0  0  00.0  0  00.00.0| da14
0  0  0  00.0  0  00.00.0| da15
0  0  0  00.0  0  00.00.0| da16

Disks are organized in raidz1 pools (which is slower than raid1 or 10,
but, considering the performance of SSDs, we had no problems with Intel
or Samsung drives), and the controller is flashed with the latest
firmware available (an identical controller with Samsung drives
performs just fine). The disks are 512e/4K drives, and "diskinfo
-v"/"camcontrol identify" both report that they have a 4K
stripesize/physical sector. Pools are built on dedicated disks, so,
considering all of the above, I don't see any possibility of explaining
this with alignment errors. No errors are seen in dmesg. So, right at
this time, I'm out of ideas. Everything points to these Sandisk drives
being the root of the problem, but I don't see how this is possible:
according to various benchmarks (taken, however, with regular drives,
not "Channel" ones, and so far I haven't figured out what the
difference between "Channel" and non-"Channel" ones is, other than they
run different firmware branches) they should be okay (or seem so), just
ordinary SSDs.


If someone has an explanation for this awful performance, please let
me know.


Thanks.

Eugene.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Help! two machines ran out of swap and corrupted their zpools!

2016-11-22 Thread Steven Hartland
At one point lz4 wasn't supported on boot; I seem to remember that may
have been addressed, but I'm not 100% sure.


If it hasn't been, and your kernel is now compressed, that may explain it.

Have you tried booting from a live cd and checking the status of the pool?

On 22/11/2016 08:43, Pete French wrote:

zpool import -N -O readonly=on -f -R /mnt/somezpoool

If that doesn't help try:

zpool import -N -O readonly=on -f -R /mnt/somezpoool -Fn

I got someone to do this (I am still having trouble finding time,
as I am supposed to be off sick) and it causes an instant kernel panic
on trying to import the pool. Same as it does on boot.


Drop us a line of your configuration and used ZFS features. Like dedup,
snapshots, external l2 logs and caches.

10.3-STABLE r303832 from start of August. One simple pool, two
drives mirrored, GPT formatted drives. No dedup, no snapshots,
no external logs or caches. We have lz4 comression enabled on the
filesystem, but apart from that its an utterly bog-standard setup.

I am leaning towards faulty hardware now actually... seems most likely...

-pete.


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Help! two machines ran out of swap and corrupted their zpools!

2016-11-22 Thread Steven Hartland
When you say corrupt, what do you mean? Specifically, what's the output
from zpool status?


One thing that springs to mind if zpool status doesn't show any issues, and:
1. You have large disks
2. You have performed an update and not rebooted since.

You may be in the scenario where there's enough data on the pool such
that the kernel / loader are out of range of the BIOS.


It all depends on exactly what you're seeing.


On 21/11/2016 17:47, Pete French wrote:

So, I am off sick and my colleagues decided to load test our set of five
servers excessively. All ran out of swap. So far so irritating, but what has
happened is that two of them now will not boot, as it appears the ZFS pool
they are booting from has become corrupted.

One starts to boot, then crashes importing the root pool. The other doesn't
even get that far, with gptzfsboot saying it can't find the pool to boot from!

Now I can recover these, but I am a bit worried that it got like this at
all, as I haven't ever seen ZFS corrupt a pool like this. Anyone got any
insights, or suggestions as to how to stop it happening again?

We are swapping to a separate partition, not to the pool, by the way.

-pete.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Steven Hartland

On 21/10/2016 10:04, Eugene M. Zheganin wrote:

Hi.

On 21.10.2016 9:22, Steven Hartland wrote:

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.
As with Jonathan, what does gstat -pd and top -SHz show?


gstat (while ls'ing):

dT: 1.005s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
1 49 49   2948   13.5  0  00.0  0 0 0.0   
65.0| ada0
0 32 32   1798   11.1  0  00.0  0 0 0.0   
35.3| ada1



Moderately busy on rust, then.

gstat (while idling):

dT: 1.003s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
0  0  0  00.0  0  00.0  0 0 0.0
0.0| ada0
0  2  22550.8  0  00.0  0 0 0.0
0.1| ada1


top -SHz output doesn't really differ while ls'ing or idling:

last pid: 12351;  load averages:  0.46,  0.49, 
0.46   up 39+14:41:02 14:03:05

376 processes: 3 running, 354 sleeping, 19 waiting
CPU:  5.8% user,  0.0% nice, 16.3% system,  0.0% interrupt, 77.9% idle
Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
  600 root390 27564K  5072K nanslp  1 295.0H  24.56% monit
0 root   -170 0K  2608K -   1  75:24   0.00% 
kernel{zio_write_issue}
  767 freeswitch  200   139M 31668K uwait   0  48:29   0.00% 
freeswitch{freeswitch}
  683 asterisk200   806M   483M uwait   0  41:09   0.00% 
asterisk{asterisk}
0 root-80 0K  2608K -   0  37:43   0.00% 
kernel{metaslab_group_t}

[... others lines are just 0% ...]
This looks like you only have ~4GB RAM, which is pretty low for ZFS. I
suspect vfs.zfs.prefetch_disable will be 1, which will hurt
performance.
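
A quick sketch to check and, if you want to risk it with that little RAM,
override it (on some versions this is a boot-time tunable, so it may need
to go in /boot/loader.conf followed by a reboot instead):

sysctl vfs.zfs.prefetch_disable
sysctl vfs.zfs.prefetch_disable=0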


Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.

As with Jonathan, what does gstat -pd and top -SHz show?
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

In your case your vdev (ada0) is saturated with writes from postgres.

You should consider more / faster disks.

You might also want to consider enabling lz4 compression on the PG
volume, as it works well in IO-bound situations.
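
For example, a minimal sketch using the dataset name from your earlier
output; note compression only applies to newly written blocks:

zfs set compression=lz4 irontree/postgresql
zfs get compression,compressratio irontree/postgresql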



On 21/10/2016 01:54, Jonathan Chen wrote:

On 21 October 2016 at 12:56, Steven Hartland <kill...@multiplay.co.uk> wrote:
[...]

When you see the stalling what does gstat -pd and top -SHz show?

On my dev box:

1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/src/sys/GENERIC
amd64
1:49pm# gstat -pd
dT: 1.004s  w: 1.000s
  L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s   kBps
ms/d   %busy Name
 0  0  0  00.0  0  00.0  0  0
  0.00.0| cd0
18618  1128   41.4606  52854   17.2  0  0
  0.0  100.5| ada0
^C
1:49pm# top -SHz
last pid: 83284;  load averages:  0.89,  0.68,  0.46
  up
4+03:11:32  13:49:05
565 processes: 9 running, 517 sleeping, 17 zombie, 22 waiting
CPU:  3.7% user,  0.0% nice,  1.9% system,  0.0% interrupt, 94.3% idle
Mem: 543M Active, 2153M Inact, 11G Wired, 10M Cache, 2132M Free
ARC: 7249M Total, 1325M MFU, 4534M MRU, 906M Anon, 223M Header, 261M Other
Swap: 32G Total, 201M Used, 32G Free

   PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
83149 postgres380  2197M   528M zio->i  5   1:13  23.19% postgres
83148 jonc220 36028K 13476K select  2   0:11   3.86% pg_restore
   852 postgres200  2181M  2051M select  5   0:27   0.68% postgres
 0 root   -15- 0K  4240K -   6   0:50   0.49%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.29%
kernel{zio_write_issue_}
 3 root-8- 0K   112K zio->i  6   1:50   0.20%
zfskern{txg_thread_enter}
12 root   -88- 0K   352K WAIT0   1:07   0.20%
intr{irq268: ahci0}
 0 root   -16- 0K  4240K -   4   0:29   0.20%
kernel{zio_write_intr_4}
 0 root   -16- 0K  4240K -   7   0:29   0.10%
kernel{zio_write_intr_6}
 0 root   -16- 0K  4240K -   0   0:29   0.10%
kernel{zio_write_intr_1}
 0 root   -16- 0K  4240K -   5   0:29   0.10%
kernel{zio_write_intr_2}
 0 root   -16- 0K  4240K -   1   0:29   0.10%
kernel{zio_write_intr_5}
...

Taking another look at the internal dir structure for postgres, I'm
not too sure whether this is related to the original poster's problem
though.

Cheers.


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland



On 20/10/2016 23:48, Jonathan Chen wrote:

On 21 October 2016 at 11:27, Steven Hartland <kill...@multiplay.co.uk> wrote:

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter <p...@citylink.dinoex.sub.org> wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal, I don't care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me as this is a huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?

Yup, also set the blocksize to 4K.

11:46am# zfs get all irontree/postgresql
NAME PROPERTY  VALUE  SOURCE
irontree/postgresql  type  filesystem -
irontree/postgresql  creation  Wed Sep 23 15:07 2015  -
irontree/postgresql  used  43.8G  -
irontree/postgresql  available 592G   -
irontree/postgresql  referenced43.8G  -
irontree/postgresql  compressratio 1.00x  -
irontree/postgresql  mounted   yes-
irontree/postgresql  quota none   default
irontree/postgresql  reservation   none   default
irontree/postgresql  recordsize8K local
irontree/postgresql  mountpoint/postgresql
inherited from irontree
irontree/postgresql  sharenfs  offdefault
irontree/postgresql  checksum  on default
irontree/postgresql  compression   offdefault
irontree/postgresql  atime offlocal
irontree/postgresql  devices   on default
irontree/postgresql  exec  on default
irontree/postgresql  setuidon default
irontree/postgresql  readonly  offdefault
irontree/postgresql  jailedoffdefault
irontree/postgresql  snapdir   hidden default
irontree/postgresql  aclmode   discarddefault
irontree/postgresql  aclinheritrestricted default
irontree/postgresql  canmount  on default
irontree/postgresql  xattr offtemporary
irontree/postgresql  copies1  default
irontree/postgresql  version   5  -
irontree/postgresql  utf8only  off-
irontree/postgresql  normalization none   -
irontree/postgresql  casesensitivity   sensitive  -
irontree/postgresql  vscan offdefault
irontree/postgresql  nbmandoffdefault
irontree/postgresql  sharesmb  offdefault
irontree/postgresql  refquota  none   default
irontree/postgresql  refreservationnone   default
irontree/postgresql  primarycache  alldefault
irontree/postgresql  secondarycachealldefault
irontree/postgresql  usedbysnapshots   0  -
irontree/postgresql  usedbydataset 43.8G  -
irontree/postgresql  usedbychildren0  -
irontree/postgresql  usedbyrefreservation  0  -
irontree/postgresql  logbias   latencydefault
irontree/postgresql  dedup offdefault
irontree/postgresql  mlslabel -
irontree/postgresql  sync  standard   default
irontree/postgresql  refcompressratio  1.00x  -
irontree/postgresql  written   43.8G  -
irontree/postgresql  logicalused   43.4G  -
irontree/postgresql  logicalreferenced 43.4G  -
irontree/postgresql  volmode   defaultdefault
irontree/postgresql  filesystem_limit  none   default
irontree/postgresql  snapshot_limitnone   default
irontree/postgresql  filesystem_count  none   default
irontre

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter  wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal, I don't care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me as this is a huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

Do you have atime enabled for the relevant volume?

If so disable it and see if that helps:
zfs set atime=off 

Regards
Steve

On 20/10/2016 14:47, Eugene M. Zheganin wrote:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and zfs. I also have one directory that used
to hold a lot of (tens of thousands of) files. It surely takes a lot of
time to get a listing of it. But now I have 2 files and a couple of
dozen directories in it (I sorted the files into directories).
Surprisingly, there's still a lag between "ls" and its output:



===Cut===

# /usr/bin/time -h ls
.recycle2016-01 2016-04 2016-07 
2016-10 sort-files.sh
20142016-02 2016-05 2016-08 
ktrace.out  sort-months.sh
20152016-03 2016-06 2016-09 
old sounds

5.75s real  0.00s user  0.02s sys

===Cut===


I've seen this situation before, on other servers, so it's not the
first time I've encountered this. However, it's not 100% reproducible (I
mean, if I fill the directory with tens of thousands of files, I
won't necessarily get this lag after the deletion).


Has anyone seen this, and does anyone know how to resolve it? It's
not a critical issue, but it makes things uncomfortable here. One method
I'm aware of: you can move the contents of this directory to some
other place, then delete it and create it again. But it's kind of a nasty
workaround.



Thanks.

Eugene.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Repeatable panic on ZFS filesystem (used for backups); 11.0-STABLE

2016-10-17 Thread Steven Hartland



On 17/10/2016 22:50, Karl Denninger wrote:

I will make some effort on the sandbox machine to see if I can come up
with a way to replicate this.  I do have plenty of spare larger drives
lying around that used to be in service and were obsolesced due to
capacity -- but what I don't know is whether the system will misbehave
if the source is all spinning rust.

In other words:

1. Root filesystem is mirrored spinning rust (production is mirrored SSDs)

2. Backup is mirrored spinning rust (of approx the same size)

3. Set up auto-snapshot exactly as the production system has now (which
the sandbox is NOT since I don't care about incremental recovery on that
machine; it's a sandbox!)

4. Run a bunch of build-somethings (e.g. buildworlds, cross-build for
the Pi2s I have here, etc) to generate a LOT of filesystem entropy
across lots of snapshots.

5. Back that up.

6. Export the backup pool.

7. Re-import it and "zfs destroy -r" the backup filesystem.

That is what got me in a reboot loop after the *first* panic; I was
simply going to destroy the backup filesystem and re-run the backup, but
as soon as I issued that zfs destroy the machine panic'd and as soon as
I re-attached it after a reboot it panic'd again.  Repeat until I set
trim=0.

But... if I CAN replicate it that still shouldn't be happening, and the
system should *certainly* survive attempting to TRIM on a vdev that
doesn't support TRIMs, even if the removal is for a large amount of
space and/or files on the target, without blowing up.

BTW I bet it isn't that rare -- if you're taking timed snapshots on an
active filesystem (with lots of entropy) and then make the mistake of
trying to remove those snapshots (as is the case with a zfs destroy -r
or a zfs recv of an incremental copy that attempts to sync against a
source) on a pool that has been imported before the system realizes that
TRIM is unavailable on those vdevs.

Noting this:

 Yes, I need to find some time to have a look at it, but given how rare
 this is, and with TRIM being re-implemented upstream in a totally
 different manner, I'm reticent to spend any real time on it.

What's in-process in this regard, if you happen to have a reference?

Looks like it may be still in review: https://reviews.csiden.org/r/263/

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Repeatable panic on ZFS filesystem (used for backups); 11.0-STABLE

2016-10-17 Thread Steven Hartland

On 17/10/2016 20:52, Andriy Gapon wrote:

On 17/10/2016 21:54, Steven Hartland wrote:

You're hitting stack exhaustion; have you tried increasing the kernel stack
pages?
It can be changed from /boot/loader.conf
kern.kstack_pages="6"

Default on amd64 is 4 IIRC

Steve,

perhaps you can think of a more proper fix? :-)
https://lists.freebsd.org/pipermail/freebsd-stable/2016-July/085047.html
Yes, I need to find some time to have a look at it, but given how rare this
is, and with TRIM being re-implemented upstream in a totally different
manner, I'm reticent to spend any real time on it.

On 17/10/2016 19:08, Karl Denninger wrote:

The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA rotating rust drives (zmirror) with each provider geli-encrypted
(that is, the actual devices used for the pool create are the .eli's)

The machine generating the problem has both rotating rust devices *and*
SSDs, so I can't simply shut TRIM off system-wide and call it a day as
TRIM itself is heavily-used; both the boot/root pools and a Postgresql
database pool are on SSDs, while several terabytes of lesser-used data
is on a pool of Raidz2 that is made up of spinning rust.

snip...

NewFS.denninger.net dumped core - see /var/crash/vmcore.1

Mon Oct 17 09:02:33 CDT 2016

FreeBSD NewFS.denninger.net 11.0-STABLE FreeBSD 11.0-STABLE #13
r307318M: Fri Oct 14 09:23:46 CDT 2016
k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP  amd64

panic: double fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal double fault
rip = 0x8220d9ec
rsp = 0xfe066821f000
rbp = 0xfe066821f020
cpuid = 6; apic id = 14
panic: double fault
cpuid = 6
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe0649d78e30
vpanic() at vpanic+0x182/frame 0xfe0649d78eb0
panic() at panic+0x43/frame 0xfe0649d78f10
dblfault_handler() at dblfault_handler+0xa2/frame 0xfe0649d78f30
Xdblfault() at Xdblfault+0xac/frame 0xfe0649d78f30
--- trap 0x17, rip = 0x8220d9ec, rsp = 0xfe066821f000, rbp =
0xfe066821f020 ---
avl_rotation() at avl_rotation+0xc/frame 0xfe066821f020
avl_remove() at avl_remove+0x1c8/frame 0xfe066821f070
vdev_queue_io_to_issue() at vdev_queue_io_to_issue+0x87f/frame
0xfe066821f530
vdev_queue_io_done() at vdev_queue_io_done+0x83/frame 0xfe066821f570
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f5a0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f5f0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f650
zio_execute() at zio_execute+0x23d/frame 0xfe066821f6a0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f6e0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f710
zio_execute() at zio_execute+0x23d/frame 0xfe066821f760
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f7c0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f810
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f850
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f880
zio_execute() at zio_execute+0x23d/frame 0xfe066821f8d0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f930
zio_execute() at zio_execute+0x23d/frame 0xfe066821f980
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f9c0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f9f0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fa40
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821faa0
zio_execute() at zio_execute+0x23d/frame 0xfe066821faf0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fb30
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fb60
zio_execute() at zio_execute+0x23d/frame 0xfe066821fbb0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fc10
zio_execute() at zio_execute+0x23d/frame 0xfe066821fc60
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fca0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fcd0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fd20
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fd80
zio_execute() at zio_execute+0x23d/frame 0xfe066821fdd0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fe10
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fe40
zio_execute() at zio_execute+0x23d/frame 0xfe066821fe90
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fef0

Re: Repeatable panic on ZFS filesystem (used for backups); 11.0-STABLE

2016-10-17 Thread Steven Hartland
Setting those values will only affect what's queued to the device, not
what's actually outstanding.
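
To see which TRIM-related knobs your build actually exposes (names vary
between versions, so treat this as a discovery sketch rather than a recipe):

sysctl vfs.zfs.trim
sysctl vfs.zfs.vdev | grep trim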


On 17/10/2016 21:22, Karl Denninger wrote:

Since I cleared it (by setting TRIM off on the test machine, rebooting,
importing the pool and noting that it did not panic -- pulled drives,
re-inserted into the production machine and ran backup routine -- all
was normal) it may be a while before I see it again (a week or so is usual.)

It appears to be related to entropy in the filesystem that comes up as
"eligible" to be removed from the backup volume, which (not
surprisingly) tends to happen a few days after I do a new world build or
something similar (the daily and/or periodic snapshots roll off at about
that point.)

I don't happen to have a spare pair of high-performance SSDs I can stick
in the sandbox machine in an attempt to force the condition to assert
itself in test, unfortunately.

I *am* concerned that it's not "simple" stack exhaustion because setting
the max outstanding TRIMs on a per-vdev basis down quite-dramatically
did *not* prevent it from happening -- and if it was simply stack depth
related I would have expected that to put a stop to it.

On 10/17/2016 15:16, Steven Hartland wrote:

Be good to confirm it's not an infinite loop by giving it a good bump
first.

On 17/10/2016 19:58, Karl Denninger wrote:

I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?


On 10/17/2016 13:54, Steven Hartland wrote:

You're hitting stack exhaustion; have you tried increasing the kernel
stack pages?
It can be changed from /boot/loader.conf
kern.kstack_pages="6"

Default on amd64 is 4 IIRC

On 17/10/2016 19:08, Karl Denninger wrote:

The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA rotating rust drives (zmirror) with each provider geli-encrypted
(that is, the actual devices used for the pool create are the .eli's)

The machine generating the problem has both rotating rust devices
*and*
SSDs, so I can't simply shut TRIM off system-wide and call it a day as
TRIM itself is heavily-used; both the boot/root pools and a Postgresql
database pool are on SSDs, while several terabytes of lesser-used data
is on a pool of Raidz2 that is made up of spinning rust.

snip...

NewFS.denninger.net dumped core - see /var/crash/vmcore.1

Mon Oct 17 09:02:33 CDT 2016

FreeBSD NewFS.denninger.net 11.0-STABLE FreeBSD 11.0-STABLE #13
r307318M: Fri Oct 14 09:23:46 CDT 2016
k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP  amd64

panic: double fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and
you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal double fault
rip = 0x8220d9ec
rsp = 0xfe066821f000
rbp = 0xfe066821f020
cpuid = 6; apic id = 14
panic: double fault
cpuid = 6
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe0649d78e30
vpanic() at vpanic+0x182/frame 0xfe0649d78eb0
panic() at panic+0x43/frame 0xfe0649d78f10
dblfault_handler() at dblfault_handler+0xa2/frame 0xfe0649d78f30
Xdblfault() at Xdblfault+0xac/frame 0xfe0649d78f30
--- trap 0x17, rip = 0x8220d9ec, rsp = 0xfe066821f000,
rbp =
0xfe066821f020 ---
avl_rotation() at avl_rotation+0xc/frame 0xfe066821f020
avl_remove() at avl_remove+0x1c8/frame 0xfe066821f070
vdev_queue_io_to_issue() at vdev_queue_io_to_issue+0x87f/frame
0xfe066821f530
vdev_queue_io_done() at vdev_queue_io_done+0x83/frame
0xfe066821f570
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f5a0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f5f0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame
0xfe066821f650
zio_execute() at zio_execute+0x23d/frame 0xfe066821f6a0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame
0xfe066821f6e0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f710
zio_execute() at zio_execute+0x23d/frame 0xfe066821f760
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame
0xfe066821f7c0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f810
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame
0xfe066821f850
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f880
zio_execute() at zio_execute+0x23d/frame 0xfe066821f8d0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame
0xfe066821f930
zio_execute() at zio_execute+0x23d/frame 0xfe066821f980
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame
0xfe066821f9c0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f9f0
zio_execute() at 

Re: Repeatable panic on ZFS filesystem (used for backups); 11.0-STABLE

2016-10-17 Thread Steven Hartland

Be good to confirm it's not an infinite loop by giving it a good bump first.

On 17/10/2016 19:58, Karl Denninger wrote:

I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?


On 10/17/2016 13:54, Steven Hartland wrote:

You're hitting stack exhaustion; have you tried increasing the kernel
stack pages?
It can be changed from /boot/loader.conf
kern.kstack_pages="6"

Default on amd64 is 4 IIRC

On 17/10/2016 19:08, Karl Denninger wrote:

The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA rotating rust drives (zmirror) with each provider geli-encrypted
(that is, the actual devices used for the pool create are the .eli's)

The machine generating the problem has both rotating rust devices *and*
SSDs, so I can't simply shut TRIM off system-wide and call it a day as
TRIM itself is heavily-used; both the boot/root pools and a Postgresql
database pool are on SSDs, while several terabytes of lesser-used data
is on a pool of Raidz2 that is made up of spinning rust.

snip...

NewFS.denninger.net dumped core - see /var/crash/vmcore.1

Mon Oct 17 09:02:33 CDT 2016

FreeBSD NewFS.denninger.net 11.0-STABLE FreeBSD 11.0-STABLE #13
r307318M: Fri Oct 14 09:23:46 CDT 2016
k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP  amd64

panic: double fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and
you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal double fault
rip = 0x8220d9ec
rsp = 0xfe066821f000
rbp = 0xfe066821f020
cpuid = 6; apic id = 14
panic: double fault
cpuid = 6
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe0649d78e30
vpanic() at vpanic+0x182/frame 0xfe0649d78eb0
panic() at panic+0x43/frame 0xfe0649d78f10
dblfault_handler() at dblfault_handler+0xa2/frame 0xfe0649d78f30
Xdblfault() at Xdblfault+0xac/frame 0xfe0649d78f30
--- trap 0x17, rip = 0x8220d9ec, rsp = 0xfe066821f000, rbp =
0xfe066821f020 ---
avl_rotation() at avl_rotation+0xc/frame 0xfe066821f020
avl_remove() at avl_remove+0x1c8/frame 0xfe066821f070
vdev_queue_io_to_issue() at vdev_queue_io_to_issue+0x87f/frame
0xfe066821f530
vdev_queue_io_done() at vdev_queue_io_done+0x83/frame 0xfe066821f570
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f5a0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f5f0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f650
zio_execute() at zio_execute+0x23d/frame 0xfe066821f6a0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f6e0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f710
zio_execute() at zio_execute+0x23d/frame 0xfe066821f760
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f7c0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f810
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f850
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f880
zio_execute() at zio_execute+0x23d/frame 0xfe066821f8d0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f930
zio_execute() at zio_execute+0x23d/frame 0xfe066821f980
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f9c0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f9f0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fa40
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821faa0
zio_execute() at zio_execute+0x23d/frame 0xfe066821faf0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fb30
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fb60
zio_execute() at zio_execute+0x23d/frame 0xfe066821fbb0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fc10
zio_execute() at zio_execute+0x23d/frame 0xfe066821fc60
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fca0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fcd0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fd20
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fd80
zio_execute() at zio_execute+0x23d/frame 0xfe066821fdd0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fe10
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fe40
zio_execute() at zio_execute+0x23d/frame 0xfe066821fe90
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fef0
zio_execute() at zio_execute+0x23d/frame 0xfe066821ff40
vdev_queue_io_done() at vdev_queue_io_done+0xcd/f

Re: Repeatable panic on ZFS filesystem (used for backups); 11.0-STABLE

2016-10-17 Thread Steven Hartland
You're hitting stack exhaustion; have you tried increasing the kernel
stack pages?

It can be changed from /boot/loader.conf
kern.kstack_pages="6"

Default on amd64 is 4 IIRC

On 17/10/2016 19:08, Karl Denninger wrote:

The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA rotating rust drives (zmirror) with each provider geli-encrypted
(that is, the actual devices used for the pool create are the .eli's)

The machine generating the problem has both rotating rust devices *and*
SSDs, so I can't simply shut TRIM off system-wide and call it a day as
TRIM itself is heavily-used; both the boot/root pools and a Postgresql
database pool are on SSDs, while several terabytes of lesser-used data
is on a pool of Raidz2 that is made up of spinning rust.

snip...


NewFS.denninger.net dumped core - see /var/crash/vmcore.1

Mon Oct 17 09:02:33 CDT 2016

FreeBSD NewFS.denninger.net 11.0-STABLE FreeBSD 11.0-STABLE #13
r307318M: Fri Oct 14 09:23:46 CDT 2016
k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP  amd64

panic: double fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal double fault
rip = 0x8220d9ec
rsp = 0xfe066821f000
rbp = 0xfe066821f020
cpuid = 6; apic id = 14
panic: double fault
cpuid = 6
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe0649d78e30
vpanic() at vpanic+0x182/frame 0xfe0649d78eb0
panic() at panic+0x43/frame 0xfe0649d78f10
dblfault_handler() at dblfault_handler+0xa2/frame 0xfe0649d78f30
Xdblfault() at Xdblfault+0xac/frame 0xfe0649d78f30
--- trap 0x17, rip = 0x8220d9ec, rsp = 0xfe066821f000, rbp =
0xfe066821f020 ---
avl_rotation() at avl_rotation+0xc/frame 0xfe066821f020
avl_remove() at avl_remove+0x1c8/frame 0xfe066821f070
vdev_queue_io_to_issue() at vdev_queue_io_to_issue+0x87f/frame
0xfe066821f530
vdev_queue_io_done() at vdev_queue_io_done+0x83/frame 0xfe066821f570
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f5a0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f5f0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f650
zio_execute() at zio_execute+0x23d/frame 0xfe066821f6a0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f6e0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f710
zio_execute() at zio_execute+0x23d/frame 0xfe066821f760
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f7c0
zio_execute() at zio_execute+0x23d/frame 0xfe066821f810
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f850
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f880
zio_execute() at zio_execute+0x23d/frame 0xfe066821f8d0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821f930
zio_execute() at zio_execute+0x23d/frame 0xfe066821f980
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821f9c0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821f9f0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fa40
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821faa0
zio_execute() at zio_execute+0x23d/frame 0xfe066821faf0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fb30
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fb60
zio_execute() at zio_execute+0x23d/frame 0xfe066821fbb0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fc10
zio_execute() at zio_execute+0x23d/frame 0xfe066821fc60
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fca0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fcd0
zio_execute() at zio_execute+0x23d/frame 0xfe066821fd20
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fd80
zio_execute() at zio_execute+0x23d/frame 0xfe066821fdd0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821fe10
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821fe40
zio_execute() at zio_execute+0x23d/frame 0xfe066821fe90
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe066821fef0
zio_execute() at zio_execute+0x23d/frame 0xfe066821ff40
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe066821ff80
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe066821ffb0
zio_execute() at zio_execute+0x23d/frame 0xfe066822
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0668220060
zio_execute() at zio_execute+0x23d/frame 0xfe06682200b0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 

Re: zfs/raidz and creation pause/blocking

2016-09-22 Thread Steven Hartland
Almost certainly it's TRIMing the drives; try setting the sysctl
vfs.zfs.vdev.trim_on_init=0
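
For example, a minimal sketch using the pool from the message below; the
sysctl can be flipped back afterwards if you still want TRIM-on-init for
future pools:

sysctl vfs.zfs.vdev.trim_on_init=0
zpool create gamestop raidz da5 da7 da8 da9 da10 da11
sysctl vfs.zfs.vdev.trim_on_init=1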


On 22/09/2016 12:54, Eugene M. Zheganin wrote:

Hi.

Recently I spent a lot of time setting up various zfs installations, and
I have a question.
Often when creating a raidz on considerably big disks (>~ 1T) I see
something weird: "zpool create" blocks, and waits for several minutes. At
the same time the system is fully responsive and I can see in gstat that the
kernel starts to hammer all the pool candidates sequentially at 100%
busy with IOPS around zero (in the example below, taken from a live
system, it's doing something with da11):

(zpool create gamestop raidz da5 da7 da8 da9 da10 da11)

dT: 1.064s  w: 1.000s
  L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/w   %busy Name
 0  0  0  00.0  0  00.00.0| da0
 0  0  0  00.0  0  00.00.0| da1
 0  0  0  00.0  0  00.00.0| da2
 0  0  0  00.0  0  00.00.0| da3
 0  0  0  00.0  0  00.00.0| da4
 0  0  0  00.0  0  00.00.0| da5
 0  0  0  00.0  0  00.00.0| da6
 0  0  0  00.0  0  00.00.0| da7
 0  0  0  00.0  0  00.00.0| da8
 0  0  0  00.0  0  00.00.0| da9
 0  0  0  00.0  0  00.00.0| da10
   150  3  0  00.0  0  00.0  112.6| da11
 0  0  0  00.0  0  00.00.0| da0p1
 0  0  0  00.0  0  00.00.0| da0p2
 0  0  0  00.0  0  00.00.0| da0p3
 0  0  0  00.0  0  00.00.0| da1p1
 0  0  0  00.0  0  00.00.0| da1p2
 0  0  0  00.0  0  00.00.0| da1p3
 0  0  0  00.0  0  00.00.0| da0p4
 0  0  0  00.0  0  00.00.0| gpt/boot0
 0  0  0  00.0  0  00.00.0|
gptid/22659641-7ee6-11e6-9b56-0cc47aa41194
 0  0  0  00.0  0  00.00.0| gpt/zroot0
 0  0  0  00.0  0  00.00.0| gpt/esx0
 0  0  0  00.0  0  00.00.0| gpt/boot1
 0  0  0  00.0  0  00.00.0|
gptid/23c1fbec-7ee6-11e6-9b56-0cc47aa41194
 0  0  0  00.0  0  00.00.0| gpt/zroot1
 0  0  0  00.0  0  00.00.0| mirror/mirror
 0  0  0  00.0  0  00.00.0| da1p4
 0  0  0  00.0  0  00.00.0| gpt/esx1

The funniest thing is that da5,7-11 are SSDs, each capable of at least around
30K iops. So I wonder what is happening during this and why it takes that
long, because usually pools are created very quickly.

Thanks.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: NVMe as boot device.

2016-09-15 Thread Steven Hartland
Yes, but you need to boot with EFI as that's the only boot mode that
supports NVMe boot. We have an NVMe-only ZFS box and it works fine.

On Thursday, 15 September 2016, Andrey Cherkashin 
wrote:

> Hi,
>
> I know FreeBSD supports NVMe as root device, but does it support it as
> boot? Can’t find any confirmation.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: zfs crash under load on 11-Prerelease r305056

2016-09-03 Thread Steven Hartland

That file is not accessible

On 03/09/2016 10:51, Volodymyr Kostyrko wrote:

Hi all.

Got one host without keyboard so can't dump it.

Screenshot: http://limb0.b1t.name/incoming/IMG_20160903_120545.jpg

This is MINIMAL kernel with minor additions.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Panic on BETA1 in the ZFS subsystem

2016-07-22 Thread Steven Hartland

On 21/07/2016 13:52, Andriy Gapon wrote:

On 21/07/2016 15:25, Karl Denninger wrote:

The crash occurred during a backup script operating, which is (roughly)
the following:

zpool import -N backup (mount the pool to copy to)

iterate over a list of zfs filesystems and...

zfs rename fs@zfs-base fs@zfs-old
zfs snapshot fs@zfs-base
zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
zfs destroy -vr fs@zfs-old

The first filesystem to be done is the rootfs, that is when it panic'd,
and from the traceback it appears that the Zio's in there are from the
backup volume, so the answer to your question is "yes".

I think that what happened here was that quite a large number of TRIM
requests was queued by ZFS before it had a chance to learn that the
target vdev in the backup pool did not support TRIM.  So, when the
first request failed with ENOTSUP the vdev was marked as not supporting
TRIM.  After that all subsequent requests were failed without being sent
down the storage stack.  But the way it is done means that all the
requests were processed by the nested zio_execute() calls on the same
stack.  And that led to the stack overflow.

Steve, do you think that this is a correct description of what happened?

The state of the pools that you described below probably contributed to
the avalanche of TRIMs that caused the problem.


Yes, that does indeed sound to me like what happened.
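
For anyone who hits this before a fix lands, a blunt workaround sketch, using only
the tunables already discussed on this list (it is a boot-time tunable, so a
reboot is needed):

  vfs.zfs.trim.enabled=0     (added to /boot/loader.conf; this disables ZFS TRIM entirely)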

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Panic on BETA1 in the ZFS subsystem

2016-07-20 Thread Steven Hartland

The panic was due to stack exhaustion; I haven't yet looked into why the stack got so deep.

On 20/07/2016 15:32, Karl Denninger wrote:

The panic occurred during a zfs send/receive operation for system
backup. I've seen this one before, unfortunately, and it appears
that it's still there -- may be related to
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207464

On 7/20/2016 09:26, Karl Denninger wrote:

Came out of the ZFS system; this machine is running *unpatched* (none of
my changes are in the kernel); I have a good core dump if further trace
information will help.

Dump header from device: /dev/gpt/dump
   Architecture: amd64
   Architecture Version: 2
   Dump Length: 9582022656
   Blocksize: 512
   Dumptime: Wed Jul 20 09:10:31 2016
   Hostname: NewFS.denninger.net
   Magic: FreeBSD Kernel Dump
   Version String: FreeBSD 11.0-BETA1 #0 r302489: Sat Jul  9 10:15:24 CDT
2016
 k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP
   Panic String: double fault
   Dump Parity: 445173880
   Bounds: 4
   Dump Status: good

NewFS.denninger.net dumped core - see /var/crash/vmcore.4

Wed Jul 20 09:18:15 CDT 2016

FreeBSD NewFS.denninger.net 11.0-BETA1 FreeBSD 11.0-BETA1 #0 r302489:
Sat Jul  9 10:15:24 CDT 2016
k...@newfs.denninger.net:/usr/obj/usr/src/sys/KSD-SMP  amd64

panic: double fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal double fault
rip = 0x8220bc92
rsp = 0xfe0667f0ef30
rbp = 0xfe0667f0f3c0
cpuid = 14; apic id = 34
panic: double fault
cpuid = 14
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe0649db8e30
vpanic() at vpanic+0x182/frame 0xfe0649db8eb0
panic() at panic+0x43/frame 0xfe0649db8f10
dblfault_handler() at dblfault_handler+0xa2/frame 0xfe0649db8f30
Xdblfault() at Xdblfault+0xac/frame 0xfe0649db8f30
--- trap 0x17, rip = 0x8220bc92, rsp = 0xfe0667f0ef30, rbp =
0xfe0667f0f3c0 ---
vdev_queue_io_to_issue() at vdev_queue_io_to_issue+0x22/frame
0xfe0667f0f3c0
vdev_queue_io_done() at vdev_queue_io_done+0x83/frame 0xfe0667f0f400
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0f430
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f480
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0f4e0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f530
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0f570
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0f5a0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f5f0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0f650
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f6a0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0f6e0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0f710
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f760
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0f7c0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f810
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0f850
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0f880
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f8d0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0f930
zio_execute() at zio_execute+0x236/frame 0xfe0667f0f980
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0f9c0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0f9f0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fa40
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0faa0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0faf0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0fb30
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0fb60
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fbb0
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0fc10
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fc60
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0fca0
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0fcd0
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fd20
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 0xfe0667f0fd80
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fdd0
vdev_queue_io_done() at vdev_queue_io_done+0xcd/frame 0xfe0667f0fe10
zio_vdev_io_done() at zio_vdev_io_done+0xd9/frame 0xfe0667f0fe40
zio_execute() at zio_execute+0x236/frame 0xfe0667f0fe90
zio_vdev_io_start() at zio_vdev_io_start+0x34d/frame 

Re: FreeBSD 10.3 slow boot on Supermicro X11SSW-F

2016-06-28 Thread Steven Hartland

Does adding the following to /boot/loader.conf make any difference?
hw.memtest.tests="0"
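
The same value can also be tried once from the loader prompt before committing
it to the file (a sketch; escape to the loader prompt from the boot menu first):

  OK set hw.memtest.tests="0"
  OK boot

If that helps, make it permanent in /boot/loader.conf.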

On 28/06/2016 14:59, Miroslav Lachman wrote:
I installed FreeBSD 10.3 on brand new machine Supermicro X11SSW-F. It 
sits on top of 4x 1TB Samsung SSDs on ZFS RAIDZ2.


The booting is painfully slow from BTX to menu to kernel loading.
Progress indicated by \ | / - characters is changing by speed of 1 
character per 2 seconds.

The whole boot process takes about 10 minutes.

I found this blog post solving the same problem
http://smyck.net/2016/06/15/freebsd-slow-zfs-bootloader/

It seems there is some bug in the loader in 10.3. If /boot/pmbr, 
/boot/gptzfsboot and /boot/zfsloader are replaced by the files from an 
11-CURRENT snapshot (from 
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/11.0-CURRENT/base.txz) 
the booting speed is back to normal.
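
For reference, after dropping in the replacement files the on-disk boot blocks
still need re-writing; a sketch only, assuming a GPT disk ada0 with the
freebsd-boot partition at index 1 (adjust to the real layout):

  # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0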


Is it a known problem? What was changed in the loader between 10.3 and 11?


Miroslav Lachman
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS and NVMe, trim caused stalling

2016-05-17 Thread Steven Hartland

On 17/05/2016 08:49, Borja Marcos wrote:

On 05 May 2016, at 16:39, Warner Losh  wrote:


What do you think? In some cases it’s clear that TRIM can do more harm than 
good.

I think it’s best we not overreact.

I agree. But with this issue the system is almost unusable for now.


This particular case is cause by the nvd driver, not the Intel P3500 NVME 
drive. You need
a solution (3): Fix the driver.

Specifically, ZFS is pushing down a boatload of BIO_DELETE requests. In ata/da 
land, these
requests are queued up, then collapsed together as much as makes sense (or is 
possible).
This vastly helps performance (even with the extra sorting that I forced to be 
in there that I
need to fix before 11). The nvd driver needs to do the same thing.

I understand that, but I don’t think it’s a good that ZFS depends blindly on a 
driver feature such
as that. Of course, it’s great to exploit it.

I have also noticed that ZFS has a good throttling mechanism for write 
operations. A similar
mechanism should throttle trim requests so that trim requests don’t clog the 
whole system.

It already does.



I’d be extremely hesitant to tossing away TRIMs. They are actually quite 
important for
the FTL in the drive’s firmware to proper manage the NAND wear. More free space 
always
reduces write amplification. It tends to go as 1 / freespace, so simply 
dropping them on
the floor should be done with great reluctance.

I understand. I was wondering about choosing the lesser between two evils. A 15 
minute
I/O stall (I deleted 2 TB of data, that’s a lot, but not so unrealistic) or 
settings trims aside
during the peak activity.

I see that I was wrong on that, as a throttling mechanism would be more than 
enough probably,
unless the system is close to running out of space.

I’ve filed a bug report anyway. And copying to -stable.


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209571

TBH it sounds like you may have badly behaved HW; we've used ZFS + TRIM 
for years on large production boxes and while we've seen slowdowns 
we haven't experienced the total lockups you're describing.


The graphs on your ticket seem to indicate a peak throughput of 250MB/s, 
which is extremely slow for standard SSDs let alone NVMe ones, and when 
you add in the fact you have 10 of them it seems like something is VERY wrong.


I just did a quick test on our DB box here, creating and then deleting a 
2G file as you describe, and I couldn't even spot the delete in the 
general noise, it was so quick to process; and that's a 6-disk machine 
with P3700s.
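
Roughly the kind of test meant above, as a sketch (assuming an otherwise idle
pool mounted at /tank; the path is illustrative, and if compression is enabled
on the dataset use /dev/random instead of /dev/zero so the file actually
occupies space):

  # dd if=/dev/zero of=/tank/trimtest bs=1m count=2048
  # rm /tank/trimtest
  # gstat -d -p

kstat.zfs.misc.zio_trim.bytes should climb some time after the rm (ZFS defers
TRIM by a number of txgs) without the disks pegging at 100% busy.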


Regards
Steve


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: HP DL 585 / ACPI ID / ECC Memory / Panic

2016-05-12 Thread Steven Hartland
I wouldn't rule out a bad CPU, as we had a very similar issue and that's
what it was.

A quick way to confirm is to move all the DRAM from the disabled CPU to one
of the other CPUs and see if the issue stays away with the current CPU
still disabled.

If that's the case it's likely the on-chip memory controller has developed
a fault.

On Thursday, 12 May 2016, Nikolaj Hansen  wrote:

> Hi,
>
> I recently added a zfs disk array to my old HP 585 G1 Server.
> Immediately there was kernel panics and I have spent quite a bit of time
> figuring out what was really wrong.
>
> The system has 4 cpu cards with opteron double core processors. Each
> card has 4x2 gigabyte memory 4x2x4 = 32 gigabyte of total system mem.
> The memory is DDR400 ECC mem.
>
> The panic was very easily reproducible. I just had to issue enough reads
> to the system up until the faulty mem was accessed.
>
> Strangely I can run memtest86+ with the DDR setting on and I find no
> error whatsoever.
>
> Adding
>
> hint.lapic.2.disabled=1 > /boot/loader.conf
>
> Immediately mitigates the error for FreeBSD. So here is my conclusion:
>
> If you can make the system stable by disabling one core on one cpu card:
>
> 1) The other cards / mem must be ok.
> 2) The mainboard must be ok since one of the cores on the cpu is still
> running / not barfing panics.
> 3) the cpu core with acpi 2 is probably also ok. it is on the same chip
> as a non disabled core.
> 4) It is likely down to a rotten DIMM.
>
> In place of mindlessly trying to find the culprit by switching dimms I
> would really like to identify the CPU, card and mem module from the os.
>
> Info here:
>
> http://pastebin.com/jqufNKck
>
> Thank you for your time and help.
>
> --
>
>
> Med venlig hilsen / with regards
>
> Nikolaj Hansen
>
>
>
>
>
>
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: booting from separate zfs pool

2016-04-28 Thread Steven Hartland

I thought that was in 10.3 as well?


On 28/04/2016 11:55, krad wrote:

I think the new pivotroot type stuff in 11 may help a lot with this

https://www.freebsd.org/news/status/report-2015-10-2015-12.html#Root-Remount



On 28 April 2016 at 10:31, Malcolm Herbert  wrote:


On Thu, Apr 28, 2016 at 12:44:28PM +0500, Eugene M. Zheganin wrote:
|So, I'm still struggling with my problem when I cannot boot from a big
|zfs 2T pool (I have written some messages about a year ago, the whole
|story is too long and irrelevant to retell it, I'll only notice that I
|took the path where I'm about to boot from a separate zfs pool closer to
|the beginning of the disk).

it's not an answer to your question, but I'm wondering whether the solution
to this may also help with beadm being unable to work with encrypted ZFS root
volumes as the boot zpool is not the same as zroot ... I've got that issue
myself ...

--
Malcolm Herbert
m...@mjch.net
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: A gpart(8) mystery on 10.3-RELEASE

2016-04-05 Thread Steven Hartland

On 05/04/2016 23:09, Warren Block wrote:

On Tue, 5 Apr 2016, Steven Hartland wrote:

On 05/04/2016 20:48, Warren Block wrote:


Actually, the more I think about it, using bootcode -p to write the 
entire EFI partition seems dangerous.  Unless it is surprisingly 
smart, it will wipe out any existing stuff on that EFI partition, 
which could be any number of important things put there by other 
utilities or operating systems, including device drivers.


The safer way is to mount that partition and copy the boot1.efi file 
to it.


Pretty sure that's not done as you can't guarantee FAT support is 
available.


In the kernel, you mean?  True.  But odds are good that someone with a 
custom kernel without msdosfs will understand the implications of 
overwriting the EFI partition.


And of course it is safe to create an EFI partition, it would only be 
a problem if one already existed with some extra files on it.
Yes, we remove msdosfs here and it would be a PITA to have to add it back 
in just to support writing the EFI boot fs.


So personally I much prefer the current method. Those that want to 
play with the EFI partition can easily do the manual update step 
instead.
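
For anyone who does want to update the ESP by hand, a rough sketch, assuming
the efi partition is ada0p1 and msdosfs is available:

  # mount -t msdosfs /dev/ada0p1 /mnt
  # cp /boot/boot1.efi /mnt/efi/boot/bootx64.efi
  # umount /mnt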


Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: A gpart(8) mystery on 10.3-RELEASE

2016-04-05 Thread Steven Hartland



On 05/04/2016 20:48, Warren Block wrote:

On Tue, 5 Apr 2016, Boris Samorodov wrote:


05.04.16 12:30, Trond Endrestøl wrote:


What am I doing wrong? Can't gpart(8) write both the pmbr and the efi
image as a single command? Is it an off-by-one error in gpart(8)?

gpart bootcode -b /boot/pmbr -p /boot/boot1.efifat -i 1 ada0
gpart: /boot/boot1.efifat: file too big (524288 limit)


Do you try to get only UEFI boot? Then do not use "-b" option. It is
needed for BIOS boot.

Do you need to get a system with both UEFI and BIOS boot? Then use two
different partitions for UEFI and BIOS booting schemes.


gpart bootcode -b /boot/pmbr ada0
bootcode written to ada0


This is needed only for BIOS boot and together with "-p /boot/gptboot"
option.


Well... bootcode -b only writes to the PMBR and does not take a 
partition number with -i.  So the short form version I use could be 
refused by a very strict option parser, requiring two separate steps:


  gpart bootcode -b /boot/pmbr ada0
  gpart bootcode -p /boot/gptboot -i1 ada0

The way it parses options when working on EFI partitions might be more 
strict.


Actually, the more I think about it, using bootcode -p to write the 
entire EFI partition seems dangerous.  Unless it is surprisingly 
smart, it will wipe out any existing stuff on that EFI partition, 
which could be any number of important things put there by other 
utilities or operating systems, including device drivers.


The safer way is to mount that partition and copy the boot1.efi file 
to it.

Pretty sure that's not done as you can't guarantee FAT support is available.

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: Skylake Loader Performance 10.3-BETA3

2016-03-07 Thread Steven Hartland

On 07/03/2016 16:43, Will Green wrote:

On 4 Mar 2016, at 18:49, Mark Dixon  wrote:

Will Green  sundivenetworks.com> writes:


I am happy to test patches and/or current on this server if that helps. If
you want more details on the motherboard/system I have started a post on it at
http://buildwithbsd.org/hw/skylake_xeon_server.html
I've made the UEFI switch which worked fine, but I'm also happy to help out
with testing if anyone looks at this.

Are you booting from ZFS?
Unless I’ve missed something this isn’t yet supported by the installer, but it 
is possible to get it working manually.


Pretty sure you missed something and those changes were merged; imp 
should be able to confirm.


Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: 10.3-BETA2 panic on boot with GCE NVMe

2016-02-22 Thread Steven Hartland
I have an inkling that r293673 may be at fault here; can you try 
reverting that change and see if it fixes the issue?


On 22/02/2016 16:35, Andy Carrel via freebsd-stable wrote:

I've created a 10.3-BETA2 image for Google Compute Engine using swills'
script and am getting a panic on boot when the VM is configured with Local
SSD as NVMe (--local-ssd interface="NVME"). This is a regression from
10.2-RELEASE which will boot successfully with an identical configuration.

"""
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address = 0x60
fault code = supervisor read data, page not present
instruction pointer = 0x20:0x80e16019
stack pointer= 0x28:0xfe01bfff59c0
frame pointer= 0x28:0xfe01bfff59e0
code segment = base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 12 (irq11: virtio_pci0+)
[ thread pid 12 tid 100039 ]
Stopped at  nvme_ctrlr_intx_handler+0x39:   cmpq$0,0x60(%rdi)
db> bt
Tracing pid 12 tid 100039 td 0xf8000422e000
nvme_ctrlr_intx_handler() at nvme_ctrlr_intx_handler+0x39/frame
0xfe01bfff59e0
intr_event_execute_handlers() at intr_event_execute_handlers+0xab/frame
0xfe01bfff5a20
ithread_loop() at ithread_loop+0x96/frame 0xfe01bfff5a70
fork_exit() at fork_exit+0x9a/frame 0xfe01bfff5ab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfe01bfff5ab0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
"""
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: FreeBSD 10.3 BETA1&2 does not boot on Mac Pro 2008

2016-02-14 Thread Steven Hartland

On 14/02/2016 00:47, claudiu vasadi wrote:

Hello all,

While trying to boot 10.3 amd64 uefi disk1 BETA1 and BETA2 iso on a Mac Pro
2008, I get the following message:

BETA1 -
http://s76.photobucket.com/user/da1_27/media/10.3-BETA1_zpswjatgfg2.jpg.html

BETA2 -
http://s76.photobucket.com/user/da1_27/media/10.3-BETA2_zpsfiedhsks.jpg.html

Right after displaying the message I see the Num Lock going off and on and
I can hear the optical unit moving the head a couple of times; after that,
complete silence.

When booting verbose, I get the exact same behavior and message.

Ideas?


The next step after that is to display the framebuffer information, and 
there is some additional support added in head: 10 only supports the 
graphics output protocol, but head also supports UGA draw, which the 
commit message for r287538 seems to indicate may be required for Macs.


So if you can, try the latest HEAD snapshot and see if there is any 
difference?


If that fixes it, we should look to MFC those changes too.

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: UEFI & ZFS

2016-02-12 Thread Steven Hartland

On 12/02/2016 20:36, Thomas Laus wrote:

I have a new Asus H170-Plus-D3 motherboard that will be used for a DOM0 Xen
Server.  It uses an Intel i5-6300 processor and a Samsung 840 EVO SSD.  I
would like to use ZFS on this new installation.  The Xen Kernel does not
have UEFI support at this time, so I installed FreeBSD CURRENT r295345 in
'legacy mode'.  It takes about 7 minutes to go from the first '|' character
to getting the 'beastie' menu.  I changed the BIOS to UEFI and did another
installation.  The boot process goes in an instant.

Several others have the same problem. See here on the freebsd forums:

http://tinyurl.com/z9oldkc

That is my exact problem.  It takes 4 minutes to get a complete 'beastie'
menu and 7 minutes 34 seconds to login.


What sort of timings do you see if it's a UFS install?

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: 10-STABLE hangups frequently

2016-02-02 Thread Steven Hartland

Some more information about your environment would be helpful:
1. What revision of stable/10 are you running?
2. What workloads are you running?
3. What's the output of procstat -k -k when this happens (assuming it's 
possible to run)?
4. What's the output of sysctl -a | grep vnode, both normally and when this 
happens?


Regards
Steve

On 02/02/2016 07:55, Hajimu UMEMOTO wrote:

Hi,

I'm troubled by frequent hangups of my 10-STABLE boxes since this
year.  They seem to occur while the periodic daily scripts are running.
I've narrowed down which commit causes this problem.  It seems r292895
causes it.  I see many `Resource temporarily unavailable' messages just
before the hangup occurs.
Any ideas?

Sincerely,

--
Hajimu UMEMOTO
u...@mahoroba.org  u...@freebsd.org
http://www.mahoroba.org/~ume/
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: LSI SAS 3108 RAID Controller

2016-02-02 Thread Steven Hartland

Try adding this to /boot/loader.conf:
hw.mfi.mrsas_enable="1"
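
With that set and the box rebooted, the controller should be claimed by
mrsas(4) instead of mfi(4) and the volumes should show up as daX devices via
CAM; something like the following should confirm it (a sketch):

  # dmesg | grep -i mrsas
  # camcontrol devlist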

Regards
Steve

On 02/02/2016 17:06, Zara Kanaeva wrote:

Dear list,

I have one Fujitsu server with an LSI SAS 3108 RAID controller. This LSI 
SAS 3108 RAID controller is supported by the mpr driver 
(https://www.freebsd.org/relnotes/CURRENT/hardware/support.html).
If I try to install FreeBSD-stable 10.0 or FreeBSD-current 11.0 on 
this server I can make partitions, but I can not write the install files 
to the disks (or rather, to the RAID5 virtual drive) without errors.

The errors are:
mfi0 failed to get command
mfi0: COMMAND ... TIMEOUT AFTER  ... SECONDS

During the installation I see my virtual drive as a device with mfi0 as 
its identifier.


My questions are:
1) Why do I see the virtual drive as a device with mfi0 as its identifier? I 
would expect my virtual drive to have the identifier mpr0 or something 
like that.
2) Why can I install FreeBSD on one of the disks connected to the LSI SAS 
3108 RAID controller if the disks are not part of any virtual drive (no 
matter which RAID level)? Is that possible because the mpr driver supports 
the LSI SAS 3108 RAID controller as a SCSI controller and not as a RAID 
controller (see kernel configuration)?


Thanks in advance, Z. Kanaeva.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: make clean failure on 10-stable amd64

2016-01-29 Thread Steven Hartland
Investigating, strangely enough this builds and cleans just fine from 
buildenv.


On 29/01/2016 17:47, Glen Barber wrote:

CC'd smh, who committed an update recently in this file.  I have
confirmed the failure (but do not understand why I did not see it when
testing the patch...).

Steven, can you please take a look at this?

Glen

On Fri, Jan 29, 2016 at 05:42:10PM +, John wrote:

Hi,

I get the following error in /usr/src when trying to upgrade my system.
As part of this process, I usually run make clean in /usr/src then go
on to running buildworld.

The installed system is 10.3-PRERELEASE #0 r294087
The sources I'm building from is 295045

Working Copy Root Path: /usr/src
URL: https://svn0.eu.freebsd.org/base/stable/10
Relative URL: ^/stable/10
Repository Root: https://svn0.eu.freebsd.org/base
Repository UUID: ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f
Revision: 295045

Here is the error:

[...lots of output...]

rm -rf builddir
===> sys (clean)
===> sys/boot (clean)
===> sys/boot/ficl (clean)
rm -f softcore.c testmain testmain.o
rm -f a.out dict.o ficl.o fileaccess.o float.o loader.o math64.o
prefix.o search.o stack.o tools.o vm.o words.o sysdep.o softcore.o
dict.o.tmp ficl.o.tmp fileaccess.o.tmp float.o.tmp loader.o.tmp
math64.o.tmp prefix.o.tmp search.o.tmp stack.o.tmp tools.o.tmp
vm.o.tmp words.o.tmp sysdep.o.tmp softcore.o.tmp rm -f libficl.a
===> sys/boot/forth (clean)
rm -f beastie.4th.8.gz brand.4th.8.gz check-password.4th.8.gz
color.4th.8.gz delay.4th.8.gz loader.conf.5.gz loader.4th.8.gz
menu.4th.8.gz menusets.4th.8.gz version.4th.8.gz beastie.4th.8.cat.gz
brand.4th.8.cat.gz check-password.4th.8.cat.gz color.4th.8.cat.gz
delay.4th.8.cat.gz loader.conf.5.cat.gz loader.4th.8.cat.gz
menu.4th.8.cat.gz menusets.4th.8.cat.gz version.4th.8.cat.gz
===> sys/boot/efi (clean)
make[4]: "/usr/src/sys/boot/efi/Makefile" line 4: Malformed
conditional (${COMPILER_TYPE} != "gcc")
make[4]: Fatal errors encountered -- cannot continue
make[4]: stopped in /usr/src/sys/boot/efi
*** Error code 1

Stop.
make[3]: stopped in /usr/src/sys/boot
*** Error code 1

Stop.
make[2]: stopped in /usr/src/sys
*** Error code 1

Stop.
make[1]: stopped in /usr/src
*** Error code 1

Stop.
make: stopped in /usr/src

many thanks,
--
John

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: make clean failure on 10-stable amd64

2016-01-29 Thread Steven Hartland



On 29/01/2016 18:24, John wrote:

On Fri, Jan 29, 2016 at 06:10:26PM +, Glen Barber wrote:

On Fri, Jan 29, 2016 at 05:47:46PM +, Glen Barber wrote:

CC'd smh, who committed an update recently in this file.  I have
confirmed the failure (but do not understand why I did not see it when
testing the patch...).

Steven, can you please take a look at this?



John, can you please try the attached patch?

Glen




Index: sys/boot/efi/Makefile
===
--- sys/boot/efi/Makefile(revision 295049)
+++ sys/boot/efi/Makefile(working copy)
@@ -1,5 +1,7 @@
# $FreeBSD$

+.include 
+
# In-tree GCC does not support __attribute__((ms_abi)).
.if ${COMPILER_TYPE} != "gcc"



yay! that works

many thanks,

Thanks for the initial report John; this is now fixed as of r295057.

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ICH5 ATA DMA timeouts

2015-12-06 Thread Steven Hartland
Check that the cables and devices are in good condition. These things are 
usually a connectivity issue or a failing device.


On 06/12/2015 03:06, Perry Hutchison wrote:

Does anyone know the condition of the ICH5 ATA support in FreeBSD 10?

In preparing to repurpose an elderly Dell Dimension 4600 from Windows
to FreeBSD, and needing to decide what to do about drives, I found
several mentions in the archives* of ICH5 ATA DMA timeouts -- mostly
affecting the SATA ports, but the prevalence of SATA reports may
just indicate which ports were getting the most use:  a couple of
the reports involved the PATA ports.

While there have been commits to the ATA code since then, I didn't
find any definitive statement that the DMA timeouts had been fixed.
Did I miss something, or would I be better off using a separate SATA
or PATA PCI card instead of the ICH5's built-in ports?

Relevant parts of dmesg (with no hard drives attached):

FreeBSD 10.2-RELEASE #0 r28: Wed Aug 12 19:31:38 UTC 2015
 r...@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC i386
CPU: Intel(R) Pentium(R) 4 CPU 2.80GHz (2793.06-MHz 686-class CPU)
   Origin="GenuineIntel"  Id=0xf34  Family=0xf  Model=0x3  Stepping=4
   
Features=0xbfebfbff
   Features2=0x441d
   TSC: P-state invariant
uhci0:  port 0xff80-0xff9f irq 16 at 
device 29.0 on pci0
usbus0 on uhci0
uhci1:  port 0xff60-0xff7f irq 19 at 
device 29.1 on pci0
usbus1 on uhci1
uhci2:  port 0xff40-0xff5f irq 18 at 
device 29.2 on pci0
usbus2 on uhci2
uhci3:  port 0xff20-0xff3f irq 16 at 
device 29.3 on pci0
usbus3 on uhci3
ehci0:  mem 0xffa80800-0xffa80bff 
irq 23 at device 29.7 on pci0
usbus4: EHCI version 1.0
usbus4 on ehci0
atapci0:  port 
0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xffa0-0xffaf mem 0xfeb7fc00-0xfeb7 irq 18 at 
device 31.1 on pci0
ata0:  at channel 0 on atapci0
ata1:  at channel 1 on atapci0
atapci1:  port 
0xfe00-0xfe07,0xfe10-0xfe13,0xfe20-0xfe27,0xfe30-0xfe33,0xfea0-0xfeaf irq 18 at 
device 31.2 on pci0
ata2:  at channel 0 on atapci1
ata3:  at channel 1 on atapci1
pci0:  at device 31.3 (no driver attached)
pcm0:  port 0xee00-0xeeff,0xedc0-0xedff mem 
0xfeb7fa00-0xfeb7fbff,0xfeb7f900-0xfeb7f9ff irq 17 at device 31.5 on pci0
pcm0: primary codec not ready!
pcm0: 
ata0: reset tp1 mask=00 ostat0=ff ostat1=ff
ata1: reset tp1 mask=03 ostat0=00 ostat1=00
ata1: stat0=0x00 err=0x01 lsb=0x14 msb=0xeb
ata1: stat1=0x00 err=0x01 lsb=0x14 msb=0xeb
ata1: reset tp2 stat0=00 stat1=00 devices=0x3
ata2: SATA reset: ports status=0x00
ata2: p0: SATA connect timeout status=0004
ata3: SATA reset: ports status=0x00
ata3: p0: SATA connect timeout status=0004
pass0 at ata1 bus 0 scbus1 target 0 lun 0
pass0:  Removable CD-ROM SCSI device
pass0: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes)
pass1 at ata1 bus 0 scbus1 target 1 lun 0
pass1:  Removable CD-ROM SCSI device
pass1: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes)
cd0 at ata1 bus 0 scbus1 target 0 lun 0
cd0:  Removable CD-ROM SCSI device
cd0: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present
cd1 at ata1 bus 0 scbus1 target 1 lun 0
cd1:  Removable CD-ROM SCSI device
cd1: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes)
cd1: Attempt to query device size failed: NOT READY, Medium not present - tray 
closed
GEOM: new disk cd0
GEOM: new disk cd1

* Archive mentions, in http://lists.freebsd.org/pipermail/...

   freebsd-hardware/2004-September/thread.html#1924
   freebsd-current/2005-February/thread.html#46719
   freebsd-current/2005-February/thread.html#46737
   freebsd-stable/2005-March/thread.html#13265
   freebsd-stable/2007-May/thread.html#35061
   freebsd-stable/2007-July/thread.html#36308
   freebsd-bugs/2012-November/thread.html#50729
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: LACP with 3 interfaces.

2015-11-17 Thread Steven Hartland



On 17/11/2015 15:26, Johan Hendriks wrote:

Hello all

We have an NFS server which has three network ports.

We have bonded these interfaces as a lagg interface, but when we use the
server it looks like only two interfaces are used.

This is our rc.conf file

ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2
192.168.100.222 netmask 255.255.255.0"

ifconfig tell us the following.

lagg0: flags=8843 metric 0 mtu 1500

options=403bb

 ether a0:36:9f:7d:fc:2f
 inet 192.168.100.222 netmask 0xff00 broadcast 192.168.100.255
 nd6 options=29
 media: Ethernet autoselect
 status: active
 laggproto lacp lagghash l2,l3,l4
 laggport: igb1 flags=1c
 laggport: igb2 flags=1c
 laggport: igb3 flags=1c

This shows that your server is using l2,l3,l4 hashing for LACP, but what 
options have you configured on the switch?
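
Also worth noting, as general LACP behaviour rather than anything specific to
this box: with l2,l3,l4 hashing a single TCP connection always hashes onto one
port, so one NFS client using a single connection will never see more than one
link's worth of bandwidth; the spread only appears across many flows. To watch
the per-port distribution while traffic is running, something like:

  # netstat -w 1 -I igb0      (and likewise for the other lagg ports)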

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Bug 204641 - 10.2 UNMAP/TRIM not available on a zfs zpool that uses iSCSI disks, backed on a zpool file target

2015-11-17 Thread Steven Hartland



On 17/11/2015 22:08, Christopher Forgeron wrote:

I just submitted this as a bug:

( https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204641 )

..but I thought I should bring it to the list's attention for more exposure
- If that's a no-no, let me know, as I have a few others that are related
to this that I'd like to discuss.

- - - -


Consider this scenario:

Virtual FreeBSD Machine, with a zpool created out of iSCSI disks.
Physical FreeBSD Machine, with a zpool holding a sparse file that is the
target for the iSCSI disk.

This setup works in an environment with all 10.1 machines, doesn't with all
10.2 machines.

- The 10.2 Machines are 10.2-p7 RELEASE, updated via freebsd-update, no
custom.
- The 10.1 Machine are 10.1-p24 RELEASE, updated via freebsd-update, no
custom.
- iSCSI is all CAM iSCSI, not the old istgt platform.
- The iSCSI Target is a sparse file, stored on a zpool (not a vdev Target)

The target machine is the same physical machine, with the same zpools - I
either boot 10.1 or 10.2 for testing, and use the same zpool/disks

to ensure nothing is changing.

If I have a 10.2 iSCSI Initiator (client) connected to a 10.2 iSCSI Target,
TRIM doesn't work (shows as NONE below).
If I have a 10.2 iSCSI Initiator (client) connected to a 10.1 iSCSI Target,
TRIM does work.

(There is another bug with that last scenario as well, but I will open it
separately)

...for clarity, a 10.1 iSCSI Initiator connected to a 10.1 iSCSI Target
also works perfectly. I have ~20 of these in the field.

On the 10.1 / 10.2 Targets, the ctl.conf file is identical. Zpools are
identical, because they are shared between reboots of the same iSCSI

target machine.



On the 10.2 initiator machine, connected to a 10.2 Target machine:

# sysctl -a | grep cam.da

kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 131072
kern.cam.da.2.delete_method: NONE
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 131072
kern.cam.da.1.delete_method: NONE
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE

Note the delete_method is NONE


# sysctl -a | grep trim
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1
vfs.zfs.vdev.trim_max_active: 64
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_on_init: 1
kstat.zfs.misc.zio_trim.failed: 0
kstat.zfs.misc.zio_trim.unsupported: 181
kstat.zfs.misc.zio_trim.success: 0
kstat.zfs.misc.zio_trim.bytes: 0

Note no trimmed bytes.


On the target machine, 10.1 and 10.2 share the same config file:
/etc/ctl.conf

portal-group pg0 {
 discovery-auth-group no-authentication
 listen 0.0.0.0
 listen [::]
}

 lun 0 {
 path /pool92/iscsi/iscsi.zvol
 blocksize 4K
 size 5T
 option unmap "on"
 option scsiname "pool92"
 option vendor "pool92"
 option insecure_tpc "on"
 }
}


target iqn.iscsi1.zvol {
 auth-group no-authentication
 portal-group pg0

 lun 0 {
 path /pool92_1/iscsi/iscsi.zvol
 blocksize 4K
 size 5T
 option unmap "on"
 option scsiname "pool92_1"
 option vendor "pool92_1"
 option insecure_tpc "on"
 }
}


When I boot a 10.1 Target server, the 10.2 initiator connects, and we do
see proper UNMAP ability:


kern.cam.da.2.minimum_cmd_size: 6
kern.cam.da.2.delete_max: 5497558138880
kern.cam.da.2.delete_method: UNMAP
kern.cam.da.1.error_inject: 0
kern.cam.da.1.sort_io_queue: 0
kern.cam.da.1.minimum_cmd_size: 6
kern.cam.da.1.delete_max: 5497558138880
kern.cam.da.1.delete_method: UNMAP
kern.cam.da.0.error_inject: 0
kern.cam.da.0.sort_io_queue: -1
kern.cam.da.0.minimum_cmd_size: 6
kern.cam.da.0.delete_max: 131072
kern.cam.da.0.delete_method: NONE


Please let me know what you'd like to know next.

Having a quick flick through the code, it looks like unmap is now only 
supported on dev-backed and not file-backed LUNs.


I believe the following commit is the cause:
https://svnweb.freebsd.org/base?view=revision=279005

This was an MFC of:
https://svnweb.freebsd.org/base?view=revision=278672

I'm guessing this was an unintentional side effect mav?
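
If that is confirmed, a possible interim workaround (hedged, based only on the
dev-backed observation above) would be to back the LUN with a zvol instead of a
sparse file, e.g. something along these lines in ctl.conf, with the names
purely illustrative:

  lun 0 {
      path /dev/zvol/pool92/iscsi
      blocksize 4K
      option unmap "on"
  }

with the zvol created beforehand via something like: zfs create -s -V 5T pool92/iscsi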

Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS, SSDs, and TRIM performance

2015-11-03 Thread Steven Hartland
This is something we've already done in FreeBSD; both myself and others have
iterated a few times on this very thing. There's currently nothing outstanding
that I'm aware of, so it's important to capture the details as people experience
them to see if there is any more work to do in this area.
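
For reference, the knobs that usually matter when chasing this, as a hedged
starting point only (names as they appear in the sysctl output quoted elsewhere
in this archive):

  # sysctl vfs.zfs.vdev.trim_max_active=32   (lower the per-vdev TRIM concurrency; 64 is the default)

together with gstat -d -p to see whether the stall lines up with a burst of
deletes, and a comparison boot with vfs.zfs.trim.enabled=0 in /boot/loader.conf
(that one is a boot-time tunable, so it needs a reboot).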

On 03/11/2015 09:12, Nicolas Gilles wrote:

Not sure about the Samsung XS1715, but lots of SSDs seem to suck at
large amounts of TRIM in general, leading to a "let me pause everything
for a while" symptom. In fact I think there is work in ZFS to make
TRIMs work better, and to throttle them in case large amounts are
freed, to avoid this kind of starvation.

-- Nicolas


On Thu, Oct 29, 2015 at 7:22 PM, Steven Hartland
<kill...@multiplay.co.uk> wrote:

If you running NVMe, are you running a version which has this:
https://svnweb.freebsd.org/base?view=revision=285767

I'm pretty sure 10.2 does have that, so you should be good, but best to
check.

Other questions:
1. What does "gstat -d -p" show during the stalls?
2. Do you have any other zfs tuning in place?

On 29/10/2015 16:54, Sean Kelly wrote:

Me again. I have a new issue and I’m not sure if it is hardware or
software. I have nine servers running 10.2-RELEASE-p5 with Dell OEM’d
Samsung XS1715 NVMe SSDs. They are paired up in a single mirrored zpool on
each server. They perform great most of the time. However, I have a problem
when ZFS fires off TRIMs. Not during vdev creation, but like if I delete a
20GB snapshot.

If I destroy a 20GB snapshot or delete large files, ZFS fires off tons of
TRIMs to the disks. I can see the kstat.zfs.misc.zio_trim.success and
kstat.zfs.misc.zio_trim.bytes sysctls skyrocket. While this is happening,
any synchronous writes seem to block. For example, we’re running PostgreSQL
which does fsync()s all the time. While these TRIMs happen, Postgres just
hangs on writes. This causes reads to block due to lock contention as well.

If I change sync=disabled on my tank/pgsql dataset while this is
happening, it unblocks for the most part. But obviously this is not an ideal
way to run PostgreSQL.

I’m working with my vendor to get some Intel SSDs to test, but any ideas
if this could somehow be a software issue? Or does the Samsung XS1715 just
suck at TRIM and SYNC?

We’re thinking of just setting the vfs.zfs.trim.enabled=0 tunable for now
since WAL segment turnover actually causes TRIM operations a lot, but
unfortunately this is a reboot. But disabling TRIM does seem to fix the
issue on other servers I’ve tested with the same hardware config.




___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: ZFS, SSDs, and TRIM performance

2015-10-29 Thread Steven Hartland

If you running NVMe, are you running a version which has this:
https://svnweb.freebsd.org/base?view=revision=285767

I'm pretty sure 10.2 does have that, so you should be good, but best to 
check.


Other questions:
1. What does "gstat -d -p" show during the stalls?
2. Do you have any other zfs tuning in place?

On 29/10/2015 16:54, Sean Kelly wrote:

Me again. I have a new issue and I’m not sure if it is hardware or software. I 
have nine servers running 10.2-RELEASE-p5 with Dell OEM’d Samsung XS1715 NVMe 
SSDs. They are paired up in a single mirrored zpool on each server. They 
perform great most of the time. However, I have a problem when ZFS fires off 
TRIMs. Not during vdev creation, but like if I delete a 20GB snapshot.

If I destroy a 20GB snapshot or delete large files, ZFS fires off tons of TRIMs 
to the disks. I can see the kstat.zfs.misc.zio_trim.success and 
kstat.zfs.misc.zio_trim.bytes sysctls skyrocket. While this is happening, any 
synchronous writes seem to block. For example, we’re running PostgreSQL which 
does fsync()s all the time. While these TRIMs happen, Postgres just hangs on 
writes. This causes reads to block due to lock contention as well.

If I change sync=disabled on my tank/pgsql dataset while this is happening, it 
unblocks for the most part. But obviously this is not an ideal way to run 
PostgreSQL.

I’m working with my vendor to get some Intel SSDs to test, but any ideas if 
this could somehow be a software issue? Or does the Samsung XS1715 just suck at 
TRIM and SYNC?

We’re thinking of just setting the vfs.zfs.trim.enabled=0 tunable for now since 
WAL segment turnover actually causes TRIM operations a lot, but unfortunately 
this is a reboot. But disabling TRIM does seem to fix the issue on other 
servers I’ve tested with the same hardware config.



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: help with partitioning - huge stripesize

2015-10-12 Thread Steven Hartland

As you say, using RAID for ZFS is a bad idea, so ideally change the hardware.

If not, see if your RAID controller has a stripe size option to help, or 
just ignore the warning; it's only a warning that performance will be 
non-optimal.


On 12/10/2015 12:46, Marko Cupać wrote:

Hi,

I've got HP ProLiant DL320g5p server with HP Smart Array E200 RAID
controller and 4X300Gb SAS disks.

I'd like to use it for hosting jails on ZFS, but no matter how I create
zpool, I always get a warning about non-native block size:

block size: 8192B configured, 1048576B native

I know it is optimal for ZFS to have direct access to disks, but HP
Smart Array E200 apparently does not support JBOD mode. I tried to
configure both single RAID-5 logical volume and four RAID-0
logical volumes, in both cases diskinfo gives me the following:

512 # sectorsize
299966445568# mediasize in bytes (279G)
585871964   # mediasize in sectors
1048576 # stripesize
643072  # stripeoffset
71798   # Cylinders according to firmware.
255 # Heads according to firmware.
32  # Sectors according to firmware.
PA6C90R9SXK07P  # Disk ident.

With hardware I have, is it better to create single RAID-5 logical
volume in HP Smart Array E200 and let ZFS think it deals with single
physical drive, or four  RAID-0 logical volumes and let ZFS think it
deals with four physical drives?

Can I just ignore warning about non-native block size? If not, how can
I make it go away?

Thank you in advance,


___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: Dell NVMe issues

2015-10-06 Thread Steven Hartland

On 06/10/2015 19:03, Jim Harris wrote:



On Tue, Oct 6, 2015 at 9:42 AM, Steven Hartland 
<kill...@multiplay.co.uk <mailto:kill...@multiplay.co.uk>> wrote:


Also it looks like nvme exposes a timeout_period sysctl; you could try
increasing that, as it could be too small for a full-disk TRIM.


Under CAM SCSI da support we have a delete_max which limits the
maximum size of a single delete request; it may be we need something
similar for nvme as well to prevent this, as it should still be
chunking the deletes to ensure this sort of thing doesn't happen.


See attached.  Sean - can you try this patch with TRIM re-enabled in ZFS?

I would be curious if TRIM passes without this patch if you increase 
the timeout_period as suggested.


-Jim


Interesting, does the NVMe spec not provide information from the device 
as to what its optimal / max deallocate request size should be, like the 
ATA spec exposes?


Regards
Steve
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Dell NVMe issues

2015-10-06 Thread Steven Hartland

As a guess you're timing out the full disk TRIM request.

Try: sysctl vfs.zfs.vdev.trim_on_init=0 and then re-run the create.
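
If that helps, it can be made persistent, and the controller timeout mentioned
in the other reply can be located by name rather than guessed (a sketch):

  vfs.zfs.vdev.trim_on_init=0        (in /etc/sysctl.conf to persist it)
  # sysctl -a | grep timeout_period  (to find the nvme timeout sysctl before raising it)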

On 06/10/2015 16:18, Sean Kelly wrote:

Back in May, I posted about issues I was having with a Dell PE R630 with 4x800GB NVMe 
SSDs. I would get kernel panics due to the inability to assign all the interrupts 
because of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199321 
. Jim Harris helped 
fix this issue so I bought several more of these servers, Including ones with 4x1.6TB 
drives…

while the new servers with 4x800GB drives still work, the ones with 4x1.6TB 
drives do not. When I do a
zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3
the command never returns and the kernel logs:
nvme0: resetting controller
nvme0: controller ready did not become 0 within 2000 ms

I’ve tried several different things trying to understand where the actual 
problem is.
WORKS: dd if=/dev/nvd0 of=/dev/null bs=1m
WORKS: dd if=/dev/zero of=/dev/nvd0 bs=1m
WORKS: newfs /dev/nvd0
FAILS: zpool create tank mirror nvd[01]
FAILS: gpart add -t freebsd-zfs nvd[01] && zpool create tank mirror nvd[01]p1
FAILS: gpart add -t freebsd-zfs -s 1400g nvd[01[ && zpool create tank nvd[01]p1
WORKS: gpart add -t freebsd-zfs -s 800g nvd[01] && zpool create tank nvd[01]p1

NOTE: The above commands are more about getting the point across, not validity. 
I wiped the disk clean between gpart attempts and used GPT.

So it seems like zpool works if I don’t cross past ~800GB. But other things 
like dd and newfs work.

When I get the kernel messages about the controller resetting and then not 
responding, the NVMe subsystem hangs entirely. Since my boot disks are not 
NVMe, the system continues to work but no more NVMe stuff can be done. Further, 
attempting to reboot hangs and I have to do a power cycle.

Any thoughts on what the deal may be here?

10.2-RELEASE-p5

nvme0@pci0:132:0:0: class=0x010802 card=0x1f971028 chip=0xa820144d rev=0x03 
hdr=0x00
 vendor = 'Samsung Electronics Co Ltd'
 class  = mass storage
 subclass   = NVM



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: Dell NVMe issues

2015-10-06 Thread Steven Hartland
Also it looks like nvme exposes a timeout_period sysctl; you could try 
increasing that, as it could be too small for a full-disk TRIM.


Under CAM SCSI da support we have a delete_max which limits the maximum 
size of a single delete request; it may be we need something similar for 
nvme as well to prevent this, as it should still be chunking the deletes 
to ensure this sort of thing doesn't happen.


On 06/10/2015 16:18, Sean Kelly wrote:

Back in May, I posted about issues I was having with a Dell PE R630 with 4x800GB NVMe 
SSDs. I would get kernel panics due to the inability to assign all the interrupts 
because of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199321 
. Jim Harris helped 
fix this issue so I bought several more of these servers, Including ones with 4x1.6TB 
drives…

while the new servers with 4x800GB drives still work, the ones with 4x1.6TB 
drives do not. When I do a
zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3
the command never returns and the kernel logs:
nvme0: resetting controller
nvme0: controller ready did not become 0 within 2000 ms

I’ve tried several different things trying to understand where the actual 
problem is.
WORKS: dd if=/dev/nvd0 of=/dev/null bs=1m
WORKS: dd if=/dev/zero of=/dev/nvd0 bs=1m
WORKS: newfs /dev/nvd0
FAILS: zpool create tank mirror nvd[01]
FAILS: gpart add -t freebsd-zfs nvd[01] && zpool create tank mirror nvd[01]p1
FAILS: gpart add -t freebsd-zfs -s 1400g nvd[01[ && zpool create tank nvd[01]p1
WORKS: gpart add -t freebsd-zfs -s 800g nvd[01] && zpool create tank nvd[01]p1

NOTE: The above commands are more about getting the point across, not validity. 
I wiped the disk clean between gpart attempts and used GPT.

So it seems like zpool works if I don’t cross past ~800GB. But other things 
like dd and newfs work.

When I get the kernel messages about the controller resetting and then not 
responding, the NVMe subsystem hangs entirely. Since my boot disks are not 
NVMe, the system continues to work but no more NVMe stuff can be done. Further, 
attempting to reboot hangs and I have to do a power cycle.

Any thoughts on what the deal may be here?

10.2-RELEASE-p5

nvme0@pci0:132:0:0: class=0x010802 card=0x1f971028 chip=0xa820144d rev=0x03 
hdr=0x00
 vendor = 'Samsung Electronics Co Ltd'
 class  = mass storage
 subclass   = NVM



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: [FreeBSD-Announce] FreeBSD Errata Notice FreeBSD-EN-15:16.pw

2015-09-17 Thread Steven Hartland

Typo, they should be:

https://security.FreeBSD.org/patches/EN-15:16/pw.patch
https://security.FreeBSD.org/patches/EN-15:16/pw.patch.asc

On 17/09/2015 09:20, Pietro Cerutti wrote:

On 2015-09-16 23:31, FreeBSD Errata Notices wrote:

# fetch https://security.FreeBSD.org/patches/EN-15:26/pw.patch
# fetch https://security.FreeBSD.org/patches/EN-15:26/pw.patch.asc


both 404



___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: KSTACK_PAGES stuck at 2 despite options KSTACK_PAGES=4

2015-08-03 Thread Steven Hartland

This should be fixed by r286223 in HEAD.

I'll MFC to stable/10 after the relevant time-out.

Thanks again for the report :)

Regards
Steve

On 31/07/2015 22:21, Trond Endrestøl wrote:

stable/10, i386, r286139, 4 GiB RAM, custom kernel loudly claims:

ZFS NOTICE: KSTACK_PAGES is 2 which could result in stack overflow panic!
Please consider adding 'options KSTACK_PAGES=4' to your kernel config

Well, my custom kernel config does contain:

options KSTACK_PAGES=4

and

options ZFS

sysctl kern.conftxt backs up my story:

kern.conftxt: options  CONFIG_AUTOGENERATED
ident   VBOX
machine i386
cpu I686_CPU
cpu I586_CPU
cpu I486_CPU
makeoptions WITH_CTF=1
makeoptions DEBUG=-g
options ZFS
options KSTACK_PAGES=4  !!
options FDESCFS
...

What more does it want?



___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: KSTACK_PAGES stuck at 2 despite options KSTACK_PAGES=4

2015-08-03 Thread Steven Hartland

Thanks for the report Trond, I've reproduced this and am investigating.

On 31/07/2015 22:21, Trond Endrestøl wrote:

stable/10, i386, r286139, 4 GiB RAM, custom kernel loudly claims:

ZFS NOTICE: KSTACK_PAGES is 2 which could result in stack overflow panic!
Please consider adding 'options KSTACK_PAGES=4' to your kernel config

Well, my custom kernel config does contain:

options KSTACK_PAGES=4

and

options ZFS

sysctl kern.conftxt backs up my story:

kern.conftxt: options  CONFIG_AUTOGENERATED
ident   VBOX
machine i386
cpu I686_CPU
cpu I586_CPU
cpu I486_CPU
makeoptions WITH_CTF=1
makeoptions DEBUG=-g
options ZFS
options KSTACK_PAGES=4  !!
options FDESCFS
...

What more does it want?





Re: 10.2-Beta i386..what's wrong..?

2015-07-22 Thread Steven Hartland

What's the panic?

As you're using ZFS I'd lay money on the fact you're blowing the stack, 
which would require a kernel built with:

options KSTACK_PAGES=4
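A minimal sketch of such a config (the config name is illustrative) and the 
rebuild steps:

include GENERIC
ident   MYKERNEL
options KSTACK_PAGES=4

# cd /usr/src
# make buildkernel KERNCONF=MYKERNEL && make installkernel KERNCONF=MYKERNEL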

Regards
Steve

On 22/07/2015 08:10, Holm Tiffe wrote:

Hi,

yesterday I've decided to to put my old Workstation in my shack and
to install a new FreeBSD on it, it is the computer I've used previously
for my daily work, reading Mails, programming controllers and so on..

It is an AMD XP300+ with an Adaptec 29320 and four IBM 72GB SCSI3 disks
with only 2GB of memory.

I've replaced a bad disk, reformatted it to 512-byte sectors (they came
originally with 534 or so for ECC), pulled the 10.2-Beta disk1 ISO file from
the German mirror and tried to install on a zfs raidz1, which went
flawlessly until the point of booting the installed system: some warnings
about zfs and vm... double fault, panic.

Later I've read on the net that installing zfs on a 32Bit machine isn't
really a good idea, so I tried to install the system on a gvinum raid
on gpt partitions.
The layout was gpt-boot, 2G swap and the rest raid on every disk so
that I could build a striped 8G swap and ~190G raid with gvinum.
Installing that worked flawlessly using the installer's shell and doing the
partitioning by hand. I did a newfs -U -L root /dev/gvinum/raid,
mounted the filesystem on /mnt, activated the swap and put an fstab in
/tmp/bsdsomething-etc, then exited to install.

The installer verified the install containers (base.txz, kernel.txz and
so on) and began to extract them.
So far so good, but while extracting, the system repeatedly hung at the
very same location. On vt4 I could start a top that was showing a hung
bsdtar process in the state wdrain and nothing else happened; the system
took a long time to react to keypresses..

I've tried to extract the distribution files by hand, same problem: tar
hung on extracting kernel.symbols for example, same behavior on other files
in base.txz.

Ok, it is 10.2-BETA so I've tried 10.1-RELEASE next... exactly the same;
ok, tried 9.3-RELEASE... the same!

What am I doing wrong here?

Besides the bad disk that I've changed (IBM-SSG S53D073 C61F) the
hardware is very trustworthy; it is a Gigabyte board and I want to keep this
machine since it still has floppy capabilities that I need to communicate
with my old CP/M gear and PDP11's. It ran for years w/o problems.
Capacitors are already changed and ok.

Sorry for the wishy-washy error messages above, they are from my memory and
from yesterday...

Next try was installing the system on an 8G ATA disk that was lying around;
that went flawlessly. I booted it and tried to install the files on the gvinum
raid from there... same problem.
Swapped the 29320 for a 29160... same problem.
No messages about bad disks or anything on the console.

What's going on here? The machine ran on 8.4-stable before w/o any
problems.

Regards,

Holm




Re: Problems adding Intel 750 to zfs pool

2015-07-22 Thread Steven Hartland
Be aware that kern.geom.dev.delete_max_sectors will still come into play 
here, hence the large request will still get chunked.


This is good as it prevents excessively long-running individual BIOs, which 
would result in user operations being uncancelable.
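If need be, the limit can be inspected (and, assuming it is still a read/write 
sysctl on your version, raised) with something like the following; the value 
shown is only an example:

# sysctl kern.geom.dev.delete_max_sectors
# sysctl kern.geom.dev.delete_max_sectors=524288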


Regards
Steve

On 21/07/2015 22:29, dy...@techtangents.com wrote:




Jim Harris (jim.har...@gmail.com) wrote on 22 July 2015, 6:54 am:


Hi Dylan,

I just committed SVN r285767 which should fix this issue.  I will 
request MFC to stable/10 after the 3 day waiting period.


Thanks,

-Jim


Love your work, Jim! Thank you.


Re: Problems adding Intel 750 to zfs pool

2015-07-20 Thread Steven Hartland

This will almost certainly be due to slow TRIM support on the device.

Try setting the sysctl vfs.zfs.vdev.trim_on_init to 0 before adding the 
devices.
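That is, roughly (device names taken from the report below):

# sysctl vfs.zfs.vdev.trim_on_init=0
# zpool add zroot log gpt/slog
# zpool add zroot cache gpt/l2arc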


On 18/07/2015 05:35, dy...@techtangents.com wrote:

Hi,

I've installed an Intel 750 400GB NVMe PCIe SSD in a Dell R320 running 
FreeBSD 10.2-beta-1... not STABLE, but not far behind, I think. 
Apologies if this is the wrong mailing list, or if this has been fixed 
in STABLE since the beta.


Anyway, I've gparted it into 2 partitions - 16GB for slog/zil and 
357GB for l2arc. Adding the slog partition to the pool takes about 2 
minutes - machine seems hung during that time. Ping works, but I can't 
open another ssh session.


Adding the l2arc doesn't seem to complete - it's been going 10 minutes 
now and nothing. Ping works, but I can't log in to the local console 
or another ssh session.


I'm adding the partitions using their gpt names. i.e.
zpool add zroot log gpt/slog
zpool add zroot cache gpt/l2arc

The system BIOS is up-to-date. The OS was a fresh 10.1 install, then 
freebsd-update to 10.2-beta2. 10.1 exhibited the same symptoms.


Root is on zfs.

Device was tested to be working on Windows 8.1 on a Dell T1700 
workstation.


Any ideas?

Cheers,

Dylan Just



Re: Problems adding Intel 750 to zfs pool

2015-07-20 Thread Steven Hartland
Those figures are often inaccurate as the actual results can vary wildly 
based on whether the device FW thinks there is actual data on the sectors 
being TRIM'ed.


Regards
Steve

On 20/07/2015 12:06, Will Green wrote:

I wonder if this is connected to NVMe?

Our SATA Intel DC S3500 drives (which are not that different to the 750s) TRIM 
much more quickly.

What does camcontrol think the secure erase time should be? For a 600GB S3500 
camcontrol accurately gives the secure erase time as 4 minutes:

# camcontrol security ada0

erase time4 min
enhanced erase time   4 min

Trimming drives before addition to a system is definitely worthwhile.

Will


On 20 Jul 2015, at 10:28, Steven Hartland kill...@multiplay.co.uk wrote:

This will almost certainly be due to slow TRIM support on the device.

Try setting the sysctl vfs.zfs.vdev.trim_on_init to 0 before adding the devices.

On 18/07/2015 05:35, dy...@techtangents.com wrote:

Hi,

I've installed an Intel 750 400GB NVMe PCIe SSD in a Dell R320 running FreeBSD 
10.2-beta-1... not STABLE, but not far behind, I think. Apologies if this is 
the wrong mailing list, or if this has been fixed in STABLE since the beta.

Anyway, I've gparted it into 2 partitions - 16GB for slog/zil and 357GB for 
l2arc. Adding the slog partition to the pool takes about 2 minutes - machine 
seems hung during that time. Ping works, but I can't open another ssh session.

Adding the l2arc doesn't seem to complete - it's been going 10 minutes now and 
nothing. Ping works, but I can't log in to the local console or another ssh 
session.

I'm adding the partitions using their gpt names. i.e.
zpool add zroot log gpt/slog
zpool add zroot cache gpt/l2arc

The system BIOS is up-to-date. The OS was a fresh 10.1 install, then 
freebsd-update to 10.2-beta2. 10.1 exhibited the same symptoms.

Root is on zfs.

Device was tested to be working on Windows 8.1 on a Dell T1700 workstation.

Any ideas?

Cheers,

Dylan Just



Re: wrong patch number in releng/10.1?

2015-06-10 Thread Steven Hartland

Your kernel hasn't been rebuilt then.
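For a source build that usually means something along the lines of (config 
name assumed here):

# cd /usr/src
# make buildkernel KERNCONF=GENERIC && make installkernel KERNCONF=GENERIC
# shutdown -r now

after which uname -r should report the new patch level, assuming the EN 
touched the kernel.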

On 10/06/2015 11:23, Kurt Jaeger wrote:

Hi!

I see the same: uname says: 10.1p10


What does the following say when run from your source directory:

grep BRANCH sys/conf/newvers.sh

BRANCH=RELEASE-p11





Re: wrong patch number in releng/10.1?

2015-06-10 Thread Steven Hartland

What does the following say when run from your source directory:

grep BRANCH sys/conf/newvers.sh

Regards
Steve
On 10/06/2015 10:20, Palle Girgensohn wrote:

Also

# uname -a
FreeBSD pingpongdb 10.1-RELEASE-p10 FreeBSD 10.1-RELEASE-p10 #0: Wed May 13 
06:54:13 UTC 2015 
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
# uptime
  2:18am  up 36 mins, 4 users, load averages: 0,08 0,14 0,10
# ls -lrt /boot/kernel/kernel /boot/kernel/*zfs*
-r-xr-xr-x  2 root  wheel  21160449 10 Jun 01:36 /boot/kernel/kernel
-r-xr-xr-x  2 root  wheel   2320144 10 Jun 01:36 /boot/kernel/zfs.ko
-r-xr-xr-x  1 root  wheel  19103144 10 Jun 01:36 /boot/kernel/zfs.ko.symbols
# strings  /boot/kernel/kernel|grep 10.1-RELEASE
@(#)FreeBSD 10.1-RELEASE-p10 #0: Wed May 13 06:54:13 UTC 2015
FreeBSD 10.1-RELEASE-p10 #0: Wed May 13 06:54:13 UTC 2015
10.1-RELEASE-p10

It seems to me the version numbering is not correct, but the patch *is* there; 
it should be 10.1-p11, right?


10 jun 2015 kl. 11:01 skrev Palle Girgensohn gir...@freebsd.org:

Hi,

It seems the patch level in the UPDATING document is wrong in releng/10.1: it is 
p29, which is the patch level for 8.4?

Palle


r284193 | delphij | 2015-06-10 00:13:25 +0200 (Ons, 10 Jun 2015) | 8 lines

Update base system file(1) to 5.22 to address multiple denial of
service issues. [EN-15:06]

Improve reliability of ZFS when TRIM/UNMAP and/or L2ARC is used.
[EN-15:07]

Approved by:so



But the UPDATING says:

20150609:   p29 FreeBSD-EN-15:06.file
FreeBSD-EN-15:07.zfs

Updated base system file(1) to 5.22 to address multiple denial
of service issues. [EN-15:06]

Improved reliability of ZFS when TRIM/UNMAP and/or L2ARC is used.
[EN-15:07]

20150513:   p10 FreeBSD-EN-15:04.freebsd-update
FreeBSD-EN-15:05.ufs

Fix bug with freebsd-update(8) that does not ensure the previous
upgrade was completed. [EN-15:04]

Fix deadlock on reboot with UFS tuned with SU+J. [EN-15:05]








Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland

On 07/05/2015 09:07, Slawa Olhovchenkov wrote:

I have a zpool of 12 vdevs (zmirrors).
One disk in one vdev is out of service and has stopped serving requests:

dT: 1.036s  w: 1.000s
  L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/w   %busy Name
 0  0  0  00.0  0  00.00.0| ada0
 0  0  0  00.0  0  00.00.0| ada1
 1  0  0  00.0  0  00.00.0| ada2
 0  0  0  00.0  0  00.00.0| ada3
 0  0  0  00.0  0  00.00.0| da0
 0  0  0  00.0  0  00.00.0| da1
 0  0  0  00.0  0  00.00.0| da2
 0  0  0  00.0  0  00.00.0| da3
 0  0  0  00.0  0  00.00.0| da4
 0  0  0  00.0  0  00.00.0| da5
 0  0  0  00.0  0  00.00.0| da6
 0  0  0  00.0  0  00.00.0| da7
 0  0  0  00.0  0  00.00.0| da8
 0  0  0  00.0  0  00.00.0| da9
 0  0  0  00.0  0  00.00.0| da10
 0  0  0  00.0  0  00.00.0| da11
 0  0  0  00.0  0  00.00.0| da12
 0  0  0  00.0  0  00.00.0| da13
 0  0  0  00.0  0  00.00.0| da14
 0  0  0  00.0  0  00.00.0| da15
 0  0  0  00.0  0  00.00.0| da16
 0  0  0  00.0  0  00.00.0| da17
 0  0  0  00.0  0  00.00.0| da18
24  0  0  00.0  0  00.00.0| da19

 0  0  0  00.0  0  00.00.0| da20
 0  0  0  00.0  0  00.00.0| da21
 0  0  0  00.0  0  00.00.0| da22
 0  0  0  00.0  0  00.00.0| da23
 0  0  0  00.0  0  00.00.0| da24
 0  0  0  00.0  0  00.00.0| da25
 0  0  0  00.0  0  00.00.0| da26
 0  0  0  00.0  0  00.00.0| da27

As a result, ZFS operations on this pool have stopped too.
`zpool list -v` doesn't work.
`zpool detach tank da19` doesn't work.
Applications working with this pool are stuck in the `zfs` wchan and can't be killed.

# camcontrol tags da19 -v
(pass19:isci0:0:3:0): dev_openings  7
(pass19:isci0:0:3:0): dev_active25
(pass19:isci0:0:3:0): allocated 25
(pass19:isci0:0:3:0): queued0
(pass19:isci0:0:3:0): held  0
(pass19:isci0:0:3:0): mintags   2
(pass19:isci0:0:3:0): maxtags   255

How can I cancel these 24 requests?
Why don't these requests time out (3 hours already)?
How can I force-detach this disk? (I have already tried `camcontrol reset`, 
`camcontrol rescan`.)
Why doesn't ZFS (or geom) time out the request and reroute it to da18?

If they are in mirrors, in theory you can just pull the disk, isci will 
report to cam and cam will report to ZFS which should all recover.


With regards to not timing out, this could be a default issue, but having had 
a very quick look that's not obvious in the code, as 
isci_io_request_construct etc. do indeed set a timeout when 
CAM_TIME_INFINITY hasn't been requested.


The sysctl hw.isci.debug_level may be able to provide more information, 
but be aware this can be spammy.


Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 15:28, Matthew Seaman wrote:

On 05/07/15 14:32, Steven Hartland wrote:


I wouldn't have thought so, I would expect that to only have an effect
on removal media such as CDROM drives, but no harm in trying ;-)

zpool offline -t zroot da19


That might work, but it also might just wedge waiting for the outstanding 
IO to complete, as I thought that was a "nice" offline rather than an "I 
don't care" one.



Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 13:51, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:


Yes in theory new requests should go to the other vdev, but there could
be some dependency issues preventing that such as a syncing TXG.

Currently this pool should have no write activity (from the application).
What about going to the other (mirror) device in the same vdev?
Same dependency?

Yes, if there's an outstanding TXG, then I believe all IO will stall.

When is this TXG released? When all devices in all vdevs report
'completed'? When at least one device in all vdevs reports
'completed'? When at least one device in at least one vdev reports
'completed'?

When all devices have reported completed or failed.

Hence if you pull the disk things should continue as normal, with the 
failed device being marked as such.


Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 13:44, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:35:05PM +0100, Steven Hartland wrote:



On 07/05/2015 13:05, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:00:40PM +0100, Steven Hartland wrote:


On 07/05/2015 11:46, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 11:38:46AM +0100, Steven Hartland wrote:


How I can cancel this 24 requst?
Why this requests don't timeout (3 hours already)?
How I can forced detach this disk? (I am lready try `camcontrol reset`, 
`camconrol rescan`).
Why ZFS (or geom) don't timeout on request and don't rerouted to da18?


If they are in mirrors, in theory you can just pull the disk, isci will
report to cam and cam will report to ZFS which should all recover.

Yes, zmirror with da18.
I am surprise that ZFS don't use da18. All zpool fully stuck.

A single low-level request can only be handled by one device; if that
device returns an error then ZFS will use the other device, but not until then.

Why aren't subsequent requests routed to da18?
The current request is stuck on da19 (unlikely, but understood), but why
is the whole pool stuck?

It's still waiting for the request from the failed device to complete. As
far as ZFS currently knows there is nothing wrong with the device, as it's
had no failures.

Can you explain some more?
One request waiting, understood.
I then issue the next request. Some information is needed from the vdev with the
failed disk. The failed disk is busier (long queue), so why isn't the request
routed to the mirror disk? Or, for metadata, to a less busy vdev?

As no error has been reported to ZFS, due to the stalled IO, there is no
failed vdev.

I see that the device isn't failed (for both the OS and ZFS).
I am not talking about a 'failed vdev'; I am talking about a 'busy vdev' or 'busy device'.


Yes in theory new requests should go to the other vdev, but there could
be some dependency issues preventing that such as a syncing TXG.

Currenly this pool must not have write activity (from application).
What about go to the other (mirror) device in the same vdev?
Same dependency?

Yes, if there's an outstanding TXG, then I believe all IO will stall.


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 13:05, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:00:40PM +0100, Steven Hartland wrote:



On 07/05/2015 11:46, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 11:38:46AM +0100, Steven Hartland wrote:


How I can cancel this 24 requst?
Why this requests don't timeout (3 hours already)?
How I can forced detach this disk? (I am lready try `camcontrol reset`, 
`camconrol rescan`).
Why ZFS (or geom) don't timeout on request and don't rerouted to da18?


If they are in mirrors, in theory you can just pull the disk, isci will
report to cam and cam will report to ZFS which should all recover.

Yes, zmirror with da18.
I am surprise that ZFS don't use da18. All zpool fully stuck.

A single low level request can only be handled by one device, if that
device returns an error then ZFS will use the other device, but not until.

Why next requests don't routed to da18?
Current request stuck on da19 (unlikely, but understund), but why
stuck all pool?

Its still waiting for the request from the failed device to complete. As
far as ZFS currently knows there is nothing wrong with the device as its
had no failures.

Can you explain some more?
One requst waiting, understand.
I am do next request. Some information need from vdev with failed
disk. Failed disk more busy (queue long), why don't routed to mirror
disk? Or, for metadata, to less busy vdev?
As no error has been reported to ZFS, due to the stalled IO, there is no 
failed vdev.


Yes in theory new requests should go to the other vdev, but there could 
be some dependency issues preventing that such as a syncing TXG.


Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 14:10, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 02:05:11PM +0100, Steven Hartland wrote:



On 07/05/2015 13:51, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:


Yes in theory new requests should go to the other vdev, but there could
be some dependency issues preventing that such as a syncing TXG.

Currenly this pool must not have write activity (from application).
What about go to the other (mirror) device in the same vdev?
Same dependency?

Yes, if there's an outstanding TXG, then I believe all IO will stall.

Where this TXG released? When all devices in all vdevs report
'completed'? When at the least one device in all vdevs report
'completed'? When at the least one device in at least one vdev report
'completed'?

When all devices have report completed or failed.

Thanks for the explanation.


Hence if you pull the disk things should continue as normal, with the
failed device being marked as such.

I have trouble getting physical access.
Maybe someone can suggest a software method to force-detach the device
from the system.
In 11 that should be possible with devctl, but under 10 I'm not aware of 
anything that wouldn't involve some custom kernel level code I'm afraid.


Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 14:29, Ronald Klop wrote:
On Thu, 07 May 2015 15:23:58 +0200, Steven Hartland 
kill...@multiplay.co.uk wrote:





On 07/05/2015 14:10, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 02:05:11PM +0100, Steven Hartland wrote:



On 07/05/2015 13:51, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:

Yes in theory new requests should go to the other vdev, but 
there could

be some dependency issues preventing that such as a syncing TXG.

Currenly this pool must not have write activity (from application).
What about go to the other (mirror) device in the same vdev?
Same dependency?
Yes, if there's an outstanding TXG, then I believe all IO will 
stall.

Where this TXG released? When all devices in all vdevs report
'completed'? When at the least one device in all vdevs report
'completed'? When at the least one device in at least one vdev report
'completed'?

When all devices have report completed or failed.

Thanks for explained.


Hence if you pull the disk things should continue as normal, with the
failed device being marked as such.

I am have trouble to phisical access.
May be someone can be suggest software method to forced detach device
from system.
In 11 that should be possible with devctl, but under 10 I'm not aware 
of anything that wouldn't involve some custom kernel level code I'm 
afraid.





Maybe I'm talking BS here, but does 'camcontrol eject' do something on 
a disk?
I wouldn't have thought so, I would expect that to only have an effect 
on removal media such as CDROM drives, but no harm in trying ;-)


Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 11:46, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 11:38:46AM +0100, Steven Hartland wrote:


How I can cancel this 24 requst?
Why this requests don't timeout (3 hours already)?
How I can forced detach this disk? (I am lready try `camcontrol reset`, 
`camconrol rescan`).
Why ZFS (or geom) don't timeout on request and don't rerouted to da18?


If they are in mirrors, in theory you can just pull the disk, isci will
report to cam and cam will report to ZFS which should all recover.

Yes, zmirror with da18.
I am surprise that ZFS don't use da18. All zpool fully stuck.

A single low level request can only be handled by one device, if that
device returns an error then ZFS will use the other device, but not until.

Why next requests don't routed to da18?
Current request stuck on da19 (unlikely, but understund), but why
stuck all pool?


Its still waiting for the request from the failed device to complete. As 
far as ZFS currently knows there is nothing wrong with the device as its 
had no failures.


You didn't say which FreeBSD version you were running?

Regards
Steve


Re: zfs, cam sticking on failed disk

2015-05-07 Thread Steven Hartland



On 07/05/2015 10:50, Slawa Olhovchenkov wrote:

On Thu, May 07, 2015 at 09:41:43AM +0100, Steven Hartland wrote:


On 07/05/2015 09:07, Slawa Olhovchenkov wrote:

I have zpool of 12 vdev (zmirrors).
One disk in one vdev out of service and stop serving reuquest:

dT: 1.036s  w: 1.000s
   L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/w   %busy Name
  0  0  0  00.0  0  00.00.0| ada0
  0  0  0  00.0  0  00.00.0| ada1
  1  0  0  00.0  0  00.00.0| ada2
  0  0  0  00.0  0  00.00.0| ada3
  0  0  0  00.0  0  00.00.0| da0
  0  0  0  00.0  0  00.00.0| da1
  0  0  0  00.0  0  00.00.0| da2
  0  0  0  00.0  0  00.00.0| da3
  0  0  0  00.0  0  00.00.0| da4
  0  0  0  00.0  0  00.00.0| da5
  0  0  0  00.0  0  00.00.0| da6
  0  0  0  00.0  0  00.00.0| da7
  0  0  0  00.0  0  00.00.0| da8
  0  0  0  00.0  0  00.00.0| da9
  0  0  0  00.0  0  00.00.0| da10
  0  0  0  00.0  0  00.00.0| da11
  0  0  0  00.0  0  00.00.0| da12
  0  0  0  00.0  0  00.00.0| da13
  0  0  0  00.0  0  00.00.0| da14
  0  0  0  00.0  0  00.00.0| da15
  0  0  0  00.0  0  00.00.0| da16
  0  0  0  00.0  0  00.00.0| da17
  0  0  0  00.0  0  00.00.0| da18
 24  0  0  00.0  0  00.00.0| da19

  0  0  0  00.0  0  00.00.0| da20
  0  0  0  00.0  0  00.00.0| da21
  0  0  0  00.0  0  00.00.0| da22
  0  0  0  00.0  0  00.00.0| da23
  0  0  0  00.0  0  00.00.0| da24
  0  0  0  00.0  0  00.00.0| da25
  0  0  0  00.0  0  00.00.0| da26
  0  0  0  00.0  0  00.00.0| da27

As result zfs operation on this pool stoped too.
`zpool list -v` don't worked.
`zpool detach tank da19` don't worked.
Application worked with this pool sticking in `zfs` wchan and don't killed.

# camcontrol tags da19 -v
(pass19:isci0:0:3:0): dev_openings  7
(pass19:isci0:0:3:0): dev_active25
(pass19:isci0:0:3:0): allocated 25
(pass19:isci0:0:3:0): queued0
(pass19:isci0:0:3:0): held  0
(pass19:isci0:0:3:0): mintags   2
(pass19:isci0:0:3:0): maxtags   255

How I can cancel this 24 requst?
Why this requests don't timeout (3 hours already)?
How I can forced detach this disk? (I am lready try `camcontrol reset`, 
`camconrol rescan`).
Why ZFS (or geom) don't timeout on request and don't rerouted to da18?


If they are in mirrors, in theory you can just pull the disk, isci will
report to cam and cam will report to ZFS which should all recover.

Yes, zmirror with da18.
I am surprise that ZFS don't use da18. All zpool fully stuck.
A single low level request can only be handled by one device, if that 
device returns an error then ZFS will use the other device, but not until.



With regards to not timing out this could be a default issue, but having

I understand, there is no universally acceptable timeout for all cases: good
disk, good saturated disk, tape, tape library, failed disk, etc.
In my case -- a failed disk. This model has already failed (another specimen)
with the same symptoms.

Maybe there are some tricks for cancelling/aborting all requests in the queue and
removing the disk from the system?

Unlikely tbh; pulling the disk, however, should.



a very quick look that's not obvious in the code as
isci_io_request_construct etc do indeed set a timeout when
CAM_TIME_INFINITY hasn't been requested.

The sysctl hw.isci.debug_level may be able to provide more information,
but be aware this can be spammy.

I already have this situation; which commands are interesting after
setting hw.isci.debug_level?
I'm afraid I'm not familiar with isci; possibly someone else who is 
can chime in.


Regards
Steve


Re: File-Backed ZFS Kernel Panic still in 10.1-RELEASE (PR 195061)

2015-04-26 Thread Steven Hartland
Looks like it got lost in the tubes, sitting with me to get info across 
to re@


On 26/04/2015 16:29, Will Green wrote:

Thanks Steven. I’ll hold off releasing the updated tutorials for now.


On 21 Apr 2015, at 17:45, Steven Hartland kill...@multiplay.co.uk wrote:

I did actually request this back in November, but I don't seem to have had a 
reply so I'll chase.

On 21/04/2015 16:23, Will Green wrote:

Hello,

I have been updating my ZFS tutorials for use on FreeBSD 10.1. To allow users 
to experiment with ZFS I use file-backed ZFS pools. On FreeBSD 10.1 they cause 
a kernel panic.

For example a simple command like the following causes a panic: zpool create 
/tmp/zfstut/disk1

This issue was identified in PR 195061: 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195061
And fixed in r274619 (2014-11-17): 
https://svnweb.freebsd.org/base?view=revisionrevision=274619

However, even on 10.1-RELEASE-p9 the kernel panic still occurs (but doesn’t on 
11-CURRENT).

Are there any plans to patch this in 10.1? I note it’s not in the errata.

My tutorials are not the only ones that use file-backed ZFS: new users 
experimenting with ZFS on FreeBSD are likely to encounter this issue.

Thanks,
Will

Re: File-Backed ZFS Kernel Panic still in 10.1-RELEASE (PR 195061)

2015-04-21 Thread Steven Hartland
I did actually request this back in November, but I don't seem to have 
had a reply so I'll chase.


On 21/04/2015 16:23, Will Green wrote:

Hello,

I have been updating my ZFS tutorials for use on FreeBSD 10.1. To allow users 
to experiment with ZFS I use file-backed ZFS pools. On FreeBSD 10.1 they cause 
a kernel panic.

For example a simple command like the following causes a panic: zpool create 
/tmp/zfstut/disk1

This issue was identified in PR 195061: 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195061
And fixed in r274619 (2014-11-17): 
https://svnweb.freebsd.org/base?view=revisionrevision=274619

However, even on 10.1-RELEASE-p9 the kernel panic still occurs (but doesn’t on 
11-CURRENT).

Are there any plans to patch this in 10.1? I note it’s not in the errata.

My tutorials are not the only ones that use file-backed ZFS: new users 
experimenting with ZFS on FreeBSD are likely to encounter this issue.

Thanks,
Will

Re: Adaptec 7805H drivers

2015-04-19 Thread Steven Hartland

On 20/04/2015 00:25, Phil Murray wrote:

Hi Ian,

Thanks for the suggestion, I tried that but had no luck


Try hint.ahcich.X.disabled=1 instead, where X is the relevant channel.
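For example, persisted in /boot/device.hints or /boot/loader.conf (the channel 
number here is just an illustration):

hint.ahcich.0.disabled="1"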


Re: Significant memory leak in 9.3p10?

2015-03-26 Thread Steven Hartland



On 26/03/2015 23:47, J David wrote:

In our case,

On Thu, Mar 26, 2015 at 5:03 PM, Kevin Oberman rkober...@gmail.com wrote:

This is just a shot in the dark and not a really likely one, but I have had
issues with Firefox leaking memory badly. I can free the space by killing
firefox and restarting it.

In our case, we can log in from the console, kill every single
user-mode process on the system except the init, login, and the
console shell, and the memory is not recovered.  Gigabytes and
gigabytes user memory of it are being held by some un-findable
anonymous persistent structure not linked to any process.  Konstantin
proposed that it was some sort of shared memory usage, but there
appears to be no way to check or investigate most types of shared
memory usage on FreeBSD.


Does vmstat -m or vmstat -z shed any light?
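e.g. (a rough sketch; the MemUse/USED columns are the ones to eyeball, and 
ipcs at least covers the SysV flavour of shared memory mentioned above):

# vmstat -m
# vmstat -z
# ipcs -m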

Regards
Steve


Re: Problems with openssl 1.0.2 update

2015-03-23 Thread Steven Hartland



On 23/03/2015 14:38, Gerhard Schmidt wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 23.03.2015 15:14, Dewayne Geraghty wrote:


On 24/03/2015 12:16 AM, Gerhard Schmidt wrote:

On 23.03.2015 13:40, Guido Falsi wrote:

On 03/23/15 11:33, Gerhard Schmidt wrote:

Hi,

we are experiencing a problem after upgrading the openssl port to openssl
1.0.2.

/usr/bin/vi started to crash after some seconds with a segfault.
/rescue/vi works just fine. After deleting the openssl 1.0.2 package
everything works just fine again. After installing the old openssl 1.0.1_18
package it still works just fine.

It seems that besides vi, bash also has this problem. Is anybody
experiencing the same, or is this something specific to my system?

I'm running FreeBSD 10.1 updated tonight.

I am seeing runtime problems with asterisk13 (which I maintain), caused
by the OpenSSL update fallout.

In this case, after some analysis, I concluded the problem is the
libsrtp port requiring OpenSSL from ports (for a reason), causing
asterisk to link to that too, which would be correct.

Asterisk also uses the security/trousers port, which links to the system
OpenSSL. This causes a conflict which now results in asterisk
segfaulting and ceasing to work.

I'm investigating what can be done about this. As a local solution I can
force the trousers port to link against OpenSSL from ports, but this
will not fix the general problem. As a port maintainer I only see
modifying the trousers port to depend on ports OpenSSL as a solution; is
this acceptable?


Most ports link against the port openssl if it's installed and against the
system openssl if not. That should be the preferred way to handle the problem.

I don't know if an incompatibility between system and port openssl is a
problem. I've removed the port-built openssl from this server completely.

As far as I can see the problem is with openldap-client built against the
ports openssl and used by the nss_ldap or pam_ldap module. I will do some
testing when my test host is ready. Testing on a production server is
not that good :-)

Regards
Estartu



I only use openssl from ports and have just completed a rebuild of 662
packages for server requirements and include: trousers, ldap client and
server, and 71 other ports built without any issues on amd64 10.1Stable
using clang.  Not so successful on i386 but I don't believe its related
to openssl.

I never had an issue building anything. Using it is the problem. Set up
authentication via ldap (nss_ldap) and you are in hell. Bash crashes
when you try to log in. vi crashes when you try to change a file.
Anything that uses nsswitch has some problems.

Does rebuilding all ports with WITH_OPENSSL_PORT=yes set in 
/etc/make.conf help?
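i.e. something like the following (portmaster here is just one way to rebuild 
everything):

# echo 'WITH_OPENSSL_PORT=yes' >> /etc/make.conf
# portmaster -af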


Regards
Steve


Re: ZFS hanging on too hard

2013-10-13 Thread Steven Hartland
- Original Message - 
From: Daniel O'Connor docon...@gsoft.com.au

Hi all,
I'm trying to set up a ZFS mirror system with a USB disk as backup. The backup 
disk is a ZFS pool which I am zfs send'ing to.

However, I find that if the disk is disconnected while mounted then things go 
pear-shaped...
root@gateway:~ # zpool status -v
  pool: backupA
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-HC
  scan: none requested
config:

 NAME  STATE READ WRITE CKSUM
 backupA   UNAVAIL  0 0 0
   1877640355  REMOVED  0 0 0  was /dev/da0

errors: List of errors unavailable (insufficient privileges)

(but I am root..)

root@gateway:~ # zpool online pool /dev/da0
cannot online /dev/da0: no such device in pool

?!

Anyone have any ideas?


First, "pool" is not your pool name; it's "backupA", so try:
zpool online backupA /dev/da0

If that still fails try:
zpool online backupA 1877640355

   Regards
   Steve




Re: ZFS hanging on too hard

2013-10-13 Thread Steven Hartland
- Original Message - 
From: Daniel O'Connor docon...@gsoft.com.au


On 14/10/2013, at 2:32, Steven Hartland kill...@multiplay.co.uk wrote:
 First pool is not your pool name its backupA so try:
 zpool online backupA /dev/da0
 
 If that still fails try:

 zpool online backupA 1877640355


I get..
root@gateway:~ # zpool online backupA /dev/da0
cannot online /dev/da0: pool I/O is currently suspended
root@gateway:~ # zpool online backupA 1877640355
cannot online 1877640355: pool I/O is currently suspended

It seems that it does not recognise the disk is present :(

I tried zpool export but that hangs, eg
root@gateway:~ # zpool export -f backupA
load: 0.04  cmd: zpool 1384 [tx-tx_sync_done_cv)] 2.63r 0.00u 0.00s 0% 2804k
load: 0.04  cmd: zpool 1384 [tx-tx_sync_done_cv)] 2.79r 0.00u 0.00s 0% 2804k
load: 0.04  cmd: zpool 1384 [tx-tx_sync_done_cv)] 2.99r 0.00u 0.00s 0% 2804k


Hmm, I guess you're going to have to reboot, which is not ideal.

   Regards
   Steve


