No USB?

2017-04-08 Thread Kevin Oberman
Today, for the first time in a couple of weeks, I plugged in a USB drive to
my 11-STABLE system (r316552). No device was created and usbconfig only
sees EHCI hubs:
ugen1.1:  at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps)
pwr=SAVE (0mA)
ugen0.1:  at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps)
pwr=SAVE (0mA)

Seems like I should be seeing UHCI stuff, too. Even internal devices like
my webcam don't show up.

I'm running a GENERIC kernel with the following exceptions:
nooptions SCHED_ULE   # ULE scheduler
options   SCHED_4BSD  # 4BSD scheduler
options   IEEE80211_DEBUG

I tried updating my system and that made no difference. I booted up Windows
and it sees the USB drive just fine.

Anything I should try or look at to figure out what is happening?
I really want to get an image of my system before moving in three days.
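
For what it's worth, here is the rough checklist I plan to work through next
(stock tools only; the grep patterns are just my guesses at what to look for):

  usbconfig list                         # enumerate controllers and ugen devices
  dmesg | grep -Ei 'uhci|ehci|xhci|usb'  # did the host controllers attach at boot?
  pciconf -lv | grep -B3 -i usb          # are the UHCI functions still visible on PCI?
  kldstat                                # anything USB-related failing to load?
  camcontrol devlist                     # does the da device for the stick appear at all?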
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkober...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683


Re: mountroot failing on zpools in Azure after upgrade from 10 to 11 due to lack of waiting for da0

2017-04-08 Thread Edward Tomasz Napierała
On 0313T1206, Pete French wrote:
> I have a number of machines in Azure, all booting from ZFS and, until
> the weekend, running 10.3 perfectly happily.
> 
> I started upgrading these to 11. The first went fine, the second would
> not boot. Looking at the boot diagnostics it is having problems finding the
> root pool to mount. I see this in the diagnostic output:
> 
>   storvsc0:  on vmbus0
>   Solaris: NOTICE: Cannot find the pool label for 'rpool'
>   Mounting from zfs:rpool/ROOT/default failed with error 5.
>   Root mount waiting for: storvsc
>   (probe0:blkvsc0:0:storvsc1: 0:0):  on 
> vmbus0
>   storvsc scsi_status = 2
>   (da0:blkvsc0:0:0:0): UNMAPPED
>   (probe1:blkvsc1:0:1:0): storvsc scsi_status = 2
>   hvheartbeat0:  on vmbus0
>   da0 at blkvsc0 bus 0 scbus2 target 0 lun 0
> 
> As you can see, the drive da0 only appears after it has tried, and failed,
> to mount the root pool.

Does the same problem still happen with recent 11-STABLE?



Re: mountroot failing on zpools in Azure after upgrade from 10 to 11 due to lack of waiting for da0

2017-04-08 Thread Edward Tomasz Napierała
On 0316T1004, Warner Losh wrote:
> [[ stupid mouse ]]
> 
> On Thu, Mar 16, 2017 at 10:01 AM, Warner Losh  wrote:
> > On Thu, Mar 16, 2017 at 6:06 AM, Pete French  wrote:
> >>> I don't like the delay and retry approach at all.
> >>
> >> It's not ideal, but it is what we do for UFS after all...
> >>
> >>> Imagine that you told the kernel that you want to mount your root from a
> >>> ZFS pool which is on a USB drive which you have already thrown out.
> >>> Should the kernel just keep waiting for that pool to appear?
> >>
> >> I'm not talking about an infinite loop here, just making it honour
> >> the 'vfs.mountroot.timeout' setting like it does for UFS. So it
> >> should wait for the timeout I have set and then proceed as it would if
> >> there had been no timeout. Default behaviour is for it to behave as it
> >> does now; it's only when you need the retry that you enable it.
> >
> > Put another way: With UFS it keeps retrying until the timeout expires.
> > If the first try succeeds, the boot is immediate.
> >
> >> Right now this works for UFS, but not for ZFS, which is an inconsistency
> >> that I don't like, and also means I am being forced down a UFS root
> >> path if I require this.
> >
> > Yes. ZFS is special, but I don't think the assumptions behind its
> > specialness are quite right:
> >
> > /*
> >  * In case of ZFS and NFS we don't have a way to wait for
> >  * specific device.  Also do the wait if the user forced that
> >  * behaviour by setting vfs.root_mount_always_wait=1.
> >  */
> > if (strcmp(fs, "zfs") == 0 || strstr(fs, "nfs") != NULL ||
> > dev[0] == '\0' || root_mount_always_wait != 0) {
> > vfs_mountroot_wait();
> > return (0);
> > }
> >
> > So you can make it always succeed by forcing the wait, but that's lame...
> 
> Later we check to see if a device by a given name is present. Since
> ZFS doesn't present its pool names as devices to the rest of the
> system, that's not going to work quite right. That's the real reason
> that ZFS is special. It isn't that we can't wait for individual
> devices, it's that we can't wait for the 'mount token' that we use for
> what to mount to be 'ready'. NFS suffers from the same problem, but
> since its device is always ready (it's stateless), it isn't as
> noticeable.

Not sure what you mean.  The reason we handle ZFS and NFS in
a different way (always waiting) is _exactly_ that we don't
have a way to wait for the individual device, like we can for
e.g. UFS, and so we have to fall back to mount tokens, which were
used unconditionally before 11.0.
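
For reference, the knob being discussed is a loader tunable; a sketch of what
Pete is using for UFS roots would be something like this in /boot/loader.conf
(the value is only an example):

  vfs.mountroot.timeout="30"      # retry window for the root mount; per this thread
                                  # it is honoured for UFS roots but not for ZFS
  vfs.root_mount_always_wait="1"  # forces vfs_mountroot_wait(); the quoted code shows
                                  # a ZFS root already takes that path unconditionally

In other words, the always-wait tunable does not add the timed-retry behaviour
Pete is asking for.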
