A couple of things that I've discovered over time that might help:
Don't ever use the root user for zpool queries such as "zpool status". If you
have a really bad failing disk, a zpool status command can take forever to
complete when run as root. A "su nobody -c 'zpool status'" will return results.
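A minimal sketch of the comparison (the pool name 'tank' is hypothetical; the
unprivileged query reportedly avoids the device probing that makes the root
invocation hang):
time zpool status tank                   # as root: can hang on a failing disk
time su nobody -c 'zpool status tank'    # unprivileged: returns promptly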
On Thu, 22 Jun 2017, Schweiss, Chip wrote:
I'm talking about an offline pool. I started this thread after rebooting
a server that is part of an HA pair. The other server has the pools
online. It's been over 4 hours now and it still hasn't completed its disk
scan.
Every tool I have that
Hi,
Certain commands (in particular during attach) are sent by mptsas itself;
these have a timeout set in the driver and are not issued by sd, so these
commands are not affected by changing those values. See for example
mptsas_access_config_page()
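(As an aside, one way to confirm what the running kernel actually uses for the
sd timeout, assuming mdb is available and the sd module is loaded:
echo 'sd_io_time/D' | mdb -k    # prints the current value in decimal
This only reflects sd-issued commands, not the mptsas-internal ones described
above.)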
- Jeffry
On Thu, Jun 22, 2017 at 10:12 PM,
I'm talking about an offline pool. I started this thread after rebooting
a server that is part of an HA pair. The other server has the pools
online. It's been over 4 hours now and it still hasn't completed its disk
scan.
Every tool I have that helps me locate disks suffers from the same
Have you been able to try offlining it in the zpool?
zpool offline thepool <device>
I'm assuming the pool has some redundancy which would allow for this.
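For example (device name hypothetical):
zpool offline thepool c1t5000C500A1B2C3D4d0
zpool status thepool    # the disk should now be reported as OFFLINE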
/dale
> On Jun 22, 2017, at 11:54 AM, Schweiss, Chip wrote:
>
> Whenever a disk goes south, several disk-related tasks
On Thu, Jun 22, 2017 at 11:05 AM, Michael Rasmussen wrote:
>
> > I thought this /etc/system setting would reduce the timeout to 5 seconds:
> > set sd:sd_io_time = 5
> >
> I think it expects a hex value so try 0x5 instead.
>
>
Unfortunately, no, I've tried that too.
-Chip
> --
On Thu, 22 Jun 2017 10:54:25 -0500
"Schweiss, Chip" wrote:
> I thought this /etc/system setting would reduce the timeout to 5 seconds:
> set sd:sd_io_time = 5
>
I think it expects a hex value so try 0x5 instead.
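For reference, the corresponding /etc/system entry would look like this (hex
spelling as suggested above; /etc/system changes only take effect after a
reboot):
* shorten the sd driver command timeout from the 60-second default
set sd:sd_io_time = 0x5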
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG
Whenever a disk goes south, several disk-related tasks become painfully
slow. Boot-up times can stretch into hours while the disk scans complete.
The logs slowly fill with messages of this type:
genunix: WARNING /pci@0,0/pci8086,340c@5/pci15d9,400@0 (mpt_sas0):
Timeout of 60 seconds expired with 1
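A quick way to gauge how often this is happening, assuming the default syslog
location:
grep -c 'Timeout of 60 seconds expired' /var/adm/messages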
Hi Dan,
Thanks for pointing this out. No the service is not running:
svcs -a | grep cap
Oliver Weinmann
Senior Unix VMWare, Storage Engineer
Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: + 49 (0)6151 8257 744 | Fax: +49 (0)6151 8257 799
> On Jun 22, 2017, at 3:13 AM, Oliver Weinmann
> wrote:
>
> Hi,
>
> Don’t think so:
>
> svcs -vx rcapd
>
> shows nothing.
You're not looking for the right thing.
neuromancer(~)[0]% pgrep rcapd
340
neuromancer(~)[0]% svcs -a | grep cap
online
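If it does show up like that, it can be disabled through SMF (FMRI as on stock
illumos; adjust if yours differs):
svcadm disable system/rcap
pgrep rcapd    # should now print nothing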
Hi,
Running the zfs mount -a from / shows the same errors.
I now ran the following commands to correct the mountpoints:
Re-enable inheritance:
zfs inherit -r mountpoint hgst4u60/ReferencePR
Reset mountpoint on the root folder:
zfs set mountpoint=/hgst4u60/ReferencePR hgst4u60/ReferencePR
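To verify the result, something along these lines:
zfs get -r mountpoint hgst4u60/ReferencePR    # should show the inherited paths
zfs mount -a                                  # should now complete cleanly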
On Thu, Jun 22, 2017 at 3:45 PM, Oliver Weinmann <
oliver.weinm...@telespazio-vega.de> wrote:
> One more thing I just noticed is that the system seems to be unable to
> mount directories:
>
>
>
> root@omnios01:/hgst4u60/ReferenceAC/AGDEMO# /usr/sbin/zfs mount -a
>
> cannot mount
One more thing I just noticed is that the system seems to be unable to mount
directories:
root@omnios01:/hgst4u60/ReferenceAC/AGDEMO# /usr/sbin/zfs mount -a
cannot mount '/hgst4u60/ReferenceAC': directory is not empty
cannot mount '/hgst4u60/ReferenceDF': directory is not empty
cannot
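"directory is not empty" usually means stray files were created under the
mountpoint while the dataset was unmounted. Two common ways to deal with it
(the -O overlay flag is part of illumos/Solaris zfs mount):
ls -lA /hgst4u60/ReferenceAC        # inspect what is blocking the mount
zfs mount -O hgst4u60/ReferenceAC   # or overlay-mount on top of it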
Hi Stephan,
It seems that the problem is not VMware related, as we also have non-VMware NFS
shares that are disappearing too. We have joined the OmniOS box to our win2k8 r2
domain. Previously we also set up an LDAP client, but we had several problems
with it, as you can see from the fmdump, and so we
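(For anyone retracing the fmdump reference, the logged error telemetry can be
dumped with, e.g.:
fmdump -eV | head    # verbose listing of recent error events)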
Hi Oliver,
From: "Oliver Weinmann"
To: "Tobias Oetiker"
CC: "omnios-discuss"
Sent: Thursday, 22 June 2017 09:13:27
Subject: Re: [OmniOS-discuss] Losing NFS shares
Hi,
Don’t think so:
Hi Oliver,
- Original Message -
> From: "Oliver Weinmann"
> To: omnios-discuss@lists.omniti.com
> Sent: Thursday, 22 June 2017 08:45:14
> Subject: [OmniOS-discuss] Losing NFS shares
> Hi,
> we are using OmniOS for a few months now and have
Hi,
Don’t think so:
svcs -vx rcapd
shows nothing.
Oliver Weinmann
Senior Unix VMWare, Storage Engineer
Telespazio VEGA Deutschland GmbH
Europaplatz 5 - 64293 Darmstadt - Germany
Ph: + 49
Oliver,
are you running rcapd ? we found that (at least out of the box) this thing
wreaks havoc on both NFS and iSCSI sharing ...
cheers
tobi
- On Jun 22, 2017, at 8:45 AM, Oliver Weinmann
wrote:
> Hi,
> we are using OmniOS for a few months now and
Hi,
we have been using OmniOS for a few months now and are having big trouble with
stability. We mainly use it for VMware NFS datastores. Over the last 3 nights
we lost all NFS datastores and VMs stopped running. I noticed that even though
zfs get sharenfs shows folders as shared, they become inaccessible.
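Some commands that may help narrow this down when a share drops (dataset name
taken from later messages in this thread):
zfs get sharenfs hgst4u60/ReferencePR   # what ZFS thinks is shared
sharemgr show -vp                       # what the NFS server actually exports
svcs -xv nfs/server                     # any SMF-level faults on the NFS service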