On 12/04/2020 18:08, Stefan Bethke wrote:
Am 12.04.2020 um 19:03 schrieb Slawa Olhovchenkov :
Now I can't boot into single user mode anymore, ZFS just waits forever, and the
kernel is printing an endless chain of SATA error messages.
I really need a way to remove the broken disk before ZFS
It may well depend on the extent of the deletes occurring.
Have you tried disabling TRIM to see if it eliminates the delay?
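For what it's worth, a minimal sketch of how I'd test that on the legacy ZFS
TRIM code (assuming the vfs.zfs.trim.enabled tunable exists on your release):

vfs.zfs.trim.enabled=0         # in /boot/loader.conf: disable ZFS TRIM for the next boot
sysctl vfs.zfs.trim.enabled    # after the reboot, confirm it reads back as 0

If the SATA errors and the hang go away with TRIM off, that points the finger
at the delete path rather than normal reads and writes.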
Regards
Steve
On 28/11/2019 09:59, Eugene Grosbein wrote:
28.11.2019 14:26, Steven Hartland wrote:
As you mentioned it’s on SSD you could be suffering from
As you mentioned it’s on SSD you could be suffering from poor TRIM
performance from your devices if you run gstat -pd you’ll be able to get an
indication if this is the case.
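For example (a sketch; the exact column layout varies a little by release):

gstat -pd -I 1s    # physical providers only, once a second, including delete (TRIM) stats

A consistently high d/s with large ms/d against the SSDs, while %busy sits
near 100, would be a good indication the deletes are what's hurting you.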
On Thu, 28 Nov 2019 at 06:50, Eugene Grosbein wrote:
> Hi!
>
> Is it normal that "zfs destroy" for one ZVOL with
Great to hear you got your data back even after all the terrible luck you
suffered!
Regards
Steve
On Fri, 7 Jun 2019 at 00:49, Michelle Sullivan wrote:
> Michelle Sullivan wrote:
> >> On 02 May 2019, at 03:39, Steven Hartland
> wrote:
> >>
> >>
I disagree; having them batched causes us less work, not more. As others
have said, it's one update not many, which results in one outage of systems
that need patching, not many.
Regards
Steve
On Wed, 15 May 2019 at 16:48, Julian H. Stacey wrote:
> Hi, Reference:
> > From: Alan Somers
>
On 01/05/2019 15:53, Michelle Sullivan wrote:
Paul Mather wrote:
On Apr 30, 2019, at 11:17 PM, Michelle Sullivan
wrote:
Been there done that though with ext2 rather than UFS.. still got
all my data back... even though it was a nightmare..
Is that an implication that had all your data
wrote:
On 4/20/2019 10:50, Steven Hartland wrote:
Have you eliminated geli as possible source?
No; I could conceivably do so by re-creating another backup volume set
without geli-encrypting the drives, but I do not have an extra set of
drives of the capacity required laying around to do that. I
Have you eliminated geli as possible source?
I've just setup an old server which has a LSI 2008 running and old FW
(11.0) so was going to have a go at reproducing this.
Apart from the disconnect steps below is there anything else needed e.g.
read / write workload during disconnect?
mps0:
On 18/01/2019 10:34, Thomas Steen Rasmussen wrote:
On 1/16/19 8:16 PM, Thomas Steen Rasmussen wrote:
On 1/16/19 6:56 PM, Steven Hartland wrote:
PS: are you going to file a PR ?
Yes here https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235005
Hello all,
A quick follow up
On 16/01/2019 17:33, Pete French wrote:
I have confirmed that pfsync is the culprit. Read on for details.
Excellent work. I'm home now, so won't get a chance to put this into
practice until tomorrow unfortunately, but it's brilliant that you have
confirmed it.
I tried disabling pfsync and
I can't see how any of those would impact carp unless pf is now
incorrectly blocking carp packets, which seems unlikely from that commit.
Questions:
* Are you running a firewall?
* What does sysctl net.inet.carp report?
* What exactly does ifconfig report about your carp on both hosts?
*
What triggered the issue, was it a reboot and how many disks?
On 18/09/2018 08:59, The Doctor via freebsd-stable wrote:
This is about the 3rd time within 3 weeks that I have had to
rebuild a server (backups are great, but restores are long).
What is the 'formula' for the following:
zfs i/o on your
The recommended size for a boot partition has been 512K for a while.
We always put swap directly after it, so if a resize is needed it's easy
without any resilvering.
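For illustration, a minimal sketch of that layout on a blank GPT disk (device
name, swap size and alignment are just examples, not recommendations):

gpart create -s gpt ada0                     # fresh GPT on the example disk
gpart add -t freebsd-boot -s 512k ada0       # 512K boot partition
gpart add -t freebsd-swap -s 8g -a 4k ada0   # swap straight after boot, easy to grow into later
gpart add -t freebsd-zfs -a 4k ada0          # the rest of the disk for the pool
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0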
If your pool is made up of partitions which are only 34 blocks smaller
than your zfs partition you're likely going to need to
That is very hw and use case dependent.
The reason we originally sponsored the project to add TRIM to ZFS was that
in our case without TRIM the performance got so bad that we had to secure
erase disks every couple of weeks as they simply became so slow they were
unusable.
Now admittedly that
You can indeed tune things; here are the relevant sysctls:
sysctl -a | grep trim |grep -v kstat
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1
vfs.zfs.vdev.trim_max_active: 64
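For instance, to experiment you could drop something like this into
/boot/loader.conf (values purely illustrative, not recommendations):

vfs.zfs.trim.txg_delay=8           # TXGs to hold freed blocks before TRIMming them
vfs.zfs.vdev.trim_max_active=32    # cap on concurrent TRIM requests per vdev

and reboot, or poke the ones that are writable at runtime directly with
sysctl.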
Have you tried setting dumpdev to AUTO in rc.conf to see if you can obtain
a panic dump? You could also try disabling reboot on panic using the
sysctl
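Roughly what I mean (the reboot-on-panic knob is from memory, so check it
exists on your release before relying on it):

dumpdev="AUTO"                           # in /etc/rc.conf: let rc pick a swap device for crash dumps
sysctl kern.panic_reboot_wait_time=-1    # if present: wait forever instead of rebooting after a panic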
On Mon, 12 Mar 2018 at 22:18, Rick Miller wrote:
> Hi all,
>
> Thanks in advance to anyone that might be able to help. I
Try reducing the disk connection speed down to see if that helps.
On Sun, 24 Sep 2017 at 06:49, Graham Menhennitt
wrote:
> G'day all,
>
> I'm setting up a machine running 11-Stable on a PC Engines APU2C board.
> It has a 16Gb SSD as its first disk (ada0), and a Seagate
If you have remote console via IPMI it might be an idea to leave an
active top session so you can see the last point at which the machine
stopped. It may provide a pointer to a bad process, e.g. one eating all the
machine's RAM.
Regards
Steve
On 14/09/2017 11:30, wishmaster wrote:
Hi!
My
Could you post the decoded crash info from /var/crash/...
I would also create a bug report:
https://bugs.freebsd.org/bugzilla/enter_bug.cgi?product=Base%20System
Regards
Steve
On 12/09/2017 14:40, Mark Martinec wrote:
A couple of days ago I have upgraded an Intel box from FreeBSD 10.3
Based on your boot info you're using mps, so this could be related to
the mps fix committed to stable/11 today by ken@
https://svnweb.freebsd.org/changeset/base/321415
re@ cc'ed as this could cause hangs for others too on 11.1-RELEASE if
this is the case.
Regards
Steve
On 24/07/2017
Given how old 9.1 is, even if you did investigate it's unlikely it would
get fixed.
I'd recommend updating to 11.0-RELEASE and see if the panic still happens.
On 21/06/2017 17:35, Efraín Déctor wrote:
Hello.
Today one of my servers crashed with a kernel panic. I got this message:
panic:
That looks like it's trying to do an erase of the sectors, which is
likely failing due to the device being a HW RAID; have you tried with
nodiscard set?
On 08/05/2017 16:42, HSR Hackspace wrote:
Hi folks;
I'm trying to format a 300 GB partition on an x86_64 box running
BSD 10.1 with HW
It's not an external vulnerability in the DRAC is it, as that seems to be
more and more common these days?
On Tue, 18 Apr 2017 at 15:10, tech-lists wrote:
> On 18/04/2017 13:34, Kurt Jaeger wrote:
>
> > 1)
> > echo '-Dh -S115200' > /boot.config
> > 2)
> > vi
I seem to remember this happening when I tried it too, likely it blows
the stack, what's your panic?
When doing similar tracing before I've flagged the relevant methods with
__noinline.
On 22/02/2017 20:47, Lev Serebryakov wrote:
Hello Freebsd-stable,
Now if you build zfs.ko with -O0
On 23/01/2017 07:24, Sergei Akhmatdinov wrote:
On Sun, 22 Jan 2017 22:57:46 -0800
Walter Parker wrote:
For decades there has always been a warning not to do parallel builds of
the kernel or the world (Linux kernel builds also suggest not to do this).
Every once in a while,
On 12/01/2017 22:57, Stefan Bethke wrote:
Am 12.01.2017 um 23:29 schrieb Stefan Bethke :
I’ve just created two pools on a freshly partitioned disk, using 11.0 amd64,
and the shift appears to be 9:
# zpool status -v host
pool: host
state: ONLINE
status: One or more devices
On 12/01/2017 21:12, Jeremie Le Hen wrote:
Hey Steven,
(Please cc: me on reply)
On Thu, Jan 12, 2017 at 1:32 AM, Steven Hartlan
The reason I'd recommend 512k for boot is to provide room for expansion
moving forward, as repartitioning to upgrade is a scary / hard thing to do.
Remember it
On 11/01/2017 22:58, Jeremie Le Hen wrote:
(Sorry I had to copy-paste this email from the archives to a new
thread, because I'm not subscribed to -stable@. Would you mind cc:ing
me next time please?)
As you're not at the boot loader stage yet for the keyboard, enabling legacy
USB keyboard / mouse
As you're not at the boot loader stage yet for the keyboard, enabling legacy
USB keyboard / mouse support in BIOS may help.
If you see issues with the keyboard after the kernel takes over, setting
hint.ehci.0.disabled=1 may help.
If you're installing 11.x then the guide's boot partition size is out of date.
find also has -delete which avoids the exec overhead, not much of an impact
here but worth noting if you're removing lots.
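A quick illustration of the difference (path and pattern made up):

find /var/tmp/cache -type f -name '*.tmp' -exec rm -- {} +   # spawns rm for the matches
find /var/tmp/cache -type f -name '*.tmp' -delete            # unlinks inside find, no exec at all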
On 18 December 2016 at 00:38, Adam Vande More wrote:
> On Sat, Dec 17, 2016 at 3:01 PM, David Marec
> wrote:
>
> > [I had
Obviously this shouldn't happen but would need a stack trace to identify
the cause.
If you want to disable TRIM on ZFS you should really use:
vfs.zfs.trim.enabled=0
On 09/12/2016 13:13, Eugene M. Zheganin wrote:
Hi.
Recently I've encountered the issue with "slow TRIM" and Sandisk SSDs,
so I
Are you sure your kernel and world are in sync?
On 01/12/2016 23:53, Miroslav Lachman wrote:
There is a minor problem with the "zpool get all" command if one of two
pools is unavailable:
# zpool get all
Assertion failed: (nvlist_lookup_nvlist(config, "feature_stats",
) == 0), file
On 29/11/2016 12:30, Eugene M. Zheganin wrote:
Hi.
On 28.11.2016 23:07, Steven Hartland wrote:
Check your gstat with -dp so you also see deletes, it may be that your
drives have a very slow TRIM.
Indeed, I see a bunch of delete operations, and when TRIM is disabled my
engineers report
Check your gstat with -dp so you also see deletes, it may be that your
drives have a very slow TRIM.
On 28/11/2016 17:54, Eugene M. Zheganin wrote:
Hi,
recently we bought a bunch of "Sandisk CloudSpeed Gen. II Eco Channel"
disks (the model name by itself should already have made me suspicious) for
At one point lz4 wasn't supported on boot; I seem to remember that may
have been addressed but I'm not 100% sure.
If it hasn't and your kernel is now compressed that may explain it?
Have you tried booting from a live cd and checking the status of the pool?
On 22/11/2016 08:43, Pete French wrote:
When you say corrupt what do you mean, specifically what's the output
from zpool status?
One thing that springs to mind if zpool status doesn't show any issues, and:
1. You have large disks
2. You have performed an update and not rebooted since.
You may be at the scenario where there's enough
On 21/10/2016 10:04, Eugene M. Zheganin wrote:
Hi.
On 21.10.2016 9:22, Steven Hartland wrote:
On 21/10/2016 04:52, Eugene M. Zheganin wrote:
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps
On 21/10/2016 04:52, Eugene M. Zheganin wrote:
Hi.
On 20.10.2016 21:17, Steven Hartland wrote:
Do you have atime enabled for the relevant volume?
I do.
If so disable it and see if that helps:
zfs set atime=off
Nah, it doesn't help at all.
As with Jonathan, what does gstat -pd and top
2016 at 12:56, Steven Hartland <kill...@multiplay.co.uk> wrote:
[...]
When you see the stalling what does gstat -pd and top -SHz show?
On my dev box:
1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/s
On 20/10/2016 23:48, Jonathan Chen wrote:
On 21 October 2016 at 11:27, Steven Hartland <kill...@multiplay.co.uk> wrote:
On 20/10/2016 22:18, Jonathan Chen wrote:
On 21 October 2016 at 09:09, Peter <p...@citylink.dinoex.sub.org> wrote:
[...]
I see this on my pgsql_tmp dirs (wh
On 20/10/2016 22:18, Jonathan Chen wrote:
On 21 October 2016 at 09:09, Peter wrote:
[...]
I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are
Do you have atime enabled for the relevant volume?
If so disable it and see if that helps:
zfs set atime=off
Regards
Steve
On 20/10/2016 14:47, Eugene M. Zheganin wrote:
Hi.
I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I
On 17/10/2016 22:50, Karl Denninger wrote:
I will make some effort on the sandbox machine to see if I can come up
with a way to replicate this. I do have plenty of spare larger drives
laying around that used to be in service and were obsolesced due to
capacity -- but what I don't know if
On 17/10/2016 20:52, Andriy Gapon wrote:
On 17/10/2016 21:54, Steven Hartland wrote:
You're hitting stack exhaustion, have you tried increasing the kernel stack
pages?
It can be changed from /boot/loader.conf
kern.kstack_pages="6"
Default on amd64 is 4 IIRC
Steve,
perhaps you
On 10/17/2016 15:16, Steven Hartland wrote:
Be good to confirm it's not an infinite loop by giving it a good bump
first.
On 17/10/2016 19:58, Karl Denninger wrote:
I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?
On 10/17/2016 13
Be good to confirm it's not an infinite loop by giving it a good bump first.
On 17/10/2016 19:58, Karl Denninger wrote:
I can certainly attempt setting that higher but is that not just
hiding the problem rather than addressing it?
On 10/17/2016 13:54, Steven Hartland wrote:
You're hitting
You're hitting stack exhaustion, have you tried increasing the kernel
stack pages?
It can be changed from /boot/loader.conf
kern.kstack_pages="6"
Default on amd64 is 4 IIRC
On 17/10/2016 19:08, Karl Denninger wrote:
The target (and devices that trigger this) are a pair of 4Gb 7200RPM
SATA
Almost certainly it's TRIMming the drives; try setting the sysctl
vfs.zfs.vdev.trim_on_init=0
On 22/09/2016 12:54, Eugene M. Zheganin wrote:
Hi.
Recently I spent a lot of time setting up various zfs installations, and
I got a question.
Often when creating a raidz on disks considerably big (>~
Yes, but you need to boot with EFI as that's the only mode that
supports NVMe boot. We have an NVMe-only ZFS box and it works fine.
On Thursday, 15 September 2016, Andrey Cherkashin
wrote:
> Hi,
>
> I know FreeBSD supports NVMe as root device, but does it support it as
>
That file is not accessible
On 03/09/2016 10:51, Volodymyr Kostyrko wrote:
Hi all.
Got one host without keyboard so can't dump it.
Screenshot: http://limb0.b1t.name/incoming/IMG_20160903_120545.jpg
This is MINIMAL kernel with minor additions.
On 21/07/2016 13:52, Andriy Gapon wrote:
On 21/07/2016 15:25, Karl Denninger wrote:
The crash occurred during a backup script operating, which is (roughly)
the following:
zpool import -N backup (mount the pool to copy to)
iterate over a list of zfs filesystems and...
zfs rename fs@zfs-base
The panic was due to stack exhaustion; I haven't looked into why the stack was so deep.
On 20/07/2016 15:32, Karl Denninger wrote:
The panic occurred during a zfs send/receive operation for system
backup. I've seen this one before, unfortunately, and it appears
that it's still there -- may be related to
Does adding the following to /boot/loader.conf make any difference?
hw.memtest.tests="0"
On 28/06/2016 14:59, Miroslav Lachman wrote:
I installed FreeBSD 10.3 on brand new machine Supermicro X11SSW-F. It
sits on top of 4x 1TB Samsung SSDs on ZFS RAIDZ2.
The booting is painfully slow from BTX
On 17/05/2016 08:49, Borja Marcos wrote:
On 05 May 2016, at 16:39, Warner Losh wrote:
What do you think? In some cases it’s clear that TRIM can do more harm than
good.
I think it’s best we not overreact.
I agree. But with this issue the system is almost unusable for now.
I wouldn't rule out a bad cpu as we had a very similar issue and that's
what it was.
Quick way to confirm is to move all the dram from the disabled CPU to one
of the other CPUs and see if the issue stays away with the current CPU
still disabled.
If that's the case it's likely the on chip memory
I thought that was in 10.3 as well?
On 28/04/2016 11:55, krad wrote:
I think the new pivotroot type stuff in 11 may help a lot with this
https://www.freebsd.org/news/status/report-2015-10-2015-12.html#Root-Remount
On 28 April 2016 at 10:31, Malcolm Herbert wrote:
On Thu,
On 05/04/2016 23:09, Warren Block wrote:
On Tue, 5 Apr 2016, Steven Hartland wrote:
On 05/04/2016 20:48, Warren Block wrote:
Actually, the more I think about it, using bootcode -p to write the
entire EFI partition seems dangerous. Unless it is surprisingly
smart, it will wipe out any
On 05/04/2016 20:48, Warren Block wrote:
On Tue, 5 Apr 2016, Boris Samorodov wrote:
05.04.16 12:30, Trond Endrestøl wrote:
What am I doing wrong? Can't gpart(8) write both the pmbr and the efi
image as a single command? Is it an off-by-one error in gpart(8)?
gpart bootcode -b /boot/pmbr
On 07/03/2016 16:43, Will Green wrote:
On 4 Mar 2016, at 18:49, Mark Dixon wrote:
Will Green sundivenetworks.com> writes:
I am happy to test patches and/or current on this server if that helps. If
you want more details on the
motherboard/system I have started a post on
I have an inkling that r293673 may be at fault here; can you try
reverting that change and see if it fixes the issue?
On 22/02/2016 16:35, Andy Carrel via freebsd-stable wrote:
I've created a 10.3-BETA2 image for Google Compute Engine using swills'
script and am getting a panic on boot when
On 14/02/2016 00:47, claudiu vasadi wrote:
Hello all,
While trying to boot 10.3 amd64 uefi disk1 BETA1 and BETA2 iso on a Mac Pro
2008, I get the following message:
BETA1 -
http://s76.photobucket.com/user/da1_27/media/10.3-BETA1_zpswjatgfg2.jpg.html
BETA2 -
On 12/02/2016 20:36, Thomas Laus wrote:
I have a new Asus H170-Plus-D3 motherboard that will be used for a DOM0 Xen
Server. It uses an Intel i5-6300 processor and a Samsung 840 EVO SSD. I
would like to use ZFS on this new installation. The Xen Kernel does not
have UEFI support at this time,
Some more information about your environment would be helpful:
1. What revision of stable/10 are you running?
2. What workloads are you running?
3. What's the output of procstat -k -k when this happens (assuming it's
possible to run)?
4. What's the output of sysctl -a | grep vnode, usually and when
Try adding this to /boot/loader.conf:
hw.mfi.mrsas_enable="1"
Regards
Steve
On 02/02/2016 17:06, Zara Kanaeva wrote:
Dear list,
I have one Fujitsu server with an LSI SAS 3108 RAID Controller. This LSI
SAS 3108 RAID Controller is supported by the mpr driver
Investigating, strangely enough this builds and cleans just fine from
buildenv.
On 29/01/2016 17:47, Glen Barber wrote:
CC'd smh, who committed an update recently in this file. I have
confirmed the failure (but do not understand why I did not see it when
testing the patch...).
Steven, can
On 29/01/2016 18:24, John wrote:
On Fri, Jan 29, 2016 at 06:10:26PM +, Glen Barber wrote:
On Fri, Jan 29, 2016 at 05:47:46PM +, Glen Barber wrote:
CC'd smh, who committed an update recently in this file. I have
confirmed the failure (but do not understand why I did not see it when
Check cables and devices are in good condition. These things are usually
a connectivity issue or device failing
On 06/12/2015 03:06, Perry Hutchison wrote:
Does anyone know the condition of the ICH5 ATA support in FreeBSD 10?
In preparing to repurpose an elderly Dell Dimension 4600 from
On 17/11/2015 15:26, Johan Hendriks wrote:
Hello all
We have an NFS server which has three network ports.
We have bonded these interfaces as a lagg interface, but when we use the
server it looks like only two interfaces are used.
This is our rc.conf file
ifconfig_igb0="up"
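The config is cut off above; for reference, a typical LACP lagg setup in
rc.conf looks something like this (interface names and address purely
illustrative, and lacp assumes the switch side is configured to match):

ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 laggport igb2 192.0.2.10/24"

Note that lacp hashes each flow onto a single member port, so a small number
of NFS clients can easily end up exercising only two of the three links.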
On 17/11/2015 22:08, Christopher Forgeron wrote:
I just submitted this as a bug:
( https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204641 )
..but I thought I should bring it to the list's attention for more exposure
- If that's a no-no, let me know, as I have a few others that are related
case large amounts are
freed to avoid this kind of starvation.
-- Nicolas
On Thu, Oct 29, 2015 at 7:22 PM, Steven Hartland
<kill...@multiplay.co.uk> wrote:
If you're running NVMe, are you running a version which has this:
https://svnweb.freebsd.org/base?view=revision&revision=285767
I'm pretty sur
If you're running NVMe, are you running a version which has this:
https://svnweb.freebsd.org/base?view=revision&revision=285767
I'm pretty sure 10.2 does have that, so you should be good, but best to
check.
Other questions:
1. What does "gstat -d -p" show during the stalls?
2. Do you have any other zfs
As you say using RAID for ZFS is a bad idea, so ideally change the hardware.
If not, see if your RAID controller has a stripe size option to help, or
just ignore the warning; it's just a warning that performance will be
non-optimal.
On 12/10/2015 12:46, Marko Cupać wrote:
Hi,
I've got HP
On 06/10/2015 19:03, Jim Harris wrote:
On Tue, Oct 6, 2015 at 9:42 AM, Steven Hartland
<kill...@multiplay.co.uk <mailto:kill...@multiplay.co.uk>> wrote:
Also it looks like nvme exposes a timeout_period sysctl; you could try
increasing that as it could be too small for a fu
As a guess you're timing out the full disk TRIM request.
Try: sysctl vfs.zfs.vdev.trim_on_init=0 and then re-run the create.
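In other words, roughly this sequence (pool layout and device names are just
placeholders):

sysctl vfs.zfs.vdev.trim_on_init=0            # skip the whole-device TRIM when vdevs are added
zpool create tank raidz nvd0 nvd1 nvd2 nvd3   # then re-run the create that was previously hanging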
On 06/10/2015 16:18, Sean Kelly wrote:
Back in May, I posted about issues I was having with a Dell PE R630 with 4x800GB NVMe
SSDs. I would get kernel panics due to the
Also it looks like nvme exposes a timeout_period sysctl; you could try
increasing that as it could be too small for a full disk TRIM.
Under CAM SCSI da support we have a delete_max which limits the max
single request size for a delete; it may be we need something similar for
nvme as well to
Typo, they should be:
https://security.FreeBSD.org/patches/EN-15:16/pw.patch
https://security.FreeBSD.org/patches/EN-15:16/pw.patch.asc
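For completeness, the usual EN procedure with the corrected URLs (the exact
rebuild step depends on what the notice itself says, so treat this as a
sketch):

fetch https://security.FreeBSD.org/patches/EN-15:16/pw.patch
fetch https://security.FreeBSD.org/patches/EN-15:16/pw.patch.asc
gpg --verify pw.patch.asc                  # check the detached signature
cd /usr/src && patch < /path/to/pw.patch
# then rebuild and install the affected bits as described in the notice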
On 17/09/2015 09:20, Pietro Cerutti wrote:
On 2015-09-16 23:31, FreeBSD Errata Notices wrote:
# fetch https://security.FreeBSD.org/patches/EN-15:26/pw.patch
#
This should be fixed by r286223 in HEAD.
I'll MFC to stable/10 after the relevant time-out.
Thanks again for the report :)
Regards
Steve
On 31/07/2015 22:21, Trond Endrestøl wrote:
stable/10, i386, r286139, 4 GiB RAM, custom kernel loudly claims:
ZFS NOTICE: KSTACK_PAGES is 2 which
Thanks for the report Trond, I've reproduced this and am investigating.
On 31/07/2015 22:21, Trond Endrestøl wrote:
stable/10, i386, r286139, 4 GiB RAM, custom kernel loudly claims:
ZFS NOTICE: KSTACK_PAGES is 2 which could result in stack overflow panic!
Please consider adding 'options
What's the panic?
As you're using ZFS I'd lay money on the fact you're blowing the stack,
which would require a kernel built with:
options KSTACK_PAGES=4
Regards
Steve
On 22/07/2015 08:10, Holm Tiffe wrote:
Hi,
yesterday I decided to put my old Workstation in my shack and
to
Be aware that kern.geom.dev.delete_max_sectors will still come into play
here, hence the large request will still get chunked.
This is good, as it prevents excessively long-running individual BIOs which
would result in user operations being uncancelable.
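To see where that chunking kicks in, or to move it (assuming the sysctl is
writable at runtime on your release, otherwise set it as a loader tunable):

sysctl kern.geom.dev.delete_max_sectors            # current per-BIO delete limit, in sectors
sysctl kern.geom.dev.delete_max_sectors=524288     # raise it, accepting longer uncancelable BIOs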
Regards
Steve
On 21/07/2015
This will almost certainly be due to slow TRIM support on the device.
Try setting the sysctl vfs.zfs.vdev.trim_on_init to 0 before adding the
devices.
On 18/07/2015 05:35, dy...@techtangents.com wrote:
Hi,
I've installed an Intel 750 400GB NVMe PCIe SSD in a Dell R320 running
FreeBSD
min
Trimming drives before addition to a system is definitely worthwhile.
Will
On 20 Jul 2015, at 10:28, Steven Hartland kill...@multiplay.co.uk wrote:
This will almost certainly be due to slow TRIM support on the device.
Try setting the sysctl vfs.zfs.vdev.trim_on_init to 0 before adding
Your kernel hasn't been rebuilt then.
On 10/06/2015 11:23, Kurt Jaeger wrote:
Hi!
I see the same: uname says: 10.1p10
What does the following say when run from your source directory:
grep BRANCH sys/conf/newvers.sh
BRANCH=RELEASE-p11
What does the following say when run from your source directory:
grep BRANCH sys/conf/newvers.sh
Regards
Steve
On 10/06/2015 10:20, Palle Girgensohn wrote:
Also
# uname -a
FreeBSD pingpongdb 10.1-RELEASE-p10 FreeBSD 10.1-RELEASE-p10 #0: Wed May 13
06:54:13 UTC 2015
On 07/05/2015 09:07, Slawa Olhovchenkov wrote:
I have a zpool of 12 vdevs (zmirrors).
One disk in one vdev is out of service and has stopped serving requests:
dT: 1.036s w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0
On 07/05/2015 15:28, Matthew Seaman wrote:
On 05/07/15 14:32, Steven Hartland wrote:
I wouldn't have thought so, I would expect that to only have an effect
on removable media such as CDROM drives, but no harm in trying ;-)
zpool offline -t zroot da19
That might work but it also might just
On 07/05/2015 13:51, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:
Yes in theory new requests should go to the other vdev, but there could
be some dependency issues preventing that such as a syncing TXG.
Currently this pool must not have write
On 07/05/2015 13:44, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 01:35:05PM +0100, Steven Hartland wrote:
On 07/05/2015 13:05, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 01:00:40PM +0100, Steven Hartland wrote:
On 07/05/2015 11:46, Slawa Olhovchenkov wrote:
On Thu, May 07
On 07/05/2015 13:05, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 01:00:40PM +0100, Steven Hartland wrote:
On 07/05/2015 11:46, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 11:38:46AM +0100, Steven Hartland wrote:
How can I cancel these 24 requests?
Why do these requests not time out
On 07/05/2015 14:10, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 02:05:11PM +0100, Steven Hartland wrote:
On 07/05/2015 13:51, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 01:46:40PM +0100, Steven Hartland wrote:
Yes in theory new requests should go to the other vdev
On 07/05/2015 14:29, Ronald Klop wrote:
On Thu, 07 May 2015 15:23:58 +0200, Steven Hartland
kill...@multiplay.co.uk wrote:
On 07/05/2015 14:10, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 02:05:11PM +0100, Steven Hartland wrote:
On 07/05/2015 13:51, Slawa Olhovchenkov wrote
On 07/05/2015 11:46, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 11:38:46AM +0100, Steven Hartland wrote:
How can I cancel these 24 requests?
Why do these requests not time out (3 hours already)?
How can I force-detach this disk? (I have already tried `camcontrol reset`,
`camcontrol rescan
On 07/05/2015 10:50, Slawa Olhovchenkov wrote:
On Thu, May 07, 2015 at 09:41:43AM +0100, Steven Hartland wrote:
On 07/05/2015 09:07, Slawa Olhovchenkov wrote:
I have a zpool of 12 vdevs (zmirrors).
One disk in one vdev is out of service and has stopped serving requests:
dT: 1.036s w: 1.000s
L(q
Looks like it got lost in the tubes; it's sitting with me to get the info
across to re@.
On 26/04/2015 16:29, Will Green wrote:
Thanks Steven. I’ll hold off releasing the updated tutorials for now.
On 21 Apr 2015, at 17:45, Steven Hartland kill...@multiplay.co.uk wrote:
I did actually request
I did actually request this back in November, but I don't seem to have
had a reply so I'll chase.
On 21/04/2015 16:23, Will Green wrote:
Hello,
I have been updating my ZFS tutorials for use on FreeBSD 10.1. To allow users
to experiment with ZFS I use file-backed ZFS pools. On FreeBSD 10.1
On 20/04/2015 00:25, Phil Murray wrote:
Hi Ian,
Thanks for the suggestion, I tried that but had no luck
Try hint.ahcich.X.disabled=1 instead, where X is the relevant channel.
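For example, if the problem disk probes on channel 2 (dmesg will show which
ahcichN the device attaches to), the /boot/loader.conf line would be:

hint.ahcich.2.disabled="1"    # stop AHCI channel 2 attaching at boot (channel number is an example)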
On 26/03/2015 23:47, J David wrote:
In our case,
On Thu, Mar 26, 2015 at 5:03 PM, Kevin Oberman rkober...@gmail.com wrote:
This is just a shot in the dark and not a really likely one, but I have had
issues with Firefox leaking memory badly. I can free the space by killing
firefox and
On 23/03/2015 14:38, Gerhard Schmidt wrote:
On 23.03.2015 15:14, Dewayne Geraghty wrote:
On 24/03/2015 12:16 AM, Gerhard Schmidt wrote:
On 23.03.2015 13:40, Guido Falsi wrote:
On 03/23/15 11:33, Gerhard Schmidt wrote:
Hi,
we are experiencing a
- Original Message -
From: Daniel O'Connor docon...@gsoft.com.au
Hi all,
I'm trying to setup a ZFS mirror system with a USB disk as backup. The backup
disk is a ZFS pool which I am zfs send'ing to.
However I find that if the disk is disconnected while mounted then things go
pear
- Original Message -
From: Daniel O'Connor docon...@gsoft.com.au
On 14/10/2013, at 2:32, Steven Hartland kill...@multiplay.co.uk wrote:
First, "pool" is not your pool name, it's backupA, so try:
zpool online backupA /dev/da0
If that still fails try:
zpool online backupA 1877640355
I