Re: FreeBSD 12.x, virtio and alicloud (aliyun.com)

2020-11-05 Thread Eugene M. Zheganin
: https://enazadev.ru/stub-data/freebsd12-patched-trap.png 05.11.2020 11:06, Cevin wrote: The problem seems to have been fixed, but the code is still in review. For more details, see https://reviews.freebsd.org/D26915#601420 On Thu, Nov 5, 2020 at 12:35 PM Eugene M. Zheganin wrote: Hello, Guys, does

FreeBSD 12.x, virtio and alicloud (aliyun.com)

2020-11-04 Thread Eugene M. Zheganin
Hello, Guys, does anyone have a VM running at AliCloud, a Chinese provider (one of the biggest, if not the biggest one) ? They seem to provide stock FreeBSD 11.x images on some RedHat-based Linux with VirtIO, which run just fine (at least I took a look at their kernel and it seems to be a stock

Re: pf and hnX interfaces

2020-10-13 Thread Eugene M. Zheganin
Hello, On 13.10.2020 14:19, Kristof Provost wrote: Are these symptoms of a bug ? Perhaps. It can also be a symptom of resource exhaustion. Are there any signs of memory allocation failures, or incrementing error counters (in netstat or in pfctl)? Well, the only signs of resource

pf and hnX interfaces

2020-10-13 Thread Eugene M. Zheganin
Hello, I'm running a FreeBSD 12.1 server as a VM under Hyper-V. And although this letter will make an impression of another lame post blaming FreeBSD for all of the issues while the author should blame himself, at the moment I'm out of any other explanation. The thing is: I'm getting loads of sendmail

Re: spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
On 09.09.2020 17:29, Eugene M. Zheganin wrote: Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there any tunables

spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there any tunables that could help mitigate this ? Once I took part
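
A minimal userland mitigation sketch, assuming the API can wrap its calls: the zfs/zpool administrative commands contend on spa_namespace_lock anyway, so serializing them with lockf(1) at least makes them queue predictably instead of timing out. The lock file path and timeout are arbitrary placeholders:

    # funnel every API-issued zfs/zpool command through one lock
    lockf -k -t 120 /var/run/zfs-api.lock zfs snapshot data/vol0@api
    lockf -k -t 120 /var/run/zfs-api.lock zfs clone data/vol0@api data/vol0-clone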

Re: running out of ports: every client port is used only once in outgoing connection

2020-08-27 Thread Eugene M. Zheganin
Hello, 27.08.2020 23:01, Eugene M. Zheganin wrote: And as soon as I'm switching to it from DNS RR I'm starting to get "Can't assign outgoing address when connecting to ...". The usual approach would be to assign multiple IP aliases to the destination backends, so I wil
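
A rough sketch of that approach, with purely hypothetical addresses; the idea (as I understand it) being that the 4-tuple only has to be unique per destination, so more destination addresses means more usable source ports:

    # add loopback aliases for the backends to listen on
    ifconfig lo0 alias 127.0.0.2/32
    ifconfig lo0 alias 127.0.0.3/32

    # then spread the upstream servers over the aliases (nginx.conf)
    upstream geoplatform {
        hash $hashkey consistent;
        server 127.0.0.1:4079 fail_timeout=10s;
        server 127.0.0.2:4079 fail_timeout=10s;
        server 127.0.0.3:4079 fail_timeout=10s;
    }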

running out of ports: every client port is used only once in outgoing connection

2020-08-27 Thread Eugene M. Zheganin
Hello, I have a situation where I'm running out of client ports on a huge reverse-proxy. Say I have an nginx upstream like this:

upstream geoplatform {
    hash $hashkey consistent;
    server 127.0.0.1:4079 fail_timeout=10s;
    server 127.0.0.1:4080 fail_timeout=10s;

CARP under Hyper-V: weird things happen

2020-05-31 Thread Eugene M. Zheganin
Hello, I'm running 12.0-REL in a VM under W2016S with CARP enabled and paired to a baremetal FreeBSD server. All of a sudden I realized that this machine is unable to become a CARP MASTER - because it sees its own CARP announces, but instead of seeing them from a CARP synthetic MAC

ipsec/gif(4) tunnel not working: traffic not appearing on the gif(4) interface after deciphering

2019-03-26 Thread Eugene M. Zheganin
Hello, I have a FreeBSD 11.1 box with 2 public IPs that has two tunnels to another FreeBSD box with 1 public IP. One of these tunnels is working, the other isn't. Long story short: I have some experience in ipsec tunnel setup, and I supposed that I have configured everything properly, and to

11-STABLE, gstat and swap: uneven mirror disk usage

2018-11-23 Thread Eugene M. Zheganin
Hello, Am I right concluding that there's something wrong either in how FreeBSD works with the swap partition, or in how gstat reports its activity ? Because on a consistently working mirror the situation where only one disk member is used and the other is not, for both reads and writes, just

Re: Where is my memory on 'fresh' 11-STABLE? It should be used by ARC, but it is not used for it anymore.

2018-11-20 Thread Eugene M. Zheganin
Hello, 20.11.2018 15:42, Lev Serebryakov wrote: I have a server which is mostly a torrent box. It uses ZFS and is equipped with 16GiB of physical memory. It is running 11-STABLE (r339914 now). I've updated it to r339914 from some 11.1-STABLE revision 3 weeks ago. I used to see 13-14GiB of

Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, On 20.11.2018 16:22, Trond Endrestøl wrote: I know others have created a daemon that observes the ARC and the amount of wired and free memory, and when these values exceed some threshold, the daemon will allocate a number of gigabytes, writing zero to the first byte or word of every

Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, On 20.11.2018 15:12, Trond Endrestøl wrote: On freebsd-hackers the other day, https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html, it was suggested to set vm.pageout_update_period=0. This sysctl is at 600 initially. ZFS' ARC needs to be capped, otherwise it

plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, I have a recent FreeBSD 11-STABLE which is mainly used as an iSCSI target. The system has 64G of RAM but is swapping intensively. Yup, about half of the memory is used as ZFS ARC (it isn't capped in loader.conf), and another half is eaten by the kernel, but it only uses about

Re: ZFS: Can't find pool by guid

2018-10-24 Thread Eugene M. Zheganin
Hello. On 28.04.2018 17:46, Willem Jan Withagen wrote: Hi, I upgraded a server from 10.4 to 11.1 and now all of a sudden the server complains about: ZFS: Can't find pool by guid And I end up in the boot prompt: lsdev gives disk0 with on p1 the partition that the zroot is/was. This is an

FreeBSD CTL device data/id questions

2018-07-23 Thread Eugene M. Zheganin
Hi, I have a bunch of dumb (not sarcasm) questions concerning the FreeBSD CTL layer and iSCSI target management: - is the "FREEBSD CTLDISK 0001" line that ctladm lunlist is presenting, and that the initiators are seeing as the hardware id, hardcoded somewhere, especially the "CTLDISK
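
If I read ctl.conf(5) right, those inquiry strings are defaults rather than hardcoded values and can be overridden per LUN; a sketch with made-up names (target IQN, zvol path and strings are placeholders):

    target iqn.2018-07.com.example:target0 {
        lun 0 {
            path /dev/zvol/data/vol0
            option vendor "MYCORP"
            option product "MYDISK"
            option revision "0002"
        }
    }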

Re: extract the process arguments from the crashdump

2018-05-14 Thread Eugene M. Zheganin
Hello, On 14.05.2018 18:12, Konstantin Belousov wrote: On Mon, May 14, 2018 at 05:32:21PM +0500, Eugene M. Zheganin wrote: Well, unfortunately this gives me exactly the same information as the core.X.txt file contains - process names without arguments, and I really want to know what arguments

Re: extract the process arguments from the crashdump

2018-05-14 Thread Eugene M. Zheganin
Hello, On 14.05.2018 16:15, Konstantin Belousov wrote: On Mon, May 14, 2018 at 01:02:28PM +0500, Eugene M. Zheganin wrote: Hello, Is there any way to extract the process arguments from the system crashdump ? If yes, could anyone please explain to me how to do it. ps -M vmcore.file -N
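
For reference, the full form of that command would look something like the following (paths are hypothetical; the kernel must be the same one that produced the dump):

    ps -M /var/crash/vmcore.0 -N /boot/kernel/kernel -auxww

As the follow-up above notes, this may still show only the process names without their arguments, since the argument strings are not always captured in the dump.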

Re: extract the process arguments from the crashdump

2018-05-14 Thread Eugene M. Zheganin
Hello, On 14.05.2018 16:15, Konstantin Belousov wrote: On Mon, May 14, 2018 at 01:02:28PM +0500, Eugene M. Zheganin wrote: Hello, Is there any way to extract the process arguments from the system crashdump ? If yes, could anyone please explain to me how to do it. ps -M vmcore.file -N

extract the process arguments from the crashdump

2018-05-14 Thread Eugene M. Zheganin
Hello, Is there any way to extract the process arguments from the system crashdump ? If yes, could anyone please explain to me how to do it. Thanks. Eugene.

clear old pools remains from active vdevs

2018-04-26 Thread Eugene M. Zheganin
Hello, I have some active vdev disk members that used to be in a pool that clearly has not been destroyed properly, so in a "zpool import" output I'm seeing something like

# zpool import
   pool: zroot
     id: 14767697319309030904
  state: UNAVAIL
 status: The pool was last accessed by
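
A hedged sketch of the usual cleanup, with da0p3 as a placeholder device: once you are certain the partition is not part of any live pool, the stale label can be wiped so it stops showing up in "zpool import":

    # double-check which pools the system can still see
    zpool import
    # then clear the stale ZFS label from the vdev
    zpool labelclear -f /dev/da0p3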

vputx: usecount not zero for vnode

2018-04-18 Thread Eugene M. Zheganin
Hello, what is this panic? I got it just recently on a fresh -STABLE r332466:

Apr 18 17:52:39 san1 kernel: vputx: usecount not zero for vnode
Apr 18 17:52:39 san1 kernel: 0xf80f4d7d1760: tag devfs, type VCHR
Apr 18 17:52:39 san1 kernel: usecount -1, writecount -1, refcount 0

"vputx:

HAST, cyclic signal 6, and inability to start

2018-04-12 Thread Eugene M. Zheganin
Hi. About a month ago I was experimenting with HAST on my servers, and, though I did have some complications with signal 6 in the init phase, I was able to start it and it was working in test mode for a couple of weeks. After that I had to reboot both of them and now it doesn't start at all - both

Re: TRIM, iSCSI and %busy waves

2018-04-07 Thread Eugene M. Zheganin
Hi, 05.04.2018 20:15, Eugene M. Zheganin wrote: You can indeed tune things, here are the relevant sysctls:

sysctl -a | grep trim | grep -v kstat
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1

Re: TRIM, iSCSI and %busy waves

2018-04-05 Thread Eugene M. Zheganin
Hello, On 05.04.2018 20:00, Warner Losh wrote: I'm also having a couple of iSCSI issues that I'm dealing with through a bounty, so maybe this is related somehow. Or maybe not. Due to some issues in the iSCSI stack my system sometimes reboots, and then these "waves" are stopped for

Re: TRIM, iSCSI and %busy waves

2018-04-05 Thread Eugene M. Zheganin
Hello, On 05.04.2018 19:57, Steven Hartland wrote: You can indeed tune things, here are the relevant sysctls:

sysctl -a | grep trim | grep -v kstat
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.enabled: 1
vfs.zfs.vdev.trim_max_pending: 1

TRIM, iSCSI and %busy waves

2018-04-05 Thread Eugene M. Zheganin
Hi, I have a production iSCSI system (on zfs of course) with 15 ssd disks and it's often suffering from TRIMs. Well, I know what TRIM is for, and I know it's a good thing, but sometimes (actually often) I'm seeing in gstat that my disks are overwhelmed by the TRIM waves, this looks like a "wave"

Re: another question about zfs compression numbers

2018-04-04 Thread Eugene M. Zheganin
Hi, On 04.04.2018 12:35, Patrick M. Hausen wrote: Hi all, On 04.04.2018 at 09:21, Eugene M. Zheganin <eug...@zhegan.in> wrote: I'm just trying to understand these numbers: file size is 232G, its actual size on the lz4-compressed dataset is 18G, so then why is the compressratio only

another question about zfs compression numbers

2018-04-04 Thread Eugene M. Zheganin
Hello, I'm just trying to understand these numbers: file size is 232G, its actual size on the lz4-compressed dataset is 18G, so then why is the compressratio only 1.86x ? And why is logicalused 34.2G ? On one hand, 34.2G exactly fits the 1.86x compressratio, but still I don't get it.
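
My guess at the arithmetic, assuming the 232G file is sparse: blocks that were never written don't count, so logicalused only reflects the ~34.2G of data actually allocated, and compressratio is computed against that rather than against the nominal file size:

    compressratio = logicalused / used = 34.2G / 18.4G ≈ 1.86x

(used would have to be about 18.4G rather than exactly 18G for the numbers to line up.)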

Re: panic: vdrop: holdcnt 0

2018-03-22 Thread Eugene M. Zheganin
Hello, On 22.03.2018 18:05, Eugene M. Zheganin wrote: today I eventually got "panic: vdrop: holdcnt 0" on an iSCSI host, on an 11.1. Since I don't see any decent information on this - I just wanted to ask - what does this kind of panic generally mean ? And where do I go with this. The

panic: vdrop: holdcnt 0

2018-03-22 Thread Eugene M. Zheganin
Hi, today I eventually got "panic: vdrop: holdcnt 0" on an iSCSI host, on an 11.1. Since I don't see any decent information on this - I just wanted to ask - what does this kind of panic generally mean ? And where do I go with this. The only PR I see is about 9.[, and the author there got multiple

HAST, configuration, this actually looks insane

2018-03-18 Thread Eugene M. Zheganin
Hi, I'm trying to configure HAST on FreeBSD, and suddenly it appears to be a mind-breaking procedure. I totally don't get it, thus it doesn't work, dumps cores and behaves weirdly. First of all, in the existing configuration file paradigm, used widely in the whole IT industry, the local
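
For comparison, a minimal hast.conf sketch along the lines of the Handbook example (node names and the device are placeholders); note that the "on" section names must match the output of hostname on each node, which is a common source of startup aborts:

    resource shared0 {
        on hasta {
            local /dev/ada1
            remote hastb
        }
        on hastb {
            local /dev/ada1
            remote hasta
        }
    }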

Re: mc, xterm-clear, Ctrl-O and Home/End dilemma

2017-12-21 Thread Eugene M. Zheganin
Hi, On 22.12.2017 00:38, Marek Zarychta wrote: Maybe switching to the X-window driven desktop environment at home should be taken into consideration in this case. Both ncurses and slang versions of misc/mc work fine (key bindings, border drawing etc.) for the ssh(1) client called from xterm

Re: mc, xterm-clear, Ctrl-O and Home/End dilemma

2017-12-21 Thread Eugene M. Zheganin
Hi, On 21.12.2017 23:20, Eugene M. Zheganin wrote: Hi, So, there's a puzzle of minor issues and I wanted to ask how you guys deal with it.
- with standard ncurses misc/mc there are no borders in mc in PuTTY, and Ctrl-O flushes the output beneath the panels.
- with slang misc/mc Ctrl-O flushes

mc, xterm-clear, Ctrl-O and Home/End dilemma

2017-12-21 Thread Eugene M. Zheganin
Hi, So, there's a puzzle of minor issues and I wanted to ask how you guys deal with it.
- with standard ncurses misc/mc there are no borders in mc in PuTTY, and Ctrl-O flushes the output beneath the panels.
- with slang misc/mc Ctrl-O flushes the output beneath the panels (and I lived with this

ctladm - create the target using only cli

2017-12-15 Thread Eugene M. Zheganin
Hi, my company is developing some sort of API for iSCSI management, and at this time we are trying to figure out how to create and delete targets using ctladm and not using the configuration file. And the relationship between LUNs and ports is unclear to us if we don't use the
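
The LUN half of this can be done purely from the CLI; a hedged sketch with made-up device names (the target/port side is exactly what ctl.conf normally wires up, so that remains the open question):

    # create a block-backed LUN on top of a zvol
    ctladm create -b block -o file=/dev/zvol/data/vol0 -d mydevid0 -S myserial0
    # list what CTL currently exports
    ctladm devlist
    # remove the LUN again (LUN id as reported by devlist)
    ctladm remove -b block -l 0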

hw.vga.textmode=1 and the installation media

2017-12-10 Thread Eugene M. Zheganin
Hi, it would be really nice if 11.2 and subsequent versions came with hw.vga.textmode=1 as the default in the installation media. Because you know, there's a problem with some vendors (like HP) whose servers are incapable of showing graphics in IPMI with the default

Re: zfs, iSCSI and volmode=dev

2017-10-09 Thread Eugene M. Zheganin
Hi, On 27.09.2017 16:07, Edward Napierala wrote: 2017-08-30 11:45 GMT+02:00 Eugene M. Zheganin <e...@norma.perm.ru>: Hi, I have an iSCSI production system that exports a large number of zvols as the iSCSI targets. System is running Free

iSCSI: LUN modification error: LUN XXX is not managed by the block backend and LUN device confusion

2017-10-04 Thread Eugene M. Zheganin
Hi, I got one more problem while dealing with iSCSI targets in production (yeah, I'm boring and stubborn). The environment is as in previous questions (a production site, hundreds of VMs and hundreds of disks). I've encountered this issue before, but this time I decided to ask whether it's

Re: ctld: only 579 iSCSI targets can be created

2017-10-04 Thread Eugene M. Zheganin
Hi. On 02.10.2017 15:03, Edward Napierala wrote: Thanks for the packet trace. What happens there is that the Windows initiator logs in, requests Discovery ("SendTargets=All"), receives the list of targets, as expected, and then... sends "SendTargets=All" again, instead of logging off. This

Re: ctld: only 579 iSCSI targets can be created

2017-09-22 Thread Eugene M. Zheganin
Hi, Edward Tomasz Napierała wrote 2017-09-22 12:15: There are two weird things here. First is that the error is coming from ctld(8) - the userspace daemon, not the kernel. The second is that those invalid opcodes are actually both valid - they are the Text Request, and the Logout Request

Re: ctld: only 579 iSCSI targets can be created

2017-09-21 Thread Eugene M. Zheganin
Hi, Eugene M. Zheganin wrote 2017-09-22 10:36: Hi, I have an old 11-STABLE as an iSCSI server, but out of the blue I encountered a weird problem: only 579 targets can be created. I mean, I am fully aware that the out-of-the-box limit is 128 targets, which is enforced by the CTL_MAX_PORTS define

ctld: only 579 iSCSI targets can be created

2017-09-21 Thread Eugene M. Zheganin
Hi, I have an old 11-STABLE as an iSCSI server, but out of the blue I encountered a weird problem: only 579 targets can be created. I mean, I am fully aware that the out-of-the-box limit is 128 targets, which is enforced by the CTL_MAX_PORTS define, and I've set it to 1024 (and of course rebuilt

zfs, iSCSI and volmode=dev

2017-08-30 Thread Eugene M. Zheganin
Hi, I have an iSCSI production system that exports a large number of zvols as the iSCSI targets. The system is running FreeBSD 11.0-RELEASE-p7 and initially all of the zvols were configured with the default volmode. I've read that it's recommended to use them in dev mode, so the system isn't
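
For reference, a sketch of the per-zvol and global settings (dataset names are placeholders); if I remember correctly, a changed volmode only takes effect once the zvol is renamed or the machine is rebooted:

    # per zvol
    zfs set volmode=dev data/vol0
    # global default for newly created zvols (1=geom, 2=dev, 3=none)
    sysctl vfs.zfs.vol.mode=2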

Re: zfs listing and CPU

2017-08-13 Thread Eugene M. Zheganin
On 13.08.2017 16:13, Tenzin Lhakhang wrote: You may want to have an async zfs-get program/script that regularly does a zfs get -Ho and stores them in a local cache (redis or your own program) at a set interval, and then the API can hit the cache instead of directly running get or list. I cannot
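
A minimal sketch of that caching idea, with hypothetical paths, driven from /etc/crontab; the API would then read the flat file instead of forking zfs for every request:

    # dump all properties of all datasets once a minute, atomically
    * * * * * root /sbin/zfs get -Hp -o name,property,value all > /var/cache/zfsprops.tmp && mv /var/cache/zfsprops.tmp /var/cache/zfsprops.tsv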

Re: zfs listing and CPU

2017-08-13 Thread Eugene M. Zheganin
Hi, On 12.08.2017 20:50, Paul Kraus wrote: On Aug 11, 2017, at 2:28 AM, Eugene M. Zheganin <e...@norma.perm.ru> wrote: Why does the zfs listing eat so much of the CPU ?

47114 root   1  20   0  40432K  3840K  db->db   4   0:05  26.84% zfs
47099 root   1  20   0  40432K

zfs listing and CPU

2017-08-11 Thread Eugene M. Zheganin
Hi, Why does the zfs listing eat so much of the CPU ?

last pid: 47151;  load averages: 3.97, 6.35, 6.13    up 1+23:21:18  09:15:13
146 processes: 3 running, 142 sleeping, 1 waiting
CPU: 0.0% user, 0.0% nice, 30.5% system, 0.3% interrupt, 69.2%

Re: a strange and terrible saga of the cursed iSCSI ZFS SAN

2017-08-08 Thread Eugene M. Zheganin
On 05.08.2017 22:08, Eugene M. Zheganin wrote: Hi, I got a problem that I cannot solve just by myself. I have an iSCSI zfs SAN system that crashes, corrupting its data. I'll be short, and try to describe its genesis briefly: 1) autumn 2016, SAN is set up, supermicro server, external JBOD

Re: a strange and terrible saga of the cursed iSCSI ZFS SAN

2017-08-05 Thread Eugene M. Zheganin
Hi, On 05.08.2017 22:08, Eugene M. Zheganin wrote:

  pool: userdata
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire

a strange and terrible saga of the cursed iSCSI ZFS SAN

2017-08-05 Thread Eugene M. Zheganin
Hi, I got a problem that I cannot solve just by myself. I have an iSCSI zfs SAN system that crashes, corrupting its data. I'll be short, and try to describe its genesis briefly: 1) autumn 2016, SAN is set up, supermicro server, external JBOD, sandisk ssds, several redundant pools, FreeBSD

panic: dva_get_dsize_sync(): bad DVA on 2016 11-STABLE.

2017-08-03 Thread Eugene M. Zheganin
Hi, today I got the following panic on the December 2016 11-STABLE:

FreeBSD san02.playkey.net 11.0-STABLE FreeBSD 11.0-STABLE #0 r310734M: Thu Dec 29 19:22:30 UTC 2016 emz@san02:/usr/obj/usr/src/sys/GENERIC amd64
panic: dva_get_dsize_sync(): bad DVA 4294967295:2086400
GNU gdb 6.1.1

Re: some general zfs tuning (for iSCSI)

2017-08-03 Thread Eugene M. Zheganin
Hi. On 02.08.2017 17:43, Ronald Klop wrote: On Fri, 28 Jul 2017 12:56:11 +0200, Eugene M. Zheganin <e...@norma.perm.ru> wrote: Hi, I'm using several FreeBSD zfs installations as the iSCSI production systems, they basically consist of an LSI HBA, and a JBOD with a bunch of SSD dis

some general zfs tuning (for iSCSI)

2017-07-28 Thread Eugene M. Zheganin
Hi, I'm using several FreeBSD zfs installations as the iSCSI production systems, they basically consist of an LSI HBA, and a JBOD with a bunch of SSD disks (12-24, Intel, Toshiba or Sandisk (avoid Sandisks btw)). And I observe a problem very often: gstat shows 20-30% of disk load, but the

ctl.conf includes

2017-07-28 Thread Eugene M. Zheganin
Hi, any chance we will get the "include" directive for ctl.conf ? Because, for instance, I'm using a bunch of custom APIs on top of iSCSI/zfs and the inability to split ctl.conf into a set of different one-file-per-target configs complicates a lot of things. I understand clearly that this
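
Until an include directive appears, a workaround sketch (the directory layout is made up): keep one file per target and regenerate ctl.conf from the fragments, then ask ctld to re-read it:

    cat /etc/ctl.conf.d/header.conf /etc/ctl.conf.d/*.target > /etc/ctl.conf
    service ctld reload    # or: kill -HUP $(cat /var/run/ctld.pid)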

Re: cannot destroy faulty zvol

2017-07-23 Thread Eugene M. Zheganin
Hi. On 23.07.2017 0:28, Eugene M. Zheganin wrote: Hi, On 22.07.2017 17:08, Eugene M. Zheganin wrote: is this weird error "cannot destroy: already exists" related to the fact that the zvol is faulty ? Does it indicate that metadata is probably faulty too ? Anyway, is there a way

Re: cannot destroy faulty zvol

2017-07-22 Thread Eugene M. Zheganin
Hi, On 22.07.2017 17:08, Eugene M. Zheganin wrote: is this weird error "cannot destroy: already exists" related to the fact that the zvol is faulty ? Does it indicate that metadata is probably faulty too ? Anyway, is there a way to destroy this dataset ? Follow-up: I sent a si

cannot destroy faulty zvol

2017-07-22 Thread Eugene M. Zheganin
Hi, I cannot destroy a zvol for a reason that I don't understand:

[root@san1:~]# zfs list -t all | grep worker182
zfsroot/userdata/worker182-bad   1,38G  1,52T   708M  -
[root@san1:~]# zfs destroy -R zfsroot/userdata/worker182-bad
cannot destroy 'zfsroot/userdata/worker182-bad': dataset

mdconfig and UDF

2017-07-14 Thread Eugene M. Zheganin
Hi. Is there any chance to mount a UDF filesystem under FreeBSD with mdconfig and an ISO image ? mount -t cd9660 /dev/md0 /mnt/cdrom gives me the readme.txt with "This is UDF, you idiot", and mount -t udf /dev/md0 /mnt/cdrom gives me

# mount -t udf /dev/md0 cdrom
mount_udf: /dev/md0: Invalid
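
For what it's worth, the sequence I would expect to work (paths are placeholders); mount -t udf ends up calling mount_udf, so both spellings should behave the same:

    # attach the image to a memory disk
    mdconfig -a -t vnode -f /path/to/image.iso
    md0
    # mount it as UDF
    mount_udf /dev/md0 /mnt/cdrom
    # cleanup
    umount /mnt/cdrom
    mdconfig -d -u 0

If mount_udf still reports an invalid argument, the image may be a hybrid or use a UDF revision the driver doesn't support.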

Re: redundant zfs pool, system traps and tons of corrupted files

2017-06-29 Thread Eugene M. Zheganin
Hi, On 29.06.2017 16:37, Eugene M. Zheganin wrote: Hi. Say I'm having a server that traps more and more often (different panics: zfs panics, GPFs, fatal traps while in kernel mode etc), and then I realize it has tons of permanent errors on all of its pools that scrub is unable to heal

redundant zfs pool, system traps and tons of corrupted files

2017-06-29 Thread Eugene M. Zheganin
Hi. Say I'm having a server that traps more and more often (different panics: zfs panics, GPFs, fatal traps while in kernel mode etc), and then I realize it has tons of permanent errors on all of its pools that scrub is unable to heal. Does this situation mean it's a bad memory case ?

system is unresponsive and the amount of wired memory is cycling - zfs/iscsi ?

2017-05-22 Thread Eugene M. Zheganin
Hi. I'm using a FreeBSD 11.0-R server as a SAN system (with the native iSCSI target). It has 12 disks attached via an external enclosure and a Megaraid SAS 3003 mrsas(4) controller. Actually I'm using several FreeBSD machines in similar configurations as SAN systems, but this one is frequently

Re: freebsd on Intel s5000pal - kernel reboots the system

2017-04-18 Thread Eugene M. Zheganin
Hi. On 18.04.2017 14:44, Konstantin Belousov wrote: You did not provide any information about your issue. It is not known even whether the loader breaks for you, or a kernel starts booting and failing. Ideally, you would use serial console and provide the log of everything printed on it,

freebsd on Intel s5000pal - kernel reboots the system

2017-04-18 Thread Eugene M. Zheganin
Hi, I need to install FreeBSD on an Intel system with an s5000pal mainboard. The problem is that at the kernel loading stage, FreeBSD reboots the server. Like, always and silently, without trapping. I have plugged out all of the discrete PCI controllers, leaving only the onboard ones. Still

zpool list shows nonsense on raidz pools, at least it looks like it to me

2017-04-12 Thread Eugene M. Zheganin
Hi, It's not my first letter where I fail to understand the space usage from zfs utilities, and in previous ones I was kind of convinced that I just read it wrong, but not this time I guess. See for yourself:

[emz@san01:~]> zpool list data
NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP

about that DFBSD performance test

2017-03-07 Thread Eugene M. Zheganin
Hi. Some have probably seen this already - http://lists.dragonflybsd.org/pipermail/users/2017-March/313254.html So, could anyone explain why FreeBSD was owned that much. The test is split into two parts, one is the nginx part, and the other is the IPv4 forwarding part. I understand that nginx

Re: reset not working like 70% of the time

2017-01-25 Thread Eugene M. Zheganin
Hi. On 25.01.2017 15:15, Kurt Jaeger wrote:
> Hi!
>> does anyone suffer from this too ? Right now (and for the last several
>> years) a 100% decent way to reset a terminal session (for instance,
>> after a connection reset, after accidentally displaying a binary file
>> with symbols that are treated

reset not working like 70% of the time

2017-01-25 Thread Eugene M. Zheganin
Hi, does anyone suffer from this too ? Right now (and for the last several years) a 100% decent way to reset a terminal session (for instance, after a connection reset, after accidentally displaying a binary file with symbols that are treated as terminal control sequences, after breaking a cu session,
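
A couple of workarounds I know of for when reset(1) itself misbehaves, over cu and ssh alike:

    stty sane
    printf '\033c'    # RIS - reinitializes most terminal emulators
    tput reset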

Re: decent 40G network adapters

2017-01-18 Thread Eugene M. Zheganin
Hi. On 18.01.2017 15:03, Slawa Olhovchenkov wrote:
> I use Chelsio and Solarflare.
> Not sure about your workload -- I have 40K+ TCP connections, your
> workload may need different tuning.
> Do you plan to utilise both ports?
> For this case you need a PCIe 16x card. This is Chelsio T6 and
>

decent 40G network adapters

2017-01-18 Thread Eugene M. Zheganin
Hi. Could someone recommend a decent 40Gbit adapter that is proven to be working under FreeBSD ? The intended purpose - iSCSI traffic, not much pps, but rates definitely above 10G. I've tried Supermicro-manufactured Intel XL710 ones (two boards, different servers - same sad story: packet loss,

Re: camcontrol rescan seems to be broken

2016-12-25 Thread Eugene M. Zheganin
Hi. On 22.12.2016 23:46, Warner Losh wrote: Sure sounds like your binaries are cross-threaded with the kernel. What does "file `which camcontrol`" tell you? I just got this on the FreeBSD 11.0-RELEASE Live CD, when trying to rescan a SCSI bus on an LSI3008 adapter. Looks more like a bug in

Re: camcontrol rescan seems to be broken

2016-12-22 Thread Eugene M. Zheganin
Hi. On 22.12.2016 11:51, Eugene M. Zheganin wrote: Hi, could anyone tell me where am I wrong:

# camcontrol rescan all
camcontrol: CAMIOCOMMAND ioctl failed: Invalid argument
# uname -U
1100122
# uname -K
1100122
# uname -a
FreeBSD bsdrookie.norma.com. 11.0-RELEASE-p5 FreeBSD 11.0-RELEASE-p5

Re: cannot detach vdev from zfs pool

2016-12-22 Thread Eugene M. Zheganin
Hi. On 22.12.2016 21:26, Alan Somers wrote: I'm not surprised to see this kind of error in a ZFS on GELI on Zvol pool. ZFS on Zvols has known deadlocks, even without involving GELI. GELI only makes it worse, because it foils the recursion detection in zvol_open. I wouldn't bother opening a PR

cannot detach vdev from zfs pool

2016-12-22 Thread Eugene M. Zheganin
Hi, Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool, since it's now officially unsupported. So, I needed to reslice my disk, hence to detach one of the disks from a mirrored pool. I issued 'zpool detach zroot gpt/zroot1' and my system livelocked almost immediately, so I

camcontrol rescan seems to be broken

2016-12-21 Thread Eugene M. Zheganin
Hi, could anyone tell me where am I wrong:

# camcontrol rescan all
camcontrol: CAMIOCOMMAND ioctl failed: Invalid argument
# uname -U
1100122
# uname -K
1100122
# uname -a
FreeBSD bsdrookie.norma.com. 11.0-RELEASE-p5 FreeBSD 11.0-RELEASE-p5 #0 r310364: Wed Dec 21 19:03:58 YEKT 2016

Re: Upgrading boot from GPT(BIOS) to GPT(UEFI)

2016-12-18 Thread Eugene M. Zheganin
Hi. On 19.12.2016 11:51, Warner Losh wrote:
> On Sun, Dec 18, 2016 at 11:34 PM, Eugene M. Zheganin <e...@norma.perm.ru> wrote:
>> I tried the UEFI boot sequence on a Supermicro server. It boots only
>> manually, gives some cryptic error while booting automatically. W

Re: Upgrading boot from GPT(BIOS) to GPT(UEFI)

2016-12-18 Thread Eugene M. Zheganin
Hi. On 16.12.2016 22:08, Fernando Herrero Carrón wrote:
> I am reading uefi(8) and it looks like FreeBSD 11 should be able to boot
> using UEFI straight into ZFS, so I am thinking of converting that
> freebsd-boot partition to an EFI partition, creating a FAT filesystem and
> copying

iscsi limit to 255 entities

2016-12-18 Thread Eugene M. Zheganin
Hi. I kind of stepped on a limit of 255 targets (a bunch of VMs), what is the possible workaround for this, besides running a second ctld in bhyve ? I guess I cannot run ctld inside a jail, since it's the kernel daemon, right ? Is the 255 limit a limit on entities - I mean can I run like 255

Re: [ZFS] files in a weird situtation

2016-12-18 Thread Eugene M. Zheganin
Hi, On 18.12.2016 02:01, David Marec wrote:
> A pass with `zfs scrub` didn't help.
> Any clue is welcome. What does that `dmu_bonus_hold` stand for ?
Just out of curiosity - is it on a redundant pool and does the 'zpool status' report any error ? Eugene.

sonewconn: pcb [...]: Listen queue overflow to human-readable form

2016-12-15 Thread Eugene M. Zheganin
Hi. Sometimes on one of my servers I get a dmesg full of

sonewconn: pcb 0xf80373aec000: Listen queue overflow: 49 already in queue awaiting acceptance (6 occurrences)
sonewconn: pcb 0xf80373aec000: Listen queue overflow: 49 already in queue awaiting acceptance (2 occurrences)
sonewconn:
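
To map the pcb address to something human-readable, the listen queues can be listed per socket; a sketch (the pcb column should match the address from the log):

    # show listen queue usage, with pcb addresses, numerically
    netstat -AanL
    # current global limit on the accept queue backlog
    sysctl kern.ipc.soacceptqueue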

Re: webcamd panic - is it just me?

2016-12-09 Thread Eugene M. Zheganin
Hi. On 06.12.2016 18:43, Anton Shterenlikht wrote:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215000
> I think this started after moving from 10.3 to 11.0.
> Does nobody else see this panic?
Saw a webcamd-initiated panic once, on an 11.x too; don't remember the details, so I cannot

vfs.zfs.vdev.bio_delete_disable - crash while changing on the fly

2016-12-09 Thread Eugene M. Zheganin
Hi. Recently I've encountered the issue with "slow TRIM" and Sandisk SSDs, so I was told to try to disable TRIM and see what happens (thanks a lot by the way, that did it). But changing vfs.zfs.vdev.bio_delete_disable on the fly can lead to a system crash with a probability of 50%. Is it
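
The non-crashy way, I think, is to turn TRIM off at boot rather than flipping the vdev knob on a live system; a sketch for /boot/loader.conf:

    # disable ZFS TRIM entirely (loader tunable, takes effect on next boot)
    vfs.zfs.trim.enabled=0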

Re: Sandisk CloudSpeed Gen. II Eco Channel SSD vs ZFS = we're in hell

2016-11-29 Thread Eugene M. Zheganin
Hi. On 28.11.2016 23:07, Steven Hartland wrote:
> Check your gstat with -dp so you also see deletes, it may be that your
> drives have a very slow TRIM.
Indeed, I see a bunch of delete operations, and when TRIM is disabled my engineers report that the performance greatly increases. Is this it

Sandisk CloudSpeed Gen. II Eco Channel SSD vs ZFS = we're in hell

2016-11-28 Thread Eugene M. Zheganin
Hi, recently we bought a bunch of "Sandisk CloudSpeed Gen. II Eco Channel" disks (the model name by itself should already have made me suspicious) for use with a zfs SAN on FreeBSD, we plugged them into the LSI SAS3008 and now we are experiencing performance that I would call "literally

Re: 11.0-RELEASE-p2: panic: vm_page_unwire: page 0x[...]'s wire count is zero

2016-10-27 Thread Eugene M. Zheganin
Hi. On 27.10.2016 15:01, Eugene M. Zheganin wrote:
> Has anyone seen this, and what are my actions ? I've googled a bit, saw
> some references mentioning FreeBSD 9.x and ZERO_COPY_SOCKETS, but I
> have neither, so now I'm trying to understand what my actions will
> be

11.0-RELEASE-p2: panic: vm_page_unwire: page 0x[...]'s wire count is zero

2016-10-27 Thread Eugene M. Zheganin
Hi, I've upgraded one of my old FreeBSD machines from 10-STABLE (which was leaking wired memory and this was fixed in the spring; other than that it was pretty much stable) to 11.0-RELEASE-p2, and almost immediately got myself a panic:
===Cut===
# more core.txt.0
calypso.enaza.ru dumped core -

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin
Hi. On 21.10.2016 15:20, Slawa Olhovchenkov wrote: ZFS prefetch affects performance depending on the workload (independent of RAM size): for some workloads it wins, for some workloads it loses (for my workload prefetch is a loss and is manually disabled with 128GB RAM). Anyway, this system has only 24MB in ARC

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin
Hi. On 21.10.2016 9:22, Steven Hartland wrote: On 21/10/2016 04:52, Eugene M. Zheganin wrote: Hi. On 20.10.2016 21:17, Steven Hartland wrote: Do you have atime enabled for the relevant volume? I do. If so disable it and see if that helps: zfs set atime=off Nah, it doesn't help at all

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 21:17, Steven Hartland wrote: Do you have atime enabled for the relevant volume? I do. If so disable it and see if that helps: zfs set atime=off Nah, it doesn't help at all. Thanks. Eugene.

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 19:18, Dr. Nikolaus Klepp wrote: I have the same issue, but only if the ZFS resides on an LSI MegaRaid with one RAID0 for each disk. Not in my case, both pool disks are attached to the Intel ICH7 SATA300 controller. Thanks. Eugene.

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi, On 20.10.2016 19:12, Pete French wrote: I have ignored this thread until now, but I observed the same behaviour on my systems over the last week or so. In my case it's an exim spool directory, which was hugely full at some point (thousands of files) and now takes an awfully long time to open

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 19:03, Miroslav Lachman wrote: What about snapshots? Are there any snapshots on this filesystem? Nope.

# zfs list -t all
NAME       USED  AVAIL  REFER  MOUNTPOINT
zroot      245G   201G  1.17G  legacy
zroot/tmp  10.1M  201G

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 18:54, Nicolas Gilles wrote: Looks like it's not taking up any processing time, so my guess is the lag probably comes from stalled I/O ... bad disk? Well, I cannot rule this out completely, but the first time I saw this lag on this particular server was about two months ago, and

zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation on different releases) and a zfs. I also have one directory that used to hold a lot of (tens of thousands of) files. It surely takes a lot of time to get a listing of it. But now I have 2 files and a couple of dozens

Re: I'm upset about FreeBSD

2016-10-17 Thread Eugene M. Zheganin
Hi. On 17.10.2016 5:44, Rostislav Krasny wrote: Hi, I've been using FreeBSD for many years. Not as my main operating system, though. But anyway several bugs and patches were contributed and somebody even added my name into the additional contributors list. That's pleasing but today I tried to

Re: zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. On 16.10.2016 23:42, Gary Palmer wrote: You're confusing disk manufacturer gigabytes with real (power of two) gigabytes. The below turns 960 197 124 096 into real gigabytes Yup, I thought that smartctl is better than that and already displayed the size with base 1024. :) Thanks.

Re: zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. On 16.10.2016 22:06, Alan Somers wrote: It's the raw size, but the discrepancy is between 1000 and 1024. Smartctl is reporting the base-10 size, but zpool is reporting base 1024. 960197124096.0*6/1024**4 = 5.24 TB, which is pretty close to what zpool says. Thanks ! It does explain it. But then

zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. FreeBSD 11.0-RC1 r303979, zfs raidz1:
===Cut===
# zpool status gamestop
  pool: gamestop
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gamestop    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0

zvol clone diffs

2016-09-22 Thread Eugene M. Zheganin
Hi. I should mention from the start that this is a question about an engineering task, not a question about a FreeBSD issue. I have a set of zvol clones that I redistribute over iSCSI. Several Windows VMs use these clones as disks via their embedded iSCSI initiators (each clone represents a disk
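
Not an answer to the distribution problem itself, but for sizing the diffs: the written and origin properties show how far each clone has diverged from the snapshot it was cloned from (dataset and snapshot names below are placeholders):

    zfs get -o name,property,value origin,written,logicalreferenced data/clones/vm01
    # or estimate an actual delta stream, without applying it anywhere
    zfs send -nv -i data/golden@base data/clones/vm01@now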

zfs/raidz and creation pause/blocking

2016-09-22 Thread Eugene M. Zheganin
Hi. Recently I spent a lot of time setting up various zfs installations, and I got a question. Often when creating a raidz on considerably big disks (>~ 1T) I'm seeing weird stuff: "zpool create" blocks, and waits for several minutes. At the same time the system is fully responsive and I can see in
