Re[2]: make installworld for i386 fails on Kyuafile

2014-10-03 Thread Vladimir Sharun
Hello,

It seems this issue has appeared again in r272469: installworld failed both 
today and yesterday. The last failure was a few minutes ago with the revision shown.

 
  install: /usr/tests/lib/libproc/Kyuafile: No such file or directory
 
 Fixed in r271950. Problem introduced in r271937 which seems to have
 missed part of the change sent out for review in D710.
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re[2]: gpart destroy, zpool destroy, zfs destroy under securelevel 3

2014-05-29 Thread Vladimir Sharun
Hello,

 if you have root privileges you can just write some random bytes in some
 places and this will be enough to break your system. So, restricting
 some gpart's or zpool's actions depending on securelevel looks like
 protection from kids.

Having root under securelevel 3 is confirmed to disallow:
1) Direct writes to block devices such as (a)da
2) Changing rules and/or shutting down pf
3) Removing system flags such as schg, sunlnk

I think your statement is true in the case of securelevel -1; we're talking
about the highest one, 3, which is shown in the logs.


Re[2]: gpart destroy, zpool destroy, zfs destroy under securelevel 3

2014-05-29 Thread Vladimir Sharun
Hello,

 Ok, you are right. But geom_dev restricts access only from user level
 applications. When a GEOM object does the access directly via GEOM methods
 this protection won't work. And it seems it isn't easy to fix; all
 classes would need their own check.

Thank you for the better clarification. This is the goal I mentioned in the
first email: the GEOM & ZFS layers/subsystems are securelevel-ignorant.



gpart destroy, zpool destroy, zfs destroy under securelevel 3

2014-05-26 Thread Vladimir Sharun
Hello FreeBSD community,

Recently I played with securelevel, and here is what I discovered: data has no 
chance to survive a remote root compromise, except via backups of course. Maybe 
this log can be a proposal for extending securelevel further, or for adding 
securelevel support to the software which can deal with ZFS and GEOM labels?


root@tests:~ # sysctl kern.securelevel=3
kern.securelevel: -1 -> 3
root@tests:~ # gpart show ada3
gpart: No such geom: ada3.
root@tests:~ # gpart create -s gpt /dev/ada3
ada3 created
root@tests:~ # gpart add -t freebsd-zfs -l testdisk -a4k /dev/ada3
ada3p1 added
root@tests:~ # gpart show /dev/ada3
=34  1953525101  ada3  GPT  (932G)
34   6- free -  (3.0K)
40  1953525088 1  freebsd-zfs  (932G)
1953525128   7- free -  (3.5K)
root@tests:~ # zpool create testpool /dev/gpt/testdisk
root@tests:~ # zpool status testpool
pool: testpool
state: ONLINE
scan: none requested
config:

NAMESTATE READ WRITE CKSUM
testpoolONLINE   0 0 0
gpt/testdisk  ONLINE   0 0 0

errors: No known data errors
root@tests:~ # zfs create testpool/test1
root@tests:~ # zfs list | grep test
system/test2  144K  1.78T   144K  none
testpool  150K   913G32K  /testpool
testpool/test1 31K   913G31K  /testpool/test1

root@tests:~ # zfs create testpool/test1
root@tests:~ # zpool destroy testpool
root@tests:~ # zpool status testpool
cannot open 'testpool': no such pool

root@tests:~ # gpart show /dev/ada3
=34  1953525101  ada3  GPT  (932G)
34   6- free -  (3.0K)
40  1953525088 1  freebsd-zfs  (932G)
1953525128   7- free -  (3.5K)

root@tests:~ # gpart delete -i 1 /dev/ada3
ada3p1 deleted
root@tests:~ # gpart destroy /dev/ada3
ada3 destroyed
root@tests:~ # gpart show /dev/ada3
gpart: No such geom: /dev/ada3.
root@tests:~ # sysctl kern.securelevel
kern.securelevel: 3



Strong memory leak in r264294

2014-04-16 Thread Vladimir Sharun
Hello there,

We've just put on test an E5-1650 with 128Gb RAM and 36x1Tb drives in several 
zmirrors; 2 days of uptime and:

Mem: 1643M Active, 13G Inact, 96G Wired, 37M Cache, 14G Free
ARC: 48G Total, 30G MFU, 10G MRU, 3296K Anon, 790M Header, 6992M Other
Swap:

Wired minus ARC = 48G, then:

A quick calculation over the allocations above 100M in vmstat -z shows 50938,1M 
allocated; all allocations together - 52781,73M.
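The quick calculation above can be done mechanically from the vmstat -z columns: a zone's footprint is SIZE * (USED + FREE). A minimal sketch in Python; the function name and the 100M threshold are mine, chosen to mirror the filter used here, and the parser assumes the stock column layout:

```python
# Sketch: sum per-zone memory footprints from "vmstat -z" output.
# A zone's footprint is SIZE * (USED + FREE) bytes; zones below the
# threshold are dropped, mirroring the 100M filter used above.
def zone_footprints_mb(vmstat_output, threshold_mb=100.0):
    footprints = {}
    for line in vmstat_output.splitlines():
        name, sep, rest = line.partition(':')
        if not sep:
            continue
        fields = [f.strip() for f in rest.split(',')]
        try:
            size = int(fields[0])
            used = int(fields[2])
            free = int(fields[3])
        except (IndexError, ValueError):
            continue  # header line or truncated row
        mb = size * (used + free) / (1024 * 1024)
        if mb >= threshold_mb:
            footprints[name.strip()] = round(mb, 1)
    return footprints
```

Feeding it the "64" zone line above gives roughly 385.6M for that zone; summing the returned values reproduces the per-zone totals being compared against ARC.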

Any suggestions on where to look?

vmstat -z follows:
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

UMA Kegs:   384,  0, 204,   6, 204,   0,   0
UMA Zones: 2176,  0, 204,   0, 204,   0,   0
UMA Slabs:   80,  0, 2734199,  51, 3166108,   0,   0
UMA RCntSlabs:   88,  0,6618,  42,6618,   0,   0
UMA Hash:   256,  0,   2,  88,  78,   0,   0
4 Bucket:32,  0,  542671,   33329, 2285367,   0,   0
6 Bucket:48,  0,   68313,   22572, 1057218,   0,   0
8 Bucket:64,  0,   30133,   17421,  304455,   0,   0
12 Bucket:   96,  0,3139,   41879,  329193,   0,   0
16 Bucket:  128,  0,7288,   88843, 1275015,  11,   0
32 Bucket:  256,  0,   10517,   11878, 2760777,  38,   0
64 Bucket:  512,  0,   35709,   92275, 4330565,2988,   0
128 Bucket:1024,  0,   85602,6658,17845415,10640,   0
vmem btag:   56,  0, 1753014, 615, 1753485,24699,   0
VM OBJECT:  256,  0,  285819,  218541,27170696,   0,   0
RADIX NODE: 144,  0, 1632066,5754,49700902,   0,   0
MAP:240,  0,   3,  61,   3,   0,   0
KMAP ENTRY: 128,  0,   6, 273,   6,   0,   0
MAP ENTRY:  128,  0,   26857,4143,46999863,   0,   0
VMSPACE:448,  0, 228, 528, 1296521,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0
mt_zone:   4112,  0, 261,   0, 261,   0,   0
16:  16,  0,   73226, 1105470,149180740,   0,   0
32:  32,  0,  309506,  216619,212029882,   0,   0
64:  64,  0, 4783744, 1534676,207450552,   0,   0
128:128,  0, 1174418,  280846,275707562,   0,   0
256:256,  0,  305187,  385023,127852882,   0,   0
512:512,  0,1058,1158,120105677,   0,   0
1024:  1024,  0,   19324, 380, 7378948,   0,   0
2048:  2048,  0, 482,1836,68059097,   0,   0
4096:  4096,  0,  168830,   42710,35332109,   0,   0
64 pcpu:  8,  0, 866,1182,1685,   0,   0
SLEEPQUEUE:  80,  0,2017, 990,2017,   0,   0
Files:   80,  0,3084,2066,88741054,   0,   0
rl_entry:40,  0, 354,1146, 354,   0,   0
TURNSTILE:  136,  0,2017, 503,2017,   0,   0
umtx pi: 96,  0,   0,   0,   0,   0,   0
MAC labels:  40,  0,   0,   0,   0,   0,   0
PROC:  1208,  0, 248, 184, 1296550,   0,   0
THREAD:1168,  0,1863, 153,8470,   0,   0
cpuset:  72,  0,1339,1191,2041,   0,   0
audit_record:  1248,  0,   0,   0,   0,   0,   0
mbuf_packet:256, 52277955,2105,3395,481227339,   0,   0
mbuf:   256, 52277955, 664,5341,1403660680,   0,   0
mbuf_cluster:  2048, 8168430,5500,   0,5500,   0,   0
mbuf_jumbo_page:   4096, 4084215, 389,3479,84738745,   0,   0
mbuf_jumbo_9k: 9216, 1210137,   0,   0,   0,   0,   0
mbuf_jumbo_16k:   16384, 680702,   0,   0,   0,   0,   0
mbuf_ext_refcnt:  4,  0, 256,2505,34011594,   0,   0
sendfile_sync:  128,  0,   0,   0,   0,   0,   0
g_bio:  248,  0,   0,   13776,176312351,   0,   0
ttyinq: 160,  0, 780, 695,3165,   0,   0
ttyoutq:256,  0, 405, 645,1650,   0,   0
DMAR_MAP_ENTRY: 120,  0,   0,   0,   0,   0,   0
ata_request:336,  0,   0,   0,   0,   0,   0
FPU_save_area:  832,  0,   0,   0,   0,   0,   0
taskq_zone:  48,  0,   0,2573,  683735,   0,   0
VNODE:  472,  0,  676927,  221497,45812118,   0,   0
VNODEPOLL:  112,  0,   1, 279,   4,   0,   0
BUF TRIE:   144,  0,   0,  105948,   0,   0,   0
NAMEI: 1024,  0,   0, 224,254783238,   0,   0
S VFS Cache: 

Unable to build kernel #263665 config: illegal option -- I

2014-03-23 Thread Vladimir Sharun
Hello FreeBSD community,

Got the following issue yesterday with #263665:

# make kernel


--
 Kernel build for COBALT started on Sun Mar 23 17:44:45 EET 2014
--
=== COBALT
mkdir -p /usr/obj/usr/src/sys


--
 stage 1: configuring the kernel
--
cd /usr/src/sys/amd64/conf;  
PATH=/usr/obj/usr/src/tmp/legacy/usr/sbin:/usr/obj/usr/src/tmp/legacy/usr/bin:/usr/obj/usr/src/tmp/legacy/usr/games:/usr/obj/usr/src/tmp/legacy/bin:/usr/obj/usr/src/tmp/usr/sbin:/usr/obj/usr/src/tmp/usr/bin:/usr/obj/usr/src/tmp/usr/games:/sbin:/bin:/usr/sbin:/usr/bin
  config  -d /usr/obj/usr/src/sys/COBALT  -I /usr/src/sys/amd64/conf 
/usr/src/sys/amd64/conf/COBALT
config: illegal option -- I
usage: config [-CgmpV] [-d destdir] sysname
       config -x kernel
*** Error code 64


Stop.
make[1]: stopped in /usr/src
*** Error code 1


Stop.
make: stopped in /usr/src

The same happens for GENERIC as well. Emptying src.conf and make.conf doesn't 
help. The last kernel successfully built (the system now runs it) was r263345.


Re[2]: Unable to build kernel #263665 config: illegal option -- I

2014-03-23 Thread Vladimir Sharun
Hello Milan,

This solution (making the toolchain first) cures the problem.

Thank you.


 I did see this (or something similar) too. Cured with 'make
 kernel-toolchain' and only then 'make buildkernel'. Could you try this?


Re: sshd sandbox failure

2014-02-04 Thread Vladimir Sharun

Dear Ian, 

It seems this must be noted in UPDATING or even enforced in buildworld: without 
Capsicum framework support there is no ssh access to the server anymore.

I stepped into the same problem this weekend; thanks to the IPMI on the home 
testbed I figured out the cause.

 
 Hi
 
 Since the openssh update in recent -CURRENT, I get these in my
 auth.log until I disable sandbox UsePrivilegeSeparation in sshd_config.
 
 Feb 3 10:02:14 firewall1 sshd[90062]: fatal: ssh_sandbox_child: failed to 
 limit the network socket [preauth]
 
 Is there something that I missed during the update?
 
 Ian
 
 -- 
 Ian Freislich



Re[2]: ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-30 Thread Vladimir Sharun
Dear Andriy and FreeBSD community,

L2ARC has been temporarily turned off by setting secondarycache=none everywhere 
it was enabled, so there has been no more leak for one particular day.

Here's the top header:
last pid: 89916;  load averages:  2.49,  2.91,  2.89  up 5+19:21:42  14:09:12
561 processes: 2 running, 559 sleeping
CPU:  5.7% user,  0.0% nice, 14.0% system,  1.0% interrupt, 79.3% idle
Mem: 23G Active, 1017M Inact, 98G Wired, 1294M Cache, 3285M Buf, 1997M Free
ARC: 69G Total, 3498M MFU, 59G MRU, 53M Anon, 1651M Header, 4696M Other
Swap:

Here's the calculated vmstat -z output (meaning all of the allocations 
exceeding 100*1024^2 bytes are printed):
UMA Slabs:  199,915M
VM OBJECT:  207,354M
32: 205,558M
64: 901,122M
128:215,211M
256:242,262M
4096:   2316,01M
range_seg_cache:205,396M
zio_buf_512:1103,31M
zio_buf_16384:  15697,9M
zio_data_buf_16384: 348,297M
zio_data_buf_24576: 129,352M
zio_data_buf_32768: 104,375M
zio_data_buf_36864: 163,371M
zio_data_buf_53248: 100,496M
zio_data_buf_57344: 105,93M
zio_data_buf_65536: 101,75M
zio_data_buf_73728: 111,938M
zio_data_buf_90112: 104,414M
zio_data_buf_106496:100,242M
zio_data_buf_131072:61652,5M
dnode_t:3203,98M
dmu_buf_impl_t: 797,695M
arc_buf_hdr_t:  1498,76M
arc_buf_t:  105,802M
zfs_znode_cache:352,61M

zio_data_buf_131072 (61652M) + zio_buf_16384 (15698M) = 77350M, which easily 
exceeds the ARC total (70G).
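That comparison of zone footprints against the ARC total can be scripted; a hypothetical helper (names mine), using the megabyte figures printed above:

```python
# Sketch: compare selected vmstat -z zone footprints (in MB) against
# the ARC total (in GB) reported by top. Returns the combined zone
# footprint and whether it exceeds the ARC total.
def zones_exceed_arc(zone_mb, arc_total_gb):
    total_mb = sum(zone_mb.values())
    return total_mb, total_mb > arc_total_gb * 1024

# The two dominant zones from the listing above vs. the 70G ARC total:
zones = {"zio_data_buf_131072": 61652, "zio_buf_16384": 15698}
total_mb, exceeds = zones_exceed_arc(zones, 70)  # 77350M vs 71680M
```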


Here are the same calculations from exactly the same system, where L2ARC was 
disabled before reboot:
last pid: 63407;  load averages:  2.35,  2.71,  2.73  up 8+19:42:54  14:17:33
527 processes: 1 running, 526 sleeping
CPU:  4.8% user,  0.0% nice,  6.6% system,  1.1% interrupt, 87.4% idle
Mem: 21G Active, 1460M Inact, 99G Wired, 1748M Cache, 3308M Buf, 952M Free
ARC: 87G Total, 4046M MFU, 76G MRU, 37M Anon, 2026M Header, 4991M Other
Swap:

and the vmstat -z filtered:
UMA Slabs:  208,004M
VM OBJECT:  207,392M
32: 172,831M
64: 752,226M
128:210,024M
256:244,204M
4096:   2249,02M
range_seg_cache:245,711M
zio_buf_512:1145,25M
zio_buf_16384:  15170,1M
zio_data_buf_16384: 422,766M
zio_data_buf_20480: 120,742M
zio_data_buf_24576: 148,641M
zio_data_buf_28672: 112,848M
zio_data_buf_32768: 117,375M
zio_data_buf_36864: 185,379M
zio_data_buf_45056: 103,168M
zio_data_buf_53248: 105,32M
zio_data_buf_57344: 122,828M
zio_data_buf_65536: 109,25M
zio_data_buf_69632: 100,406M
zio_data_buf_73728: 126,844M
zio_data_buf_77824: 101,086M
zio_data_buf_81920: 100,391M
zio_data_buf_86016: 101,391M
zio_data_buf_90112: 112,836M
zio_data_buf_98304: 100,688M
zio_data_buf_102400:106,543M
zio_data_buf_106496:108,875M
zio_data_buf_131072:63190,5M
dnode_t:3437,36M
dmu_buf_impl_t: 840,62M
arc_buf_hdr_t:  1870,88M
arc_buf_t:  114,942M
zfs_znode_cache:353,055M

Everything seems within the ARC total range.

We will try the attached patch within a few days and come back with the result.

Thank you for your help.

 on 28/01/2014 11:28 Vladimir Sharun said the following:
  Dear Andriy and FreeBSD community,
  
  After applying this patch one of the systems runs fine (disk subsystem load 
  low to moderate, 10-20% busy sustained).
  
  Then I saw this patch was merged to HEAD, and we applied it to one of 
  the systems with moderate to high disk load: 30-60% busy (11.0-CURRENT 
  #7 r261118: Fri Jan 24 17:25:08 EET 2014)
  
  Within 4 days we are experiencing the same leak(?) as without the patch: 
  
  last pid: 53841;  load averages:  4.47,  4.18,  3.78 up 3+16:37:09  
  11:24:39
  543 processes: 6 running, 537 sleeping
  CPU:  8.7% user,  0.0% nice, 14.6% system,  1.4% interrupt, 75.3% idle
  Mem: 22G Active, 1045M Inact, 98G Wired, 1288M Cache, 3284M Buf, 2246M Free
  ARC: 73G Total, 3763M MFU, 62G MRU, 56M Anon, 1887M Header, 4969M Other
  Swap:
  
  The ARC is populated within 30 mins under load to the max (90Gb), then 
  starts decreasing.
  
  The delta between Wired and ARC total starts growing from the typical 
  10-12Gb without L2 enabled to 25Gb with L2 enabled, and counting (4 hours 
  ago the delta was 22Gb).
 
 First, have you checked that the vmstat -z output contains the same anomaly 
 as the one in your original report?
 
 If yes, the please try to reproduce the problem with the following debugging 
 patch:
 http://people.freebsd.org/~avg/l2arc-b_tmp_cdata-diag.patch
 Please make sure to compile your kernel (and modules) with INVARIANTS.
 
 -- 
 Andriy Gapon


Re[2]: ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-28 Thread Vladimir Sharun
Dear Andriy and FreeBSD community,

After applying this patch, one of the systems runs fine (disk subsystem load 
low to moderate, 10-20% busy sustained).

Then I saw this patch was merged to HEAD, and we applied it to one of the 
systems with moderate to high disk load: 30-60% busy (11.0-CURRENT #7 
r261118: Fri Jan 24 17:25:08 EET 2014)

Within 4 days we are experiencing the same leak(?) as without the patch: 

last pid: 53841;  load averages:  4.47,  4.18,  3.78 up 3+16:37:09  11:24:39
543 processes: 6 running, 537 sleeping
CPU:  8.7% user,  0.0% nice, 14.6% system,  1.4% interrupt, 75.3% idle
Mem: 22G Active, 1045M Inact, 98G Wired, 1288M Cache, 3284M Buf, 2246M Free
ARC: 73G Total, 3763M MFU, 62G MRU, 56M Anon, 1887M Header, 4969M Other
Swap:

The ARC is populated within 30 mins under load to the max (90Gb), then starts 
decreasing.

The delta between Wired and ARC total starts growing from the typical 10-12Gb 
without L2 enabled to 25Gb with L2 enabled, and counting (4 hours ago the 
delta was 22Gb).

L2ARC statistics:

L2 ARC Size: (Adaptive) 291.63  GiB
Header Size:0.25%   734.14  MiB

L2 ARC Evicts:
Lock Retries:   682
Upon Reading:   0

L2 ARC Breakdown:   106.56m
Hit Ratio:  29.04%  30.95m
Miss Ratio: 70.96%  75.62m
Feeds:  317.18k
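The breakdown percentages above are just hits over the combined total; a quick recomputation (helper name mine, and the report's own figures are rounded, so expect agreement only to two decimals):

```python
# Sketch: recompute the L2 ARC hit/miss percentages from the raw
# counts in the breakdown above (30.95m hits, 75.62m misses).
def l2arc_hit_ratio_pct(hits, misses):
    return 100.0 * hits / (hits + misses)
```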

So, again, what shall we do to better understand and mitigate the problem further?

Thank you.

 on 15/01/2014 12:28 Vitalij Satanivskij said the following:
  Dear Andriy and FreeBSD community,
  
  Andriy Gapon wrote:
  AG on 14/01/2014 07:27 Vladimir Sharun said the following:
  AG  Dear Andriy and FreeBSD community,
  AG  
  AG  I am not sure if the buffers are leaked somehow or if they are 
  actually in use.
  AG  It's one of the very few places where data buffers are allocated 
  without
  AG  charging ARC. In all other places it's quite easy to match 
  allocations and
  AG  deallocations. But in L2ARC it is not obvious that all buffers get 
  freed or
  AG  when that happens.
  AG  
  AG  After one week under load I think we figure out the cause: it's 
  L2ARC. 
  AG  Here's the top's header for 7d17h of the runtime:
  AG  
  AG  last pid: 46409; load averages: 0.37, 0.62, 0.70 up 7+17:14:01 
  07:24:10
  AG  173 processes: 1 running, 171 sleeping, 1 zombie
  AG  CPU: 2.0% user, 0.0% nice, 3.5% system, 0.4% interrupt, 94.2% idle
  AG  Mem: 8714M Active, 14G Inact, 96G Wired, 1929M Cache, 3309M Buf, 
  3542M Free
  AG  ARC: 85G Total, 2558M MFU, 77G MRU, 28M Anon, 1446M Header, 4802M 
  Other
  AG  
  AG  ARC related tunables:
  AG  
  AG  vm.kmem_size=110G
  AG  vfs.zfs.arc_max=90G
  AG  vfs.zfs.arc_min=42G
  AG  
  AG  For more than 7 days of hard runtime the picture clearly shows: 
  AG  Wired minus ARC = 11..12Gb, ARC grow and shrinks in 80-87Gb range and 
  the
  AG  system runs just fine.
  AG  
  AG  So what shall we do with L2ARC leakage ?
  AG 
  AG 
  AG Could you please try this patch
  AG http://cr.illumos.org/~webrev/skiselkov/3995/illumos-gate.patch ?
  AG 
  
  While applying the patch to the current version of arc.c (r260622) I found 
  the next trouble with compilation: 
  
  olaris/uts/common/fs/zfs/arc.c -o arc.o
  /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4628:18:
   error: use of
  undeclared identifier 'abl2'
  trim_map_free(abl2->b_dev->l2ad_vdev, abl2->b_daddr,
  ^
  1 error generated.
  *** Error code 1
  
  
  the code is - 
  
  if (zio->io_error != 0) {
          /*
           * Error - drop L2ARC entry.
           */
          list_remove(buflist, ab);
          ARCSTAT_INCR(arcstat_l2_asize, -l2hdr->b_asize);
          ab->b_l2hdr = NULL;
          trim_map_free(abl2->b_dev->l2ad_vdev, abl2->b_daddr,
              ab->b_size, 0);
          kmem_free(l2hdr, sizeof (l2arc_buf_hdr_t));
          ARCSTAT_INCR(arcstat_l2_size, -ab->b_size);
  }
  
  
  Looks like this part is FreeBSD-specific changes.
  Can somebody help with this part of the code?
  
 
 The first hunk of the patch is renaming of abl2 to l2hdr.
 
 -- 
 Andriy Gapon


Re[2]: ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-13 Thread Vladimir Sharun
Dear Andriy and FreeBSD community,

 I am not sure if the buffers are leaked somehow or if they are actually in 
 use.
 It's one of the very few places where data buffers are allocated without
 charging ARC.  In all other places it's quite easy to match allocations and
 deallocations.  But in L2ARC it is not obvious that all buffers get freed or
 when that happens.

After one week under load I think we figured out the cause: it's L2ARC. 
Here's the top's header for 7d17h of the runtime:

last pid: 46409;  load averages:  0.37,  0.62,  0.70 up 7+17:14:01  07:24:10
173 processes: 1 running, 171 sleeping, 1 zombie
CPU:  2.0% user,  0.0% nice,  3.5% system,  0.4% interrupt, 94.2% idle
Mem: 8714M Active, 14G Inact, 96G Wired, 1929M Cache, 3309M Buf, 3542M Free
ARC: 85G Total, 2558M MFU, 77G MRU, 28M Anon, 1446M Header, 4802M Other

ARC related tunables:

vm.kmem_size=110G
vfs.zfs.arc_max=90G
vfs.zfs.arc_min=42G

For more than 7 days of hard runtime the picture clearly shows: 
Wired minus ARC = 11..12Gb, the ARC grows and shrinks in the 80-87Gb range, and 
the system runs just fine.
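The Wired-minus-ARC figure quoted throughout this thread can be pulled straight from the two top(1) header lines. A sketch, assuming top's M/G unit suffixes; the parser and function names are mine, not from top's source:

```python
import re

# Sketch: extract "Wired" from a top(1) Mem: line and "Total" from the
# ARC: line, and return the delta in GiB. Handles the M and G unit
# suffixes that top prints in these headers.
def _to_gib(token):
    value, unit = float(token[:-1]), token[-1]
    return value / 1024 if unit == 'M' else value

def wired_minus_arc(mem_line, arc_line):
    wired = re.search(r'([\d.]+[MG]) Wired', mem_line).group(1)
    arc = re.search(r'ARC: ([\d.]+[MG]) Total', arc_line).group(1)
    return _to_gib(wired) - _to_gib(arc)
```

Applied to the header above (96G Wired, 85G ARC) it gives the 11Gb delta described.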

So what shall we do about the L2ARC leakage?


Re[2]: ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-06 Thread Vladimir Sharun
Dear Andriy and FreeBSD community,

I got a few minutes to run this dtrace hook; here's the output of a 15-minute 
run:

http://pastebin.com/pKm9kLwa

Does it explain anything?

 
 on 04/01/2014 14:50 Vladimir Sharun said the following:
 [snip]
  ARC: 28G Total, 2085M MFU, 20G MRU, 29M Anon, 1858M Header, 3855M Other
 [snip]
  ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
 [snip]
  zio_data_buf_131072: 131072,  0,  488217,   9,287155442,   0,   0
 
 I noticed a particular discrepancy between reported ARC usage and sizes of UMA
 zones used by ZFS code:
 
 488217 * 131072 = ~59GB right there.
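That back-of-the-envelope figure checks out; a one-liner to reproduce it (the helper name is illustrative):

```python
# Sketch: a UMA zone's USED count times its item size, converted to
# GiB, reproducing the ~59GB estimate for zio_data_buf_131072 above.
def zone_used_gib(used, item_size):
    return used * item_size / (1024 ** 3)
```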
 
 There are several possibilities for this discrepancy:
 - bad accounting or reporting of ARC stats
 - those 128K buffers being used in a special way and thus not accounted as ARC
 - some sort of resource leak
 
 You could try to use DTrace to gather the stacks of all code paths that lead 
 to
 allocation of those buffers.  Something like:
 
 fbt::zio_data_buf_alloc:entry
 /arg0 == 131072/
 {
 @[stack()] = count();
 }
 
 This could be a start for understanding the issue.
 


Re[2]: ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-06 Thread Vladimir Sharun
Dear Andriy, FreeBSD community,

Thank you for your suggestion; we will turn off the L2ARC and report whether 
the issue persists.
For now the test server has been rebooted with L2ARC turned off, and there are 
no allocations done in l2arc_feed_thread according to the dtrace hook you 
provided.

Let's imagine we find that the l2arc_feed_thread allocations are the cause; 
what shall our next step be?

The feedback from us will be here within a few days (we can't reproduce it faster)
 
 on 06/01/2014 13:14 Vladimir Sharun said the following:
  Dear Andriy and FreeBSD community,
  
  I got the few minutes run for this dtrace hook; here's the output for 15 
  minutes run:
  
  http://pastebin.com/pKm9kLwa
  
  Does it explain something ?
 
 The following makes me suspect a problem with L2ARC compression code.
 
 zfs.ko`l2arc_feed_thread+0x7d9
 kernel`fork_exit+0x9a
 kernel`0x8069ad6e
 95131
 
 I am not sure if the buffers are leaked somehow or if they are actually in 
 use.
 It's one of the very few places where data buffers are allocated without
 charging ARC.  In all other places it's quite easy to match allocations and
 deallocations.  But in L2ARC it is not obvious that all buffers get freed or
 when that happens.

ARC pressured out, how to control/stabilize ? (reformatted to text/plain)

2014-01-04 Thread Vladimir Sharun
Good day community,

We ran the system in production with r259544 on it: 128Gb RAM, 72Gb arc_max 
and 42Gb arc_min set in loader.conf.
The system ran both the application (which is FS-ops hungry) and 18Tb of 
storage on this setup.
From the start, ARC clearly showed 72Gb of memory consumed, but during the 
next few days it was pressured out to even lower than arc_min (after 14 days 
of uptime only 28Gb is used by ARC). The problem is: the less data in the ARC, 
the less performance from the entire system. For the first two days (while ARC 
slowly decreased to 55-60Gb) the system ran fine with excellent performance 
telemetry (application response time), up until 7-10 days, when it reached 
~30Gb and the performance fell off to an unacceptable level.

So the question is: how do we understand who eats memory from the ARC, and how 
do we control this memory eating?

top and vmstat output follow (vm.kmem_size is limited to 100Gb; if not, wired 
reaches approximately 107Gb with the same ARC pressure-out):

last pid: 13273;  load averages:  2.99,  1.55,  1.20   up 14+19:00:35  14:24:55
227 processes: 1 running, 226 sleeping
CPU:  4.6% user,  0.0% nice,  5.9% system,  1.0% interrupt, 88.5% idle
Mem: 667M Active, 30G Inact, 91G Wired, 1783M Cache, 3309M Buf, 933M Free
ARC: 28G Total, 2085M MFU, 20G MRU, 29M Anon, 1858M Header, 3855M Other
Swap:

$ vmstat -z
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

UMA Kegs:   384,  0, 208,   2, 208,   0,   0
UMA Zones: 2176,  0, 208,   0, 208,   0,   0
UMA Slabs:   80,  0, 2311694,  204306,718386402,   0,   0
UMA RCntSlabs:   88,  0,9229,  41,   18930,   0,   0
UMA Hash:   256,  0,   3,  12,  79,   0,   0
4 Bucket:32,  0,1785,   21090,1248973183,   0,   0
6 Bucket:48,  0, 185,   12763,148487075,   0,   0
8 Bucket:64,  0,  65,   20643,41333608,   0,   0
12 Bucket:   96,  0,9665,9482,11049020,   0,   0
16 Bucket:  128,  0, 649,4807,11971939,  11,   0
32 Bucket:  256,  0,   12630,   50790,31362215,  33,   0
64 Bucket:  512,  0,   23203,6781,17367241,133,   0
128 Bucket:1024,  0,   95458,   48746,810293851,317422,   0
vmem btag:   56,  0, 1411085,   96316,27776938,21231,   0
VM OBJECT:  256,  0,  771497,  279598,2938160439,   0,   0
RADIX NODE: 144,  0, 2598081, 1326882,4515451399,   0,   0
MAP:240,  0,   3,  61,   3,   0,   0
KMAP ENTRY: 128,  0,  25, 874,  25,   0,   0
MAP ENTRY:  128,  0,   25469,   45149,11668899620,   0,   0
VMSPACE:448,  0, 235, 782,149851562,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0
mt_zone:   4112,  0, 263,   0, 263,   0,   0
16:  16,  0, 2812047,   93027,53798617567,   0,   0
32:  32,  0, 9723566, 1887934,12748376945,   0,   0
64:  64,  0,10475477, 6543585,20857570861,   0,   0
128:128,  0, 1848882,  490254,23475247741,   0,   0
256:256,  0,  914734,  702251,12021626624,   0,   0
512:512,  0,1307, 581,10911947626,   0,   0
1024:  1024,  0,   28407, 233,234462675,   0,   0
2048:  2048,  0, 511,2463,2135442383,   0,   0
4096:  4096,  0,  507210,   85487,572913690,   0,   0
uint64 pcpu:  8,  0, 298, 598, 298,   0,   0
SLEEPQUEUE:  80,  0,1666, 721,1669,   0,   0
Files:   80,  0,4779,   10221,3345112793,   0,   0
TURNSTILE:  136,  0,1666, 414,1669,   0,   0
rl_entry:40,  0, 696, 804, 696,   0,   0
umtx pi: 96,  0,   0,   0,   0,   0,   0
MAC labels:  40,  0,   0,   0,   0,   0,   0
PROC:  1208,  0, 256, 473,149840920,   0,   0
THREAD:1168,  0,1514, 151,  783235,   0,   0
cpuset:  72,  0, 791,1134,   11746,   0,   0
audit_record:  1248,  0,   0,   0,   0,   0,   0
sendfile_sync:   64,  0,   0,   0,   0,   0,   0
mbuf_packet:256, 41943045,8248,4127,8662647947,   0,   0
mbuf:   256, 41943045, 314,   13696,64235586097,   0,   0
mbuf_cluster:  2048, 6553600,   12375,  41,   20500,   0,   0
mbuf_jumbo_page:   4096, 3276800, 311,2710,1565741448,   0,   0
mbuf_jumbo_9k: 9216, 970903,   0,   0,   0,   0,   0
mbuf_jumbo_16k:   16384, 546133,  

Re[2]: pf reply-to malfunction after r258468 (seems r258479)

2013-12-04 Thread Vladimir Sharun
Dear Gleb, 
Yesterday I (finally) got my server back to work and the problem disappear. 
Can't reproduce it anymore on r258865. 
On Tue, Dec 03, 2013 at 07:54:08PM +0200, Vladimir Sharun wrote:
V Dear Gleb, 
V Unfortunately can't boot both revisions kernel, it hangs on trying to mount 
root from ssdzfs  (which is my zfs root). 
V   Vladimir,

You can run the kernel that boots, but update only sys/netpfil/pf
directory to suspected revision(s), if you think this is related
to changes in pf.


-- 
Totus tuus, Glebius.


pf reply-to malfunction after r258468 (seems r258479)

2013-12-03 Thread Vladimir Sharun
I have a test setup with a direct internet connection Real_IP_A and a netgraph 
tunnel with Real_IP_B. 
I have used a reply-to pf ruleset to send all the traffic back via the tunnel 
if it came in via the tunnel: 

pass in quick on $tunnel_if reply-to ($tunnel_if 10.1.0.1) \ 
proto tcp from any to Real_IP_B port 443 

And it worked, at least in r258468. After a hardware change/reboot yesterday I 
got strange performance via the netgraph tunnel. Investigation clearly shows: 
it is not the tunnel itself, because the endpoint can saturate wire speed, but 
when we run the routable schema we get very low throughput. Deeper analysis 
shows packet duplication from reply-to; it looks like this: 
09:36:59.576405 IP Real_IP_B.443 > Testbed.43775: Flags [.], seq 523587:525035, ack 850, win 1040, options [nop,nop,TS val 3415853201 ecr 44833816], length 1448 
09:36:59.576413 IP Real_IP_B.443 > Testbed.43775: Flags [.], seq 523587:525035, ack 850, win 1040, options [nop,nop,TS val 3415853201 ecr 44833816], length 1448 
09:36:59.577583 IP Testbed.43775 > Real_IP_B.443: Flags [.], ack 525035, win 1018, options [nop,nop,TS val 44834046 ecr 3415853201], length 0 
09:36:59.577713 IP Testbed.43775 > Real_IP_B.443: Flags [.], ack 525035, win 1040, options [nop,nop,TS val 44834046 ecr 3415853201], length 0 

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re[2]: pf reply-to malfunction after r258468 (seems r258479)

2013-12-03 Thread Vladimir Sharun
Dear Gleb, 
Is rebuilding the kernel enough? 


  Vladimir,

On Tue, Dec 03, 2013 at 11:52:26AM +0200, Vladimir Sharun wrote:
V [... original report quoted in full above ...] 

I doubt that r258479 can cause a regression in reply-to.

Can you please test r258478 and r258479 and confirm or decline that?

-- 
Totus tuus, Glebius.
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re[2]: pf reply-to malfunction after r258468 (seems r258479)

2013-12-03 Thread Vladimir Sharun
Dear Gleb, 
Unfortunately I can't boot the kernel at either revision; it hangs trying to 
mount root from ssdzfs (which is my ZFS root). 
  Vladimir,

On Tue, Dec 03, 2013 at 11:52:26AM +0200, Vladimir Sharun wrote:
V [... original report quoted in full above ...] 

I doubt that r258479 can cause a regression in reply-to.

Can you please test r258478 and r258479 and confirm or decline that?

-- 
Totus tuus, Glebius.
 
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org

Recent nandfs commits broke buildworld with clang

2012-05-20 Thread Vladimir Sharun
env MACHINE=amd64 CPP=/usr/bin/clang-cpp sh /usr/src/usr.bin/kdump/mkioctls print /usr/obj/usr/src/tmp/usr/include > ioctl.c
stdin:34:10: fatal error: 'fs/nandfs/nandfs_fs.h' file not found
#include <fs/nandfs/nandfs_fs.h>
         ^
1 error generated.
/bin/sh /usr/src/usr.bin/kdump/../../sys/kern/makesyscalls.sh  
/usr/src/usr.bin/kdump/../../sys/amd64/linux32/syscalls.master 
/usr/src/usr.bin/kdump/linux_syscalls.conf
echo "int nlinux_syscalls = sizeof(linux_syscallnames) / sizeof(linux_syscallnames[0]);" >> linux_syscalls.c
rm -f .depend
CC='/usr/bin/clang' mkdep -f .depend -a    -I/usr/src/usr.bin/kdump/../ktrace 
-I/usr/src/usr.bin/kdump -I/usr/src/usr.bin/kdump/../.. -I. -std=gnu99  
kdump_subr.c /usr/src/usr.bin/kdump/kdump.c ioctl.c 
/usr/src/usr.bin/kdump/../ktrace/subr.c linux_syscalls.c
ioctl.c:57:10: fatal error: 'fs/nandfs/nandfs_fs.h' file not found
#include <fs/nandfs/nandfs_fs.h>
         ^
1 error generated.
mkdep: compile failed
*** [.depend] Error code 1

r235624 on amd64

# clang -v
FreeBSD clang version 3.1 (branches/release_31 155985) 20120503
Target: x86_64-unknown-freebsd10.0
Thread model: posix

Didn't test it with stock gcc.
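For background, mkioctls generates ioctl.c by scanning the staged include tree for 
ioctl definitions and emitting an #include line for every header that contains one, 
so a header that is referenced but never installed into 
/usr/obj/usr/src/tmp/usr/include turns into exactly this "file not found" failure. 
A loose sketch of that scan (the scratch directory and the NANDFS_IOC_EXAMPLE 
macro are invented for illustration):

```shell
# Build a tiny stand-in for the staged include tree.
INC=$(mktemp -d)
mkdir -p "$INC/fs/nandfs" "$INC/sys"
# Hypothetical ioctl definition -- any _IO/_IOR/_IOW/_IOWR use qualifies.
echo "#define NANDFS_IOC_EXAMPLE _IOR('N', 1, int)" > "$INC/fs/nandfs/nandfs_fs.h"
# A header with no ioctls, which the scan must skip.
echo "struct plain { int x; };" > "$INC/sys/plain.h"

# Emit an #include line for every header defining an ioctl, as the
# generated ioctl.c does; only the nandfs header qualifies here.
(cd "$INC" && grep -rl '_IO[RW]*(' . | sed 's|^\./|#include <|; s|$|>|')

rm -rf "$INC"
```

If the scan emits a header that `make includes` never staged, the generated file 
fails to preprocess just as in the log above.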
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


building port zsh (static version) failed on r235325

2012-05-12 Thread Vladimir Sharun
cc -L/usr/local/lib -rpath=/usr/lib:/usr/local/lib -static   -o zsh main.o  
`cat stamp-modobjs`   -L/usr/local/lib -Wl,-R/usr/local/lib -lpcre -liconv 
-lncursesw -lrt -lm  -lc
/usr/lib/libc.a(jemalloc_jemalloc.o): In function `calloc':
jemalloc_jemalloc.c:(.text+0x1d80): multiple definition of `calloc'
mem.o:mem.c:(.text+0xf90): first defined here
/usr/lib/libc.a(jemalloc_jemalloc.o): In function `malloc':
jemalloc_jemalloc.c:(.text+0x1f40): multiple definition of `malloc'
mem.o:mem.c:(.text+0x900): first defined here
/usr/lib/libc.a(jemalloc_jemalloc.o): In function `realloc':
jemalloc_jemalloc.c:(.text+0x3380): multiple definition of `realloc'
mem.o:mem.c:(.text+0xfe0): first defined here
/usr/lib/libc.a(jemalloc_jemalloc.o): In function `free':
jemalloc_jemalloc.c:(.text+0x3940): multiple definition of `free'
mem.o:mem.c:(.text+0x8f0): first defined here
*** [zsh] Error code 1
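The collision is expected for this link: zsh's mem.o carries its own strong 
definitions of malloc, calloc, realloc and free (its internal allocator), and with 
-static the linker also pulls jemalloc's definitions out of libc.a, so what dynamic 
linking would resolve by symbol interposition becomes a hard error. The colliding 
symbols can be enumerated from nm output; a sketch with simulated `nm -A` lines 
(addresses taken from the error messages above — a real invocation would be along 
the lines of `nm -A mem.o /usr/lib/libc.a | grep ' T '`):

```shell
# Simulated 'nm -A' output for the two objects named in the link error.
cat <<'EOF' > /tmp/syms.txt
mem.o: 0000000000000900 T malloc
mem.o: 00000000000008f0 T free
mem.o: 0000000000000f90 T calloc
mem.o: 0000000000000fe0 T realloc
libc.a:jemalloc_jemalloc.o: 0000000000001f40 T malloc
libc.a:jemalloc_jemalloc.o: 0000000000003940 T free
libc.a:jemalloc_jemalloc.o: 0000000000001d80 T calloc
libc.a:jemalloc_jemalloc.o: 0000000000003380 T realloc
EOF
# Any symbol with a strong text (T) definition in more than one object
# becomes a "multiple definition" error under static linking.
awk '$3 == "T" { n[$4]++ } END { for (s in n) if (n[s] > 1) print s }' \
    /tmp/syms.txt | sort
```

This prints the four allocator entry points (calloc, free, malloc, realloc), 
matching the four "multiple definition" errors in the log.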


# cc -v
Using built-in specs.
Target: amd64-undermydesk-freebsd
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 4.2.1 20070831 patched [FreeBSD]

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Re: buildworld fails on FreeBSD 7.x for HEAD from 19.04.2012

2012-04-24 Thread Vladimir Sharun



=== usr.bin/file (all)
/usr/bin/clang -O2 -pipe  -DMAGIC='/usr/share/misc/magic'
-DHAVE_CONFIG_H -I/usr/src/usr.bin/file/../../lib/libmagic -std=gnu99
-Qunused-arguments -fstack-protector -Wsystem-headers -Wall
-Wno-format-y2k -W -Wno-unused-parameter -Wstrict-prototypes
-Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wcast-qual
-Wwrite-strings -Wswitch -Wshadow -Wunused-parameter -Wcast-align
-Wchar-subscripts -Winline -Wnested-externs -Wredundant-decls
-Wold-style-definition -Wno-pointer-sign -Wno-empty-body
-Wno-string-plus-int  -o file file.o -lmagic -lz
file.o: In function `main':
/usr/src/usr.bin/file/../../contrib/file/file.c:(.text+0x717): undefined
reference to `magic_getpath'
/usr/src/usr.bin/file/../../contrib/file/file.c:(.text+0x7df): undefined
reference to `magic_list'
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
*** [file] Error code 1

r234657

clang -v:
FreeBSD clang version 3.1 (trunk 154661) 20120413
Target: x86_64-unknown-freebsd10.0
Thread model: posix

FreeBSD 10.0-CURRENT #6: Thu Apr 12 08:56:05 EEST 2012 amd64

make buildworld without j's

On Sun, Apr 22, 2012 at 09:06:18AM -0700, Garrett Cooper wrote:
  On 4/20/2012 5:16 AM, Jan Sieka wrote:
  I can't build world from recent sources (HEAD as of 2012.04.19 11:06:48
  UTC) on a machine running FreeBSD 7.3.
...
 Ugh. The usecase (that's now broken) is that Jan from Semihalf might
 have been running CURRENT builds on an older (stable) build machine.

Let's not guess.  If you've found that any version of 10-CURRENT cannot
build HEAD post r234449 please let me know.

I've verified that I can build HEAD on 8.3-PRERELEASE (r231882).

-- 
-- David  (obr...@freebsd.org)

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


Recent changes in libthr broke base bind package tools and named on amd64

2012-04-10 Thread Vladimir Sharun
I recently ran installworld on my test setup and found that the bind tools dig 
and host hang during use (latest bind 9.8.2 in the -CURRENT base). The same 
happens with named. Replacing /lib/libthr.so.3 with a previous build (26 March) 
eliminates the problem.

The backtrace is the same every time:
(gdb) bt
#0  0x00080123ae4c in kevent () from /lib/libc.so.7
#1  0x0052a250 in ?? ()
#2  0x000800d64cdd in pthread_create () from /lib/libthr.so.3
#3  0x in ?? ()
Error accessing memory address 0x7f7fc000

Hung processes can be killed only with SIGKILL.

I see correlated messages in this list with the subject "recent update breaks 
some ports". I assume those ports use libthr as well.

The whole system is compiled (in my case) with the stock gcc 4.2.1, and the 
kernel with the base clang. By the way, buildworld fails with clang.


# uname -rp
10.0-CURRENT amd64
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org