Re: dhcrelay: send_packet: No buffer space available

2016-02-13 Thread Kapetanakis Giannis

On 12/02/16 18:56, Stuart Henderson wrote:

On 2016-02-12, Kapetanakis Giannis  wrote:

Hi,

I have a CARP'd firewall which is using dhcrelay to forward DHCP
requests to another CARP'd DHCP server.
After upgrading to the Feb 4 snapshot I'm seeing these in my logs:

What version were you running before?

To establish whether it's a dhcrelay problem or something to do with carp
can you try dhcrelay from slightly older source e.g. 'cvs up -D 2016/02/01'?



The previous version was from July 2015, so it was quite far behind 
current. I guess it will not work with the current kernel and the 
pledge(2)/tame(2) changes, correct?


G



Re: Daily digest, Issue 3715 (21 messages)

2016-02-13 Thread Alan Corey
Re: dhcrelay: send_packet: No buffer space available

If it's easy to do, try a different network card.  The only time I've
ever seen that error, it came from a urtwn card under OpenBSD 5.7 and
earlier.  But Stuart knows a lot more about it than I do.

On 2/13/16, owner-m...@openbsd.org  wrote:
> The pre-dawn daily digest
> Volume 1 : Issue 3715 : "text" Format
>
> Messages in this Issue:
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   Re: Kernel panic during installation
>   OpenBGPd RTBH peer with match clause on community
>   Re: OpenBGPd RTBH peer with match clause on community
>   Re: OpenBGPd RTBH peer with match clause on community
>   Re: NVM Express (NVMe) support status
>   Re: Problems using squid as transparent proxy for SSL/TLS
>   Re: dhcrelay: send_packet: No buffer space available
>   Re: dhcrelay: send_packet: No buffer space available
>   Re: OpenBSD5.7, hangs on ppb6 "Intel 5000 PCIE" Dell poweredge 1950
>   how to mount a *dmg in -current
>   Re: how to mount a *dmg in -current
>   Re: how to mount a *dmg in -current
>   pfsync and table
>   Re: pfsync and table
>   Re: sshfs man page, -o idmap=user
>
> --
>
> Date: Fri, 12 Feb 2016 07:51:34 -0500
> From: Donald Allen 
> To: misc 
> Subject: Re: Kernel panic during installation
> Message-ID:
> 
>
> On Feb 12, 2016 05:08, "Stefan Sperling"  wrote:
>>
>> On Thu, Feb 11, 2016 at 08:42:21PM -0500, Donald Allen wrote:
>> > When attempting to install the 2/8 snapshot on my Thinkpad x-250, I
> chose
>> > to configure the wireless network interface (iwm). This resulted in the
>> > following:
>> >
>> > iwm0: could not read firmware iwm-7265-9 (error 2)
>> > panic: attempt to execute user address 0x0 in supervisor mode
>>
>> Do you have a trace from ddb please?
>
> There was no entry to ddb. There was one additional message after the
> above:
>
> syncing disks... done
>
> and that was all she wrote. (I took a photo of the screen.)
>
> If you have a suggestion for how to get the trace, I will certainly try to
> help. Or maybe build a kernel with some additional printfs to try to
> isolate where this is happening?
>
>
> --
>
> Date: Fri, 12 Feb 2016 07:45:35 -0800
> From: Chris Cappuccio 
> To: Donald Allen 
> Cc: misc 
> Subject: Re: Kernel panic during installation
> Message-ID: <20160212154535.gb5...@ref.nmedia.net>
>
> Donald Allen [donaldcal...@gmail.com] wrote:
>> On Feb 12, 2016 05:08, "Stefan Sperling"  wrote:
>> >
>> > On Thu, Feb 11, 2016 at 08:42:21PM -0500, Donald Allen wrote:
>> > > When attempting to install the 2/8 snapshot on my Thinkpad x-250, I
>> chose
>> > > to configure the wireless network interface (iwm). This resulted in
>> > > the
>> > > following:
>> > >
>> > > iwm0: could not read firmware iwm-7265-9 (error 2)
>> > > panic: attempt to execute user address 0x0 in supervisor mode
>> >
>> > Do you have a trace from ddb please?
>>
>> There was no entry to ddb. There was one additional message after the
>> above:
>>
>> syncing disks... done
>>
>> and that was all she wrote. (I took a photo of the screen.)
>>
>> If you have a suggestion for how to get the trace, I will certainly try
>> to
>> help. Or maybe build a kernel with some additional printfs to try to
>> isolate where this is happening?
>
> sysctl ddb.panic=1 ??
>

Does softraid RAID1 read from the fastest mirror(s), or support a user-specified device read priority order, nowadays? Does it take a broken disk out of use?

2016-02-13 Thread Tinker

Hi,

1)
http://www.openbsd.org/papers/asiabsdcon2010_softraid/softraid.pdf page 
3 "2.2 RAID 1" says that it reads "on a round-robin basis from all 
active chunks", i.e. read operations are spread evenly across disks.


Since then, has anyone implemented selective reading based on observed 
read operation time, or a user-specified device read priority order?



That would allow a softraid RAID1 of, say, two SSD mirrors plus one 
HDD mirror, which would give the best combination of IO performance and 
data security OpenBSD offers today.


2)
Also, if there's a read/write failure (or excessive time taken for a 
single operation, say 15 seconds), will softraid RAID1 learn to take 
the broken disk out of use?


Thanks,
Tinker



How extensive is OpenBSD's write caching (for softdep or async-mounted UFS, as long as I never fsync())?

2016-02-13 Thread Tinker

Hi,

How much of my file writing, and of filesystem operations such as 
creating a new file or directory, will land in OpenBSD's disk/write 
cache without touching the disk before the respective operation returns 
to my program, for softdep or async-mounted UFS media when I never 
fsync()?



This is relevant to know for any use case where there may be a big 
write load to a magnetic disk *and* there's lots of RAM and "sysctl 
kern.bufcachepercent" is high.
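For reference, the knob in question can be set persistently like this (a sketch; 90 is the value used in a test later in this thread, not a recommendation):

```
# /etc/sysctl.conf
kern.bufcachepercent=90		# let the buffer cache use up to 90% of RAM
```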


If those ops are done in a way that is synchronous with the magnetic 
disk, then the actual fopen(), fwrite(), fread() (for re-reads of data 
that has been written but is still only in the OS RAM cache), etc. 
might be so slow that a program would need to implement its own write 
cache to absorb even small spikes in write activity.


Sorry for the fuss, but neither "man" nor googling taught me anything.

Thanks!!
Tinker



Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Tinker

Hi,

Some quite deep reading [1] taught me that, at least until quite 
recently, there was a ~3GB cap on the buffer cache, independent of 
architecture and system RAM size.


Reading the source history of vfs_bio.c [2] gives me the vague 
impression that this cap is still there today.


Just wanted to check: has this cap been removed, or is there any plan 
to remove it in the coming months?


Thanks,
Tinker

(Sorry for the spam, hope it serves some constructive purpose.)


[1]:
 * 
http://unix.stackexchange.com/questions/61459/does-sysctl-kern-bufcachepercent-not-work-in-openbsd-5-2-above-1-7gb 
mentions:

   "... the buffer cache being restricted to 32bit DMA memory ...
   http://www.openbsd.org/cgi-bin/cvsweb/src/sys/kern/vfs_bio.c
   http://marc.info/?l=openbsd-tech&m=130174663714841&w=2 "

 * Ted mentions:
   "One hopeful change is to finally add support for highmem on amd64 
systems."

   http://undeadly.org/cgi?action=article&sid=20140908113732

 * Bob Beck mentioned something that maybe is to this effect in 
http://www.openbsd.org/papers/bsdcan13-buf/
   in the last-but one slide,  that is 
http://www.openbsd.org/papers/bsdcan13-buf/mgp00017.html .


[2]
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/kern/vfs_bio.c



Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Karel Gardas
I think you would also like to investigate this one:
http://www.undeadly.org/cgi?action=article&sid=2006061416

> Some quite deep reading [1] taught me that at least quite recently, there
> was a ~3GB cap on the buffer cache, independent of architecture and system
> RAM size.



Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Tinker

Dear Karel,

Thanks - wait - this post from 2006 you mentioned: is it saying that a 
>32bit/>~3GB buffer cache actually IS SUPPORTED and WORKS on any AMD64 
*with IOMMU* support in the CPU, and was working all along??


(That would mean that I misunderstood the references I posted in the 
previous email, because in actuality the 32bit/~3GB constraint only 
applies in certain use cases, i.e. on non-IOMMU CPUs.)


Please clarify!



So if so, does any Intel processor with the "Intel® Virtualization 
Technology for Directed I/O (VT-d)" feature, such as for example 
http://ark.intel.com/products/81061/ , or any of the CPUs listed on 
https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware , have 
that support?


Does the motherboard need specific support too, as Wikipedia indicates - 
though surely any Xeon server motherboard from the last 3-4 years must 
have it, right?


Thank you everyone for your excellent work with OpenBSD!

Best regards,
Tinker

[1]
David Mazieres linked to two docs in there; those are only on 
archive.org now:

https://web.archive.org/web/20150814051509/http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/vt-directed-io-spec.pdf
https://web.archive.org/web/20081218031805/http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/34434.pdf


On 2016-02-14 01:15, Karel Gardas wrote:

I think you would also like to investigate this one:
http://www.undeadly.org/cgi?action=article&sid=2006061416

Some quite deep reading [1] taught me that at least quite recently, 
there
was a ~3GB cap on the buffer cache, independent of architecture and 
system

RAM size.




Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Karel Gardas
I'm afraid you read too quickly and w/o attention to detail, please
reread and pay special attention to the last paragraph. Especially to:

"IOMMU is present in all "real" AMD64 machines, but not the Intel
clones. Unfortunately, OpenBSD support for IOMMU on the AMD machines
is not quite ready for primetime (code exists, but "real life" has
consorted against me finishing it)."

On Sat, Feb 13, 2016 at 7:35 PM, Tinker  wrote:
> Dear Karel,
>
> Thanks - wait - this post from 2006 you mentioned now, is it saying that
> actually >32bit/>~3GB buffer cache IS SUPPORTED/WORKS on any AMD64 *with
> IOMMU* support in the CPU, and was working all the time??
>
> (That would mean that I misunderstood those references I posted in the
> previous email because in actuality the 32bit/~3GB constraint only is in
> certain usecases that is on non-IOMMU CPU:s.)
>
> Please clarify!
>
>
>
> So if so, any Intel processor with the "Intel® Virtualization Technology
for
> Directed I/O (VT-d)" feature such as for example this one
> http://ark.intel.com/products/81061/ , or any of the CPU:s listed on
> https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware , has that
> support?
>
> Does the motherboard need specific support too as Wikipedia indicates -
> though at least any Xeon server motherboard from the last 3-4 years must
for
> sure have it right?
>
> Thank you everyone for your excellent work with OpenBSD!
>
> Best regards,
> Tinker
>
> [1]
> David Mazieres linked to two docs in there, those are only on archive.org
> now:
>
https://web.archive.org/web/20150814051509/http://www.intel.com/content/dam/w
ww/public/us/en/documents/product-specifications/vt-directed-io-spec.pdf
>
https://web.archive.org/web/20081218031805/http://www.amd.com/us-en/assets/co
ntent_type/white_papers_and_tech_docs/34434.pdf
>
>
>
> On 2016-02-14 01:15, Karel Gardas wrote:
>>
>> I think you would also like to investigate this one:
>> http://www.undeadly.org/cgi?action=article&sid=2006061416
>>
>>> Some quite deep reading [1] taught me that at least quite recently, there
>>> was a ~3GB cap on the buffer cache, independent of architecture and
>>> system
>>> RAM size.



"Abort trap" when pledge()d and compiled with -pg

2016-02-13 Thread Michal Mazurek
When compiling a program that calls pledge(2) with "-pg" the resulting
binary will execute seemingly fine, but at the very end die with:
Abort trap (core dumped)
I think the problem lies in a call to profil(2).

Is this a bug or a feature?

-- 
Michal Mazurek



Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Tinker
Aha. So the article is saying that full IOMMU support is still pending 
on all AMD64 machines (meaning any Intel- or AMD-manufactured processor 
with VT-d etc.), and you're saying that this is what needs to be 
implemented for the buffer cache to finally get >32bit/>~3GB support?


Are there plans today to implement this and then get the >32bit/>~3GB 
support going?


(I don't understand exactly how these things fit together, this is why I 
asked tentatively.)


Thanks, Tinker

On 2016-02-14 02:03, Karel Gardas wrote:

I'm afraid you read too quickly and w/o attention to detail, please
reread and pay special attention to the last paragraph. Especially to:

"IOMMU is present in all "real" AMD64 machines, but not the Intel
clones. Unfortunately, OpenBSD support for IOMMU on the AMD machines
is not quite ready for primetime (code exists, but "real life" has
consorted against me finishing it)."

On Sat, Feb 13, 2016 at 7:35 PM, Tinker  wrote:

[...]




Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Stuart Henderson
On 2016-02-13, Tinker  wrote:
> Hi,
>
> Some quite deep reading [1] taught me that at least quite recently, 
> there was a ~3GB cap on the buffer cache, independent of architecture 
> and system RAM size.
>
> Reading the source history of vfs_bio.c [2] gives me a vague impression 
> that this cap is there also today.
>
> Just wanted to check, has this cap been removed, or is there any plan to 
> remove it next months from now?

There was this commit, I don't *think* it got reverted.



CVSROOT:/cvs
Module name: src
Changes by: b...@cvs.openbsd.org 2013/06/11 13:01:20

Modified files:
sys/kern   : kern_sysctl.c spec_vnops.c vfs_bio.c
 vfs_biomem.c vfs_vops.c
sys/sys: buf.h mount.h
sys/uvm: uvm_extern.h uvm_page.c
usr.bin/systat : iostat.c

Log message:
High memory page flipping for the buffer cache.

This change splits the buffer cache free lists into lists of dma reachable
buffers and high memory buffers based on the ranges returned by pmemrange.
Buffers move from dma to high memory as they age, but are flipped to dma
reachable memory if IO is needed to/from and high mem buffer. The total
amount of buffers  allocated is now bufcachepercent of both the dma and
the high memory region.

This change allows the use of large buffer caches on amd64 using more than
4 GB of memory

ok tedu@ krw@ - testing by many.



Re: Accessing USB with OpenBSD 5.7/amd64

2016-02-13 Thread jla
Hi Richard

Same issue for me. Did you get any answer?

Regards
Jean-Louis



Re: Kernel panic during installation

2016-02-13 Thread Stuart Henderson
On 2016-02-12, Donald Allen  wrote:
> On Fri, Feb 12, 2016 at 3:42 PM, Stefan Sperling  wrote:
>
>> On Fri, Feb 12, 2016 at 03:03:46PM -0500, Donald Allen wrote:
>> > I just used this exchange as an example to a friend who buys everything
>> > Apple and then complains when their software is buggy. This is a perfect
>> > example of how a direct negative feedback path makes software converge
>> > quickly to correctness, something you don't get with the big proprietary
>> > players like Apple, Microsoft, etc.
>>
>> That's correct, and it's why we're all here.
>>
>> Make sure your friend understands that we're not trying to provide a
>> drop-in replacement for Apple, to avoid potential disappointment ;-)
>>
>
> Now *I'm* disappointed. I was looking forward to the OpenBSD equivalent of
> Siri, which I expected would be called Theo. Can you imagine the response
> to "Theo -- where is the nearest good pizza place?".

mg
(esc) x theo



Re: Kernel panic during installation

2016-02-13 Thread Ingo Schwarze
Stuart Henderson wrote on Sat, Feb 13, 2016 at 08:45:35PM +:
> On 2016-02-12, Donald Allen  wrote:
>> On Fri, Feb 12, 2016 at 3:42 PM, Stefan Sperling  wrote:
>>> On Fri, Feb 12, 2016 at 03:03:46PM -0500, Donald Allen wrote:

 I just used this exchange as an example to a friend who buys everything
 Apple and then complains when their software is buggy. This is a perfect
 example of how a direct negative feedback path makes software converge
 quickly to correctness, something you don't get with the big proprietary
 players like Apple, Microsoft, etc.

>>> That's correct, and it's why we're all here.
>>> Make sure your friend understands that we're not trying to provide a
>>> drop-in replacement for Apple, to avoid potential disappointment ;-)

>> Now *I'm* disappointed. I was looking forward to the OpenBSD equivalent
>> of Siri, which I expected would be called Theo. Can you imagine the
>> response to "Theo -- where is the nearest good pizza place?".

> mg
> (esc) x theo

But, but,...  he neither talks about The Hose And Hound in Inglewood
in that mode, nor do they serve pizza!

SCNR,
  Ingo



USB device descriptor access bug (?)

2016-02-13 Thread Dave Vandervies
On an amd64 5.8-release system running under VMWare, with the virtual USB
controller configured as USB 2.0, I'm seeing a problem getting device
descriptor strings off of USB devices (or, at least, the one that I
care about).
With the virtual USB configured to support USB 3.0, the problem goes away.
(I don't have any non-virtual machines available to try it out on,
so I'm not sure if this is caused in part by a VMWare quirk, but my
coworkers who use Linux under VMWare report that it works for them with
a 2.0-only virtual USB controller, so it's not just VMWare.)

It manifests as dfu-util (either from ports or built locally) failing
to upload a new firmware image to an STM32 chip via the USB bootloader.
I've traced it through dfu-util to a failure of a libusb call (using the
libusb from ports), libusb_get_string_descriptor_ascii, which dfu-util
responds to by populating the descriptor with "UNKNOWN" and failing
at a later point when it tries to parse the chip memory map out of it.
Asking it to list the devices it sees instead of trying to upload outputs
the same "UNKNOWN" in place of the correct descriptor.


Any suggestions for what to look at next to debug this?
(I'm going to assume the answer will include installing -current and
building from source, and go ahead and allocate time to do that this
afternoon.)


dave

dmesg, usbdevs, pcidump, and dfu-util output from both non-working and
working configurations is below.
(Scan for '^\$' to find the next one.)

Virtual USB 2.0 only (not working)
==

Script started on Sat Feb 13 13:14:43 2016
dj3vande@termite64:~ (0)
$ dmesg
OpenBSD 5.8 (GENERIC.MP) #1229: Wed Aug  5 08:08:22 MDT 2015
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 3204382720 (3055MB)
avail mem = 3103436800 (2959MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xe0010 (556 entries)
bios0: vendor Phoenix Technologies LTD version "6.00" date 05/20/2014
bios0: VMware, Inc. VMware Virtual Platform
acpi0 at bios0: rev 2
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET WAET
acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3) S3F0(S3) 
S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) S10F(S3) S11F(S3) 
S12F(S3) S13F(S3) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz, 2600.07 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 65MHz
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz, 2599.19 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 0, package 2
ioapic0 at mainbus0: apid 1 pa 0xfec0, version 11, 24 pins
acpimcfg0 at acpi0 addr 0xf000, bus 0-127
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
acpibat0 at acpi0: BAT1 not present
acpibat1 at acpi0: BAT2 not present
acpiac0 at acpi0: AC unit online
acpibtn0 at acpi0: SLPB
acpibtn1 at acpi0: LID_
pvbus0 at mainbus0: VMware
vmt0 at pvbus0
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x01
ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x01
pci1 at ppb0 bus 1
pcib0 at pci0 dev 7 function 0 "Intel 82371AB PIIX4 ISA" rev 0x08
pciide0 at pci0 dev 7 function 1 "Intel 82371AB IDE" rev 0x01: DMA, channel 0 
configured to compatibility, channel 1 configured to compatibility
atapiscsi0 at pciide0 channel 0 drive 0
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0:  ATAPI 5/cdrom 
removable
cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 disabled (no drives)
piixpm0 at pci0 dev 7 function 3 "Intel 82371AB Power" rev 0x08: SMBus disabled
"VMware VMCI" rev 0x10 at pci0 dev 7 function 7 not configured
vga1 at pci0 dev 15 function 0 "VMware SVGA II" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
ppb1 at pci0 dev 17 function 0 "VMware PCI" rev 0x02
pci2 at ppb1 bus 2
uhci0 at pci2 dev 0 function 0 "VMware UHCI" rev 0x00: apic 1 

Setting setenv=DISPLAY=:1 in login.conf problem

2016-02-13 Thread Jiri B
Setting DISPLAY=:1 as setenv in /etc/login.conf

selenium:\
:setenv=DISPLAY=:1:\
:tc=daemon:

is a problem, as the colon is a separator and thus the value is lost.
Escaping or quoting did not work either.

I put `env' in my selenium rc script to get the environment vars,
and DISPLAY is unset:

...
+ pgrep -q -xf /usr/local/jdk-1.7.0/bin/java -jar 
/usr/local/share/selenium/selenium-server-standalone.jar -log 
/var/log/selenium/selenium.log
selenium
doing rc_start
LOGNAME=_selenium
HOME=/var/selenium
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin
TERM=screen
DISPLAY=
SHELL=/bin/sh
USER=_selenium
...

Any idea how to trick login.conf to pass DISPLAY value correctly?

j.



Re: Setting setenv=DISPLAY=:1 in login.conf problem

2016-02-13 Thread Philip Guenther
On Sat, Feb 13, 2016 at 3:54 PM, Jiri B  wrote:
> Setting DISPLAY=:1 as setenv in /etc/login.conf
>
> selenium:\
> :setenv=DISPLAY=:1:\
> :tc=daemon:
>
> is a problem as colon is a separator and thus value is lost.
> Escaping or quoting did not work too.
...
> Any idea how to trick login.conf to pass DISPLAY value correctly?

man login.conf
...
CAPABILITIES
 Refer to getcap(3) for a description of the file layout.  All entries in
 the login.conf file are either boolean or use a `=' to separate the
...


man 3 getcap
...
 String capability values may contain any character.  Non-printable ASCII
 codes, new lines, and colons may be conveniently represented by the use
 of escape sequences:

   ^X      ('X' & 037)  control-X
   \b, \B  (ASCII 010)  backspace
   \t, \T  (ASCII 011)  tab
   \n, \N  (ASCII 012)  line feed (newline)
   \f, \F  (ASCII 014)  form feed
   \r, \R  (ASCII 015)  carriage return
   \e, \E  (ASCII 033)  escape
   \c, \C  (:)          colon
...
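Putting the two man page excerpts together, the colon should be writable with the `\c` escape; the entry would then become (a sketch of the fix, untested here):

```
selenium:\
	:setenv=DISPLAY=\c1:\
	:tc=daemon:
```

getcap(3) decodes `\c` to a literal colon, so the daemon should see DISPLAY=:1.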



Re: Setting setenv=DISPLAY=:1 in login.conf problem

2016-02-13 Thread Jiri B
On Sat, Feb 13, 2016 at 04:28:48PM -0800, Philip Guenther wrote:
> On Sat, Feb 13, 2016 at 3:54 PM, Jiri B  wrote:
> > Setting DISPLAY=:1 as setenv in /etc/login.conf
> >
> > selenium:\
> > :setenv=DISPLAY=:1:\
> > :tc=daemon:
> >
> > is a problem as colon is a separator and thus value is lost.
> > Escaping or quoting did not work too.
> ...
> > Any idea how to trick login.conf to pass DISPLAY value correctly?
> 
> man login.conf
> ...
> CAPABILITIES
>  Refer to getcap(3) for a description of the file layout.  All entries in
>  the login.conf file are either boolean or use a `=' to separate the
> ...
> 
> 
> man 3 getcap
> ...
>  String capability values may contain any character.  Non-printable ASCII
>  codes, new lines, and colons may be conveniently represented by the use
>  of escape sequences:
> 
>    ^X      ('X' & 037)  control-X
>    \b, \B  (ASCII 010)  backspace
>    \t, \T  (ASCII 011)  tab
>    \n, \N  (ASCII 012)  line feed (newline)
>    \f, \F  (ASCII 014)  form feed
>    \r, \R  (ASCII 015)  carriage return
>    \e, \E  (ASCII 033)  escape
>    \c, \C  (:)          colon
> ...

Thank you! PEBKAC solved.

j.



Re: How extensive is OpenBSD's write caching (for softdep or async-mounted UFS, as long as I never fsync())?

2016-02-13 Thread Tinker
Did two tests, one with async and one with softdep, on amd64, 
5.9-current, UFS.


(I checked "dd"'s sources and there is no fsync() anywhere in there.

The bufcache setting was 90, there was 3GB of free RAM, and I pushed 
2GB of data to disk using "dd".


It took 12 and 15 seconds respectively, which is the hard drive's write 
speed - the buffer cache would of course have absorbed this in ~0 
seconds.)



So, both runs showed that OpenBSD does *not* do any write caching to 
speak of, at all.



This means that if a program wants write caching, it needs to 
implement it itself.


Good to know.

Tinker

On 2016-02-13 23:47, Tinker wrote:

[...]




Re: Buffer cache made to use >32bit mem addresses (i.e. >~3GB support for the buffer cache) nowadays or planned soon?

2016-02-13 Thread Tinker

On 2016-02-14 03:39, Stuart Henderson wrote:

On 2016-02-13, Tinker  wrote:

[...]

There was this commit, I don't *think* it got reverted.

[...]

Indeed it's in the box, e.g. 
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/kern/vfs_bio.c etc.


So >32bit/>~3GB support has been in the box since OpenBSD 5.4.

Awesome, thank you so much for clarifying!

Tinker