make update not working

2013-03-15 Thread Martijn van Duren
Hello misc,

I'm currently trying to update my system after a cvs -q up -P. In some
directories, when I type make update, the process exits immediately
with exit code 0.
Even adding FORCE_UPDATE=Yes doesn't do anything.

The directories are out of date, as pointed out to me by
infrastructure/bin/out-of-date.

I also tried emptying my pobj directory.

I've encountered this problem before, in a different directory that
doesn't have the problem right now, so I can't find out a pattern, and a
full checkout of the ports tree did solve my problem last time. However,
that seems a bit excessive to do every time make update fails to
cooperate.

Doing a normal make (deinstall/install) does work.

Is there some flag/lockfile/etc that I'm missing that causes make update
to fail immediately?

Sincerely,

Martijn



Re: PHP mini_sendmail problems

2013-03-15 Thread Stuart Henderson
femail-chroot will already be installed; the PHP packages have it
as a run dependency.

On 2013-03-14, Richard Toohey richardtoo...@paradise.net.nz wrote:
 Also check out /usr/local/share/doc/pkg-readmes/femail-chroot-0.98p2

Yes, this is important; it covers the two most common errors.



Re: renaming name of interfaces

2013-03-15 Thread Stuart Henderson
On 2013-03-15, Lars Hansson romaby...@gmail.com wrote:
 On Thu, Mar 14, 2013 at 10:22 PM, Jiri B ji...@devio.us wrote:

 I'm aware of both. So what is this renaming of ifaces good
 for?


 On Windows it has its advantages because by default you get stupid and
 unhelpful names like Local Area Connection X.
 It's pretty nice to be able to rename it to something useful like Internal
 NIC.

This is more like setting 'descr', the difference is that Windows
usually hides the real name it uses for the interface.



Re: make update not working

2013-03-15 Thread Stuart Henderson
On 2013-03-15, Martijn van Duren martijn...@gmail.com wrote:
 Hello misc,

 I'm currently trying to update my system, after a cvs -q up -P. In some
 folders, when I type make update, the process exits immediately with an
 error code 0.
 Even adding a FORCE_UPDATE=Yes to it doesn't do anything.

make update only works in the simplest cases; where subpackages or
flavours have changed it can easily become confused. As with all things
in the ports tree (as opposed to packages), it is more of a developer
convenience than something which is expected to work at all times.

Best bet is to clean the package directory of any old junk before
you start building new packages, then make package and
PKG_PATH=/usr/ports/packages/`arch -s`/all sudo pkg_add -u.
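Spelled out as a command sketch (the category/port path is a
placeholder, and /usr/ports/packages assumes the default
PACKAGE_REPOSITORY):

```
# clean the package directory of any old junk first
rm -f /usr/ports/packages/$(arch -s)/all/*.tgz   # or prune selectively
# build a fresh package for the updated port
cd /usr/ports/category/port                      # placeholder path
make package
# then update the installed package from the freshly built one
PKG_PATH=/usr/ports/packages/$(arch -s)/all sudo pkg_add -u
```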

Alternatively dpb -u is likely to work better than make update,
though still not totally reliable (the best method is generally to
use dpb on a clean machine/chroot, then point pkg_add -u at the
new packages).



Re: two questions about packages, library and module

2013-03-15 Thread Stuart Henderson
On 2013-03-14, Sean Shoufu Luo luosho...@gmail.com wrote:
 2. How can I build a package from source code tree?

Unlike Linux distributions, but like many other OSes, the base OS
is *not* kept in packages at all; it is a set of .tgz files
which are updated together, outside of the package system.



Re: PHP mini_sendmail problems

2013-03-15 Thread Benny Lofgren
On 2013-03-14 21:02, John Tate wrote:
 It seems to be a problem with drupal, I wrote my own php script that could
 send mail without issues. I have no idea how such a problem is possible
 unless drupal doesn't use php's mail() but I can't find anyone with similar
 problems.
 
 I didn't notice the log entries because they don't have a timestamp and I
 thought they were just wrap around when I first posted here.
 
 Sorry for wasting everyone's time.

I had that exact problem a while ago, with chrooted httpd, PHP,
mini_sendmail and Drupal.

Turned out Drupal messes up the arguments, which made mini_sendmail
unhappy.

I can't remember exactly what the problem was now, but I solved it by
renaming /bin/mini_sendmail to /bin/mini_sendmail.bin and replacing it
with a shell script that corrected the arguments and exec'd
/bin/mini_sendmail.bin.
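A minimal sketch of that wrapper idea. The flag being stripped ("-i")
is purely an assumed example of an argument the real binary rejects;
adjust it to whatever Drupal actually passes on your setup.

```shell
#!/bin/sh
# Sketch: rename the real binary to /bin/mini_sendmail.bin and install
# a script like this as /bin/mini_sendmail.  fix_args rebuilds the
# argument list without the offending flag ("-i" is an assumption).
fix_args() {
    for a in "$@"; do
        shift
        [ "$a" = "-i" ] && continue   # drop the flag the binary rejects
        set -- "$@" "$a"
    done
    printf '%s\n' "$@"                # the real wrapper would instead:
    # exec /bin/mini_sendmail.bin "$@"
}
fix_args -t -i recipient
```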

/B


 On Fri, Mar 15, 2013 at 6:57 AM, Pascal Stumpf pascal.stu...@cubes.dewrote:
 
 On Thu, 14 Mar 2013 20:12:52 +0100, Stefan Sperling wrote:
 On Thu, Mar 14, 2013 at 06:51:54PM +, Alexey E. Suslikov wrote:
 John Tate john at johntate.org writes:


 I've been trying to get PHP to be able to email from a chrooted
 apache
 server. Running without chroot is not an option. I can't find clear
 documentation on doing this, and the logs don't contain any errors I
 can
 find about the problem.

 you need femail from ports.

 More precisely, the femail-chroot package.

 And you need /usr/libexec/ld.so inside of the /var/www chroot dir.

 Not any more.  -static now implies -nopie when linking.

 Else, femail won't run inside chroot (on 5.3, not sure if 5.2 requires
 this).

 
 
 

-- 
internetlabbet.se / work:   +46 8 551 124 80  / Words must
Benny Lofgren/  mobile: +46 70 718 11 90 /   be weighed,
/   fax:+46 8 551 124 89/not counted.
   /email:  benny -at- internetlabbet.se



Squid not working for connections from ssh-tunnel

2013-03-15 Thread John Tate
I have a server I use to serve a squid proxy only accessible via ssh
tunnel, which has worked fine for over a year. I upgraded from OpenBSD 5.1
to OpenBSD 5.2 and I've also rebuilt squid in ports. It has stopped working
for ssh tunnel connections. It still works from the elinks browser, but
both connections should come from localhost and so should be no
different as far as I know.

I get these errors in the log:
[15/Mar/2013:04:01:40 -0700] elijah.secusrvr.com mail.google.com CONNECT
mail.google.com:443 HTTP/1.1 403 1323 - Mozilla/5.0 (X11; Linux x86_64)
AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22
TCP_DENIED:NONE

My squid.conf:
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl purge method PURGE
acl CONNECT method CONNECT
acl Safe_ports port 21 80
acl SSL_ports port 443
cache_mem 256 MB
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl lan src 127.0.0.1
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname secusrvr.com
coredump_dir /var/squid
http_port 127.0.0.1:3128
https_port 127.0.0.1:3128 cert=/etc/ssl/private/secusrvr.com.crt key=/etc/ssl/private/server.key
logformat combined [%tl] %A %{Host}h %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h %Ss:%Sh
access_log /var/squid/logs/access.log combined
cache_store_log /var/squid/logs/store.log
cache_log  /var/squid/logs/cache.log
logfile_rotate 8
cache_dir ufs /var/squid/cache 4096 64 256

I tried googling the error and looking in the manual but still don't fully
understand it.
-- 
www.johntate.org



Re: make update not working

2013-03-15 Thread James Griffin
[- Fri 15.Mar'13 at  8:41:03 +0100  Martijn van Duren :-]

 Hello misc,
 
 I'm currently trying to update my system, after a cvs -q up -P. In some

Shouldn't that be `cvs -q up -Pd` 

You missed the -d switch.

I just checked again on http://www.openbsd.org/anoncvs.html and the
update command for cvs(1) is:

cvs -q up -Pd


Perhaps this might make a difference? 

-- 
James Griffin:  jmz at kontrol.kode5.net 
jmzgriffin at gmail.com

A4B9 E875 A18C 6E11 F46D  B788 BEE6 1251 1D31 DC38



Re: make update not working

2013-03-15 Thread Martijn van Duren
On Fri, 2013-03-15 at 10:27 +, Stuart Henderson wrote:
 On 2013-03-15, Martijn van Duren martijn...@gmail.com wrote:
  Hello misc,
 
  I'm currently trying to update my system, after a cvs -q up -P. In some
  folders, when I type make update, the process exits immediately with an
  error code 0.
  Even adding a FORCE_UPDATE=Yes to it doesn't do anything.
 
 make update only works in the most simple cases; where subpackages or
 flavours have changed it can easily become confused. As with all things
 in the ports tree (as opposed to packages) it is more of a developer
 convenience rather than something which is expected to work at all times.
 
 Best bet is to clean the package directory of any old junk before
 you start building new packages, then make package and
 PKG_PATH=/usr/ports/packages/`arch -s`/all sudo pkg_add -u.

I also get the same problem when doing a make package, even with a
make clean and make distclean beforehand.
# make distclean
===  Cleaning for help2man-1.41.1
===  Dist cleaning for help2man-1.41.1
# make package
# echo $?
0

I did some small testing: every step (from ports(7)) from make
fetch to make build works, but make package, make install and make update
exit immediately with the package installed. After I've deinstalled the
package I can install it again via make install.

 
 Alternatively dpb -u is likely to work better than make update,
 though still not totally reliable (the best method is generally to
 use dpb on a clean machine/chroot, then point pkg_add -u at the
 new packages).

What I noticed is that this command builds all the packages, which I
find a waste of CPU time, and I can't even spare the disk space on this
machine. I also couldn't find a switch to select only the ports I want
to upgrade. So I guess that dpb isn't suitable for my purpose.

Sincerely,

Martijn



Re: make update not working

2013-03-15 Thread Marc Espie
On Fri, Mar 15, 2013 at 12:28:22PM +0100, Martijn van Duren wrote:
 On Fri, 2013-03-15 at 10:27 +, Stuart Henderson wrote:
  On 2013-03-15, Martijn van Duren martijn...@gmail.com wrote:
   Hello misc,
  
   I'm currently trying to update my system, after a cvs -q up -P. In some
   folders, when I type make update, the process exits immediately with an
   error code 0.
   Even adding a FORCE_UPDATE=Yes to it doesn't do anything.
  
  make update only works in the most simple cases; where subpackages or
  flavours have changed it can easily become confused. As with all things
  in the ports tree (as opposed to packages) it is more of a developer
  convenience rather than something which is expected to work at all times.
  
  Best bet is to clean the package directory of any old junk before
  you start building new packages, then make package and
  PKG_PATH=/usr/ports/packages/`arch -s`/all sudo pkg_add -u.
 
 I also get the same problem when doing doing a make package. Even with a
 make clean and make distclean beforehand.
 # make distclean
 ===  Cleaning for help2man-1.41.1
 ===  Dist cleaning for help2man-1.41.1
 # make package
 # echo $?
 0

Dude, obviously you already have the older package around.
Neither make clean nor make distclean will clean it.

Also, RTFM: bsd.port.mk(5) has extensive documentation about clean.
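The relevant knob, if memory of bsd.port.mk(5) serves, is the clean
variable, which takes modifiers saying what to clean:

```
# remove previously built packages for this port (see bsd.port.mk(5))
make clean=packages
# several modifiers can be combined:
make clean="work packages"
```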

And dpb -R is more likely to work, for a reasonable value of work.



Using hostnames in pf rules

2013-03-15 Thread Gilles LAMIRAL
Hello,

I need to use a hostname in a pf rule to allow a connection.
The hostname is needed because the resolution is dynamic;
it can change at any minute (TTL 60).

Is there a flag to tell pf to resolve the name each time it tries to
match this part?
The domain name server is trusted and close/fast to the pf host, and
the rules are written so that this rule is not read often, so there's
no problem if pf slows down because of name resolution times.
I've seen I can do it with an anchor and a script flushing/adding the
hostname each minute or less; I'm asking if there's a less complicated
way that is easier to understand when reading pf.conf.

Thanks in advance.

-- 
Au revoir, 09 51 84 42 42
Gilles Lamiral. France, Baulon (35580) 06 20 79 76 06 



Re: Using hostnames in pf rules

2013-03-15 Thread Janne Johansson
make a table, and have cron update the contents of this table with the
result of the latest resolved ip.

2013/3/15 Gilles LAMIRAL gilles.lami...@laposte.net

 Hello,

 I need to use an hostname in a pf rule to allow a connection.
 The hostname is needed because the resolution is dynamic,
 it can change at any minute (TTL 60).

 Is there a flag to tell pf to resolve the name each time it tries to match
 this part?
 The domain name server is trusted and near/fast the pf host,
 The rules are written so that this rule is not read often.
 There's no no problem if pf slows down because of name resolution times.
 I've seen I can do it with an anchor and a script flushing/adding the
 hostname each minute or less,
 I ask if there's a way less complicated and more understandable (reading
 pf.conf).

 Thanks in advance.

 --
 Au revoir, 09 51 84 42 42
 Gilles Lamiral. France, Baulon (35580) 06 20 79 76 06




-- 
May the most significant bit of your life be positive.



Re: Using hostnames in pf rules

2013-03-15 Thread Peter N. M. Hansteen
On Fri, Mar 15, 2013 at 11:16:53AM +0100, Gilles LAMIRAL wrote:
 I need to use an hostname in a pf rule to allow a connection.
 The hostname is needed because the resolution is dynamic, 
 it can change at any minute (TTL 60).

host names in pf.conf and friends are resolved at load time, so it's
either reload pf.conf fairly often (a cron job comes to mind), or make
the rule refer to a table that will only ever contain the freshly
resolved IP address for that hostname, and let a sufficiently frequent
job (cron or otherwise) update the table with whatever the hostname
currently resolves to.

 I've seen I can do it with an anchor and a script flushing/adding the 
 hostname each minute or less,
 I ask if there's a way less complicated and more understandable (reading 
 pf.conf).

an anchor would work too, so you may have a workable solution there already.

All the best,
Peter

-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
Remember to set the evil bit on all malicious network traffic
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.



Re: PHP mini_sendmail problems

2013-03-15 Thread Stuart Henderson
On 2013-03-14, John Tate j...@johntate.org wrote:
 It seems to be a problem with drupal, I wrote my own php script that could
 send mail without issues. I have no idea how such a problem is possible
 unless drupal doesn't use php's mail() but I can't find anyone with similar
 problems.

ah - always good to mention the exact thing that isn't working rather
than trying to simplify.

for drupal6 I used phpmailer, which talks SMTP directly thus avoiding
the need to have /bin/sh in the jail. many PHP things have some way to
do this; mail() isn't always that reliable in hosting environments.

there is a version of phpmailer for drupal7 but only a development
release so far (also I haven't used d7 yet so haven't looked at adding
it to ports).



Re: HEllo - static rthread bug

2013-03-15 Thread sven falempin
On Thu, Mar 14, 2013 at 1:21 PM, Ted Unangst t...@tedunangst.com wrote:

 On Tue, Mar 05, 2013 at 22:18, Ted Unangst wrote:
  On Mon, Mar 04, 2013 at 17:46, sven falempin wrote:
  Dear misc readers,
 
  I have a home small c++ program, i used it for a while with no fuss and
 use
  the -static on my command line.
 
  Today i correct a 'feature' in the program (like deep inside), pass unit
  test and then rebuilt the all just
  like before on openbsd 5.2
  and now it crash before the main routine in pthread_self (librthread)

 Update: The current status of this is that static linking isn't
 supported with threads. It's complicated.

 Every OpenBSD machine you're going to run on is going to have
 libthread. And if they have a different version of the library, they
 have a different version of the kernel. We've been pretty aggressive
 about reworking some of the thread system calls, so even if you did
 statically link, the resulting binary wouldn't run with a different
 kernel.



I failed to build with a snapshot (a week ago) and then got busy.
Thanks for the update!
I will not use -static and will instead link the C++ with object
archives (.a).


-- 
-
() ascii ribbon campaign - against html e-mail
/\



Re: Using hostnames in pf rules

2013-03-15 Thread Stuart Henderson
 2013/3/15 Gilles LAMIRAL gilles.lami...@laposte.net
 Is there a flag to tell pf to resolve the name each time it tries to match
 this part?

This would mean having a DNS resolver in the kernel; not going to happen.


On 2013-03-15, Janne Johansson icepic...@gmail.com wrote:
 make a table, and have cron update the contents of this table with the
 result of the latest resolved ip.

Yes, this is simpler than using an anchor and a script.
Simple one-liner in crontab should do:

pfctl -t tablename -Tr hostname
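Putting the table and the cron job together, a minimal sketch (the
table name, hostname and pass rule are placeholders; adapt the rule to
the real policy):

```
# pf.conf: declare a table and reference it in the rule
table <dynhost>
pass out to <dynhost>          # illustrative rule only

# root's crontab: replace the table contents every minute with
# whatever the name currently resolves to (-Tr = replace by resolving)
* * * * * pfctl -t dynhost -Tr dyn.example.org
```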



Re: Squid not working for connections from ssh-tunnel

2013-03-15 Thread Stuart Henderson
On 2013-03-15, John Tate j...@johntate.org wrote:
 I have a server I use to serve a squid proxy only accessible via ssh
 tunnel, which has worked fine for over a year. I upgraded from OpenBSD 5.1
 to OpenBSD 5.2 and I've also rebuilt squid in ports. It has stopped working
 for ssh tunnel connections. It works for the elinks browser, but both
 should be from localhost and be no different as far as I know.

 I get these errors in the log:
 [15/Mar/2013:04:01:40 -0700] elijah.secusrvr.com mail.google.com CONNECT
 mail.google.com:443 HTTP/1.1 403 1323 - Mozilla/5.0 (X11; Linux x86_64)
 AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22
 TCP_DENIED:NONE


IIRC TCP_DENIED/403 is due to an acl; try following this to get
some more logging:

http://wiki.squid-cache.org/SquidFaq/SquidAcl#I_set_up_my_access_controls.2C_but_they_don.27t_work.21__why.3F

localhost can be all sorts of things: 127.0.0.1, ::1, or even some
other address, depending on what's set in /etc/resolv.conf and /etc/hosts.
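If it is an acl mismatch, squid's per-section debugging can show which
rule fires. A sketch (28 should be the access-control debug section,
and ::1 in an src acl needs an IPv6-capable squid; verify both against
your version):

```
# squid.conf fragment: raise verbosity for access control only
debug_options ALL,1 28,3

# and make sure the localhost acl covers whatever address the
# tunnelled connections actually arrive from, e.g. IPv6 loopback:
acl localhost src 127.0.0.1/32 ::1
```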



built-in http and BEAST attack(PCI compliance)

2013-03-15 Thread Steve Pribyl
Good Evening,

I have recently come to support an OpenBSD e-commerce site that has to
pass PCI DSS compliance.  It currently fails the BEAST attack scan
because the server responds with vulnerable ciphers.  I am looking for
suggestions on remediating the problem.

Neither of these seems to actually turn off the bad ciphers.

SSLHonorCipherOrder On
SSLCipherSuite RC4-SHA:HIGH:!ADH

SSLHonorCipherOrder On
SSLCipherSuite 
ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH

If there is no real problem I can accept that, but I will need some
real statement so I can apply for an exemption.

Thanks
Steve



Re: built-in http and BEAST attack(PCI compliance)

2013-03-15 Thread Philip Guenther
On Fri, Mar 15, 2013 at 6:42 PM, Steve Pribyl spri...@viaforensics.com wrote:
 I have recently come to support a OpenBSD e-commerce site have to pass PCI 
 DSS compliance.  It currently
 fails the BEAST attack scan because the server responds with vulnerable 
 ciphers.  I am looking for suggestions
 on remediating the problem.

1) TLS CBC cipher suites are subject to BEAST and Lucky 13 attacks
2) TLS RC4 cipher suites are subject to an initial bias attack** and
use only 128 bits of key
3) the commonly deployed TLS 1.0 cipher suites use either CBC or RC4
4) TLS 1.1 and 1.2 client deployment may be insufficient to support
your customer base

So, which one will you bite the bullet on?

Personally, if I were managing a publicly facing secure web server, I
would pick (1), sneer at the BEAST and Lucky 13 attacks, and just
offer the 3DES and AES256 cipher suites.
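For option (1), a cipher string along these lines might express "only
3DES and AES256" (untested; verify what it expands to with openssl
ciphers -v before relying on it):

```
SSLHonorCipherOrder On
SSLCipherSuite DES-CBC3-SHA:AES256-SHA:!aNULL:!MD5

# check what that string actually expands to:
openssl ciphers -v 'DES-CBC3-SHA:AES256-SHA:!aNULL:!MD5'
```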

** c.f. 
http://www.forbes.com/sites/andygreenberg/2013/03/13/cryptographers-show-mathematically-crackable-flaws-in-common-web-encryption/
for example


 Neither of these seem to actually turnoff the bad ciphers.

 SSLHonorCipherOrder On
 SSLCipherSuite RC4-SHA:HIGH:!ADH

If you want to never use a cipher suite, you need to never add it to
the list (which you do via HIGH) or remove it completely via the '!'
operator and not add it back afterwards.

To test your attempts to get it to what you want, use the openssl
ciphers -v command, e.g.:
  openssl ciphers -v RC4-SHA:HIGH:!ADH


 SSLHonorCipherOrder On
 SSLCipherSuite 
 ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH

You explicitly list ECDHE-RSA-AES128-SHA256, which is a CBC cipher,
*first*!   What were you intending when you did that?

And then, of course, HIGH pulls in all the generic AES and 3DES
ciphers.  What were you intending when you included that?


 If there is not real problem I can accept that but I will need some real 
 statement so I can apply for an exemption.

cf. (1) to (4) above and make your choice.


Philip Guenther