Re: relayd bypass SSL interception for URL

2015-06-16 Thread Felipe Scarel
Does anyone have a working Squid peek-and-splice config I could test with
(preferably with selective splicing based on an SNI lookup)?
I'm having trouble finding clear examples, and stage2 bumping is prompting
certificate errors.
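
To be clear, the kind of config I have in mind is sketched below (hostnames
and paths are placeholders, the sslcrtd cert-generation helper lines are
omitted, and this is untested on OpenBSD):

  # step 1: peek at the ClientHello so the SNI is available
  acl step1 at_step SslBump1
  # servers that must never be bumped
  acl nobump ssl::server_name .mybank.example

  https_port 3130 intercept ssl-bump cert=/etc/squid/ca.pem generate-host-certificates=on

  ssl_bump peek step1
  ssl_bump splice nobump
  ssl_bump bump all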

Thanks in advance,
fbscarel

On Tue, Mar 10, 2015 at 5:00 PM, Felipe Scarel fbsca...@gmail.com wrote:

 On Mon, Mar 9, 2015 at 12:03 PM, Stuart Henderson s...@spacehopper.org
 wrote:
  On 2015-03-06, Felipe Scarel fbsca...@gmail.com wrote:
  Hello all,
 
  I'm currently using relayd as a forward proxy, selectively blocking
  HTTP and HTTPS requests while doing MitM inspection (as per
  http://www.reykfloeter.com/post/41814177050/relayd-ssl-interception).
 
  To allow certain domains to go through the SSL proxy, a simple 'pass
  quick url file' is sufficient, and works. However, this option does
  not prevent the MitM operation from relayd; the request is simply
  allowed through, and the original certificate is still 'patched' by
  the local CA. The configuration is shown below:
 
  http protocol httpsfilter {
tcp { nodelay, sack, socket buffer 65536, backlog 1024 }
return error
 
match header set Keep-Alive value $TIMEOUT
  match header set Connection value close
 
pass quick url file /etc/relayd.d/custom_whitelist
block url file /etc/relayd.d/custom_blacklist
include /etc/relayd.d/auto_blacklist
 
ssl ca key  /etc/ssl/private/ca.key password password
ssl ca cert /etc/ssl/ca.crt
  }
 
  relay httpsproxy {
listen on 127.0.0.1 port 8443 ssl
protocol httpsfilter
forward with ssl to destination
  }
 
  This is a problem for a few sites (especially banking websites) that
  absolutely demand that the original certificate is not tampered with in any
  way. I'm currently solving the problem with pf passthrough rules
  (allowing traffic directly to destination on a per-IP basis), which is
  far from an ideal solution as covered previously in
 
 http://openbsd.7691.n7.nabble.com/DNS-lookups-for-hostnames-in-PF-tables-td69546.html
  (scenarios like round robin DNS, CDNs providing content for multiple
  organizations, etc.)
 
  So, my question is: Is there a way to completely bypass SSL
  interception for a given URL file?
 
  Thanks in advance,
  fbscarel
 
 
 
  relayd doesn't have much information available at the point where it
  decides whether to pick up the request. Specifically it just has IP
  addresses. It can't tell the URL or even the domain name of the request
  to be able to identify the destination.
 
  The domain name *is* available before a full SSL negotiation, at least
  for connections from non-ancient browsers, but it requires opening at
  least the client-side of the connection, and reading the name from the
  ClientHello (this is the first packet sent by the client; server name is
  provided unencrypted by SNI).
 
  It is technically possible to use this information as part of a decision
  process, but it's much more complicated - you first need to identify
  whether interception is wanted, and then either replay the ClientHello
  (and afterwards forward packets directly to the server), or do the
  cert generation/MITM as usual.
 
  relayd doesn't support this yet.
 
  Recent versions of Squid (3.5.x) do; the feature is called "peek and
  splice", but I haven't tested it with OpenBSD yet. (Squid's normal
  SSL interception does work, at least in OpenBSD -current). Even then,
  the most you will be able to do is look at the domain name; the URL
  is not available until *after* the SSL handshake, at which point it
  is too late to make the decision whether to spoof the cert or not.
 

 The domain name would do; I'll try testing with Squid.
 Thanks for the input, Stuart.



Re: Dual-NSD setup management

2015-05-27 Thread Felipe Scarel
Thanks for the input, Stuart and Bryan; I think the dual-authoritative
setup might indeed be overkill.
I'll look into the unbound local-data options; I hadn't considered that.

On Wed, May 27, 2015 at 3:10 PM, Bryan Irvine sparcta...@gmail.com wrote:
 In addition to all this good advice, you can create multiple loopback
 interfaces if you do want to use divert-to. 'ifconfig lo1 create', and then
 you don't need to use weird ports to accomplish things.
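
 Something along these lines, for example (untested; names and addresses are
 just placeholders, and nsd would then listen on 127.0.0.2):

   # ifconfig lo1 create
   # ifconfig lo1 inet 127.0.0.2/32

   pass in quick on $int_if inet proto { tcp, udp } from $internal_nets \
       to any port domain divert-to 127.0.0.2 port 53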

 On Wed, May 27, 2015 at 4:06 AM, Stuart Henderson s...@spacehopper.org
 wrote:

 On 2015-05-26, Felipe Scarel fbsca...@gmail.com wrote:
  after reading some documentation on the NSD manpage and online, it
  seems there's no support for views as offered with BIND. I've gathered
  that the general suggestion is to run two separate instances (running
  on 127.0.0.1, for example), and divert traffic from pf depending on
  the connecting source-address.

 What are you using views *for*?

 If it's to present some internal-only hosts to a trusted network that
 is also using you as a resolver, just use local-data entries in unbound
 for internal use, and run NSD facing external hosts. Simple setup and
 fairly easy to use.
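
 A minimal unbound.conf sketch of what I mean (names and addresses are just
 examples):

   server:
       interface: 192.0.2.1
       access-control: 192.0.2.0/24 allow
       local-zone: "internal.example.com." static
       local-data: "wiki.internal.example.com. IN A 192.0.2.10"
       local-data-ptr: "192.0.2.10 wiki.internal.example.com"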

 If it's something more complex (i.e. where you have other resolvers
 querying you and need to present different views to these based on IP
 address etc) then yes you will need two separate authoritative servers
 (or you could keep using BIND for this job of course).



Dual-NSD setup management

2015-05-26 Thread Felipe Scarel
Hello all,

after reading some documentation on the NSD manpage and online, it
seems there's no support for views as offered with BIND. I've gathered
that the general suggestion is to run two separate instances (running
on 127.0.0.1, for example), and divert traffic from pf depending on
the connecting source-address.

I've successfully configured such a setup using two NSD servers,
listening on ports 53 and 8053, and using pf rdr-to and nat-to rules
to divert traffic. I tried to use divert-to instead, but for the life
of me I couldn't figure out why it wasn't working. This is what I'm
using right now:

pass in quick inet proto { tcp, udp } from { internal_networks } \
  to any port domain rdr-to localhost port 53
pass out quick inet proto { tcp, udp } from { internal_networks } \
  to any port domain nat-to self

pass in quick inet proto { tcp, udp } from any \
  to any port domain rdr-to localhost port 8053
pass out quick inet proto { tcp, udp } from any \
  to any port domain nat-to self

Management of this setup during boot is not so great, though. The
/etc/rc.d/nsd script more or less expects the configuration to reside
on /var/nsd/etc, so my best solution was to use nsd-control directly
from /etc/rc.local, which somewhat solves the problem (albeit not very
elegantly).
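
One idea I haven't tried yet is a second rc.d script pointing at a separate
config file, roughly like the sketch below (paths are just examples, and I'm
not sure how kosher it is to run a copy of a base daemon's script this way):

  #!/bin/ksh
  #
  # /etc/rc.d/nsd_internal -- untested copy of /etc/rc.d/nsd, second instance
  daemon="/usr/sbin/nsd"
  daemon_flags="-c /var/nsd/etc/nsd_internal.conf"

  . /etc/rc.d/rc.subr

  rc_reload=NO

  rc_cmd $1

It could then be started alongside the stock nsd, either via pkg_scripts in
rc.conf.local or simply from rc.local as I'm doing today.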

Perhaps someone has additional experiences to share on this kind of
setup. Is it possible to use divert-to on pf? What would be the
preferred method to manage two NSD daemons during boot?



Missing FAQ 10.16 section

2015-05-20 Thread Felipe Scarel
Hello all,

I was just reviewing the femail-chroot-1.0p0 post-install README, which reads:

# cat /usr/local/share/doc/pkg-readmes/femail-chroot-1.0p0 | grep 'By default' -A2
By default, femail will use `localhost' for smtphost.  Make sure to
review FAQ Section 10.16 discussing name resolution with httpd(8)'s
default chroot(2).

Section 10.16 seems missing from the OpenBSD FAQ, though.
http://www.openbsd.org/faq/faq10.html goes from section 10.15 straight to 10.17.
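
If I recall correctly, the gist of that section was just copying the
resolver files into httpd's chroot, something like:

  # mkdir -p /var/www/etc
  # cp -p /etc/resolv.conf /etc/hosts /var/www/etc/

but it would be nice to have the FAQ reference fixed either way.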

Regards,
fbscarel



Re: Missing FAQ 10.16 section

2015-05-20 Thread Felipe Scarel
On Wed, May 20, 2015 at 2:11 PM, Gleydson Soares gsoa...@gmail.com wrote:

 Felipe Scarel fbsca...@gmail.com writes:

 Hello all,

 I was just reviewing the femail-chroot-1.0p0 post-install README, which 
 reads:

 # cat /usr/local/share/doc/pkg-readmes/femail-chroot-1.0p0 | grep 'By default' -A2
 By default, femail will use `localhost' for smtphost.  Make sure to
 review FAQ Section 10.16 discussing name resolution with httpd(8)'s
 default chroot(2).

 Section 10.16 seems missing from the OpenBSD FAQ, though.
 http://www.openbsd.org/faq/faq10.html goes from section 10.15 straight to 
 10.17.

 Regards,
 fbscarel

 It seems this FAQ section got removed when somebody was in a rush to
 delete Apache.

 I have been sent a diff that fixes the femail bits. It is now waiting for
 okan@'s (maintainer) ok/review.

 Thanks,
 gsoares

Thanks Josh/Gleydson. Glad I could help.



Re: relayd crashes often

2015-03-26 Thread Felipe Scarel
On Thu, Mar 26, 2015 at 12:37 AM, Yonas Yanfa yo...@fizk.net wrote:
 On 15-03-24 03:26 AM, Claudio Jeker wrote:
 On Mon, Mar 23, 2015 at 11:54:41PM -0400, Yonas Yanfa wrote:
 Hi,

 I'm running relayd/OpenBSD 5.6-stable on a KVM virtual machine. relayd
 always crashes within a few hours of restarting it, but works properly
 before that.

 I guess you are talking about reloading relayd (as in relayctl reload)...


 Killing all relayd processes and then running relayd.


  When relayd stops working, sometimes the relayd process is up but
  `relayctl show summary` says that /var/run/relayd.sock doesn't exist.
  Other times none of the relayd processes are running.


 I hit similar issues and came up with the following diff against -current.
 It may apply to 5.6 but did not test that at all. I'm not 100% sure about
 the ca.c change since OpenSSL is a black box.


 Thanks for the patches.

 Before I try to apply the patches, I think the issue might be caused by
 having too many CLOSE_WAIT connections. I seem to have 2,236 CLOSE_WAIT
 connections:


 $ netstat -n|grep CLOSE_WAIT|wc -l
 2236

 And relayd seems to have 501 active connections:

 relay www, session 1806 (501 active), 0, xxx.xxx.xxx.xxx - :0, hard timeout


 How can I get relayd to close these connections?


 Cheers,
 Yonas


I can confirm this has also been observed on my end, using relayd as a
forward ssl-inspecting proxy on amd64 hardware. It runs without issue for
a few hours; the next time I look, all the (ca|hce|pfe) processes are gone
and only the relays and the parent process remain. Killing all of them and
restarting the daemon solves the problem.



Re: Set PKG_PATH using Time Zone?

2015-03-26 Thread Felipe Scarel
Routing from certain countries can also be funny sometimes (for
example, I'm pretty sure users in Peru would get better speeds
downloading from US servers rather than from Brazil, despite the
geographical proximity).

On Thu, Mar 26, 2015 at 4:18 PM, Joshua Smith jsm...@mail.wvnet.edu wrote:
 On Thu, Mar 26, 2015 at 06:55:50PM +, L.R. D.S. wrote:
 It is really boring to write out the package repository every time we install.
 Why not set the repository using the time zone as a reference?
 For example, if you set Japan as your zone, then run
 export PKG_PATH=http://www.ftp.ne.jp/OpenBSD/`uname -r`/packages/`uname -m`/

 What about regions which contain multiple mirrors?

 --
 Joshua Smith

 Montani Semper Liberi



Re: relayd bypass SSL interception for URL

2015-03-12 Thread Felipe Scarel
On Mon, Mar 9, 2015 at 12:03 PM, Stuart Henderson s...@spacehopper.org wrote:
 On 2015-03-06, Felipe Scarel fbsca...@gmail.com wrote:
 Hello all,

 I'm currently using relayd as a forward proxy, selectively blocking
 HTTP and HTTPS requests while doing MitM inspection (as per
 http://www.reykfloeter.com/post/41814177050/relayd-ssl-interception).

 To allow certain domains to go through the SSL proxy, a simple 'pass
 quick url file' is sufficient, and works. However, this option does
 not prevent the MitM operation from relayd; the request is simply
 allowed through, and the original certificate is still 'patched' by
 the local CA. The configuration is shown below:

 http protocol httpsfilter {
   tcp { nodelay, sack, socket buffer 65536, backlog 1024 }
   return error

   match header set Keep-Alive value $TIMEOUT
   match header set Connection value close

   pass quick url file /etc/relayd.d/custom_whitelist
   block url file /etc/relayd.d/custom_blacklist
   include /etc/relayd.d/auto_blacklist

   ssl ca key  /etc/ssl/private/ca.key password password
   ssl ca cert /etc/ssl/ca.crt
 }

 relay httpsproxy {
   listen on 127.0.0.1 port 8443 ssl
   protocol httpsfilter
   forward with ssl to destination
 }

 This is a problem for a few sites (especially banking websites) that
 absolutely demand that the original certificate is not tampered with in any
 way. I'm currently solving the problem with pf passthrough rules
 (allowing traffic directly to destination on a per-IP basis), which is
 far from an ideal solution as covered previously in
 http://openbsd.7691.n7.nabble.com/DNS-lookups-for-hostnames-in-PF-tables-td69546.html
 (scenarios like round robin DNS, CDNs providing content for multiple
 organizations, etc.)

 So, my question is: Is there a way to completely bypass SSL
 interception for a given URL file?

 Thanks in advance,
 fbscarel



 relayd doesn't have much information available at the point where it
 decides whether to pick up the request. Specifically it just has IP
 addresses. It can't tell the URL or even the domain name of the request
 to be able to identify the destination.

 The domain name *is* available before a full SSL negotiation, at least
 for connections from non-ancient browsers, but it requires opening at
 least the client-side of the connection, and reading the name from the
 ClientHello (this is the first packet sent by the client; server name is
 provided unencrypted by SNI).

 It is technically possible to use this information as part of a decision
 process, but it's much more complicated - you first need to identify
 whether interception is wanted, and then either replay the ClientHello
 (and afterwards forward packets directly to the server), or do the
 cert generation/MITM as usual.

 relayd doesn't support this yet.

 Recent versions of Squid (3.5.x) do; the feature is called "peek and
 splice", but I haven't tested it with OpenBSD yet. (Squid's normal
 SSL interception does work, at least in OpenBSD -current). Even then,
 the most you will be able to do is look at the domain name; the URL
 is not available until *after* the SSL handshake, at which point it
 is too late to make the decision whether to spoof the cert or not.


The domain name would do; I'll try testing with Squid.
Thanks for the input, Stuart.



Re: httpd + dokuwiki or mailman

2015-03-06 Thread Felipe Scarel
On Fri, Mar 6, 2015 at 9:37 AM, Felipe Scarel fbsca...@gmail.com wrote:

 On Thu, Mar 5, 2015 at 6:06 PM, agrquinonez agrquino...@agronomos.ca wrote:
 
  On 03/05/2015 12:14 PM, Michael wrote:
  I run dokuwiki on httpd with php-fpm.
 
   I did: cd /var/www/htdocs && ln -s ../../dokuwiki doku. The
   config in /etc/examples will work ok if you adjust the root
   directive. You will need to open the full name via your browser, as
   in my setup http://127.0.0.1/doku/doku.php, as you will get an error
   otherwise.
 
 
   Thanks for responding.

   Yes, but I do not have, and do not want, a browser on the server;
   so I tried /var/www/htdocs/dokuwiki/install.php with:
 
  ln -sf /var/www/htdocs/dokuwiki /var/www/dokuwiki
 
  and:
 
  location *.php {
  fastcgi socket /run/php-fpm.sock
  }
 
   After that, I tried from an external machine:
 
  www.my_server.org/dokuwiki/install.php
 
  What is wrong with it?
 
  Is it what you mean?
  # A name-based virtual server on the same address
  server dokuwiki {
  listen on $ext_addr port 80
  root /dokuwiki
  }
 

 Check out the root and fastcgi parameters for httpd on 5.7-beta. Also 
 check out directory index if you wish to serve a PHP index file.
 Of course, you also have to install php-fpm for PHP processing and check 
 directory permissions if the application needs write permissions to any files.


I forgot to mention, Reyk has a good tutorial for running OwnCloud
with httpd + php-fpm, here:
https://github.com/reyk/httpd/wiki/Running-ownCloud-with-httpd-on-OpenBSD

Much of what is there can be used as general instructions to run PHP
applications under httpd + php-fpm setups (in my case, Wordpress).



Re: httpd + dokuwiki or mailman

2015-03-06 Thread Felipe Scarel
On Thu, Mar 5, 2015 at 6:06 PM, agrquinonez agrquino...@agronomos.ca
wrote:

 On 03/05/2015 12:14 PM, Michael wrote:
 I run dokuwiki on httpd with php-fpm.

  I did: cd /var/www/htdocs && ln -s ../../dokuwiki doku. The
  config in /etc/examples will work ok if you adjust the root
  directive. You will need to open the full name via your browser, as
  in my setup http://127.0.0.1/doku/doku.php, as you will get an error
  otherwise.


  Thanks for responding.

  Yes, but I do not have, and do not want, a browser on the server;
  so I tried /var/www/htdocs/dokuwiki/install.php with:

 ln -sf /var/www/htdocs/dokuwiki /var/www/dokuwiki

 and:

 location *.php {
 fastcgi socket /run/php-fpm.sock
 }

  After that, I tried from an external machine:

 www.my_server.org/dokuwiki/install.php

 What is wrong with it?

 Is it what you mean?
 # A name-based virtual server on the same address
 server dokuwiki {
 listen on $ext_addr port 80
 root /dokuwiki
 }


Check out the root and fastcgi parameters for httpd on 5.7-beta. Also
check out directory index if you wish to serve a PHP index file.
Of course, you also have to install php-fpm for PHP processing and check
directory permissions if the application needs write permissions to any
files.
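
As a rough sketch of what I mean (untested with dokuwiki specifically, and
reusing the $ext_addr macro from your config):

  server "www.my_server.org" {
    listen on $ext_addr port 80
    root "/dokuwiki"
    directory index "doku.php"

    location "*.php*" {
      fastcgi socket "/run/php-fpm.sock"
    }
  }

Remember that root and the fastcgi socket path are both relative to the
/var/www chroot.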



relayd bypass SSL interception for URL

2015-03-06 Thread Felipe Scarel
Hello all,

I'm currently using relayd as a forward proxy, selectively blocking
HTTP and HTTPS requests while doing MitM inspection (as per
http://www.reykfloeter.com/post/41814177050/relayd-ssl-interception).

To allow certain domains to go through the SSL proxy, a simple 'pass
quick url file' is sufficient, and works. However, this option does
not prevent the MitM operation from relayd; the request is simply
allowed through, and the original certificate is still 'patched' by
the local CA. The configuration is shown below:

http protocol httpsfilter {
  tcp { nodelay, sack, socket buffer 65536, backlog 1024 }
  return error

  match header set Keep-Alive value $TIMEOUT
  match header set Connection value close

  pass quick url file /etc/relayd.d/custom_whitelist
  block url file /etc/relayd.d/custom_blacklist
  include /etc/relayd.d/auto_blacklist

  ssl ca key  /etc/ssl/private/ca.key password password
  ssl ca cert /etc/ssl/ca.crt
}

relay httpsproxy {
  listen on 127.0.0.1 port 8443 ssl
  protocol httpsfilter
  forward with ssl to destination
}
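
For context, client traffic reaches that relay through a pf divert rule
roughly like the one in Reyk's post (interface and macro names are
placeholders):

pass in quick on $int_if inet proto tcp from $clients to any port 443 \
  divert-to 127.0.0.1 port 8443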

This is a problem for a few sites (especially banking websites) that
absolutely demand that the original certificate is not tampered with in any
way. I'm currently solving the problem with pf passthrough rules
(allowing traffic directly to destination on a per-IP basis), which is
far from an ideal solution as covered previously in
http://openbsd.7691.n7.nabble.com/DNS-lookups-for-hostnames-in-PF-tables-td69546.html
(scenarios like round robin DNS, CDNs providing content for multiple
organizations, etc.)
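
The passthrough itself is just a hand-maintained table placed ahead of the
divert rule, something like the sketch below (the file name is only an
example; it holds the per-IP entries I mentioned):

table <ssl_bypass> persist file "/etc/relayd.d/ssl_bypass_ips"

# this rule has to come before the divert-to rule in pf.conf so it wins
pass in quick on $int_if inet proto tcp from $clients to <ssl_bypass> port 443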

So, my question is: Is there a way to completely bypass SSL
interception for a given URL file?

Thanks in advance,
fbscarel



Re: relayd memory usage when loading large URL lists

2015-03-04 Thread Felipe Scarel
On Wed, Mar 4, 2015 at 6:29 AM, Stuart Henderson s...@spacehopper.org wrote:
 On 2015-03-01, Felipe Scarel fbsca...@gmail.com wrote:
 Now loading the phishing/domains URL list, which has about ~63k
 entries. relayd's parent process balloons to over 2GB memory usage
 (I'm assuming it's reading the URL lists and building a data structure
 for the relays),

 Yes, it's building a red-black tree structure during startup.


Nice to know.

 So that's about ~520 MB of memory per relay process, out of 3 total.

 This is probably shared (fork does copy-on-write, so forked processes can
 just use the original memory unless they make changes to it). Try adjusting
 the prefork number and check the free memory with top(1) rather than the
 per-process memory with ps(1).
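
 prefork is a one-line global setting in relayd.conf; the default is 3,
 which matches the three relay processes in your ps output. For example:

   prefork 5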


Alright, I'll do that. In other news, Reyk replied to me via Twitter
saying that relayd is not optimized for large blacklists yet. I'll
keep using the current version for the time being, as ~100k URLs is
sufficient for my current demand.

Thanks for your help!



Re: relayd memory usage when loading large URL lists

2015-03-02 Thread Felipe Scarel
On Sun, Mar 1, 2015 at 4:45 PM, Felipe Scarel fbsca...@gmail.com wrote:
 Hello all,

 I'm implementing a simple SSL forward proxy using relayd.
 Configuration has been fine, as was testing. There seems to be one
 issue with memory consumption, however.

 To better illustrate my issue, here follows an excerpt of /etc/relayd.conf :

 http protocol httpsfilter {
   tcp { nodelay, sack, socket buffer 65536, backlog 1024 }
   return error

   match header set Keep-Alive value $TIMEOUT
   match header set Connection value close

   pass quick url file /etc/relayd.d/custom_whitelist
   block url file /etc/relayd.d/custom_blacklist
   include /etc/relayd.d/auto_blacklist

   ssl ca key  /etc/ssl/private/ca.key password password
   ssl ca cert /etc/ssl/ca.crt
 }

 So basically it checks against a custom whitelist, then a custom
 blacklist, and finally an auto blacklist (which is the main source
 of the problem). Using a few URLs with both custom black/white lists
 poses no issue, but when attempting to load a somewhat bigger URL list
 downloaded from the internet (I'm using
 ftp://ftp.ut-capitole.fr/pub/reseau/cache/squidguard_contrib/blacklists.tar.gz)
 I run into memory problems.

 For example, here is relayd's memory usage when only the custom
 white/black lists are loaded (2 URLs total, no big deal):

 # ps aux | grep relayd
 USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
 _relayd  17238  0.0  0.1  1528  3208 ??  I  3:27PM0:00.01
 relayd: relay (relayd)
 _relayd  14280  0.0  0.1  1524  3176 ??  I  3:27PM0:00.02
 relayd: relay (relayd)
 _relayd  30448  0.0  0.1  1396  2812 ??  I  3:27PM0:00.01
 relayd: ca (relayd)
 _relayd  10020  0.0  0.1  1376  2768 ??  I  3:27PM0:00.01
 relayd: ca (relayd)
 _relayd  25775  0.0  0.1  1400  2852 ??  I  3:27PM0:00.01
 relayd: ca (relayd)
 root   346  0.0  0.1  1912  3672 ??  Is 3:27PM0:00.02
 relayd: parent (relayd)
 _relayd  15883  0.0  0.1  1440  2828 ??  I  3:27PM0:00.01
 relayd: pfe (relayd)
 _relayd  32000  0.0  0.1  1220  2560 ??  I  3:27PM0:00.01
 relayd: hce (relayd)
 _relayd   2677  0.0  0.1  1516  3188 ??  I  3:27PM0:00.01
 relayd: relay (relayd)

 Now loading the phishing/domains URL list, which has about ~63k
 entries. relayd's parent process balloons to over 2GB memory usage
 (I'm assuming it's reading the URL lists and building a data structure
 for the relays), and after that the relays stabilize with the
 following memory usage:

 # ps aux | grep relayd
 USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
 _relayd  12982  0.0 12.9 516728 526288 ??  S  3:31PM0:03.44
 relayd: relay (relayd)
 _relayd   1206  0.0  0.1  1368  2836 ??  I  3:31PM0:00.01
 relayd: ca (relayd)
 root 25673  0.0  2.7 155616 111228 ??  Is 3:31PM0:16.35
 relayd: parent (relayd)
 _relayd  15513  0.0  0.1  1416  2832 ??  S  3:31PM0:00.01
 relayd: pfe (relayd)
 _relayd  15643  0.0  0.1  1200  2560 ??  I  3:31PM0:00.01
 relayd: hce (relayd)
 _relayd  25822  0.0 12.9 516716 526296 ??  S  3:31PM0:03.37
 relayd: relay (relayd)
 _relayd  17950  0.0  0.1  1380  2824 ??  I  3:31PM0:00.01
 relayd: ca (relayd)
 _relayd   9068  0.0  0.1  1360  2784 ??  I  3:31PM0:00.01
 relayd: ca (relayd)
 _relayd  19666  0.0 12.9 516712 526292 ??  S  3:31PM0:03.46
 relayd: relay (relayd)

 So that's about ~520 MB of memory per relay process, out of 3 total.
 Next I load another URL list alongside the previous one, the
 adult/urls list, which contains roughly ~55k entries. Adding up
 with the previous list, we have more or less ~118k URLs for relayd to
 process. The parent process takes a couple minutes to process
 everything, going over 4GB VSZ and 2.2GB RSS. After all's said and
 done, here's what's shown by ps:

 # ps aux | grep relayd
 USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
 _relayd   6332  0.0  0.1  1428  2228 ??  I  3:35PM0:00.01
 relayd: ca (relayd)
 _relayd   8736  0.0 23.9 967808 976768 ??  I  3:35PM0:06.81
 relayd: relay (relayd)
 _relayd  22890  0.0 23.9 967812 976768 ??  I  3:35PM0:06.77
 relayd: relay (relayd)
 _relayd   5871  0.0 23.9 967804 976760 ??  I  3:35PM0:06.33
 relayd: relay (relayd)
 _relayd   8199  0.0  0.1  1440  2256 ??  I  3:35PM0:00.01
 relayd: ca (relayd)
 root  5571  0.0  5.3 315032 214796 ??  Is 3:35PM1:28.45
 relayd: parent (relayd)
 _relayd  30781  0.0  0.1  1488  2136 ??  S  3:35PM0:00.01
 relayd: pfe (relayd)
 _relayd   1502  0.0  0.0  1272  2040 ??  I  3:35PM0:00.01
 relayd: hce (relayd)
 _relayd  29135  0.0  0.1  1432  2236 ??  I  3:35PM0:00.01
 relayd: ca (relayd)

 Nearly 1GB of RAM per relay process, and ~214 MB to the parent
 process. This server I'm working with has 4GB of RAM, so it can't go
 much further. If I attempt to load the biggest URL list from the set,
 adult

relayd memory usage when loading large URL lists

2015-03-01 Thread Felipe Scarel
Hello all,

I'm implementing a simple SSL forward proxy using relayd.
Configuration has been fine, as was testing. There seems to be one
issue with memory consumption, however.

To better illustrate my issue, here follows an excerpt of /etc/relayd.conf :

http protocol httpsfilter {
  tcp { nodelay, sack, socket buffer 65536, backlog 1024 }
  return error

  match header set Keep-Alive value $TIMEOUT
  match header set Connection value close

  pass quick url file /etc/relayd.d/custom_whitelist
  block url file /etc/relayd.d/custom_blacklist
  include /etc/relayd.d/auto_blacklist

  ssl ca key  /etc/ssl/private/ca.key password password
  ssl ca cert /etc/ssl/ca.crt
}

So basically it checks against a custom whitelist, then a custom
blacklist, and finally an auto blacklist (which is the main source
of the problem). Using a few URLs with both custom black/white lists
poses no issue, but when attempting to load a somewhat bigger URL list
downloaded from the internet (I'm using
ftp://ftp.ut-capitole.fr/pub/reseau/cache/squidguard_contrib/blacklists.tar.gz)
I run into memory problems.

For example, here is relayd's memory usage when only the custom
white/black lists are loaded (2 URLs total, no big deal):

# ps aux | grep relayd
USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
_relayd  17238  0.0  0.1  1528  3208 ??  I  3:27PM0:00.01
relayd: relay (relayd)
_relayd  14280  0.0  0.1  1524  3176 ??  I  3:27PM0:00.02
relayd: relay (relayd)
_relayd  30448  0.0  0.1  1396  2812 ??  I  3:27PM0:00.01
relayd: ca (relayd)
_relayd  10020  0.0  0.1  1376  2768 ??  I  3:27PM0:00.01
relayd: ca (relayd)
_relayd  25775  0.0  0.1  1400  2852 ??  I  3:27PM0:00.01
relayd: ca (relayd)
root   346  0.0  0.1  1912  3672 ??  Is 3:27PM0:00.02
relayd: parent (relayd)
_relayd  15883  0.0  0.1  1440  2828 ??  I  3:27PM0:00.01
relayd: pfe (relayd)
_relayd  32000  0.0  0.1  1220  2560 ??  I  3:27PM0:00.01
relayd: hce (relayd)
_relayd   2677  0.0  0.1  1516  3188 ??  I  3:27PM0:00.01
relayd: relay (relayd)

Now loading the phishing/domains URL list, which has about ~63k
entries. relayd's parent process balloons to over 2GB memory usage
(I'm assuming it's reading the URL lists and building a data structure
for the relays), and after that the relays stabilize with the
following memory usage:

# ps aux | grep relayd
USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
_relayd  12982  0.0 12.9 516728 526288 ??  S  3:31PM0:03.44
relayd: relay (relayd)
_relayd   1206  0.0  0.1  1368  2836 ??  I  3:31PM0:00.01
relayd: ca (relayd)
root 25673  0.0  2.7 155616 111228 ??  Is 3:31PM0:16.35
relayd: parent (relayd)
_relayd  15513  0.0  0.1  1416  2832 ??  S  3:31PM0:00.01
relayd: pfe (relayd)
_relayd  15643  0.0  0.1  1200  2560 ??  I  3:31PM0:00.01
relayd: hce (relayd)
_relayd  25822  0.0 12.9 516716 526296 ??  S  3:31PM0:03.37
relayd: relay (relayd)
_relayd  17950  0.0  0.1  1380  2824 ??  I  3:31PM0:00.01
relayd: ca (relayd)
_relayd   9068  0.0  0.1  1360  2784 ??  I  3:31PM0:00.01
relayd: ca (relayd)
_relayd  19666  0.0 12.9 516712 526292 ??  S  3:31PM0:03.46
relayd: relay (relayd)

So that's about ~520 MB of memory per relay process, out of 3 total.
Next I load another URL list alongside the previous one, the
adult/urls list, which contains roughly ~55k entries. Adding up
with the previous list, we have more or less ~118k URLs for relayd to
process. The parent process takes a couple minutes to process
everything, going over 4GB VSZ and 2.2GB RSS. After all's said and
done, here's what's shown by ps:

# ps aux | grep relayd
USER   PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED   TIME COMMAND
_relayd   6332  0.0  0.1  1428  2228 ??  I  3:35PM0:00.01
relayd: ca (relayd)
_relayd   8736  0.0 23.9 967808 976768 ??  I  3:35PM0:06.81
relayd: relay (relayd)
_relayd  22890  0.0 23.9 967812 976768 ??  I  3:35PM0:06.77
relayd: relay (relayd)
_relayd   5871  0.0 23.9 967804 976760 ??  I  3:35PM0:06.33
relayd: relay (relayd)
_relayd   8199  0.0  0.1  1440  2256 ??  I  3:35PM0:00.01
relayd: ca (relayd)
root  5571  0.0  5.3 315032 214796 ??  Is 3:35PM1:28.45
relayd: parent (relayd)
_relayd  30781  0.0  0.1  1488  2136 ??  S  3:35PM0:00.01
relayd: pfe (relayd)
_relayd   1502  0.0  0.0  1272  2040 ??  I  3:35PM0:00.01
relayd: hce (relayd)
_relayd  29135  0.0  0.1  1432  2236 ??  I  3:35PM0:00.01
relayd: ca (relayd)

Nearly 1GB of RAM per relay process, and ~214 MB to the parent
process. This server I'm working with has 4GB of RAM, so it can't go
much further. If I attempt to load the biggest URL list from the set,
adult/domains (slightly above 1 million entries), the server hangs
up after a while and demands a hard reset.

Is there any configuration parameter I'm missing here? I've reviewed
the 

httpd client certificates and URL rewriting

2015-02-22 Thread Felipe Scarel
Hello,

I'm currently using httpd + php_fpm to serve a Wordpress website with
OpenBSD 5.7-snapshot (20/02/2015). The added capability to use a
fastcgi target as the default index, along with the general improvements,
is really nice, and for the most part there are no issues. I'd like to thank
Reyk and all OpenBSD devs for such great software.

Two questions regarding current httpd capabilities, or perhaps future
improvements, though:

1) Is there support for client-side certificates on a per-location
basis? It would be a good alternative for improving the security of
administrative parts of a website, rather than relying solely on password
authentication.

2) I'm using a *.php* location block to redirect requests to
php_fpm, which in turn requires the URL to contain an expression
matching that pattern. Therefore, /index.php/%CONTENT% style URLs
are currently in use. In order to use so-called 'pretty' URLs (that
is, without the index.php portion), would it be necessary to have the
rewrite capabilities offered by Apache/nginx, or is there an
alternative I'm overlooking?
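
For reference, the setup described above boils down to something like this
(the server name is a placeholder):

server "www.example.com" {
  listen on * port 80
  directory index "index.php"

  location "*.php*" {
    fastcgi socket "/run/php-fpm.sock"
  }
}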

Thanks for your input,
fbscarel



Re: English and Spanish keyboard at same time?

2009-07-22 Thread Felipe Scarel
Try kbd(8).
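
Roughly (from memory, so double-check the man page):

  # kbd -l         # list the available layouts
  # kbd es         # switch to the Spanish layout
  # kbd us         # and back to US
  # echo es > /etc/kbdtype    # keep the setting across reboots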

On Wed, Jul 22, 2009 at 11:43, Chris Bennett
ch...@bennettconstruction.biz wrote:

 I do most of my work in English, but I also do a small amount in Spanish.
 I have a Spanish keyboard, but when I tried hooking it up, I didn't get
what was on the keys.

 Is there any way to change this dynamically so that I can switch back and
forth easily?

 Chris Bennett

 --
 A human being should be able to change a diaper, plan an invasion,
 butcher a hog, conn a ship, design a building, write a sonnet, balance
 accounts, build a wall, set a bone, comfort the dying, take orders,
 give orders, cooperate, act alone, solve equations, analyze a new
 problem, pitch manure, program a computer, cook a tasty meal, fight
 efficiently, die gallantly. Specialization is for insects.
  -- Robert Heinlein



Re: Stupid Ideas - softraid and ExpEther

2009-04-08 Thread Felipe Scarel
Forgot to CC the list, my bad.

On Wed, Apr 8, 2009 at 12:25 PM, Joseph C. Bender
jcben...@bendorius.com wrote:
 J.C. Roberts wrote:

 As for the mentioned issue of encrypting the bus data, since you've got
 the VLAN it is feasible, but if you've got an attacker inside the
 switches of your datacenter, then you obviously have more important
 problems.

 Another scenario is that you get a compromised machine that has access to
 this pool of resources.  I don't have to compromise your switching, I just
 have to compromise a host that uses this network.  Given that Windows hosts
 get to participate with this sort of thing, that's just a matter of time.

 Given that the security model relies on *VLANS* of all things to segment
 network resources (from what little information is out there), one
 compromised host could ruin your whole day, especially if the switch has
 VLAN tagging vulnerabilities as well (which has happened more times
 than I'd like to think about.)


Since J.C. is talking about HPC, I don't think that'd be such a
concern. Like Matthew said, the dedicated network scenario is much
more likely, and thus the probability of a compromised host decreases
dramatically (since you control every single host in the network).

I'm currently working with bioinformatics algorithms in cluster
environments, so (as always) your extremely detailed emails have been
great reading material, J.C. Thanks, and keep up the great work!


 -JCB



Re: (bit)torrent openbsd client

2009-01-28 Thread Felipe Scarel
Currently using rtorrent over here.

On Wed, Jan 28, 2009 at 1:12 PM, Mihai Popescu B.S. mihai...@gmail.com
wrote:

 Hello,

 Could you make some suggestions for a good OpenBSD (bit)torrent client,
 with or without a GUI? I know some names, but I want to hear about some
 user experience.

 Thanks.




--
PWSys - Technology Solutions (http://pwsys.com.br)
PWFriends - Friendship is what it's all about! (http://pwfriends.com.br)
#!/bin/ksh - My Web Log (http://fbscarel.wordpress.com)



Re: (bit)torrent openbsd client

2009-01-28 Thread Felipe Scarel
I'm using rtorrent on -current, no issues whatsoever.

On Wed, Jan 28, 2009 at 1:54 PM, Mattieu Baptiste mattie...@gmail.com wrote:
 On Wed, Jan 28, 2009 at 4:26 PM, fRANz andrea.francesc...@gmail.com wrote:

 try rtorrent:
 http://libtorrent.rakshasa.no/


 Any feedback on the status of rtorrent on -current? I tested it two
 months ago and I experienced system crashes like some people had.

 --
 Mattieu Baptiste
 /earth is 102% full ... please delete anyone you can.



Re: Letter to OLPC

2006-10-06 Thread Felipe Scarel

I totally agree with Siju on this. Living in a 3rd-world country, as I
guess he does too, I am pretty sure that a laptop isn't at all
important for disadvantaged children, as has been said.

The REAL needs in our countries are, as previously said, food, health
care and good education. The most urgent of them all is food, so I
could bet anything that a disadvantaged child wouldn't think
twice about selling the useless laptop in exchange for some
money. Moreover, there isn't easy access to internet
connections in 3rd-world countries, so the laptop is even MORE useless.

All that said, this "disadvantaged children" talk is clearly a load
of bullshit. No doubt OLPC is after money, and only that.

PS: I feel happy every day reading the emails at [EMAIL PROTECTED]; it reinforces
my belief in truly Free software and, of course, in OpenBSD. Keep it
up!

On 10/6/06, Siju George [EMAIL PROTECTED] wrote:

On 10/6/06, Jack J. Woehr [EMAIL PROTECTED] wrote:
  Free and open software is a means to an end, rather than the
  sole end unto itself for OLPC.
 
  I was totally stunned by this admission.  morally bankrupt, as Bob
  says, is exactly what is going on.

 Hmm, sounds like you are saying that abstract goal of unlimited
 software freedom is
 a higher goal than providing access to modern technology to
 disadvantaged children in
 3rd-world countries.


If the real concern is for *disadvantaged children* in third-world
countries, then giving them a laptop is the most ridiculous idea ever
originated!

Some time back I saw a cartoon. One of the 3rd-world countries blasted
their nuclear bomb and was proud of it. Proud that they were on par
with the others in the West. While their people were still begging and
starving in the streets and villages.

The cartoon showed a poor beggar sitting on the street in torn
clothes, holding out the beggar's basin to receive a missile sent to it.

In the third world the basic necessities are food, water, clothing,
shelter, medical care, etc.
Disadvantaged children couldn't care less about a stupid laptop when they
have had no meal for a week and are tired of the sun while watching
their siblings die of cholera.

Getting a laptop to a child for low cost seems to be a noble idea on
the outside.
Add a *3rd-world country* phrase and you get a more polished *charity
painted/noble* image.

I don't think OLPC is that great! It is another form of business.
They have seen a market. They want to reach it. That's all!

Mostly, people who applaud such endeavours *do not have any idea* of
the issues of third-world countries.

I am not angry Jack.
But when I find people *over-nobleizing* at the expense of the 3rd-world
countries, I think I need to say this.

Kind Regards

Siju






--

 Felipe Brant Scarel
 PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: GPL = BSD + DRM [Was: Re: Intel's Open Source Policy Doesn't Make Sense]

2006-10-06 Thread Felipe Scarel

Is that all you can say to defend your point of view? If you are wrong
(and you probably are), you should admit it, not repeat "quote out of
context" as a silly escape.

On 10/6/06, Han Boetes [EMAIL PROTECTED] wrote:

quote out of context

Rod.. Whitworth wrote:
 On Fri, 6 Oct 2006 03:50:38 +0159, Han Boetes wrote:

  In my world freedom is something you have to fight for, otherwise
  it gets taken away. Putting a limit on your freedoms is a good
  thing.

 Bullshit!

 Now don't quote me that specious crap about how free speech is limited
 by no freedom to falsely cry Fire! in a crowded theatre.

 That is the refuge of philosophy 101 students or shitheads who only
 advance it so that they can gloat about the stupidity of someone who
 did not recognise the trick.

 You are free to spout whatever crap you espouse. You yourself never
 fought for that right but I won't deny you that right.

 Somebody may call you to account for abusing that freedom.

 Like now.

 Your puerile confusion of freedoms of speech or thought with free
 software (as we know it) does not do more than demonstrate your lack
 of maturity and a need for some training of your brain's crap detector.
 If it is not atrophied, that is.

 I was an IBM Linux instructor until a couple of years ago and I can
 tell you for certain that your (wishful) thinking about why they (IBM)
 espouse Linux is wildly astray. Try again.

 But not here, please. You have waffled on too long and I am wearied of
 watching your twaddle go by.

 plonk
 EOF



 From the land down under: Australia.
 Do we look umop apisdn from up over?

 Do NOT CC me - I am subscribed to the list.
 Replies to the sender address will fail except from the list-server.
 Your IP address will also be greytrapped for 24 hours after any
 attempt.
 I am continually amazed by the people who run OpenBSD who don't take
 this advice. I always expected a smarter class. I guess not.




# Han





--

 Felipe Brant Scarel
 PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: xmms does not run smoothly

2006-05-18 Thread Felipe Scarel

I have always had the suspicion that desktop software like xmms and firefox
runs a bit slower on OpenBSD compared with other OSes, but I never had a
clue why it happened, or whether it was only happening on my machine.

I suspect (and may be completely wrong) that it could be something regarding
process-switching latency. Let me explain: when compiling the Linux kernel,
there is an option somewhere to change kernel latency, with three choices
{server, ?, low-latency desktop}; I forgot the middle one.

Is my guess wrong? Should I change login.conf, or is there any sysctl to
be changed?
Thanks, and don't flame me [too much]. =)

Regards,

On 5/18/06, Philip Guenther [EMAIL PROTECTED] wrote:

On 5/18/06, Martin Toft [EMAIL PROTECTED] wrote:
 xmms on my computer freezes temporarily when doing disk-intensive tasks,
 e.g. examining the ID3-tags of a long playlist.  ...

While not a solution to the general problem of stuttering in xmms, the
stuttering while scrolling your playlist is easily solved now by
saving into the playlist the ID3-tag data.  xmms will do this for you
if you load a playlist, scroll through it (so that xmms has to load
the data for all the entries), and then save the playlist back to
disk, overwriting the file you loaded.  When you next load that
playlist, you'll experience no delays for ID3-tag loading.  That's
just for .m3u playlist files, of course.

Not perfect, but it's an easy way to get rid of an annoying class of blips.

(The data is just placed in comments in the .m3u file, so you could
probably whip up a script to insert that info without having to work
the xmms GUI...)


Philip Guenther






--

 Felipe Brant Scarel
 PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: Binary Update for Packages

2006-05-08 Thread Felipe Scarel

Why in the hell don't you simply use the provided precompiled packages?


From the OpenBSD FAQ:


Another advantage is that users rarely need to compile software from
source, as packages have already been compiled and are available and
ready to be used on an OpenBSD system.



The ports tree is meant for advanced users. Everyone is encouraged to
use the pre-compiled binary packages.
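
For the common case, something along these lines is all it takes (the mirror
host is just an example; pick one close to you):

  # export PKG_PATH=ftp://ftp.openbsd.org/pub/OpenBSD/`uname -r`/packages/`uname -m`/
  # pkg_add -u            # or: pkg_add -u <pkgname> for individual packages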

Regards,

On 5/8/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hello everybody,

I would like to set up a local FTPd.
pkg_add allows updating packages, but for this, new packages must be
available, of course. Is there a script to build all packages from the
ports tree (e.g. the script used to make all the packages for every
release)?

I'm not sure if a simple 'make package' would be enough already.

Did somebody write such a script (which also prevents rebuilding every
package if just a few ports were updated)?

I plan it this way:
I have a Duron 900 which would run an ftpd and do cvs updates (so the ports
are updated). But compiling the same software on many systems is a kind of
waste, and the official sources do not (always) provide updated packages.
So the little box would compile all packages and then just recompile the
packages which need to be updated.

This would lead to a simple and easy-to-use update server for a local LAN.
(Maybe that's also an idea for tech@ to include in the base system: a
simple shell script which helps you build your own update servers.)

Kind regards,
Sebastian






--

 Felipe Brant Scarel
 PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: openbsd and the money -solutions

2006-03-24 Thread Felipe Scarel
"Copyright law is complex, OpenBSD policy is simple - OpenBSD strives to
maintain the spirit of the original Berkeley Unix copyrights."

This is the first sentence of this page: http://www.openbsd.org/policy.html

Can't people see how ridiculous all that "why don't we change the
license?" talk is? It's written clearly: "strives", which means that being
as free as Berkeley Unix was is damn important to the project.

Besides, let's say that all of a sudden OpenSSH's license changes, as has
been suggested by many. Any company and/or project could think: "Well,
the new version has XXX license but the previous version is BSD! So let's
just get the old code and fork it."

Read the OpenSSH history here: http://openssh.org/history.html

Whoo, these first sentences are really great: "OpenSSH is a derivative of
the original free ssh 1.2.12 release from Tatu Ylönen."

Tatu changed the license and created... SSH.com! How ironic... why
wouldn't someone think of doing just the same if OpenSSH's license changed?
Cut these threads, please, and let the devs code.

On 3/24/06, Brian [EMAIL PROTECTED] wrote:

 --- Spruell, Darren-Perot [EMAIL PROTECTED] wrote:

  Better approach. How about said companies belly up and support the group
  that enables them (in part) to enjoy the financial success they have?

 Because there is no reason for them to.  Here's what would happen:

 1) license change comes out
 2) IT looks for an alternative program
 3) IT provides figures to finance for either the alternative program,
    the new license, or in-house development
 4) finance runs some cash flow analysis and sits down with the CIO and
    CFO based on the results
 5) a suggestion is provided to management

 I work in finance.  There is no reason to provide funding from a business
 standpoint.  What does the business gain?  Corporations basically have a
 free development team.  Sure, they cannot dictate requests, but the code
 quality is high and the product works well.

 Honestly, unless the OpenSSH team mandates funding, no one will cough up
 cash.  And the license price has to be the sweet spot, where it isn't so
 high that no funding is received and not so low that it doesn't accomplish
 anything.

 And Theo, from his messages, doesn't want the direction of the program
 dictated to him by folks that donate.  No corporation is gonna provide
 funding unless they get something out of it.

 I think Theo needs to put his foot down on this issue.  I would think of
 OpenSSH as separate from OpenBSD.  I would not advocate changing licenses
 on the rest of OpenBSD.  Of course, the downside is that some of the
 corporations might withhold documentation needed for driver development
 unless the license is lifted.

 Cheers,

 Brian




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: OBPkg (Port/Package installer)

2006-03-15 Thread Felipe Scarel
404 Not Found... is the URI correct?

On 3/14/06, Steffen Wendzel [EMAIL PROTECTED] wrote:

 Hi,

 I wrote an unofficial front-end for the installation of ports
 and packages under OpenBSD. It is Gtk+-2 based (you need v. 2.6
 or newer).

 You can install local ports, local packages (e.g. from a mounted CD-ROM)
 and packages from FTP. It also supports "universe" package mirrors
 that can include unofficial packages. You can use these unofficial
 mirrors to provide more packages for OpenBSD than are currently available.
 This is just an idea; I hope it works. I stole this idea from
 the Ubuntu project. They have a tool called 'synaptic' (or so) and
 it supports such 'universe' packages -- a good thing; they now
 have thousands of additional unofficial packages.

 You can find the software here:
 http://www.doomed-reality.org/projekte/obpkg/description.html

 hope some of you will like it,

 Steffen


 --
 cdp.doomed-reality.org

 Imagination is more important than knowledge, for knowledge is limited.
   -- Einstein




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: X11 exploit info

2006-02-13 Thread Felipe Scarel
I thought the very same thing yesterday, when he published his web site
on the list. I took a look there, and assuming everything is correct, it
looks like he ported KDE and Qt to OpenBSD, which seems huge (though of
course he probably didn't do that alone).

Moreover, his job career includes big companies like IBM and AT&T, so he
can't be such a novice... how come his recent posts are so troll-like?
It doesn't make any sense to me.

PS: Great book Craig, thanks for the suggestion!

On 2/13/06, Craig M [EMAIL PROTECTED] wrote:

 Regarding Dave's postings to misc@:
 I posted here about Dave's trollisms and recommended that he read
 page 17 of Absolute OpenBSD - Unix for the practical paranoid
 (By Michael Lucas - ISBN: 1886411999)

  That post I made might have been a little naive, as I have just
  read the 'Acknowledgements' section. And LO! it turns out that
 Dave Feustel is mentioned on that very page. I apologise if this is
 already common knowledge among list users, but I'm not that well
 informed on particular individuals who are involved in the OpenBSD
 and similar 'movements'.

 However, it has raised my suspicions to a higher level. The book is
 copyrighted in 2003, long before I subscribed to this list and maybe
  even heard of OpenBSD really. The thing is, why would somebody who has
  assisted in the writing of this excellent book be posting such
  troll-like pieces to this list?

 Maybe Dave, or somebody with better knowledge on these matters, would
 like to enlighten me on this? It just seems very strange to me.

 Regards,

 Craig M

 On Sat, 2006-02-11 at 06:03 -0500, Dave Feustel wrote:
  at http://www.hackinglinuxexposed.com/articles/
  is a 3-part series on X-11 exploits which those who
  think they understand x11 security might wish to
  read and comment upon. I clearly don't understand
  x11 security so I have no comments, but I will read
  with great interest comments by anyone else.
 
  05-Jul-2004: SSH Users beware: The hazards of X11 forwarding
   Logging into another machine can compromise your desktop...
 
  08-Jun-2004: The ease of (ab)using X11, Part 2
   Abusing X11 for fun and passwords.
 
  13-May-2004: The ease of (ab)using X11, Part 1
   X11 is the protocol that underlies your graphical desktop environment,
 and you need to be aware of its security model.
 
  Dave Feustel




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: The Apache Question

2006-02-08 Thread Felipe Scarel
Well then, I'll take a look at your suggestion, Joachim; it seems reasonable.
Too bad most developers actually *prefer* FTP over ssh, so it's going to be
difficult to convince them. Well, it looks like I'll just have to implement
it... they'll get used to it anyway =)

Talking about the Apache2 port: as soon as I get the grasp of porting
software to OpenBSD I'll try to do that; it would be quite helpful.

Erm... just a lazy question, but does lighttpd have support for DAV?

On 2/8/06, Joachim Schipper [EMAIL PROTECTED] wrote:

 On Tue, Feb 07, 2006 at 11:05:44PM -0200, Felipe Scarel wrote:
  Since it's an open source project in which anyone can commit to the
  repository anytime, it's not possible to add each and every user as a
  system user.  Instead, we're using Plone to write user information on
  the htaccess-style file that Subversion reads.
 
  However, I guess I'm going to use your strategy on another server that
  is not wide open to commits, looks more than enough.
 
  Anyway, an Apache2 port wouldn't be a bad idea... I'll study some more
  and try to work on that on the near future.

 There is no need for that, really. Use public key authentication, one
 key per person, and a .ssh/authorized_keys file that looks like this,
 minus line breaks and empty lines and with actual public keys:

 command="umask 027; svnserve -t --tunnel-user=joachim -r
 /var/svn",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty
 ssh-rsa $pubkey_joachim [EMAIL PROTECTED]

 command="umask 027; svnserve -t --tunnel-user=felipe -r
 /var/svn",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty
 ssh-rsa $pubkey_felipe [EMAIL PROTECTED]

 It's quite neat, and no need for Apache 2. Setting up a session might be
 slightly quicker in Apache, but data throughput might be equal. Or not -
 I don't know if mod_dav_svn does any caching, and I've never benchmarked
 it.
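
 (On the client side, with the -r /var/svn root above, a checkout should
 then presumably just be:

   $ svn checkout svn+ssh://svn.example.org/myproject

 with the host name being whatever box runs sshd, of course.)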

 And if you keep an ssh session open (ControlMaster and so on, see
 ssh_config(5)), I'd imagine it being quite a bit faster under a normal
 usage pattern for a developer (lots of connections, exchanging little
 data each time).

 Joachim




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: The Apache Question

2006-02-08 Thread Felipe Scarel
Thanks there, I'll consider using lighttpd then.

On 2/8/06, Bret Lambert [EMAIL PROTECTED] wrote:

 Felipe Scarel wrote:
  Well then, I'll take a look at your suggestion, Joachim; it seems
  reasonable. Too bad most developers actually *prefer* FTP over ssh, so
  it's going to be difficult to convince them. Well, it looks like I'll
  just have to implement it... they'll get used to it anyway =)

  Talking about the Apache2 port: as soon as I get the grasp of porting
  software to OpenBSD I'll try to do that; it would be quite helpful.

  Erm... just a lazy question, but does lighttpd have support for DAV?
 

 From http://www.lighttpd.net/documentation/webdav.html:

 The WebDAV module is a very minimalistic implementation of RFC 2518.
 Minimalistic means that not all operations are implemented yet.

 - Bret




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: The Apache Question

2006-02-07 Thread Felipe Scarel
Sure OpenBSD's modified Apache 1.3 is way more secure than most stuff out
there, and is working great.

However, the Subversion version control system (which my project uses)
requires Apache2 in order to do DAV checkouts and commits, better
authentication and more. So my only choice was to manually install Apache2
and compile mod_dav_svn.so in order to use these features on OpenBSD. No big
deal, but I would surely appreciate a port for Apache2; it would have made
my life much easier.

Anyway, I agree with the other guys: there's no way Apache2 will make it to
the base system; its license is a major issue against that.

--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: openbsd's future plans?

2006-02-07 Thread Felipe Scarel
Aside from all the (somewhat funny, especially the Java one) jokes, what are
the plans regarding SMP?

Recently I had to install FreeBSD on a dual-Xeon server because its SMP
support is somewhat better than OpenBSD's, but that did not please me at
all, so that is indeed a good question.

--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: The Apache Question

2006-02-07 Thread Felipe Scarel
Since it's an open source project in which anyone can commit to the
repository at any time, it's not possible to add each and every user as a
system user. Instead, we're using Plone to write user information into the
htaccess-style file that Subversion reads.

However, I guess I'm going to use your strategy on another server that is
not wide open to commits; it looks more than enough.

Anyway, an Apache2 port wouldn't be a bad idea... I'll study some more and
try to work on that in the near future.

On 2/7/06, Joachim Schipper [EMAIL PROTECTED] wrote:

 On Tue, Feb 07, 2006 at 09:26:31PM -0200, Felipe Scarel wrote:
  Sure OpenBSD's modified Apache 1.3 is way more secure than most stuff
 out
  there, and is working great.
 
  However, the Subversion versioning control system (which my project
 uses)
  demands Apache2 in order to do DAV checkouts and commits, better
  authentication and more. So, my only choice was to manually install
 Apache2
  and compile mod_dav_svn.so in order to use these features in OpenBSD. No
 big
  deal, but I would surely appreciate a port for Apache2, it would have
 made
  my life much easier.
 
  Anyway, I agree with the other guys: no way Apache2 will make it to the
 base
  system, its license is a major issue against that.

 I don't know about you, but I had the same svn-over-apache-2 setup. I
 switched to svn+ssh, and all seems well. It has the added advantage of
 taking version control further away from my very untrusted web scripts
 and somewhat untrusted web server.

 sshd is a trusted component, at least in the sense that anyone who can
 break that essentially owns the system.

 Joachim




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: OpenBSD hardware router

2006-02-02 Thread Felipe Scarel
Any chance of buying one of those here from Brazil?

On 2/2/06, Will H. Backman [EMAIL PROTECTED] wrote:

 Kenny Mann wrote:
  I'm looking for something that I can slap OpenBSD 3.8 on and use
  as a router.
  This will be used for a house (~ 4 people), and I'm looking for something
  small in form factor that doesn't run hot, because it will run
  in a closet.
  I'm seeking to replace our D-Link router because it seems to lock up on
  occasion, and this seems like a fun little project to do.
  I'd also like it to have wireless capabilities.
  Anyone know where I can start looking, or can point me in a direction to
  start?
  Or are my hopes too high, and should I just get a PC and make it happen
  that route (pun not intended)?
 
  Kenny Mann
 

  If you are trying not to spend a lot of money, you could find an almost
  free laptop (200 - 300 MHz) and use that.  Cost will go up if you don't
  already have some PCMCIA or USB ethernet and wireless cards.




--

  Felipe Brant Scarel
  PATUX/OpenBSD Project Leader (http://www.patux.cic.unb.br)



Re: Ruby queries

2005-12-17 Thread Felipe Scarel
Regarding tcl and tk: a few days ago I had to compile PIL (Python Imaging
Library) for my Zope/Plone server. Since it also requires tcl and tk, this
information may be useful for your trouble.

I installed both using the OpenBSD package method, but when I tried to run
setup.py, tk complained about the Xlib.h header file. I realized that the
xshare38.tgz and xserver38.tgz sets were missing (duh), and installed them.
After an updatedb/locate Xlib.h I found it had been correctly installed, so
compilation should be OK.

However, tk was still complaining about X libraries, so something had to
be wrong. After some searching, I found that it was looking for headers in
the default place, /usr/include, but the file was at
/usr/X11R6/include/X11/Xlib.h. So I symlinked /usr/X11R6/include/X11 to
/usr/include/X11, and everything went just fine.
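
In other words, simply:

  # ln -s /usr/X11R6/include/X11 /usr/include/X11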

Not sure if this is your problem (probably not), but if anyone runs into
this, they will find answers here. By the way, if anyone is asking why I
didn't simply use OpenBSD's PIL package, it's because Plone 2.1.1 sorta
requires PIL 1.1.5, and only PIL 1.1.4p0 is available at present.

On 12/17/05, Edd Barrett [EMAIL PROTECTED] wrote:

 Hello misc@openbsd.org,

  I have been tinkering with ruby on OpenBSD recently, and I have come
  across the following troubles, which I have researched on google and
  marc, but no cigar:

  a) I have been unable to configure mod_ruby. First of all I jumped in and
  added a LoadModule line and also an AddType line to my httpd.conf, and
  hoped it would work. It didn't.  Secondly I consulted the mod_ruby
  webpage, which offers a more complicated solution, which also didn't work.
  Then I stumbled across mod_ruby-enable in the packing list, which pretty
  much does what I did in the first case, but copies the .so to another dir
  (is this necessary? Unaccounted-for files are not good). So my basic
  question is how do you set up mod_ruby, and could it be documented
  someplace?

  b) Which pkg holds tcltklib? If I try to run any program that requires
  tk, then I get an error like this:
 /usr/local/lib/ruby/1.8/tk.rb:7:in `require': No such file to load --
 tcltklib

 I have tcl, tk, tcllib installed.

  Here's a dmesg for luck:

 OpenBSD 3.8-current (GENERIC) #0: Thu Dec 15 18:17:09 GMT 2005
 [EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
 cpu0: Intel(R) Celeron(R) M processor 1500MHz (GenuineIntel 686-class)
 1.50 GHz
 cpu0:


FPU,V86,DE,PSE,TSC,MSR,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,CFLUSH,ACPI,MMX,FXSR
 ,SSE,SSE2,TM,SBF
 real mem  = 258449408 (252392K)
 avail mem = 228962304 (223596K)
 using 3180 buffers containing 13025280 bytes (12720K) of memory
 mainbus0 (root)
 bios0 at mainbus0: AT/286+(d8) BIOS, date 02/21/05, BIOS32 rev. 0 @
 0xfd740
 pcibios0 at bios0: rev 2.1 @ 0xfd6d0/0x930
 pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfdeb0/256 (14 entries)
 pcibios0: PCI Interrupt Router at 000:31:0 (Intel 82371FB ISA rev 0x00)
 pcibios0: PCI bus #2 is the last bus
 bios0: ROM list: 0xc/0xd000! 0xcd000/0x1000 0xce000/0x1000
 0xdc000/0x4000! 0xe/0x1
 cpu0 at mainbus0
 pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
 pchb0 at pci0 dev 0 function 0 Intel 82852GM Hub-PCI rev 0x02
 Intel 82852GM Memory rev 0x02 at pci0 dev 0 function 1 not configured
 Intel 82852GM Configuration rev 0x02 at pci0 dev 0 function 3 not
 configured
 vga1 at pci0 dev 2 function 0 Intel 82852GM AGP rev 0x02: aperture at
 0xe000, size 0x800
 wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
 wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
 Intel 82852GM AGP rev 0x02 at pci0 dev 2 function 1 not configured
 uhci0 at pci0 dev 29 function 0 Intel 82801DB USB rev 0x01: irq 11
 usb0 at uhci0: USB revision 1.0
 uhub0 at usb0
 uhub0: Intel UHCI root hub, rev 1.00/1.00, addr 1
 uhub0: 2 ports with 2 removable, self powered
 uhci1 at pci0 dev 29 function 1 Intel 82801DB USB rev 0x01: irq 11
 usb1 at uhci1: USB revision 1.0
 uhub1 at usb1
 uhub1: Intel UHCI root hub, rev 1.00/1.00, addr 1
 uhub1: 2 ports with 2 removable, self powered
 uhci2 at pci0 dev 29 function 2 Intel 82801DB USB rev 0x01: irq 11
 usb2 at uhci2: USB revision 1.0
 uhub2 at usb2
 uhub2: Intel UHCI root hub, rev 1.00/1.00, addr 1
 uhub2: 2 ports with 2 removable, self powered
 ehci0 at pci0 dev 29 function 7 Intel 82801DB USB rev 0x01: irq 11
 usb3 at ehci0: USB revision 2.0
 uhub3 at usb3
 uhub3: Intel EHCI root hub, rev 2.00/1.00, addr 1
 uhub3: 6 ports with 6 removable, self powered
 ppb0 at pci0 dev 30 function 0 Intel 82801BAM Hub-to-PCI rev 0x81
 pci1 at ppb0 bus 1
 cbb0 at pci1 dev 0 function 0 Texas Instruments PCI1510 CardBus rev
 0x00:
 irq 11
 iwi0 at pci1 dev 2 function 0 Intel PRO/Wireless 2200BG rev 0x05: irq
 11,
 address 00:12:f0:79:36:41
 fxp0 at pci1 dev 8 function 0 Intel PRO/100 VE rev 0x81: irq 11, address
 00:0a:e4:33:68:74
 inphy0 at fxp0 phy 1: i82562ET 10/100 PHY, rev. 0
 cardslot0 at cbb0 slot 0 flags 0
 cardbus0 at cardslot0: bus 2 device 0 cacheline 0x8, lattimer