Re: HP LaserJet 1100 lpr printing?

2015-06-12 Thread hruodr
Is so much software really necessary? Isn't the ghostscript driver enough?
Is it not possible without CUPS?

Rodrigo.


skin...@britvault.co.uk (Craig Skinner) wrote:

 The HP LaserJet 1100 has been working with CUPS when connected to an old

 [...]

 Long version for others with this printer (2 sections):

 [...]

 $ pkg_info -I hplip hpijs hpcups hpaio \
   cups cups-filters cups-libs avahi dbus \
   foomatic-db foomatic-db-engine
 hplip-3.14.6          HP Linux Imaging and Printing
 hpijs-3.14.6          HP ghostscript driver (spooler independent)
 hpcups-3.14.6         HP native CUPS driver
 hpaio-3.14.6          HP sane(7) scanner backend
 cups-1.7.4p0          Common Unix Printing System
 cups-filters-1.0.54p2 OpenPrinting CUPS filters
 cups-libs-1.7.4       CUPS libraries and headers
 avahi-0.6.31p13       framework for Multicast DNS Service Discovery
 dbus-1.8.6v0          message bus system
 foomatic-db-4.0.20131218  Foomatic PPD data
 foomatic-db-engine-4.0.11 Foomatic PPD generator



Re: Flex / Regexps (Re: awk regex bug)

2015-06-08 Thread hruodr
I wrote:

 [...] Why should it
 be difficult to track the indices in yytext of the beginning and the end
 of each matching subexpression, in two arrays of integers (one for
 the beginning and one for the end)? [...]

More precisely: the first array holds the index of the first element of
the matching subexpression, the second the index of the last element
plus one. When both indices are equal, the subexpression is empty.

If the second index points to something irrelevant in yytext, one can
set that byte of yytext to 0 to cheaply obtain a pointer to a
null-terminated string equal to the subexpression.
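Incidentally, the convention described above (a start index, plus an end index one past the last element, with equal indices meaning an empty match) is the same pair that POSIX regexec() reports in its regmatch_t array (rm_so/rm_eo), and that Python's re module exposes as match.start()/match.end(). A small illustration in Python:

```python
import re

# Match "a(b*)(c?)d" against "abbd": group 1 = "bb", group 2 is empty.
m = re.match(r"a(b*)(c?)d", "abbd")

# Two parallel arrays of integers, as proposed: starts[i] is the index
# of the first character of subexpression i+1, ends[i] is the index one
# past its last character.
starts = [m.start(i) for i in range(1, 3)]
ends = [m.end(i) for i in range(1, 3)]

print(starts, ends)          # [1, 3] [3, 3]
assert m.string[starts[0]:ends[0]] == "bb"
# Equal indices mean the subexpression matched the empty string.
assert starts[1] == ends[1]
```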

Just dreaming.

Rodrigo.



Flex / Regexps (Re: awk regex bug)

2015-06-08 Thread hruodr
Otto Moerbeek o...@drijf.net wrote:

 Referring to subpatterns is not available in flex.  I suppose it is not
 available since it would require a more complex re engine.
 Interpretation of the lexical value should be hand-crafted. 

I also thought complexity could be the reason, but I have doubts. Why should
it be difficult to track the indices in yytext of the beginning and the end
of each matching subexpression, in two arrays of integers (one for
the beginning and one for the end)? Neither memory nor time seems to
be a problem. And hand-crafting means not only avoidable programming
work and unreadability, but a second pass that adds complexity.

A nice source on regexps is here:  https://swtch.com/~rsc/regexp/

In the first article listed there you read:


While writing the text editor sam [6] in the early 1980s, Rob Pike wrote a 
new regular expression implementation, which Dave Presotto extracted into 
a library that appeared in the Eighth Edition. Pike's implementation 
incorporated submatch tracking [sic!] into an efficient NFA simulation but, 
like the rest of the Eighth Edition source, was not widely distributed. 
Pike himself did not realize that his technique was anything new. Henry 
Spencer reimplemented the Eighth Edition library interface from scratch, 
but using backtracking, and released his implementation into the public 
domain. It became very widely used, eventually serving as the basis for 
the slow regular expression implementations mentioned earlier: Perl, PCRE, 
Python, and so on. (In his defense, Spencer knew the routines could be 
slow, and he didn't know that a more efficient algorithm existed. He 
even warned in the documentation, "Many users have found the speed 
perfectly adequate, although replacing the insides of egrep with this 
code would be a mistake.") Pike's regular expression implementation, 
extended to support Unicode, was made freely available with sam in late 
1992, but the particularly efficient regular expression search algorithm 
went unnoticed. The code is now available in many forms: as part of sam, 
as Plan 9's regular expression library, or packaged separately for Unix. 
Ville Laurikari independently discovered Pike's algorithm in 1999, 
developing a theoretical foundation as well [2]. 


Note that OpenBSD's regex library seems to use the slow Spencer 
implementation.

Rodrigo.



Re: awk regex bug

2015-06-08 Thread hruodr
Otto Moerbeek o...@drijf.net wrote:

 Traditionally, { } patterns are not part of awk re's.

 Posix added them, but we do not include them afaik. Gnu awk only accepts
 them if given an extra arg (--posix or --re-interval).

 I think this should be documented.

Although there is a clear theory of regular expressions, I have the
impression that there is no standard syntax. One needs to read, again and
again, the documentation of the programs that use them.

I am just missing a way to reference, in a (f)lex action, a previously
matched subexpression (like \1 ... \9 in an ed substitution).

Why is this? Because lex is so old? And what do people do in these
cases?

Rodrigo



Problem with g command in ed

2015-05-16 Thread hruodr
Dear Sirs!

I am having a problem with the above command. I am using an older
version of OpenBSD, but perhaps the problem is also in newer versions.

Experiment:

(1) Write a file with 5 lines, each containing only the character 'a'.

(2) Apply the following two-line command:

g/a/\
d

Does it do what one expects?

Thanks
Rodrigo.



Re: Problem with g command in ed

2015-05-16 Thread hruodr
Andreas Kusalananda Kähäri andreas.kah...@icm.uu.se wrote:

  g/a/\
  d

 The command that you give is, according to the manual, equivalent to

 g/a/p\
 d

 (since A newline alone in command-list is equivalent to a p command).

 So, it prints all lines matching /a/ and then deletes them.

I expect each matching line to be printed and then deleted. OpenBSD 4.8 ed
does not do that; it behaves strangely. Was the problem perhaps fixed later?

Rodrigo.



ed -s

2015-05-09 Thread hruodr
Invoking ed -s file.txt, where the file does not contain a newline at the
end, prints "newline appended" to stderr in spite of the -s flag. Is this
normal behaviour?

Do ed/sed corrupt files containing non-ASCII bytes (for example Unicode
characters)? From experience it seems to me that they, like lex, do not
corrupt files, that one can edit them; but I do not know to what extent,
or whether there is a risk.

Thanks
Rodrigo.



man mg

2015-05-01 Thread hruodr
I have just discovered this nice, fast emacs imitation. The man page
says:


 CAVEATS
 Since it is written completely in C, there is currently no language in
 which you can write extensions; however, you can rebind keys and change
 certain parameters in startup files.
 [...]

In my opinion, there is a language: C. It would be nice to have in a
default OpenBSD installation the sources, a structure, and documentation
to easily extend it in C (at one's own risk). Also the ability to run
external functions, written in any language one wants (for example C, lex,
sed, awk), for editing the buffer without leaving mg.


 In order to use 8-bit characters (such as German umlauts), the Meta key
 needs to be disabled via the ``meta-key-mode'' command.

Unfortunately, writing non-ASCII characters does not work well.

Rodrigo.



Re: fax capable UMTS sticks [Was: Re: Question on Serial Ports]

2015-04-12 Thread hruodr
On Thu, 9 Apr 2015, Marcus MERIGHI mcmer-open...@tor.at wrote:

 Sorry, no. But I can confirm failure. Are you sure AT+GCAP is the
 right command? I'd be interested in such a fax-capable device as well...

+GCAP is a standard command to query capabilities; see for example:

http://www.itu.int/rec/T-REC-V.250-200106-I!Sup1/en

And my Sony Ericsson GM29 is capable of Fax Class 1 and 2. With Google
you can find a description of it under "GM29 Integrator's Manual". Such
descriptions of UMTS sticks are difficult to find, which is why I asked.

Your sticks answer +FCLASS: (0-1) to AT+FCLASS=?. They seem to
support Class 1 fax, even if they do not answer +GCAP. Perhaps
one has to try sending a fax with them. But there is another problem: the
mobile telephone provider must support GSM fax, and the sellers of
prepaid SIM cards do not know what this means. I am still trying to
find the right provider.
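As an aside, the list response to AT+FCLASS=? has a standard shape: a parenthesized list of values and ranges, per the ITU-T AT-command conventions. A minimal Python sketch of decoding it (a simplified illustration only: it handles integer classes and skips anything else, such as the fractional class "2.0" some modems report):

```python
import re

def fclass_classes(response):
    """Decode an AT+FCLASS=? reply such as '+FCLASS: (0-1)' or
    '+FCLASS: (0,1,2)' into the set of supported service classes.
    Toy parser: only integer values and integer ranges are handled."""
    m = re.search(r'\+FCLASS:\s*\(([^)]*)\)', response)
    if not m:
        return set()
    classes = set()
    for part in m.group(1).split(','):
        part = part.strip()
        if re.fullmatch(r'\d+-\d+', part):      # a range like 0-1
            lo, hi = map(int, part.split('-'))
            classes.update(range(lo, hi + 1))
        elif re.fullmatch(r'\d+', part):        # a single value
            classes.add(int(part))
    return classes

# A stick answering '+FCLASS: (0-1)' supports data (0) and Class 1 fax (1).
print(fclass_classes('+FCLASS: (0-1)'))   # {0, 1}
```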

In any case, the question has little to do with OpenBSD, but a short
positive answer from someone who has had success will perhaps not disturb
too much.

Rodrigo.



Question on Serial Ports

2015-04-04 Thread hruodr
I have an elementary question about serial ports. Perhaps someone here
has the answer, since he may know how the driver works.

I observed that two modems attached to the DSUB9 (RS232 serial) port behave
differently.

I can speak with the Elsa Microlink 33.6TQV at almost any
speed (using the cu command), including similar speeds where one is
not an exact multiple of the other.

With the Sony Ericsson GM29, only at a speed of 9600 bps. If I give it
the two commands *E2FAX and &F, I can do it at 19200 bps, and
only at 19200 bps.

Why? Does the first modem adapt its communication speed to the computer?
And if this is normal behaviour, why is the second modem so strict about
the speed?

What are the consequences of this behaviour? I tried to use the efax
program with the GM29, and always got the following error:


efax: 50:29 opened /dev/cua01
efax: 50:35 sync: dropping DTR
efax: 50:35 Error: tcsetattr on fd=3 failed: Invalid argument
efax: 50:40 sync: sending escapes
efax: 50:46 Error: sync: modem not responding
[...]
efax: 50:46 done, returning 4 (no response from modem)


I googled, some people had the same problem, but I found no solution.

After changing the speed, I no longer had this problem (but
so far I have not managed to send a fax :). It surprises me that the efax
program does not allow changing its speed, and that its man page does not
mention that the modem must work at exactly 19200 bps (or whatever else).

Did someone manage to send a fax with a UMTS stick? What vendor and device?

And if someone has a UMTS stick that answers +FCLASS to AT+GCAP,
I would be thankful for a hint, perhaps by email, about its vendor,
device, and its answer to AT+FCLASS=?.

Regards
Rodrigo



Re: low power device

2014-09-18 Thread hruodr
On Sun, 14 Sep 2014, Stuart Henderson wrote:

 Personally I'd go for a modern cheap PC based on a soldered-on Atom,
 Celeron or AMD Fusion type system.

I use an old Celeron 800 MHz, and am thinking of downgrading to 500 MHz,
to a Geode LX 800. OpenBSD seems to run OK there; I tried it
diskless. I bought an Asus E35M1-M motherboard with an AMD E-350,
but don't need that much. I think the advantage is not only low power
consumption, but mainly avoiding heat and noise.

But another question. Someone here recommended to me, long ago, a
low-power server; I even saw the web page of the product, but
I can't find the mail anymore. Does someone remember it?

Regards
Rodrigo.



Re: low power device

2014-09-18 Thread hruodr
On Thu, 18 Sep 2014, Christer Solskogen wrote:

 This one: http://www.pcengines.ch/apu.htm ?

No. It was a ready-made 19-inch server. But this is also an interesting
piece of hardware.

The power consumption, noise, and reliability also depend on the power supply.

My Asus E35M1-M consumes much more than what the CPU needs according
to AMD. And it develops heat, so it would be better to install a
fan.

Rodrigo.



Re: FYA: http://heartbleed.com/

2014-04-12 Thread hruodr
patrick keshishian pkesh...@gmail.com wrote:

[...]
 | ... the NSA has more than 1,000 experts
 | devoted to ferreting out such flaws using
 | sophisticated analysis techniques, many of them
 | classified. The agency found Heartbleed shortly
 | after its introduction, according to one of the
 | people familiar with the matter, and it became a
 | basic part of the agency's toolkit for stealing
 | account passwords and other common tasks.

 found! OK. so it wasn't implanted in there... what
 a relief!

[...]
 source: 
 http://www.businessweek.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers


I just want to put your quotation in its context. Just before it:

| While many Internet companies rely on the free code, its integrity depends 
| on a small number of underfunded researchers who devote their energies to 
| the projects.
|
| In contrast, ... [your quotation]

For Businessweek it is just a matter of money. :)

Rodrigo.



Re: FYA: http://heartbleed.com/

2014-04-11 Thread hruodr
John Moser john.r.mo...@gmail.com wrote:

 On Thu, Apr 10, 2014 at 4:18 PM, John Moser john.r.mo...@gmail.com wrote:

  Also why has nobody corrected me on this yet?  I've read El Reg's
  analysis, and they missed a critical detail that I didn't see until I read
  the code in context:  IT ALLOCATES TOO SMALL OF A WRITE BUFFER, TOO.  Okay,
  it would send out the payload on exploit.  It would also kill a heap canary
  that glibc should catch on free().
 
 

 Christ maybe you're right.  I'm looking at this again and I'm wrong:  it
 DOES allocate big enough of a payload.

 Obviously I am not a programmer.  There actually is no memory allocator bug
 in this code; it uses the allocator entirely correctly.

I have never before seen such technical news in an ordinary newspaper:

http://www.faz.net/aktuell/feuilleton/openssl-sicherheitsluecke-jetzt-muss-jeder-jedes-passwort-aendern-12889676.html

Rodrigo.



Re: Sorry OpenBSD people, been a bit busy

2013-10-08 Thread hruodr
On Mon, 7 Oct 2013, James Griffin wrote:

 [...] But when people don't listen, or continuously repeat themselves 
 unnecessarily, the discussion digresses and becomes irrelevant and/or 
 annoying for those of us subscribed to the list. That's the point I 
 tried to make. Anyway, this is digressing too. 

No. That was obviously not the reason. The offenses did not come from
the people who complained about the amount of email. And I was not in the
discussion alone: mainly I answered; if I repeated myself, it was because
people did not understand me. Perhaps the topic was a little off-topic, but
in my opinion not irrelevant; it deserves to be discussed, and an objective
discussion here was impossible. On the other hand, I understand that such
discussions can be disturbing on a mailing list. This is one of the reasons
why I was in favor of the old OpenBSD Usenet groups.

In my opinion, the reason for the insults and defamation is something very
primitive. For many people the operating system they use is part of their
identity (as for others their car or their mobile telephone). Without
their operating system they feel they are no one. Belonging to a community,
they feel part of an elite. Insulting and defaming people outside
makes these feelings stronger; people insulting and defaming one individual
feel closer together, and they need this collectively from time to time.
Not taking part is a question of conscience, and also of education;
from the ones who do it you cannot expect much better behaviour.
BTW, the insults came with the demand that I leave the list, not that
I stop posting about the topic: I was the enemy outside the community.

Rodrigo.



Re: Sorry OpenBSD people, been a bit busy

2013-10-07 Thread hruodr
dera...@cvs.openbsd.org wrote:

 Layers of hurt being thrown around.  Why?

This is a legitimate question. 

Since I have been here, I think I have received two emails from you: I
remember you as a polite person. But I have read a little of what
people write about you around the net.

Some weeks ago a question of mine here unfortunately produced too much
traffic. I was critical of the optimism in Tridgell's doctoral
thesis about the rsync algorithm, and of new programming techniques
that seem to allow the use of hash values as a unique key. If the
question was a little off-topic at first, it was no longer so once some
people here felt attacked by my criticism. I was continuously exposed
to insult and defamation. I continuously tried to keep the discussion
objective, without much success. I asked myself the very same question
you ask, and I ask it still. And of course I tried to find an answer.
And the answer of Robert Garrett throws up new questions: because people
are idiots? Then we are all idiots and cannot complain. Or can only people
who do useful things complain, while the others do not deserve respect?

Rodrigo.



Re: mailx : mime handling?

2013-09-27 Thread hruodr
Predrag Punosevac punoseva...@gmail.com wrote:

 On 2013-09-26 Thu 10:15 AM |, Roberto E. Vargas Caballero wrote:
  I use mutt basically because it has threading support, and I cannot
  live without it.
  
 NetBSD version of mailx does support threading as well

 http://netbsd.gw.com/cgi-bin/man-cgi?mailx++NetBSD-current

 and it does have the right license :)

 Cheers,
 Predrag

Heirloom mailx also supports threads and has a BSD license. Whoever wants
such a mailx can install the port. If you turn OpenBSD's mailx into a mailx
similar to Heirloom mailx, then there will be no small mail client
anymore.

Perhaps the only interesting improvement to mailx would be to make
it possible to pass the headers through an external filter: for
searching, ordering, etc. As far as I know this is not possible.
Heirloom mailx also has an interesting -H option.

For MIME support, perhaps everyone can write his own script, for
example in Tcl using tcllib, which has a library for manipulation of
MIME body parts.

BTW, I just discovered that alpine also supports threading. I have never
used it.

Rodrigo.



Re: mailx : mime handling?

2013-09-25 Thread hruodr
mayur...@devio.us (Mayuresh Kathe) wrote:

 hi, how do mailx users currently handle mime?

I use nail. I think the metamail OpenBSD port was broken; I tried it
long ago and do not remember.

Rodrigo.



Re: mailx : mime handling?

2013-09-25 Thread hruodr
Eric Johnson eri...@mathlab.gruver.net wrote:

 On Wed, 25 Sep 2013, Dmitrij D. Czarkoff wrote:

  Mayuresh Kathe said:
   hi, how do mailx users currently handle mime?
  
  They don't. They install mutt, s-nail or whatever.

 pine/alpine

Alpine is what I normally use. As an IMAP client it is very nice, also for
reading and adding attachments. But the program is huge. And
it wants an internet connection even when it should not need one: when
you are editing a mail and the connection is interrupted, it
hangs, so that writing is blocked.

nail (Heirloom mailx) also has its defects. When you write a message
and the authentication fails, it can happen that you lose the mail
you have written: it does not land in dead.letter. BTW, it would be good
if the configuration file were called nailrc and not mailrc.

The best seems to be mutt, but it has a strange configuration
file.

It would be nice if metamail worked again. Perhaps something
like an editor to be called with ~e (when EDITOR is set to it) in mail,
that allows adding attachments and calling another editor for writing
the text. For reading IMAP it would be nice to be able
to mount the remote folder as a local file (is there no FUSE work on this?).
Another question is how to send via alternative SMTP servers.

Rodrigo.



Re: mailx : mime handling?

2013-09-25 Thread hruodr
Dmitrij D. Czarkoff czark...@gmail.com wrote:

 And you don't need threaded view for IMAP?

I don't need it, because I never had it and never used it. Perhaps a
good thing to have.

  For reading IMAP it would be nice to have the possibility to mount the
  remote folder as a local file (no work in FUSE?).

 You have mail/isync and mail/offlineimap for that. I use the former, and it
 does the trick.

I used fetchmail (recommended here by Roberto Vargas) and I have had
very good experience with it. Would isync or offlineimap do a
better job?

The idea is not to synchronize remote and local mail folders, but to
read the headers and download only the messages that one wants to read.
That is also what IMAP is for. Perhaps this problem will some day be
solved by the Plan 9 from User Space port.

  Another question is how to send with alternative smtp servers.

 OpenSMTPd sends my mail via Google's SMTP for me (though you may obsorve in
 the headers of this message that it doesn't try to hide my IP and hostname).
 Sendmail also supports this.

I did configure sendmail to do it; it was not trivial. But I cannot
decide, at the moment of sending a mail, which SMTP server I want to use.
Changing the configuration of sendmail just to send one mail is
too much.

In Heirloom mailx (nail) you can define different accounts in the
configuration file; they contain a key, the IMAP and SMTP servers to use,
as well as data for authentication. When calling nail, you can
give it, with the -A option, the key of the account to use. If you use
plain mail, it will read the same configuration file and complain about
these data: that is why I said that the configuration file should have a
different name from mailx's.

 In the end I use mutt in always disconnected mode, and it feels quite good.
 (Or would feel if Google's IMAP wasn't so brain-damaged and unconformant.)

I suspect mutt is the better mail program, although more complicated and
less intuitive to use and configure. I gave up the search for the perfect
mail program.

Rodrigo.



Re: mailx : mime handling?

2013-09-25 Thread hruodr
Predrag Punosevac punoseva...@gmail.com wrote:

 That is not true! NetBSD version of mailx does support MIME. Porting
[...]

 I looked the NetBSD code and most likely it would talk one afternoon for
 an experienced OpenBSD hacker to compile that thing on OpenBSD.

But what speaks against my solution? mailx allows you to pass mails
through filters, and allows you to call external editors with ~e and ~v.
That should be enough to read and write mails with MIME, to use
PGP, etc., if you have the appropriate external programs.

On the other hand, if you begin adding MIME, then you should continue by
adding PGP, etc., and then we have another inflated mail program.

I think people who do not use mailx avoid it because they prefer other
programs. Inflating mailx will not bring them to use it. And the
external programs are also useful in other contexts, for everybody.

Rodrigo.



Re: cvsync, rsync

2013-09-24 Thread hruodr
On Fri, 20 Sep 2013, Johan Mellberg wrote:

 Your error in thinking is that if we have an extremely large set of strings,
 a very large set is mapped to each hash value. Therefore you reason that a
 collision is very likely. But if you are comparing two specific strings, the
 likelihood of them hashing to the same value is EXTREMELY small (other
 people replying have provided the numbers).

No, I do not reason like that. Collisions are, *by experience*, very rare.
The estimates given by people, including Tridgell, are optimistic, and
no upper bounds on the probability are given.

 Thus, if hash(A)=hash(B), A=B quite a bit more often than not.

No. You can prove (analytically, hence not by empirical observation)
that the probability of A=B under the condition hash(A)=hash(B)
is close to zero.

Let us suppose the hash function h maps X to Y, that Y has n elements,
and that X has m elements; that X_i is the subset of the elements of X
mapped to the element i of Y, and that the cardinality of X_i is m_i.
X is the disjoint union of the n sets X_i, and m is the sum of the n
numbers m_i. We suppose that the elements of X are events and the
probability is uniformly distributed; hence each element has probability 1/m.

When speaking about collisions, or about the probability of A=B, we are
dealing with two elements of X, with pairs. We must concentrate on the
product space X*X with m^2 elements, each with probability 1/m^2. For
calculating probabilities we must count the numbers of elements
of its subsets. For this purpose, we arrange the elements of X in a line,
the elements of each X_i contiguously, and we arrange the elements of
X*X in a square whose sides are that arrangement of the elements of X.
On the diagonal of this square are the elements of the form (a,a).
The elements (a,b) with h(a)=h(b)=i lie in smaller squares with m_i^2
elements whose diagonals coincide with the diagonal of the big square.

The event A=B is a subset of the event h(A)=h(B) (collision).

The probability of A=B is m*(1/m^2) = 1/m.

The probability of h(A)=h(B) is (\sum m_i^2)*(1/m^2).

The probability of A=B under the condition h(A)=h(B) is the division
of the above probabilities: m / (\sum m_i^2).

Let p=m/n. If h maps X to Y uniformly, then all m_i will be p. We write
m_i as p + (m_i - p):

m_i = p + (m_i - p)

m_i^2 = p^2 + 2*p*(m_i - p) + (m_i - p)^2

\sum m_i^2 = n*p^2 + 2*p*(\sum m_i - n*p) + \sum (m_i - p)^2

   = m^2/n + \sum (m_i - p)^2   (the middle term vanishes,
   since \sum m_i = m = n*p)

   >= m^2/n   (equality holds only when all m_i = p,
   i.e. when h distributes X uniformly over Y.)
 
Conclusions:

(1) The probability of a collision, of h(A)=h(B), is hence *greater than
or equal to* 1/n, and equal to it only when h distributes X uniformly
over Y.

(2) The probability of A=B under the condition h(A)=h(B) is *less than
or equal to* n/m, and equal to it only when h distributes X uniformly
over Y. Hence the probability is very small when m is much bigger than n.

These are my calculations; you may correct me, but the probability of
getting insults and defamation is bigger.
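The two conclusions can be checked exactly on a toy example. A small Python sketch (the hash function below is an arbitrary toy, chosen only so that the m_i can be counted exactly; it is not a real hash):

```python
import itertools
from collections import Counter

# Toy setting: X = all 3-symbol strings over a 16-symbol alphabet,
# so m = 16**3 = 4096; Y has n = 64 hash values.
n = 64
X = list(itertools.product(range(16), repeat=3))
m = len(X)

def h(s):
    # Arbitrary toy hash: just enough to get a concrete
    # partition of X into the sets X_i.
    v = 0
    for c in s:
        v = (v * 31 + c) % n
    return v

counts = Counter(h(x) for x in X)           # counts[i] = m_i
sum_sq = sum(c * c for c in counts.values())

p_equal = 1 / m                 # P(A = B)             = 1/m
p_coll = sum_sq / (m * m)       # P(h(A) = h(B))       = (sum m_i^2)/m^2
p_cond = p_equal / p_coll       # P(A = B | collision) = m / (sum m_i^2)

assert p_coll >= 1 / n                  # conclusion (1)
assert p_cond <= n / m + 1e-12          # conclusion (2): at most n/m
print(p_cond, n / m)                    # p_cond is at most n/m = 0.015625
```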

--

On Tue, 24 Sep 2013, Janne Johansson wrote:

 I didn't seem to get an answer here.

 How would I know that the 4G wav-file I sent from one box to another is 100%
 identical?

 If we assume (and I think that is what you seem to claim) that we can't
 blindly trust hashing, but we will assume that no cosmic rays nor hard-drive
 bit failures can affect the contents, [...]

Too many stories about cosmic rays and the age of the universe in relation
to the probability of collisions of functions that no one knows exactly.
Let the software be software, and the hardware be hardware.

Rodrigo.



Re: cvsync, rsync

2013-09-20 Thread hruodr
Andreas Gunnarsson o-m...@zzlevo.net wrote:

 On Thu, Sep 19, 2013 at 01:46:20PM +, hru...@gmail.com wrote:
  Raimo, if people believe that hash(A)=hash(B) implies A=B, so strong
  believe, that they use it in their programs,

 It's a matter of engineering. Usually that is good enough.

 If you don't think it's good enough then you should probably also worry
 about how often strcmp(a, b) returns 0 when strings a and b don't match.

No, that is not good enough; that is not good, that is very bad. The
probability of A=B under the condition hash(A)=hash(B) is close
to zero, not to one as Marc Espie & Co. are saying here. Please read
my answer to Henning Brauer from Wednesday to see why rsync gives
correct answers in practice.

---

On Thu, 19 Sep 2013, Matthew Weigel wrote:

 That seems like a useful exercise for you to do.  Like Marc said very early
 on, rsync is based in part on Andrew Tridgell's PhD Thesis, Efficient
 Algorithms for Sorting and Synchronization.  You can find it and read it at
 http://www.samba.org/~tridge/phd_thesis.pdf.

Thanks. He checks the transmitted file at the end with a checksum, and if
it does not pass the check, the algorithm is repeated with a variation of
the checksum used to find equal parts. Although Tridgell is very optimistic
when calculating probabilities, he found this last check necessary.
With very big, very different files, rsync will perhaps not be as
efficient as promised, due to this backtracking.

 If you are still worried about it, you are trolling either misc@ or 
 yourself or both.

If I express criticism, then I am XYZ, where XYZ ranges (so far) from
troll through idiot to motherf**r. Is this an honest way of arguing?
And what is a person who argues in this way?

Rodrigo.



Re: cvsync, rsync

2013-09-20 Thread hruodr
Janne Johansson icepic...@gmail.com wrote:

 In practical terms, if I rsync a file from X to Y, and rsync says it is
 complete, how to verify the 4G files actually are equal?
 Given that rsync only knows that hash(A) was equal to hash(B) at the end,
 what do you propose to use for verification?

In practical terms, it is indeed very improbable that a file that
passed the last check is not the right one.

I answered Andreas Gunnarsson first, not Matthew Weigel, who brought
something new to the discussion. Can you guess why? Because, although
my original question was about rsync and cvsync, it mutated into a
discussion about programming practice. And yes, it is relevant to
OpenBSD, because programming an operating system is a very delicate
thing, and developers of OpenBSD take a strange standpoint here that they
defend without sound argumentation, including asking the one who
expresses the criticism to go away.

If you read rsync(1) and other documentation, you do not find any mention 
of the last check and backtracking, but you read:

"Rsync is widely used for backups and mirroring and as an improved copy
command for everyday use."

This is like the first answer I got, from Kenneth R Westerback:

People use cvsync or rsync to create/maintain a local copy or copies
of the repository. I use cvsync to sync one repository with an
external source and then run cvsyncd on that box if I want repositories
on other local machines. 

Or also like:

"Coca Cola is healthy; most people in the world drink Coca Cola."

Rodrigo.



Re: cvsync, rsync

2013-09-19 Thread hruodr
I want to give a hint for those still working on the problem of
estimating the probability of A=B under the condition hash(A)=hash(B).

Just suppose that hash is any function from a set X to Y; first suppose
that X is finite (but very big), and that the probability of picking any
element is the same (uniform). Y is also finite (but not so big). A and B
are supposed to be elements of X. But well, so much definition was
not necessary: we know where we got the problem from, from the
algorithm of rsync. And this problem is really trivial.

A little more complicated is to see the probability that rsync finds
different A and B with hash(A)=hash(B), and hence with very high
probability gives a false result. rsync divides the file on the client
into m_1 (disjoint) blocks and the file on the server into m_2 blocks,
and tries to find blocks on the server with the same hash as
blocks on the client. As said before, this depends strongly on
the way the hash function maps X to Y. You may first suppose that it does
so uniformly and calculate the probability that rsync does not find
a collision.
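Under the idealizing assumptions of a uniform hash and independent block comparisons, this can be sketched with a back-of-the-envelope calculation (not a model of rsync's actual rolling-checksum scheme; the 32-bit value of n below and the 500-byte block size are assumptions for illustration):

```python
import math

def p_any_collision(m1, m2, n):
    """Probability that at least one of the m1*m2 cross-block
    comparisons collides, assuming a uniform n-valued hash and
    independent comparisons (an idealization, not rsync itself)."""
    k = m1 * m2
    return -math.expm1(k * math.log1p(-1.0 / n))

n = 2 ** 32                      # a 32-bit checksum space, for illustration
blocks = lambda size: size // 500

# Two 1 MB files: a spurious block match is unlikely.
print(p_any_collision(blocks(10 ** 6), blocks(10 ** 6), n))
# Two 4 TB files: some colliding block pair is almost certain,
# which is why a final whole-file check still matters.
print(p_any_collision(blocks(4 * 10 ** 12), blocks(4 * 10 ** 12), n))
```

The point of the sketch: the collision probability grows with the number of block pairs compared, so huge files make a weak-checksum collision practically certain even though any single comparison is extremely unlikely to collide.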

Rodrigo.


Jan Stary h...@stare.cz wrote:

 On Sep 18 21:59:08, hru...@gmail.com wrote:
  What was the probability?

 Motherfucking troll, you have been given the answer
 like five times already. now get the fuck OFF.

  Rodrigo.
  
  
  Eric Furman ericfur...@fastmail.net wrote:
  
   Troll, the question has been answered.
   You are an entertaining troll, though.
   It is highly amusing seeing someone make themselves look so silly.
  
   On Tue, Sep 17, 2013, at 11:28 AM, hru...@gmail.com wrote:
Marc Espie es...@nerim.net wrote:

  You have strings A and B, and you know only that hash(A)=hash(B): 
  what
  is the probability that A=B? 2^-160?  

 No, that's never the problem.

 You have a *given*  string A, and another string B.

O.K. You have string A in the client with hash(A)=n. You find string
B in the server also with hash(B)=n. What is the probability that
A=B?

Rodrigo.



Re: cvsync, rsync

2013-09-19 Thread hruodr
Raimo Niskanen raimo+open...@erix.ericsson.se wrote:

 Rodrigo,

 was there anything wrong with my answer below (and others equal),
 apart from it not being the one you wanted, since you keep repeating
 the same question over and over again?

 Do you have a better answer?  Please share it for us to check.

Raimo, if people believe that hash(A)=hash(B) implies A=B, believe it so
strongly that they use it in their programs and insult those
who contradict them, then my answer will not convince them, because I
say that hash(A)=hash(B) is very far from implying A=B. If you arrive at
the answer on your own, it will be different: you will believe it. Just
imagine the hash function as a function from X with cardinality m to
a Y with cardinality n, where m >> n, m much bigger than n: what do the
sets of elements of X that are mapped to a single y in Y look like? Imagine
the partition of X into these sets X_1, ..., X_n with cardinalities
m_1, ..., m_n and try to compute conditional probabilities. But surely
there is a more direct way. Just play with it.

If you have one, two, three, a handful of 4TB files and calculate
their hashes, the probability that two hashes coincide is surely remote.

If rsync takes two 4 TB files, one on the client, one on the server,
divides them into a lot of 500-byte blocks (the one on the server into
a lot more blocks than the one on the client), calculates hashes
here and there, and compares hashes here with hashes there, then we
have something completely different.

From time to time I think I should follow Kenneth Westerback's recommendation
and go to a math-for-idiots list, for example the Usenet group sci.math,
and then post a link to this thread on Gmane: they will surely admire Marc
Espie's wisdom and his efforts teaching idiots like me.

Rodrigo.



Re: cvsync, rsync

2013-09-18 Thread hruodr
Henning Brauer lists-open...@bsws.de wrote:

 * hru...@gmail.com hru...@gmail.com [2013-09-16 21:33]:
  It confirms that it supposes: A=B if hash(A)=hash(B).

 which is fine even with a relatively poor hash like md5 when the size
 is also checked.

A=B because parts of the file on the server coincide with parts of the
file on the client, because both files are related, and not because
hash(A)=hash(B); the latter is again not very probable if the files are
not very big. That is just experience, and not to be taken as definitive
truth.

And what is the probability of A=B, if you know that hash(A)=hash(B)?

Rodrigo.



Re: cvsync, rsync

2013-09-18 Thread hruodr
What was the probability?

Rodrigo.


Eric Furman ericfur...@fastmail.net wrote:

 Troll, the question has been answered.
 You are an entertaining troll, though.
 It is highly amusing seeing someone make themselves look so silly.

 On Tue, Sep 17, 2013, at 11:28 AM, hru...@gmail.com wrote:
  Marc Espie es...@nerim.net wrote:
  
You have strings A and B, and you know only that hash(A)=hash(B): what
is the probability that A=B? 2^-160?  
  
   No, that's never the problem.
  
   You have a *given*  string A, and another string B.
  
  O.K. You have string A in the client with hash(A)=n. You find string
  B in the server also with hash(B)=n. What is the probability that
  A=B?
  
  Rodrigo.



Re: cvsync, rsync

2013-09-18 Thread hruodr
Alexander Hall alexan...@beard.se wrote:

 Marc already anwered all your questions. Let me quote it.

  Fuck off

The most brilliant answers of the experts:



Date: Tue, 17 Sep 2013 19:18:03 +0200
From: Marc Espie es...@nerim.net
To: hru...@gmail.com
Cc: t...@servasoftware.com, misc@openbsd.org
Subject: Re: cvsync, rsync

On Tue, Sep 17, 2013 at 04:16:47PM +, hru...@gmail.com wrote:
 Intentionally I left the problem generic. Is the probability near to 1?

YES it is near to 1.

Your way to phrase mathematical problems is BOGUS. You can't do probability
without formulating a set of complete hypothesis. 

Your way to reason about this is BOGUS.

We've been telling you THE EXACT SAME THING about those hash properties
for MESSAGES by now.

It seems it doesn't get through.

This is either indicative of a very poor grasp of english, or of
mathematical concepts, or both.

So I'm going to finally stop answering your stupid stupid questions.

I've got a feeling you don't trust those programs because you really
don't understand numbers in an intuitive way.

Like I said, FUD.   Just because you feel insecure about numbers doesn't
mean you have to try to communicate your insecurity about it to other
people.

--

Date: Tue, 17 Sep 2013 14:36:05 -0400
From: Kenneth R Westerback kwesterb...@rogers.com
To: hru...@gmail.com
Cc: kwesterb...@rogers.com, misc@openbsd.org, h...@stare.cz
Subject: Re: cvsync, rsync

On Tue, Sep 17, 2013 at 06:18:48PM +, hru...@gmail.com wrote:
 Kenneth R Westerback kwesterb...@rogers.com wrote:
 
  And your endless meanderings around the pointless questions you pose
  are not welcome on the list. They certainly have NOTHING to do with
  OpenBSD.
 
 What you say in the last sentence is exactly what I hope. One
 of my questions was:
 
 This is a conjecture. Do you have a proof that the probability is so
 small? For me it is difficult to accept it. Is this conjecture used
 elsewhere?
 
 Rodrigo.

Wow. Missing the point again. Please go away. Bother some non-OpenBSD
list. As with others I am torn between recommending an english-as-a-second
language list, or a math-for-idiots list. OpenBSD lists provide neither
service.

In any case. PLEASE GO AWAY.

 Ken

---

Date: Tue, 17 Sep 2013 20:42:03 +0200
From: Jan Stary h...@stare.cz
To: hru...@gmail.com
Subject: Re: cvsync, rsync

On Sep 17 18:18:48, hru...@gmail.com wrote:
 Kenneth R Westerback kwesterb...@rogers.com wrote:
 
  And your endless meanderings around the pointless questions you pose
  are not welcome on the list. They certainly have NOTHING to do with
  OpenBSD.
 
 What you say in the last sentence is exactly what I hope.

Why are you bothering people on an OpenBSD maling list then?

 One of my questions was:
 This is a conjecture.
 Do you have a proof that the probability is so small?

No, it's not a conjecture.
It's a property of the hash function.

 For me it is difficult to accept it.

That's because you don't know the first thing about it,
and are immune to reason.

Now please, with all due respect, fuck off.



Date: Tue, 17 Sep 2013 14:37:10 -0400
From: Eric Furman ericfur...@fastmail.net
To: hru...@gmail.com
Subject: Re: cvsync, rsync

Begone troll

On Tue, Sep 17, 2013, at 02:18 PM, hru...@gmail.com wrote:
 Kenneth R Westerback kwesterb...@rogers.com wrote:
 
  And your endless meanderings around the pointless questions you pose
  are not welcome on the list. They certainly have NOTHING to do with
  OpenBSD.
 
 What you say in the last sentence is exactly what I hope. One
 of my questions was:
 
 This is a conjecture. Do you have a proof that the probability is so
 small? For me it is difficult to accept it. Is this conjecture used
 elsewhere?
 
 Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Alexander Hall alexan...@beard.se wrote:

 Leaving the internals of rsync aside (of which I assume much but *know* 
 little), if I consider two 4TB blobs to be equal just because they have 
 the same SHA1 hash, I can easily see myself ending up in one of these 
 conditions (but not both):

This was git, not rsync.

Perhaps the hash is only used as an index into a database, and if two
blobs have an equal hash, it is noted. One would have to see the details.

As you see, many people have no problem with: A=B if hash(A)=hash(B).
Perhaps it will become a general programming technique.

Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Marc Espie es...@nerim.net wrote:

 On Mon, Sep 16, 2013 at 08:16:50PM +, hru...@gmail.com wrote:
  Marc Espie es...@nerim.net wrote:
  
From a checksum I expect two things: (1) the pre-images of elements
in the range have all similar sizes, 
   Why ?  This makes no sense, and is in contradiction with (2).
  
  I must correct my previous mail. The domain is numerable; to speak
  about cardinality as size does not help, but perhaps you get the
  idea. The probability of collisions is strongly dependent on how the
  hash distributes the elements of the domain over the range.

 That's still a crypto hash.  Apart from specially crafted attacks,
 input size is irrelevant to the chance of collision.

It is not the input size, it is how the input is mapped.

In the case of rsync the hash is applied to strings of a fixed length.
In this case the input is finite and we can argue with cardinality.
Just imagine the sets of finite strings mapped to a single element in the
range. If all these sets have the same number of elements and the range
has n elements, then the probability of collision is n*(1/n)^2 = 1/n;
otherwise it is greater (simple school algebra to calculate it). The
extreme case is that all strings are mapped to the same element.
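The school algebra can be spelled out (my notation, not from any rsync paper): for two independent uniform inputs the collision probability is the sum of squared preimage fractions, sum((m_i/m)^2); it equals 1/n exactly when all preimages have the same size, and grows as the mapping gets more skewed.

```python
# Collision probability for two independent uniform inputs, given the
# preimage sizes m_1..m_n of a hash with an n-element range.
def collision_prob(preimage_sizes):
    m = sum(preimage_sizes)
    return sum(k * k for k in preimage_sizes) / (m * m)

# n = 4 range elements, m = 1000 input strings in each example:
print(collision_prob([250, 250, 250, 250]))  # 0.25 = 1/n, uniform mapping
print(collision_prob([700, 100, 100, 100]))  # 0.52, skewed mapping
print(collision_prob([1000, 0, 0, 0]))       # 1.0, all strings map to one element
```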

Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Raimo Niskanen raimo+open...@erix.ericsson.se wrote:

 When you have two different real world contents the collision probability
 is just that; 2^-160 for SHA-1. It is when you deliberately craft a
 second content to match a known hash value there may be weaknesses
 in cryptographic hash functions, but this is not what rsync nor Git
 does, as Marc Espie pointed out in this thread.

You have strings A and B, and you know only that hash(A)=hash(B): what
is the probability that A=B? 2^-160?  

Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Marc Espie es...@nerim.net wrote:

 On Tue, Sep 17, 2013 at 07:23:07AM +, hru...@gmail.com wrote:
  In the case of rsync the hash is applied to strings of a fixed length.
  In this case the input is finite and we can argue with cardinality.
  Just imagine the sets of finite strings mapped to a single element in the
  range. If all these sets have the same number of elements and the range
  has n elements, then the probability of collision is n*(1/n)^2 = 1/n;
  otherwise it is greater (simple school algebra to calculate it). The
  extreme case is that all strings are mapped to the same element.

 It doesn't really matter. You can go straight to the limit.  If you choose
 a given collection of data, the chance of any other collection of data
 mapping to the exact same hash is 1/2^128, irregardless of its size.

I put to you the same question that I put to Raimo Niskanen in my previous
mail.

I think you misunderstand me: I am not speaking about the size of the
input strings, but about the sizes of the sets of input strings.

Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
I wrote to the list. If you have something to say on the topic,
then please say it on the list. Your impolite mails are not welcome in
my mailbox.

Rodrigo.

Jan Stary h...@stare.cz wrote:

 On Sep 17 13:21:04, hru...@gmail.com wrote:
  Raimo Niskanen raimo+open...@erix.ericsson.se wrote:
  
   When you have two different real world contents the collision probability
   is just that; 2^-160 for SHA-1. It is when you deliberately craft a
   second content to match a known hash value there may be weaknesses
   in cryptographic hash functions, but this is not what rsync nor Git
   does, as Marc Espie pointed out in this thread.
  
  You have strings A and B, and you know only that hash(A)=hash(B): what
  is the probability that A=B? 2^-160?  

 No. The probability that A!= B is 2^-160.

 However, this is irrelevant to the problem you are describing.
 Please don't enbarass yourself any further and take this silly
 issue somewhere else; preferably to your English teacher.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Intentionally I left the problem generic. Is the probability near to 1?

You can suppose that A is 500 bytes long, that the server knows the hash
value of A (but not A itself), that it searches only for strings of this
length with the same hash value, that it finds such a string
B, and that the hash function is the concatenation of the rolling hash of
rsync with md4.

You can also suppose that A and B are 4 TB long and the hash sha1.
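For illustration, here is a hedged sketch of an rsync-style rolling (weak) checksum; the constants and weighting are illustrative, not taken from rsync's source, but the defining property holds: sliding the window one byte updates the checksum in O(1) instead of rescanning the whole block.

```python
# rsync-style weak rolling checksum (illustrative): for a window of L bytes,
#   a = sum of bytes (mod 2^16)
#   b = position-weighted sum (mod 2^16)
# Rolling one byte forward needs only the byte leaving and the byte entering.
M = 1 << 16

def weak_checksum(block):
    L = len(block)
    a = sum(block) % M
    b = sum((L - i) * block[i] for i in range(L)) % M
    return a, b

def roll(a, b, x_old, x_new, L):
    a = (a - x_old + x_new) % M
    b = (b - L * x_old + a) % M
    return a, b

data = bytes(range(50)) * 3
L = 16
a, b = weak_checksum(data[0:L])
for k in range(1, 20):
    a, b = roll(a, b, data[k - 1], data[k - 1 + L], L)
    assert (a, b) == weak_checksum(data[k:k + L])
print("rolled checksums match direct computation")
```

Real rsync pairs such a weak rolling checksum with a strong hash (md4 in the paper) on candidate matches, which is exactly the combination supposed above.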

Rodrigo.


Tony Abernethy t...@servasoftware.com wrote:

 INSUFFICIENT DATA

 -Original Message-
 From: owner-m...@openbsd.org [mailto:owner-m...@openbsd.org] On Behalf Of 
 hru...@gmail.com
 Sent: Tuesday, September 17, 2013 10:28 AM
 To: misc@openbsd.org
 Subject: Re: cvsync, rsync

 Marc Espie es...@nerim.net wrote:

   You have strings A and B, and you know only that hash(A)=hash(B): what
   is the probability that A=B? 2^-160?  
 
  No, that's never the problem.
 
  You have a *given*  string A, and another string B.

 O.K. You have string A in the client with hash(A)=n. You find string
 B in the server also with hash(B)=n. What is the probability that
 A=B?

 Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Marc Espie es...@nerim.net wrote:

  You have strings A and B, and you know only that hash(A)=hash(B): what
  is the probability that A=B? 2^-160?  

 No, that's never the problem.

 You have a *given*  string A, and another string B.

O.K. You have string A in the client with hash(A)=n. You find string
B in the server also with hash(B)=n. What is the probability that
A=B?

Rodrigo.



Re: cvsync, rsync

2013-09-17 Thread hruodr
Kenneth R Westerback kwesterb...@rogers.com wrote:

 And your endless meanderings around the pointless questions you pose
 are not welcome on the list. They certainly have NOTHING to do with
 OpenBSD.

What you say in the last sentence is exactly what I hope. One
of my questions was:

This is a conjecture. Do you have a proof that the probability is so
small? For me it is difficult to accept it. Is this conjecture used
elsewhere?

Rodrigo.



Re: cvsync, rsync

2013-09-16 Thread hruodr
Raimo Niskanen raimo+open...@erix.ericsson.se wrote:

 A resembling application is the Git version control system that is
 based on the assumption that all content blobs can be uniquely
 decribed by their 128-bit SHA1 hash value. If two blobs have
 the same hash value they are assumed to be identical.

The developers of rsync and git may be careful and their programs may be
reliable in practice. But perhaps in the future we will have a lot of
mediocre probabilistic programmers who don't care too much.

 If SHA1 is a perfect cryptographic hash value the probability
 for mistake is as has been said before 2^-128 which translates
 to (according to the old MB vs MiB rule of 10 bit corresponding
 to 3 decimal digits) around 10^-37.

Your sentence above begins with the two letters: if.

 According to a previous post in this thread the probability for
 disk bit error for a 4 TB hard drive is around 10^-15 so the
 SHA1 hash value wint with a factor 10^22 which is a big margin.
 So it can be 10^22 times worse than perfect and still beat
 the hard drive error probability.

And in a later email you write:

 And now I just read on the Wikipedia page for SHA-1 that a theoretical
 weakness was discovered in 2011 that can find a collision with a 
 complexity of 2^61, which gives a probability of 10^-18 still
 1000 times better than a hard drive of 10^-15.

The probability sinks as people discover something new about the hash
function? Will it continuously sink as the hash function becomes better known?
This is a very subjective probability. Can we rely on this kind of
probability calculation?

 Now you can read what you can find about cryptographic
 hash algorighms to convince yourself that the algorithms
 used by rsync and/or Git are good enough. [...]

I agree that I, too, have to read, although I was never interested in
cryptography.

 The assumption of cryptographic hash functions being, according
 to their definition; reliable, is heavily used today. At least
 by rsync and Git. And there must be a lot of intelligent and
 hopefully skilled  people backing that up.

We must hope, believe and pray.

-

Marc Espie es...@nerim.net wrote:

 A little knowledge is a dangerous thing.
 
 weakness in a cryptographic setting doesn't mean *anything* if
 you're using it as a pure checksum to find out accidental errors.

And now we are back to my starting point. The checksum is not used
in rsync as a pure checksum to find accidental errors. That was my
criticism.

From a checksum I expect two things: (1) the pre-images of elements
in the range all have similar sizes, (2) it is very discontinuous.
The second is needed to prove the integrity of transmitted data:
little changes produce completely different checksums. The values when
the changes are big do not play a role. Now, rsync concludes A=B from
hash(A)=hash(B) also when A and B are completely different.
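The discontinuity demanded in (2) is easy to observe, e.g. with SHA-1 on the classic fox/cog pair: a one-letter change flips roughly half of the 160 digest bits (a quick check, not a proof of anything about the hash):

```python
import hashlib

msg_a = b"The quick brown fox jumps over the lazy dog"
msg_b = b"The quick brown fox jumps over the lazy cog"  # one letter changed

da = hashlib.sha1(msg_a).hexdigest()
db = hashlib.sha1(msg_b).hexdigest()
print(da)   # 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
print(db)   # de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3

# Number of differing bits out of 160: close to half for a good hash.
diff = bin(int(da, 16) ^ int(db, 16)).count("1")
print(diff)
```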

Are md4 and sha1 good? When we use rsync and git, we are part of a 
big empirical proof (or refutation) of it.

Rodrigo.



Re: cvsync, rsync

2013-09-16 Thread hruodr
Marc Espie es...@nerim.net wrote:

  From a checksum I expect two things: (1) the pre-images of elements
  in the range have all similar sizes, 
 Why ?  This makes no sense, and is in contradiction with (2).

I must correct my previous mail. The domain is numerable; to speak
about cardinality as size does not help, but perhaps you get the
idea. The probability of collisions is strongly dependent on how the
hash distributes the elements of the domain over the range.

Rodrigo.



Re: cvsync, rsync

2013-09-16 Thread hruodr
Marc Espie es...@nerim.net wrote:

  And now we are back to my starting point. The checksum is not used
  in rsync as a pure checksum to find accidental errors. That was my
  criticism.

 No, it is.   Really. Read the papers. Do your homework, check the maths.

I have read this:

 http://rsync.samba.org/tech_report/

It confirms that it supposes: A=B if hash(A)=hash(B). The latter is
checked on the server when constructing the file on the client.

  From a checksum I expect two things: (1) the pre-images of elements
  in the range have all similar sizes, 
 Why ?  This makes no sense, and is in contradiction with (2).

No contradiction. Similar sizes mean similar cardinality. The
pre-image of an element is a subset of the domain. An extreme case
would be that most of the strings in the domain are mapped to a 
single element. This would be similar to having a hash with a very
small range.

 All you're doing is trying to spread FUD about perfectly fine programs.

I do not doubt that the probability of getting a false file is very
small, perhaps indeed negligible, especially after the lot of checks. I
think one could also get good results in practice with very bad hashes
and the same algorithm.

How do you presume to know my intentions? Again making persons the object of
discussion?

Jan Stary h...@stare.cz wrote in a personal mail:

 We must hope, believe and pray.

 You, in particular, need to shut up and read.

Normally I am glad to receive personal mails, but my email address is
public here, not for this kind of misuse.

Rodrigo.



cvsync, rsync

2013-09-14 Thread hruodr
Dear Sirs!

I have a question, perhaps a little off-topic, but it arose as I
read about cvsync on the OpenBSD web page. And OpenBSD people surely know
a lot about cryptography :)

Does rsync suppose that a part of a file on the server is equal to
a part of a file on the client, if the hash values of these parts are
equal?

Does cvsync do the same?

Is this reliable? Mathematically not a catastrophe (equal if the
images under a non-injective function are equal)?

Is there a reliable way to make a local copy of the repository and
update it from time to time (I have only very elementary knowledge
of cvs and little experience)?

Thanks
Rodrigo



Re: cvsync, rsync

2013-09-14 Thread hruodr
Kenneth R Westerback kwesterb...@rogers.com wrote:

 People use cvsync or rsync to create/maintain a local copy or copies

[...]

 Not sure what your 'reliable' metrics are, but works for me.

My question was not about what people do or whether it works (till now)
for you. It was about the algorithm.

Is the algorithm correct in the sense that it *always* gives the right
result, or is there only a (high) probability that in practical cases
it gives the right result? Just this is my question.

You copy file F from computer A to computer B, compute the hash
of both copies and see if they coincide. This is just a check of the
transmission; of course it is possible that the copy is corrupted and
in spite of that has the same hash.
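That whole-file check can be sketched in a few lines (a local illustration with SHA-256; the file names, sizes and chunk size are arbitrary, and this is not what any particular tool does internally):

```python
import hashlib, os, shutil, tempfile

def file_digest(path):
    # Hash a file in chunks so arbitrarily large files fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "F")
dst = os.path.join(tmp, "F.copy")
with open(src, "wb") as f:
    f.write(os.urandom(1 << 20))          # 1 MiB of random data

shutil.copyfile(src, dst)                 # the "transmission"
same = file_digest(src) == file_digest(dst)
print(same)   # True for an uncorrupted copy
```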

A completely different thing is to conclude that two *arbitrary* pieces of
data are the same only because they have the same hash. Arbitrary
means here that the one is not a copy of the other. And this is what
rsync seems to do, as far as I understand the Wikipedia page.

Regards
Rodrigo



Re: cvsync, rsync

2013-09-14 Thread hruodr
Marc Espie es...@nerim.net wrote:

 On Sat, Sep 14, 2013 at 03:09:48PM +, hru...@gmail.com wrote:

  A completely other thing is to conclude that two *arbitrary* pieces of
  data are the same only because they have the same hash. Arbitrary 
  means here that the one was not a copy of the other. And this is what
  rsync seems to do as far as I understand the wikipedia web-page.

 The probability of an electrical failure in your hard drive causing
 it to munge the file, or of a bug in the software using that file
 is much higher than this happening.

This is a conjecture. Do you have a proof that the probability is so
small? For me it is difficult to accept it. Is this conjecture used
elsewhere?

About my original intention: to get a copy of the repository. Does the
repository only grow with new files? Do old files never change? Can I
hence expect that cvsync never relies on the above questionable conjecture,
even if the transmission is interrupted for whatever reason and I try
again?

Is there an alternative for downloading the repository without the
conjecture?

I don't like rsync and similar tools.

Thanks
Rodrigo.



Re: cvsync, rsync

2013-09-14 Thread hruodr
Marc Espie es...@nerim.net wrote:

 I consider 1/2^128 to be *vanishingly small*. 

Christian Weisgerber mentions that a relatively small range of
the hash function would be a problem, but a big range is not
enough: the whole depends on the hash function itself. But this would
be a big discussion, in some contexts a necessary one, in which
experience with the function is not enough.

 Just because you're irrational doesn't mean we have to cater to your 
 irrational fears.

You have a different concept of rationality. It has nothing to do
with fearing or not fearing, liking or disliking, or with expressing
that one dislikes an algorithm (as I did).

Rationality means having a convincing explanation. You show yourself to
be more rational than Patric Conant, but neither your rationality nor
his is the topic: we are speaking about rsync, cvsync and md5, not about
Marc, Patric or Rodrigo.

And no, I am not the expert: that is why I asked. But I do have my
reasons to dislike such an algorithm, and even aesthetic reasons are
reasons, and I never expected or demanded that you share my reasons.

Rodrigo.



Bootparamd

2013-09-12 Thread hruodr
Dear Sirs!

I managed to boot OpenBSD 5.3 on a Fujitsu Siemens Futro A220 (AMD Geode
LX800) thin client from a Celeron machine running OpenBSD 4.8.

I followed what I read in DISKLESS(8) and PXEBOOT(8) almost blindly,
without understanding very much. Perhaps the pages could be more
understandable. I have a lot of questions. I begin with some.

Who uses bootparam? Only the kernel?

Is bootparamd a standard? FreeBSD has it with the same author;
there is also a bootparamd in SunOS. I think this is important
to know for booting from other operating systems, but I don't
find this info in the man pages.

After it boots, I have the root I gave in /etc/bootparam;
I don't need to mount it with /etc/fstab. Should I give the
root in /etc/bootparam and leave it to the kernel to mount the
root again?

The above does not happen with the swap file. To have a swap
file I have to give it in /etc/fstab. Is this due to an error
in my configuration?

Thanks
Rodrigo.



Re: Bootparamd

2013-09-12 Thread hruodr
Miod Vallat m...@online.fr wrote:

Thanks for the good tips!

 I think the bootparams swap file information will be used correctly (I
 remember seeing a fix in this area some time ago). It doesn't hurt
 anyway to mention it in /etc/fstab with the nfsmntpt option.

OK, both, swap and rootfs, again in /etc/fstab.

I think my configuration is correct, because during booting I get
the messages:

nfs_boot: root on 10.0.0.1:/export/geode0/root 
nfs_boot: swap on 10.0.0.1:/export/geode0/swap

But if I give the command swapctl -l after booting, I see no
mounted swap, unless I mention it in /etc/fstab.

Rodrigo.