Re: [gentoo-user] Re: Issues with AMD_IOMMU

2017-05-15 Thread taii...@gmx.com

On 05/15/2017 10:59 PM, taii...@gmx.com wrote:


On 05/15/2017 12:58 AM, Ian Zimmerman wrote:


The 990FX / 790FX Chipset doesn't have a GART / AGP Aperture or IOMMU
in it.  The CPU contains the original K8 IOMMU ( v1 ) engineered /
converted from a GART.


The 8 and 9 series chipsets (not the 7 series) do have an IOMMU, AMD-Vi v1.26.

I have two 890 series boards (but the server flavor - SR5690 chipset)
with an IOMMU, and it works great: I play games in a VM with an
attached graphics card on my libre-firmware KGPE-D16, and devices are
DMA restricted.


Most consumer boards don't properly implement this feature; in fact, I
have never seen one that did, which is why I bought my coreboot (libre
init variant) D16.
FYI, it wouldn't be that difficult to port coreboot to your board if
you want this to work.



To be clear, the IOMMU is present on the northbridge chipset, not the
CPU - AMD documentation backs this up.
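
A quick sanity check that the IOMMU is actually active and isolating
devices (a sketch; the exact messages vary by kernel version):

  dmesg | grep -i 'AMD-Vi'
  ls /sys/kernel/iommu_groups/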




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Nikos Chantziaras

On 04/25/2017 05:29 PM, lee wrote:

since the usage of FTP seems to be declining, what is a replacement
which is at least as good as FTP?

I'm aware that there's webdav, but that's very awkward to use and
missing features.


Is this about security? Then the closest replacement is FTPS (aka SSL 
FTP). It's just plain old FTP at its core, but over SSL. You do need an 
FTP client that supports SSL though. You can configure it to only 
encrypt passwords while the payload data remains non-encrypted.
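
For example, a minimal sketch with lftp (host and user are placeholders;
drop the ssl-protect-data setting to encrypt only the control
connection, as described above):

  lftp -u user -e 'set ftp:ssl-force true; set ftp:ssl-protect-data true' ftp.example.com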


There's also SFTP, but that's completely different (even though it has 
"FTP" in the name, it's actually SSH, not FTP).


And there's rsync too, which synchronizes files (only the changes are 
uploaded, not the whole data), which makes it very fast when what you 
upload is meant to replace older data.
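
A minimal sketch of such a repeated upload (host and paths are placeholders):

  # -a preserves metadata, --partial lets interrupted transfers resume
  rsync -av --partial ./upload/ user@host.example.com:/srv/data/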





[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
On Mon, 15 May 2017 21:31:32 +0100, lee wrote:

> > I'm sorry, but that's only marginally more believable than claiming
> > keyboards are too complicated for your users.  
> 
> Does it matter what you or I believe?  Some users have difficulties
> using a keyboard and/or a mouse.  I've seen that, so no, what you or I
> believe does not matter.

If this is the underlying (and perfectly legitimate) problem, you need
to deploy a solution that is easiest for your users, not for you.
That may involve a custom transfer solution they can simply drop files
into. The underlying technology is then up to you: use whatever is
appropriate.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: Issues with AMD_IOMMU

2017-05-15 Thread taii...@gmx.com

On 05/15/2017 12:58 AM, Ian Zimmerman wrote:


The 990FX / 790FX Chipset doesn't have a GART / AGP Aperture or IOMMU
in it.  The CPU contains the original K8 IOMMU ( v1 ) engineered /
converted from a GART.


The 8 and 9 series chipsets (not the 7 series) do have an IOMMU, AMD-Vi v1.26.

I have two 890 series boards (but the server flavor - SR5690 chipset)
with an IOMMU, and it works great: I play games in a VM with an attached
graphics card on my libre-firmware KGPE-D16, and devices are DMA restricted.


Most consumer boards don't properly implement this feature; in fact, I
have never seen one that did, which is why I bought my coreboot (libre
init variant) D16.
FYI, it wouldn't be that difficult to port coreboot to your board if you
want this to work.




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
On Mon, 15 May 2017 21:47:17 +0100, lee wrote:

> > Depending on what data is transferred, you should also take into
> > account if your solution is certificated to transfer such data. E.g.
> > medical data may only be transferred through properly certificated
> > VPN appliances. Otherwise, you should fall back to sneakernet. I'm
> > not sure how that is any more secure but that's how things are.  
> 
> Interesting, who certifies such appliances?

I really never asked... ;-) Maybe I should...


> What if I, as a patient,
> do not want my data transferred that way,

See your words below: "nobody in Germany actually cares"... So you
won't be asked because it's secure by definition (as in
"certification"). ;-)

The old transport was ISDN. But that is being shut down.

Or did you direct your concern at the sneakernet transmission? I doubt that
such data would even be encrypted... although it clearly should be.


> and how do I know if they
> didn't make a mistake when certifying the equipment?

That's German bureaucracy: It has the certificate stamp, so it's okay.
The technical internals do not matter: Nobody asks for that after it's
been certified.


> It's not medical data, and nobody in Germany actually cares about
> protecting people's data anyway.  The little that is being done towards
> that is nothing but pretense.

We are servicing a medical laboratory: they take this certification
very seriously, so at least they care about fulfilling the requirements.
However, we do not control that: after the initial setup they do most
configuration themselves, and we only deliver equipment now. As far as
I know, they cannot even freely choose the provider on their side of
the connection. And they manage their internal network themselves; we
wouldn't easily be allowed to do that.

Usually, as an IT service company, you would also sign a non-disclosure
agreement when working for a company handling sensitive data. But only
a few companies seem to know that...


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
On Mon, 15 May 2017 22:14:48 +0100, lee wrote:

> Kai Krakow  writes:
> 
> > Am Sun, 14 May 2017 01:28:55 +0100
> > schrieb lee :
> >  
> >> Kai Krakow  writes:
> >>   
>  [...]  
>  [...]  
> >>  [...]
>  [...]  
> >> 
> >> Wow, you must be living in some sort of paradise.  Here, internet
> >> is more like being cut off from the rest of the world.
> >> 
> >> But then, there's a manufacturer that makes incredibly slow USB
> >> sticks which I won't buy anymore ...  
> >
> > Okay, it really depends. I shouldn't say "most"... ;-)  
> 
> Intenso --- pretty cheap, but awfully slow; however, it does
> work. Better not to buy anything they make unless your time is entirely
> worthless to you.
> 
> > I compared my really crappy (but most reliable yet) old USB stick
> > to my internet connection. My USB stick doesn't do 48 MByte/s, more
> > like 5-10. And don't even ask when writing data.  
> 
> 5--10MB/s?  How do you get that much?

For reading? It can work, though it will eventually drop to 2 MB/s after a
short time. For writing: it drops well below 1 MB/s after a short burst.

> > Even my rusty hard disk (read: not SSD) has a hard time writing
> > away a big download with constantly high download rate.  
> 
> It must be really old then, about 20 years.

No, it's just that other IO is also ongoing, and filesystem internals
add write overhead and head movement, which easily keeps the drive from
its theoretical ideal rate of 120-150 MB/s. Short bursts: no problem.
Long-running writes are more like 50-70 MB/s, which is pretty near the
download rate.

It's also what I see in gigabit networks: Copy speed could be somewhere
between 100 and 120 MB/s, but the local drive seems to easily limit
this to 70-80 MB/s.

My current setup allows constant writing of around 270-280 MB/s
according to:

# dd bs=1M if=/dev/urandom of=test.dat
13128171520 bytes (13 GB, 12 GiB) copied, 48,0887 s, 273 MB/s

So it's not that bad... ;-)

But dd also runs at 100% CPU during that time, so I guess the write
rate could be even higher. I sometimes see a combined rate of up to
500 MB/s, though I'm not sure if that is the actual transfer rate or
just queued IO. Also, it is pretty near SATA bus saturation. I'm not
sure if my chipset would deliver this rate per SATA connection or only
as a combined rate.
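
To take the /dev/urandom CPU cost out of the measurement, something
like this should give a better upper bound (file name and size are
arbitrary; conv=fdatasync makes dd flush to disk before reporting the
rate):

  # dd bs=1M count=8192 if=/dev/zero of=test.dat conv=fdatasync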


> > But I guess that a good internet connection should be at least 50
> > MBit these days.  
> 
> I'd say 100, but see above.  The advantage is that you have sufficient
> bandwidth to do several things at the same time.  I've never seen fast
> internet.

My provider easily delivers such rates, given the remote side is fast
enough. Most downloads saturate at around 15-20 MB/s; only a few
servers can deliver more. That is probably not only a limit of the
servers but also of the peering connections.


> > And most USB sticks are really crappy at writing. That also counts
> > when you do not transfer the file via network. Of course, most DSL
> > connections have crappy upload speed, too. Only lately, Telekom
> > offers 40 MBit upload connections in Germany.  
> 
> They offer 384kbit/s downstream and deliver 365.  It's almost
> symmetrical, yet almost unusable.

Sounds crappy... No alternative providers there? The problem is almost
always a combination of factors: a long cable run limiting DSL to a
lower physical bandwidth, and usually an undersized traffic
concentrator in that area. You should see very different transfer rates
at different times of the day.


> They also offer 50Mbit and deliver between 2 and 12, and upstream is
> awfully low.  Tell them you could pay for 16 instead of 50 because you
> don't get even that much, and they will tell you that you would get
> even less than you do now.  That is unacceptable.

Yes... They would downgrade you to a lower-performing DSL technology then.
It's all fine for them because you only pay for "up to" that bandwidth.


> And try to get a static IP so you could really use your connection ...

No problems so far, at least for business plans.


> > I'm currently on a 400/25 MBit link and can saturate the link only
> > with proper servers like the Steam network which can deliver 48
> > MByte/s.  
> 
> You must be sitting in a data center and be very lucky to have that.

Cable network... In a smallish city.

The next upgrade is announced to be 1 GBit around 2018-2019...
something that's already almost standard in other European countries.
Well... "standard" in terms of availability, not actual usage. I
think the prices will be pretty high. But that's okay: if you need it,
you should be willing to pay for it. It won't help to have such
bandwidth if the provider cannot afford the needed infrastructure.
It's already heavily oversubscribed, as you already found out.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Issues with AMD_IOMMU

2017-05-15 Thread Ian Zimmerman
On 2017-05-15 14:33, Corbin Bird wrote:

> Gigabyte has long been known to ship 'broken for Linux' e820 firmware.
> 'e820' is basically the AGP Aperture setup in the firmware.
> Gigabyte kills OR overrides the IOMMU support in the firmware, to set up
> the AGP Aperture.
> 
> Closed source / binary video drivers WANT an AGP Aperture.

So is _that_ what it's all about?  Is the default assumption that I run
non-free video drivers?  Why?  I _never_ use these, and if you bet on me
using them in the future and win the bet, you can probably make a cool
million or two.

> The kernel wants an IOMMU on x86_64, as an IOMMU.

But it has one!  The GART_IOMMU.  It was even added specifically for
Linux, according to kernel.org discussions I remember seeing but can't
locate now.
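
A quick way to see which DMA remapping path the kernel actually picked
(a sketch; the exact strings vary by kernel version):

  dmesg | grep -iE 'AMD-Vi|GART|PCI-DMA'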

I also found [1], which I think is the clearest and most human-oriented
explanation of this issue yet.  And ... the recommendation is pretty
much the opposite of yours.  The ball is in your court, sir.

> > CONFIG_X86_X2APIC=y

I did this long ago, and it has no effect - still only one is found,
according to dmesg.  As I wrote in the earlier thread.

Also, the Kconfig help for this option says it does something quite
different.  

> > CONFIG_X86_MPPARSE=y

Ok, this one I have not set so far.  Will read up on it and try.

[1]
http://bogdan.org.ua/2009/09/30/iommu-this-costs-you-64-mb-of-ram.html

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html



Re: [gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Mick
On Monday 15 May 2017 20:57:50 Kai Krakow wrote:
> > Of course the server will have to be accessible over port 500 for the
> > clients to be able to get to it, but this is a port forwarding/DMZ
> > network configuration exercise at the server end.
> 
> Oh wait... So I need to forward port 500 and 4500 so NAT-T does work
> properly? Even when both sides are NATed? I never got that to work
> reliably for one side NATed, and it never worked for both sides NATed.
> And my research in support forums always said: That does not work...

Well, I haven't presented a complete topology of a proposed network 
architecture because I don't know what the OP's set up is.  I assumed in the 
above statement that the VPN gateway is running on the same (probably NAT'ed) 
server as the ftp service.  Therefore port 500 won't be accessible from the 
Internet unless forwarded.  If the VPN gateway is public facing then no port 
forwarding is necessary.  Site to site IPSec VPN needs only port 500 to set up 
the tunnel.

I have used mobile clients to VPN gateway connections, using IPsec tunnels 
with the client side NAT'ed and the link was very reliable.  Even when the 
mobile clients were connected using unreliable WiFi the tunnel would be re-
established when the WiFi link connectivity was restored.  Key to keeping the 
connection up is to enable Dead-Peer-Detection, or set up some regular ping 
between the peers if either side does not support DPD.  IKEv2 is better than 
IKEv1 and it also allows client roaming (MOBIKE).

Anyway, this is probably getting off topic.  Lee, please start a new thread 
with VPN specific questions if you need help to get it going.  There are quite 
a few examples on the interwebs for configuring OpenVPN and various 
implementations of IKE/IPSec VPNs.  For the latter I recommend StrongSwan 
which has extensive documentation and example configurations.

Having said all this, I would still stick with ftps/filezilla and get the 
users trained.  When things don't work, troubleshooting ought to be simpler.  ;-)

-- 
Regards,
Mick



Re: [gentoo-user] Re: replacement for ftp?

2017-05-15 Thread lee
Kai Krakow  writes:

> Am Sun, 14 May 2017 01:25:24 +0100
> schrieb lee :
>
>> "Poison BL."  writes:
>> 
>> > On Sat, Apr 29, 2017 at 9:11 PM, lee  wrote:  
>> >>
>> >> "Poison BL."  writes:  
>>  [...]  
>> > trust  
>>  [...]  
>> >>
>> >> Why not?  (12GB are nowhere close to half a petabyte ...)  
>> >
>> > Ah... I completely misread that "or over 50k files in 12GB" as 50k
>> > files *at* 12GB each... which works out to 0.6 PB, incidentally.
>> >  
>> >> The data would come in from suppliers.  There isn't really anything
>> >> going on atm but fetching data once a month which can be like
>> >> 100MB or 12GB or more.  That's because ppl don't use ftp ...  
>> >
>> > Really, if you're pulling it in from third party suppliers, you
>> > tend to be tied to what they offer as a method of pulling it from
>> > them (or them pushing it out to you), unless you're in the unique
>> > position to dictate the decision for them.  
>> 
>> They need to use ftp to deliver the data, we need to use ftp to get
>> the data.  I don't want that any other way.
>> 
>> The problem is that the ones supposed to deliver data are incompetent
>> and don't want to use ftp because it's too complicated.  So what's the
>> better solution?
>
> Use an edge router appliance with proper VPN support.

That's what I'm doing, and it doesn't make VPN easy.  I guess that lies
in the nature of VPN.

> You are from Germany? I can recommend Securepoint appliances. You pay
> for the hardware and support, they support you with setting everything
> up. You can also find a distributor who can install this for
> you. Securepoint works with competent partners all around Germany.

That would probably cost a lot of money, and external support always
involves significant delays.  I'll just have to learn it myself.

> There's also other alternatives like Watchguard (but their OpenVPN
> support is not that good), and a lot of free router/firewall softwares
> you can deploy to semi-professional equipment by firmware replacement.
> But at least with the latter option, you're mostly on your own and need
> to invest a lot of effort to make it work properly and secure.

Yes, that would make it much more complicated than it needs to be.

> Depending on what data is transferred, you should also take into
> account if your solution is certificated to transfer such data. E.g.
> medical data may only be transferred through properly certificated VPN
> appliances. Otherwise, you should fall back to sneakernet. I'm not sure
> how that is any more secure but that's how things are.

Interesting, who certifies such appliances?  What if I, as a patient, do
not want my data transferred that way, and how do I know if they didn't
make a mistake when certifying the equipment?

It's not medical data, and nobody in Germany actually cares about
protecting people's data anyway.  The little that is being done towards
that is nothing but pretense.


-- 
"Didn't work" is an error.



Re: [gentoo-user] replacement for ftp?

2017-05-15 Thread lee
Alan McKinnon  writes:

> On 03/05/2017 22:04, lee wrote:
>> Alan McKinnon  writes:
>> 
>>> On 30/04/2017 03:11, lee wrote:
 "Poison BL."  writes:

> On Sat, Apr 29, 2017 at 3:24 PM, lee  wrote:
>
>> Mick  writes:
>>
>>> On Tuesday 25 Apr 2017 16:45:37 Alan McKinnon wrote:
 On 25/04/2017 16:29, lee wrote:
> Hi,
>
> since the usage of FTP seems to be declining, what is a replacement
> which is at least as good as FTP?
>
> I'm aware that there's webdav, but that's very awkward to use and
> missing features.

 Why not stick with ftp?
 Or, put another way, why do you feel you need to use something else?

 There's always dropbox
>>>
>>>
>>> Invariably all web hosting ISPs offer ftp(s) for file upload/download.
>> If you
>>> pay a bit more you should be able to get ssh/scp/sftp too.  Indeed, many
>> ISPs
>>> throw in scp/sftp access as part of their basic package.
>>>
>>> Webdav(s) offers the same basic upload/download functionality, so I am
>> not
>>> sure what you find awkward about it, although I'd rather use lftp
>> instead of
>>> cadaver any day. ;-)
>>>
>>> As Alan mentioned, with JavaScript'ed web pages these days there are 
>>> many
>>> webapp'ed ISP offerings like Dropbox and friends.
>>>
>>> What is the use case you have in mind?
>>
>> transferring large amounts of data and automation in processing at
>> least some of it, without involving a 3rd party
>>
>> "Large amounts" can be "small" like 100MB --- or over 50k files in 12GB,
>> or even more.  The mirror feature of lftp is extremely useful for such
>> things.
>>
>> I wouldn't ever want having to mess around with web pages to figure out
>> how to do this.  Ftp is plain and simple.  So you see why I'm explicitly
>> asking for a replacement which is at least as good as ftp.
>>
>>
>> --
>> "Didn't work" is an error.
>>
>>
> Half petabyte datasets aren't really something I'd personally *ever* trust
> ftp with in the first place.

 Why not?  (12GB are nowhere close to half a petabyte ...)

> That said, it depends entirely on the network
> you're working with. Are you pushing this data in/out of the network your
> machines live in, or are you working primarily internally? If internal,
> what're the network side capabilities you have? Since you're likely 
> already
> using something on the order of CEPH or Gluster to back the datasets where
> they sit, just working with it all across network from that storage would
> be my first instinct.

 The data would come in from suppliers.  There isn't really anything
 going on atm but fetching data once a month which can be like 100MB or
 12GB or more.  That's because ppl don't use ftp ...
>>>
>>> I have the opposite experience.
>>> I have the devil's own time trying to convince people to NOT use ftp for
>>> anything and everything under the sun that even remotely resembles
>>> getting data from A to B...
>> 
>> I guess you're lucky then.
>> 
>>> (especially things that are best done over a
>>> message bus)
>> 
>> Why would anyone try to transfer data over a message bus?  Doesn't that
>> require extra wiring and specialized hardware?
>> 
>>> I'm still not understanding why you are asking your questions. What you
>>> describe looks like the ideal case for ftp:
>> 
>> it is
>> 
>> Still nobody uses it, and apparently ftp usage is generally declining,
>> so I would expect there to be a better alternative.
>> 
>>>
>>> - supplier pushes a file or files somewhere
>>> - you fetch those files later at a suitable time
>>>
>>> it looks like a classic producer/consumer scenario, and ftp or any of
>>> its webby clones like dropbox is really still the best tool overall.
>>> Plus it has the added benefit that no user needs extra software - all
>>> OSes have ftp clients even if it's just a browser
>> 
>> The users don't know about that.
>> 
>> 
>
>
> OK, so here is what you have.
>
> You apparently must use ftp as pretty much nothing else works.

I was merely asking if there is a better solution.

> You also claim that your users are too stupid to use ftp, and can't even
> type ftp:// into a browser.

I never said that.

> I'm sorry, but that's only marginally more believable than claiming
> keyboards are too complicated for your users.

Does it matter what you or I believe?  Some users have difficulties
using a keyboard and/or a mouse.  I've seen that, so no, what you or I
believe does not matter.

> I doubt anyone here can help you - you do not have a technical problem,
> you have a people problem. I recommend the very first action you take
> now is to critically re-examine 

Re: [gentoo-user] Re: replacement for ftp?

2017-05-15 Thread lee
Kai Krakow  writes:

> Am Sun, 14 May 2017 01:28:55 +0100
> schrieb lee :
>
>> Kai Krakow  writes:
>> 
>> > Am Sat, 29 Apr 2017 22:02:51 -0400
>> > schrieb "Walter Dnes" :
>> >  
>> >>   Then there's always "sneakernet".  To quote Andrew Tanenbaum from
>> >> 1981
>> >>   
>>  [...]  
>> >
>> > Hehe, with the improvements in internet connections nowadays, we
>> > almost stopped transferring backups via sneakernet. Comparing the
>> > transfer speed of the internet connection to the sneakernet's miles
>> > per hour, the internet almost always won lately. :-)
>> >
>> > Most internet connections are faster than even USB sticks these
>> > days.  
>> 
>> Wow, you must be living in some sort of paradise.  Here, internet is
>> more like being cut off from the rest of the world.
>> 
>> But then, there's a manufacturer that makes incredibly slow USB sticks
>> which I won't buy anymore ...
>
> Okay, it really depends. I shouldn't say "most"... ;-)

Intenso --- pretty cheap, but awfully slow; however, it does
work. Better not to buy anything they make unless your time is entirely
worthless to you.

> I compared my really crappy (but most reliable yet) old USB stick to my
> internet connection. My USB stick doesn't do 48 MByte/s, more like 5-10.
> And don't even ask when writing data.

5--10MB/s?  How do you get that much?

> Even my rusty hard disk (read: not SSD) has a hard time writing away a
> big download with constantly high download rate.

It must be really old then, about 20 years.

> But I guess that a good internet connection should be at least 50 MBit
> these days.

I'd say 100, but see above.  The advantage is that you have sufficient
bandwidth to do several things at the same time.  I've never seen fast
internet.

> And most USB sticks are really crappy at writing. That also counts when
> you do not transfer the file via network. Of course, most DSL
> connections have crappy upload speed, too. Only lately, Telekom offers
> 40 MBit upload connections in Germany.

They offer 384kbit/s downstream and deliver 365.  It's almost
symmetrical, yet almost unusable.

They also offer 50Mbit and deliver between 2 and 12, and upstream is
awfully low.  Tell them you could pay for 16 instead of 50 because you
don't get even that much, and they will tell you that you would get even
less than you do now.  That is unacceptable.

And try to get a static IP so you could really use your connection ...

> I'm currently on a 400/25 MBit link and can saturate the link only with
> proper servers like the Steam network which can deliver 48 MByte/s.

You must be sitting in a data center and be very lucky to have that.


-- 
"Didn't work" is an error.



Re: [gentoo-user] Re: replacement for ftp?

2017-05-15 Thread lee
Kai Krakow  writes:

> Am Sun, 14 May 2017 02:18:56 +0100
> schrieb lee :
>
>> Kai Krakow  writes:
>> 
>> > Am Sat, 29 Apr 2017 20:02:57 +0100
>> > schrieb lee :
>> >  
>> >> Alan McKinnon  writes:
>> >>   
>>  [...]  
>>  [...]  
>>  [...]  
>> >> 
>> >> The intended users are incompetent, hence it is too difficult to
>> >> use ...  
>> >
>> > If your incompetent users are using Windows: Have you ever tried
>> > entering ftp://u...@yoursite.tld in the explorer directory input
>> > bar?  
>> 
>> I tried at work and it said something like that the service cannot be
>> accessed.
>> 
>> 
>> > [...]
>> > Debian is not the king to rule the internet. You shouldn't care when
>> > they shut down their FTP services. It doesn't matter to the rest of
>> > the world using the internet.  
>> 
>> Who can say what their influence actually is?  Imagine Debian going
>> away, and all the distributions depending on them as well because they
>> lose their package sources, then what remains?  It is already rather
>> difficult to find a usable distribution, and what might the effect on
>> upstream sources be.
>
> The difference is: They only shut down a service. They are not
> vanishing from the internet. You cannot conclude from that that they are:
>
> (a) shutting down all their services, or that
> (b) ftp is deprecated and nobody should use it any longer
>
> And I didn't write that you shouldn't care if Debian vanishes. I only
> said it shouldn't mean anything to you if they shut down their FTP
> services for probably good reasons. It's not the end of life, the
> universe, and everything. And you can keep your towel.
>
> What I wanted to say: Debian is not that important that everyone will
> shut down FTP now and kill FTP support from client software. That
> simply won't happen. That is not what it means when Debian is shutting
> down a service.

I didn't say that Debian is vanishing, only that I doubt that they are
without influence and that their influence might easily be
underestimated.


-- 
"Didn't work" is an error.



[gentoo-user] Re: nfs-utils update fails to compile: missing rpc/auth_gss.h

2017-05-15 Thread Grant Edwards
On 2017-05-15, Grant Edwards  wrote:
> During a routine update, emerge failed to compile nfs-utils:
>
>   [...]
>
>   context.c:40:26: fatal error: rpc/auth_gss.h: No such file or directory
>    #include <rpc/auth_gss.h>

And of course immediately after posting this, I _did_ find it in
bugzilla:

  https://bugs.gentoo.org/show_bug.cgi?id=618544

I guess I'll block the nfs-utils update for the present.

-- 
Grant Edwards          grant.b.edwards at gmail.com
                       Yow! Kids, don't gross me off ... "Adventures with
                       MENTAL HYGIENE" can be carried too FAR!




[gentoo-user] nfs-utils update fails to compile: missing rpc/auth_gss.h

2017-05-15 Thread Grant Edwards
During a routine update, emerge failed to compile nfs-utils:

  [...]

  context.c:40:26: fatal error: rpc/auth_gss.h: No such file or directory
   #include <rpc/auth_gss.h>
  ^
  compilation terminated.
  make[2]: *** [Makefile:660: gssd-context.o] Error 1
  make[2]: *** Waiting for unfinished jobs
  make[2]: Leaving directory 
'/var/tmp/portage/net-fs/nfs-utils-1.3.4-r1/work/nfs-utils-1.3.4/utils/gssd'
  make[1]: *** [Makefile:450: all-recursive] Error 1
  make[1]: Leaving directory 
'/var/tmp/portage/net-fs/nfs-utils-1.3.4-r1/work/nfs-utils-1.3.4/utils'
  make: *** [Makefile:474: all-recursive] Error 1
   * ERROR: net-fs/nfs-utils-1.3.4-r1::gentoo failed (compile phase):
   *   emake failed

And gcc is correct: there is no such include file:

  $ ls -l /usr/include/rpc
  total 116
  -rw-r--r-- 1 root root  3559 Jan  5 14:02 auth_des.h
  -rw-r--r-- 1 root root  6636 Jan  5 14:02 auth.h
  -rw-r--r-- 1 root root  2914 Jan  5 14:02 auth_unix.h
  -rw-r--r-- 1 root root 12531 Jan  5 14:02 clnt.h
  -rw-r--r-- 1 root root  3383 Jan  5 14:02 des_crypt.h
  -rw-r--r-- 1 root root 11756 Jan  5 14:02 key_prot.h
  -rw-r--r-- 1 root root  2897 Jan  5 14:02 netdb.h
  -rw-r--r-- 1 root root  3837 Jan  5 14:02 pmap_clnt.h
  -rw-r--r-- 1 root root  3810 Jan  5 14:02 pmap_prot.h
  -rw-r--r-- 1 root root  2311 Jan  5 14:02 pmap_rmt.h
  -rw-r--r-- 1 root root  2485 Jan  5 14:02 rpc_des.h
  -rw-r--r-- 1 root root  3938 Jan  5 14:02 rpc.h
  -rw-r--r-- 1 root root  4753 Jan  5 14:02 rpc_msg.h
  -rw-r--r-- 1 root root  1976 Jan  5 14:02 svc_auth.h
  -rw-r--r-- 1 root root 11524 Jan  5 14:02 svc.h
  -rw-r--r-- 1 root root  3233 Jan  5 14:02 types.h
  -rw-r--r-- 1 root root 14577 Jan  5 14:02 xdr.h
  
I've been googling, but haven't been able to find anything that looks
relevant other than a Sabayon user posting on a Gentoo list/forum
many years ago about the exact same error message.

He was told to go away.

Where is rpc/auth_gss.h supposed to come from, and why does the
nfs-utils ebuild suddenly expect it to be present?
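
For what it's worth, a quick way to check whether any installed package
ships that header at all, and which package owns the existing rpc/
headers (equery is from app-portage/gentoolkit):

  $ find /usr/include -name auth_gss.h
  $ equery belongs /usr/include/rpc/auth.h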

-- 
Grant Edwards          grant.b.edwards at gmail.com
                       Yow! Catsup and Mustard all over the place!  It's the
                       Human Hamburger!




Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-15 Thread R0b0t1
On Sun, May 14, 2017 at 12:31 AM, Adam Carter  wrote:
> Tried kernels 4.10.13 and 4.11, with
> CONFIG_GART_IOMMU=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_IOMMU_API=y
> CONFIG_IOMMU_SUPPORT=y
> CONFIG_IOMMU_IOVA=y
> CONFIG_AMD_IOMMU=y
> CONFIG_AMD_IOMMU_V2=m
> Chipset is 990FX, and AFAICT the V2 is for the APU (bdver3 and 4 vintage).
>
> When I enable the IOMMU in the BIOS I get stack traces. Is anyone using
> AMD_IOMMU successfully? If so, any tips?

I don't mean to suggest the problem is intractable, but on fairly new
hardware (i7-4770K, MSI Z87-G45) there has been a regression in the
kernel's nested KVM support that makes me view VT-x/AMD-V and VT-d/AMD-Vi
with a great deal of suspicion. In any case, devices are liable to
just spontaneously not work, even if the hardware reports it has been
properly initialized. This may require changes to the kernel and the
information needed to make those changes may or may not be available.



Re: [gentoo-user] Re: Issues with AMD_IOMMU

2017-05-15 Thread Corbin Bird
On 05/14/2017 11:58 PM, Ian Zimmerman wrote:
> On 2017-05-14 20:07, Corbin Bird wrote:
> 
>> Background / System Info : 2 systems.
>> Chipset 990FX, FX-9590 ( -march=bdver2 / Fam15h )
>> Chipset 790FX, PhenomII 980 ( -march=amdfam10 / Fam10h )
>> Gentoo x86_64, multilib, Kernel 4.9.x
>> IOMMU enabled in UEFI, BIOS.
> 
>> General Info : ( if you don't know already ... )
>>
>> The 990FX / 790FX Chipset doesn't have a GART / AGP Aperture or IOMMU
>> in it.  The CPU contains the original K8 IOMMU ( v1 ) engineered /
>> converted from a GART.
>>
>> Standard output in '/var/log/dmesg'.
>>
>>> [0.00] AGP: No AGP bridge found
>>> [0.926911] PCI-DMA: Disabling AGP.
>>> [0.927142] PCI-DMA: aperture base @ b400 size 65536 KB
>>> [0.927263] PCI-DMA: using GART IOMMU.
>>> [0.927376] PCI-DMA: Reserving 64MB of IOMMU area in the AGP aperture
> 
> I know we have been over this in an earlier thread "Another x11
> problem", but I am still very confused.
> 
> What BIOS do you have, on the computer that is BIOS based?  I assume you
> have one BIOS based and one EFI based, although you don't really say
> that.  I am asking because the setup screen of my BIOS (Award modular)
> doesn't mention any of these things at all.
> 
> Another source of my confusion is that my computer appears to work _now_
> without setting any additional kernel options.  The problem I had (X11
> turning the screen all black and never coming back) seems to have fixed
> itself with an upgrade of either kernel, X11, or radeon firmware, I
> don't remember which (and I probably never knew for sure which).
> 
> There are some 2 or 3 BIOS upgrades for my motherboard (GA-880GM-D2H)
> but they are all ancient anyway, from 2011 - the same year I built the
> computer, and the description Gigabyte gives sounds irrelevant to this
> stuff, so I never changed the BIOS.
> 
> I have looked at the kernel docs you recommend, but they leave a lot to
> be desired in the clarity department.  For example, the general
> kernel-parameters doc lists a iommu=pt option, without explanation but
> tagged with [X86][IA-64].  But the x86_64/boot-options doc doesn't
> mention this value at all.
> 
> I guess my high-level confusion is that I don't understand the
> relationships between all these bits of hardware, and I also don't
> understand why the automatic setup that Linux does wouldn't result in a
> normally working system.  So there's no aperture, so steal 64M of
> addresses from RAM.  Big deal, no?  What's the problem?
> 
> Finally, in your last article in that earlier thread you said that I
> should turn on multiple APICs in the kernel.  But I can find no kernel
> option for that.  What's the exact CONFIG_* symbol to tweak?
> 
> Thanks.
> 



Answer #1 :

Asrock, Extreme6, 990FX, UEFI, AMD FX-9590
Gigabyte, GA-MA790FX-UD5P, 790FX, BIOS, AMD PhenomII x4 980

Update the BIOS on that motherboard. Gigabyte doesn't state everything
done by their BIOS updates. The IOMMU enable option was added to the
BIOS at a later date on the GA-MA790FX-UD5P.

Gigabyte has long been known to ship 'broken for Linux' e820 firmware.
'e820' is basically the AGP Aperture setup in the firmware.
Gigabyte kills OR overrides the IOMMU support in the firmware, to set up
the AGP Aperture.

Closed source / binary video drivers WANT an AGP Aperture.
The kernel wants an IOMMU on x86_64, as an IOMMU.
Squabbling over memory, and I/O, used for what, by whom, yes?

That is why I mentioned 'the monkey wrench'.
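
A quick way to inspect the firmware-provided memory map the kernel
received (a sketch; the lines appear early in the boot log):

  dmesg | grep -i 'e820'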



Answer #2 :

Here is what to look for, ( x2 APIC answer )

> 
> kernel 4.9.x, make menuconfig ... ( look just under "Symmetric
> multi-processing support" )
> 
>   Processor type and features  --->
>     [*] Symmetric multi-processing support
>     [*] Support x2apic
>     [*] Enable MPS table
>     [ ] Support for extended (non-PC) x86 platforms
>     [ ] Intel Low Power Subsystem Support
>     [ ] AMD ACPI2Platform devices support
>     < > Intel SoC IOSF Sideband

[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
On Mon, 15 May 2017 08:53:15 +0100, Mick wrote:

> On Sunday 14 May 2017 11:35:29 Kai Krakow wrote:
> > Am Sun, 14 May 2017 09:52:41 +0100
> > 
> > schrieb Mick :  
> > > On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:  
>  [...]  
> > > 
> > > OpenVPN is not the most efficient VPN implementation for
> > > connections to a server because it is not multithreaded  
> > 
> > Probably true but it works well here for connections of up to 100
> > MBit.  
> 
> It can work well for well above that throughput, but the limitation
> is the tun/tap mechanism and the CPU of the device/PC it is running
> on.

I think the most important thing is to use the UDP transport and not
TCP, because the tunnel protocol doesn't need to ensure packet
delivery; that is done by the protocols running inside the tunnel.
Also, we usually enable compression, at least for low-bandwidth uplinks
(which are fortunately becoming rare these days).

To compensate for the UDP protocol, we usually also give the tunneling
packets higher priority at the edge router to reduce drop rate under
uplink pressure.
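
As a sketch of that prioritization (assuming the tunnel runs on
OpenVPN's default udp/1194; adjust the port and chain to your setup),
a DSCP mark the edge router's queueing can act on:

  iptables -t mangle -A POSTROUTING -p udp --dport 1194 -j DSCP --set-dscp-class EF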

This works well for most dial-up links we encounter (currently up to
100 MBit). I probably wouldn't consider it for higher-throughput links
because I fear the appliance CPU may become a bottleneck. But so far,
no problems, not even with CPU usage.


> > > and also because unlike
> > > IKE/IPSec it operates in userspace, not in kernelspace.  
> > 
> > IPsec also doesn't work without help from userspace processes.   
> 
> Sure, but this is only for managing the (re)keying process, which BTW
> takes longer with IKE than with OpenVPN (we're talking about
> milliseconds here). Once the keys have been agreed and set up between
> peers the rest happens exceedingly fast in kernelspace, managed as a
> network layer interface (L3).  I recall seeing IPSec tunnels running
> 10 times faster than OpenVPN, being processed even faster than VLAN
> trunking, but this is very much dependent on the resources of the
> device running the tunnel.

I use IPsec only between two endpoints directly connected to the
internet (without NAT) and with static IPs. Only then was it really
reliable, and it performed well. No question about it...

And I like the fact that I don't need an intermediate transfer net as
opposed to OpenVPN.

OTOH, only OpenVPN has been reliable enough (and very reliable so far)
when one or both sides were NATed with dynamic IP.

And we had one customer running two networks across four sites, and
their IPsec solution never ran reliably. And this was with
professional, expensive firewall appliances. We replaced it with
site-to-site OpenVPN and it now runs faster and without any disconnects.
All sites use static IPs, so that was never the problem. I don't know
what caused this. The old appliances were mostly black boxes, and at
least one had faulty hardware (which explained the problems at one site).


> > But I
> > see what you mean: With OpenVPN, traffic bounces between kernel and
> > userspace multiple times before leaving the machine. But I don't
> > really see that as a problem for the scenario OpenVPN is used in:
> > It best fits with dial-up connections which are really not gigabit
> > yet. For this, performance overhead is almost zero.  
> 
> Yes, at dial-up throughput even a smart phone has enough resources to
> manage OpenVPN without it becoming a constraint.
> 
> 
> > IPsec can be a big pita if NAT is involved. For Windows client, L2TP
> > may be a good alternative.  
> 
> IKE/IPSec uses NAT-Traversal (NAT-T) by encapsulating ESP packets
> within UDP over port 4500.  This will allow clients to initiate a
> connection with the server over port 500 and then switch to 4500 as
> part of NAT-T detection. Trivia:  many routers/VPN concentrators use
> Vendor ID strings to determine if the remote peer can implement NAT-T
> among other attributes to shorten this NAT-T detection process.
> 
> Of course the server will have to be accessible over port 500 for the
> clients to be able to get to it, but this is a port forwarding/DMZ
> network configuration exercise at the server end.

Oh wait... So I need to forward port 500 and 4500 so NAT-T does work
properly? Even when both sides are NATed? I never got that to work
reliably for one side NATed, and it never worked for both sides NATed.
And my research in support forums always said: That does not work...


>  [...]  
> > > 
> > > If the users are helpless then you may be better configuring a VPN
> > > tunnel between their Internet gateway and the server, so they can
> > > access the server as if it were a local share, or using the built
> > > in ftp client that MSWindows comes with.  SMB will work securely
> > > in this case too.  
> > 
> > This is what I would recommend, too. Put the VPN endpoints on the
> > network edges and no client needs to worry: They just use the
> > connection.  
> 
> If there is a large number of client PCs this is the only sane
> solution.

There are 

Re: [gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Mick
On Sunday 14 May 2017 11:35:29 Kai Krakow wrote:
> Am Sun, 14 May 2017 09:52:41 +0100
> 
> schrieb Mick :
> > On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:
> > > I had some problems setting up OpenVPN that were solved by using
> > > per-client public keys. That seems to be the best supported
> > > configuration (as well as the most secure). Windows-side using
> > > OpenVPN-GUI is very easy.
> > > 
> > > OpenVPN tends to have poor bandwidth due to overhead, but that may
> > > be in large part due to my connection.
> > 
> > OpenVPN is not the most efficient VPN implementation for connections
> > to a server because it is not multithreaded
> 
> Probably true but it works well here for connections of up to 100 MBit.

It can work well for well above that throughput, but the limitation is the 
tun/tap mechanism and the CPU of the device/PC it is running on.


> > and also because unlike
> > IKE/IPSec it operates in userspace, not in kernelspace.
> 
> IPsec also doesn't work without help from userspace processes. 

Sure, but this is only for managing the (re)keying process, which BTW takes 
longer with IKE than with OpenVPN (we're talking about milliseconds here).  
Once the keys have been agreed and set up between peers the rest happens 
exceedingly fast in kernelspace, managed as a network layer interface (L3).  I 
recall seeing IPSec tunnels running 10 times faster than OpenVPN, being 
processed even faster than VLAN trunking, but this is very much dependent on 
the resources of the device running the tunnel.


> But I
> see what you mean: With OpenVPN, traffic bounces between kernel and
> userspace multiple times before leaving the machine. But I don't really
> see that as a problem for the scenario OpenVPN is used in: It best fits
> with dial-up connections which are really not gigabit yet. For this,
> performance overhead is almost zero.

Yes, at dial-up throughput even a smart phone has enough resources to manage 
OpenVPN without it becoming a constraint.


> IPsec can be a big pita if NAT is involved. For Windows client, L2TP
> may be a good alternative.

IKE/IPSec uses NAT-Traversal (NAT-T) by encapsulating ESP packets within UDP 
over port 4500.  This will allow clients to initiate a connection with the 
server over port 500 and then switch to 4500 as part of NAT-T detection.  
Trivia:  many routers/VPN concentrators use Vendor ID strings to determine if 
the remote peer can implement NAT-T among other attributes to shorten this 
NAT-T detection process.

Of course the server will have to be accessible over port 500 for the clients 
to be able to get to it, but this is a port forwarding/DMZ network 
configuration exercise at the server end.
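
As a sketch (addresses are examples), forwarding both IKE and NAT-T
from a NATing edge router to an internal VPN gateway would look like:

  iptables -t nat -A PREROUTING -p udp --dport 500  -j DNAT --to-destination 192.168.1.10
  iptables -t nat -A PREROUTING -p udp --dport 4500 -j DNAT --to-destination 192.168.1.10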


> > > > The ftp server already doesn't allow unencrypted connections.
> > > > 
> > > > Now try to explain to ppl for whom Filezilla is too complicated
> > > > how to set up a VPN connection and how to secure their LAN once
> > > > they create the connection (if we could ever get that to work).
> > > > I haven't been able to figure that out myself, and that is one of
> > > > the main reasons why I do not have a VPN connection but use ssh
> > > > instead.  The only disadvantage is that I can't do RDP sessions
> > > > with that ---  I probably could and just don't know how to ---
> > > > but things might be a lot easier if wireguard works.
> > 
> > If the users are helpless then you may be better configuring a VPN
> > tunnel between their Internet gateway and the server, so they can
> > access the server as if it were a local share, or using the built in
> > ftp client that MSWindows comes with.  SMB will work securely in this
> > case too.
> 
> This is what I would recommend, too. Put the VPN endpoints on the
> network edges and no client needs to worry: They just use the
> connection.

If there is a large number of client PCs this is the only sane solution.


> >  [...]
> >  
> > > > I'm finding it a horrible nightmare, see above.  It is the most
> > > > difficult thing you could come up with.  I haven't found any good
> > > > documentation that explains it, the different types of it, how it
> > > > works, what to use (apparently there are many different ways or
> > > > something, some of which require a static IP on both ends, and
> > > > they even give you different disadvantages in performance ...),
> > > > how to protect the participants and all the complicated stuff
> > > > involved.  So far, I've managed to stay away from it, and I
> > > > wouldn't know where to start.  Of course, there is some
> > > > documentation, but it is all confusing and no good.
> > > 
> > > Feel free to start a thread on it. As above, I recommend
> > > one-key-per-client and running your own CA.
> 
> I wouldn't recommend running your own CA because you will have to
> deploy a trust relationship with every client.
> 
> > For secure connections you will have to set up CA and TLS keys with
> > any option.  Even ftps - unless the ftp server is already