[gentoo-user] Re: Issues with AMD_IOMMU

2017-05-14 Thread Ian Zimmerman
On 2017-05-14 20:07, Corbin Bird wrote:

> Background / System Info : 2 systems.
> Chipset 990FX, FX-9590 ( -march=bdver2 / Fam15h )
> Chipset 790FX, PhenomII 980 ( -march=amdfam10 / Fam10h )
> Gentoo x86_64, multilib, Kernel 4.9.x
> IOMMU enabled in UEFI, BIOS.

> General Info : ( if you don't know already ... )
> 
> The 990FX / 790FX Chipset doesn't have a GART / AGP Aperture or IOMMU
> in it.  The CPU contains the original K8 IOMMU ( v1 ) engineered /
> converted from a GART.
> 
> Standard output in '/var/log/dmesg'.
> 
> > [0.00] AGP: No AGP bridge found
> > [0.926911] PCI-DMA: Disabling AGP.
> > [0.927142] PCI-DMA: aperture base @ b400 size 65536 KB
> > [0.927263] PCI-DMA: using GART IOMMU.
> > [0.927376] PCI-DMA: Reserving 64MB of IOMMU area in the AGP aperture

I know we have been over this in an earlier thread "Another x11
problem", but I am still very confused.

What BIOS do you have, on the computer that is BIOS based?  I assume you
have one BIOS based and one EFI based, although you don't really say
that.  I am asking because the setup screen of my BIOS (Award modular)
doesn't mention any of these things at all.

Another source of my confusion is that my computer appears to work _now_
without setting any additional kernel options.  The problem I had (X11
turning the screen all black and never coming back) seems to have fixed
itself with an upgrade of either kernel, X11, or radeon firmware, I
don't remember which (and I probably never knew for sure which).

There are some 2 or 3 BIOS upgrades for my motherboard (GA-880GM-D2H)
but they are all ancient anyway, from 2011 - the same year I built the
computer, and the description Gigabyte gives sounds irrelevant to this
stuff, so I never changed the BIOS.

I have looked at the kernel docs you recommend, but they leave a lot to
be desired in the clarity department.  For example, the general
kernel-parameters doc lists an iommu=pt option, without explanation but
tagged with [X86][IA-64].  But the x86_64/boot-options doc doesn't
mention this value at all.

I guess my high-level confusion is that I don't understand the
relationships between all these bits of hardware, and I also don't
understand why the automatic setup that Linux does wouldn't result in a
normally working system.  So there's no aperture, so steal 64M of
addresses from RAM.  Big deal, no?  What's the problem?

Finally, in your last article in that earlier thread you said that I
should turn on multiple APICs in the kernel.  But I can find no kernel
option for that.  What's the exact CONFIG_* symbol to tweak?

Thanks.

-- 
Please *no* private Cc: on mailing lists and newsgroups
Personal signed mail: please _encrypt_ and sign
Don't clear-text sign:
http://primate.net/~itz/blog/the-problem-with-gpg-signatures.html



Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kent Fredric
On Sun, 14 May 2017 02:59:41 +0100
lee  wrote:

> That requires shell access.

Not necessarily, it just requires a competent ISP.

For instance, there's no shell access on github, but there's still
ssh-based sync.

So you just need to have a restricted environment that only allows
spawning of the server side parts of the sftp protocol, and a suitable
authentication scheme.

You're going to want authentication for push anyway.
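
In case it helps, a minimal sketch of that kind of restricted setup with
stock OpenSSH (the group name and chroot path below are placeholders, not
anything github-specific):

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group sftponly
    # the chroot target must be root-owned and not group/world-writable
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

Members of the "sftponly" group then get sftp and nothing else, with key or
password auth handled by sshd as usual.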




Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-14 Thread Adam Carter
> On a Phenom II? Yeah, I just retired mine a month ago. On that
> system, the IOMMU hardware was kinda a legacy orphan thingy, I had to go
> through all kinds of gyrations on the kernel command line in order to
> initialize it correctly. I think I had it off in the bios, then enabled
> it using a bunch of kernel commands.

If it was off in the BIOS then I think that means you were just using
CONFIG_GART_IOMMU, not CONFIG_AMD_IOMMU.


Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-14 Thread Adam Carter
I've added iommu=pt to my kernel command line, and now have;

[1.240242] AMD-Vi: Found IOMMU at :00:00.2 cap 0x40
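
(For reference, on a typical GRUB2 install that's just an edit to
/etc/default/grub followed by a regenerate - paths here are the usual
defaults, adjust for your bootloader:

GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt"    # append to any existing options
grub-mkconfig -o /boot/grub/grub.cfg
)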

I was looking at IOMMU in the hope that my Win10 vmware guest would be more
stable. No change, however.


> APU?
> I was under the impression that those CPU's used a different chipset,
> not the 990FX.
>

Thanks for clearing up my confusion on the location of the IOMMU being in
the CPU, not the chipset. FWIW I'm using an FX-8350.

The APU comment was with respect to CONFIG_AMD_IOMMU_V2. I believe FX CPUs
just use CONFIG_AMD_IOMMU - that's certainly working for me with the
FX-8350.


Re: [gentoo-user] world rebuild with gcc-6.3.0 - not completely there yet

2017-05-14 Thread Walter Dnes
On Sun, May 14, 2017 at 05:22:27AM -0400, Rasmus Thomsen wrote
> Hi,
> 
> unwind tables sounds really interesting, but does it affect the
> binary size when I compile with -O2 anyway?

  Yes, because it's a different part of the build.  Personal experience:
I do the contributed SSE-only Pale Moon build for older (i.e. Pentium 3
class) linux machines.  To qualify for official Pale Moon branding, I
have to stick with standard specs.  For personal use at home on a T400
Lenovo, I can do as I please.  My home version is built with the
no-unwind-tables flags, but the official version is not.  The ".bz2"
compressed tarball is several megabytes smaller for my home version than
for the official SSE-only version.  It's even more noticeable when
extracted/uncompressed.

  Obviously, this reduction only applies to elf-binaries and libraries,
not to sqlite databases, JSON, XML, text files, etc.
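
  A rough way to see where the savings land, if anyone wants to check
their own builds (the library path is only an example, not the actual
Pale Moon layout):

ls -l ./palemoon/libxul.so
readelf -S ./palemoon/libxul.so | grep eh_frame

With the -fno-*unwind-tables flags the .eh_frame / .eh_frame_hdr sections
shrink or disappear, which is where that size difference lives.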

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] world rebuild with gcc-6.3.0 - not completely there yet

2017-05-14 Thread Walter Dnes
On Sun, May 14, 2017 at 11:05:14AM +0200, Helmut Jarausch wrote
> Many thanks for your hints!
> 
> On 05/14/2017 07:13:05 AM, Walter Dnes wrote:
> > Out of sheer curiousity, what
> > does gcc report as your native cpu when you run...
> > 
> > gcc -c -Q -march=native --help=target | grep march=
> 
> -march=amdfam10

  According to the GCC website...
https://gcc.gnu.org/onlinedocs/gcc-6.3.0/gcc/x86-Options.html#x86-Options
GCC 6.3.0 "-march=amdfam10" ("-march=native" on your machine) supports
MMX, SSE, SSE2, SSE3, SSE4A, 3DNow!, enhanced 3DNow!, and ABM.  GCC
builds binaries to use these instructions where appropriate.

  When building for the machine that's doing the build, I always use
"-march=native".  This gets the full available speedup, without having
to figure out what the exact cpu family is.
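
  If you're curious exactly which extra flags "-march=native" turns on
compared to a named family, a quick bash comparison along these lines
works (same form as the command above):

diff <(gcc -c -Q -march=amdfam10 --help=target) \
     <(gcc -c -Q -march=native --help=target)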

-- 
Walter Dnes 
I don't run "desktop environments"; I run useful applications



Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-14 Thread Corbin Bird
On 05/14/2017 12:31 AM, Adam Carter wrote:
> Tried kernels 4.10.13 and 4.11, with
> CONFIG_GART_IOMMU=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_IOMMU_API=y
> CONFIG_IOMMU_SUPPORT=y
> CONFIG_IOMMU_IOVA=y
> CONFIG_AMD_IOMMU=y
> CONFIG_AMD_IOMMU_V2=m
> Chipset is 990FX, and AFAICT the V2 is for the APU (bdver3 and 4 vintage).
> 
> When I enable the IOMMU in the BIOS I get stack traces. Is anyone using
> AMD_IOMMU successfully? If so, any tips?

-

Background / System Info : 2 systems.
Chipset 990FX, FX-9590 ( -march=bdver2 / Fam15h )
Chipset 790FX, PhenomII 980 ( -march=amdfam10 / Fam10h )
Gentoo x86_64, multilib, Kernel 4.9.x
IOMMU enabled in UEFI, BIOS.

-


General Info : ( if you don't know already ... )

The 990FX / 790FX Chipset doesn't have a GART / AGP Aperture or IOMMU
in it.
The CPU contains the original K8 IOMMU ( v1 ) engineered / converted
from a GART.

Standard output in '/var/log/dmesg'.

> [0.00] AGP: No AGP bridge found

> [0.926911] PCI-DMA: Disabling AGP.
> [0.927142] PCI-DMA: aperture base @ b400 size 65536 KB
> [0.927263] PCI-DMA: using GART IOMMU.
> [0.927376] PCI-DMA: Reserving 64MB of IOMMU area in the AGP aperture

-

Suggestion :

Read the Kernel Parameter docs. The automatic setup may need tweaking.

You have to do a manual setup if you want both a working IOMMU and an
AGP Aperture. Firmware can throw a major monkey wrench into that type
of setup, however.

> /usr/src/linux/Documentation/kernel-parameters.txt

This doc has the x86_64 IOMMU options listed ...
> /usr/src/linux/Documentation/x86/x86_64/boot-options.txt


Question :

APU?
I was under the impression that those CPUs used a different chipset,
not the 990FX.


Corbin




Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread R0b0t1
On Sun, May 14, 2017 at 4:47 PM, R0b0t1  wrote:
> That is only in one setup. It is possible to assign an IP address to
> OpenVPN such that you will need any traffic to cross onto your LAN.
>

Whoops: "It is possible to assign an IP address to OpenVPN such that
you will need routing rules for any traffic to cross onto your LAN."



Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread R0b0t1
On Sun, May 14, 2017 at 3:52 AM, Mick  wrote:
>> I had some problems setting up OpenVPN that were solved by using
>> per-client public keys. That seems to be the best supported
>> configuration (as well as the most secure). Windows-side using
>> OpenVPN-GUI is very easy.
>>
>> OpenVPN tends to have poor bandwidth due to overhead, but that may be
>> in large part due to my connection.
>
> OpenVPN is not the most efficient VPN implementation for connections to a
> server because it is not multithreaded and also because unlike IKE/IPSec it
> operates in userspace, not in kernelspace.  If you have more than one client
> connecting to the server at the same time you will need to set up multiple
> instances with different ports or different protocols.  With IKE/IPSec you
> don't.  MSWindows PCs come with IKEv2 natively so they can be configured to
> use it without installing additional client applications.
>
> A VPN connection will expose each endpoint's LAN to the other and therefore
> additional firewall configurations could be required.
>

That is only in one setup. It is possible to assign an IP address to
OpenVPN such that you will need any traffic to cross onto your LAN.

>> >> OpenVPN also offers transparent compression which can be a big
>> >> plus for your scenario.
>> >
>> > Not really, a lot of data is images, usually JPEG, some ZIP files, some
>> > PDF.  All that doesn't compress too well.
>> >
>> >> OpenVPN is not too difficult to setup, and the client is available for
>> >> all major OSes. And it's not too complicated to use: Open VPN
>> >> connection, then use your file transfer client as you're used to. Just
>> >> one simple extra step.
>> >
>> > I'm finding it a horrible nightmare, see above.  It is the most
>> > difficult thing you could come up with.  I haven't found any good
>> > documentation that explains it, the different types of it, how it works,
>> > what to use (apparently there are many different ways or something, some
>> > of which require a static IP on both ends, and they even give you
>> > different disadvantages in performance ...), how to protect the
>> > participants and all the complicated stuff involved.  So far, I've
>> > managed to stay away from it, and I wouldn't know where to start.  Of
>> > course, there is some documentation, but it is all confusing and no
>> > good.
>>
>> Feel free to start a thread on it. As above, I recommend
>> one-key-per-client and running your own CA.
>
> For secure connections you will have to set up CA and TLS keys with any
> option.  Even ftps - unless the ftp server is already configured with its TLS
> certificates.
>

No, certain OpenVPN modes allow encryption without a CA, but they are
limited (e.g. single user, single password, etc).



Re: [gentoo-user] replacement for ftp?

2017-05-14 Thread Alan McKinnon
On 03/05/2017 22:04, lee wrote:
> Alan McKinnon  writes:
> 
>> On 30/04/2017 03:11, lee wrote:
>>> "Poison BL."  writes:
>>>
 On Sat, Apr 29, 2017 at 3:24 PM, lee  wrote:

> Mick  writes:
>
>> On Tuesday 25 Apr 2017 16:45:37 Alan McKinnon wrote:
>>> On 25/04/2017 16:29, lee wrote:
 Hi,

 since the usage of FTP seems to be declining, what is a replacement
 which is at least as good as FTP?

 I'm aware that there's webdav, but that's very awkward to use and
 missing features.
>>>
>>> Why not stick with ftp?
>>> Or, put another way, why do you feel you need to use something else?
>>>
>>> There's always dropbox
>>
>>
>> Invariably all web hosting ISPs offer ftp(s) for file upload/download.
> If you
>> pay a bit more you should be able to get ssh/scp/sftp too.  Indeed, many
> ISPs
>> throw in scp/sftp access as part of their basic package.
>>
>> Webdav(s) offers the same basic upload/download functionality, so I am
> not
>> sure what you find awkward about it, although I'd rather use lftp
> instead of
>> cadaver any day. ;-)
>>
>> As Alan mentioned, with JavaScript'ed web pages these days there are many
>> webapp'ed ISP offerings like Dropbox and friends.
>>
>> What is the use case you have in mind?
>
> transferring large amounts of data and automatization in processing at
> least some of it, without involving a 3rd party
>
> "Large amounts" can be "small" like 100MB --- or over 50k files in 12GB,
> or even more.  The mirror feature of lftp is extremely useful for such
> things.
>
> I wouldn't ever want having to mess around with web pages to figure out
> how to do this.  Ftp is plain and simple.  So you see why I'm explicitly
> asking for a replacement which is at least as good as ftp.
>
>
> --
> "Didn't work" is an error.
>
>
 Half petabyte datasets aren't really something I'd personally *ever* trust
 ftp with in the first place.
>>>
>>> Why not?  (12GB are nowhere close to half a petabyte ...)
>>>
 That said, it depends entirely on the network
 you're working with. Are you pushing this data in/out of the network your
 machines live in, or are you working primarily internally? If internal,
 what're the network side capabilities you have? Since you're likely already
 using something on the order of CEPH or Gluster to back the datasets where
 they sit, just working with it all across network from that storage would
 be my first instinct.
>>>
>>> The data would come in from suppliers.  There isn't really anything
>>> going on atm but fetching data once a month which can be like 100MB or
>>> 12GB or more.  That's because ppl don't use ftp ...
>>
>> I have the opposite experience.
>> I have the devil's own time trying to convince people to NOT use ftp for
>> anything and everything under the sun that even remotely resembles
>> getting data from A to B...
> 
> I guess you're lucky then.
> 
>> (especially things that are best done over a
>> message bus)
> 
> Why would anyone try to transfer data over a message bus?  Doesn't that
> require extra wiring and specialized hardware?
> 
>> I'm still not understanding why you are asking your questions. What you
>> describe looks like the ideal case for ftp:
> 
> it is
> 
> Still nobody uses it, and apparently ftp usage is generally declining,
> so I would expect there to be a better alternative.
> 
>>
>> - supplier pushes a file or files somewhere
>> - you fetch those files later at a suitable time
>>
>> it looks like a classic producer/consumer scenario and ftp or any of
>> its webby clones like dropbox is really still the best tool overall.
>> Plus it has the added benefit that no user needs extra software - all
>> OSes have ftp clients even if it's just a browser
> 
> The users don't know about that.
> 
> 


OK, so here is what you have.

You apparently must use ftp as pretty much nothing else works.
You also claim that your users are too stupid to use ftp, and can't even
type ftp:// into a browser.

I'm sorry, but that's only marginally more believable than claiming
keyboards are too complicated for your users.

I doubt anyone here can help you - you do not have a technical problem,
you have a people problem. I recommend the very first action you take
now is to critically re-examine why you think of those users in the way
you do and fix/amend what is going on inside your head. Then your way
forward will be clear.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-14 Thread taii...@gmx.com

On 05/14/2017 01:31 AM, Adam Carter wrote:


> Tried kernels 4.10.13 and 4.11, with
> CONFIG_GART_IOMMU=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_IOMMU_API=y
> CONFIG_IOMMU_SUPPORT=y
> CONFIG_IOMMU_IOVA=y
> CONFIG_AMD_IOMMU=y
> CONFIG_AMD_IOMMU_V2=m
> Chipset is 990FX, and AFAICT the V2 is for the APU (bdver3 and 4 vintage).
>
> When I enable the IOMMU in the BIOS I get stack traces. Is anyone using
> AMD_IOMMU successfully? If so, any tips?


Is this a gentoo kernel or one from kernel.org?

What are the exact errors you are getting? random? can you post?
On 05/14/2017 11:06 AM, Alan Grimes wrote:

> Adam Carter wrote:
>
>> Tried kernels 4.10.13 and 4.11, with
>> CONFIG_GART_IOMMU=y
>> CONFIG_IOMMU_HELPER=y
>> CONFIG_IOMMU_API=y
>> CONFIG_IOMMU_SUPPORT=y
>> CONFIG_IOMMU_IOVA=y
>> CONFIG_AMD_IOMMU=y
>> CONFIG_AMD_IOMMU_V2=m
>> Chipset is 990FX, and AFAICT the V2 is for the APU (bdver3 and 4 vintage).
>>
>> When I enable the IOMMU in the BIOS I get stack traces. Is anyone
>> using AMD_IOMMU successfully? If so, any tips?
>
> On a Phenom II? Yeah, I just retired mine a month ago. On that
> system, the IOMMU hardware was kinda a legacy orphan thingy, I had to go
> through all kinds of gyrations on the kernel command line in order to
> initialize it correctly. I think I had it off in the bios, then enabled
> it using a bunch of kernel commands.

990FX would probably be an AMD FX CPU such as the 8350; it usually works
fine and is enabled by default on most distros.




Re: [gentoo-user] Issues with AMD_IOMMU

2017-05-14 Thread Alan Grimes
Adam Carter wrote:
> Tried kernels 4.10.13 and 4.11, with
> CONFIG_GART_IOMMU=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_IOMMU_API=y
> CONFIG_IOMMU_SUPPORT=y
> CONFIG_IOMMU_IOVA=y
> CONFIG_AMD_IOMMU=y
> CONFIG_AMD_IOMMU_V2=m
> Chipset is 990FX, and AFAICT the V2 is for the APU (bdver3 and 4 vintage).
>
> When I enable the IOMMU in the BIOS I get stack traces. Is anyone
> using AMD_IOMMU successfully? If so, any tips?

On a Phenom II? Yeah, I just retired mine a month ago. On that
system, the IOMMU hardware was kinda a legacy orphan thingy, I had to go
through all kinds of gyrations on the kernel command line in order to
initialize it correctly. I think I had it off in the bios, then enabled
it using a bunch of kernel commands.


-- 
Strange Game.
The only winning move is not to play. 

Powers are not rights.




Re: [gentoo-user] Pseudo first impressions

2017-05-14 Thread Daniel Frey
On 05/13/2017 06:05 PM, lee wrote:
> It worked --- now some time when I do upgrade the kernel, I somehow need
> to remove these sources from the world list, I guess ...
> 

That's easy: `emerge --deselect sys-kernel/gentoo-sources:4.4.52` undoes
my previous suggestion.

Dan





Re: [gentoo-user] gcc 6 PIE use flag

2017-05-14 Thread Adam Carter
>
> the Gentoo dev list.  I'd normally just link to a mail archive, but it's
>> too new to have been archived yet.  Here's a copy of the proposed news
>> item.  Note the example error message about relocation against .rodata
>>
>> ###
>> Title: GCC 6 defaults to USE="pie ssp"
>> Author: Matthias Maier 
>> Content-Type: text/plain
>> Posted: 2017-05-07
>> Revision: 1
>> News-Item-Format: 1.0
>> Display-If-Installed: >=sys-devel/gcc-6.3.0
>> Display-If-Keyword: amd64 applications
>>
>>
pie: recently promoted to default, now forced off...

[ebuild   R] sys-devel/gcc-6.3.0:6.3.0::gentoo  USE="cxx fortran
(multilib) nls nptl openmp pch sanitize ssp vtv (-altivec) (-awt) -cilk
-debug -doc (-fixed-point) (-gcj) -go -graphite (-hardened) (-jit)
(-libssp) -mpx -objc -objc++ -objc-gc (-pie*) -regression-test -vanilla" 0
KiB


Re: [gentoo-user] [OT] Tux AWOL

2017-05-14 Thread Jorge Almeida
On Sun, May 14, 2017 at 10:50 AM, Alan Mackenzie  wrote:

>

> I see 16 Tuxen at boot up, no problem.

A whole raft, then. (Or is it a waddle?) I was happy enough with my
huddle of 4...

 >
> Something strange happened when I installed the 4.11.0 sources - all the
> options were initialised to what they were in my 4.9.16 running kernel.

I don't use gentoo-sources. Nothing against it, it's just that I got
used to upgrading by hand, once in a while...

But the kernel of gentoo-sources would be patched, if anything. The
vanilla kernel should have the usual penguins.

thanks

Jorge



Re: [gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Jorge Almeida
On Sun, May 14, 2017 at 10:37 AM, Kai Krakow  wrote:
> Am Sun, 14 May 2017 10:11:31 +0100
> schrieb Jorge Almeida :

>>
>>
>> I suppose it's goodbye to Tux, for now. I was hoping someone else
>> would be using the same kernel...
>
> You could setup plymouth and I'm pretty sure there should be a
> fullscreen Tux theme somewhere... ;-)
>
I can live without the little penguins, which, after all, would be
there to be seen only at boot; they would disappear on a 2nd visit to
TTY1. But missing Tux might be a symptom of something else wrong...

It's the first time I hear about plymouth. Visiting
https://cgit.freedesktop.org/plymouth/ I found zilch documentation.

thanks

Jorge



[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 01:25:24 +0100
schrieb lee :

> "Poison BL."  writes:
> 
> > On Sat, Apr 29, 2017 at 9:11 PM, lee  wrote:  
> >>
> >> "Poison BL."  writes:  
>  [...]  
> > trust  
>  [...]  
> >>
> >> Why not?  (12GB are nowhere close to half a petabyte ...)  
> >
> > Ah... I completely misread that "or over 50k files in 12GB" as 50k
> > files *at* 12GB each... which works out to 0.6 PB, incidentally.
> >  
> >> The data would come in from suppliers.  There isn't really anything
> >> going on atm but fetching data once a month which can be like
> >> 100MB or 12GB or more.  That's because ppl don't use ftp ...  
> >
> > Really, if you're pulling it in from third party suppliers, you
> > tend to be tied to what they offer as a method of pulling it from
> > them (or them pushing it out to you), unless you're in the unique
> > position to dictate the decision for them.  
> 
> They need to use ftp to deliver the data, we need to use ftp to get
> the data.  I don't want that any other way.
> 
> The problem is that the ones supposed to deliver data are incompetent
> and don't want to use ftp because it's too complicated.  So what's the
> better solution?

Use an edge router appliance with proper VPN support. You are from
Germany? I can recommend Securepoint appliances. You pay for the
hardware and support, they support you with setting everything up. You
can also find a distributor who can install this for you. Securepoint
works with competent partners all around Germany.

There are also other alternatives like Watchguard (but their OpenVPN
support is not that good), and a lot of free router/firewall software
you can deploy to semi-professional equipment by firmware replacement.
But at least with the latter option, you're mostly on your own and need
to invest a lot of effort to make it work properly and securely.

Depending on what data is transferred, you should also take into
account whether your solution is certified to transfer such data. E.g.
medical data may only be transferred through properly certified VPN
appliances. Otherwise, you should fall back to sneakernet. I'm not sure
how that is any more secure, but that's how things are.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 01:28:55 +0100
schrieb lee :

> Kai Krakow  writes:
> 
> > Am Sat, 29 Apr 2017 22:02:51 -0400
> > schrieb "Walter Dnes" :
> >  
> >>   Then there's always "sneakernet".  To quote Andrew Tanenbaum from
> >> 1981
> >>   
>  [...]  
> >
> > Hehe, with the improvements in internet connections nowadays, we
> > almost stopped transferring backups via sneakernet. Calculating the
> > transfer speed of the internet connection vs. the speed calculating
> > miles per hour, internet almost always won lately. :-)
> >
> > Most internet connections are faster than even USB sticks these
> > days.  
> 
> Wow, you must be living in some sort of paradise.  Here, internet is
> more like being cut off from the rest of the world.
> 
> But then, there's a manufacturer that makes incredibly slow USB sticks
> which I won't buy anymore ...

Okay, it really depends. I shouldn't say "most"... ;-)

I compared my really crappy (but most reliable yet) old USB stick to my
internet connection. My USB stick doesn't do 48 MByte/s, more like 5-10.
And don't even ask when writing data.

Even my rusty hard disk (read: not SSD) has a hard time writing away a
big download with constantly high download rate.

But I guess that a good internet connection should be at least 50 MBit
these days.

And most USB sticks are really crappy at writing. That also counts when
you do not transfer the file via network. Of course, most DSL
connections have crappy upload speed, too. Only lately, Telekom offers
40 MBit upload connections in Germany.

I'm currently on a 400/25 MBit link and can saturate the link only with
proper servers like the Steam network which can deliver 48 MByte/s.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] [OT] Tux AWOL

2017-05-14 Thread Alan Mackenzie
Hello, Jorge.

On Sat, May 13, 2017 at 09:46:41 +0100, Jorge Almeida wrote:
> In case someone is using kernel 4.11: I tried it and everything seems
> fine, except that the linux logo on the boot screen (i.e. tty1) is
> gone. It was there before (with 4.10.9), and I used make oldconfig.

> So, anyone with the same experience?

> Note that I'm talking about the vanilla kernel from kernel.org, off
> portage (hence the OT in the title).

I've put gentoo-sources-4.11.0 on my new machine, in an attempt to get
sound working (the sound chip has a new Realtek codec S1220A which
allegedly has been implemented in 4.11).

I see 16 Tuxen at boot up, no problem.

Something strange happened when I installed the 4.11.0 sources - all the
options were initialised to what they were in my 4.9.16 running kernel.
This saved me a lot of time.
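
If anyone wants that behaviour deliberately rather than by accident,
copying the old config across before oldconfig does the same thing
(version numbers here are just the ones from this example):

cp /usr/src/linux-4.9.16-gentoo/.config /usr/src/linux-4.11.0-gentoo/
cd /usr/src/linux-4.11.0-gentoo && make oldconfig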

> thanks

> Jorge Almeida

-- 
Alan Mackenzie (Nuremberg, Germany).



[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:18:56 +0100
schrieb lee :

> Kai Krakow  writes:
> 
> > Am Sat, 29 Apr 2017 20:02:57 +0100
> > schrieb lee :
> >  
> >> Alan McKinnon  writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> The intended users are incompetent, hence it is too difficult to
> >> use ...  
> >
> > If you incompetent users are using Windows: Have you ever tried
> > entering ftp://u...@yoursite.tld in the explorer directory input
> > bar?  
> 
> I tried at work and it said something like that the service cannot be
> accessed.
> 
> 
> > [...]
> > Debian is not the king to rule the internet. You shouldn't care when
> > they shut down their FTP services. It doesn't matter to the rest of
> > the world using the internet.  
> 
> Who can say what their influence actually is?  Imagine Debian going
> away, and all the distributions depending on them as well because they
> loose their packet sources, then what remains?  It is already rather
> difficult to find a usable distribution, and what might the effect on
> upstream sources be.

The difference is: They only shut down a service. They are not
vanishing from the internet. You cannot conclude from that that they are:

(a) shutting down all their services
(b) declaring ftp deprecated so nobody should use it any longer

And I didn't write that you shouldn't care if Debian vanishes. I only
said it shouldn't mean anything to you if they shut down their FTP
services for probably good reasons. It's not the end of life, the
universe, and everything. And you can keep your towel.

What I wanted to say: Debian is not that important that everyone will
shut down FTP now and kill FTP support from client software. That
simply won't happen. That is not what it means when Debian is shutting
down a service.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Neil Bothwick
On Sun, 14 May 2017 02:48:46 +0100, lee wrote:

> > But you could offer access via OpenVPN and tunnel samba through
> > that.  
> 
> I haven't been able yet to figure out what implications creating a VPN
> has.  I understand it's supposed to connect networks through a secured
> tunnel, but what kind of access to the LAN does someone get who connects
> via VPN?  Besides, VPN is extremely complicated and difficult to set
> up.  I consider it an awful nightmare.

Try ZeroTier, it's easy to install and setup (there's an ebuild on b.g.o)
and provides VPN like access, but to individual machines on your network
instead of exposing the entire network to these incompetent people. You
administer ZeroTier centrally, the users only need to install the
software and join the network - then you can give them access to the
Samba server and nothing else on your network.
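
Client side it really is only a couple of commands once the daemon is
running (the network ID is a placeholder you get from your ZeroTier
controller):

zerotier-cli join <network-id>
zerotier-cli listnetworks

After you authorise the member in the controller, they get an address on
the virtual network and can reach whatever you allow.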

However, I think owncloud/nextcloud might be a better option. As long as
your users can use a web browser, they can use this and no extra software
is needed.

Another option may be Syncthing, which syncs a local directory with one
over the network. It's encrypted and P2P and needs minimal setup on the
client side. Once set up, they simply copy the files to a directory,
which is synced in the background.


-- 
Neil Bothwick

Hickory Dickory Dock, The mice ran up the clock, The clock struck one, The
others escaped with minor injuries.




[gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 10:11:31 +0100
schrieb Jorge Almeida :

> On Sun, May 14, 2017 at 9:31 AM, Kai Krakow 
> wrote:
> > Am Sun, 14 May 2017 08:32:46 +0100
> > schrieb Jorge Almeida :  
> 
>  [...]  
> 
> 
> >> $ zgrep -i logo /proc/config.gz
> >> CONFIG_LOGO=y
> >> # CONFIG_LOGO_LINUX_MONO is not set
> >> # CONFIG_LOGO_LINUX_VGA16 is not set
> >> CONFIG_LOGO_LINUX_CLUT224=y
> >> $  
> >
> > Use
> >
> > # vimdiff oldlinux/.config newlinux/.config
> >  
> 
> Done that. There are only a few differences and none seems relevant.
> >
> > I think there were changes to the framebuffer devices. You may need
> > to switch to a different one.
> >  
> I use the Intel integrated graphics, I didn't do anything special
> about framebuffer. The current one works smoothly regarding KMS and
> I'm happy with it (I do use VTs).
> 
> $ cat /proc/fb
> 0 inteldrmfb
> 
> 
> I suppose it's goodbye to Tux, for now. I was hoping someone else
> would be using the same kernel...

You could setup plymouth and I'm pretty sure there should be a
fullscreen Tux theme somewhere... ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 09:52:41 +0100
schrieb Mick :

> On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:
> > I had some problems setting up OpenVPN that were solved by using
> > per-client public keys. That seems to be the best supported
> > configuration (as well as the most secure). Windows-side using
> > OpenVPN-GUI is very easy.
> > 
> > OpenVPN tends to have poor bandwidth due to overhead, but that may
> > be in large part due to my connection.  
> 
> OpenVPN is not the most efficient VPN implementation for connections
> to a server because it is not multithreaded

Probably true but it works well here for connections of up to 100 MBit.

> and also because unlike
> IKE/IPSec it operates in userspace, not in kernelspace.

IPsec also doesn't work without help from userspace processes. But I
see what you mean: With OpenVPN, traffic bounces between kernel and
userspace multiple times before leaving the machine. But I don't really
see that as a problem for the scenario OpenVPN is used in: It best fits
with dial-up connections which are really not gigabit yet. For this,
performance overhead is almost zero.


>  If you have
> more than one client connecting to the server at the same time you
> will need to set up multiple instances with different ports or
> different protocols.

That is not true: We connect many clients to the same server port
without problems, each with their own certificate.

>  With IKE/IPSec you don't.  MSWindows PCs come
> with IKEv2 natively so they can be configured to use it without
> installing additional client applications.

IPsec can be a big pita if NAT is involved. For Windows clients, L2TP
may be a good alternative.

>  [...]  
> > > 
> > > The ftp server already doesn't allow unencrypted connections.
> > > 
> > > Now try to explain to ppl for whom Filezilla is too complicated
> > > how to set up a VPN connection and how to secure their LAN once
> > > they create the connection (if we could ever get that to work).
> > > I haven't been able to figure that out myself, and that is one of
> > > the main reasons why I do not have a VPN connection but use ssh
> > > instead.  The only disadvantage is that I can't do RDP sessions
> > > with that ---  I probably could and just don't know how to ---
> > > but things might be a lot easier if wireguard works.  
> 
> If the users are helpless then you may be better configuring a VPN
> tunnel between their Internet gateway and the server, so they can
> access the server as if it were a local share, or using the built in
> ftp client that MSWindows comes with.  SMB will work securely in this
> case too.

This is what I would recommend, too. Put the VPN endpoints on the
network edges and no client needs to worry: they just use the
connection.

>  [...]  
> > > 
> > > I'm finding it a horrible nightmare, see above.  It is the most
> > > difficult thing you could come up with.  I haven't found any good
> > > documentation that explains it, the different types of it, how it
> > > works, what to use (apparently there are many different ways or
> > > something, some of which require a static IP on both ends, and
> > > they even give you different disadvantages in performance ...),
> > > how to protect the participants and all the complicated stuff
> > > involved.  So far, I've managed to stay away from it, and I
> > > wouldn't know where to start.  Of course, there is some
> > > documentation, but it is all confusing and no good.  
> > 
> > Feel free to start a thread on it. As above, I recommend
> > one-key-per-client and running your own CA.  

I wouldn't recommend running your own CA because you will have to
deploy a trust relationship with every client.

> For secure connections you will have to set up CA and TLS keys with
> any option.  Even ftps - unless the ftp server is already configured
> with its TLS certificates.

Or you use certificates from LetsEncrypt. Their CA is already trusted
on most machines by default.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] Pseudo first impressions

2017-05-14 Thread Neil Bothwick
On Sun, 14 May 2017 02:05:58 +0100, lee wrote:

> > It will only remove things that it deems not needed. Usually these are
> > packages that have just been upgraded.  
> 
> Yes, the sources wouldn't be needed if I had upgraded the kernel.  Still
> one might expect it to figure out which kernel is in use and to not try
> to delete it --- that would make some sense.

This is Gentoo, you have to make most of the decisions yourself. You
don't necessarily need the sources for the running kernel.

> > For kernel sources, tell portage to not remove it:
> >
> > `emerge --noreplace sys-kernel/gentoo-sources:4.4.52`
> >
> > as an example.
> >
> > If you do that, --depclean will not remove the sources for 4.4.52 (as
> > an example.)  
> 
> Thanks, I couldn't find an option like this.

It's in the emerge man page.

Alternatively, you can create a set that ensures no kernel sources are
ever depcleaned. Add this to /etc/portage/sets.conf

[kernels]
class = portage.sets.dbapi.OwnerSet
world-candidate = False
files = /usr/src

and add the kernels set to world with this

emerge -n @kernels

Now any kernel sources you install will not be removed until you
specifically instruct portage to do so.

> It worked --- now some time when I do upgrade the kernel, I somehow need
> to remove these sources from the world list, I guess ...

They will be removed when you manually unmerge the kernel

emerge -Ca \=sys-kernel/gentoo-sources:4.4.52


-- 
Neil Bothwick

The sergeant walked into the shower and caught me giving myself a
dishonorable discharge. Without missing a beat, I said, "It's my dick
and I can wash it as fast as I want!"




Re: [gentoo-user] world rebuild with gcc-6.3.0 - not completely there yet

2017-05-14 Thread Rasmus Thomsen
Hi,

unwind tables sounds really interesting, but does it affect the binary size 
when I compile with -O2 anyway?

Regards,
Rasmus

 Original Message 
On 14 May 2017, 07:13, Walter Dnes wrote:
On Sat, May 13, 2017 at 10:21:19AM +0200, Helmut Jarausch wrote
>
> I have switched to gcc-6.3.0 at the end of last year.
> I have (re-)compiled all my packages including icewm-1.3.12-r1.
> Everything works just fine.
>
> I did have some segfaults (with gimp) until I recompiled each package
> with gcc-6.3.0
>
> I have
> CFLAGS="-mtune=native -O2 -msse3 -pipe -fPIC"
>
> in /etc/portage/make.conf

Why aren't you running "-march=native"? You're probably missing quite
a few features of your cpu. BTW, "-march=native" implies "-mtune=native"
and "-msse3" (if the cpu supports it). Out of sheer curiousity, what
does gcc report as your native cpu when you run...

gcc -c -Q -march=native --help=target | grep march=

> and quite a e few
> CFLAGS="-O3 -mtune=native -pipe -msse -msse2 -msse3 -msse4a -m3dnow"
> (for my aged AMD64 Phenom II machine)
>
> in /etc/portage/env/...

"-O3"... yikes! Not really recommended. If you want speedup, use
"-march=native" instead.

In make.conf, I have...

CFLAGS="-O2 -march=native -mfpmath=sse -fomit-frame-pointer -pipe 
-fno-unwind-tables -fno-asynchronous-unwind-tables"

The 2 "unwind_tables" flags reduce code bloat. See
http://comments.gmane.org/gmane.linux.busybox/36695 for details. The
guys at busybox do what they can to reduce code bloat. Knocking 15% off
the size of an executable or library helps. This results in faster
loading at startup (less to load), and less stuff in ram cuts down on
swapping.

--
Walter Dnes 
I don't run "desktop environments"; I run useful applications

[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:48:46 +0100
schrieb lee :

> Kai Krakow  writes:
> 
> > Am Sat, 29 Apr 2017 20:30:03 +0100
> > schrieb lee :
> >  
> >> Danny YUE  writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> Doesn't that require ssh access?  And how do you explain that to
> >> ppl finding it too difficult to use Filezilla?  Is it available for
> >> Windoze?  
> >
> > Both, sshfs and scp, require a full shell (that may be restricted
> > but that involves configuration overhead on the server side).  
> 
> I wouldn't want them to have that.

And I can understand this...

> > You can use sftp (FTP wrapped into SSH), which is built into SSH. It
> > has native support in many Windows clients (most implementations use
> > PuTTY in the background). It also has the advantage that you can
> > easily restrict users on your system to SFTP-only with an easy
> > server-side configuration.  
> 
> From what I've been reading, sftp is deprecated and has been replaced
> by ftp with TLS.

From what I'm guessing, you're mixing up sftp and ftps. sftp is
ssh+ftp, and ftps is ftp with ssl. The latter is probably deprecated in
favor of ftp with tls. TLS supports name indication (to show the
correct server certificate) and it supports handshaking so the same
port can be used for secure and insecure connections.

Apparently, many sites on the internet also mix up ftps and sftp; for
them, both are FTP with SSL. But that's not true. I think that comes
from the fact that "secure ftp" often is a synonym for "ssl encryption"
as it is with "secure http". But that doesn't mean the acronym is
"sftp", just as it is not "shttp".

>  [...]  
> >> 
> >> Does that work well, reliably and securely over internet
> >> connections?  
> >
> > It supports encryption as transport security, and it supports
> > kerberos for secure authentication, the latter is not easy to setup
> > in Linux, but it should work with Windows clients out-of-the-box.
> >
> > But samba is a pretty complex daemon and thus offers a big attack
> > surface for hackers and bots. I'm not sure you want to expose this
> > to the internet without some sort of firewall in place to restrict
> > access to specific clients - and that probably wouldn't work for
> > your scenario.  
> 
> At least it's a possibility.  I don't even know if they have static
> IPs, though.

Modern CIFS implementations can be forced to encrypt the transport
layer and only accept kerberos authenticated clients. It should be safe
to use then if properly firewalled. At least "CIFS" (which is samba)
afaik means "common internet file system" - that should at least have a
minimal meaning of "intended to be used over internet connections". Of
course this really doesn't say anything about transport security. Be
sure to apply one, and you should be good to go.
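
A sketch of the smb.conf side of that (parameter names as in current
Samba, exact values differ between versions, and the client address is
made up):

[global]
    smb encrypt = required
    server signing = mandatory
    hosts allow = 203.0.113.5 127.0.0.1
    hosts deny = 0.0.0.0/0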

> > But you could offer access via OpenVPN and tunnel samba through
> > that.  
> 
> I haven't been able yet to figure out what implications creating a VPN
> has.  I understand it's supposed to connect networks through a secured
> tunnel, but what kind of access to the LAN does someone get who
> connects via VPN?  Besides, VPN is extremely complicated and
> difficult to set up.  I consider it an awful nightmare.

You need to first understand how tunnel devices work. Then it becomes
very easy to set up. The access to the LAN can be restricted by
firewall rules. As long as you don't setup routes from the transfer
network (where the tunnel is located) to your LAN, there won't be
access. And then there's firewall rules after you set up routing.
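
As a concrete sketch of "no routes, plus firewall rules" (addresses and
interface names are made up):

# server.conf fragment - without "push route" lines the clients get no
# route into the LAN at all
dev tun
server 10.8.0.0 255.255.255.0

# and if you do route, narrow it down, e.g. only the Samba host:
iptables -A FORWARD -i tun0 -d 192.168.1.10 -p tcp --dport 445 -j ACCEPT
iptables -A FORWARD -i tun0 -j DROP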

> Wireguard seems a lot easier.

I didn't know that, I will look into it.

> > By that time, you can as easily offer FTP, too, through the tunnel
> > only, as there should be no more security concerns now: It's
> > encrypted now.  
> 
> The ftp server already doesn't allow unencrypted connections.
> 
> Now try to explain to ppl for whom Filezilla is too complicated how to
> set up a VPN connection and how to secure their LAN once they create
> the connection (if we could ever get that to work).  I haven't been
> able to figure that out myself, and that is one of the main reasons
> why I do not have a VPN connection but use ssh instead.  The only
> disadvantage is that I can't do RDP sessions with that ---  I
> probably could and just don't know how to --- but things might be a
> lot easier if wireguard works.

You can always deploy VPN at the edge of the network, so your clients
won't need to bother with the details but just use the connection.

You can also try using WinSCP instead of Filezilla (despite the name, it
also supports FTP). Then put a connection file on their desktop and
configure it to run in explorer mode. Now it should mostly look like a
file explorer and they can copy files like they used to.

But then again: Ppl want to get paid for their work. That also means
they need to invest at least a bit more than just their time... 

Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Adam Carter
> > These certificates are a very stupid thing.  They are utterly
> > complicated, you have to self-sign them which produces warnings, and
> > they require to have the host name within them as if the host wasn't
> > known by several different names.
>
> Use LetsEncrypt then, you can add any number of host names you want, as
> far as I know. But you need a temporary web server to prove ownership
> of the server/hostname and sign the certificates.
>

As an alternative you can publish a DNS record, rather than providing a web
server.
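
Roughly like this with certbot's manual DNS-01 challenge (the domain is a
placeholder):

certbot certonly --manual --preferred-challenges dns -d ftp.example.com

certbot then tells you which _acme-challenge TXT record to publish before
it signs the certificate.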

I like https to distribute files - it's easiest for the users, and no
client-end setup is required.

If there are many files just zip them up, as most users can handle that
too, and zip clients are built into most OSes.


Re: [gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Jorge Almeida
On Sun, May 14, 2017 at 9:31 AM, Kai Krakow  wrote:
> Am Sun, 14 May 2017 08:32:46 +0100
> schrieb Jorge Almeida :

>> >> In case someone is using kernel 4.11: I tried it and everything
>> >> seems fine, except that the linux logo on the boot screen (i.e.
>> >> tty1) is gone. It was there before (with 4.10.9), and I used make


>> $ zgrep -i logo /proc/config.gz
>> CONFIG_LOGO=y
>> # CONFIG_LOGO_LINUX_MONO is not set
>> # CONFIG_LOGO_LINUX_VGA16 is not set
>> CONFIG_LOGO_LINUX_CLUT224=y
>> $
>
> Use
>
> # vimdiff oldlinux/.config newlinux/.config
>

Done that. There are only a few differences and none seems relevant.
>
> I think there were changes to the framebuffer devices. You may need to
> switch to a different one.
>
I use the Intel integrated graphics, I didn't do anything special
about framebuffer. The current one works smoothly regarding KMS and
I'm happy with it (I do use VTs).

$ cat /proc/fb
0 inteldrmfb


I suppose it's goodbye to Tux, for now. I was hoping someone else
would be using the same kernel...

Thanks

Jorge



Re: [gentoo-user] world rebuild with gcc-6.3.0 - not completely there yet

2017-05-14 Thread Helmut Jarausch

Many thanks for your hints!

On 05/14/2017 07:13:05 AM, Walter Dnes wrote:
> Why aren't you running "-march=native"?  You're probably missing quite
> a few features of your cpu.  BTW, "-march=native" implies "-mtune=native"
> and "-msse3" (if the cpu supports it).  Out of sheer curiosity, what
> does gcc report as your native cpu when you run...
>
> gcc -c -Q -march=native --help=target | grep march=


-march=amdfam10



> and quite a e few
> CFLAGS="-O3 -mtune=native -pipe -msse -msse2 -msse3 -msse4a -m3dnow"
> (for my aged AMD64  Phenom II machine)
>
> in /etc/portage/env/...

  "-O3"... yikes!  Not really recommended.  If you want speedup, use
"-march=native" instead.


I didn't have any problems with that. I only use these settings for
non compute-intensive packages which are not system critical.


>   In make.conf, I have...
>
> CFLAGS="-O2 -march=native -mfpmath=sse -fomit-frame-pointer -pipe
> -fno-unwind-tables -fno-asynchronous-unwind-tables"
>
>   The 2 "unwind_tables" flags reduce code bloat.  See
> http://comments.gmane.org/gmane.linux.busybox/36695 for details.  The
> guys at busybox do what they can to reduce code bloat.  Knocking 15% off
> the size of an executable or library helps.  This results in faster
> loading at startup (less to load), and less stuff in ram cuts down on
> swapping.


Many thanks, again
Helmut




Re: [gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Mick
On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:
> On Wed, May 3, 2017 at 1:40 AM, Mick  wrote:
> > On Monday 01 May 2017 22:36:00 Nils Freydank wrote:
> >> On Sat, 30 Apr 2017 19:04:06 +0200 Andrew Savchenko wrote:
> >> > [...]
> >> > I fail to see why FTP needs to be replaced: it works, it is
> >> > supported, it is secure when used with care, it is damn fast.
> >> 
> >> I’ll just drop the somewhat popular rant “FTP must die“[1] and a
> >> follow-up
> >> discussion about it[2]. IMHO the main reasons are missing data integrity
> >> and authentication security issues. The latter one can be solved with
> >> FTPS[3] - but honestly I never saw FTPS somewhere actually used in the
> >> wild.> 
> > I'm not sure what you mean "used in the wild".  I use lftp to connect via
> > ftps with a number of webservers for updates and backups on a daily
> > basis.  Some of the connections are scripted.
> > 
> >> [1] http://mywiki.wooledge.org/FtpMustDie
> >> [2] https://news.ycombinator.com/item?id=11251907
> >> [3] i.e. FTP over SSL/TLS (not to mix up with SFTP, which comes from the
> >> SSH family)
> >> 
> >> Greetings,
> >> Nils
> 
> That was an interesting read. The only RFC as bad as FTP's that I know
> of might be IRC's.
> 
> On Sat, May 13, 2017 at 8:48 PM, lee  wrote:
> > Kai Krakow  writes:
> >> Am Sat, 29 Apr 2017 20:30:03 +0100
> >> 
> >> schrieb lee :
> >>> Danny YUE  writes:
> >>> > On 2017-04-25 14:29, lee  wrote:
> >>> >> Hi,
> >>> >> 
> >>> >> since the usage of FTP seems to be declining, what is a replacement
> >>> >> which is at least as good as FTP?
> >>> >> 
> >>> >> I'm aware that there's webdav, but that's very awkward to use and
> >>> >> missing features.
> >>> > 
> >>> > What about sshfs? It allows you to mount a location that can be
> >>> > accessed via ssh to your local file system, as if you are using
> >>> > ssh.
> >>> 
> >>> Doesn't that require ssh access?  And how do you explain that to ppl
> >>> finding it too difficult to use Filezilla?  Is it available for
> >>> Windoze?
> >> 
> >> Both, sshfs and scp, require a full shell (that may be restricted but
> >> that involves configuration overhead on the server side).
> > 
> > I wouldn't want them to have that.
> > 
> >> You can use sftp (FTP wrapped into SSH), which is built into SSH. It
> >> has native support in many Windows clients (most implementations use
> >> PuTTY in the background). It also has the advantage that you can
> >> easily restrict users on your system to SFTP-only with an easy
> >> server-side configuration.
> > 
> > From what I've been reading, sftp is deprecated and has been replaced by
> > ftp with TLS.

MSWindows does not do ftps natively, only unencrypted ftp.  The solution is to 
use an MSWindows ftp client, like Filezilla.  If the users can be trained, or have 
their PCs set up with Filezilla bookmarks for required connections to the ftp 
server, then this problem can be overcome.


> >>> > Also samba can be a replacement. I have a samba server on my OpenWRT
> >>> > router and use mount.cifs to mount it...
> >>> 
> >>> Does that work well, reliably and securely over internet connections?
> >> 
> >> It supports encryption as transport security, and it supports kerberos
> >> for secure authentication, the latter is not easy to setup in Linux,
> >> but it should work with Windows clients out-of-the-box.
> >> 
> >> But samba is a pretty complex daemon and thus offers a big attack
> >> surface for hackers and bots. I'm not sure you want to expose this to
> >> the internet without some sort of firewall in place to restrict access
> >> to specific clients - and that probably wouldn't work for your scenario.

SMB is being patched on a regular basis by MS to improve its security - the 
recent global Wannacry attack being a case in point.  I would think SMB is the 
most attacked protocol on a daily basis, and trying to configure an SMB server 
from scratch, when an ftp server is already available, would not be the wisest 
investment of time.


> > At least it's a possibility.  I don't even know if they have static IPs,
> > though.
> > 
> >> But you could offer access via OpenVPN and tunnel samba through that.
> > 
> > I haven't been able yet to figure out what implications creating a VPN
> > has.  I understand it's supposed to connect networks through a secured
> > tunnel, but what kind of access to the LAN does someone get who connects
> > via VPN?  Besides, VPN is extremely complicated and difficult to set
> > up.  I consider it an awful nightmare.
> > 
> > Wireguard seems a lot easier.

I hadn't heard of its dev, Jason A. Donenfeld, who has trademarked the 
Wireguard name and website, until I visited the Wireguard website.  I don't 
know how many people are part of the team, but Jason appears to be the only 
person answering questions on the M/L.  The website advises that wireguard is 
work in progress and therefore 

[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:59:41 +0100
schrieb lee :

> Kai Krakow  writes:
> 
> > Am Sat, 29 Apr 2017 20:38:24 +0100
> > schrieb lee :
> >  
> >> Kai Krakow  writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> Yes, I'm using it mostly for backups/copies.
> >> 
> >> The problem is that ftp is ideal for the purpose, yet users find it
> >> too difficult to use, and nobody uses it.  So there must be
> >> something else as good or better which is easier to use and which
> >> ppl do use.  
> >
> > Well, I don't see how FTP is declining, except that it is
> > unencrypted. You can still use FTP with TLS handshaking, most sites
> > should support it these days but almost none forces correct
> > certificates because it is usually implemented wrong on the server
> > side (by giving you ftp.yourdomain.tld as the hostname instead of
> > ftp.hostingprovider.tld which the TLS cert has been issued for).
> > That makes it rather pointless to use. In linux, lftp is one of the
> > few FTP clients supporting TLS out-of-the-box by default, plus it
> > forces correct certificates.  
> 
> These certificates are a very stupid thing.  They are utterly
> complicated, you have to self-sign them which produces warnings, and
> they require to have the host name within them as if the host wasn't
> known by several different names.

Use LetsEncrypt then, you can add any number of host names you want, as
far as I know. But you need a temporary web server to prove ownership
of the server/hostname and sign the certificates.
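
Something like this, using certbot's built-in temporary web server (the
names are placeholders, and port 80 must be reachable for the challenge):

certbot certonly --standalone -d example.com -d www.example.com -d ftp.example.com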

> > But I found FTP being extra slow on small files, that's why I
> > suggested to use rsync instead. That means, where you could use
> > sftp (ssh+ftp), you can usually also use ssh+rsync which is
> > faster.  
> 
> That requires shell access.
> 
> What do you consider "small files"?  I haven't observed a slowdown
> like that, but I haven't been looking for it, either.

Transfer a large number of smallish files (like web assets, php files)
to a server with FTP, then try rsync. You should see a very big
difference in time needed. That's due to the connection overhead of FTP.
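
Easy enough to see for yourself - push the same tree both ways and compare
(host and paths are placeholders):

time lftp -e 'mirror -R ./site/ /htdocs/; quit' ftp://user@host
time rsync -az ./site/ user@host:/htdocs/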

> > There's also the mirror command in lftp, which can be pretty fast,
> > too, on incremental updates but still much slower than rsync.
> >  
> >> I don't see how they would transfer files without ftp when ftp is
> >> the ideal solution.  
> >
> > You simply don't. FTP is still there and used. If you see something
> > like "sftp" (ssh+ftp, not ftp+ssl which I would refer to as ftps),
> > this is usually only ftp wrapped into ssh for security reasons. It
> > just using ftp through a tunnel, but to the core it's the ftp
> > protocol. In the end, it's not much different to scp, as ftp is
> > really just only a special shell with some special commands to
> > setup a file transfer channel that's not prone to interact with
> > terminal escape sequences in whatever way those may be implemented,
> > something that e.g. rzsz needs to work around.
> >
> > In the early BBS days, where you couldn't establish a second
> > transfer channel like FTP does it using TCP, you had to send
> > special escape sequences to put the terminal into file transfer
> > mode, and then send the file. By that time, you used rzsz from the
> > remote shell to initiate a file transfer. This is more the idea of
> > how scp implements a file transfer behind the scenes.  
> 
> IIRC, I used xmodem or something like that back then, and rzsz never
> worked.

Yes, or xmodem... ;-)

> > FTP also added some nice features like site-to-site transfers where
> > the data endpoints both are on remote sites, and your local site
> > only is the control channel. This directly transfers data from one
> > remote site to another without going through your local connection
> > (which may be slow due to the dial-up nature of most customer
> > internet connections).  
> 
> Interesting, I didn't know that.  How do you do that?

You need a client that supports this. I remember LeechFTP for Windows
supported it back then. The client needs to log in to both FTP servers
and then shuffle correct PORT commands between them, so that the data
connection is directly established between both.

That feature is also the reason why this looks so overly complicated
and incompatible with firewalls. When FTP was designed, there was a real
need to directly transfer files between servers, as your connection was
usually a slow modem connection below 2400 baud, or some other slow
connection. Or even one that wouldn't transfer binary data at all...

> > Also, FTP is able to stream multiple files in a single connection
> > for transferring many small files, by using tar as the transport
> > protocol, thus reducing the overhead of establishing a new
> > connection per file. Apparently, I know only few clients that
> > support that, and even fewer servers which that would work with.
> >
> > FTP can be pretty powerful, as you see. It's just victim of its poor
> > 

[gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 08:32:46 +0100
schrieb Jorge Almeida :

> On Sun, May 14, 2017 at 4:30 AM, Stroller
>  wrote:
> >  
> >> On 13 May 2017, at 09:46, Jorge Almeida 
> >> wrote:
> >>
> >> In case someone is using kernel 4.11: I tried it and everything
> >> seems fine, except that the linux logo on the boot screen (i.e.
> >> tty1) is gone. It was there before (with 4.10.9), and I used make
> >> oldconfig.  
> >
> > Using `make oldconfig` isn't enough to diagnose - you need to
> > establish whether the option is enabled.  
> 
> I use make menuconfig after oldconfig. I did check the usual suspects,
> but maybe something needs to
> be explicitly enabled that was formerly implicit.
> 
> >
> > On my system:
> >
> > $ uname -r
> > 4.9.4-gentoo
> > $ zgrep -i logo /proc/config.gz
> > CONFIG_LOGO=y
> > # CONFIG_LOGO_LINUX_MONO is not set
> > # CONFIG_LOGO_LINUX_VGA16 is not set
> > CONFIG_LOGO_LINUX_CLUT224=y
> > $
> >  
> 
> >  
> $ zgrep -i logo /proc/config.gz
> CONFIG_LOGO=y
> # CONFIG_LOGO_LINUX_MONO is not set
> # CONFIG_LOGO_LINUX_VGA16 is not set
> CONFIG_LOGO_LINUX_CLUT224=y
> $

Use

# vimdiff oldlinux/.config newlinux/.config

to edit both files side by side. It will show you the differences pretty
easily.

I think there were changes to the framebuffer devices. You may need to
switch to a different one.
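
A guess at the usual suspects to compare between the two configs (not a
definitive list):

zgrep -E 'CONFIG_FB=|CONFIG_FRAMEBUFFER_CONSOLE|CONFIG_DRM_FBDEV_EMULATION' /proc/config.gz

The boot logo is drawn by the framebuffer console, so if any of those
changed between the two kernels that would explain a missing Tux.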


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [gentoo-user] [OT] Tux AWOL

2017-05-14 Thread Jorge Almeida
On Sun, May 14, 2017 at 4:30 AM, Stroller
 wrote:
>
>> On 13 May 2017, at 09:46, Jorge Almeida  wrote:
>>
>> In case someone is using kernel 4.11: I tried it and everything seems
>> fine, except that the linux logo on the boot screen (i.e. tty1) is
>> gone. It was there before (with 4.10.9), and I used make oldconfig.
>
> Using `make oldconfig` isn't enough to diagnose - you need to establish 
> whether the option is enabled.

I use make menuconfig after oldconfig. I did check the usual suspects,
but maybe something needs to
be explicitly enabled that was formerly implicit.

>
> On my system:
>
> $ uname -r
> 4.9.4-gentoo
> $ zgrep -i logo /proc/config.gz
> CONFIG_LOGO=y
> # CONFIG_LOGO_LINUX_MONO is not set
> # CONFIG_LOGO_LINUX_VGA16 is not set
> CONFIG_LOGO_LINUX_CLUT224=y
> $
>

>
$ zgrep -i logo /proc/config.gz
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
$


Thanks
Jorge