Uwe Thiem wrote:
On Wednesday 09 April 2008, 7v5w7go9ub0o wrote:
Ralf Stephan wrote:
Hello,
I'm fed up with waiting forever for the same name requests from my
browser (and open servers don't cut it either): which DNS cache
or caching DNS server for a simple local installation would you recommend?
what it's telling me:
treat kevin # lpstat -t
scheduler is running
system default destination: lp0
device for lp0: parallel:/dev/lp0
lp0 accepting requests since Sun Apr 13 13:07:25 2008
printer lp0 is idle. enabled since Sun Apr 13 13:07:25 2008
lp0-155 kevin 4096 Fri
On Tuesday 29 April 2008 10:51:30 Peter Humphrey wrote:
Having just installed mysql on my server, I've found that I have to set
bind-address = 0.0.0.0 in /etc/mysql/my.cnf to enable me to connect to
mysqld over the local network: leaving it at the default 127.0.0.1 causes
connection requests to be rejected.
Is there a more secure value for this parameter? I want to be able to
connect over either of two network segments, 192.168.2.0/29 and
192.168.3.0/29, as well as locally on the server box. I've tried a
compound setting in bind-address, but mysqld then refuses client requests.
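For what it's worth, bind-address in that era of MySQL takes a single address, so a compound value won't parse. One untested approach: bind to all interfaces and let the firewall limit who can reach port 3306 (subnets taken from the post above):
# /etc/mysql/my.cnf, [mysqld] section
bind-address = 0.0.0.0
# example iptables rules, untested
iptables -A INPUT -p tcp --dport 3306 -s 192.168.2.0/29 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -s 192.168.3.0/29 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP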
- theoretically that is. However, since this is a zero-sum game,
some other processes will be short-changed. So they may (theoretically
again) run slower. It could well be that your KDE session becomes slower as
a result, ha! Anyway, just looking at the info page I read
*
the relicensing, despite repeated requests for that).
m.
by the party outside your
network, that is). So make sure that either such requests aren't
getting forwarded to your router (and this is most probably already the
case for your setup -- DSL or cable, I guess?) or your LAN doesn't care
(i.e. is secured). Most PPP endpoints, however, would drop
running as a regular user.
On my box, running chage as a regular user fails with permission
denied, but it's not related to /etc/shadow permissions. Instead
(looking at the strace and at the sources), chage checks the real UID of
the user and terminates if it's not 0 and the user requests to change
the info (instead of just listing it with -l). It does not even touch
about their services and how secure they are.
Of course they could answer the (1) email granting your requests and
maybe you wouldn't have to take steps (2) and (3). Happened to me
once.
I couldn't agree more. It feels like I should have a legal recourse
in this situation. My Dad is a lawyer but has no knowledge
(star, cdrecord) works, is well supported and the programmer reacts
in time to requests. What else do you expect?
m.
Greetings,
Michael
The problem with cdrecord (and J.S.) is that in a new version of
cdrecord he bundled CDDL and GPL code together, thinking that it's right
to distribute such a hybrid binary.
Debian and a large number
it under a file name resembling the URI, thus
leading your browser to the assumption it might be XML - and it has to
make assumptions for file:// requests, since there's no Content-Type
on plain file systems. The conceptual failure is the part that
circumvents this (unreliable) detection algorithm?
Disclaimer: I have not tested what follows.
Can't you just use traceroute?
If you run tracert from windows, it should already work, since it uses
ICMP echo requests. Otherwise, you should open UDP ports 33434 and up on the
router.
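A note for anyone trying this: Unix traceroute increments the destination port with each probe, so a single-port rule won't be enough. Something like this range (30 hops x 3 probes; untested) on an iptables router:
# allow inbound traceroute probes
iptables -A INPUT -p udp --dport 33434:33523 -j ACCEPT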
I don't have access to a MS Windows machine right now, but using mtr I get
Hi,
~
after downloading the iso and verifying it was OK using knoppix 5.1.1,
while trying to boot gentoo, I got:
~
XIO: fatal IO error 104 (Connection reset by peer) on X server :0.0
after 0 requests (0 known processed) with 0 events remaining
~
I am trying to boot gentoo livecd:
~
Linux
of IPTABLES code that
will do what you want.
I remember using something with IPTABLES that made any connections
from internet only happen in response to requests from your localhost.
I don't remember the lines now but someone might post it.
Wouldn't the above and not running any unnecessary
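The half-remembered lines are almost certainly the standard conntrack pair; a minimal sketch (untested):
# drop everything inbound by default...
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
# ...but allow replies to connections this host initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT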
:192.168.0.3 udp 5
ACCEPT $FW loc icmp
ACCEPT $FW net icmp
Does this reject ssh requests from the net zone or do I need to
specify that? It looks like maybe there is another set of basic
ACCEPT/REJECT configs that this is modifying. Does anyone know which
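Whether ssh from the net zone is rejected depends on the policy file, which supplies the catch-alls that the rules file modifies. A sketch of the usual layout (untested):
# /etc/shorewall/policy -- catch-alls for traffic not matched in rules
$FW   net   ACCEPT
net   all   DROP     info
all   all   REJECT   info
# /etc/shorewall/rules -- add this only if you DO want ssh in from the net
ACCEPT   net   $FW   tcp   22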
be something like a build server
that builds and provides a set of packages with different USE- and
CFLAGS, possibly even accepting automatic requests from clients.
Everything could be digitally signed and distributed over a network.
Other things to improve? A better documentation on USE-flags. In my
of completely useless bug reports and requests
for help from users who had
a) chosen the wrong stage 1 or 2 for their arch
b) set the wrong flags and compile options
c) listened to ricer advice and been left with an unusable system
d) bitch and moan as to why it takes 96 hours to get a bash prompt
e
OK, this may be a little off topic. I don't even know where to start
when
is connected, and everything is fine. I don't know where else to look. The log
messages showed in the beginning messages of the form
LCP: timeout sending Config-Requests
but they seem to no longer appear. As I said, I don't even know where to
start looking for the problem. Any help is appreciated.
I can suggest trying at least two things:
1. pon connection
letters. On most
browsers set up for English, it seems to come out in latin letters,
but there are no latin letters in that font, although these same
browsers honor requests for a variety of other fonts. This is true
even on some machines that definitely have the symbol font, and it's
usable in word
...
I really want to avoid having to access a non-standard port from the
URLs - I want to use the final URLs exactly as they will be once the
in-development website is eventually deployed.
But you don't have to!
Just set up the first Apache to forward requests to the second one in any
way you like
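With mod_proxy and mod_proxy_http loaded, the forwarding can be as small as this (untested; the ServerName and back-end port are examples):
<VirtualHost *:80>
    ServerName www.example.com
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>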
what you want - I find latency in
browsing/surfing with BitTorrent consuming only 60%-70% of my
upload - it doesn't help that other peers are continually making
requests of you. If you lower the bandwidth consumption in Azureus
then you have to remember to up it again when you go to bed
On Fri, Dec 09, 2005 at 05:08:55PM -0700, Richard Fish wrote
nscd does this, and is much simpler. It is already installed as part
of glibc. Just do rc-update -a nscd default.
My system is connecting to the net, but nscd doesn't seem to be
caching DNS requests.
Yes, I did rc-update add
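Also worth checking that the hosts cache is actually enabled in /etc/nscd.conf; the TTL values below are only illustrative:
enable-cache            hosts   yes
positive-time-to-live   hosts   3600
negative-time-to-live   hosts   20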
On Fri, Jan 13, 2006 at 12:32:43AM -0600, Penguin Lover Michael Sullivan
squawked:
Now I have a new ISP. I thought I'd try things again. My router is set
to reroute all requests to port 22 (FTP) to internal address 192.168.1.2
(which is my server box). However, when I try to run nmap
on the
network has the same address as your PC.
I think it's more like too much broadcast traffic or a problematic
switch/hub, etc.? I don't know
You might try some network
monitoring tools like tcpdump or ethereal to see what is going on. In
particular, pay attention to ARP requests and responses
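For example (interface name is a guess, adjust to taste):
# -e prints MAC addresses, -n skips name resolution
tcpdump -e -n -i eth0 arp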
? When I download something with
wget, or watch emerge invoking wget, it's always
maxing out the saturation of the line.
Well, if you have a high bandwidth line, like 8MB ADSL for example,
the host providing the download usually limits your download speed,
reserving some bandwidth for other requests. If you
steps: starting apache with -X (run on the console in the
foreground, don't fork or start any children, handle incoming requests
directly one at a time) and adding configuration to use Apache::DB.
My problem is that I am trying to do this on a server not entirely
under my complete control; it writes
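For anyone following along, the two steps sketched above map onto configuration roughly like this; reconstructed from memory of the Apache::DB docs, untested:
# httpd.conf (mod_perl 1.x)
<Perl>
    use Apache::DB ();
    Apache::DB->init;
</Perl>
<Location />
    PerlFixupHandler Apache::DB
</Location>
# then run the single-process server under the debugger
httpd -X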
Mr Leverton:
Please be advised that I have initiated a formal process requesting to have
you banned from this list by [EMAIL PROTECTED]
This is after many requests from others for you to calm down on the list, and
two private mails from myself asking the same, both of which you have
. It is appropriate to
discuss the usage of your very fine software here. Judging by the sheer
number of requests you received the last time to stop, it can probably be
safely assumed that discussion of the alleged crusade against free software
by a Debian faction, and your counter-crusade against them
On 24 Nov 2008, at 11:07, Dirk Heinrichs wrote:
...
If you have further questions, feel free to ask.
I would love a file system that transparently replicates over several
systems - say 2 - 5.
It doesn't need to amalgamate spare space in any way (as BillK requests),
let's just say I just have a couple of gig on each machine that I want
replicated.
I should be able to read and operate on the files on the partition
just as normal, but when a file is saved to or deleted from any one
machine the change should be replicated
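Not a filesystem, but a cron-driven unison pair approximates this for a couple of gig; a sketch, with host and paths as placeholders:
# two-way sync of /data with the copy on host2, non-interactive
unison /data ssh://host2//data -batch -auto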
after 0 requests (0 known processed) with 0 events remaining.
'. The main goal is to relate the same way to 'contributing'
as sourceforge does to deploying software. It would abstract technologies
(cvs, svn, whatever) from the common user, enabling him to access
a portal where he could browse 'task requests' in a categorized way.
Perhaps the most elucidating
://www.xbox-linux.org/Version_1.6_Warning
While the XBox and a normal PC aren't identical by any means it seems
to me that the Linux kernel running on either of these platforms is,
more or less, the same and would likely make the same sort of requests
to BIOS if it had any interest in doing so. If the 2.6
the
requests for each service to the correct box.
For example, www.mydomain.org, ftp.mydomain.org and mail.mydomain.org
will all resolve to your public IP address, but you may have something
like this /etc/hosts setup for your lan
192.168.0.1 box1 www.lan
192.168.0.2 box2 ftp.lan
192.168.0.3 box3
by a local process at that time),
but what I do see are a few instances of distccd popping in and out of
existence every now and then, but using very little CPU power. Is it
perhaps taking but rejecting the requests?
Thanks,
Ezequiel Tolnay
of requests). worker is a hybrid
forking and threading MPM: it forks several processes, each of which runs
multiple threads. It has better performance, but any module you use will have to
be thread-safe.
Note that you don't have to specify a mpm-* USE-flag - if none of them
are set, sane defaults are used.
As far
<IfModule mod_access.c>
Order deny,allow
Allow from all
</IfModule>
</Directory>
I wanted All options turned on (which includes Indexes) since this
server only sees requests from local home lan.
Also I noticed that just including the parent dir is enough.
That is, just `[...]/hpweb' not hpweb
open bugs are
organized properly and the count lessened somewhat. Note, most existing
feature requests have been marked as LATER so the above number is pretty
much only real bugs.
1- In /etc/make.conf add a new key to FEATURES that would cause portage
to cleanup the DISTDIR folder
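Until portage grows such a FEATURES flag, app-portage/gentoolkit's eclean already does the cleanup (a different tool, same effect):
emerge app-portage/gentoolkit
eclean -p distfiles   # pretend first, then drop -p to actually delete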
. I guess with squid all of that machine's
requests go to this machine and then it forwards stuff on, keeping a
record? Is that the basic idea?
If so, does that mean that all traffic for the machine in question
actually goes through the squid machine? If so that might mean too
much wireless
/docs
LogLevel info
User lp
Group lp
Port 631
SystemGroup lp
# IfRequested - Use encryption if the server requests it
<Location />
Order Deny,Allow
Deny From All
Allow From 127.0.0.1
Allow From 192.168.0.2
</Location>
<Location /printers>
Order Deny,Allow
Deny From All
Allow From 127.0.0.1
Allow From
Michael Kintzios wrote:
Not silly at all. Yes, I have firewall on both - but the first thing I
did was to shut down the firewalls, just in case.
OK, I may be getting somewhere. I removed the IfRequested - Use
encryption if the server requests it - although I would have thought that
some pop
of Gentoo. And marking packages stable with any regularity
seems to be an exercise in patience and nagging and bug requests and
waiting and ...
So then when I do an emerge world, there are sometimes hundreds of
packages. All nickel and diming me to death. Like a -r1 -r2 -r3... Or a
v1.0.1
: it can't resolve google.com
It sounds like DNS isn't working properly. Just because the DNS server is
pingable (using ICMP) doesn't mean DNS resolution is working!
DNS requests run over UDP port 53 (zone transfers over TCP 53, but
that's really only useful for DNS admins).
It could be a problem with your
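A quick way to separate pingability from resolution (server IP is an example):
# ask the nameserver directly over UDP 53
dig @192.168.1.1 google.com
# compare with whatever /etc/resolv.conf points at
nslookup google.com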
a url filter to help it out. I noticed that I get
requests like the following in my apache log:
70.121.133.60 - - [07/Mar/2006:21:31:05 -0500] SEARCH
/\x90\xc9\xc9\xc9\xc9\xc9\
The above is one line and it is 30,000 characters long in the log file.
Near the end of that line should
any benefits, as the system
should throw away memory pages occupied by cache stuff,
as soon as there are more important requests (like
any malloc).
Or am I wrong?
That is how it should be. However I noticed when I only had 512 MB of
memory that most of my memory would be used and I would see
]:/mnt/hda5/
Tried ever which way; here's what I get:
scp:SshFileCopy/sshfilecopy.c:616/connect_done_cb:
Connection to local,
ready to serve requests.
scp:SshFCTransfer/sshfc_transfer.c:219/transfer_get_newline_conventions:
Fething source newline convention extension data.
scp:SshFCTransfer
that are more interested in supporting the needs of the
embedded communities.
There is a gentoo-embedded group, but it seems to be limited to
x86 and PPC architectures.
If I'm really desperate, I look at bugs.gentoo.org to find folks that
have posted bugs, revision update requests, and such and send them
of the hard drive to use for
cloning. I don't know an alternative way.
I am planning to get a few machines going on a manual installation while
I troubleshoot the livecd problem. These machines are behind a router
and there will be multiple emerge --sync requests coming from the same
IP
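One way to be polite to the mirrors in that situation is a local rsync mirror, so only one box syncs upstream; roughly (untested, IP is an example):
# on the box that syncs upstream: /etc/rsyncd.conf
[gentoo-portage]
    path = /usr/portage
    read only = yes
# on the other boxes: /etc/make.conf
SYNC="rsync://192.168.1.10/gentoo-portage"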
password access
altogether for SSH, instead opting for keys only. They can be
restricted at the application layer (in ~/.ssh/authorized_keys) to
reject requests for a login based on originating IP as well as key
authentication.
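The per-key restriction looks like this in ~/.ssh/authorized_keys (key shortened, address pattern an example):
from="192.168.2.*" ssh-rsa AAAAB3... user@laptop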
Frontpage is terrible, GoLive is comparable to Frontpage
available.
Fatal server error:
no screens found
XIO: fatal IO error 104 (Connection reset by peer) on
X server :0.0
after 0 requests (0 known processed) with 0
events remaining.
[EMAIL PROTECTED] ~ $
not true:
[EMAIL PROTECTED] ~ $ lsmod
Module                  Size  Used by
radeon                105216  0
drm                    68884  1 radeon
agpgart
, the authoritative directive should be uncommented.
authoritative;
I think this one should prefer your local dhcp to the clients at home
instead of using your notebook.
correct me if I'm wrong.
It can mean long timeouts if the client requests an address that is not
in the pool. I know I had problems
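In dhcpd.conf terms (a sketch; subnet and range are examples):
# authoritative lets the server NAK requests for leases it doesn't
# know about, instead of staying silent while the client retries
authoritative;
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;
}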
On Sunday 24 December 2006 23:04, Mike Myers [EMAIL PROTECTED]
wrote about 'Re: [gentoo-user] OT, but short':
On 12/23/06, Michael Sullivan [EMAIL PROTECTED] wrote:
Is anyone out there using Residential SBC/Yahoo DSL with dynamic DNS?
I want to know if the ISP blocks incoming requests
doesn't get marked stable mostly because there aren't any
stable requests.
A stabilisation bug is not a requirement. A package should go to stable
after 30 days + no bugs even without a stabilization bug. I have an
impression that developers are _waiting_ for stabilization bugs, which
is wrong.
I've raised
No, it's not a requirement. It's a notice telling the developers that hey,
someone wants it marked stable. Plus, if a user / arch tester does
Actually, everything I said in that last email was a little off.
Stabilization bugs are required because ultimately it is the
architecture team that is going to mark it stable, not the
developer
accordingly to use the proxy. This way, your http requests are sent to
the proxy via the ssh tunnel, and from there go to their intended
destinations using your ISP connectivity.
Actually it is very simple to socks proxy your ssh connection and use
that without any additional software.
ssh -D
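The cut-off command is presumably along these lines (port and host are examples):
# local SOCKS proxy on port 1080, tunnelled through the remote host
ssh -D 1080 user@remote.example.com
# then point the browser at SOCKS host 127.0.0.1, port 1080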
that fits my needs best (b, Slackware :-)?
You won't be bothered by bureaucracy until the day you discover that
that package you want has been left months ago to fall into ignominious
forgottenness among the thicket of bugzilla ebuild requests or the day
you discover your favorite package hasn't
Raphael Melo de Oliveira Bastos Sales wrote:
I have one machine (Machine 1) whose files I need to back up
periodically. I also have another machine (Machine 2) that will hold
the backup. Machine 2 can see (make requests to) Machine 1, but the
opposite isn't true. The network is covered
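Given that only Machine 2 can initiate connections, a pull-style rsync from cron on Machine 2 fits; paths are examples:
# pull Machine 1's data over ssh; --delete mirrors removals too
rsync -az --delete user@machine1:/srv/data/ /backup/machine1/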
This is on 10.0.0.101, right?
You also have 'DISTCCD_OPTS=${DISTCCD_OPTS} --allow 10.0.0.101' on
.103, right?
Yes, the 10.0.0.101 is the one on which compiling is started.
Well, distcc*d* is the daemon that answers requests. It has to be
properly configured (and started) on all the machines before distcc
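On Gentoo that boils down to something like this (untested; adjust the IPs):
# on each helper (e.g. .103): /etc/conf.d/distccd
DISTCCD_OPTS="${DISTCCD_OPTS} --allow 10.0.0.101"
/etc/init.d/distccd start
# on 10.0.0.101: tell distcc which hosts to farm work out to
distcc-config --set-hosts "localhost 10.0.0.103"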
On 6/27/05, Niklas Herder [EMAIL PROTECTED] wrote:
Travis Osterman wrote:
I recently switched from a linksys router to a gentoo-based system and
have gotten along pretty well with it. One of the last things I have
left to figure out is how to get dynamic dns name requests to the
correct
. As well, there are no logs of blocked requests on
those ports.
I'm not very experienced with this aspect of networking. I've set up web
servers before, but never an email server, and I don't even know where to
look for problems.
If anyone requires any config information please let me know, but I
2005.0 and blossom is
running Red Hat 9. I would like to set up something where my wife can
log in on blossom (it's my wife's computer) and it connects blossom to
baby (via the network) I want blossom to act as a dumb terminal, with
all requests made on blossom going to baby. My wife prefers GNOME
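The classic way to get that dumb-terminal behaviour is XDMCP; a sketch, noting that GDM's config location varies by version:
# on baby: enable XDMCP in gdm.conf
[xdmcp]
Enable=true
# on blossom: start a second X session that asks baby for a login
X :1 -query baby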
in the original version.
Please send bug reports and support requests to burbon04 at gmx.de.
For more information please see
http://burbon04.gmxhome.de/linux/CDREncryption.html.
The original author should not be bothered with problems of this version.
cdrecord: Warning: Running on Linux
it slows the connections down noticeably. So I am
wondering if there's a way to implement it such that connection
requests to certain websites will be sent through the university
proxy server while the remainder of the connections are unaffected.
Can this be done with some sort of squid
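Squid can do the split with a parent cache_peer; a sketch with placeholder hostnames, untested:
acl uni dstdomain .university.edu
cache_peer proxy.university.edu parent 3128 0 no-query default
cache_peer_access proxy.university.edu allow uni
never_direct allow uni       # university sites go via the parent proxy
always_direct allow !uni     # everything else goes out directly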
. No IP addresses are involved.
Just for clarification.
arp will do exactly this and arpd can even collect such information
because every machine on a subnet will see all of the requests and
replies.
Regards
Frank
On Wed, 2005-08-31 at 05:50 -0500, John Jolet wrote:
On Aug 31, 2005, at 1:38 AM
PROTECTED] ~ $ openssl s_client -connect pop.gmail.com:pop3s
CONNECTED(0003)
[lots of info snipped]
+OK Gpop ready for requests from 123.45.67.89 n23pf2387435nfc
---snip
For your test case: POP3 is usually on port 110, POP3S is usually on
port 995. If the SSL connection isn't set up on connection
$ tcpdump -vvni ppp0
I can see network activity on my dsl modem led.
Oh, totally normal behaviour. There's a lot of noise on the 'net, you
know ;-) my modem's led blinks continuously due to a lot of incoming
requests to ports like 135 (worms), 4xxx-6xxx (P2P)...
Right before I switched to Gentoo, my
knowing that at least 75
packages still needed to be stabilized in order to compile with gcc-4.1.1 was
that release engineering didn't want to delay the release even further
because of maintainers who didn't respond to stabilization requests in a
timely manner.
They had to stabilize it before
2006/9/11, Peter [EMAIL PROTECTED]:
Today, I got a lot of rebuild requests from emerge after doing:
emerge -puDvt --newuse world
I spot checked a couple of builds, but nothing has changed! Can you help
decipher the following please:
Using the first application that shows up as an example
of those
systems would be able to serve client requests at a given moment.
--
Best regards,
Daniel
ports is determined by your router
(responding to ICMP echo requests) and is for all intents and purposes
irrelevant.
I ran a iptable script and saved the config a long time ago. I don't
know if it is the modem or my iptables that is making me stealthy or
what. I'm just glad that me
system /
Output information may be incomplete.
[...]
Anyone know what might be going on here?
Perhaps the server is not responding... so make it respond :)
If that is the case add some rule to firewall or /etc/hosts to redirect
requests to localhost (use REJECT not DROP
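Concretely, either approach looks like this (the name and address are placeholders):
# /etc/hosts -- point the name at yourself
127.0.0.1   unresponsive.example.com
# or with iptables -- REJECT answers immediately, DROP would hang
iptables -A OUTPUT -d 203.0.113.7 -j REJECT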
it to
respond to service discovery requests. By default CUPS uses
mDNSResponder, which is Apple's implementation; with +avahi it uses
avahi instead. This means any Mac on your network will automatically
see CUPS printers, as will any Linux client with avahi properly
installed. Windows machines
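An easy way to check what is actually being announced, if avahi-utils is installed:
# -r resolves the entries, -t exits after the initial dump
avahi-browse -rt _ipp._tcp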
that
Microsoft would quite happily run you bankrupt with pre-trial requests
to your lawyers, and drag things out with various legal motions,
rather than actually lose the case.
Stroller.
have auth daemons running on ancient
hardware with 256M RAM that have to deal with 100+ auth requests a second and
a 5000-line config file. Despite my needs being complex, any cron daemon
should cope with that without breaking a sweat
--
alan dot mckinnon at gmail dot com
, and in this time it sends a lot of requests to the slapd server,
using anonymous binds. Any idea how I make anonymous binds return attrs such as
groupUid etc?
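If it helps, the slapd.conf access directive for that looks roughly like this (the attribute list is a guess; ACL order matters, so put it before any broader rules):
access to attrs=uid,cn,memberUid
        by anonymous read
        by * read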
On 05/05/2010, at 7:00 AM, Daniel Troeder wrote:
# auth
auth sufficient pam_opie.so no_warn no_fake_prompts
auth
prevents the user from
piping an arbitrary command into ssh (or even using a ssh-invoking
wrapper such as scp or rsync) and getting successfully authenticated on
the server. You are only guaranteed that the server will run tar in
place of whatever remote command the client requests, so
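A forced command is set per key in the server's ~/.ssh/authorized_keys; key shortened, path an example:
command="/bin/tar -cf - /srv/backup",no-port-forwarding,no-pty ssh-rsa AAAAB3... backup@client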
seconds = 0.132 FPS
1 frames in 7.6 seconds = 0.132 FPS
1 frames in 8.2 seconds = 0.122 FPS
1 frames in 7.6 seconds = 0.131 FPS
1 frames in 7.6 seconds = 0.132 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server
:0.0
after 46 requests (46 known processed) with 0
error 11 (Resource temporarily unavailable) on X server
:0.0
after 46 requests (46 known processed) with 0 events remaining.
r...@smoker / #
So, looks like emerge -e world didn't help after all. So what should
I check now? Do I need to get my hammer out?
Dale
a remote Windows box and able to
use it (e.g. if you're viewing XP Pro; XP Home might be able to send viewing
requests, I can't recall how they work).
I'm not 100% clear how you're doing things at the moment. Are you using an X11
connection to view the screen at the moment? If so I would certainly
driver
(2010-10-12) We have a datasheet for the IT8721F. Different from previous
IT87xxF chips (12 mV ADC instead of 16 mV, amongst others). Two requests (Jae
Sammy, Virgil Gheorghiu.) The IT8758E was reported to have the same device
ID, so hopefully they are fully compatible. Support added
. The framerate should be
approximately the same as the monitor refresh rate.
74062 frames in 5.0 seconds = 14812.268 FPS
77502 frames in 5.0 seconds = 15500.350 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server
:0.0
after 57 requests (57 known processed) with 0
with it. It can monitor processes
(if it is running, answering requests, etc), resources (disk,
memory, swap, cpu, i/o), files (content, permissions, checksums),
remote hosts (with some basic protocol checks i.e. http, ssh,
smtp, ftp, mysql, ntp, dns...), it can inform you about problems
(mail, log) and you
for 30
to 45 seconds, while IPV6 DNS requests timed out, and then fall back to
IPV4. I did *NOT* appreciate that.
I guess the kernel will have the kde USE flag next. lol At least
that should be in @system tho. ;-)
Check your profile. Is it kde-desktop? And while you're at it, set
your
, but I think the thrash would be expensive in a normal
system where you've got a sensible amount of RAM.
I do like the reverse proxy idea. Turn Apache into an application
server on localhost and let the reverse proxy deal with the Internet. If
you picked the right proxy multiple requests could
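A minimal shape for that split, assuming Apache on both ends (untested; the port is arbitrary):
# back-end Apache (the application server): loopback only
Listen 127.0.0.1:8080
# front-end proxy vhost: the only thing facing the Internet
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>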
the requests it got
from kmail/akonadi/whatever. Don't remember the details though.
I did check there, and a few other places too such as enabling mail-
related stuff in kdeugdialog and observing console output. I found
nothing that seemed relevant.
Need I mention that NOT chucking pim data away
= 1540.542 FPS
7570 frames in 5.0 seconds = 1513.977 FPS
39883 frames in 5.0 seconds = 7976.487 FPS
49839 frames in 5.0 seconds = 9967.718 FPS
49720 frames in 5.0 seconds = 9943.888 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server :0
after 48 requests (48 known processed
, but the code is there if you feel like giving it
a try.
Where did I write that I am in a position to write such a beast?
I am only taking the liberty of calling this a design flaw in udev.
It needs things from userspace, which are not yet available at the point it
requests them. An initramfs is a workaround for this, not a proper fix.
that if you don't like the
design, you have the option of improving it (or completely replacing it) and
implementing such a design.
Again, it's a complex problem