On 7/16/2013 1:23 PM, Lennart Poettering wrote:
So, Kay suggested we should use BSD file locks for this. i.e. all
tools which want to turn off events for a device would take one on
that specific device fd. As long as it is taken udev would not
Various tools, but most notably partitioners, manipulate disks in such
a way that they need to prevent the rest of the system from racing
with them while they are in the middle of manipulating the disk.
Presently this is done with a hodgepodge of
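The BSD-lock convention described above can be sketched as follows. This is a minimal sketch: a temporary file stands in for the whole-disk device node (e.g. /dev/sda), which would normally require root to open.

```python
import fcntl
import os
import tempfile

# Assumption: a temp file stands in for the whole-disk device node
# (e.g. /dev/sda), so this sketch runs without root.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(path, os.O_RDONLY)
try:
    # Take an exclusive BSD lock on the device fd; per the convention
    # discussed here, udev would skip event processing while it is held.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    # ... repartition / manipulate the disk here ...
finally:
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
os.unlink(path)
```

The lock is advisory: it only works because udev and the partitioning tools agree on the convention.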
On 6/18/2013 2:03 PM, David Zeuthen wrote:
When I was younger I used to think things like this were a good
idea and, in fact, did a lot of work to add complex interfaces for
this in the various components you mention. These interfaces didn't
really
Someone in #debian mentioned to me that they were getting some odd
errors in their logs when running gparted. It seems that several years
ago there was someone with a problem caused by systemd auto mounting
filesystems in response to udev events triggered by gparted, and so as a
workaround,
Lennart Poettering writes:
> Can you file a bug about this? Sounds like something to fix.
Sure.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
Simon McVittie writes:
> The Debian/Ubuntu package for systemd already masks various services
> that are superseded by something in systemd, such as procps.service and
> rcS.service. It used to also mask all the services from initscripts,
> but that seems to have been dropped in version 243-5.
Michael Biebl writes:
> Are you sure?
> Which Ubuntu version is that?
> At least in Debian, /etc/init.d/killprocs is shipped by "initscripts"
> which is no longer installed by default.
20.04. apt-cache rdepends shows:
Reverse Depends:
  sysv-rc
  util-linux
  hostapd
  wpasupplicant
Lennart Poettering writes:
> Look at the logs?
>
> if they are not immediately helpful, consider turning on debug logging
> in systemd first, and then redoing the action and then looking at the
> logs. You can use "systemd-analyze log-level debug" to turn on debug
> logging in PID 1 any time.
I used to just have to add-wants ssh.service to rescue.target and I
could isolate to rescue mode for remote system maintenance without
losing remote access to the server. After an upgrade, even though
ssh.service is wanted by rescue.target, it is still killed if I
isolate. How can I figure out
Lennart Poettering writes:
> What is "killprocs"?
>
> Is something killing services behind systemd's back? What's that
> about?
It's the thing that kills all remaining processes right before shutdown
that we've had since sysvinit days. And also when isolating, I suppose.
Lennart Poettering writes:
> Are you running systemd? If so, please get rid of "killproc". It will
> interfere with systemd's service management.
I see... apparently Ubuntu still has it around. How does systemd handle
it? For instance, if a user logged in and forked off a background
process,
Reindl Harald writes:
> topic missed - it makes no difference if it can hold the power 3
> minutes, 3 hours or even 3 days at the point where it decides "i need to
> shutdown everything because the battery goes empty"
It is that point that really should be at least 3 minutes before power
Reindl Harald writes:
> i have seen "user manager" instances hanging for way too long and way
> more than 3 minutes over the last 10 years
The default timeout is 3 minutes iirc, so at that point it should be
forcibly killed.
Lennart Poettering writes:
> Well, at least on my system here there are still like 20 fragments per
> file. That's not nothin?
In a 100 MB file? It could be better, but I very much doubt you're
going to notice a difference after defragmenting that. I may be the nut
that rescued the old ext2
Chris Murphy writes:
>> It sounds like you are arguing that it is better to do the wrong thing
>> on all SSDs rather than do the right thing on ones that aren't broken.
>
> No I'm suggesting there isn't currently a way to isolate
> defragmentation to just HDDs.
Yes, but it sounded like you
Chris Murphy writes:
> I showed that the archived journals have way more fragmentation than
> active journals. And the fragments in active journals are
> insignificant, and can even be reduced by fully allocating the journal
Then clearly this is a problem with btrfs: it absolutely should not
Lennart Poettering writes:
> You are focussing only on the one-time iops generated during archival,
> and are ignoring the extra latency during access that fragmented files
> cost. Show me that the iops reduction during the one-time operation
> matters and the extra latency during access
Chris Murphy writes:
> But it gets worse. The way systemd-journald is submitting the journals
> for defragmentation is making them more fragmented than just leaving
> them alone.
Wait, doesn't it just create a new file, fallocate the whole thing, copy
the contents, and delete the original?
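The copy-based rewrite described in that question can be sketched roughly like this. This is an illustrative sketch, not journald's actual code; the function name is made up, and the assumption is that preallocating the full size up front lets the filesystem pick contiguous blocks.

```python
import os
import shutil

def rewrite_contiguously(path):
    """Illustrative sketch: preallocate a new file at full size so the
    filesystem can choose contiguous blocks, copy the data over, then
    replace the original. Note the replacement has a new inode number,
    which is one of the objections raised later in this thread."""
    size = os.path.getsize(path)
    tmp = path + ".tmp"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        if size:
            # Reserve all blocks up front (Unix-only call).
            os.posix_fallocate(dst.fileno(), 0, size)
        shutil.copyfileobj(src, dst)
        dst.flush()
        os.fsync(dst.fileno())
    os.rename(tmp, path)  # atomically swap in the rewritten copy
```

Whether the result is actually less fragmented depends on the filesystem honoring the preallocation with a contiguous extent, which is exactly what is being disputed for btrfs here.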
Chris Murphy writes:
> Basically correct. It will merge random writes such that they become
> sequential writes. But it means inserts/appends/overwrites for a file
> won't be located with the original extents.
Wait, I thought that was only true for metadata, not normal file data
blocks? Well,
Chris Murphy writes:
> And I agree 8MB isn't a big deal. Does anyone complain about journal
> fragmentation on ext4 or xfs? If not, then we come full circle to my
> second email in the thread which is don't defragment when nodatacow,
> only defragment when datacow. Or use BTRFS_IOC_DEFRAG_RANGE
Chris Murphy writes:
> It's not interleaving. It uses delayed allocation to make random
> writes into sequential writes. It tries harder to keep file blocks
Yes, and when you do that, you are interleaving data from multiple files
into a single stream, which you really shouldn't be doing.
Lennart Poettering writes:
> inode, and then donate the old blocks over. This means the inode nr
> changes, which is something I don't like. Semantically it's only
> marginally better than just creating a new file from scratch.
Wait, what do you mean the inode nr changes? I thought the whole
Phillip Susi writes:
> Wait, what do you mean the inode nr changes? I thought the whole point
> of the block donating thing was that you get a contiguous set of blocks
> in the new file, then transfer those blocks back to the old inode so
> that the inode number and timestamps of th
Colin Guthrie writes:
> Are those journal files suffixed with a ~. Only ~ suffixed journals
> represent a dirty journal file (i.e. from an unexpected shutdown).
Nope.
> Journals rotate for other reason too (e.g. user request, overall space
> requirements etc.) which might explain this
Colin Guthrie writes:
> I think the defaults are more complex than just "each journal file can
> grow to 128M" no?
Not as far as I can see.
> I mean there is SystemMaxUse= which defaults to 10% of the partition on
> which journal files live (this is for all journal files, not just the
>
What special treatment does systemd-resolved give to .local domains?
The corporate windows network uses a .local domain and even when I point
systemd-resolved at the domain controller, it fails the query without
bothering to ask the dc saying:
resolve call failed: No appropriate name servers or
Silvio Knizek writes:
> So in fact your network is not standards-conformant. You have to
> define .local as a search and routing domain in the configuration of
> sd-resolved.
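The suggested configuration would look roughly like the drop-in below. This is a sketch: the drop-in path and DNS address are placeholders, and by default systemd-resolved reserves .local for mDNS, which is why the explicit domain routing is needed.

```ini
# /etc/systemd/resolved.conf.d/corp-local.conf  (assumed drop-in path)
[Resolve]
DNS=192.0.2.1      # placeholder: your domain controller's address
Domains=local      # make .local a search and routing domain, per the advice above
```

Search domains listed in Domains= also act as routing domains, so queries for *.local should then go to the listed DNS server instead of being handled as mDNS.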
Interesting... so what are you supposed to name your local, private
domains? I believe Microsoft used to (or still does?)
Lennart Poettering writes:
> journalctl gives you one long continuous log stream, joining everything
> available, archived or not, into one big interleaved stream.
If you ask for everything, yes... but if you run journalctl -b then
shouldn't it only read back until it finds the start of the
Maksim Fomin writes:
> I would say it depends on whether defragmentation issues are feature
> of btrfs. As Chris mentioned, if root fs is snapshotted,
> 'defragmenting' the journal can actually increase fragmentation. This
> is an example when the problem is caused by a feature (not a bug) in
>
Dave Howorth writes:
> PS I'm subscribed to the list. I don't need a copy.
FYI, rather than ask others to go out of their way when replying to you,
you should configure your mail client to set the Reply-To: header to
point to the mailing list address so that other people's mail clients do
what
Lennart Poettering writes:
> Nope. We always interleave stuff. We currently open all journal files
> in parallel. The system one and the per-user ones, the current ones
> and the archived ones.
Wait... every time you look at the journal at all, it has to read back
through ALL of the archived
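The interleaved view Lennart describes amounts to a k-way merge over per-file streams that are each internally time-ordered. A minimal sketch, with (timestamp, message) tuples standing in for real journal records:

```python
import heapq

# Assumption: each journal file yields entries already sorted by
# timestamp; tuples stand in for real journal records.
system_log = [(1, "boot"), (4, "network up")]
user_log = [(2, "session opened"), (5, "session closed")]
archived = [(3, "rotation")]

# journalctl-style view: merge every open file into one ordered stream.
merged = list(heapq.merge(system_log, user_log, archived))
```

The merge itself is cheap per entry, but it does require having every file open at once, which is the cost being questioned here.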
Pekka Paalanen writes:
> have you checked your boot ID, maybe it's often the same as the previous
> boot?
Good thought, but it doesn't look like it:
IDX BOOT ID FIRST ENTRY LAST ENTRY
-20 c2a5e3af1f044d79805c4fbdd120beec Wed 2023-05-10
Phillip Susi writes:
> Lennart Poettering writes:
>
>> It actually checks that first:
>>
>> https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L2201
>
> That's what I'm saying: it should have noticed that FIRST and not gotten
Lennart Poettering writes:
> oomd/PSI looks at memory allocation latencies to determine memory
> pressure. Since you disallow anonymous memory to be paged out and thus
> increase IO on file backed memory you increase the latencies
> unnecessarily, thus making oomd trigger earlier.
Did this
Michael Chapman writes:
> What specifically is the difference between:
>
> * swap does not exist at all;
> * swap is full of data that will not be swapped in for weeks or months;
That's the wrong question. The question is, what is the difference
between having NO swap, and having some swap
Lennart Poettering writes:
> It actually checks that first:
>
> https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L2201
That's what I'm saying: it should have noticed that FIRST and not gotten
to the monotonic time check, but it didn't.
Lennart Poettering writes:
> We want that within each file all records are strictly ordered by all
> clocks, so that we can find specific entries via bisection.
Why *all* clocks? Even if you want to search on the monotonic time, you
first have to specify a boot ID within which that monotonic
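The bisection being discussed only works because entries within one file are strictly ordered by the key you search on. A minimal sketch, with (boot_id, monotonic) tuples standing in for journal entry headers; note the boot IDs here are chosen so they sort in boot order, which real random boot IDs would not:

```python
import bisect

# Assumption: entries within one file are strictly ordered by
# (boot sequence, monotonic timestamp), as the thread discusses.
entries = [
    ("boot-1", 10), ("boot-1", 50),
    ("boot-2", 5), ("boot-2", 99),
]

def find_entry(entries, boot_id, monotonic):
    """Binary-search for the first entry at or after the given
    monotonic time within the given boot."""
    i = bisect.bisect_left(entries, (boot_id, monotonic))
    if i < len(entries) and entries[i][0] == boot_id:
        return entries[i]
    return None
```

This also illustrates the objection in the message above: to search by monotonic time at all, the boot ID must be fixed first, so strict ordering by every clock independently is a stronger requirement than the lookup needs.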
Every time I reboot, when I first log in, journald (253.3-r1)
complains that the monotonic time went backwards, rotating log file.
This appears to happen because journal_file_append_entry_internal()
wishes to enforce strict time ordering within the log file. I'm not
sure why it cares about the