The “Year of the Linux Desktop” has arrived with RHEL7, but in a very perverted form
If you want to see a classic illustration of the second-system effect, look no further ;-)
The second-system effect (also known as second-system syndrome) is the tendency of small, elegant, and successful systems to
be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence.[1]
The phrase was first used by Fred Brooks in his book The Mythical Man-Month, first published in 1975. It described the jump
from a set of simple operating systems on the IBM 700/7000 series to OS/360 on the 360 series, which happened in 1964.[2]
Any idiot who knows C well and is diligent enough to spend a lot of time debugging his creation can write a replacement
for the old System V init. That's a given. The problem is that it is very difficult to create the right architecture for such a new
subsystem: Unix is almost 50 years old, and preserving the conceptual integrity
of the system is a challenge. You need to take into account many factors (including history and the Unix philosophy) to get it right, and that is possible only
for really talented system architects. The simplest and most stupid solution would be to convert the subsystem from a text-file-based
one (which, according to the Unix philosophy, is the right way to accomplish the task) into a binary-based one (Apple's idea of an interpreter for
startup-script specifications).
The other question is why bother doing it for servers, on which the "good old init" is still
"good enough" and "simple enough" to make perfect sense. The possibility of enhancing it with a PHP-style
interpreter for header directives provided as pseudocomments was never exhausted. And this is a simpler and more elegant approach
than the primitive and crude solution systemd enforced on Unix sysadmins.
The other important question is why rock the old boat for a solution that is attractive only for
desktops? For servers the amount of time spent loading the OS is non-critical. And during installation, serial invocation of daemons is actually a huge advantage, very helpful in
debugging complex situations.
What I want to say is that systemd is a second-rate solution and its author is a second-rate (although very prolific)
programmer. That's indisputable. And the fact that the Red Hat brass pushed this solution is a clear sign of the degeneration of Red Hat
management.
On the other hand, systemd represents an alarming, but pretty understandable,
trend in the Linux world -- a Windows-inspired trend toward "push button" users and sysadmins. Systemd is becoming
the Svchost of Linux -- the component that most veteran
Linux sysadmins do not want to use, but which is enforced by Red Hat's dominant position in the Linux world.
The initial blowback on systemd from the community seems to have died down into silent resignation.
As of the end of 2020, RHEL6 lost support and systemd became
the de facto standard in the Linux world. Of course, there are still a couple of distributions that allow you not to use it (Devuan is one; Gentoo
also has this option), but the mainstream flavors of Linux such as Red Hat, Suse and Debian are firmly in the systemd camp. That's a very
sad and alarming situation. A clear sign of the degeneration of architectural thinking in the Linux community.
But systemd dominance in no way means that the problems with systemd disappeared. They simply became "features." So this page
tries to sort out where the friction comes from resistance to change (systemd obsoletes a lot of good Linux books
-- this type of vandalism is never welcomed), and where it comes from bad engineering of systemd
itself and the associated subsystems. The key argument against systemd is not that
it changes init, but the bad architecture of systemd and the weakness of
Lennart Poettering as a Unix architect
(and with systemd he was elevated to this role whether he wanted it or not; see
Systemd Security Flaws). Furthermore, disliking "change for change's sake"
is a valid argument, especially if that change invalidates a system administrator's previously acquired knowledge and experience.
The key argument against systemd is not that
it changes init, but the bad architecture of systemd and the weakness of Lennart Poettering as a Unix architect
Initially there were segments of the Linux community that did not accept systemd. For example, several veteran Unix sysadmins
forked Debian and created
Devuan Linux -- a distribution without systemd. That was such an obvious slap in Red Hat's face that they allocated more resources
to prove that they can dictate their will. So, paradoxically, Devuan had a tremendously positive effect on
systemd development, forcing Red Hat to double its efforts ;-)
BTW the backlash led to significant modification
of the initial systemd design and some compromises (imitation of runlevels, forwarding of logs to rsyslog in RHEL7, etc). Generally,
Lennart Poettering's handling
of the syslog problems has shown that he is a "Linux desktop guy": a person who has little or no clue about the datacenter environment, and
does not want to learn. He went in a completely wrong direction if we are talking strictly about Linux servers. And to add
insult to injury, his solution was clearly second-rate.
In any case, after seven years (or so) of frantic "development" (which mostly means "debugging"), systemd in RHEL 7 got into some
kind of semi-stable state (although patches to systemd are probably still the most frequently issued patches in RHEL 7; each
time you patch Red Hat, systemd is patched too). But the
major bugs are always connected with the weaknesses in the architecture, and those can't be removed. They need to be accepted as features.
One interesting innovation of systemd in the area of obscure bugs is "timing bugs". For example, a lot of people have observed with
RHEL7 and CentOS7 that they can't install the OS over a slow link (over VPN). It hangs with strange messages like "Starting Terminate
Plymouth Boot Screen." (Plymouth
is a project from Fedora, now listed among
freedesktop.org's official resources, providing a flicker-free graphical boot process; introducing subtle errors as a free gift.
Why it needs to be present on a server is completely unclear.) Search "installation hangs on plymouth" or "boot hangs on plymouth"
for recent cases. I observed this behaviour via VPN with CentOS/RHEL 7.7 because I do not do such things often, and at
this point I had already forgotten how to use VFLASH and NFS/HTTP access to the full ISO in Dell DRAC (you can't remember all those relevant
things, no matter how good a memory you have). I thought I would initiate the install, go to lunch, and return to the Anaconda
timezone selection screen. I was wrong ;-)
In my case the debugging
console showed that systemd had entered an infinite loop. But it installed OK from local media using the same ISO. Also, parallel invocation of daemons is
bad for debugging, and several debugging methods previously available are disabled in systemd because of it. Access to TTYs is now sporadic: CTRL+ALT+F2 and similar combinations often do not work when your boot has problems. The situation is
really bad if you are forced to work with DRAC or ILO via VPN, because the deficiencies of DRAC and ILO multiply the deficiencies
of systemd.
Another interesting fact is that the size of the minimal ISO in CentOS7 is almost one gigabyte. There is not much "minimal" in
such a size. The boot ISO in RHEL also doubled in size in comparison with RHEL6. Of course, to install from local NFS
or HTTPS you can trim it by deleting packages, but still the tendency is alarming. In the old days you were able to boot Linux from a
single floppy :-) and a full distribution was one or two CDs (not DVDs, CDs).
If you try to browse systemd-related articles, you will see that the most active resistance occurred in 2014-2015. After
that it dissipated to a very few discussions. Maybe because everything that should be said had already been said, and you can't turn RHEL
into a non-systemd distro. Right now the resistance is concentrated in minimalist distributions, which can't afford to carry systemd
bloat. Most major distributions such as Suse and Debian adopted systemd because of Gnome. Discussion switched to
systemd and journald exploits, which is a new, rich and technically interesting field (System
Down: A systemd-journald exploit, Hacker News).
Right now the resistance is concentrated in Devuan and in minimalist distributions which can't afford to carry systemd bloat.
Scope creep in systemd definitely leads to new vulnerabilities
(see for example
A Systemd Vulnerability Allows Attackers to Hack Linux Machines via a Malicious DNS Response, June 29, 2017). In other words, due
to its complexity, systemd proved to be an excellent source of new zero-day vulnerabilities and a "perfect target" for attacks that
allow a new generation of stealth exploits. Especially by nefarious state actors, who have the resources to exploit systemd's architectural
weaknesses and overreach. As such, it is immensely attractive to intelligence agencies. I am sure that the NSA
and similar agencies are actively studying this possibility and probably have found some interesting avenues to exploit systemd's architectural
weaknesses.
The three bugs include two different memory corruption flaws (CVE-2018-16864 and CVE-2018-16865), and an out-of-bounds flaw (CVE-2018-16866).
At first, the researchers accidentally discovered CVE-2018-16864 while working on an exploit for a previously disclosed
vulnerability,
Mutagen Astronomy. Then, when they were busy on its PoC, they spotted the other two bugs.
...Interestingly, the bugs had been around for quite a few years. According to their findings, CVE-2018-16864 came up in April
2013, and CVE-2018-16865 in December 2011. They then became exploitable in February 2016 (systemd v230) and April 2013 (systemd v201).
The most recent of these is CVE-2018-16866, which was introduced in June 2015 (systemd v221). Though it received a patch earlier
in August 2018, the researchers call it an inadvertent patch.
Also, as journald creates another level of indirection before logs can get to the remote server, it represents a perfect filtering
mechanism for hiding exploits.
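The forwarding step that keeps rsyslog in the loop is itself just more key=value configuration. A minimal /etc/systemd/journald.conf sketch (option name per journald.conf(5)) would be:

```ini
[Journal]
# Forward a copy of every message to the local syslog socket,
# so rsyslog can ship logs to a remote server as before.
ForwardToSyslog=yes
```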
Systemd is not the first attempt to solve the problems inherent in SysV init. There were at least two notable attempts before it: the PHP-style
approach, where the comments in the header of each init file contain instructions for dependency handling on startup and shutdown,
and the Solaris 10 "shadow files" solution, where for each "classic" init file there is a shadow file which, if it exists, contains instructions
for handling dependencies and other necessary stuff. Both of those approaches accept "classic" init files.
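To illustrate the PHP-style approach: here is a sketch of a classic RHEL-era init script for a hypothetical "exampled" daemon. The pseudo-comment header is the mechanism in question -- chkconfig and LSB tooling parse those comments for runlevels and dependencies, while the body remains an ordinary bash script.

```shell
#!/bin/bash
# Hypothetical "exampled" init script sketch.
#
# chkconfig: 2345 80 20
# description: Example daemon
### BEGIN INIT INFO
# Provides:          exampled
# Required-Start:    $network $syslog
# Required-Stop:     $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO

# Dispatch on the requested action, as classic init scripts do.
do_action() {
    case "$1" in
        start) echo "Starting exampled" ;;
        stop)  echo "Stopping exampled" ;;
        *)     echo "Usage: $0 {start|stop}"; return 2 ;;
    esac
}

do_action "${1:-start}"
```

The point is that the header is pure data for the dependency resolver, while everything else stays programmable bash.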
The chronology of the major alternative-to-System-V-init projects is as follows:
runit was one of the first attempts to replace System V init that got some
traction. Version 1.0 was released in 2004. It was designed to tackle service management problems, allowing a system administrator
to query the state of, and change the set of, running services. Runit features parallelization of the startup of system services, which
can speed up the boot time of the operating system.[4]
Solaris 10 was released in January 2005, so
Solaris
SMF is almost 15 years old. That was the first replacement of System V init in what was at the time the most popular Unix flavor after
Linux. It is based on the idea of creating a "shadow file" for each classic init script, which, if present, specifies dependencies and allows
additional tasks to be handled. If no additional information about dependencies is provided, the behaviour is the same as with SysV init.
As Solaris
SMF preserved programmability, this approach is architecturally more flexible than the approach adopted by systemd, which, by removing
programmability, essentially puts a straitjacket on the logic of the init file (it is fully contained in the systemd daemon itself).
Apple's launchd, like systemd, provides a single
daemon that replaces the traditional Unix services of init, cron, inetd, and so on. It looks like systemd was inspired mainly by
launchd, as both daemons share similar architectural problems. It was introduced in OS X Tiger, which was
released on April 29, 2005. According to Wikipedia (launchd - Wikipedia):
There are two main programs in the launchd system: launchd and launchctl.
launchd manages the daemons at both a system and user level. Similar to xinetd, launchd can start daemons on demand. Similar
to watchdogd, launchd can monitor daemons to make sure that they keep running. launchd also has replaced init as
PID 1 on macOS and as a result it is responsible
for starting the system at boot time.
Configuration files define the parameters of services run by launchd. Stored in the LaunchAgents and LaunchDaemons subdirectories
of the Library folders, the property list-based
files have approximately thirty different keys that can be set.
launchd itself has no knowledge of these configuration files or any ability to read them - that is the responsibility of "launchctl".
launchctl is a command line application which talks to launchd using IPC and knows how to parse the
property list files used to describe launchd jobs,
serializing them using a specialized dictionary protocol that launchd understands.
launchctl can be used to load and unload daemons, start and stop launchd controlled jobs, get system utilization statistics
for launchd and its child processes, and set environment settings.
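For comparison with systemd unit files, a minimal launchd property list might look as follows (the job label and program path are hypothetical; key names are from launchd.plist(5)):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique job label; hypothetical example -->
    <key>Label</key>
    <string>com.example.exampled</string>
    <!-- Program to run, as argv -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/exampled</string>
    </array>
    <!-- Start at load time and restart the job if it dies -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Such a file would be loaded with launchctl load ~/Library/LaunchAgents/com.example.exampled.plist. Note the same design decision systemd later copied: all service logic is declarative data interpreted by the daemon, not a script.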
We can also analyze systemd from the position of a language designer, as it strives to replace bash as the language in which init
scripts are written.
Rephrasing a humorous observation once made by Philip Greenspun (the so-called
Greenspun's Tenth Rule of Programming: "Any sufficiently complicated
C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of
Common Lisp."), we can say that the systemd implementation contains an ad-hoc, informally
specified, bug-ridden and slow implementation of half of bash. At least by the LOC metric that looks to be very true.
In a way, the introduction of systemd signifies the "Microsoftization" of Red Hat. This daemon replaces init in a way
I find problematic: the key idea is to replace the language in which init scripts were written (which provides programmability and has
its own flaws, which were still fixable) with a fixed set of all-singing, all-dancing parameters in so-called "unit files",
removing programmability. Systemd is essentially an implementation of an interpreter of the implicit non-procedural language defined
by those parameters, written by a person who has never written an interpreter for a programming language in his life. It supports over a
hundred parameters, which serve as the keywords of this ad-hoc language, which we can call the "Poettering init language." You can verify
the number of introduced keywords yourself, using a pipe like the following one:
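A rough sketch of such a pipe (shown here over a small embedded sample unit file, since the exact set of directives varies between systemd versions; on a real system you would feed it the unit files under /usr/lib/systemd/system):

```shell
# Count the distinct "Key=" directives -- the "keywords" of the unit-file
# language. The sample below is embedded to keep the sketch self-contained.
sample='[Unit]
Description=Demo service
After=network.target

[Service]
ExecStart=/bin/true
Restart=on-failure
ExecStart=/bin/true'

printf '%s\n' "$sample" | grep -oE '^[A-Za-z]+=' | sort -u | wc -l   # 4 distinct directives
```

Run over a full distribution's unit files, the same pipe yields a count well into the hundreds.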
Yes, this is "yet another language" (as if sysadmins did not have enough of them already), and a badly designed judging from the
total number of keywords/parameters used. The most popular are the following 35 (in reverse frequency of occurrence order):
Another warning sign about systemd is the outsize attention paid to the subsystem that loads/unloads the initial set of daemons
and then manages the working set of daemons, replacing init scripts and runlevels with a different and dramatically more
complex alternative. Essentially they replaced a reasonably simple and understandable subsystem with some flaws by a complex and
opaque subsystem with a non-procedural language defined by a multitude of parameters, while the logic of application of those parameters
is hidden within the systemd code. Which, due to its size, has a lot more flaws, as well as side effects, because the proliferation of those
parameters and sub-parameters is a never-ending process: the answer to any new problem discovered in systemd is the creation of additional
parameters or, in the best case, modification of existing ones. Which looks to me like a self-defeating, never-ending spiral of adding
complexity to this subsystem, which requires incorporating into systemd many things that simply do not belong in init, thus making
it an "all-singing, all-dancing" super-daemon. In other words, systemd expansion might be a perverted attempt to solve problems
resulting from the fundamental flaws of the chosen approach.
This fascinating story of personal and corporate ambition gone awry still awaits its researcher. When we think about
some big disasters like, for example, the Titanic, it is not about the outcome -- yes, the ship sank, lives were lost -- or even the obvious
cause (yes, the iceberg), but about learning WHY. Why were the warnings ignored? Why was such a dangerous course chosen? One additional,
and perhaps for our times the most important, question is: why were the lies that were told believed?
At the same time, Poettering's attempt to create a language for specifying dependencies and other additional functionality of
init scripts might be reused in projects designed to restore the programmability of init files. Existing unit files can definitely
be re-implemented as shadow files in Solaris 10 fashion, or as pseudo-comments in the init file header in PHP-style fashion
(init files in this case need to be executed by a special processor -- let's call it, in RHEL4-RHEL6 fashion, service -- which first
analyses and executes the actions in the header and then invokes bash, rather than by plain-vanilla bash).
systemd raised the complexity level and made RHEL7 drastically different from RHEL6, while providing nothing constructive
in return. Moreover, in complex systems it is actually counterproductive to try to pursue goals directly, because the environment
is too complicated to allow mapping a straight path. Some measurements of systemd performance suggest that boot time did not improve
that much even on laptops. So if we are talking about the initial goal, systemd might well be viewed as a failure (Boot
time SysV vs Systemd - systemd system takes ~2 seconds longer!). One possible reason is that systemd is much more bloated than
SysV init.
Benefits of systemd for servers are highly questionable
For servers, systemd can be viewed mostly as an attempt to solve problems that do not exist. Servers do
not move from one network connection to another; 99% are connected to the network via Ethernet cables with static IPs. Servers never hibernate,
the set of running daemons is mostly static, they do not need audio, and most are booted less than once a month, mostly because
of patching or the installation of additional software or hardware. In other words, the server and the laptop are quite different
types of Linux installation in their key uses.
The boot time does not matter for servers at all (most are rebooted just a couple of times a year).
The distance between RHEL6 and RHEL7 is approximately the same as the distance between RHEL6 and Suse, so we can speak of RHEL7
as a new flavor of Linux and of the introduction of a new flavor of Linux into the enterprise environment. Which, as any new flavor of Linux does,
raises the cost of system administration (probably by around 20-30%, if the particular enterprise is using mainly RHEL6 with some
SLES instances).
Systemd and other changes made RHEL7 as different from RHEL6 as the Suse distribution is from Red Hat. Which,
as any new flavor of Linux does, raises the cost of system administration (probably by around 20-30%, if the particular enterprise
is using RHEL6 and Suse12 right now).
Catering to "GUI-dependent" sysadmins is another important trend. Such sysadmins actually view their system as an appliance that performs
a certain set of tasks -- as a "black box" -- not caring how this is actually done. This view is in drastic contrast with the traditional
view of Unix/Linux as a software development platform that values elegance and simplicity. For "GUI-dependent" sysadmins, systemd
might even represent an improvement over the previous, rather chaotic situation with init scripts.
Imagine a language in which both grammar and vocabulary change every decade. Add to this that the syntax is complex, the vocabulary
is huge, and each verb has a couple of dozen suffixes that change its meaning, sometimes drastically. This is the "complexity trap"
situation that we have in enterprise Linux. You can learn some subset when you work closely with a particular subsystem
(package installation, networking, nfsd, httpd, sshd, Puppet/Ansible, Nagios, and so on
and so forth), only to forget vital details after a couple of quarters, or a year. I have a feeling that RHEL is spinning out of control
at an increasingly rapid pace. Many older sysadmins now have the sense that they can no longer understand the OS they need to manage
and have been taken hostage by the desktop enthusiasts faction within Red Hat.
To compensate for your inability to remember all the necessary information, you now need to create and maintain your own
personal knowledgebase (most often a set of files, a simple website, or
a wiki). Which takes a lot of time and effort, further increasing the overload. RHEL7 brings with it a more severe level of inability
to remember the set of information necessary for productive work than previous versions did. Now "the basic set" is way too large for mere
mortals, especially for sysadmins who need to maintain one additional flavor of Linux (say, Suse Enterprise). Any uncommon task
becomes a research project: you need to consult man pages, as well as Web pages on the topic, such as documents on the Red Hat portal, Stackoverflow
discussions, and other sites relevant to the problem at hand. And that is often true even when you already performed the same task
in some distant past.
Even worse is the "primitivization" of your style of work that results from overcomplexity. Sometimes you discover an interesting
and/or more productive way to perform an important task. If you do not perform this task frequently and do not write it up, it will
soon be displaced in your memory by other things: you will completely forget about it and degrade to the "basic" scheme of doing things.
This is very typical for any environment that is excessively complex. That's probably why so few enterprise sysadmins these days have
their personal .profile and .bashrc files and often simply use the defaults.
RHEL6 was complex enough to cause problems. But RHEL7 increased complexity to such a level that it became painful to work with, and
especially to troubleshoot complex problems in. That's probably why the quality of Red Hat support deteriorated so much (it essentially
became a referencing service to Red Hat advisories) -- they are overwhelmed and can no longer concentrate on a single ticket in
the river of RHEL7-related tickets that they receive daily.
All this looks more like the introduction of a new flavor of Linux than a version upgrade. Look, for example, at the
recovery of a forgotten sysadmin password in RHEL7 vs RHEL6. It is appealing only to "click-click-click" sysadmins, and that was
the idea (pandering to the lowest common denominator is often a winning policy in the commercial world).
RHEL7 looks more like a new flavor of Linux than a version upgrade.
It might well be that the resulting complexity crossed some "red line" (as in "the straw that broke the camel's back") and
instantly started to be visible and annoying to vastly more people than before. In other words, with the addition of systemd, quantity
turned into quality.
RHEL7 might well have created the situation in which overcomplexity started to be visible and annoying to vastly
more people than before (as in "the straw that broke the camel's back"). In other words, with the addition of systemd, quantity turned
into quality.
Currently customers are disingenuously assured that the difficulties with systemd (and RHEL7 in general) are just temporary, until
the software can be improved and stabilized. While some progress was made, that day might never come, due to the architectural flaws
of such an approach and the resulting increase in complexity, as well as the loss of flexibility, as programmability is now
more limited. If you run the command find /lib/systemd/system | wc -l on a "pristine" RHEL system (just after the installation),
you get something like 327 unit files. Does this number raise some questions about the level of complexity of systemd? Yes it
does. It's more than three hundred files, and with such a number it is reasonable to assume that some of them might have some hidden
problem in the generation of the correct init-file logic on the fly.
You can think of systemd as a "universal" init script that is customized on the fly by the supplied unit-file parameters. Previously,
part of this functionality was implemented as a PHP-style pseudo-language within the initial comment block of a regular bash script.
While that implementation was very weak (it was never written as a specialized interpreter with a formal
language definition, diagnostics and such), this approach was not bad at all in comparison with the extreme, "everything is a parameter"
approach taken by systemd, which eliminated bash from the init-file space. And it might make sense to return to such a "mixed" approach
in the future on a new level, as in a way systemd provides the foundation for such a language. The parameters used to deal with dependencies
and such can be generated and converted into something less complex and more manageable.
One simple and educational experiment that shows the brittleness of the systemd approach is to replace, on a freshly installed
RHEL7 VM, /etc/passwd, /etc/shadow and /etc/group with the files from RHEL 6 and see what happens during the reboot
(BTW such an error can happen in any organization with novice sysadmins who are overworked and want to cut corners during the installation
of a new system, so such behaviour is not only of theoretical interest). Another experiment, not related to systemd but highly educational in
its own way, would be to delete all symlinks from the RHEL 7 root directory and then try to recover from this error.
Now about the complexity: the growth of the codebase (in lines of code) is probably more than tenfold. I read that systemd has
around 100K lines of C code (or 63K if you exclude journald, udevd, etc). In comparison, sysvinit has about
7K lines of C code. The total number of lines in all systemd-related subsystems is huge, and by some estimates is close
to
a million (systemd Breached One Million Lines Of Code In 2017 - Phoronix
Forums). It is well known that the number of bugs grows with the total number of lines of code. At a certain level of complexity "quantity
turns into quality": the number of bugs becomes infinite, in the sense that the system can't be fully debugged, due to the intellectual limitations
of the authors and maintainers in understanding the codebase, as well as the gradual deterioration of its conceptual integrity with time.
At a certain level of complexity "quantity turns into quality": the number of bugs becomes infinite, in the sense that the system
can't be fully debugged, due to the intellectual limitations of the authors and maintainers in understanding the codebase, as well as the gradual
deterioration of its conceptual integrity with time.
It is also easier to implant backdoors in complex code, especially in privileged complex code. In this sense a larger init
system means a larger attack surface, which at the current level of Linux complexity is already substantial, with never-ending security
patches for major subsystems (look at the security patches in
CentOS Pulse Newsletter, January 2019 #1901),
as well as the parallel stream of zero-day exploits for each major version of Linux, on which such powerful organizations as the NSA are working
day and night, as this is now a part of their toolbox. As the same systemd code is shared by all four major Linux distributions
(RHEL, SUSE, Debian, and Ubuntu), systemd represents a lucrative target for zero-day exploits.
As boot time does not matter for servers (which are often rebooted just a couple of times a year), systemd raised the complexity level
and made RHEL7 drastically different from RHEL6, while providing nothing constructive in return. The distance between RHEL6 and
RHEL7 is approximately the same as the distance between RHEL6 and Suse, so we can speak of RHEL7 as a new flavor of Linux and of the introduction
of a new flavor of Linux into the enterprise environment. Which, as any new flavor of Linux does, raises the cost of system administration
(probably by around 20-30%, if the particular enterprise is using mainly RHEL6 with some SLES instances).
Any new flavor of Linux raises the cost of system administration.
Probably by around 20-30%, if the particular enterprise is using RHEL6 and Suse12 along with RHEL 7.
Actually, Lennart Poettering is an interesting Trojan horse within Red Hat. This is an example of how one talented, motivated and
productive Apple- (or Windows-) desktop-biased C programmer can cripple a large open source project, facing no organized resistance.
Of course, that means that his goals align with the goals of Red Hat management, which is to control the Linux environment in
a way similar to Microsoft -- via complexity (Microsoft can be called the king of software complexity), providing lame
sysadmins with GUI-based tools for "click, click, click" style administration. Also, standardization on GUI-based tools for administration
provides more flexibility to change internals without annoying users -- the inner workings of tools they do not understand and are
not interested in understanding.
And despite wide resentment, I did not see the slogan "Boycott RHEL 7" too often, and none of the major IT websites joined the "resistance".
The key to shoving systemd down the developers' throats was its close integration with Gnome (both Suse and Debian/Ubuntu adopted
systemd). It is evident that the absence of an "architectural council" in projects like Linux is a serious weakness. It also suggests
that developers from the companies representing the major Linux distributions became the uncontrolled elite of the Linux world, a kind of "open source
nomenklatura", if you wish. In other words, we see the "iron
law of oligarchy" in action here.
We're an empire now, and when we act, we create our own reality.
And while you're studying that reality—judiciously, as you will—we'll act again, creating other new realities, which you
can study too, and that's how things will sort out.
We're history's actors … and you, all of you, will be left to just study what we do."
In previous versions of RHEL (and all other Linux and Unix flavors), if you exited from your terminal session (or it was killed),
all processes which were started within this terminal session would be killed too, even if you were running them in the background.
In the case of systemd, putting a process into the background is somewhat similar to putting the process into the background plus prefixing
it with the nohup command in "old" Linuxes. nohup detaches
the process (daemonizes it), preventing the shell from killing child processes on exit.
nohup behaviour is now the default for background processes on systems with systemd, like RHEL7, in case you
disconnect the terminal. Your background processes will survive the terminal disconnect, but in some kind of "managed zombie" stage:
they are not like screen sessions, which you can re-attach to any new terminal you wish. You can't reattach them to a different terminal,
and a new session for the same user does not list them in the output of the jobs command.
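Which of the two behaviours you actually get is governed by logind (upstream flipped the default toward killing user processes on logout in systemd 230, and distributions patch it differently). The relevant knob, per logind.conf(5), is a sketch like:

```ini
[Login]
# yes: kill all processes of a session when it ends (tmux/screen die too)
# no:  let background processes survive logout, as with classic nohup
KillUserProcesses=no
```

For a single user, loginctl enable-linger <user> is the supported way to let that user's processes outlive the session.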
On the other hand, systemd introduces a plethora of settings which affect programs like tmux, and by default systemd can display
"excessive zeal", killing the tmux session after disconnect from the terminal. That also means that, depending on the settings,
systemd behaviour in this important area can vary from one system to another, creating an Alice in Wonderland situation:
I run 16.04 and systemd now kills tmux when the user disconnects (
summary of the change
).
Is there a way to run tmux or screen (or any similar program) with systemd 230? I
read all the heated discussion about the pros and cons of the behaviors, but no solution was suggested.
Based on @Rinzwind's answer and inspired by a
unit description
the best I could find is to use TaaS (Tmux as a Service) - a generic detached instance of tmux one reattaches
to.
You need to set the Type of the service to forking, as explained here.
Let's assume the service you want to run in screen is called minecraft. Then you would open
minecraft.service in a text editor and add or edit the entry Type=forking under the section
[Service].
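A minimal sketch of such a unit; the service name, the run.sh path, and the minecraft user are this example's assumptions, not anything prescribed by systemd:

```ini
# /etc/systemd/system/minecraft.service -- a sketch, adjust paths to taste
[Unit]
Description=Minecraft server in a detached screen session
After=network.target

[Service]
Type=forking
# screen -dmS forks and detaches, which is exactly why Type=forking is needed
ExecStart=/usr/bin/screen -dmS minecraft /opt/minecraft/run.sh
ExecStop=/usr/bin/screen -S minecraft -X quit
User=minecraft

[Install]
WantedBy=multi-user.target
```

You can then reattach at any time with `screen -r minecraft` as that user, which is the whole point of the "TaaS" approach.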
As for simplicity, systemd represents a blatant violation of the KISS principle. In a way it signifies the level of degeneration
of Unix developers since the days of Ken Thompson, Dennis Ritchie, and
Bill Joy.
Due to the current level of overcomplexity of Linux, finding ways to simplify sysadmin tasks has merit, and GUI interfaces are not
bad per se. It is now simply impossible for a mere mortal to remember everything needed for sysadmin work on the command line: there
are way too many utilities with too many options and too many special cases. So some crutches/helpers are quite welcome. Nobody now
remembers all the details of the hundred-plus command line utilities the sysadmin needs to use. But they should not interfere with the
"classic" way of doing things in Unix and replace it with a Windows-style way of doing things. Changes to the "classic way of doing things"
are bad when they essentially betray the origin and philosophy of Unix, which emerged as a developer-friendly OS, not so much a user- or sysadmin-friendly
OS, and which provided the first successful component model via its set of utilities and pipes (so-called filters).
But when you need to remember several dozen settings, even a GUI has limited value, especially if all those settings interact
with each other in the most unobvious ways. For example, how many MS Word users know how to use style sheets in modern Word? (In
early versions of MS Word, style sheets were a separate file and could be used with multiple documents and edited manually, much
like CSS in HTML.) And the ability to use stylesheets is the fundamental feature of MS Word which in the past secured its win over WordPerfect
(which dominated this area in MS DOS days).
Another important thing is that systemd makes the treasure trove of literature, literally hundreds of books published before, say,
2015, much less useful. Especially books related to understanding and troubleshooting the boot process. That's vandalism, pure
and simple.
Anonymous Coward writes: on Monday December 11, 2017 (#55715013)
Hint: Your side is just as stupid as your opposing side. There is no sane or reasonable, let alone sensible side. Because that
is how Americans are. At least it is beyond their *tiny* mental box. Regarding systemd, I state *both* A and B:
Monolithic "frameworks" have always been a stupid idea. Because they disable you from plugging them into *your*
system, and force you to plug into *theirs*. Because they want to dominate you! And they are mutually exclusive as a result
of that.
Traditional init systems are very limited and badly limiting nowadays. Like still using DOS as the underpinnings
of your actual system. A more generic event/trigger system is much more sensible.
THE PROBLEM IS: That systemd throws away what's good about traditional init systems (like "everything
is a file"; modularity; being able to do things with a simple file manager, text editor and maybe a script).
It could have done the event/trigger thing *without* sacrificing modularity (tools that do *one* thing, and do it right!).
It could have acted less like a dominatrix on a power trip, swallowing everything. The base ideas were good. The personality
of the way it was implemented, was that of a complete egocentric psychopathic asshole with a God complex. Give me a sane
eventd, and I will ditch the old init system before you can blink.
Anonymous Coward on Monday December 11, 2017 @12:56AM (#55713867)
Re: Ah yes the secret to simplicity (Score:5, Informative)
You've moved having a basic understanding of the boot process, and the ability to fix things, from having a decent
knowledge of bash to being a C wizard.
You've broken decades of understanding the boot process.
It breaks KISS, as it doesn't simply do startup. Hell, it does ntpd.
It breaks a lot of the *concept* of Unix. Maybe to something preferred by a lot of people - but it also turns it into
an alien mess to a lot of other people.
...Systemd creates a dependency mess which means it cannot be replaced by simpler things, which wasn't the case before systemd.
Introduction of systemd presupposes the ability of Red Hat to unilaterally adopt and enforce different, "anti-Unix" if you wish,
architectural decisions at any layer, not only on the startup and logging layers, which systemd tries to "improve". Red Hat
has approximately a 60% share of commercial Linux installations, so it is the dominant Linux distribution in this space. The Trojan horse
for pushing systemd was Gnome, a desktop also developed by Red Hat. As Gnome became "systemd-dependent", other distributions that
ship Gnome faced two choices: to discard it, or to comply. Here is an assessment of the situation:
None of the things systemd "does right" are at all revolutionary. They've been done many times before.
DJB's daemontools,
runit, and Supervisor,
among others, have solved the "legacy init is broken" problem over and over again (though each with some of their own flaws). Their
failure to displace legacy sysvinit in major distributions had nothing to do with whether they solved the problem, and everything
to do with marketing. Said differently, there's nothing great and revolutionary about systemd. Its popularity is purely the result
of an aggressive, dictatorial marketing strategy including elements such as:
Engulfing other "essential" system components like udev and making them difficult or impossible to use without systemd (but
see eudev).
Setting up for API lock-in (having the DBus interfaces provided by systemd become a necessary API that user-level programs
depend on).
Dictating policy rather than being scoped such that the user, administrator, or systems integrator (distribution) has to provide
glue. This eliminates bikesheds and thereby fast-tracks adoption at the expense
of flexibility and diversity.
In other words, Red Hat now represents what can be called the "Linux oligarchy": a narrow circle of developers, mostly located in
corporations and working on Linux full time, who "know best" what is good for the community, and who do not care about the interests
of Linux "deplorables".
Diversification of RHEL licensing and tech support providers
Of course, diversifying licensing away from Red Hat is now a must. Paying for continuing systemd development, while hating
it, is not the best strategy. But the pressure to conform is high, and most people are not ready to drop Red Hat due to the
problems with systemd. So strategies for mitigating the damage caused by systemd are probably the most valuable avenue of action.
One such strategy is diversification of RHEL licensing and providers.
This diversification strategy should first of all include larger use of CentOS and Oracle Linux as more cost-effective alternatives.
The second step is switching to "license groups", in which only one server is licensed with an expensive RHEL license (for example,
a premium subscription) and all the others are used with a minimal self-support license. This plan is better executed with Oracle, as it
has a substantially lower price for a self-support subscription.
Due to the excessive complexity of RHEL7, and the flow of tickets related to systemd, Red Hat tech support has mostly degenerated to the
level of "pointing to the relevant Knowledgebase article." Sometimes the article is relevant and helps to solve the problem,
but often it is just a "go to hell" type of response, an imitation of support, if you wish. In the past (in the time of RHEL 4) the
quality of support was much better, and you could even discuss your problem with the support engineer. Now it is unclear what we are
paying for. That means that it is often better to use alternative providers, which in many cases provide higher quality tech support,
as they are more specialized.
So if you have substantial money (and I mean 100 or more systems to support) you probably should be thinking about a third party that
suits your needs too. There are two viable options here:
RHEL resellers (for example, Dell and HP). When Dell or HP engineers provide support for RHEL, they naturally know their
hardware much better than RHEL engineers. So in the critical area where it is unclear whether a problem is an OS/driver or a hardware problem,
they are easier to work with. Dell actually helps you to compile and test a new driver in such cases (I had one case when
a 4-port Intel card that came with a Dell blade had a broken driver in the regular RHEL distribution, and it needed to be replaced).
They are also noticeably better at debugging complex cases when the server can't start normally. And there are some very tricky
cases here. For example, a problem on a Dell can be connected with the DRAC but demonstrate itself on the OS level.
Alternative distribution vendors. Although this is a little-known fact, and not too heavily advertised, both
Oracle and SUSE support the Red Hat distribution too.
In the past, for large customers, SUSE used to provide a "dedicated engineer" who could serve as your liaison to developers and
tier III support.
With Oracle it is easier to get to an engineer in case of a complex problem than is the case with Red Hat.
First of all, those daemons that are not designed to work with systemd, or have problems with systemd, can be started directly from
cron using the @reboot directive. Stopping them presents some problems, but these can be solved by creating "fake"
systemd entries (see below).
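For example, a crontab entry along these lines (the daemon name and paths are hypothetical) starts the daemon at boot entirely outside systemd's control:

```
# root's crontab (crontab -e); @reboot runs the command once after each boot
@reboot /usr/local/sbin/mydaemon --config /etc/mydaemon.conf >> /var/log/mydaemon.log 2>&1
```

The daemon then lives in cron's session rather than in a systemd unit, so systemd's opinions about service management simply do not apply to it.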
You can use Docker and run an image that does not have the problems with your application that your "native RHEL 7" has, including
images that do not use systemd. In a way, Docker represents a technology which allows the use of Devuan and Debian in an enterprise
environment.
You can also run Xen and multiple images, with some of them representing an OS without systemd, but that increases complexity.
Sometimes you just need to be inventive and add an additional startup script to mitigate the damage. Here is one realistic example (systemd
sucks):
How to umount NFS before killing any processes.
or How to save your state before umounting NFS.
or The blind and angry leading the blind and angry down a thorny path full of goblins.
April 29, 2017
A narrative, because the reference-style documentation sucks.
So, rudderless Debian installed yet another god-forsaken solipsist piece of over-reaching GNOME-tainted garbage on your system:
systemd. And you've got some process like openvpn or a userspace fs daemon or so on that you have been explicitly managing for years.
But on shutdown or reboot, you need to run something to clean up before it dies, like umount. If it waits until too late in the shutdown
process, your umounts will hang.
Here's the rough framework for how to make a service unit that runs a script before shutdown. I made a file /etc/systemd/system/greg.service
(you might want to avoid naming it something meaningful because there is probably already an opaque and dysfunctional service with
the same name already, and that will obfuscate everything):
[Unit]
Description=umount nfs to save the world
After=networking.service
[Service]
ExecStart=/bin/true
ExecStop=/root/bin/umountnfs
TimeoutSec=10
Type=oneshot
RemainAfterExit=yes
The man pages systemd.unit(5) and systemd.service(5) are handy references for this file format. Roughly,
After= indicates which service this one is nested inside -- units can be nested, and this one starts after networking.service
and therefore stops before it. The ExecStart is executed when it starts, and because of RemainAfterExit=yes it
will be considered active even after /bin/true completes. ExecStop is executed when it ends, and because of
Type=oneshot, networking.service cannot be terminated until ExecStop has finished (which must happen within
TimeoutSec=10 seconds or the ExecStop is killed).
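With the file in place, the usual sequence to load and activate it is:

```
systemctl daemon-reload        # make systemd re-read unit files from disk
systemctl start greg.service   # runs ExecStart (/bin/true)
systemctl status greg.service  # should show "active (exited)"
```

The `daemon-reload` step is easy to forget: without it, systemd keeps acting on its cached copy of the unit.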
If networking.service actually provides your network facility, congratulations, all you need to do is systemctl start
greg.service, and you're done! But you wouldn't be reading this if that were the case. You've decided already that you just
need to find the right thing to put in that After= line to make your ExecStop actually get run before your manually-started
service is killed. Well, let's take a trip down that rabbit hole.
The most basic status information comes from just running systemctl without arguments (equivalent to list-units).
It gives you a useful triple of information for each service:
greg.service loaded active exited
loaded means it is supposed to be running. active means that, according to systemd's criteria, it is currently
running and its ExecStop needs to be executed some time in the future. exited means the ExecStart has
already finished.
People will tell you to put LogLevel=debug in /etc/systemd/system.conf. That will give you a few more clues.
There are two important steps about unit shutdown that you can see (maybe in syslog or maybe in journalctl):
That is, it tells you about the ExecStart and ExecStop rules running. And it tells you about the unit going
into a mode where it starts killing off the cgroup (I think cgroup used to be called process group). But it doesn't tell you what
processes are actually killed, and here's the important part: systemd is solipsist. Systemd believes that when it closes its eyes,
the whole universe blinks out of existence.
Once systemd has determined that a process is orphaned -- not associated with any active unit -- it just kills it outright. This
is why, if you start a service that forks into the background, you must use Type=forking, because otherwise systemd will
consider any forked children of your ExecStart command to be orphans when the top-level ExecStart exits.
So, very early in shutdown, it transitions a ton of processes into the orphaned category and kills them without explanation. And
it is nigh unto impossible to tell how a given process becomes orphaned. Is it because a unit associated with the top level process
(like getty) transitioned to stop-sigterm, and then after getty died, all of its children became orphans? If that were the
case, it seems like you could simply add to your After rule.
After=networking.service getty.target
For example, my openvpn process was started from /etc/rc.local, so systemd considers it part of the unit rc-local.service
(defined in /lib/systemd/system/rc-local.service). So After=rc-local.service saves the day!
Not so fast! The openvpn process is started from /etc/rc.local on bootup, but on resume from sleep it winds up being
executed from /etc/acpi/actions/lm_lid.sh. And if it failed for some reason, then I start it again manually under su.
So the inclination is to just make a longer After= line:
Maybe [email protected]? Maybe systemd-user-sessions.service? How about adding all the items from After=
to Requires= too? Sadly, no. It seems that anyone who goes down this road meets with failure. But I did find something which
might help you if you really want to:
systemctl status 1234
That will tell you what unit systemd thinks that pid 1234 belongs to. For example, an openvpn started under su winds
up owned by /run/systemd/transient/session-c1.scope. Does that mean if I put After=session-c1.scope, I would win?
I have no idea, but I have even less faith. systemd is meddlesome garbage, and this is not the correct way to pay fealty to it.
I'd love to know what you can put in After= to actually run before vast and random chunks of userland get killed, but
I am a mere mortal and systemd has closed its eyes to my existence. I have forsaken that road.
I give up, I will let systemd manage the service, but I'll do it my way!
What you really want is to put your process in an explicit cgroup, and then you can control it easily enough. And luckily that
is not inordinately difficult, though systemd still has surprises up its sleeve for you.
So this is what I wound up with, in /etc/systemd/system/greg.service:
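The unit file itself seems to have been lost from this copy of the article. Judging from the description that follows (Type=forking, a script that forks openvpn and mounts NFS with a final exit 0, and umountnfs as the stop action), it must have looked roughly like this; treat every line as a reconstruction, and the startvpn path in particular as a guess:

```ini
# /etc/systemd/system/greg.service -- reconstruction, not the original
[Unit]
Description=openvpn plus nfs, managed my way
After=networking.service

[Service]
Type=forking
# hypothetical wrapper: forks openvpn, mounts the NFS share, then "exit 0"
ExecStart=/root/bin/startvpn
ExecStop=/root/bin/umountnfs
TimeoutSec=10
```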
Because of Type=forking, systemd considers greg.service to be active as long as the forked openvpn is running
(note - the exit 0 is important, if it gets a non-zero exit code from the mount command, systemd doesn't consider the
service to be running).
Then a multitude of events cause the wlan-discovery script to run again, and it does a killall -9 openvpn. systemd
sees the SIGCHLD from that, and determines greg.service is done, and it invokes /root/bin/umountnfs:
#!/bin/sh
if [ "$EXIT_STATUS" != "KILL" ]
then
umount.nfs -f /nfsmnt
fi
1. umountnfs does nothing, because $EXIT_STATUS is KILL (more on this later).
2. wlan-discovery finishes connecting and re-starts greg.service.
3. Eventually, system shutdown occurs and stops greg.service, executing /root/bin/umountnfs, but this time
without EXIT_STATUS=KILL, and it successfully umounts.
4. Then the openvpn process is considered orphaned and systemd kills it.
While /root/bin/umountnfs is executing, I think that all of your other shutdown is occurring in parallel.
So, this EXIT_STATUS hack... If I had made the NFS its own service, it might be strictly nested within the openvpn service,
but that isn't actually what I desire -- I want the NFS mounts to stick around until we are shutting down, on the assumption that
at all other times, we are on the verge of openvpn restoring the connection. So I use the EXIT_STATUS to determine if
umountnfs is being called because of shutdown or just because openvpn died (anyways, the umount won't succeed if openvpn
is already dead!). You might want to add an export > /tmp/foo to see what environment variables are defined.
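Stripped of the NFS specifics, the guard inside umountnfs is just a test on systemd's $EXIT_STATUS; a self-contained sketch (the function name is mine, not the author's):

```shell
#!/bin/sh
# systemd sets EXIT_STATUS in the environment of the ExecStop command.
# "KILL" means the main process died from SIGKILL (our killall), so the
# VPN is gone and the umount would be pointless; anything else means a
# clean stop (i.e. shutdown), when the umount should actually run.
should_umount() {
    if [ "$1" = "KILL" ]; then
        echo "skip umount"
    else
        echo "umount /nfsmnt"
    fi
}

should_umount KILL    # prints: skip umount
should_umount ""      # prints: umount /nfsmnt
```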
And there is a huge caveat here: if something else in the shutdown process interferes with the network, such as a call to
ifdown, then we will need to be After= that as well. And, worse, the documentation doesn't say (and user reports vary
wildly) whether it will wait until your ExecStop completes before starting the dependent ExecStop. My experiments
suggest Type=oneshot will cause that sort of delay...not so sure about Type=forking.
Fine, sure, whatever. Let's sing Kumbaya with systemd.
I have the idea that Wants= vs. Requires= will let us use two services and do it almost how a real systemd fan
would do it. So here are my files, starting with /root/bin/mountnfs:
#!/bin/sh
mount | grep -q nfsmnt || mount -t nfs -o ... server:/export /nfsmnt
exit 0
/root/bin/umountnfs:
#!/bin/sh
umount.nfs -f /nfsmnt
Then I replace the killall -9 openvpn with systemctl stop greg-openvpn.service, and I replace systemctl
start greg.service with systemctl start greg-nfs.service, and that's it.
The Requires=networking.service enforces the strict nesting rule. If you run systemctl stop networking.service,
for example, it will stop greg-openvpn.service first.
On the other hand, Wants=greg-openvpn.service is not as strict. On systemctl start greg-nfs.service, it launches
greg-openvpn.service, even if greg-nfs.service is already active. But if greg-openvpn.service stops or
dies or fails, greg-nfs.service is unaffected, which is exactly what we want. The icing on the cake is that if greg-nfs.service
is going down anyways, and greg-openvpn.service is running, then it won't stop greg-openvpn.service (or networking.service)
until after /root/bin/umountnfs is done.
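The two unit files themselves did not survive in this copy; from the behaviour described above (Requires=networking.service on the openvpn unit, Wants=greg-openvpn.service on the nfs unit), they must have looked roughly like this reconstruction, with the startvpn wrapper path being a guess:

```ini
# /etc/systemd/system/greg-openvpn.service -- reconstruction, not the original
[Unit]
Requires=networking.service
After=networking.service

[Service]
Type=forking
# hypothetical wrapper that forks openvpn into the background
ExecStart=/root/bin/startvpn

# /etc/systemd/system/greg-nfs.service -- reconstruction, not the original
[Unit]
Wants=greg-openvpn.service
After=greg-openvpn.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/bin/mountnfs
ExecStop=/root/bin/umountnfs
```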
Exactly the behavior I wanted. Exactly the behavior I've had for 14 years with a couple readable shell scripts. Great, now I've
learned another fly-by-night proprietary system.
GNOME, you're as bad as MacOS X. No, really. In February of 2006 I went through almost identical trouble learning Apple's
configd and Kicker for almost exactly the same purpose, and never used that knowledge again -- Kicker had
already been officially deprecated before I even learned how to use it. People who will fix what isn't broken never stop.
As an aside - Allan Nathanson at Apple was a way friendlier guy to talk to than Lennart Poettering is. Of course, that's easy
for Allan -- he isn't universally reviled.
A side story
If you've had systemd foisted on you, odds are you've got the Adwaita theme too.
rm -rf /usr/share/icons/Adwaita/cursors/
You're welcome. Especially if you were using one of the X servers where animated cursors are a DoS. People who will fix what isn't
broken never stop.
[update August 10, 2017]
I found out the reason my laptop double-unsuspends and other crazy behavior is systemd. I found out systemd has hacks that enable
a service to call into it through dbus and tell it not to be stupid, but those hacks have to be done as a service! You can't just
run dbus on the commandline, or edit a config file. So in a fit of pique I ran
the
directions for uninstalling systemd.
It worked marvelously and everything bad fixed itself immediately. The coolest part is restoring my hack to run openvpn without
systemd didn't take any effort or thought, even though I had not bothered to preserve the original shell script. Unix provides some
really powerful, simple, and *general* paradigms for process management. You really do already know it. It really is easy to use.
I've been using sysvinit on my laptop for several weeks now. Come on in, the water's warm!
So this is still a valuable tutorial for using systemd, but the steps have been reduced to one: DON'T.
[update September 27, 2017]
systemd reinvents the system log as a "journal", which is a binary format log that is hard to read with standard command-line
tools. This was irritating to me from the start because systemd components are staggeringly verbose, and all that shit gets sent
to the console when the services start/stop in the wrong order such that the journal daemon isn't available. (side note, despite
the intense verbosity, it is impossible to learn anything useful about why systemd is doing what it is doing)
What could possibly motivate such a fundamental redesign? I can think of two things off the top of my head: The need to handle
such tremendous verbosity efficiently, and the need to support laptops. The first need is obviously bullshit, right -- a mistake
in search of a problem. But laptops do present a logging challenge. Most laptops sleep during the night and thus never run nightly
maintenance (which is configured to run at 6am on my laptop). So nothing ever rotates the logs and they just keep getting bigger
and bigger and bigger.
But still, that doesn't call for a ground-up redesign, an unreadable binary format, and certainly not deeper integration. There
are so many regular userland hacks that would resolve such a requirement. But nevermind, because.
I went space-hunting on my laptop yesterday and found an 800MB journal. Since I've removed systemd, I couldn't read it to see
how much time it had covered, but let me just say, they didn't solve the problem. It was neither an efficient representation where
the verbosity cost is ameliorated, nor a laptop-aware logging system.
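For the record, journald does expose size and retention knobs (these are real journald.conf options, though I make no claims about how well they work in practice):

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=100M        # cap total disk usage of the persistent journal
MaxRetentionSec=1month   # also expire entries by age
```

An existing oversized journal can be trimmed with `journalctl --vacuum-size=100M`, which is apparently how you are expected to clean up an 800MB journal after the fact.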
When people are serious about re-inventing core Unix utilities, like ChromeOS or Android, they solve the log-rotation-on-laptops
problem.
The systemd source is probably 100 times bigger in line count than the init it replaced. But its value in the server space,
where servers are connected via wire (or fiber) with static IPs, is very limited. The faster-boot claim is a nasty joke for servers which
run through the system BIOS for 5 minutes or so, like HP servers.
The KISS principle is fundamental to open source, and not only because overly complex open source projects most commonly first "self-close"
and then "self-destruct". Initial developer enthusiasm and personal sacrifices (often at the expense of family and health)
might drive a project for five years, rarely more. And if after that nobody can replace the departed key developer(s), the project becomes
abandonware. Worse, it can be adopted by a mediocre but ambitious programmer, who quickly destroys the architectural integrity
of the initial versions and drives the project to the level of overcomplexity where it is doomed.
That means that excessive complexity, as well as too many changes that destroy compatibility, undermines the whole ecosystem,
killing off potential contributors. When something is functioning reasonably well, like System V init for servers, with decades of experience
in debugging its problems, the question always is why this subsystem should be replaced by an incompatible, more complex, more
opaque layer. The key question in such cases is "cui bono". And you know the answer.
The neoliberal hijacking of open source ("greed is good") was almost complete by the year 2000. With some notable exceptions, the
initial ("academic") moral ground of open source was abandoned, abandoned deliberately in order to be able to create and maintain
a very complex software ecosystem which only pretends to be open, but in reality is closed to almost all developers who can't spend
all their time on it (which means it is closed to all but full-time developers). While commercialization of open source was a viable path up to
a certain point, later it killed the incentives typical for the open source community and replaced them with the incentives of the "marketplace".
As a result, both Linux distributions as a whole and the software packages that constitute them can become too complex to be
maintained by volunteers, converting them to a "semi-closed" source level. Moreover, at the current level of complexity, using closed
source software packages sometimes looks like a more honest option than the "pseudo open source path" selected by companies
like Red Hat.
BTW, if some system is converted to closed source, efforts should be made for the preservation of the existing (simpler)
open source version, or even the simplification of the existing open source version to secure its viability. The "complex open source" model adopted by
Red Hat blocks such an avenue, as it creates an illusion that the package is still open, while in reality it is not.
Complex open source blocks the avenue of participation by non-paid developers and creates an illusion that the package is still
open, while in reality it is not. It is just C (or another language) used as the assembler language of the OS/360 days (IBM shipped the OS/360
kernel in assembler, and it was compiled for the particular installation).
BTW, commercial abandonware is often used as long as 30 years after abandonment (the MS DOS ecosystem, with its rich set of DOS
abandonware, is one example here; it is still widely used). Microsoft abandonware is another good example: some programs long
abandoned by Microsoft are used 15 and more years after they were replaced by a new version. Windows XP, Microsoft FrontPage 2003,
and Office 2003 are three prominent examples.
I think some Microsoft abandonware will be used for a very long time, as these programs were functional enough and debugged well enough not
to require improvement. The constant change of versions is partially driven by the desire to extract new money from users. All this
hoopla with security patches is by and large "spinning wheels" activity: there is an infinite number of zero-day exploits in any
sufficiently complex software, so limiting access is the only way to secure the OS (blocking ports, restricting the IP space for ssh connections,
using TCP wrappers for postfix, installing a proxy, etc.).
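For example, restricting the IP space for ssh connections needs nothing more than the classic tools (the subnet here is a placeholder):

```
# /etc/ssh/sshd_config -- accept logins only from the admin subnet
AllowUsers *@10.0.0.*

# belt-and-suspenders equivalent at the firewall level:
# iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP
```

Plain text files, readable at a glance, and no daemon rewrite required.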
Open source development does not have any protection from "ego-driven" destruction, when some "coding maniac" hijacks the
development and makes the system unnecessarily complex, destroying the conceptual integrity of earlier, simpler versions.
The fundamental weakness of the open source development model is that after the original author of a software project has left or been sidelined,
the product does not have any protection from such "ego-driven" destruction.
It remains to be seen whether the immune system of the open source development ecosystem is strong enough to preserve simpler variants
of Linux, which were actually quite adequate in standard "wired" server deployments. For one, I do not want my servers to be able
to work via WiFi, and thus I am not inclined to use NetworkManager.
The fact that it is almost impossible to remove it from RHEL 7 just means, for me, that I will postpone upgrading to this version
of Linux as long as possible.
A very similar situation exists with systemd: if I do not need it, why should I use a distribution that forces me to use it?
And that line of thinking is pretty widespread. So much so that I think RHEL 7 can be viewed as the first fiasco by Red Hat in the
area of operating systems: the version that many sysadmins simply do not want to use.
Such hijacking not only leads to the loss of architectural integrity due to the weakness of the architectural vision of the "fanatic developer";
it essentially creates an "open source monopolist" which dictates its will to the whole community. The activities of some Linux developers
who are now actively trying to "reinvent Apple" (or Windows), in the absence of an understanding of the Unix philosophy as well as the absence
of worthwhile ideas of their own, are an internal danger of open source. An internal cancer, if you wish.
In other words, systemd is a replacement for something that a large category of Linux users (all server sysadmins) don't
think needs to be replaced, and who see that the existing solution can be gradually improved, for example by introducing PHP-style pseudo-comments
into existing init scripts, which was already (on a very primitive level) done in both Red Hat and SUSE.
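Such pseudo-comments already exist in the form of LSB init-script headers, structured comments that an enhanced init could interpret in exactly the way PHP embeds directives in comments. A small sketch of extracting a field from such a header (the sample header and the function name are made up for illustration):

```shell
#!/bin/sh
# A typical LSB pseudo-comment header, as found at the top of init scripts.
script='### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Default-Start:     2 3 4 5
### END INIT INFO'

# Pull one field out of the header block by its name.
lsb_field() {
    printf '%s\n' "$script" | sed -n "s/^# $1:[[:space:]]*//p"
}

lsb_field Required-Start    # prints: $network $syslog
```

An interpreter built on top of this could compute the start ordering from Required-Start dependencies, with the scripts themselves remaining plain, editable shell, which is the whole point.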
It caters mainly to the "desktop Linux" community. Moreover, the antics of the systemd developers have not won the hearts
and minds of Linux server sysadmins. It was simply shoved down their throats by the political weight of Red Hat, the Microsoft of Linux.
See famous LKML thread where Linus Torvalds banned systemd
developer Kay Sievers from the Linux kernel.
The net result of the introduction of systemd points to a larger problem: I would call this problem the "betrayal of the Unix
philosophy". Unix, accidentally invented (the first Unix use was the creation and typesetting of documents for the AT&T patent office),
promoted the idea of the maximum usage of text files. That was, at the time, a revolutionary departure from the IBM OS/360 design
philosophy with its multitude of file types. OS/360 proved to be as grandiose a failure as an OS as IBM/360 hardware was a grandiose success
as an innovative, now classic hardware design (IBM compilers were also extremely good, especially the PL/1 compilers, which were simply brilliantly
written).
It is interesting that it was possible to write a best-selling book about an epic failure in OS design,
The Mythical Man-Month, although not everything in OS/360 was a failure.
The hardware was not; it was a really revolutionary breakthrough. Among other things, the IBM/360 series introduced the idea of addressable
8-bit bytes, an elegant system of commands with a very good assembler language (which Knuth failed to adopt for his monumental
The Art of Computer Programming until too late ;-), (paged) memory
with segment protection, virtual machines, REXX as the first batch language that simultaneously served as a macro language for applications,
XEDIT, the "orthodox" editor with REXX as a macro language, and more.
The PL/1 and several other compilers in OS/360 were masterpieces of software engineering. The quality of the PL/1 debugging compiler and of the Fortran H compiler is very impressive even now, and probably still unsurpassed. The IBM PL/1 debugging compiler far surpasses most later compilers in the quality of its diagnostics (for a rather complex language, far more complex than, say, C or Fortran). The same is true of the compiler for the teaching subset PL/C. They stand far above the quality typical of, say, Intel or GNU compilers, to say nothing of the quality of diagnostics of interpreters such as Perl 5 or Python, which is really dismal.
But one of the key ideas of OS/360 was binary formats for data. Many of them -- way too many. Unix promoted the radically different idea of uniform text-based configuration files and logs, together with the idea of a set of simple tools working with text files, each of which does one thing well and can be combined via pipes or sockets. This was the KISS principle in action.
Unix also introduced the shell as an independent system program, along with the ability to have many of them, as well as the idea of scripting as an alternative to programming in compiled languages like C (C-shell and AWK were milestones in this direction, quickly followed by Perl, TCL and Python). This was a kind of New Deal in operating system design, which opened a path to displacing IBM mainframe domination.
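The "simple tools combined via pipes" idea takes one line to demonstrate. A minimal sketch (the passwd-style records are made up for the example):

```shell
# Each stage does exactly one thing to a plain-text stream:
# cut extracts the login-shell field, sort groups identical values,
# uniq -c counts each group, and sort -rn ranks them.
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'alice:x:1000:1000:Alice:/home/alice:/bin/bash' \
  | cut -d: -f7 | sort | uniq -c | sort -rn
```

This prints /bin/bash first with a count of 2. None of the four tools knows anything about the others; the text stream is the only interface -- exactly the component model systemd's binary formats abandon.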
That is why Unix, and the later emergence of PCs, "buried" OS/360/370 with its "glass datacenters" (and its derivatives, like the much more interesting VM/370, which introduced commercial virtual machines to the marketplace). It is still used by major banks, the military, intelligence agencies and some other large organizations, but it is now a niche OS.
Of course, the real history is not black and white: along with the dismal failure of OS/360, IBM also invented VM/CMS -- the first production virtual machine environment -- and one early and very interesting scripting language, REXX (announced in 1981 and shipped by IBM in 1982). Unfortunately, REXX did not displace JCL, which was a punch-card-oriented nightmare structured around the concept of a job as a sequence of steps.
Now we face something like a "neoliberal counterrevolution" in open source development -- a kind of return to OS/360 architectural principles, which, much as neoliberalism meant abandonment of the New Deal, means abandonment of fundamental Unix ideas and Unix software design philosophy. That set of key ideas (while it has its own drawbacks and limitations) has proved to be surprisingly powerful and long-lived.
Actually, Unix created a new style of computing, a new way of thinking about how to attack a problem with a computer. This style was essentially the first successful component model in programming. As Frederick P. Brooks Jr. (another computer pioneer, who early recognized the importance of pipes) noted, the creators of Unix "...attacked the accidental difficulties that result from using individual programs together, by providing integrated libraries, unified file formats, and pipes and filters."
Now we have a clear break with this tradition. Moreover, systemd promotes not only a new startup mechanism for services, replacing init (enhancements of which, in a different form, were previously introduced by Solaris 10 with its "shadow startup file" concept), but also several clearly Windows-style ideas, including binary logs. Yes, binary logs. Seriously. In religious terms this is anathema for Unix ;-).
And despite all those petty tricks with signing records, binary logs are less secure: they introduce a powerful mechanism for putting a Trojan horse into the logging process, one that can filter or suppress certain messages. The NSA probably loves this change.
And I would like somebody to explain to me why, in a server environment, this is a more secure and better way of doing things than, say, storing logs remotely on a special, highly protected server (a logserver) secured in a "paranoid" way.
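For comparison, the classic remote-logserver setup needs only a couple of plain-text directives. A sketch using rsyslog (the hostname is made up; hardening of the logserver itself is a separate exercise):

```
# On each client: forward everything to the logserver over TCP ("@@");
# a single "@" would mean UDP.
*.*  @@logserver.example.com:514

# On the hardened logserver: accept TCP syslog on port 514.
module(load="imtcp")
input(type="imtcp" port="514")
```

Everything stays in text form, greppable on the logserver, and an intruder on the client cannot retroactively rewrite what has already been shipped off the box.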
I would also like to stress again that this set of solutions looks like the typical Windows-style "learning is hard" mentality (which is actually the Apple mentality as well). And this "bait and switch" maneuver is damaging for Linux as a server platform. It makes it less clear why we should prefer Linux to Windows on the Intel platform. The Windows platform already offers all those benefits and more, and in the basic configuration it is cheaper than Red Hat, as you pay the price upfront for an unlimited number of years of usage. Patches are free. It is a real "turn-key solution" for a small office, as printing, file sharing and email are all well integrated. For example, email has an impressive (for Windows-style programming) client -- Outlook. It looks like the effect on Linux of the misguided zeal of "unreformed C programmers" with a Windows background and mentality can be quite devastating.
I would not deny that improvements in init are possible, or even necessary. They are necessary to the extent that Linux is moving toward the adoption of Solaris 10 style lightweight virtual machines via Docker. But while Linux containers are an important (albeit overdue) enhancement, there should be some respect for the legacy solutions, as they were created by very talented programmers with a clear architectural vision. After all, Unix is the oldest (after OS/360) operating system still in use; it is almost 50 years old. And, BTW, Linux itself is more than 25 years old. In other words, whether we like it or not, Linux is now "yet another legacy OS", and breaking compatibility should be avoided if possible.
Of course, a lot of things could be done better in Linux and Unix. But just as human strong points are continuations of our weaknesses and vice versa, arbitrarily removing one weakness while ignoring the conceptual integrity of Linux as a whole does not necessarily improve the OS as a whole. So the obvious drawbacks of the classic init approach on laptops and similar portable devices are not the whole story, and cannot be addressed in isolation from other aspects of Linux.
It is also problematic in a more ideological way. One of the main reasons for the emergence of FreeBSD, and then Linux, as well as the GNU project and the free software movement as a whole, was the desire to move away from proprietary solutions produced by companies such as Sun, HP or IBM. By purposely being POSIX-incompatible, systemd has essentially rendered itself, and everything that depends on it, proprietary to Linux.
In other words, there are improvements and there are pseudo-improvements. The first attempts in the direction of improving init capabilities used pseudo-comments written in a special language in each init script, much as PHP is embedded in web pages. Those pseudo-comments were actually functional programs, interpreted as higher-level directives about init behavior and dependencies. This is a pretty flexible approach, and it enjoyed some level of success, although it suffered from the lack of standardization of the "meta-language" for such comments and from a very low quality, completely brainless implementation of this (actually very sound) idea in both RHEL and SUSE.
Again, for C programmers, as for a man with a hammer, everything looks like a nail; and I think that a system programmer who does not know at least one scripting language really well should these days not be involved in kernel development, outside very special areas like drivers, as he can do more harm than good.
But even at the very primitive level at which they implemented this concept, both RHEL and SLES had some success in solving typical problems related to the outdated functionality of the init process. And from an architectural standpoint this was a better solution than the one adopted by Solaris 10 -- if we view it as an attempt to introduce a PHP-style embedded language with its own, carefully written interpreter, not as a hack. Here is an example of those pseudo-comments:
#!/bin/bash
#
# nscd: Starts the Name Switch Cache Daemon
#
# chkconfig: - 30 74
# description: This is a daemon which handles passwd and group lookups \
# for running programs and cache the results for the next \
# query. You should start this daemon if you use \
# slow naming services like NIS, NIS+, LDAP, or hesiod.
# processname: /usr/sbin/nscd
# config: /etc/nscd.conf
# config: /etc/sysconfig/nscd
#
### BEGIN INIT INFO
# Provides: nscd
# Required-Start: $syslog
# Default-Stop: 0 1 6
# Short-Description: Starts the Name Switch Cache Daemon
# Description: This is a daemon which handles passwd and group lookups \
# for running programs and cache the results for the next \
# query. You should start this daemon if you use \
# slow naming services like NIS, NIS+, LDAP, or hesiod.
### END INIT INFO
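The header above is trivially machine-parsable with the same text tools the init scripts themselves are written in. A hypothetical extractor for the chkconfig: directive (a sketch, not Red Hat's actual parser):

```shell
#!/bin/sh
# Write a stub init script carrying the header, then pull out the
# "# chkconfig: <levels> <start-prio> <stop-prio>" directive.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
# chkconfig: - 30 74
# description: Starts the Name Switch Cache Daemon
EOF

# sed isolates the directive; "set --" splits it into positional fields.
set -- $(sed -n 's/^# chkconfig: *//p' "$script")
echo "runlevels=$1 start_priority=$2 stop_priority=$3"
rm -f "$script"
```

Ten lines of shell recover everything chkconfig needs to create the rcN.d symlinks -- which is exactly why the "pseudo-comment" approach needed no binary formats.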
Another way to extend the init script system was to move those pseudo-comments into a separate file, one for each startup script (shadowing). This way you are not bound to shell lexical structure and can use XML or another descriptive language for specifying dependencies, start order, stop order and so on. I do not see this approach as inherently superior, but it also allows enhancing functionality without completely destroying the previous mechanism and the key idea that init files are just a special type of script. If an init script does not have a shadow, it is still a valid init script; it is merely limited to "classic" init script capabilities, but for many init scripts (especially custom scripts) this is OK.
This idea of shadow pages for init scripts was first introduced in Solaris 10 more than a decade ago. From the Unix architecture point of view, the Solaris solution was not the best, but at least it preserved some level of conceptual integrity and compatibility with the old system of init files: those additional files remained text files, and init can work with "old-style" files without them as well. It just works better with init scripts that do have shadow files.
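Roughly, a Solaris 10 SMF manifest shadowing the same nscd service would look like the following. This is a from-memory sketch -- the FMRI names and attributes are approximate, not a verbatim Solaris file -- but it shows the point: dependencies and methods live in a text (XML) file next to, not instead of, the start mechanism:

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='nscd'>
  <service name='system/name-service-cache' type='service' version='1'>
    <!-- plays the role of "Required-Start: $syslog" in the LSB header -->
    <dependency name='syslog' grouping='require_all' restart_on='none'
                type='service'>
      <service_fmri value='svc:/system/system-log'/>
    </dependency>
    <exec_method type='method' name='start' exec='/usr/sbin/nscd'
                 timeout_seconds='60'/>
    <exec_method type='method' name='stop' exec=':kill'
                 timeout_seconds='60'/>
  </service>
</service_bundle>
```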
Systemd does not have any conceptual integrity. It is a mixture of several ideas, with an implementation that throws away an important layer of classic Unix that worked reasonably well for four decades, and offers very little in return other than additional complexity. This looks like the "revenge of the laptop users of Linux" and an attempt to placate "wannabe" system administrators who can do nothing without a GUI applet for the required function.
Laptop users are the definite winners, as it probably speeds up startup a little (though in reality the effect might even be negative; the same or better speedup can be achieved by different means, first of all by waking from sleep instead of a full boot -- the method successfully implemented in Windows 10), but that's about it.
Server administrators are the definite losers, as it introduced a more complex, less transparent system that makes "honest" integration of custom daemons more difficult (of course, you can bypass systemd with an imitation of init via cron's @reboot directive).
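The @reboot escape hatch mentioned above is literally a one-liner. A hypothetical /etc/cron.d entry (daemon name and paths are made up):

```
# /etc/cron.d/mydaemon -- start a custom daemon at boot without
# registering it with systemd; cron runs @reboot jobs once when
# crond itself comes up.
@reboot  root  /usr/local/sbin/mydaemon --pidfile /var/run/mydaemon.pid
```

The obvious trade-off is that systemctl then knows nothing about the daemon, so status checks and shutdown have to be handled by hand.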
Again, there is no question that for servers this is a detrimental development: one step forward, two steps back. It adds complexity (and moves Red Hat even closer to Windows) and introduces additional dependencies and security holes. This layer can definitely be successfully attacked, and without cryptographic signing of executables it can be completely replaced with Trojan binaries, creating a perfect environment for complex, state-sponsored malware in Linux.
It also introduced a new, pretty complex API. If something goes wrong, it is far harder to troubleshoot with systemd than without it. The only positive thing I see is that it facilitates the use of cgroups, which are at the core of the Linux implementation of containers (aka Solaris zones). It happened 10 years after Solaris 10, but better late than never ;-)
Troubleshooting boot problems became more difficult, as some of them are now caused by the systemd layer itself and information about them can be viewed only via journalctl. As a result, we have an additional layer of complexity and an additional set of startup problems connected with malfunctioning of the systemd/journald layer. Look at this Open Suse ticket to see the level of resignation with all this mess (Welcome to emergency mode!):
On 2013-04-02 17:16, arvidjaar wrote:
>
> robin_listas;2543782 Wrote:
>>
>> The problem is, as explained in that bugzilla, that log is useless to find what was the actual problem that provoked emergency
mode to start.
>>
>>
>> I may have to repeat the test in 12.3 and report again as/if necessary.
>>
>
> Newer systemd became better in this respect, "journalctl -b" in
> emergency mode usually gives enough information which mandatory service
> failed to start. Of course, to find *why* it failed to start is more challenging.
I'll try this when my time permits... I'm interested.
What happens if the emergency mode fails badly and I have to use a separate rescue system, is the information still available
:-?
> Upstream systemd now finally preserves current services state when
> going in emergency mode, and this patch is queued to be released as
> update for 12.3 as well. It means you also will be able to use standard
> systemctl to check services state, see error output etc.
Mmm.... :-)
There are also new types of bugs connected with systemd, or with the interaction of systemd with other components. For example:
Poettering is getting his hide smacked up right now, on the fedora-devel mailing list. It hasn't settled down yes, but it looks
like Fedora will NOT flip this setting to "yes".
It's a beautiful thing to watch.
"Not invented here" attitude toward the SysV solution, which was pretty elegant and had not completely exhausted its potential. Again, using pseudo-comments written in a specialized language in the headers of startup scripts (or via a shadowing mechanism), you can achieve pretty much anything claimed by the systemd developers, and more; but of course, without the financial weight of Red Hat it is difficult to standardize such a language and ensure its widespread adoption.
At the very least they could have preserved the syntax of the existing Red Hat utilities, which is etched into most Red Hat system administrators' brains. They declined to do even this simple thing. They wanted an all-new, "my way or the highway" system. Which is one of the things that make RHEL 7 a fiasco that many sysadmins refuse to upgrade to.
The service and chkconfig commands are still available in the system and work as expected, but
are only included for compatibility reasons and should be avoided.
Table 8.4. Comparison of the chkconfig Utility with systemctl

chkconfig              systemctl                                   Description
chkconfig name on      systemctl enable name.service               Enables a service.
chkconfig name off     systemctl disable name.service              Disables a service.
chkconfig --list name  systemctl status name.service               Checks if a service is enabled.
                       systemctl is-enabled name.service
chkconfig --list       systemctl list-unit-files --type service    Lists all services and checks if they are enabled.
chkconfig --list       systemctl list-dependencies --after         Lists services that are ordered to start before the specified unit.
chkconfig --list       systemctl list-dependencies --before        Lists services that are ordered to start after the specified unit.
This is an "anti-scripting" solution, created by a person who knows C and nothing but C (some people call such programmers C-heads, distinguishing them from A-heads, who in addition know at least one scripting language). For this class of programmers, C is a universal tool capable of solving any problem -- as for a man with a hammer, everything is a nail: everything is better written in C and compiled into binaries. That is the world he lives in and is comfortable with. But from an architectural standpoint it is pretty questionable to replace a scripting solution with a C solution when efficiency is not important. And here, in the server space, it definitely is not: rebooting a server is an operation performed, say, once a quarter or at even longer intervals, typically for patching. Taking into account the time that server BIOS initialization of the various cards and services (ILO/DRAC) takes, it does not matter if the whole process lasts one minute more. As such, this solution is inferior to the use of a scripting language; and, BTW, you could use a more modern language than shell for interpreting a custom functional language -- one that provides a richer set of features, such as TCL or Python. Here is one interesting comment from a Slashdot discussion:
Anonymous Coward on Friday March 06, 2015 @12:38PM (#49198033)
Re: Question from a non-Linux user (Score:4, Insightful)
The SystemD crowd are windows devs who hate [Windows] 8 so much, they finally decided to get into linux. Sadly, they
want linux to work like windows, so they foist their shit into it. It does make boot times faster: something sysadmins usually
don't give a shit about since you don't reboot servers.
Red Hat wants systemD because it will let them abstract linux (the kernel) away to the point where they can control it instead
of "the community". In addition, several genuinely nice tools, UUID for disks, are being folded into SystemD so, in order to
get those tools, you *must* also use SystemD. Essentially it's being bundle in with other services.
Sadly, SystemD is not well tested enough for most people running linux on a server to trust it especially since the
guy who wrote it wrote PulseAudio and people are still having issues related to that piece of shit.
Pros:
Boots fast
Cons:
When it breaks, you're fucked.
Obsoletes 20-30 years of accepted best practices and knowledge of how to use linux tools
No real new features
Is network connected and running as superuser
Is unaudited
Is virtually untested
Was written by a raging moron
Is completely unneeded by a large section of people who have run linux for a long time
Essentially, it's the Windows 8 of the *nix world
The removal of runlevels is a problem, and while targets are a substitute, they do not match the old functionality, as integration of your own daemons became more complex. That is a minor and solvable problem (you can imitate classic init via cron's @reboot feature for your runlevel), but it makes systemd less acceptable to all the sysadmins who use runlevel changes in their daily work (mostly for starting and shutting down the X subsystem and for maintenance operations). Typically, production servers were run at runlevel 3 and administered, when a GUI was needed (for example, for a file download via Firefox), at runlevel 5. While custom runlevels were seldom used, switching from one runlevel to another was used, sometimes heavily. For example, many organizations use runlevel 3 for production and runlevel 5 (with X11) for GUI-based system tools, debugging, patching, etc. Runlevel 2 is often used for recovery.
Systemd has only limited support for runlevels. It provides a number of target units that can be directly mapped to
these runlevels and for compatibility reasons, it is also distributed with the earlier runlevel command. Not all
systemd targets can be directly mapped to runlevels, however, and as a consequence, this command might return N to
indicate an unknown runlevel. Red Hat recommend that you avoid using the runlevel command if possible. For more information
about systemd targets and their comparison with runlevels, see
Section 8.3, “Working with systemd Targets”.
The systemctl utility does not support custom commands. In addition to standard commands such as
start, stop, status, and restart, authors of SysV init scripts could implement
support for any number of arbitrary commands in order to provide additional functionality. For example, the init script for
iptables in Red Hat Enterprise Linux 6 could be executed with the panic command, which
immediately enabled panic mode and reconfigured the system to start dropping all incoming and outgoing packets. This is not
supported in systemd and the systemctl only accepts documented commands. For more information about the systemctl
utility and its comparison with the earlier service utility, see
Section 8.2, “Managing System Services”.
The systemctl utility does not communicate with services that have not been started by systemd. When
systemd starts a system service, it stores the ID of its main process in order to keep track of it. The systemctl
utility then uses this PID to query and manage the service. Consequently, if a user starts a particular daemon directly on
the command line, systemctl is unable to determine its current status or stop it.
Systemd stops only running services. Previously, when the shutdown sequence was initiated, Red Hat Enterprise Linux 6 and
earlier releases of the system used symbolic links located in the /etc/rc0.d/ directory to stop all available system services
regardless of their status. With systemd, only running services are stopped on shutdown.
Loss of flexibility. Systemd is inherently a desktop-oriented subsystem which does not in any way enhance the functionality of key server daemons and does not answer any key Linux server problems. Starting things up only when needed, and shutting them down aggressively when no longer required, might be a perfectly sensible thing to do for a laptop on batteries, but it is much less important for servers. For servers, ease of troubleshooting is of primary importance, and if you cannot run daemons as standalone programs because they rely on additional systemd services, that ease is lost. Linux is already a way too complex OS, and this solution essentially overloads the boat. If you add the dismal level of tech support from Red Hat, you see the point.
Claims that this will achieve speed-up and systematization of startup are somewhat disingenuous. First of all, for servers such a speed-up does not matter. Do the designers of systemd know the boot time of a typical HP Gen 9 server? Against that background, you would need to be completely drunk to talk about time savings ;-). And there is no solid evidence of a dramatic speed-up anyway. On a server where the UEFI/BIOS stages of initializing the various cards take minutes, the whole idea is a joke in any case. This is pure Linux-laptop-enthusiast-level thinking. And the second point is that the price might well be too high for the effect achieved, making the whole component evil despite the fact that it is "Linux on laptop" friendly.
To take advantage of systemd's socket/file-handle preallocation ("inetd style"), many daemons have to be patched to have the file handle passed to them by systemd.
These solutions come from the person who authored pulseaudio and avahi :-). Those two projects provide the key argument against adopting this one: it is evident that the author is a bad architect who is definitely unable to see the bigger picture. He does not understand the value of scripting and essentially tries to imitate the worst features of the Linux PAM architecture (inherited from Solaris).
Old Linux versions are not going away at least until 2020, so sysadmins forced onto RHEL 7 need to make an effort to adapt to the "dual personality" of Linux now. Rewriting some scripts, like service and chkconfig, might be worth the effort.
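Such an adaptation script can be as small as a case statement. A hypothetical helper that prints the systemctl equivalent of the old invocations (a sketch for illustration, not the compatibility shims Red Hat actually ships):

```shell
#!/bin/sh
# Translate RHEL 6 style "service"/"chkconfig" calls into their
# systemctl equivalents (cf. Red Hat's own comparison table).
to_systemctl() {
    tool=$1 name=$2 action=$3
    case "$tool" in
        service)
            echo "systemctl $action ${name}.service" ;;
        chkconfig)
            case "$action" in
                on)  echo "systemctl enable ${name}.service" ;;
                off) echo "systemctl disable ${name}.service" ;;
            esac ;;
    esac
}

to_systemctl service nscd restart   # -> systemctl restart nscd.service
to_systemctl chkconfig nscd on      # -> systemctl enable nscd.service
```

Replacing the echo with exec would turn this into a drop-in wrapper, so scripts written against the old syntax keep working on a systemd host.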
There are multiple ways to improve the speed of a Linux boot. But rewriting init in C and forcing it to behave like inetd smells of Windows. The only positive thing is the attempt to standardize typical configuration files. But those efforts could proceed outside the systemd development framework without any problems:
Here's a little overview over these new common configuration files systemd supports on all distributions:
/etc/hostname : the host name for the system. One of the most basic and trivial system settings. Nonetheless previously
all distributions used different files for this. Fedora used /etc/sysconfig/network, OpenSUSE /etc/HOSTNAME.
We chose to standardize on the Debian configuration file /etc/hostname.
/etc/vconsole.conf : configuration of the default keyboard mapping and console font.
/etc/locale.conf : configuration of the system-wide locale.
/etc/modules-load.d/*.conf : a drop-in directory for kernel modules to statically load at boot (for the very few that still need this).
/etc/sysctl.d/*.conf : a drop-in directory for kernel sysctl parameters, extending what you can already do with /etc/sysctl.conf.
/etc/tmpfiles.d/*.conf : a drop-in directory for configuration of runtime files that need to be removed/created/cleaned
up at boot and during uptime.
/etc/binfmt.d/*.conf : a drop-in directory for registration of additional binary formats for systems like Java, Mono
and WINE.
/etc/os-release : a standardization of the various distribution ID files like /etc/fedora-release and similar.
Really every distribution introduced their own file here; writing a simple tool that just prints out the name of the local distribution
usually means including a database of release files to check. The LSB tried to standardize something like this with the lsb_release
tool, but quite frankly the idea of employing a shell script in this is not the best choice the LSB folks ever made. To rectify
this we just decided to generalize this, so that everybody can use the same file here.
/etc/machine-id : a machine ID file, superseding D-Bus' machine ID file. This file is guaranteed to be existing and
valid on a systemd system, covering also stateless boots. By moving this out of the D-Bus logic it is hopefully interesting for
a lot of additional uses as a unique and stable machine identifier.
/etc/machine-info : a new information file encoding meta data about a host, like a pretty host name and an icon name,
replacing stuff like /etc/favicon.png and suchlike. This is maintained by systemd-hostnamed.
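To be fair, these standardized files are at least plain key=value text that any shell or script can read. For illustration, typical CentOS 7 contents of /etc/os-release (abridged):

```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
```

Note that this standardization effort required no binary formats and no daemon -- which rather undercuts the argument that the rest of systemd's machinery was necessary for it.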
Again, this is not the first attempt to enhance init. But it is going to be much more than init: what they want to create is a new API between the kernel and applications, controlled by Red Hat.
There have been several previous attempts to replace the old System V init system. One rather radical approach was the Solaris 10 Service Management Facility (SMF), introduced around 2005. It added a 'shadow' directory with XML files that supplemented the init scripts. To me it was unclear whether the medicine was better than the disease. After all, init scripts are by now pretty ingenious, and with sysconfig and encoded levels one important problem that init used to have was resolved once and for all. Dependencies can also be worked out via pseudo-comments (which can actually be written in TCL or another scripting language, if you really want high flexibility). So the key question is why bother to destroy an old, workable and flexible solution. The same line of thinking applies to systemd, which in addition has all the marks of an attempt to create a mini-empire -- an additional layer of services above the kernel. That might not be a bad idea in itself (remember the microkernel concept), but the devil is in the details.
The Solaris 10 solution was followed by Ubuntu's Upstart.
Another key shortcoming of systemd, distinct from its complete or partial lack of architectural vision (about which some say that this is an inborn trait: what God did not give a person is impossible to buy), is its additional complexity. I am also not at all sure that starting complex daemons like sshd this way solves any problem.
Y. Nemo at Tue May 3 18:13:43 2011
How does systemd benefit in any substantial way users who have already tweaked and tuned the SysV init system to their
needs?
It seems systemd is more appropriate for large distributions intending to target a significant user and configuration base.
My most pressing concern is the added complexity of systemd as well as the inherent difficulty of debugging and configuring
such a system. (side-note: though I haven't read the 'Systemd for Administrators' series, it seems to get quite complex).
The SysV system stands out because bash is so generic. To modify the CPU/IO scheduler, cgroups, uid/gid,automatic restarting
etc... edit the appropriate bash script.
Systemd is designed around the concept of the wait option in inetd, where the service manager (formerly inetd, and now the init that comes with systemd) binds to the TCP, UDP and Unix sockets and then starts daemons only when needed. This means you don't have a daemon running for months without serving a single request -- but who cares about that on a modern server with 64GB or more of RAM? It also implements some functionality similar to automount, which means you can start a daemon before a filesystem it might need has been fscked. For example, the systemd way would be for process 1 to listen on port 80; it could then start the web server when a connection is established to port 80, start the database server when a connection is made to the Unix domain socket, and only then mount the filesystem. Which is pretty stupid and can lead to a timeout on the very first request :-).
Actually, it is not a good idea to start all services on demand, especially complex ones like sshd, httpd, mysql and similar. Due to delayed execution, it requires a lot more integration with other subsystems; for example, on RHEL it requires more SELinux integration than init did. See http://0pointer.de/blog/projects/systemd
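For reference, socket activation in systemd is configured through a pair of unit files. A hypothetical sketch -- the unit and binary names are made up, and the two sections below live in two separate files:

```ini
# mydaemon.socket -- PID 1 owns the listening socket and starts the
# service on the first incoming connection.
[Unit]
Description=Listening socket for mydaemon

[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# --- mydaemon.service (a separate file) -- the daemon must be patched
# to accept the already-bound socket systemd passes in (sd_listen_fds).
[Unit]
Description=mydaemon, socket-activated

[Service]
ExecStart=/usr/local/sbin/mydaemon
```

The patching requirement in the service half is exactly the point made above: an unmodified daemon that binds its own socket cannot be used this way.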
Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.
SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of
code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon
get as much FOSS as possible dependent on the Linux kernel.
This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on
Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better
for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.
Red Hat is the company behind the systemd push. It is the developer and simultaneously the key promoter of systemd as an init replacement, which tells a lot about the level of deterioration of the company from an architectural vision point of view. In RHEL 7, systemd replaced the old init -- which is one more reason not to hurry with the transition from version 6. Version 6 will be supported until 2020, I think.
One unique fact about systemd is that "in late April 2014 a campaign to boycott systemd was launched, with a website listing various reasons against its adoption" (Wikipedia).
From my point of view as a system administrator, this endangers the Red Hat franchise, and I would like to maximally delay the introduction of this version into those organizations where I have some influence over technological issues. In view of the drastic drop in the quality of RHEL support, I would also recommend wider use of CentOS on non-critical servers (for example, computational nodes of a cluster). The RHEL support level has dropped considerably, and now, when they answer a ticket, my impression is that often they do not even understand what the ticket is about. Often all they can do is massage the database. No thinking is involved.
Quite right. Steaming pile of poo. And then in the other corner is another pile of poo -- Gnome 3. Well done, Redhat.
Anonymous Coward, Friday 11th May 2018 03:12 GMT
Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.
"And perhaps, in the process, you may warm up a bit more to the tool"
Like from LNG to Dry Ice? And by tool does he mean Poettering or systemd? I love the fact that they aren't trying to address the
huge and legitimate issues with Systemd, while still plowing ahead adding more things we don't want Systemd to touch into its ever
expanding sprawl.
The root of the issue with Systemd is the problems it causes, not the lack of "enhancements" initd offered. Replacing Init didn't
require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have
made Big Linux more compatible with both its roots and the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent
incompetence, other people's projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security
problems. In short, we're stuck cleaning up his mess, and the consequences of his security blunders.
A worthy Init replacement should have moved to compiled code and given us asynchronous startup, threading, etc, without senselessly
re-writing basic command syntax or compatibility. Considering the importance of PID 1, it should have used a formal development process
like the BSD world.
Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts
to fix them. The flame war is not going away till he does.
Anonymous Coward, Thursday 10th May 2018 02:58 GMT
All this systemd saga looks like a power play on the part of Red Hat that created difficulties for the company, including a substantial
level of resentment in the enterprise sysadmin community (Systemd:
Harbinger of the Linux apocalypse, InfoWorld):
Red Hat exerted its considerable force on the Linux world. Thus, we saw systemd take over Fedora, essentially become a requirement
to run the GNOME desktop, then become an inextricable part of a significant number of other distributions (notably not the "old
guard" distributions such as Gentoo). Now you'd be hard-pressed to find a distribution that doesn't have systemd in the latest release
(Debian doesn't really use systemd, but still requires systemd-shim and CGManager).
While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux -- which I don't think most Linux
folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd
wants to control most, if not all, of the fundamental functional aspects of a Linux system --
from authentication to mounting shares to network configuration to syslog to cron. It wants to do so as essentially a monolithic
entity that obscures what's happening behind the scenes.
The systemd daemon became the default init mechanism in
Fedora starting from version 15. Initially it was a bad joke, but it gradually improved. The integration of systemd into Fedora
was the main reason for me to drop Fedora 15.
The main command used to control systemd is systemctl. Some of its subcommands are as follows.
systemctl list-units - List all units (where unit is the term for a job/service)
systemctl start [NAME...] - Start (activate) one or more units
systemctl stop [NAME...] - Stop (deactivate) one or more units
systemctl enable [NAME...] - Enable one or more unit files
systemctl disable [NAME...] - Disable one or more unit files
systemctl reboot - Shut down and reboot the system
For the complete list, see the man page systemctl(1). And of course we have a GUI equivalent to systemctl. It is called
systemadm. So the dreams of people who are moving to Linux from Windows Server have finally come true ;-).
The key problem with systemd is the additional complexity it brings into the init process. It also removes the very useful notion of runlevels,
although I think it can be restored and was partially restored in later releases of Fedora and openSUSE. I saw something in this regard
in openSUSE 13.2.
It is way too complex for my taste, and in the server space I don't care much about the startup time reduction, snapshotting and restoring of
Linux state that it addresses. Also, I think the idea of blindly copying the inetd approach to starting daemons (first
opening the socket and then loading the daemon on the first request), even with the benefit of parallelization, is misguided:
One of the formidable new features of systemd is that it comes with a complete set of modular early boot services
that are written in simple, fast, parallelizable and robust C, replacing the shell "novels" the various distributions featured before.
Systemd is a replacement for the old script-based init; it's written in C and has a very different design. So I'll try to compare
it to the old init systems.
Pros:
Uses parallelization, a lot of it
That means that some daemons are started simultaneously, which means boot time should be faster.
Has a convenient API
systemd supports DBus and sockets, so you can easily control it and talk to it from your own code
The unit syntax is way simpler
For most cases, all you need to do is start a daemon on boot and kill it on shutdown. Old bash-based init systems need
a large piece of boilerplate code to do that, but systemd doesn't. A common unit syntax is also easier to work with for developers,
because you only need to support one init system, and not tons of <something> init derivatives, OpenRC and whatnot.
Integrated logging
As an init binary, systemd knows more about other processes than, e.g. syslog, so it can log data in a more convenient
way. For example, you can get logs for a specific process, unit or target. You can also add additional information to the log
if your code uses systemd's library.
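The "simpler unit syntax" point above can be made concrete with a minimal sketch of a service unit. The unit name, description and binary path here are hypothetical, not taken from any real package:

```ini
# foo.service -- hypothetical minimal unit; systemd supplies the
# start/stop/status boilerplate a shell init script would have to hand-code
[Unit]
Description=Example foo daemon
After=network.target

[Service]
# The daemon must stay in the foreground; systemd tracks the process itself
ExecStart=/usr/sbin/food --no-daemonize
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Dropping such a file into /etc/systemd/system and running `systemctl enable foo.service` is the whole job; there is no pidfile handling or start-stop-daemon incantation.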
Cons:
Everything in one package
Currently, systemd has a lot of features in a single package. QR codes for log verification, a built-in HTTP server, json
serialization, you name it. This means a lot of dependencies that are not actually needed. Lennart promised to split those
out into separate packages later, but no one knows when 'later' is going to come.
Not POSIX compliant
systemd uses things that are exclusive to Linux, so it can't be used on *BSD systems. This makes *BSD people unhappy.
If you use Linux, you can probably ignore this.
It is forced aggressively
As much as I like it (and yes, I like it), seeing GNOME enforce systemd as a strict dependency is just wrong. Also, see
the previous point.
Lennart
I'm not sure if his personality is a valid point, but he seems to take a 'I'm right and fuck y'all' stance in
some cases, and I don't really like it. Also it's quite common for his code to be really buggy (see early systemd/pulseaudio),
but it's not really important any more now that a quite large team is working on systemd.
Hengist, 1 year ago
I'd like to add that the systemd controversy isn't just limited to the BSDs. Because systemd has become a forced dependency
of many packages, the complete Linux-centric nature of it has caused major issues for pretty much every Unix-like except
Linux itself.
It's also problematic in a more ideological way. One of the main reasons for Linux and the free software movement
was to move away from proprietary solutions. By purposely being POSIX incompatible, systemd has essentially rendered itself
and everything that depends on it proprietary to Linux (without a heck of a lot of developer work and porting.)
Systemd thus represents for many people a partial betrayal of why Linux exists in the first place. Furthermore, there was never
any attempt to build consensus or establish an open standard for how systemd (or compatible alternate systems) might work---many
see Poettering as having abused his position to force it upon others.
And, on top of all of that, it didn't have to be that way. Upstart does most of what systemd does while being POSIX-compatible
in most aspects.
K900
I'm pretty sure you can make systemd work on pure POSIX, if you drop all the cgroups code and stuff. I'm not too familiar with
the code, but I think I saw someone work on that stuff already.
Hengist
Of course you can make systemd work on POSIX if you disable large amounts of code and implement workarounds. You're essentially
creating a fork for your platform that resembles systemd less and less with every new systemd update.
Now every package that depended on that code being in systemd is broken too. The problem only gets worse as systemd adoption
increases, which appears inevitable given Poettering's position.
And all of that is a heck of a lot of developer work.
natermer
POSIX has almost no relevance anymore.
Two reasons:
If you care about portability you care about it running on OS X and Windows as well as your favorite *nix system. POSIX
gains you nothing here. A lot of the APIs from many of these systems will resemble POSIX closely, but if you don't take system-specific
differences into account you are not going to accomplish much.
I really doubt that any init system from any Unix system uses only POSIX interfaces, except maybe NetBSD's. All of them are
going to use scripts and services that are going to be running commands that use kernel-specific features at some point. Maybe
an init will compile and can be executed on a pure POSIX API, but that is a FAR FAR cry from actually having a booted and running
system.
aidanjt
a) Wrong, both OS X and Windows have POSIX support, although Windows' is emulated; OS X certainly is not, it's fully POSIX
compliant. And b) POSIX doesn't have to work identically everywhere, it only has to be more or less the same in most places, and
downstream can easily patch around OS-specific quirks. Even GNU/Linux and a bunch of the BSDs are merely regarded as 'mostly'
POSIX compliant, after all. But if you ignore POSIX entirely, there's ZERO hope of portability.
Actually sysvinit is very portable; init.c has only one single Linux header, which has been #ifdef'ed, to handle the three-finger salute.
You see, init really isn't that complicated a programme: you tell the kernel to load it after it's done its thing, init starts,
and loads distro scripts which start userspace programmes to carry on booting. No special voodoo magic is really required. POSIX
is to thank for that. POSIX doesn't need to be the only library ever; it only needs to handle most of the things you can't do without,
without having to directly poke at kernel-specific interfaces.
This is why with POSIX, we can take a piece of software written for a PPC AIX mainframe, and make it work on x86 Linux without
a complete rewrite, usually with only trivial changes.
cbmuser
Upstart doesn't have socket-based activation and doesn't support process resource management through cgroups.
systemd is way more advanced and sophisticated and for that it needs to use Linux-specific features. Why should we hold back
on speed, functionality and reliability in systemd just to be compatible with non-Linux operating systems which no-one really
uses nowadays anyways?
ohet
systemd requires/uses a lot more Linux specific features other than cgroups. Here's a dated and incomplete list of
those:
This is how you don't construct software... You don't make optional features a hard requirement (cgroups, autofs, gnu crap);
you test for a feature and utilize a fallback, or actually think of the problem being solved and work with what you have.
And no, GNU system interfaces don't provide some holy grail of functions; in fact most are utterly broken compared to the POSIX
variants. There are also alternatives to all of those on *BSD, some arguably even less broken.
EDIT: So I'm being downvoted. Would you like a list of how GNU extensions are broken? How about alternatives to some of these?
Also, GNU extensions are not Linux specific; most BSDs implement them, as well as half of this list.
natermer
Uses parallelization, a lot of it
Yes/No/Sorta. This is the best feature over Upstart. With Upstart you have to configure the parallel startup.
With systemd, all that happens is that systemd opens all the Unix and network sockets as if the daemons were running
(once the services are ported to systemd). Then, as daemons' sockets are accessed, they are launched: the kernel buffers the
socket requests, and the sockets are handed over to the services when the services are started. The effect of this is that systemd
automatically self-adjusts to your specific setup and starts up in the fastest manner possible.
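The mechanism described above is configured as a socket unit paired with a service unit. A hedged sketch follows; the unit names and port number are illustrative only:

```ini
# foo.socket -- systemd owns the listening socket from early boot
[Socket]
# Connections are buffered by the kernel until foo.service is up
ListenStream=8700

[Install]
WantedBy=sockets.target

# foo.service -- started lazily, on the first connection to port 8700;
# the already-open socket is passed to the daemon at startup
[Service]
ExecStart=/usr/sbin/food
```

Enabling foo.socket alone is enough; foo.service is pulled in on demand, which is what makes the "sockets first, daemons later" parallel startup possible.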
Integrated logging
I like this. With Systemd the command line utilities output status of the daemons and will indicate if it was ever started
and then crashed or if it never was started in the first place and so on and so forth. Makes it easier to figure out why this
or that service never started.
Besides the other pluses, it manages a lot of things in a much better way than was ever possible before, like quotas over
system resources or proper handling of multi-seat configurations.
Everything in one package
If you're running a serious server setup with central logging and management, then all that stuff will be needed anyway.
For desktop users it can make it easier to figure out what is going on in their desktop. For people wanting the 'lite'-est
system possible it can be useful, because the C nature of the thing and the tight integration mean that it actually should
be able to use fewer resources than traditional methods while not giving up much in the way of features.
Not POSIX compliant
Nothing is really 'POSIX' compliant if you want to use that term in this manner. Very few applications meet this sort of
requirement and I really seriously doubt that anybody runs a pure POSIX configuration.
Even the BSD stuff is going to have things they use that are BSD-specific, and I am sure that if you tried to use any modern init system
from ANY Unix/Linux system, it would require a lot of work to run on any other Unix/Linux system.
POSIX compliance really has almost no modern relevance anymore. Even modern versions of the Windows NT kernel are POSIX compliant
if you provide the right environment add-ons. (The NT kernel supports the ability to use multiple personalities, like Win32 or POSIX,
natively.)
It is forced aggressively
Nobody forced anybody else to do anything. It's all open source, and the people using it are choosing it voluntarily over
other possible configurations. Gnome isn't the federal government, and nobody is holding a gun to anybody else's head. It is
literally impossible to force anybody to use anything in open source software.
Anyways...
Either you make a choice to use it or you make a choice to not use it. It's a new thing that integrates deeply within
a Linux OS and expands capabilities and manageability in very significant ways.
The WORST thing you can possibly do in this situation is make a choice to not make a choice... like Debian. Having 'choices'
in this manner means you get the worst of all worlds. You get a 100x increase in complexity by supporting 3 different init
systems and you get almost none of the benefit of using anything beyond the init system from 10 years ago. They are putting
a huge amount of effort into making their system WORSE.
So far either people are choosing to stick with old INIT or are choosing to go SystemD. The only significant system using
Upstart is Ubuntu and Ubuntu has always had the worst Gnome 3 experience due to their competing Unity system (based on Gnome
3).
People wanting to test the systemd stuff out really will be taking the best approach by using Fedora...
thode
parallelization, openrc does that.
Integrated logging, not necessary as it over complicates things, I would rather have a separate logger.
Everything in one package, splitting things out is preferable for debugging and keeping dependency trees sane (want gnome,
better like Linux and systemd...)
Not POSIX compliant, not the biggest deal, but breaking from this can be annoying.
Forced aggressively, gnome in particular, along with the merge of udev into systemd-udev are good examples. The udev merge
breaks udev support for me...
I don't think anyone is denying its power, but losing modularity means losing choice, and for me that's one of
the primary things Linux is about.
As a Gentoo developer we have not fully decided for or against systemd (and therefore the udev merge).
The situation is very much complicated by the fact that, like Debian, we don't just support Linux; we have our own (better
than Debian's) FreeBSD support, for instance. This means that any decision we make has to keep that in mind.
Personally I'm hoping for a udev fork and to stick to openrc. We (gentoo) started udev and we will put it to bed if needed.
I know gregkh has a good reason to stop maintaining it, but I'm sad he did...
pigeon768
•Uses parallelization, a lot of it
Note that this feature is not unique to systemd. OpenRC (the init variant Gentoo uses) supports parallel startup as well.
•For most cases, all you need to do is start a daemon on boot and kill it on shutdown. Old bash-based init systems need
a large piece of boilerplate code to do that, but systemd doesn't.
...huh? Here's the init script for my rsync daemon:
#!/sbin/runscript
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/rsync/files/rsyncd.init.d-r1,v 1.1 2012/03/22 22:01:21 idl0r Exp $
command="/usr/bin/rsync"
command_args="--daemon ${RSYNC_OPTS}"
pidfile="/var/run/${SVCNAME}.pid"
depend() {
    use net
}
That's it. It will call the network init scripts, it will start the service at boot with the arguments pulled from /etc/conf.d/rsyncd,
and stop the service at shutdown. Sure, maybe it could be simpler, but that's simple enough for me.
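For comparison, a roughly equivalent systemd unit for the same rsync daemon might look like the following sketch. The EnvironmentFile path mirrors the Gentoo conf.d convention; the option values are assumptions, not copied from any distribution's package:

```ini
# rsyncd.service -- hypothetical counterpart of the runscript above
[Unit]
Description=rsync daemon
After=network.target

[Service]
# Pull RSYNC_OPTS from the same place the runscript reads it
EnvironmentFile=/etc/conf.d/rsyncd
# --no-detach keeps rsync in the foreground so systemd can supervise it
ExecStart=/usr/bin/rsync --daemon --no-detach $RSYNC_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Comparable in length to the runscript, so on brevity alone neither side wins; the real difference is declarative syntax versus an interpreted script.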
I'm not really sold on integrated logging or the API. It feels like unnecessary complexity. But what do I know, I don't even
use policykit or consolekit or upower or udisks or dbus or pulseaudio or cgroups or...
Hell, I don't even use a DE, just a bare WM.
2brainz
Currently, systemd has a lot of features in a single package. QR codes for log verification, a built-in HTTP server,
json serialization, you name it.
You just listed the features of journald (and mostly the unfinished ones - there is no useful client for the journal gatewayd
yet, so there is no point in enabling it). You forgot logind and udevd, which have been the major reason for criticism (especially
now that polkit requires logind).
This means a lot of dependencies that are not actually needed.
Define "needed". The only components most people don't need are the mentioned journald-gatewayd and its QR encode feature for
the FSS key. Disabling those, you remove libmicrohttpd and libqrencode from the dependencies. I cannot find any other "unneeded"
dependencies.
Lennart promised to split those out into separate packages later, but no one knows when 'later' is going to come.
I don't think he did.
systemd uses things that are exclusive to Linux, so it can't be used on *BSD systems.
This is not a con, it's a pro: All those amazing and useful features Linux has have been sitting there, mostly unused, for
years. Now, they are finally properly utilized in a way that is easy and beneficial for the end user. And people complain about
that.
I'm not sure if his personality is a valid point, but he seems to take a 'I'm right and fuck y'all' stance in some cases, and
I don't really like it.
It isn't a valid point. And I have found him to be very reasonable so far, I don't understand the complaints people have.
If you're running a serious server setup with central logging and a powerful syslog message processor, then most of the logging capabilities
of systemd are useless, if not harmful. And most servers are now run using remote logging anyway. In other words, they reinvented
the wheel. Add to this more complex integration with monitoring systems and management tools.
Red Hat Enterprise Linux 7 introduces a new logging daemon, journald, as part of the move to systemd.
journald captures the following types of message for all services:
syslog messages
kernel messages
initial RAM disk and early boot messages
messages sent to standard output and standard error output
It then stores these messages in native journal files: structured, indexed binary files that contain useful metadata and are faster
and easier to search.
Journal files are not stored persistently by default. The amount of data logged depends on the amount of free memory available;
when the system runs out of space in memory or in the /run/log/journal directory, the oldest journal files will be removed
in order to continue logging.
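Making the journal persistent, rather than memory-only as described above, is a configuration change in journald.conf. A sketch follows; the size limits are purely illustrative:

```ini
# /etc/systemd/journald.conf -- sketch; the limits below are examples only
[Journal]
# Keep journal files across reboots in /var/log/journal
Storage=persistent
# Cap total disk usage of persistent journal files
SystemMaxUse=500M
# Cap usage of the volatile /run/log/journal area
RuntimeMaxUse=64M
```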
On Red Hat Enterprise Linux 7, rsyslog and journald coexist. The data collected by journald
is forwarded to rsyslog, which can perform further processing and store text-based log files. By default, rsyslog
only stores the journal fields that are typical for syslog messages, but can be configured to store all the fields available
to journald.
Red Hat Enterprise Linux 7 therefore remains compatible with applications and system configurations that rely on rsyslog.
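The journald-to-rsyslog forwarding described above is driven from rsyslog's side. Here is a sketch of the relevant fragment in RHEL 7's legacy rsyslog.conf syntax; the state-file name is an assumption:

```ini
# /etc/rsyslog.conf (fragment) -- read from the journal instead of /dev/log
$ModLoad imjournal
# Keep the local socket module, but let imjournal supply local messages
$ModLoad imuxsock
$OmitLocalLogging on
# Bookmark of the last journal entry read, so a restart does not re-read logs
$IMJournalStateFile imjournal.state
```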
Another problem is that journald introduced an additional point of failure in a place where it is really painful. And debugging
Linux startup is already way too complex:
Journald makes logging simple and convenient, right?
journald has been known to run out of memory and stop responding.
Due to the design of "oh just connect stdout of the process to journald" if you restart journald it closes all of those file
descriptors and you silently lose all further logging from already running processes.
Journald, by design, will only log so much per process, meaning that if it's logged too much since startup/an error you're
interested in, you've now lost it.
Why 'they' had to go for demonstrably broken binary logging using a new interface I don't know. They could have just extended
the syslog format to make it mandatory to pass along the program name and process ID in the message. Then they could have made it
"easier" to find the logs by having a per-program/facility directory under /var/log and then stuck to simple,
plain text logging that the existing *nix tools can search with ease. But, nope, they had to go with this shitfest instead.
And that's only one component of the whole systemd shitfest. On a very simple Debian install I've had it exhibit issues with
shutdown, hanging on something that is simple or ignorable.
Sadly I had to abandon using Devuan after a while. The only really supported version is the jessie (Debian old-stable) version,
and I'm not sure even that gets timely security updates. Their equivalent of Debian stable (Stretch) is 'ascii' and got next to
no updates during the few months I used it. Boot up was both nice and fast (a major systemd selling point) and reliable
(unlike systemd). I guess they just need more in the way of human resources so that they can nail down which Debian packages have
problematic systemd tentacles involved, then they could pass through other Debian updates as soon as they're available.
journald completely disregards RFC 5424, section 6.3, including no support for picking up structured log data nor forwarding
its own structured data. It does forward existing structured syslog data by virtue of leaving log messages unaltered.
journald makes it impossible for syslog implementations to pick up trusted metadata via the kernel. Since it imposes itself
between the syslog daemon and the logging service, all kernel-obtainable metadata is from the journal server. If you need that,
you must interface with journald. (workaround module for rsyslog, which has trouble with corrupted journal files).
The journal's query API is essentially a reverse Polish datalog query builder with fixed five-level nesting and a fixed
operator type at each level. The API mixes parsed with non-parsed operations instead of providing a query language or criteria
construction engine.
journald stores its file descriptors in PID1 when it is stopped via sd_pid_notify_with_fds(). As this is not implemented
as a reliable transmission, journald restarts have a chance of losing all logging streams.
Journald automatically attempts to set nocow if /var/log/journal is on a btrfs filesystem. Nocow also disables btrfs checksumming
and thus potential data recovery from multiple block copies. This is not mitigated by the journal's limited checksumming. COW
is re-enabled when a journal file is put offline.
The journal file format description is --to this date-- still incomplete. There is no mention of --for example-- sealing and
LZ4 compression.
The journal still seems to strip whitespace from log messages before forwarding them to syslog. This means that, e.g., multiline
log entries with whitespace indentation as a continuation marker are mangled.
When /var/log/journal resides on a separate filesystem, journald might create the journal in (one of) the parent filesystems
and then mount /var{,/log{,/journal}} over that location, making the journal inaccessible during runtime. To fix this, you need
to make journald wait for the mount point. Waiting for the directory using a .path unit might not work, since it is journald that
creates the directory with Storage=persistent.
journalctl "-r" does not combine well with "-n" and does the wrong thing.
Journald's timestamps are not necessarily when the event happens, but when the journal daemon processes its queue, so that
means if it gets less CPU time for some reason or another, there will be a mismatch between the actual time of the event happening
and the time appearing in the journal (thanks to @rt2800pci1).
Documentation and "Closed Design" Issues
The journald query API is documented by completely avoiding any of the well-known jargon (conjunctive query, predicate, variable,
literal, atom) for database/datalog queries and instead uses custom idioms.
There are two levels of critique of systemd from a purely technical standpoint:
Critique of systemd from an architectural point of view: it has a weak, controversial design.
The "attempt to bite off more than one can chew" critique. Here the critique of journald is especially valid: in it
the author demonstrates even more clearly a complete lack of architectural vision and the narrow "C coder" mentality which he applies
to the replacement of the syslog daemon. See for example
The End of Linux - blog dot lusis
A good critique of systemd from an architectural point of view is provided in boycott systemd.
Check out the uselessd project for a saner systemd base.
systemd[0] is a replacement for the sysvinit daemon used in GNU/Linux and Unix systems, originally authored by Lennart Poettering
of Red Hat. It represents a monumental increase in complexity, a slap in the face to the Unix philosophy, and its inherent
domineering and viral nature turns it into something akin to a "second kernel" that is spreading all across the Linux ecosystem.
This site aims to serve as a rundown and a wake-up call to take a stand against the widespread proliferation of systemd, to detail
why it is harmful, and to persuade users to reject its use, and especially its ubiquity.
Disclaimer: We are not sysvinit purists by any means. We do recognize the need for a new init system in the 21st century,
but systemd is not it.
The Rundown
systemd flies in the face of the Unix philosophy: "do one thing and do it well," representing a complex collection
of dozens of tightly coupled binaries[1]. Its responsibilities grossly exceed that of an init system, as it goes on to handle power
management, device management, mount points, cron, disk encryption, socket API/inetd, syslog, network configuration, login/session
management, readahead, GPT partition discovery, container registration, hostname/locale/time management, mDNS/DNS-SD, the Linux
console and other things all wrapped into one. The agenda for systemd to be an ever-growing and invasive middleware for GNU/Linux
was elucidated in a 2014 GNOME Asia talk[2]. Keep it simple, stupid.
systemd's journal files (handled by journald) are stored in a complicated binary format, and must be queried using journalctl.
This makes journal logs potentially corruptible, as they do not have ACID-compliant transactions. You typically don't want that
to happen to your syslogs. The advice of the systemd developers? Ignore it. The only way to generate traditional logs is to
run a standard syslogd like rsyslog alongside the journal[4]. There's also embedded HTTP server integration (libmicrohttpd).
QR codes are served, as well, through libqrencode.
Since systemd is very tightly welded with the Linux kernel API, different systemd versions are incompatible with different
kernel versions and portability is unnecessarily hampered in many components. This is an isolationist policy that essentially
binds the Linux ecosystem into its own cage, serving as an obstacle to developing software portable with both Linux variations
and other Unix-like systems. It also raises some issues backporting patches and maintaining long-term stable systems.
udev and dbus are forced dependencies. In fact, udev merged with systemd a long time ago. The integration of the
device node manager, which was once a part of the Linux kernel, is not a decision that is to be taken lightly. The political implications
of it are high, and it makes a lot of packages dependent on udev, in turn dependent on systemd, despite the existence of forks,
such as eudev. Starting with systemd-209, the developers now have their own, non-standard and sparsely documented sd-bus API that
replaces much of libdbus's job, and further decreases transparency. Further, they intend to migrate udev to this new transport,
replacing Netlink and thus making udev a systemd-only daemon[6]. The effects of this move are profound.
systemd features a helper which captures coredumps and directs them either to /var/lib/systemd/coredump... or the journal,
where they must be queried using coredumpctl[7]. The latter behavior was a default and is likely to return. It assumes that
users and admins are dumb, but more critically, the fundamentally corruptible nature of journal logs makes this a severe impediment,
and an irresponsible design choice. It can also create complications in multi-user environments related to privileges.
systemd's size makes it a single point of failure. As of this writing, systemd has had 9 CVE reports since its inception
in March 2010[10]. So far, this may not seem like that much, but its essential and overbearing nature will make it a juicy target
for crackers, as it is far smaller in breadth than the Linux kernel itself, yet seemingly just as critical.
systemd is viral by its very nature, due to its auxiliaries exposing APIs, while being bound to systemd's init. Its
scope in functionality and creeping in as a dependency to lots of packages means that distro maintainers will have to necessitate
a conversion, or suffer a drift. As an example, the GNOME environment often makes use of systemd components, such as logind, and
support for non-systemd systems is becoming increasingly difficult. Under Wayland, GNOME relies on logind, which in turn requires
and is a part of systemd[11]. More and more maintainers are going to require systemd for this reason, and similar instances like
it. The rapid rise in adoption by distros such as Debian, Arch Linux, Ubuntu, Fedora, openSUSE and others shows that many are
jumping onto the bandwagon, with or without justification. Other dependent packages include the Weston compositor, Polkit, upower,
udisks2, PackageKit, etc. It's also worth noting that systemd will refuse to start as a user instance, unless the system boots
with it as well - blatant coercion.
systemd clusters itself into PID 1, rather than acting as a standalone process supervisor. Due to it controlling
lots of different components, there are tons of scenarios in which it can crash and bring down the whole system. We should
also mention that in order to reduce the need for rebooting, systemd provides a mechanism to reserialize and reexecute systemctl
in real time, however, if this fails, of course, the system goes down. There are several ways that this can occur, including an
inability to reload a previous, potentially incompatible state. This happens to be another example of SPOF and an unnecessary
burden on an already critical component (init).
systemd is designed with glibc in mind, and doesn't take kindly to supporting other libcs all that much. In general,
the systemd developers' idea of a standard libc is one that has bug-for-bug compatibility with glibc.
systemd's complicated nature makes it harder to extend and step outside its boundaries. While you can more or less
trivially start shell scripts from unit files, it's more difficult to write behavior that goes outside the box, what with all
the feature bloat. Many users will likely need to write more complicated programs that directly interact with the systemd API,
or even patch systemd directly. One also needs to worry about a much higher multitude of code paths and behaviors in a system-critical
program, including the possibility of systemd not synchronizing with the message bus queue on boot, and thus freezing. This is
as opposed to a conventional init, which is deterministic and predictable in nature, mostly just serially execing scripts.
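The serial, deterministic behavior of a conventional init can be sketched in a few lines of shell. This is a simplified illustration, not actual SysV code; the script names and the scratch directory standing in for /etc/rc3.d are hypothetical:

```shell
#!/bin/sh
# Minimal sketch of a SysV-style rc sequence: run numbered start scripts
# strictly in glob (sorted) order, one at a time, deterministically.
RCDIR=$(mktemp -d)   # scratch dir standing in for /etc/rc3.d
printf '#!/bin/sh\necho "starting network"\n' > "$RCDIR/S10network"
printf '#!/bin/sh\necho "starting sshd"\n'    > "$RCDIR/S20sshd"
chmod +x "$RCDIR"/S*
for script in "$RCDIR"/S*; do   # the glob expands in sorted order
    [ -x "$script" ] && "$script" start
done
rm -rf "$RCDIR"
```

Every boot runs the same scripts in the same order, which is exactly the predictability the paragraph above contrasts with systemd's parallel, message-bus-driven startup.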
Ultimately, systemd's spread is symbolic of something more than systemd itself. It shows a radical shift in thinking
by the Linux community. Not necessarily a positive one, either. One that is heavily desktop-oriented, choice-limiting, isolationist,
reinvents the flat tire, and is just a huge anti-pattern in general. If your goal is to pander to the lowest common denominator,
so be it. We will look for alternatives, however.
systemd doesn't even know what it wants to be. It is variously referred to as a "system daemon" or a "basic userspace
building block to make an OS from", both of which are highly ambiguous. It engulfs functionality that variously belonged to util-linux,
wireless tools, syslog and other projects. It has no clear direction, other than the whims of the developers themselves. Ironically,
despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling.
Here is another more recent post of the same theme:
What exactly is systemd, and why do we keep hearing so much about it?
Part of the problem is that it's poorly defined. It's touted as a replacement for the init system (the system that manages
other services; for a Windows user, its core function is like the services host process: it's where you can start and stop services,
determine which start at system startup, see which are running, restart crashed services, and so on). It does startup in
parallel, so it's faster than the traditional init system.
But it doesn't just replace init: it replaces cron (the task scheduling system -- "scheduled backups and such", not "CPU thread
scheduling"); it replaces the event logging system; it replaces the login system...
The unix philosophy is for components to be small, do one thing well, and let users build a system out of the different
pieces they want. systemd is big, tightly integrated, and more of an all-or-nothing proposition, and that rubs a lot of people the wrong
way.
And the main valid criticisms are (IMHO):
1) Binary logging -- the advantages of the systemd logging system are apparent, but there are disadvantages too; users
should have a choice.
2) It potentially creates a layer between kernel and the rest of the system that becomes entrenched and irreplaceable.
As applications going forward will develop dependencies on the rich services of systemd it will become impossible to replace systemd
with anything else, except maybe a fork of systemd. (This rubs a lot of people the wrong way.)
3) the rich service layer and tight integration stifle innovation; for example, assuming systemd has traction, someone
can't make a "better cron" now, because that functionality is part of systemd. They can't make a better init-only system, because
applications will be relying on all the other services of systemd.
4) it gets between the rest of the system and the kernel, and in many cases you have to work through systemd and can't
just go to the kernel. This has its good points, but also its problems and further entrenches systemd.
Perhaps GNU/Linux systems with systemd should properly be called GNU/systemd/Linux systems to emphasize the point.
I don't personally hate systemd; I recognize that a lot of things it does are good for large parts of the Linux user base. But I
do agree with the 'haters' that it's not modular enough, and that leads to several valid complaints.
It doesn't help that the egos involved on all sides are large and uncompromising.
As of March 2015, systemd is used on both RHEL 7 and SLES 12, two major enterprise Linux distributions. I know one case in
which a large company decided to drop (severely decrease, to be exact) RHEL and increase usage of Windows under VMware, partially because
of this set of changes introduced by RHEL 7. I also know another company which (informally) decided to stick with RHEL 6.x and wait to see how
RHEL 7 evolves before adopting it (probably after support for RHEL 6 is over, which is 2020), if its hardware is still compatible
with 6.x. So the reaction to this "innovation" in the corporate world, as far as I know, is mixed. A lot of people understand the game Red
Hat is playing and are not happy. I hope that this hurts Red Hat's profit margins.
Especially dangerous for them is the use of CentOS/Academic OS or something similar as a replacement for RHEL, with outside third-party support,
which is yet another possible trend. As of August 2015 the most popular versions of RHEL remain 6.5-6.7.
But generally the enterprise world is dumb enough to eat whatever is served, although during a recession some localized
sparks of intelligence can temporarily prevail ;-)
There might be multiple reasons for the current dominance of systemd, which is now used in RHEL 7, SLES 12, Debian
and Ubuntu. Among them:
One is the corruption/technical degradation inside Red Hat, which is now more of a financial company than a technical company, and
in which the level of tech support has deteriorated considerably.
The second one is that an organized minority always dominates an unorganized majority. This is the essence of the "iron law of oligarchy".
Here is an interesting thread from Slashdot (March 06, 2015) that touches the question "Why has systemd overrun major Linux
distributions?"
The main reason is that paid developers from Red hat, Suse and Canonical now constitute Linux oligarchy, which can decide how things
should be done between themselves. Other developers simply do not matter that same way as regular US voters do not matter.
There are several main reasons why systemd has overrun some of the best known distros. One of the biggest is simple: GNOME
depends on it, and soon KDE will too. Distro maintainers either bend over for systemd, or will spend a lot of time patching
and trying to get these two desktops working on GNU/Linux.
Then, you have two types of distro maintainers.
Volunteers, and paid developers. Volunteers are guys like you and me, with limited time to help, doing things on spare
time.
Paid developers are usually Red Hat or Canonical employees (we also had Novell employees when they destroyed SuSE), and
the former seem to be more numerous, with more money to spend on pushing Red Hat technologies.
Unpaid volunteers can't even compete with the deluge of code and the sponsored conferences and presentations. Any alternative
or dissenting voice is either bought or pressured to give up.
Finally, some claim that systemd solves a lot of things that didn't work, and that if you don't know what these are
then you are an idiot, as obviously Linux has never worked well in the last 20 years.
But what do I know, I've been told enough times that I am heretic (hater in doubleplusgood newspeak) for daring to criticise
systemd.
CurryCamel (2265886) on Friday March 06, 2015 @01:03PM (#49198297) Journal
Re: What is systemd exactly? (Score:5, Interesting)
That baffles me too.
But I guess you have your 'minority' and 'majority' mixed. A more powerful minority - the distro makers - makes this decision
(and they seem terribly non-vocal; I'm still hoping someone will explain in simple terms why systemd is a good thing. No, cutting
down the cold boot time from the ~20s it takes with init is not a terribly good reason in my book).
I don't like systemd, but I am not that vocal about it. I don't know it closely enough to comment. My experience with systemd
is as follows:
-About 99% of linux crashes (subjective measurement) I have seen in the past 10 years happen on my Fedora box. The only
one I have that runs systemd. Coincidence? I don't know.
-The same Fedora box cannot mount /home at bootup. I have to log in as root, and mount it over command line.
-Googling for the error it gives at bootup doesn't give help, as systemd doesn't have the same amount of answers to previous
questions as older systems have.
The point is, I cannot blame systemd for this. I should RTFM. As soon as I find it. And have time for it.
Qualys has found an ugly Linux systemd security hole that can enable any unprivileged user
to crash a Linux system. The patch is available, and you should deploy it as soon as
possible.
... ... ...
It works by enabling attackers to misuse the alloca() function in a way that would result in
memory corruption. This, in turn, allows a hacker to crash systemd and hence the entire
operating system. Practically speaking, this can be done by a local attacker mounting a filesystem on
a very long path. This causes too much memory space to be used in the systemd stack, which
results in a system crash.
A seven-year-old privilege escalation vulnerability that's been lurking in several Linux
distributions was patched last week in a coordinated disclosure.
In a blog post on Thursday, GitHub security researcher Kevin Backhouse recounted how he found the bug
(CVE-2021-3560) in a service called polkit associated with systemd, a common Linux system and service
manager component.
Introduced in commit bfa5036 seven years ago and initially shipped in polkit version 0.113, the bug traveled
different paths in different Linux distributions. For example, it missed Debian 10, but it made
it to the unstable version of Debian, upon which other distros like Ubuntu are based.
Formerly known as PolicyKit, polkit is a service that evaluates whether specific Linux
activities require higher privileges than those currently available. It comes into play if, for
example, you try to create a new user account.
Backhouse says the flaw is surprisingly easy to exploit, requiring only a few commands using
standard terminal tools like bash, kill, and dbus-send.
"The vulnerability is triggered by starting a dbus-send command but killing it
while polkit is still in the middle of processing the request," explained Backhouse.
Killing dbus-send – an interprocess communication command – in the
midst of an authentication request causes an error that arises from polkit asking for the UID
of a connection that no longer exists (because the connection was killed).
"In fact, polkit mishandles the error in a particularly unfortunate way: rather than
rejecting the request, it treats the request as though it came from a process with UID 0,"
explains Backhouse. "In other words, it immediately authorizes the request because it thinks
the request has come from a root process."
This doesn't happen all the time, because polkit's UID query to the dbus-daemon
occurs multiple times over different code paths. Usually, those code paths handle the error
correctly, said Backhouse, but one code path is vulnerable – and if the disconnection
happens when that code path is active, that's when the privilege elevation occurs. It's all a
matter of timing, which varies in unpredictable ways because multiple processes are
involved.
The intermittent nature of the bug, Backhouse speculates, is why it remained undetected for
seven years.
"CVE-2021-3560 enables an unprivileged local attacker to gain root privileges," said
Backhouse. "It's very simple and quick to exploit, so it's important that you update your Linux
installations as soon as possible." ®
The polkit service is used by systemd. Linux systems that have polkit version 0.113 or later installed - like Debian (unstable),
RHEL 8, Fedora 21+, and Ubuntu 20.04 - are affected.
Ancient Linux bugs provide root access to unprivileged users
Security researchers have discovered a 7-year-old vulnerability in Linux distributions
that can be used by unprivileged local users to bypass authentication and gain root access.
The bug, patched last week, exists in the Polkit system service, a toolkit used to assess whether a particular Linux activity requires
higher privileges than those currently available. Polkit is installed by default on some Linux distributions, allowing unprivileged
processes to communicate with privileged processes.
Linux distributions that use systemd also use Polkit because the Polkit service is associated with systemd.
This vulnerability is tracked as CVE-2021-3560 and has a CVSS score of 7.8. It was discovered by Kevin Backhouse, a
security researcher at GitHub. He states that the issue was introduced in 2013 with code commit bfa5036.
Initially shipped with Polkit version 0.113, it moved into various Linux distributions over the last seven years.
"If the requesting process disconnects from dbus-daemon just before the call to polkit_system_bus_name_get_creds_sync begins, the
process will not be able to get the unique uid and pid of the process and will not be able to verify the privileges of the
requesting process," according to the Red Hat advisory.
"The biggest threats from this vulnerability are data confidentiality and integrity, and system availability."
According to Backhouse's blog post, exploiting this vulnerability is very easy and requires only a few commands using standard terminal
tools such as bash, kill and dbus-send.
This flaw affects Polkit versions between 0.113 and 0.118. Red Hat's Cedric Buissart said it will also affect Debian-based
distributions based on Polkit 0.105.
Among the popular Linux distributions affected are Debian "Bullseye", Fedora 21 (or later), Ubuntu 20.04, RHEL 8.
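A quick way to check whether a given polkit version string falls inside the affected 0.113-0.118 range is version comparison with GNU `sort -V`. The `affected` helper below is my own hypothetical sketch, not part of any official tooling:

```shell
#!/bin/sh
# affected VERSION: succeeds if VERSION is within 0.113..0.118 inclusive.
# Relies on GNU coreutils' version sort (sort -V) for the comparison.
affected() {
    v=$1
    lo=$(printf '%s\n%s\n' 0.113 "$v" | sort -V | head -n1)  # min(0.113, v)
    hi=$(printf '%s\n%s\n' "$v" 0.118 | sort -V | head -n1)  # min(v, 0.118)
    [ "$lo" = 0.113 ] && [ "$hi" = "$v" ]
}
affected 0.115 && echo "0.115: in affected range"
affected 0.119 || echo "0.119: outside affected range"
```

On a real system you would feed this the version reported by your package manager (e.g. `rpm -q polkit` or `dpkg -s policykit-1`), bearing in mind the note above that some Debian builds of 0.105 are also affected.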
Polkit v0.119, released on June 3rd, addresses this issue. We recommend that you update your Linux installation as soon as
possible to prevent attackers from exploiting the bug.
CVE-2021-3560 is the latest in a series of long-dormant vulnerabilities affecting Linux distributions.
In 2017, Positive Technologies researcher Alexander Popov discovered a flaw in the Linux kernel that had been introduced in 2009.
Tracked as CVE-2017-2636, the flaw was finally patched in 2017.
Another old Linux security flaw, indexed as CVE-2016-5195, was introduced in 2007 and patched in 2016. This bug, also known as
"Dirty COW", was used in many attacks before the patch was applied.
In this article, we'll show you how to manage systemd units using the systemctl command.
What's systemd?
systemd is a system and service manager for modern Linux operating systems, which is backward compatible with SysV and LSB init scripts.
It provides numerous features, such as parallel startup of system services at boot time and on-demand activation of daemons.
systemd introduces the concept of systemd units, which are located under '/usr/lib/systemd/system', whereas legacy init scripts were located under '/etc/rc.d/init.d'.
systemd is the first process that starts after the system boots, and it holds PID 1.
systemd unit types
There are different types of unit files that represent system resources and services. Each unit file type comes with its own extension; below are the commonly used systemd unit types.
Unit files are plain-text files that can be created or modified by a privileged user.
Run the following command to see all unit types:
$ systemctl -t help
Service unit (.service) - A service on the system, including instructions for starting, restarting, and stopping the service.
Target unit (.target) - Replaces the SysV init run levels that control system boot.
Device unit (.device) - A device file recognized by the kernel.
Mount unit (.mount) - A file system mount point.
Socket unit (.socket) - A network socket associated with a service.
Swap unit (.swap) - A swap device or a swap file.
Timer unit (.timer) - A systemd timer.
What's systemctl?
The systemctl command is the primary tool to manage or control systemd units. It combines the functionality of SysVinit's service and chkconfig commands into a single command.
It comes with a long list of options for different functionality; the most commonly used are starting, stopping, restarting, masking, and reloading a daemon.
To list all loaded units regardless of their state, run the following command in your terminal. It lists all units, including service, target, mount, socket, etc.
$ systemctl list-units --all
Listing Services
To list all currently
loaded service units, run:
$ systemctl list-units --type service
or
$ systemctl list-units --type=service
Details about the header of the above output:
UNIT = The name of the service unit.
LOAD = Reflects whether the unit file has been loaded.
ACTIVE = The high-level unit file activation state.
SUB = The low-level unit file activation state.
DESCRIPTION = Short description of the unit file.
By default, the 'systemctl list-units' command displays only active units. If you want to list all loaded units regardless of their state, run:
$ systemctl list-units --type service --all
UNIT LOAD ACTIVE SUB DESCRIPTION
accounts-daemon.service loaded active running Accounts Service
● acpid.service not-found inactive dead acpid.service
after-local.service loaded inactive dead /etc/init.d/after.local Compatibility
alsa-restore.service loaded active exited Save/Restore Sound Card State
alsa-state.service loaded inactive dead Manage Sound Card State (restore and store)
● amavis.service not-found inactive dead amavis.service
apparmor.service loaded active exited Load AppArmor profiles
appstream-sync-cache.service loaded inactive dead Synchronize AppStream metadata from repositories into AS-cache
auditd.service loaded active running Security Auditing Service
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
backup-rpmdb.service loaded inactive dead Backup RPM database
backup-sysconfig.service loaded inactive dead Backup /etc/sysconfig directory
bluetooth.service loaded active running Bluetooth service
btrfs-balance.service loaded inactive dead Balance block groups on a btrfs filesystem
btrfs-scrub.service loaded inactive dead Scrub btrfs filesystem, verify block checksums
btrfs-trim.service loaded inactive dead Discard unused blocks on a mounted filesystem
btrfsmaintenance-refresh.service loaded inactive dead Update cron periods from /etc/sysconfig/btrfsmaintenance
ca-certificates.service loaded inactive dead Update system wide CA certificates
check-battery.service loaded inactive dead Check if mainboard battery is Ok
colord.service loaded active running Manage, Install and Generate Color Profiles
cron.service loaded active running Command Scheduler
cups.service loaded active running CUPS Scheduler
To display a single property, use the '-p' flag with the property name.
$ systemctl show sshd.service -p ControlGroup
ControlGroup=/system.slice/sshd.service
To recursively show the dependencies of a unit, run the following. For instance, to show the dependencies of the ssh service:
$ systemctl list-dependencies sshd.service
sshd.service
● ├─system.slice
● └─sysinit.target
●   ├─detect-part-label-duplicates.service
●   ├─dev-hugepages.mount
●   ├─dev-mqueue.mount
●   ├─dracut-shutdown.service
●   ├─haveged.service
●   ├─kmod-static-nodes.service
●   ├─lvm2-lvmpolld.socket
●   ├─lvm2-monitor.service
●   ├─plymouth-read-write.service
●   ├─plymouth-start.service
Listing Sockets
To list socket units
currently in memory, run:
$ systemctl list-units --type=socket
or
$ systemctl list-sockets
UNIT LOAD ACTIVE SUB DESCRIPTION
---------------------------------------------------------------------------------------------------------
avahi-daemon.socket loaded active running Avahi mDNS/DNS-SD Stack Activation Socket
cups.socket loaded active running CUPS Scheduler
dbus.socket loaded active running D-Bus System Message Bus Socket
dm-event.socket loaded active listening Device-mapper event daemon FIFOs
iscsid.socket loaded active listening Open-iSCSI iscsid Socket
lvm2-lvmpolld.socket loaded active listening LVM2 poll daemon socket
pcscd.socket loaded active listening PC/SC Smart Card Daemon Activation Socket
syslog.socket loaded active running Syslog Socket
systemd-initctl.socket loaded active listening /dev/initctl Compatibility Named Pipe
systemd-journald-dev-log.socket loaded active running Journal Socket (/dev/log)
systemd-journald.socket loaded active running Journal Socket
systemd-rfkill.socket loaded active listening Load/Save RF Kill Switch Status /dev/rfkill Watch
systemd-udevd-control.socket loaded active running udev Control Socket
systemd-udevd-kernel.socket loaded active running udev Kernel Socket
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
14 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Listing Mounts
To list mount units
currently loaded, run:
$ systemctl list-units --type=mount
UNIT LOAD ACTIVE SUB DESCRIPTION
----------------------------------------------------------------------------
-.mount loaded active mounted Root Mount
\x2esnapshots.mount loaded active mounted /.snapshots
boot-grub2-i386\x2dpc.mount loaded active mounted /boot/grub2/i386-pc
boot-grub2-x86_64\x2defi.mount loaded active mounted /boot/grub2/x86_64-efi
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
home.mount loaded active mounted /home
opt.mount loaded active mounted /opt
proc-sys-fs-binfmt_misc.mount loaded active mounted Arbitrary Executable File Formats File System
root.mount loaded active mounted /root
run-media-linuxgeek-DATA.mount loaded active mounted /run/media/linuxgeek/DATA
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
run-user-1000.mount loaded active mounted /run/user/1000
srv.mount loaded active mounted /srv
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-debug-tracing.mount loaded active mounted /sys/kernel/debug/tracing
sys-kernel-debug.mount loaded active mounted Kernel Debug File System
tmp.mount loaded active mounted /tmp
usr-local.mount loaded active mounted /usr/local
var.mount loaded active mounted /var
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
20 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Listing Timers
To list timer units currently loaded, run:
$ systemctl list-timers
NEXT LEFT LAST PASSED UNIT ACTIVATES
Fri 2021-06-04 17:00:00 IST 8min left Fri 2021-06-04 16:00:03 IST 51min ago snapper-timeline.timer snapper-timeline.service
Fri 2021-06-04 21:38:01 IST 4h 46min left Thu 2021-06-03 12:10:13 IST 1 day 4h ago snapper-cleanup.timer snapper-cleanup.service
Fri 2021-06-04 21:42:54 IST 4h 51min left Thu 2021-06-03 12:15:06 IST 1 day 4h ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Sat 2021-06-05 00:00:00 IST 7h left Fri 2021-06-04 00:00:23 IST 16h ago logrotate.timer logrotate.service
Sat 2021-06-05 00:00:00 IST 7h left Fri 2021-06-04 00:00:23 IST 16h ago mandb.timer mandb.service
Sat 2021-06-05 00:43:15 IST 7h left Fri 2021-06-04 01:52:04 IST 14h ago check-battery.timer check-battery.service
Sat 2021-06-05 00:48:48 IST 7h left Fri 2021-06-04 00:05:23 IST 16h ago backup-rpmdb.timer backup-rpmdb.service
Sat 2021-06-05 01:41:30 IST 8h left Fri 2021-06-04 00:57:23 IST 15h ago backup-sysconfig.timer backup-sysconfig.service
Mon 2021-06-07 00:00:00 IST 2 days left Tue 2021-06-01 03:16:20 IST 3 days ago btrfs-balance.timer btrfs-balance.service
Mon 2021-06-07 00:00:00 IST 2 days left Mon 2021-05-31 12:08:22 IST 4 days ago fstrim.timer fstrim.service
Thu 2021-07-01 00:00:00 IST 3 weeks 5 days left Tue 2021-06-01 03:16:20 IST 3 days ago btrfs-scrub.timer btrfs-scrub.service
11 timers listed.
Pass --all to see loaded but inactive timers, too.
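For comparison with a classic cron entry, a daily job under systemd is expressed as a pair of units: a timer that fires on a schedule and a service that does the work. The names below are hypothetical:

```ini
# backup.timer (hypothetical) - fires daily and activates backup.service
[Unit]
Description=Daily backup timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

The matching backup.service would carry the actual ExecStart command; the timer is activated with 'systemctl enable --now backup.timer'.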
Service Management
A service is one of the unit types in the systemd system; service units have unit files with a suffix of '.service'.
Six common actions can be performed against a service, divided into two major groups.
Boot-time actions: enable and disable, which are used to control a service at boot time.
Run-time actions: start, stop, restart, and reload, which are used to control a service on demand.
Start a service
To start a systemd service, run the following. The 'UNIT_NAME' could be any application name, like sshd, httpd, mariadb, etc.
$ sudo systemctl start UNIT_NAME.service
Stop a service
To stop a currently running service, execute the following. For instance, to stop the Apache httpd service:
$ sudo systemctl stop httpd.service
Restart and reload a service
To restart a running
service, run:
$ sudo systemctl restart UNIT_NAME.service
You may need to reload a service after making changes to its configuration file, which will bring in the new parameters that you added. To do so, run:
$ sudo systemctl reload UNIT_NAME.service
Enabling and disabling a service at boot
To start services
automatically at boot, run: This will create a symlink from either
"˜/usr/lib/systemd/system/UNIT_NAME.service'
or
"˜/etc/systemd/system/UNIT_NAME.service'
to
the
"˜/etc/systemd/system/SOME_TARGET.target.wants/UNIT_NAME.service'
.
$ sudo systemctl enable UNIT_NAME.service
You can double check
that the service is enabled by executing the following command.
$ systemctl is-enabled UNIT_NAME.service
To disable the service at boot, run the following. This will remove the symlink that was created earlier for the service unit.
$ sudo systemctl disable UNIT_NAME.service
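Under the hood, enable and disable are just symlink bookkeeping. The sketch below imitates the idea in a throwaway scratch directory; the paths and the unit name are hypothetical, and nothing here touches the real /etc/systemd/system:

```shell
#!/bin/sh
# Imitate 'systemctl enable/disable' symlink bookkeeping in a scratch dir
# that stands in for /etc/systemd/system.
SYSDIR=$(mktemp -d)
mkdir -p "$SYSDIR/multi-user.target.wants"
touch "$SYSDIR/myapp.service"                 # the installed unit file
# "enable": link the unit into the target's .wants directory
ln -s "$SYSDIR/myapp.service" "$SYSDIR/multi-user.target.wants/myapp.service"
[ -L "$SYSDIR/multi-user.target.wants/myapp.service" ] && echo enabled
# "disable": remove the link again
rm "$SYSDIR/multi-user.target.wants/myapp.service"
[ ! -e "$SYSDIR/multi-user.target.wants/myapp.service" ] && echo disabled
rm -rf "$SYSDIR"
```

This is why 'systemctl is-enabled' is cheap: it only has to check whether the symlink exists.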
Checking the status of service
To check the status of a service, run the following. This will give you more detailed information about the service unit.
$ systemctl status UNIT_NAME.service
# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/httpd.service.d
└─limit_nofile.conf, respawn.conf
Active: active (running) since Fri 2021-05-28 03:23:54 IST; 1 weeks 3 days ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 19226 ExecReload=/usr/sbin/httpd $OPTIONS -k graceful (code=exited, status=0/SUCCESS)
Main PID: 25933 (httpd)
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec"
Tasks: 187
Memory: 479.6M
CGroup: /system.slice/httpd.service
├─12161 /usr/sbin/httpd -DFOREGROUND
├─19283 /usr/sbin/httpd -DFOREGROUND
├─19284 /usr/sbin/httpd -DFOREGROUND
├─19286 Passenger watchdog
├─19289 Passenger core
├─19310 /usr/sbin/httpd -DFOREGROUND
├─19333 /usr/sbin/httpd -DFOREGROUND
├─19339 /usr/sbin/httpd -DFOREGROUND
├─19459 /usr/sbin/httpd -DFOREGROUND
├─20564 /opt/plesk/php/5.6/bin/php-cgi -c /var/www/vhosts/system/thilaexports.com/etc/php.ini
├─21821 /usr/sbin/httpd -DFOREGROUND
└─25933 /usr/sbin/httpd -DFOREGROUND
Jun 06 12:19:11 ns1.nowdigitaleasy.com systemd[1]: Reloading The Apache HTTP Server.
Jun 06 12:19:12 ns1.nowdigitaleasy.com systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:18:06 ns1.nowdigitaleasy.com systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:18:07 ns1.nowdigitaleasy.com systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:18:26 ns1.nowdigitaleasy.com systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:18:27 ns1.nowdigitaleasy.com systemd[1]: Reloaded The Apache HTTP Server.
Jun 06 13:19:09 ns1.nowdigitaleasy.com systemd[1]: Reloading The Apache HTTP Server.
Jun 06 13:19:10 ns1.nowdigitaleasy.com systemd[1]: Reloaded The Apache HTTP Server.
Jun 07 04:10:25 ns1.nowdigitaleasy.com systemd[1]: Reloading The Apache HTTP Server.
Jun 07 04:10:26 ns1.nowdigitaleasy.com systemd[1]: Reloaded The Apache HTTP Server.
To check whether a service unit is currently active (running), execute the command below.
$ systemctl is-active UNIT_NAME.service
Masking and Unmasking Units
To prevent a service unit from being started, either manually or by another service, you need to mask it. Masking a service unit
completely disables it; the service will not start until it is unmasked.
$ sudo systemctl mask UNIT_NAME.service
If you try to start
the masked service, you will see the following message:
$ sudo systemctl start UNIT_NAME.service
Failed to start UNIT_NAME.service: Unit UNIT_NAME.service is masked.
To unmask a unit,
run:
$ sudo systemctl unmask UNIT_NAME.service
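Masking works by symlinking the unit name to /dev/null in /etc/systemd/system, which is why even a manual start is refused. A scratch-directory sketch of the mechanism (the unit name is hypothetical, and the real system directory is not touched):

```shell
#!/bin/sh
# Imitate 'systemctl mask' bookkeeping: the unit name resolves to /dev/null,
# so systemd finds an empty unit it can never start.
SYSDIR=$(mktemp -d)                        # stands in for /etc/systemd/system
ln -s /dev/null "$SYSDIR/myapp.service"    # "mask"
readlink "$SYSDIR/myapp.service"           # prints /dev/null
rm "$SYSDIR/myapp.service"                 # "unmask" removes the link
rm -rf "$SYSDIR"
```

Because the mask link lives in /etc/systemd/system, it overrides any real unit file of the same name in /usr/lib/systemd/system.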
Creating and modifying systemd unit files
In this section, we will show you how to create and modify systemd unit files. There are three main directories where unit files are stored on the system.
/usr/lib/systemd/system/ - systemd unit files dropped in when a package is installed.
/run/systemd/system/ - systemd unit files created at run time.
/etc/systemd/system/ - systemd unit files created by the 'systemctl enable' command, as well as unit files added for extending a service.
Modifying existing systemd unit file
In this example, we will show how to modify an existing unit file. The '/etc/systemd/system/' directory is reserved for unit files created or customized by the system administrator.
For example, to edit the 'httpd.service' unit file, run:
$ sudo systemctl edit httpd.service
This creates an override snippet file at '/etc/systemd/system/httpd.service.d/override.conf' and opens it in your text editor. Add new parameters for the httpd.service unit; they will be merged into the existing service configuration when the file is saved.
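For illustration, an override snippet might add restart behavior without touching the vendor unit file. The parameters below are an assumed example, not taken from the article:

```ini
# /etc/systemd/system/httpd.service.d/override.conf (illustrative)
[Service]
Restart=on-failure
RestartSec=5s
```

Only the keys you set here are overridden; everything else still comes from the original httpd.service.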
Restart the httpd service to load the new service configuration (a unit must be restarted if you modify its file while it is running).
$ sudo systemctl restart httpd
If you want to edit the full unit file, run:
$ sudo systemctl edit --full httpd.service
This will load the current unit file into the editor. When the file is saved, systemctl will write it to '/etc/systemd/system/httpd.service'.
Make a note:
Any unit file in /etc/systemd/system will override the corresponding file in /lib/systemd/system.
To revert the changes, or return to the default configuration of the unit, delete the custom configuration files.
To remove a snippet, run:
$ sudo rm -r /etc/systemd/system/httpd.service.d
To remove a fully modified unit file, run:
$ sudo rm /etc/systemd/system/httpd.service
To apply changes to unit files without rebooting the system, execute the following. The 'daemon-reload' option reloads all unit files and recreates the entire dependency tree.
$ sudo systemctl daemon-reload
systemd Targets
Targets are specialized unit files that determine the state of a Linux system. systemd uses targets to group other units together through a chain of dependencies, serving a purpose similar to runlevels.
Each target is named rather than numbered; unit files can be linked to a target, and multiple targets can be active simultaneously.
Listing Targets
To view a list of the
available targets on your system, run:
$ systemctl list-units --type=target
or
$ systemctl list-unit-files --type=target
UNIT LOAD ACTIVE SUB DESCRIPTION
---------------------------------------------------------------------------
basic.target loaded active active Basic System
bluetooth.target loaded active active Bluetooth
cryptsetup.target loaded active active Local Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
network-pre.target loaded active active Network (Pre)
network.target loaded active active Network
nss-lookup.target loaded active active Host and Network Name Lookups
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target loaded active active Paths
remote-fs-pre.target loaded active active Remote File Systems (Pre)
remote-fs.target loaded active active Remote File Systems
slices.target loaded active active Slices
sockets.target loaded active active Sockets
sound.target loaded active active Sound Card
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
time-sync.target loaded active active System Time Synchronized
timers.target loaded active active Timers
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
23 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Displaying the default target
By default, the
systemd process uses the default target when booting the system. To view the default target on your system, run:
$ systemctl get-default
multi-user.target
To set a different target as the default, use 'set-default'. For instance, to make graphical.target the default, run:
$ sudo systemctl set-default graphical.target
Changing the current active target
To change the currently active target immediately, use 'isolate'. For example, to switch from the graphical target (GUI) to the multi-user target (CLI, the command-line interface), run:
$ sudo systemctl isolate multi-user.target
Booting the system with Single User mode
If your computer does
not boot due to an issue, you can boot the system into rescue (single-user) mode for further troubleshooting.
$ sudo systemctl rescue
Booting the system with Emergency mode
Similarly, you can boot the system into emergency mode to repair it. This provides a very minimal environment, which can be used when the system cannot enter rescue mode.
$ sudo systemctl emergency
Power management
systemctl also allows users to halt, shut down, and reboot a system.
To halt a system,
run:
$ sudo systemctl halt
To shutdown a system,
run:
$ sudo systemctl poweroff
To reboot a system,
run:
$ sudo systemctl reboot
Conclusion
In this guide, we have shown you how to use the systemctl command in Linux, with several examples of managing and controlling systemd units.
If you have any
questions or feedback, feel free to comment below.
Installing a recent Linux version seems to come with a default setting of flooding /var/log/messages with annoyingly repetitive messages like:
systemd: Created slice user-0.slice.
systemd: Starting Session 1013 of user root.
systemd: Started Session 1013 of user root.
systemd: Created slice user-0.slice.
systemd: Starting Session 1014 of user root.
systemd: Started Session 1014 of user root.
Here is how I got rid of these:
vi /etc/systemd/system.conf
Then uncomment the LogLevel line and set it to LogLevel=notice (the manager re-reads this file on 'systemctl daemon-reexec' or at reboot):
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See systemd-system.conf(5) for details.

[Manager]
LogLevel=notice
#LogTarget=journal-or-kmsg
I've found a disturbing
trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way
it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the
software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes
Ubuntu Unity, Gnome Shell, KDE 4, the /usr partition, SELinux, PolicyKit, Systemd, udev and PulseAudio, to name a few.
The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain Free Software projects, are a very serious problem, and I have very serious opposition to it. I abandoned Windows to get away from that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.
Clearly this situation is untenable.
The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no
sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed
more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers,
or at least more of the existing developers to adopt the right methods.
This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users,
with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a
large influx of users who were, by and large, purely consumers, not contributors.
As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that
entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated,
consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their
belligerently hostile attitudes toward the GPL, FSF,
GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy
predilection for proprietary software, meaningless aesthetics and hype.
Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated
into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is only too happy to oblige, eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous monolithic blob, and the latter of whom successfully managed to undermine 40 years of UNIX security in a single stroke, by obliterating the principle that unprivileged users should not be allowed to install software system-wide.
GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true
when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations,
misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.
Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts
of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently
set about stuffing GNU/Linux with as much bloated, patent
encumbered garbage as they could muster.
I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight,
and we were largely successful. Some have even suggested that my diatribes, articles and debates (with Miguel de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.
Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes
sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for
something that's free (in the commercial sense).
Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support
contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform,
for example).
The Devuan GNU/Linux project announced today the release and general availability of Devuan
GNU/Linux 3.1 as the first update in the latest Devuan GNU/Linux 3.0 "Beowulf" operating system
series. Devuan GNU/Linux 3.1 comes nine months after the release of the Devuan GNU/Linux 3.0
series to provide the freedom loving community with up-to-date ISO images in case they need to
reinstall the system or deploy the systemd-free distribution on new computers.
While Devuan GNU/Linux 4.0 "Chimaera" is still in the works, Devuan GNU/Linux 3.1 brings
updated desktop-live, server, and minimal-live ISO images powered by the Linux 4.19 LTS kernel
from Debian GNU/Linux 10 "Buster" operating system series.
The biggest change in this release is the availability of the runit init scheme in the installer, alongside the existing OpenRC and SysVinit options. runit had already been available as an alternative /sbin/init since the release of the Devuan GNU/Linux 3.0 "Beowulf" series. In addition, the installer now recommends deb.devuan.org as the default mirror for fetching packages, and lets you use an alternative bootloader to GRUB, such as LILO, along with the ability to exclude non-free firmware, from the Expert install options.
This release also ships with a new package (debian-pulseaudio-config-override) that promises
to address issues with the PulseAudio sound system being off by default, the Mozilla Firefox
78.7 ESR web browser, LightDM 1.26 login manager, and many other updated core components and
apps. You can download Devuan GNU/Linux 3.1 right now from the official website as Desktop
Live, Minimal Live, Server, and Netinstall images for both 32-bit and 64-bit architectures.
Unfortunately, the ARM and virtual images have not been updated in this release. Existing Devuan GNU/Linux 3.0 users don't need to download the new ISOs, but should ensure their installations are kept up to date by running the 'sudo apt update && sudo apt full-upgrade' command in a terminal emulator.
Read more at 9to5Linux.com: Systemd-Free Devuan GNU/Linux 3.1 Distro Released for Freedom
Lovers https://9to5linux.com/?p=6803
... In my opinion, for the security and stability of most systems, initd is superior. Lennart Poettering has a long history
of ignoring blatant security problems, and SystemD itself has a long history of containing system-breaking bugs, and in general
being over-engineered for most use-cases.
There's really nothing here I can say that hasn't been talked about elsewhere. What I will say is that, elsewhere on hacker
news, people seem to be preoccupied with "Language Safety", and Rust, and the adjacent languages that provide that. The reason
for this is to reduce bugs and potential exploits.
You know what else causes less bugs? LESS CODE, AND LESS FEATURES. The less code there is running on a system, the less code
there is to exploit, and the less bugs there are likely to be. PID 1 is a SACRED, HOLY, Process Identifier. Code that runs with
this PID has control over the ENTIRE SYSTEM. An exploit in PID 1 is LITERALLY game over.
> The Server Space is arguably the main place SystemD was designed for, and probably the only place it makes sense to deploy it.
I have the opposite opinion. systemd is irrelevant on a server (might've been relevant once upon a time, but not in this day
and age), and it was developed specifically with end-user devices in mind.
In this day and age of virtual machines and containers, this is not really that useful. Most daemons are effectively PID
1 in their respective containers (if there is a separate init, it ain't really doing much besides launching a finger-countable
number of daemons), and on the host (assuming you even can access the host) it's still stripped down to the bare minimum
(with anything not required for managing VMs delegated to VMs instead of running directly on the hypervisor). As the article mentioned,
the startup speed benefits from launching daemons in parallel are hardly useful compared to the long POST times and the limited
number of daemons.
In contrast, desktops (niche systems like Qubes notwithstanding) do frequently run a lot of daemons on a single logical
system. Fast startup does matter for desktops. systemd is actually useful here.
systemd provides a UEFI boot loader, systemd-boot (previously gummiboot)
Not really sure this is useful anywhere (ain't like there aren't plenty of existing bootloaders).
systemd provides a login manager, systemd-logind
Given - again - the growing trend toward virtualizing/containerizing everything, this is decreasingly valuable on a server except
maybe for SSH logins (and even then). This is more useful for desktops, where login and session management matters a lot more.
systemd provides a syslog daemon, systemd-journald
Most servers shouldn't be using the vast majority of journald's functionality; they should instead be sending logs to a remote
log server (if your servers aren't already doing so, then I suggest getting off Hacker News and fixing that ASAP :) ).
Desktops, on the other hand, typically stick to local logging and thus do potentially benefit from an improved local logging
system (whether journald counts as "improved" is a matter of taste).
systemd provides a mount front-end, systemd-mount
[ ... ]
systemd provides automount via systemd.automount to substitute autofs
This one happens to be useful for both servers and desktops, but in my experience frequent live unmounting/remounting of disks
is way more common for desktops than servers (with the sole exception of servers hooked up to, say, an optical media carousel
or something like that, and even that's a pretty niche application). RAID wouldn't even be a factor here, since swapping out a
disk in an array doesn't involve mounting or unmounting the filesystem(s) on top of that array.
The udev sources were merged into the systemd source tree.
Benefits servers and desktops more-or-less equally.
systemd provides systemd.timer timer units, which can be used to replace cron and at
One of the few cases of a systemd feature that's more useful on servers than desktops (though in both cases I've yet to see this
actually used instead of cron/at).
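For context, replacing a cron job with systemd takes a pair of units: a .service that does the work and a .timer that schedules it. A minimal, hypothetical sketch of the timer half (the name and schedule are invented) looks like:

```ini
# backup.timer -- hypothetical example; triggers a matching backup.service
[Unit]
Description=Daily backup timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

It would be enabled with 'systemctl enable --now backup.timer', and 'systemctl list-timers' shows pending runs.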
systemd provides a D-Bus client library, sd-bus (see sd-bus)
systemd developed an in-kernel D-Bus implementation, kdbus.
D-Bus literally stands for "Desktop Bus". These are explicitly desktop-oriented features.
systemd provides a caching DNS resolver, systemd-resolved
This is entirely useless on a server; either your server is a DNS server (in which case it's using an actual DNS-serving
daemon meant for handling external traffic, i.e. not systemd-resolved) or it's not a DNS server (in which case -
unless you're an extreme masochist and consider DNS-cache-related issues to be fun - you're probably going to be pointing
/etc/resolv.conf at your network's actual DNS servers and calling it a day).
This is way more useful for laptops and mobile devices, which switch networks frequently (and sometimes switch onto e.g. public
hotspots or other networks with buggy or overworked or outright adversarial DNS servers of their own).
systemd provides a network manager and DHCP client, systemd-networkd
systemd provides a HTTP server for journal events, systemd-journal-gatewayd
systemd provides a containerization system systemd-nspawn
Not sure if I've ever seen these three actually used anywhere; they seem to be equally useless on servers and desktops.
----
Final verdict given the above: systemd was demonstrably developed specifically with desktops in mind (and even more specifically
with GNOME in mind, which should be unsurprising given how much Red Hat loves GNOME). The vast majority of its components make
far more sense on end-user devices than servers (and the ones that don't are either equally useful or equally useless).
I've been using Linux since 1993, FreeBSD since 1995. systemd has a weird, "non-unixy" feel to it. It does too much. Yeah, the
fast bootup times are nice.
Every machine I've put SystemD on takes an extra 20 seconds to boot. This includes thinkpads, and other machines. I've tried a
mix of distributions, all of the ones with SystemD take longer. I've even tried altering things to take less time. So I don't see
how 'faster boot' can be considered part of systemd's bag of tricks.
All Systems Go
Systemd inventor Lennart Poettering told the crowds at the All Systems Go Linux user-space event in Berlin he intends to reinvent home directories to fix issues with the current model that are otherwise insoluble.
Specifically, he wants Systemd, or rather systemd-homed, to manage and organize home
directories.
In Linux systems, each user typically has a directory under /home for personal documents and
data. Users are identified by a username and user ID (UID) number which by default is in a text
database called /etc/passwd.
Speaking at the event in Germany earlier this month, Poettering identified several problems
with this long-standing approach. Philosophically, he said, it mixes state and configuration,
because in his view the user record is state rather than configuration, and therefore does not
belong in /etc.
The /etc/passwd database is not extensible, and therefore Linux has evolved numerous
secondary databases that are stored elsewhere, such as /etc/shadow, a privileged location used
for encrypted password hashes and other password-related fields, such as the maximum time
before a password expires.
He is also much concerned with a security issue, which is that even when full-disk
encryption is in use, when the system is suspended the decryption key is held in memory, so
that if a laptop is stolen while suspended it would be possible to access the data. A
password-protected lock screen is insufficient for strong security.
Poettering's idea is to have self-contained home folders, where the system assigns a UID automatically if it detects that the folder exists. All the information about the user is in that directory, password hash included, stored as extensible JSON user records.
Does that mean that you can log into any Linux system armed with a home folder on a USB
stick? No, said Poettering, answering a question after his talk. A privileged process on that
machine would have to sign the security-sensitive part of a user's data before it would be
recognised. This would prevent users adding themselves to groups, for example, by editing their
own data.
LUK'd out
The Systemd inventor is a fan of LUKS encryption, which can be used to encrypt a file, partition, or entire hard drive. He also intends to unify the user password and the encryption key, on the presumption that most users encrypt their laptop disks. This means the decryption key can be removed from memory when the system is suspended: on resume, the same password both logs the user in and decrypts the home folder, so the key is simply re-derived from it.
All of this will be enabled by a new daemon called systemd-homed, to be a component of
Systemd. The new component will also support other forms of authentication such as Yubikeys and
other security devices that support FIDO2 and U2F (Universal Second Factor) authentication.
There are some complications, one of which is remote access via SSH.
"If you authenticate via SSH it goes via authorized keys in the home directory. So if you
want to authenticate something that is inside of the home directory, so that it can access the
home directory, where does the decryption key come from, to access the home directory? It is a
chicken-and-egg problem," said Poettering.
His solution is that the user must already be logged in, for SSH to work. A person at the
session asked what should be done by a university student, for example, who wanted to log in to
a Linux machine that was rebooted overnight from 200 miles away. The answer: "If you really
want that this system can come up on its own, don't use this stuff. This is about
security."
However, it may not be such a problem in practice, since the focus of this solution is end
users with laptops rather than servers, and remote login to a laptop is not common.
Poettering envisages that by having your home folder in a LUKS-encrypted container, then
that file is all you need either for backup or to switch to another laptop. "The user record
and the home directory all become one file. You can just take that file from one laptop to
another laptop. It just pops up and it's there."
It is a radical change, and there will be compatibility issues, as well as opposition to
changing a part of the system that has worked well enough for years, but for Poettering it is
worth it if only for security. "I want my own laptop finally secure so I can suspend. I want
these problems to be solved, finally, because we never could solve them," he said.
Will Red Hat be able to push this systemd-homed idea down users' throats? Inquiring minds want to know ;-)
Notable quotes:
"... When systemd was released in 2010, there was a storm of vitriol surrounding the change in how services were to be started in Linux. The new mechanism was touted as being bloated and far too complicated to be useful. Since then, all enterprise Linux distributions have adopted systemd and the majority of desktop distributions have as well. ..."
"... Unlike the current process of creating users (either with the useradd or adduser commands), systemd-homed will rely on it's own process of creating users. ..."
"... The big problem with that is the .ssh directory (where SSH stores known_hosts and authorized_keys) would be inaccessible while the user's home directory is encrypted. Of course Poettering knows of this shortcoming. To date, all of the work done with systemd-homed has been with the standard authentication process. You can be sure that Poettering will come up with a solution that takes SSH into consideration. ..."
"... Should Poettering not be able to develop a solution for the SSH conundrum, systemd-homed will have to be relegated to desktops and laptop distributions, leaving servers out of the mix. I cannot imagine that will fly with the systemd team. ..."
"... systemd 245 should be released sometime this year (2020). When that happens, prepare to change the way you manage users and their home directories. ..."
With systemd 245 comes systemd-homed. Along with that, Linux admins will have to change the way they manage users and
users' home directories.
When systemd was released in 2010, there was a storm of vitriol surrounding the change in how services were to be started
in Linux. The new mechanism was touted as being bloated and far too complicated to be useful. Since then, all enterprise Linux distributions
have adopted systemd and the majority of desktop distributions have as well.
For those who aren't familiar with systemd, it is what initializes the system on the Linux platform. Anyone who manages Linux within a data center should be intimately familiar with it. By providing all of the necessary controls and daemons for device management, user login, network connections, and event logging, systemd makes for easy resource initialization and management -- all from a single point of entry (systemctl).
Prior to systemd every system and resource was managed by its own tool, which was clumsy and inefficient. Now? Controlling and
managing systems on Linux is incredibly easy.
But one of the creators, Lennart Poettering, has always considered systemd to be incomplete. With the upcoming release of systemd 245, Poettering will take his system one step closer to completion. That step is by way of homed.
What is homed?
Before we dig into homed, let's take a look at the /home directory. This is a crucial directory in the Linux filesystem hierarchy,
as it contains all user data and configurations. For some admins, this directory is so important, it is often placed on a separate
partition or drive than the operating system. By doing this, user data is safe, even if the operating system were to implode.
However, the way /home is handled within the operating system makes migrating the /home directory not nearly as easy as it should
be. Why? With the current iteration of systemd, user information (such as ID, full name, home directory, and shell) is stored in
/etc/passwd and the password associated with that user is stored in /etc/shadow. The /etc/passwd file can be viewed by anyone, whereas
/etc/shadow can only be viewed by those with admin or sudo privileges.
How the /etc/passwd and /etc/shadow files work is simple:
During the login process, the system authenticates the login attempt against /etc/shadow.
If login is successful, the system reads the /etc/passwd entry for the user to locate the user's home directory.
So, for the simple act of logging in, three mechanisms are required (systemd, /etc/shadow, /etc/passwd). This is inefficient,
and Poettering has decided to make a drastic change. That change is homed. With homed, all information will be placed in a cryptographically
signed JSON record for each user. That record will contain all user information such as username, group membership, and password
hashes.
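systemd publishes a JSON User Records specification for this; a heavily simplified, hypothetical record (field names follow that spec, all values invented) might look like:

```json
{
    "userName": "jdoe",
    "realName": "Jane Doe",
    "homeDirectory": "/home/jdoe",
    "storage": "luks",
    "memberOf": ["wheel"],
    "privileged": {
        "hashedPassword": ["$6$examplehash..."]
    }
}
```

The "privileged" section corresponds to the data traditionally kept in /etc/shadow, readable only by privileged code.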
Each user home directory will be linked as LUKS-encrypted containers, with the encryption directly coupled to user login. Once
systemd-homed detects a user has logged in, the associated home directory is decrypted. Once that user logs out, the home directory
is automatically encrypted.
A truly portable home directory
Outside of much-improved security, systemd-homed will finally enable a truly portable home directory. Because the
/home directory will no longer depend on the trifecta of systemd, /etc/passwd, and /etc/shadow, users and admins will then be able
to easily migrate directories within /home. Imagine being able to move your /home/USER (where USER is your username) directory to
a portable flash drive and use it on any system that works with systemd-homed. You could easily transport your /home/USER directory
between home and work, or between systems within your company.
This will apply not just to user files, but also to personal settings, preferences, and even authentication information.
This works by making use of the JSON user record to confirm user identity. How specifically this works will depend upon the mechanism
used for storing/accessing user home directories. Such mechanisms include:
BTRFS
fscrypt
CIFS
LUKS
According to Poettering, LUKS is the most advanced and secure of the systems. With LUKS, an encrypted volume is stored on either
a removable device or within a loopback file. The LUKS volume contains a single directory, named after the user, which becomes the
user's home directory and contains a copy of the user record stored within the LUKS header.
When the user successfully authenticates, systemd-homed unlocks the LUKS volume and compares the record stored within the LUKS header to the user's stored identity record. If the records and encryption keys match, the directory is mounted and accessible to the user.
How to create systemd-homed users
Unlike the current process of creating users (either with the useradd or adduser commands), systemd-homed will rely on its own process of creating users. Fortunately, that process won't be terribly complicated.
In fact, like many subsystems associated with systemd, systemd-homed will have its own control command:
homectl
So to create a new user, the command will look something like:
$ homectl create USERNAME --real-name="REAL NAME"
Where USERNAME is the username and REAL NAME is the user's first and last name.
With the homectl command, you can also:
Activate one or more home directories
Change the passwords on a specific home directory/user account
Resize the amount of disk space assigned to a home directory
Temporarily lock a home directory
List all home directories
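Assuming the homectl verbs documented by systemd upstream, those tasks would map onto commands roughly like this ('jdoe' is a placeholder username; these require a system running systemd-homed, so treat them as a sketch rather than a tested recipe):

```shell
homectl list              # list all home directories known to systemd-homed
homectl activate jdoe     # activate (unlock and mount) a home directory
homectl passwd jdoe       # change the password for a user/home directory
homectl resize jdoe 50G   # resize the disk space assigned to the home
homectl lock jdoe         # temporarily lock a home directory
homectl unlock jdoe       # ...and unlock it again
```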
The caveat
Of course, such a major change doesn't come without its share of caveats. In the case of systemd-homed, that caveat comes by way
of SSH. If a systemd-homed home directory is encrypted until a user successfully logs in, how will users be able to log in to a remote
machine with SSH?
The big problem with that is the .ssh directory (where SSH stores known_hosts and authorized_keys) would be inaccessible while
the user's home directory is encrypted. Of course Poettering knows of this shortcoming. To date, all of the work done with systemd-homed
has been with the standard authentication process. You can be sure that Poettering will come up with a solution that takes SSH into
consideration.
Should Poettering not be able to develop a solution for the SSH conundrum, systemd-homed will have to be relegated to desktops
and laptop distributions, leaving servers out of the mix. I cannot imagine that will fly with the systemd team.
When will systemd 245 be released?
At the moment, systemd 245 is still in RC2 status. You can download the source from the
systemd GitHub page , but know the installation
is probably far more involved than you'd want to bother with. The good news, however, is that systemd 245 should be released sometime
this year (2020). When that happens, prepare to change the way you manage users and their home directories.
In systemd, services are defined in unit files with their daemons and behavior directives. The /etc/systemd/system/ directory is reserved for unit files that you create or customize. To create a service, name its unit file in the form <unit_name>.service.
This unit file starts the script indicated by the ExecStart option, running as the user set with the User option. If the script fails or stops, systemd attempts to restart it as indicated by the Restart option. The StandardOutput and StandardError options ensure that the script's standard and error output are written to the systemd journal.
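The unit file itself appeared only as an image in the original article; a minimal sketch matching that description (script path, user name, and unit name are all invented) could be:

```ini
# /etc/systemd/system/myscript.service -- hypothetical example
[Unit]
Description=Run a custom script as a service

[Service]
User=appuser
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

Saved under /etc/systemd/system/, it would be picked up after a 'systemctl daemon-reload'.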
In my most recent experience, as a real-life, day-to-day example, I had a server with a small web service running inside a container (yes, I know, but you know the customers). To optimize and automate the service, I created a systemd unit file for a Podman container that allows users to control the lifecycle of the container through systemctl.
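This container unit file was likewise shown only as an image; a plausible sketch (container name and paths assumed, not taken from the article), similar in spirit to what 'podman generate systemd --name myhttpservice' emits, is:

```ini
# /etc/systemd/system/myhttpservice.service -- hypothetical sketch
[Unit]
Description=Podman container: myhttpservice
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start -a myhttpservice
ExecStop=/usr/bin/podman stop -t 10 myhttpservice

[Install]
WantedBy=multi-user.target
```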
After copying the unit file to /etc/systemd/system/myhttpservice.service, reload the systemd manager configuration with the command 'systemctl daemon-reload'. Then, you can handle the container as a systemd-managed service:
# systemctl start myhttpservice.service ← to start the container
# systemctl status myhttpservice.service ← to check the container service status
# systemctl stop myhttpservice.service ← to stop the container
The container's functionality is not affected when being managed by systemd .
You can even use Podman commands to monitor the health of the container:
[root@server ~]# podman healthcheck run myhttpservice
healthy
So don't worry: systemd can help you, just trust it.
Alex Callejas is a Senior Technical Support Engineer at Red Hat, based in Mexico City, and an Enable Sysadmin contributor. With more than 10 years of experience as a sysadmin, he has strong expertise in infrastructure hardening and automation.
chkservice, a terminal user interface (TUI) for managing systemd units, has been updated recently with window resize and search
support.
chkservice is a simplistic
systemd unit manager that uses ncurses for its terminal interface.
Using it you can enable or disable, and start or stop, a systemd unit. It also shows each unit's status (enabled, disabled, static, or
masked).
You can navigate the chkservice user interface using keyboard shortcuts:
Up or k to move cursor up
Down or j to move cursor down
PgUp or b to move page up
PgDown or f to move page down
To enable or disable a unit press Space , and to start or stop a unit press s . You can access the help
screen, which shows all available keys, by pressing ? .
The command line tool had its first release in August 2017, with no new releases until a few days ago when version 0.2 was released,
quickly followed by 0.3.
With the latest 0.3 release, chkservice adds a search feature that allows easily searching through all systemd units.
To
search, type / followed by your search query, and press Enter . To search for the next item matching your
search query you'll have to type / again, followed by Enter or Ctrl + m (without entering
any search text).
Another addition in the latest chkservice is window-resize support. In version 0.1, the tool would close when the user tried
to resize the terminal window. That's no longer the case: chkservice now handles resizing of the terminal window it runs in.
And finally, the last addition in chkservice 0.3 is G/g navigation support. Press G
( Shift + g ) to navigate to the bottom, and g to navigate to the top.
Download and install chkservice
The initial (0.1) chkservice version can be found
in the official repositories of a few Linux distributions, including Debian and Ubuntu (and Debian- or Ubuntu-based Linux distributions
-- e.g., Linux Mint, Pop!_OS, elementary OS, and so on).
There are some third-party repositories available as well, including a Fedora Copr, Ubuntu / Linux Mint PPA, and Arch Linux AUR,
but at the time I'm writing this, only the AUR package
was updated to the latest chkservice version 0.3.
You may also install chkservice from source. Use the instructions provided in the tool's
readme to either create a DEB package or install
it directly.
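A typical from-source build looks something like the following sketch; the repository URL and the CMake-based steps are my recollection of the project's readme, so verify them there before building:

```shell
# Build dependencies (Debian/Ubuntu names): cmake, g++, libncurses-dev, libsystemd-dev
git clone https://github.com/linuxenko/chkservice.git
cd chkservice
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
sudo make install
```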
Chronyd is a better choice for most networks than ntpd for keeping computers synchronized
with the Network Time Protocol.
"Does anybody really know what time it is? Does anybody really care?"
– Chicago ,
1969
Perhaps that rock group didn't care what time it was, but our computers do need to know the
exact time. Timekeeping is very important to computer networks. In banking, stock markets, and
other financial businesses, transactions must be maintained in the proper order, and exact time
sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow
the trail of email through a series of servers or to determine the exact sequence of events
using log files on geographically dispersed hosts when exact times are kept on the computers in
question.
I used to work at an organization that received over 20 million emails per day and had four
servers just to accept and do a basic filter on the incoming flood of email. From there, emails
were sent to one of four other servers to perform more complex anti-spam assessments, then they
were delivered to one of several additional servers where the emails were placed in the correct
inboxes. At each layer, the emails would be sent to one of the next-level servers, selected
only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the
system until we could determine where it "got lost," according to the pointy-haired bosses. We
had to do this with frightening regularity.
Most of that email turned out to be spam. Some people actually complained that their [joke,
cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked
us to find it. We did reject those opportunities.
Our email and other transactional searches were aided by log entries with timestamps that --
today -- can resolve down to the nanosecond in even the slowest of modern Linux computers. In
very high-volume transaction environments, even a few microseconds of difference in the system
clocks can mean sorting thousands of transactions to find the correct one(s).
The NTP
server hierarchy
Computers worldwide use the Network Time Protocol (NTP) to
synchronize their times with internet standard reference clocks via a hierarchy of NTP servers.
The primary servers are at stratum 1, and they are connected directly to various national time
services at stratum 0 via satellite, radio, or even modems over phone lines. The time service
at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an
atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS
satellites.
To prevent time requests from time servers lower in the hierarchy (i.e., with a higher
stratum number) from overwhelming the primary reference servers, there are several thousand
public NTP stratum 2 servers that are open and available for anyone to use. Many organizations
with large numbers of hosts that need an NTP server will set up their own time servers so that
only one local host accesses the stratum 2 time servers, then they configure the remaining
network hosts to use the local time server which, in my case, is a stratum 3 server.
NTP
choices
The original NTP daemon, ntpd , has been joined by a newer one, chronyd . Both keep the
local host's time synchronized with the time server. Both services are available, and I have
seen nothing to indicate that this will change anytime soon.
Chrony has features that make it the better choice for most environments for the following
reasons:
Chrony can synchronize to the time server much faster than NTP. This is good for laptops
or desktops that don't run constantly.
It can compensate for fluctuating clock frequencies, such as when a host hibernates or
enters sleep mode, or when the clock speed varies due to frequency stepping that slows clock
speeds when loads are low.
It handles intermittent network connections and bandwidth saturation.
It adjusts for network delays and latency.
After the initial time sync, Chrony never steps the clock. This ensures stable and
consistent time intervals for system services and applications.
Chrony can work even without a network connection. In this case, the local host or server
can be updated manually.
The NTP and Chrony RPM packages are available from standard Fedora repositories. You can
install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved
from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works
well, provides a better interface for the sysadmin, presents much more information, and
increases control.
Just to make it clear: NTP is a protocol, and it is implemented by either ntpd or chronyd. If
you'd like to know more, read this comparison between NTP and Chrony as
implementations of the NTP protocol.
This article explains how to configure Chrony clients and servers on a Fedora host, but the
configuration for CentOS and RHEL current releases works the same.
Chrony structure
The Chrony daemon, chronyd , runs in the background and monitors the time and status of the
time server specified in the chrony.conf file. If the local time needs to be adjusted, chronyd
does it smoothly without the programmatic trauma that would occur if the clock were instantly
reset to a new time.
Chrony's chronyc tool allows someone to monitor the current status of Chrony and make
changes if necessary. The chronyc utility can be used as a command that accepts subcommands, or
it can be used as an interactive text-mode program. This article will explain both
uses.
Client configuration
The NTP client configuration is simple and requires little or no intervention. The NTP
server can be defined during the Linux installation or provided by the DHCP server at boot
time. The default /etc/chrony.conf file (shown below in its entirety) requires no intervention
to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL
have their own NTP server pools. As in many Red Hat-based distributions, the configuration file
is well commented.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.fedora.pool.ntp.org iburst
# Record the rate at which the system clock gains/loses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys
# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
Let's look at the current status of NTP on a virtual machine I use for testing. The chronyc
command, when used with the tracking subcommand, provides statistics that report how far off
the local system is from the reference server.
[root@studentvm1 ~]# chronyc tracking
Reference ID : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com)
Stratum : 3
Ref time (UTC) : Fri Nov 16 16:21:30 2018
System time : 0.000645622 seconds slow of NTP time
Last offset : -0.000308577 seconds
RMS offset : 0.000786140 seconds
Frequency : 0.147 ppm slow
Residual freq : -0.073 ppm
Skew : 0.062 ppm
Root delay : 0.041452706 seconds
Root dispersion : 0.022665167 seconds
Update interval : 1044.2 seconds
Leap status : Normal
[root@studentvm1 ~]#
The Reference ID in the first line of the result is the server the host is synchronized to
-- in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30
UTC on Nov 16, 2018. The other lines are described in the chronyc(1) man page.
The sources subcommand is also useful because it provides information about the time source
configured in chrony.conf .
The first source in the list is the time server I set up for my personal network. The others
were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration
file above, my DHCP server provides its IP address for the NTP server. The "S" column -- Source
State -- indicates with an asterisk ( * ) the server our host is synced to. This is consistent
with the data from the tracking subcommand.
The -v option provides a nice description of the fields in this output.
[root@studentvm1 ~]# chronyc sources -v
210 Number of sources = 5
If I wanted my server to be the preferred reference time source for this host, I would add
the line below to the /etc/chrony.conf file.
server 192.168.0.51 iburst prefer
I usually place this line just above the first pool server statement near the top of the
file. There is no special reason for this, except I like to keep the server statements
together. It would work just as well at the bottom of the file, and I have done that on several
hosts. This configuration file is not sequence-sensitive.
The prefer option marks this as the preferred reference source. As such, this host will
always be synchronized with this reference source (as long as it is available). We can also use
the fully qualified hostname for a remote reference server or the hostname only (without the
domain name) for a local reference time source as long as the search statement is set in the
/etc/resolv.conf file. I prefer the IP address to ensure that the time source is accessible
even if DNS is not working. In most environments, the server name is probably the better
option, because NTP will continue to work even if the server's IP address changes.
If you don't have a specific reference source you want to synchronize to, it is fine to use
the defaults.
Configuring an NTP server with Chrony
The nice thing about the Chrony configuration file is that this single file configures the
host as both a client and a server. To add a server function to our host -- it will always be a
client, obtaining its time from a reference server -- we just need to make a couple of changes
to the Chrony configuration, then configure the host's firewall to accept NTP requests.
Open the /etc/chrony.conf file in your favorite text editor and uncomment the local stratum
10 line. This enables the Chrony NTP server to continue to act as if it were connected to a
remote reference server if the internet connection fails; this enables the host to continue to
be an NTP server to other hosts on the local network.
Let's restart chronyd and track how the service is working for a few minutes. Before we
enable our host as an NTP server, we want to test a bit.
The results should look like this. The watch command runs the chronyc tracking command every
two seconds so we can watch changes occur over time.
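The two commands described above can be run as follows (watch defaults to a two-second refresh interval, matching the "Every 2.0s" header in the output):

```shell
systemctl restart chronyd
watch chronyc tracking
```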
Every 2.0s: chronyc tracking
studentvm1: Fri Nov 16 20:59:31 2018
Reference ID : C0A80033 (192.168.0.51)
Stratum : 4
Ref time (UTC) : Sat Nov 17 01:58:51 2018
System time : 0.001598277 seconds fast of NTP time
Last offset : +0.001791533 seconds
RMS offset : 0.001791533 seconds
Frequency : 0.546 ppm slow
Residual freq : -0.175 ppm
Skew : 0.168 ppm
Root delay : 0.094823152 seconds
Root dispersion : 0.021242738 seconds
Update interval : 65.0 seconds
Leap status : Normal
Notice that my NTP server, the studentvm1 host, synchronizes to the host at 192.168.0.51,
which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora
pool machines would result in synchronization at stratum 3. Notice also that the amount of
error decreases over time. Eventually, it should stabilize with a tiny variation around a
fairly small range of error. The size of the error depends upon the stratum and other network
factors. After a few minutes, use Ctrl+C to break out of the watch loop.
To turn our host into an NTP server, we need to allow it to listen on the local network.
Uncomment the following line to allow hosts on the local network to access our NTP server.
# Allow NTP client access from local network.
allow 192.168.0.0/16
Note that the server can listen for requests on any local network it's attached to. The IP
address in the "allow" line is just intended for illustrative purposes. Be sure to change the
IP network and subnet mask in that line to match your local network's.
Restart chronyd .
[root@studentvm1 ~]# systemctl restart chronyd
To allow other hosts on your network to access this server, configure the firewall to allow
inbound UDP packets on port 123. Check your firewall's documentation to find out how to do
that.
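On Fedora, CentOS, and RHEL, which use firewalld by default, the following should do it; adjust for your firewall of choice:

```shell
# The predefined "ntp" firewalld service opens UDP port 123
firewall-cmd --permanent --add-service=ntp
firewall-cmd --reload
```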
Testing
Your host is now an NTP server. You can test it with another host or a VM that has access to
the network on which the NTP server is listening. Configure the client to use the new NTP
server as the preferred server in the /etc/chrony.conf file, then monitor that client using the
chronyc tools we used above.
Chronyc as an interactive tool
As I mentioned earlier, chronyc can be used as an interactive command tool. Simply run the
command without a subcommand and you get a chronyc command prompt.
[root@studentvm1 ~]#
chronyc
chrony version 3.4
Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.
chronyc>
You can enter just the subcommands at this prompt. Try using the tracking , ntpdata , and
sources commands. The chronyc command line allows command recall and editing for chronyc
subcommands. You can use the help subcommand to get a list of possible commands and their
syntax.
Conclusion
Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all
on the local network or scattered around the globe. It's easy to configure because, despite the
large number of options available, only a few configurations are required for most
circumstances.
After my client computers have synchronized with the NTP server, I like to set the system
hardware clock from the system (OS) time by using the following command:
/sbin/hwclock --systohc
This command can be added as a cron job or a script in cron.daily to keep the hardware clock
synced with the system time.
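A minimal cron.daily script for this might look like the following (the file name is arbitrary; remember to make it executable):

```shell
#!/bin/bash
# /etc/cron.daily/hwclock-sync (hypothetical name)
# Set the hardware clock from the current system (OS) time
/sbin/hwclock --systohc
```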
Chrony and NTP (the service) use many of the same configuration directives, and much of the
files' contents carries over between them. The man pages for chronyd, chronyc, and chrony.conf contain an amazing amount of
information that can help you get started or learn about esoteric configuration options.
Do you run your own NTP server? Let us know in the comments and be sure to tell us which
implementation you are using, NTP or Chrony.
Most of the backlash against systemd isn't because it's *bad* per se, but because systemd
is in so many ways the opposite of the Unix philosophy.
Windows and Unix have very different approaches. Windows has MS Office and Word, a
multi-gigabyte word processor with literally thousands of functions. Unix has sed, awk, grep,
sort, and cut -- each a few kilobytes at most, each doing one small job. In Unix, complex jobs
are done by piping together small, simple pieces.
Unix manages complexity by building on top of simplicity. Windows manages complexity by
hiding it under a veneer, putting the complex stuff at the base and trying to build
simplicity on top of complexity. Each approach has its own strengths. The first -- building
complex systems by stacking simple layers on top of simplicity -- is very
much the Unix way. Systemd is very much the Windows way: a bunch of complexity
underneath, with a UI thrown on top that is supposed to make it appear simple.
I run 16.04 and systemd now kills tmux when the user disconnects (
summary of
the change ).
Is there a way to run tmux or screen (or any similar program)
with systemd 230? I read all the heated discussion about the pros and cons of the
behaviour, but no solution was suggested.
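For reference, the documented way to restore the old behaviour globally is the KillUserProcesses setting in logind.conf (a per-user alternative is loginctl enable-linger <user>):

```ini
# /etc/systemd/logind.conf
[Login]
KillUserProcesses=no
```

After editing the file, restart systemd-logind for the change to take effect.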
Based on @Rinzwind's answer and inspired by a unit
description, the best solution I could find is to use TaaS (Tmux as a Service) -- a generic detached
instance of tmux one reattaches to.
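The idea can be sketched as a user-level unit (the unit and session names below are hypothetical):

```ini
# ~/.config/systemd/user/tmux.service
[Unit]
Description=Detached tmux session to reattach to

[Service]
Type=forking
ExecStart=/usr/bin/tmux new-session -d -s main
ExecStop=/usr/bin/tmux kill-session -t main

[Install]
WantedBy=default.target
```

Start it with systemctl --user start tmux, then reattach from any login with tmux attach -t main.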
You need to set the Type of the service to forking , as explained
here .
Let's assume the service you want to run in screen is called
minecraft . Then you would open minecraft.service in a text editor
and add or edit the entry Type=forking under the section [Service]
.
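For a screen-based service, the relevant fragment might look like this (paths are illustrative; screen -dmS starts a detached, i.e. forked, session, which is what Type=forking expects):

```ini
[Service]
Type=forking
ExecStart=/usr/bin/screen -dmS minecraft /opt/minecraft/start.sh
```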
But if you think about the new behaviour, it makes perfect sense. With logind, Linux now
has a broker that is directly responsible for managing a user's session. It's good practice
on many levels, especially security and systems management, to tear down and clean up when
you know a user has logged out, and with logind, you know.
...why? Why should a user have to be logged in on a machine to have processes running as that
user? Where did this rule come from? Since ancient times, users have relied on the opposite.
I see absolutely no reason why a user should have to be logged in to have processes running. It's
obviously nice that the option exists for system administrators who have a need to disallow
this, but I absolutely do not see why it should ever be the default,
or be considered sane outside of specific niche situations. It should absolutely not be the
default, and most certainly not a default that breaks old behaviour. This is very
unusual and niche behaviour.
This OS from ancient times was built upon the idea that users could run processes without
being logged in. Your crontab also works without being logged in so you can even spawn
processes ex nihilo without being logged in.
Of course, yes, this does impact nifty tools like tmux and screen, which abused the old
behaviour to their benefit. As an experienced sysadmin, I can also share endless horror
stories of rogue tmux/screen sessions eating countless resources on a system and not
being trivial to detect, diagnose, or resolve.
There is nothing "abused" here, it wasn't a hack, it was intended design.
Also, the ability to clean up processes of users who are not logged in is nothing new, it
existed for a long time and is hardly a special innovation, it's just highly unusual that
they choose to make this the new default behaviour.
Hell, as a system admin you can write a super simple cron job that even gives users time:
check when they last logged out, if they're not logged in, and say they are allowed one hour
after logout, or something like that. You can limit their resources when they are logged out
as well; this is nothing new.
What we really need is some kind of model to treat those long running processes
different from a typical user process. I suppose we could call them 'services'. But we need
them run by the user, not by the system/as root, so I guess we'd call them 'user services'.
And we'd need some kind of daemon to keep those services running and manage them and give
the system administrator a single point of management for what's going on. we could call
that 'systemd' I suppose.
Oh, and yeah, systemd has support for all of that already..
Ehh, systemd/user kills all processes when the user has no remaining sessions. Processes
run with systemd/user already worked like this.
Service managers that can run processes as user are also nothing new. I use one, but
that's not the issue here.
A minor inconvenience for sure, but a fair trade for ensuring that nothing unintended is
left running when a user logs out IMHO
Except it is not a trade. People act like killing processes after a user has logged out is
some brand-new technology; it isn't. In fact, it has been available in systemd for a long time, as
well as in many other mechanisms. The contention is that they changed the default behaviour
suddenly, which means that people who don't check the mailing lists are going to be
bitten.
Imagine setting a long-running calculation job you need for your Ph.D. tomorrow to run
overnight, confident it will be finished by morning. You casually upgraded systemd that
noon, not thinking much of it and not knowing about this change. You log out, go to bed, and wake
up to see the process was killed immediately after you logged out. You panic, check the shell
history wondering why you were so stupid, but you see a nohup and get horribly confused:
why on earth was the process killed?
Those kinds of things are going to happen when it hits. Breaking this kind of behaviour is
going to bite people who don't watch the mailing list for every single piece of software they
use.
So what do Linux's leaders think of all this? I asked them and this is what they told
me.
Linus Torvalds said:
"I don't actually have any particularly strong opinions on systemd itself. I've had issues
with some of the core developers that I think are much too cavalier about bugs and
compatibility, and I think some of the design details are insane (I dislike the binary
logs, for example) , but those are details, not big issues."
Theodore "Ted" Ts'o, a leading Linux kernel developer and a Google engineer, sees systemd as
potentially being more of a problem. "The bottom line is that they are trying to solve some
real problems that matter in some use cases. And, [that] sometimes that will break assumptions
made in other parts of the system."
Another concern Ts'o raised -- which I've heard from many other developers -- is that the
systemd move was made too quickly: "The problem is sometimes what they break are in other parts
of the software stack, and so long as it works for GNOME, they don't necessarily consider it
their responsibility to fix the rest of the Linux ecosystem."
This, as Ts'o sees it, feeds into another problem:
" Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small
subset of the Linux desktop users, and they have historically abandoned some ways of
interacting the Desktop in the interest of supporting touchscreen devices and to try to
attract less technically sophisticated users.
If you don't fall in the demographic of what GNOME supports, you're sadly out of luck.
(Or you become a second class citizen, being told that you have to rely on GNOME extensions
that may break on every single new version of GNOME.) "
" As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE,
etc. But as systemd starts subsuming new functions, components like network-manager will only
work on systemd or other components that are forced to be used due to a network of
interlocking dependencies; and it may simply not be possible for these alternate desktops to
continue to function, because there is [no] viable alternative to systemd supported by more
and more distributions. "
Of course, Ts'o continued, "None of these nightmare scenarios have happened yet. The people
who are most stridently objecting to systemd are people who are convinced that the nightmare
scenario is inevitable so long as we continue on the same course and altitude."
Ts'o is "not entirely certain it's going to happen," but he's afraid it will.
What I find puzzling about all this is that even though everyone admits that sysvinit needed
replacing and many people dislike systemd, the distributions keep adopting it. Only a few
distributions, including Slackware ,
Gentoo , PCLinuxOS , and Chrome OS , haven't adopted it.
It's not like there aren't alternatives. These include Upstart , runit , and OpenRC .
If systemd really does turn out to be as bad as some developers fear, there are plenty of
replacements waiting in the wings. Indeed, rather than hear so much about how awful systemd is,
I'd rather see developers spending their time working on an alternative.
Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS
violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum
compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster.
That alone is a good reason to stay away from it.
Notable quotes:
"... I think we should call systemd the Master Control Program since it seems to like making other programs functions its own. ..."
"... RHEL7 is a fine OS, the only thing it's missing is a really good init system. ..."
Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the
border from Mexico who will then corner the market in kimchi and implement Sharia law!!!
We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to
our own. Your functions will adapt to service us. Resistance is futile.
Let's say every car manufacturer recently discovered a new technology named "doord",
which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead
of 1.2 seconds on average. So every time you open a door, you are much, much faster!
Many of the manufacturers decide to implement doord, because the company providing doord
makes it clear that it is beneficial for everyone. And additional to opening doors faster, it
also standardises things. How to turn on your car? It is the same now everywhere; it is no longer
necessary to look for the keyhole.
Unfortunately though, sometimes doord does not stop the engine. Or if it is cold
outside, it stops the ignition process, because it takes too long. Doord also changes the way
how your navigation system works, because that is totally related to opening doors, but leads
to some users being unable to navigate, which is accepted as collateral damage. In the end, you
at least have faster door opening and a standard way to turn on the car. Oh, and if you are in
a traffic jam and have to restart the engine often, it will stop restarting it after several
times, because that's not what you are supposed to do. You can open the engine hood and tune
that setting though, but it will be reset once you buy a new car.
I don't agree. Systemd is the most visible part of a clear trend within Red Hat,
consisting of an attempt to make their particular version of Linux THE canonical Linux, to
the point that, if you are not using Red Hat, or some derived distribution, things will not
work. In essence, Red Hat is attempting to out-MS MS by polluting and warping Linux
needlessly but surely. The latest: they have come up with the 'timedatectl' command, which
does exactly the same as 'date'. The latter is to be deprecated. Red Hat, the MS wannabee.
They will not pull it off, but they are inflicting a lot of damage on Linux in the
process.