So, here's a project I've been working on for the past week or so: benchmarking Artix (all of its available init systems) against other distributions.
I tested each system in a separate virtual machine and recorded the results. The full data set is available on a website (https://maxlpm.codeberg.page/) and as a JSON file; since this forum's software supports tables, here's a truncated one:
Value | Arch | Artix OpenRC | Artix runit | Artix s6 | Artix dinit | Debian |
---|---|---|---|---|---|---|
First boot | 5.2 seconds | 9.6 seconds | 6.6 seconds | 4.7 seconds | 5 seconds | 8 seconds |
Second boot | 5.2 seconds | 8.1 seconds | 6.5 seconds | 4.7 seconds | 5 seconds | 6.8 seconds |
Optimized boot | - | 5.7 seconds | 6 seconds | - | - | - |
Shutdown | 2.3 seconds | 6.3 seconds | 4.6 seconds | 5.9 seconds | 2.8 seconds | 2.8 seconds |
RAM usage | 270 MiB | 250 MiB | 257 MiB | 258 MiB | 251 MiB | 280 MiB |
CPU usage | 0.2% | 0.2% | 0.2% | 0.8% | 0.2% | 0.2% |
Packages installed | 129 | 143 | 141 | 145 | 140 | 220 |
First boot + services | 10.5 seconds | 13.6 seconds | 6.4 seconds | 6.3 seconds | 5.4 seconds | 9.2 seconds |
Second boot + services | 7.2 seconds | 13.3 seconds | 5.9 seconds | 6.5 seconds | 5.4 seconds | 9.5 seconds |
Optimized boot + services | - | 9.1 seconds | 5.8 seconds | - | - | - |
Shutdown + services | 2.3 seconds | 8.6 seconds | 5.3 seconds | 4.7 seconds | 2.6 seconds | 3 seconds |
For more values & notes, please see the website. Also, all of the sources can be found on Codeberg (https://codeberg.org/MaxLPM/pages).
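For anyone wanting to reproduce the timings: I'm not claiming this is the exact method I used (the website documents that), but a simple init-agnostic approximation is to read /proc/uptime the moment the login prompt appears, e.g. from the last service to start or from a login script:

```shell
#!/bin/bash
# Print seconds since the kernel started, per /proc/uptime.
# Run this as the last started service (or from a getty/login script)
# to approximate time-to-login on any init system.
read -r uptime _ < /proc/uptime
echo "Booted in ${uptime}s"
```

It won't capture firmware or bootloader time, but it's comparable across inits.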
So, what's the moral of the story? Personally, I think an extra 10 MiB of RAM or a difference of a few seconds in boot/shutdown times doesn't really matter. What matters is whether you get hit by "a stop job is running" (https://duckduckgo.com/?t=h_&q=a+stop+job+is+running+systemd&ia=web) or by bugs (https://github.com/systemd/systemd/issues). Which means that rather than choosing your init system by comparing performance, you should choose it by analyzing its:
- Core design
- Security
- Community support
- Stability
and so on. Unfortunately, it's not really possible to benchmark those objectively, so that task falls to the user.
PS: I plan on doing more measurements (e.g. testing with a desktop environment) and maybe repeating all of this in the future. So, any contributions/suggestions/corrections are welcome! ;)
Interesting.
It's nice to see the differences between the Artix inits.
I've read on here that dinit is fast, and this confirms it, but I agree that "a difference of a few seconds in boot/shutdown times" doesn't really matter. It's not enough for me to consider switching from OpenRC, at least not just for better up/down times.
I can't really see how the init system could make much, or any, difference to 'real world' benchmarks once the system is fully up, assuming all the non-init parts of the OS remain the same across the tested versions?
But regardless I look forward to seeing the results of your further testing.
Thanks for the effort.
Mostly I want to see how quickly the login manager appears, and to try to reproduce the "a stop job is running" issue (if anyone knows which services usually cause this, please let me know).
By the way, here are some more ideas:
- Testing ISO boot times
- Replicating a server setup (mostly installing services like nginx)
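On the server idea: one practical wrinkle is that each init enables services differently. Here's a tiny helper sketching the commands (based on my reading of the Artix wiki, so double-check them there; the function name is made up):

```shell
#!/bin/bash
# enable_cmd INIT SERVICE - print the Artix-style command that enables a
# service under the given init. Commands per the Artix wiki; verify before use.
enable_cmd() {
    local init="$1" svc="$2"
    case "$init" in
        openrc) echo "rc-update add $svc default" ;;
        runit)  echo "ln -s /etc/runit/sv/$svc /run/runit/service" ;;
        s6)     echo "s6-service add default $svc && s6-db-reload" ;;
        dinit)  echo "dinitctl enable $svc" ;;
        *)      echo "unknown init: $init" >&2; return 1 ;;
    esac
}

enable_cmd openrc nginx   # -> rc-update add nginx default
```

Useful if you script the server setup once and run it against all four VMs.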
#!/bin/bash
# Parody: the "stop job" counter never ends, because $SYSTEMD never becomes "sane".
SYSTEMD="insane"
counter=1
while [ "$SYSTEMD" != "sane" ]; do
    echo -ne "A stop job is running for a vague reason (${counter}s / 99999999999999s)\r"
    counter=$((counter + 1))
    sleep 1
done
That's pretty much exactly how they do it ;)
Great job, thanks. Perhaps this little script (https://gitea.artixlinux.org/nous/scripts/src/branch/master/artix-install.sh) will save you some time creating automated installations; I use it to test our base ISOs' functionality.
Slow and steady can be safer: it allows hardware to power up and services to start in an orderly manner, and at shutdown it allows jobs to complete and data to be saved. Whether that's important depends on what hardware and software is in use.
If you want to create a hang at shutdown, you need to create a service that keeps running and blocks SIGTERM (and perhaps other fatal signals), so it won't quit until SIGKILL is sent. You might be able to deduce the delay until that happens just by looking at the init code, though.
You can do this in a simple Bash script, and in most other languages, e.g.:
https://stackoverflow.com/questions/24848843/how-do-i-stop-a-signal-from-killing-my-bash-script
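A minimal sketch of such a stubborn service (the file name and /tmp path are my own choices; to actually provoke the hang, register it as a long-running service under the init you're testing):

```shell
#!/bin/bash
# Create a "stubborn" test script that ignores SIGTERM/SIGINT,
# so only SIGKILL (which inits send after their timeout) can stop it.
cat > /tmp/stubborn.sh <<'EOF'
#!/bin/bash
trap '' TERM INT        # ignore the polite termination signals
while true; do
    sleep 1
done
EOF
chmod +x /tmp/stubborn.sh

# Quick demonstration: SIGTERM is ignored; SIGKILL works.
/tmp/stubborn.sh & pid=$!
sleep 1
kill -TERM "$pid"; sleep 1
kill -0 "$pid" && echo "still running after SIGTERM"
kill -KILL "$pid"
```

How long the init waits before escalating to SIGKILL is exactly the delay you'd see at shutdown.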
RTFM:
https://joborun.neocities.org/joborun
https://joborun.neocities.org/goals
https://wiki.gentoo.org/wiki/GCC_optimization
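For what it's worth, the Gentoo guide above mostly boils down to picking sane compiler flags; on an Arch-like system they would go in /etc/makepkg.conf (illustrative values only, not a recommendation):

```shell
# /etc/makepkg.conf excerpt (illustrative values only)
CFLAGS="-march=native -O2 -pipe"   # optimize for the build machine's CPU
CXXFLAGS="${CFLAGS}"
MAKEFLAGS="-j$(nproc)"             # parallel build jobs = CPU count
```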