Topic solved

How to setup a local s6-rc database and plug it into the supervision tree

One of systemd's (admittedly) good features is the ability for non-privileged users to manage their own services without needing root privileges at all. You can accomplish the same thing (for much, much less) with the power of s6 and s6-rc. It just takes a little bit of setup.

Note: You can set this up on any Artix installation regardless of which init system you personally use. s6 and s6-rc can be installed on any machine without interfering with anything boot-related. What differs across init systems is the part that starts the local user's s6-rc services on boot (which is optional, of course). If you want to get your feet wet with s6, this wouldn't be a bad start.

Setup the local source directories:
The source directory for your services can be anywhere your user has read access. I am following the XDG base directory spec (XDG_DATA_HOME), so my source directory is under ~/.local/share. We will keep things simple by copying the structure of the /etc/s6 folder. My directory tree looks like this:

~/.local/share
 └── s6
      ├── rc
      └── sv

Or as commands that would simply be:
Code: [Select]
$ mkdir ~/.local/share/s6
$ mkdir ~/.local/share/s6/rc
$ mkdir ~/.local/share/s6/sv

Now we need to actually put some services in ~/.local/share/s6/sv. Any userspace daemons and/or oneshot scripts you want to run are good candidates. For this example, I will use udiskie, a userspace daemon I use for automounting USB devices.
Code: [Select]
$ mkdir ~/.local/share/s6/sv/udiskie

Then we just need to create the run and type files. They look like this:
Code: [Select]
~/.local/share/s6/sv/udiskie/run
--------------------
#!/bin/execlineb -P
exec udiskie
Code: [Select]
~/.local/share/s6/sv/udiskie/type
-------------------
longrun

I used execline for my run script because it is a lightweight, non-interactive scripting language designed to work well with s6 and s6-rc. However, you can use any scripting language you want with s6-rc. It does not care as long as the shebang is valid.
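For example, an equivalent run script in plain POSIX sh would be (a sketch; only the shebang and the final exec matter to s6-rc):

```shell
#!/bin/sh
# ~/.local/share/s6/sv/udiskie/run, in plain sh instead of execline.
# exec replaces the shell with the daemon, so s6 supervises udiskie
# directly rather than a wrapper shell.
exec udiskie
```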

For convenience, let's also make a bundle called default that contains udiskie, so all of this user's services/scripts can be brought up with a single command.
Code: [Select]
$ mkdir -p ~/.local/share/s6/sv/default/contents.d
$ touch ~/.local/share/s6/sv/default/contents.d/udiskie
Code: [Select]
~/.local/share/s6/sv/default/type
-------------------
bundle
Of course, there's a ton of things you can put in these directories. For the full details, see the upstream documentation for the source directories and service directories.
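To illustrate one more thing you can put there: a oneshot source directory uses an "up" script (and optionally "down") instead of "run". Here is a sketch scaffolding a hypothetical oneshot named cleanup-tmp (the name and its up script are made up for this example; SVDIR stands in for ~/.local/share/s6/sv):

```shell
#!/bin/sh
# Sketch: a hypothetical oneshot source directory named cleanup-tmp.
# SVDIR stands in for ~/.local/share/s6/sv in this example.
SVDIR=$(mktemp -d)
mkdir -p "$SVDIR/cleanup-tmp"
# oneshots are declared with type "oneshot" and an "up" script
echo oneshot > "$SVDIR/cleanup-tmp/type"
cat > "$SVDIR/cleanup-tmp/up" <<'EOF'
#!/bin/execlineb -P
rm -rf /tmp/scratch
EOF
cat "$SVDIR/cleanup-tmp/type"
```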


Setup and compile the s6-rc database:
Note: The s6-base package in Artix ships scripts that do the process below for you. You can simply run "s6-db-reload -u" to update your local user database; it assumes you store your services in ~/.local/share/s6/sv. The section below is kept for historical/informational reasons.

Now that we have our service, we need to create and setup the s6-rc database. First, let's compile it.
Code: [Select]
$ s6-rc-compile ~/.local/share/s6/rc/compiled-$(date +%s) ~/.local/share/s6/sv
This command takes two arguments: the first is the path to the database you are creating, and the second is the path to your source directories. Notice that the database's name is in the format compiled-$(date +%s); that command merely appends a unix timestamp of the current time to ensure that the database name is unique. You can call your databases whatever you like, but I would highly recommend using something that generates a unique name every time you run it.

Now that the database exists, we need to make the symlink. Remember that symlinks must be absolute paths. Replace timestamp below with whatever the database name actually is.
Code: [Select]
$ ln -sf /home/${USER}/.local/share/s6/rc/compiled-timestamp /home/${USER}/.local/share/s6/rc/compiled

Important note: If you are updating/changing to a new database (first by executing s6-rc-update) and the compiled symlink already exists, you must replace the symlink atomically. The default ln invocation does not do this. The details are omitted in this guide; consult the upstream documentation on database management.

At this point, you're probably thinking, "this sucks; can't I make life easier?" Don't worry, the answer is yes. Artix has shipped a script for automatically updating and switching databases ever since s6 was first implemented. With a little bit of tweaking, it can easily be used for local services. Here it is below.
Code: [Select]
#!/bin/sh

DATAPATH="/home/${USER}/.local/share/s6"
RCPATH="${DATAPATH}/rc"
DBPATH="${RCPATH}/compiled"
SVPATH="${DATAPATH}/sv"
SVDIRS="/run/${USER}/s6-rc/servicedirs"
TIMESTAMP=$(date +%s)

if ! s6-rc-compile "${DBPATH}"-"${TIMESTAMP}" "${SVPATH}"; then
    echo "Error compiling database. Please double check the ${SVPATH} directories."
    exit 1
fi

if [ -e "/run/${USER}/s6-rc" ]; then
    for dir in "${SVDIRS}"/*; do
        if [ -e "${dir}/down" ]; then
            s6-svc -x "${dir}"
        fi
    done
    s6-rc-update -l "/run/${USER}/s6-rc" "${DBPATH}"-"${TIMESTAMP}"
fi

if [ -d "${DBPATH}" ]; then
    ln -sf "${DBPATH}"-"${TIMESTAMP}" "${DBPATH}"/compiled && mv -f "${DBPATH}"/compiled "${RCPATH}"
else
    ln -sf "${DBPATH}"-"${TIMESTAMP}" "${DBPATH}"
fi

echo "==> Switched to a new database for ${USER}."
echo "    Remove any old unwanted/unneeded database directories in ${RCPATH}."
Just run that as your local user and as long as you follow the paths in there it should just work™. Feel free to modify to your liking.

For this next part, I am going to assume you are an Artix s6 user and want to hook this new database into your overall supervision tree (which runs as root). If this is not you, you can skip down to the bonus section.

Plugging the local user database to the root supervision tree:
Now it's time to use administrator privileges to finish the job. I implemented this by creating a user-services bundle, a local-s6-user longrun, and a local-s6-rc-user oneshot. However, let's first create a simple conf file (/etc/s6/config/user-services.conf) for ease of use.
Code: [Select]
/etc/s6/config/user-services.conf
---------------------------------
# username for the user-services bundle
USER=username
If you want to do multiple users, you could easily put more variables in there as needed.

Now let's setup that user-services bundle.
Code: [Select]
$ mkdir -p /etc/s6/adminsv/user-services/contents.d
$ touch /etc/s6/adminsv/user-services/contents.d/local-s6-user
$ touch /etc/s6/adminsv/user-services/contents.d/local-s6-rc-user
Code: [Select]
/etc/s6/adminsv/user-services/type
------------
bundle

For s6-rc to work, we first need a running s6-svscan process. Since this is for the local user, we will make sure all of the commands in this script run as the local user. We also need to pick a scan directory for s6-svscan, something the local user has full read/write access to. In this example, the /run/${USER}/service directory will be used. Upstream recommends that this be on a RAM filesystem (such as tmpfs), which also works best with s6-rc. Here are the details.
Code: [Select]
$ mkdir -p /etc/s6/adminsv/local-s6-user/dependencies.d
$ touch /etc/s6/adminsv/local-s6-user/dependencies.d/mount-filesystems
Code: [Select]
/etc/s6/adminsv/local-s6-user/notification-fd
----------------
3
Code: [Select]
/etc/s6/adminsv/local-s6-user/run
---------------------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { install -d -o ${USER} -g ${USER} /run/${USER} }
foreground { install -d -o ${USER} -g ${USER} /run/${USER}/service }
s6-setuidgid ${USER} exec s6-svscan -d 3 /run/${USER}/service
Code: [Select]
/etc/s6/adminsv/local-s6-user/type
---------------
longrun
While this script parses the conf file for the USER variable, note that the "username" literal acts as a fallback USER in case the envfile somehow fails. Take advantage of it by putting your actual username there.
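If execline's importas -uD is unfamiliar, its fallback behaves much like shell parameter expansion with a default value; a sketch in sh (SVC_USER is a stand-in variable for this example):

```shell
#!/bin/sh
# Sketch: importas -uD "username" USER USER is analogous to this
# shell default-value expansion.
unset SVC_USER
fallback="${SVC_USER:-username}"   # envfile failed: use the literal default
SVC_USER=alice
fromconf="${SVC_USER:-username}"   # envfile supplied a value: use it
echo "$fallback $fromconf"
```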

Now, finally, it is time for the local-s6-rc-user piece. This is merely a oneshot that runs after local-s6-user is up.
Code: [Select]
$ mkdir -p /etc/s6/adminsv/local-s6-rc-user/dependencies.d
$ touch /etc/s6/adminsv/local-s6-rc-user/dependencies.d/mount-filesystems
$ touch /etc/s6/adminsv/local-s6-rc-user/dependencies.d/local-s6-user
Code: [Select]
/etc/s6/adminsv/local-s6-rc-user/down
-----------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { s6-setuidgid ${USER} s6-rc -l /run/${USER}/s6-rc -bDa change }
foreground { s6-setuidgid ${USER} rm -r /run/${USER}/service }
s6-setuidgid ${USER}
elglob -0 dirs /run/${USER}/s6-rc*
forx -E dir { ${dirs} }
rm -r ${dir}
Code: [Select]
/etc/s6/adminsv/local-s6-rc-user/type
----------------
oneshot
Code: [Select]
/etc/s6/adminsv/local-s6-rc-user/up
-----------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { s6-setuidgid ${USER}
s6-rc-init -c /home/${USER}/.local/share/s6/rc/compiled -l /run/${USER}/s6-rc /run/${USER}/service }
s6-setuidgid ${USER}
exec s6-rc -l /run/${USER}/s6-rc -up change default
The same note about the "username" applies here.

Now we can finally update our root/administrator database.
Code: [Select]
$ sudo s6-db-reload

Let's start up that local s6-rc database session.
Code: [Select]
$ sudo s6-rc -u change user-services

That's it! Now you have a fully local process supervisor and a fully local service manager for the user. You just have to pass the s6-rc command the -l argument pointing to the correct live database. In the case of our udiskie example:
Code: [Select]
$ s6-rc -l /run/${USER}/s6-rc -u change udiskie

Or if you want to bring the user's default bundle down, that would be:
Code: [Select]
$ s6-rc -l /run/${USER}/s6-rc -d change default

The really nice thing about this setup is that the local s6-svscan process is completely supervised. It will never die during the lifetime of the machine unless you purposely tell it to. If you blindly send the PID a kill signal, its s6-supervise process will simply respawn it. Your user can continue to use the s6-rc and s6 commands on their local services as normal. These scripts are fairly generic, so if you want to add more users, you can basically copy and paste, change a few paths/variable names, and add them to the user-services bundle. To have these services always start on boot, just add user-services to your default bundle in the root database and you are good to go.

Bonus: I don't want to plug it into an existing supervision tree/I'm not using s6 as init
In the previous section, all the commands that start up s6-rc are run entirely as the local user. You don't have to plug it into any existing framework if you don't want to. Here's a quick and dirty way to get it working on any system.

First, we need to get s6-svscan working. Let's create those folders in /tmp.
Code: [Select]
$ mkdir /tmp/${USER}
$ mkdir /tmp/${USER}/service
$ s6-svscan /tmp/${USER}/service
s6-svscan will run in the foreground. This is good and what you want; keep it running. In a new terminal, let's do this:
Code: [Select]
$ s6-rc-init -c /home/${USER}/.local/share/s6/rc/compiled -l /tmp/${USER}/s6-rc /tmp/${USER}/service
$ s6-rc -l /tmp/${USER}/s6-rc -up change default
That's it! You have a fully local s6-rc ready for use. Just be sure to pass "-l /tmp/${USER}/s6-rc" to all of your s6-rc commands, like so:
Code: [Select]
$ s6-rc -l /tmp/${USER}/s6-rc -u change udiskie
If you want to quit, just send the s6-svscan process a termination signal. The various directories in /tmp will need to be cleaned up/removed if you wish to start s6-rc again. Note that it is 100% possible to run these commands from another init system's startup process if you want (OpenRC, runit, even systemd would work). I'll leave that as an exercise for the curious reader.

Final Thoughts:
This ended up being quite a bit wordier and longer than I expected, but I hope it is interesting. To my knowledge, nobody has really detailed how to get something like this set up. All the information you need is in the skarnet documentation, but it is scattered across quite a few pages and you need to conceptually understand the system to piece it all together. I hope this was useful. I know I want to start migrating more things to be supervised/handled by s6-rc at least.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #1
This is awesome. Thank you very much for sharing.
Artix Linux Colombia

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #2
I'm going to give it a try once the /etc/{,admin}sv split reaches the main repo. I already have a runit-style ~/.service folder (including oneshots simulated with s6-pause, please don't punch me Skarnet!) for a lot of stuff there, and it's awesome. Full s6-rc compatibility (which I didn't manage to achieve) would be even better.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #3
Here's a neat trick I just realized. All of the supervisor's children inherit their environment from s6-svscan. When s6-svscan is PID 1 there are, of course, no environment variables to speak of, so daemons that rely on certain variables have to have them exported in the run script. If you start your local s6-rc instance off the root supervision tree, it will also have zero environment variables. This is true regardless of what scripting language you use (execline, shell, etc.). Unfortunately, this can be a bit unwieldy, since it is fairly common for local user services/daemons to require various environment variables (XDG_RUNTIME_DIR, DISPLAY, etc.). To avoid a lot of pain, you can simply set up your local s6-svscan run script to export any variables you want into the environment. Every supervised child (i.e. your local user services) will automatically inherit everything you export.
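A quick sketch of that inheritance in plain sh: anything exported by the parent (standing in for s6-svscan here) is visible to every child it spawns.

```shell
#!/bin/sh
# Sketch: children inherit the parent's exported environment, just as
# supervised services inherit whatever s6-svscan exports.
export WAYLAND_DISPLAY=wayland-1
child_sees=$(sh -c 'printf %s "$WAYLAND_DISPLAY"')
echo "child sees: $child_sees"
```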

Let's reuse that /etc/s6/config/user-services.conf file and make it more useful. Here's what mine looks like right now (replace username with your actual username of course).
Code: [Select]
# environment variables for the local s6-rc database
DISPLAY=:0
UID=1000
USER=username
WAYLAND_DISPLAY=wayland-1
Adjust that to your liking. If you are using execline for your scripts (which I recommend), you cannot do anything fancy here: strictly key=value pairs are allowed. Save the fancier stuff for the actual run script.
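As a sanity check, here is a sketch that verifies a conf file really is nothing but comments, blank lines, and KEY=VALUE pairs (the grep pattern is my own approximation of that shape, not envfile's exact parser):

```shell
#!/bin/sh
# Sketch: count lines that are not a comment, blank, or KEY=VALUE.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# environment variables for the local s6-rc database
DISPLAY=:0
USER=username
EOF
bad=$(grep -cEv '^(#|[A-Za-z_][A-Za-z0-9_]*=|[[:space:]]*$)' "$conf")
rm -f "$conf"
[ "$bad" -eq 0 ] && echo "conf OK"
```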

Now let's pop open that /etc/s6/adminsv/s6-user/run file.
Code: [Select]
envfile /etc/s6/config/user-services.conf
importas -i UID UID
importas -i USER USER
export HOME /home/${USER}
export XDG_RUNTIME_DIR /run/user/${UID}

foreground { install -d -o ${USER} -g ${USER} /tmp/${USER} }
foreground { install -d -o ${USER} -g ${USER} /tmp/${USER}/service }
s6-setuidgid ${USER} exec s6-svscan -d 3 /tmp/${USER}/service
The envfile command takes all of the variables we defined in that conf file and exports them into the environment. After that, we immediately make use of a couple of those variables, via the importas command, to define a couple of other handy things. Next, we export the two new variables (HOME and XDG_RUNTIME_DIR) into the environment. You can piece together pretty much anything you want in this manner. The rest of the script simply sets up and starts s6-svscan as explained before.

Update the database and restart the bundle.
Code: [Select]
$ sudo s6-db-reload
$ sudo s6-rc -d change user-services
$ sudo s6-rc -u change user-services

Now let's actually make use of all of this in a local user service. I am a sway user, and I run swayidle as the daemon managing my DPMS settings. Let's hook it into s6-rc. Here's my run file (~/.local/share/s6/sv/swayidle/run).
Code: [Select]
#!/bin/execlineb -P
importas -i XDG_RUNTIME_DIR XDG_RUNTIME_DIR
backtick -n -E SWAYSOCK { pipeline { redirfd -w 2 /dev/null find ${XDG_RUNTIME_DIR} -name "sway-ipc.*" } pipeline { sort } head -n 1 }
export SWAYSOCK ${SWAYSOCK}
redirfd -w 2 /dev/null exec swayidle timeout 600 "swaymsg \"output * dpms off \"" resume "swaymsg \"output * dpms on\""
That may look a little complicated, but it's actually much simpler now than it was before I modified /etc/s6/adminsv/s6-user/run. swayidle needs three environment variables to run: XDG_RUNTIME_DIR, WAYLAND_DISPLAY, and SWAYSOCK. Because I already defined XDG_RUNTIME_DIR and WAYLAND_DISPLAY in the supervising s6-svscan process, I am saved from having to export them here. Let's step through it line by line.

Code: [Select]
importas -i XDG_RUNTIME_DIR XDG_RUNTIME_DIR
In order to grab the XDG_RUNTIME_DIR from the environment and use it in the script as a variable, we have to make an importas call in execline. The -i argument makes this a hard failure if XDG_RUNTIME_DIR can't be found in the environment.

Code: [Select]
backtick -n -E SWAYSOCK { pipeline { redirfd -w 2 /dev/null find ${XDG_RUNTIME_DIR} -name "sway-ipc.*" } pipeline { sort } head -n 1 }
This line looks a little crazy, but it's just one big call to get the path to the sway socket. backtick is a program that runs another program and stores its output in an environment variable. That's the first step. The program I am running here just pipes a few commands together and redirects the output. In shell, it would be: find ${XDG_RUNTIME_DIR} -name "sway-ipc.*" 2> /dev/null | sort | head -n 1. The sort and head -n 1 ensure I always get the first match (in case multiple sway sessions are open, for example). I also redirected stderr to /dev/null since I don't care about logging this and it silences any noisy errors.
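A sketch of that pipeline's behavior with fake socket names in a scratch directory: sort plus head -n 1 makes the selection deterministic.

```shell
#!/bin/sh
# Sketch: with several sway-ipc sockets present, the pipeline always
# picks the lexicographically first one.
dir=$(mktemp -d)
touch "$dir/sway-ipc.1000.5.sock" "$dir/sway-ipc.1000.9.sock"
sock=$(find "$dir" -name "sway-ipc.*" 2>/dev/null | sort | head -n 1)
basename "$sock"
```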

Code: [Select]
export SWAYSOCK ${SWAYSOCK}
redirfd -w 2 /dev/null exec swayidle timeout 600 "swaymsg \"output * dpms off \"" resume "swaymsg \"output * dpms on\""
This part is pretty simple. The SWAYSOCK we grabbed earlier gets exported into the environment so that the following swayidle call can use it. stderr is again redirected to /dev/null since I'm not interested in the output here.

Update to the new database using the script from before (change that path based on where you keep it) and start it up.
Code: [Select]
$ sh ~/src/scripts/s6-rc-db-user-update-hook
$ s6-rc -l /tmp/${USER}/s6-rc -u change swayidle

When writing services for the root supervision tree, I never thought too much about this, since it's not really an applicable feature (PID 1 has no environment). When making local services, however, it is immensely useful. My fcitx daemon now just works and I didn't even have to do anything. Inherited environment variables are definitely something you should remember and use.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #4
Hello again.

I think this post is so awesome.

The first time I replicated it, it worked perfectly; then, after the s6 update, I moved all my custom services to adminsv and for some reason I can't make it work.

Code: [Select]
s6-rc -v 6 -u change user-services
s6-rc: info: bringing selected services up
s6-rc: info: service s6rc-fdholder: already up
s6-rc: info: service udevd-log: already up
s6-rc: info: service s6rc-oneshot-runner: already up
s6-rc: info: service mount-tmpfs: already up
s6-rc: info: service mount-procfs: already up
s6-rc: info: service mount-devfs: already up
s6-rc: info: service mount-cgroups: already up
s6-rc: info: service kmod-static-nodes: already up
s6-rc: info: service tmpfiles-dev: already up
s6-rc: info: service udevd-srv: already up
s6-rc: info: service udevadm: already up
s6-rc: info: service modules: already up
s6-rc: info: service mount-filesystems: already up
s6-rc: info: service s6-user: starting
s6-rc: info: service s6-user successfully started
s6-rc: info: service s6-rc-user: starting
s6-ipcclient: connected to /run/s6-rc/servicedirs/s6rc-oneshot-runner/s
s6-svscan: fatal: another instance of s6-svscan is already running on the same directory
s6-rc: warning: unable to start service s6-rc-user: command exited 100

I also tried to make a user service for "xdg-desktop-portal-wlr" because I use my laptop for work and I need to share my screen. The script works in "bash" and even in "execlineb", but for some reason, when I start the service, it does not work. What annoys me a bit is that I don't know how to troubleshoot these kinds of problems: there are no logs and no strange outputs; it just says the service starts OK, but then I cannot see the service running. I don't know what the right way to troubleshoot these problems is, or whether I should open another post. If you can help me, I will appreciate it.

I like s6 a lot, but understanding it completely is quite hard for me.

Artix Linux Colombia

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #5
Could you post your s6-rc-user oneshot script? Judging from the output, it looks like you are mistakenly running s6-svscan in there again?

I don't have much experience with the xdg-desktop-portal stuff, but it is often the case that services like that require environment variables to run correctly. If you run it in a normal shell, you will get those, but that's not necessarily the case when you run it with s6-rc. The quick and dirty way to see what's going on is to run the daemon/script in a verbose mode and redirect the output to some arbitrary file. Usually that will give you a clue about what is going on.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #6
Hello, I was able to solve the first problem; you are right. I had copied the bad script, and that's why I saw that error.

I made a couple of scripts. I made one for wlsunset:

Code: [Select]
$HOME/.local/share/s6/rc/sv/wlsunset/run
-------------------------------
#!/usr/bin/execlineb -P
importas -i XDG_RUNTIME_DIR XDG_RUNTIME_DIR
exec wlsunset -l 40.416775 -L -3.703790

Code: [Select]
$HOME/.local/share/s6/rc/sv/wlsunset/type
-------------------------------
longrun

This is the one I made for xdg-desktop-portal-wlr

Code: [Select]
$HOME/.local/share/s6/rc/sv/xdg-desktop-portal-wlr/run
------------------------
#!/usr/bin/execlineb -P
importas -i XDG_RUNTIME_DIR XDG_RUNTIME_DIR
importas -i XDG_CURRENT_DESKTOP XDG_CURRENT_DESKTOP
importas -i WAYLAND_DISPLAY WAYLAND_DISPLAY
exec dbus-update-activation-environment WAYLAND_DISPLAY XDG_CURRENT_DESKTOP=sway

Code: [Select]
$HOME/.local/share/s6/rc/sv/xdg-desktop-portal-wlr/type
------------------------
oneshot

These are the configurations from the "root" side:

Code: [Select]
/etc/s6/config/user-services.conf
--------------------------------
DISPLAY=:0
USER=chucho
UID=1000
WAYLAND_DISPLAY=wayland-1
XDG_CURRENT_DESKTOP=sway
Code: [Select]
/etc/s6/adminsv/s6-user/run
--------------------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -i UID UID
importas -i USER USER
export HOME /home/${USER}
export XDG_RUNTIME_DIR /run/user/${UID}

foreground { install -d -o ${USER} -g ${USER} /tmp/${USER} }
foreground { install -d -o ${USER} -g ${USER} /tmp/${USER}/service }
s6-setuidgid ${USER} exec s6-svscan -d 3 /tmp/${USER}/service
Code: [Select]
/etc/s6/adminsv/s6-rc-user/up
---------------------------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { s6-setuidgid ${USER}
s6-rc-init -c /home/${USER}/.local/share/s6/rc/compiled -l /tmp/${USER}/s6-rc /tmp/${USER}/service }
s6-setuidgid ${USER}
exec s6-rc -l /tmp/${USER}/s6-rc -up change default
Code: [Select]
/etc/s6/adminsv/s6-rc-user/down
---------------------------------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { s6-setuidgid ${USER} s6-rc -l /tmp/${USER}/s6-rc -bDa change }
foreground { s6-setuidgid ${USER} rm -r /tmp/${USER}/service }
s6-setuidgid ${USER}
elglob -0 dirs /tmp/${USER}/s6-rc*
forx -E dir { ${dirs} }
rm -r ${dir}

I basically have two questions:

1. Since I don't use a login manager, is there a way to start those scripts only if there is a Wayland session? If I add "user-services" to the default bundle, I will get a lot of errors because those environment variables don't exist. I normally log in at the TTY console and then execute sway. So, what I want is to start "user-services" as root only after sway has started.

2. xdg-desktop-portal-wlr shows as started, and I don't see any error.

Code: [Select]
s6-rc -l /tmp/${USER}/s6-rc -v 6 -u change xdg-desktop-portal-wlr
s6-rc: info: bringing selected services up
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service xdg-desktop-portal-wlr: starting
s6-ipcclient: connected to /tmp/chucho/s6-rc/servicedirs/s6rc-oneshot-runner/s
s6-rc: info: service xdg-desktop-portal-wlr successfully started

I even ran the execlineb script manually and it runs properly, but for a reason I completely don't know, it does not work.

PS: Maybe I am hijacking this post; it's not my intention. If you believe I should open another post for this, I will do it.



Artix Linux Colombia


Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #7
No worries, your questions are related to the topic and worth documenting here for potential readers.

For the xdg-desktop-portal-wlr oneshot up script (you typed it as "run", but I think that was just a mistake), I really don't know. The command looks fine. XDG_CURRENT_DESKTOP=sway may be redundant, but it should work. All I can say is to make sure the dbus daemon is actually running. Possibly it starts too early (before sway is actually ready)? I don't see why that would matter in this case, but if you haven't yet, maybe try bringing it up by itself after you have the graphics up and running.

1. Since I don't use a login manager, is there a way to start those scripts only if there is a Wayland session? If I add "user-services" to the default bundle, I will get a lot of errors because those environment variables don't exist. I normally log in at the TTY console and then execute sway. So, what I want is to start "user-services" as root only after sway has started.

Remember that almost all of this is accomplished without root. The only reason root comes into play here is to plug your local s6-svscan into the overall supervision tree. The actual "s6-rc -l /tmp/${USER}/s6-rc -up change default" command can be run completely without root permissions. What you could do is remove that from the s6-rc-user up oneshot and have that service's only purpose be to run s6-rc-init. Then you could put that s6-rc command in your sway config to execute automatically on startup.

You could also get fancier with the dependencies. sway could be thought of as a daemon: it's a long-running process that isn't supposed to die, after all. One could make it an s6-rc service for your local user and then have other services depend on it. I'm actually experimenting with this setup right now, but it's a bit imperfect, as it takes a bit of time for sway to actually be up, and thus some services start too early.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #8
You could also get fancier with the dependencies. sway could be thought of as a daemon: it's a long-running process that isn't supposed to die, after all. One could make it an s6-rc service for your local user and then have other services depend on it. I'm actually experimenting with this setup right now, but it's a bit imperfect, as it takes a bit of time for sway to actually be up, and thus some services start too early.

I have a similar thing going with bspwm. To avoid race conditions (e.g. polybar starting too early), I give it an unusual notification-fd (9 in my case), and at the very end of my bspwmrc I have echo >&9.

An exec line at the end of the sway config could have a similar effect.
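The fd-based readiness handshake can be sketched without s6 at all: the "service" below does its setup and then writes a newline to inherited fd 9, which is exactly what the echo >&9 in bspwmrc does. (The supervisor normally provides that fd; a temp file stands in for it here.)

```shell
#!/bin/sh
# Sketch: s6-style readiness notification on fd 9, simulated with a file.
ready=$(mktemp)
# The "service": do some setup, then announce readiness on fd 9.
sh -c 'sleep 0.1; echo >&9' 9>"$ready"
# The supervisor side: a newline on the fd means the service is up.
[ -s "$ready" ] && echo "service signalled readiness"
```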

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #9
That's a good idea. I should give that a try.

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #10
No worries, your questions are related to the topic and worth documenting here for potential readers.

For the xdg-desktop-portal-wlr oneshot up script (you typed it as "run", but I think that was just a mistake), I really don't know. The command looks fine. XDG_CURRENT_DESKTOP=sway may be redundant, but it should work. All I can say is to make sure the dbus daemon is actually running. Possibly it starts too early (before sway is actually ready)? I don't see why that would matter in this case, but if you haven't yet, maybe try bringing it up by itself after you have the graphics up and running.

Remember that almost all of this is accomplished without root. The only reason root comes into play here is to plug your local s6-svscan into the overall supervision tree. The actual "s6-rc -l /tmp/${USER}/s6-rc -up change default" command can be run completely without root permissions. What you could do is remove that from the s6-rc-user up oneshot and have that service's only purpose be to run s6-rc-init. Then you could put that s6-rc command in your sway config to execute automatically on startup.

You could also get fancier with the dependencies. sway could be thought of as a daemon: it's a long-running process that isn't supposed to die, after all. One could make it an s6-rc service for your local user and then have other services depend on it. I'm actually experimenting with this setup right now, but it's a bit imperfect, as it takes a bit of time for sway to actually be up, and thus some services start too early.

Hello again. I removed the line "s6-rc -l /tmp/${USER}/s6-rc -up change default" from the s6-rc-user up oneshot as you said, and it works the way I want. At the moment I have two services that run with s6. So it's something like this:

Code: [Select]
/etc/s6/adminsv/s6-rc-user/up
--------------------------------------
#!/bin/execlineb -P
envfile /etc/s6/config/user-services.conf
importas -uD "username" USER USER
foreground { s6-setuidgid ${USER}
s6-rc-init -c /home/${USER}/.local/share/s6/rc/compiled -l /tmp/${USER}/s6-rc /tmp/${USER}/service }
s6-setuidgid ${USER}

This is the sway configuration in case people want to check what I have done.

Code: [Select]
.config/sway/config.d/2_autostart
-------------------------------------
#s6 local services
#It runs wlsunset, which is a lighter alternative to Redshift,
#and Pipewire, a better alternative to the pulseaudio crap.
exec s6-rc -l /tmp/${USER}/s6-rc -up change default
exec dbus-update-activation-environment WAYLAND_DISPLAY XDG_CURRENT_DESKTOP=sway
exec /usr/share/sway/scripts/inactive-windows-transparency.py -o 0.9
exec mako

The wlsunset and pipewire services work OK; the other ones (xdg and mako) are giving me a lot of problems. I will check them in the future when I have more time. At the moment I am pretty happy with this.

Thank you very much for your help.


Artix Linux Colombia

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #11
I finally figured out how to make xdg-desktop-portal-wlr work. The script is a bit messy, so if someone knows a better way to do it, I would appreciate it.

This is the code of my oneshot  script.

I basically need the environment variable "DBUS_SESSION_BUS_ADDRESS", which was empty; I got it from the PID of the running sway process.

Code: [Select]
~/.local/share/s6/rc/sv/xdg-desktop-portal-wlr/up
------------------------------------------------------
#!/bin/execlineb -P
importas -i XDG_RUNTIME_DIR XDG_RUNTIME_DIR
importas -i XDG_CURRENT_DESKTOP XDG_CURRENT_DESKTOP
importas -i WAYLAND_DISPLAY WAYLAND_DISPLAY
#Getting the sway pid
backtick -n -E SWAYPID { pipeline { ps aux } pipeline { grep sway\$ } pipeline { grep -v dbus } pipeline { cut -d " " -f 5 } head -n 1 }
#Getting the DBUS_SESSION_BUS_ADDRESS environment variable
backtick -n -E DBUS_SESSION_BUS_ADDRESS { pipeline { strings /proc/${SWAYPID}/environ } pipeline { grep DBUS_SESSION_BUS_ADDRESS } cut -d "=" -f 2,3,4 }
export DBUS_SESSION_BUS_ADDRESS ${DBUS_SESSION_BUS_ADDRESS}
exec dbus-update-activation-environment WAYLAND_DISPLAY=${WAYLAND_DISPLAY} XDG_CURRENT_DESKTOP=${XDG_CURRENT_DESKTOP}

So basically, the line:

Code: [Select]
backtick -n -E SWAYPID { pipeline { ps aux } pipeline { grep sway\$ } pipeline { grep -v dbus } pipeline { cut -d " " -f 5 } head -n 1 }

is the equivalent of this shell pipeline:

Code: [Select]
ps aux | grep sway$ | grep -v dbus | cut -d " " -f 5 | head -n 1

And it will save the PID into the SWAYPID variable.

Then I used the SWAYPID  in the line:

Code: [Select]
backtick -n -E DBUS_SESSION_BUS_ADDRESS { pipeline { strings /proc/${SWAYPID}/environ } pipeline { grep DBUS_SESSION_BUS_ADDRESS } cut -d "=" -f 2,3,4 }

which is equivalent to:

Code: [Select]
strings /proc/${SWAYPID}/environ | grep DBUS_SESSION_BUS_ADDRESS | cut -d "=" -f 2,3,4

to get the DBUS_SESSION_BUS_ADDRESS that my sway session is using. Then I just export the variable as you did in the tutorial, and it works.

I am sharing this because many people might benefit from it; I am surely not the first one who needs to share the screen on Wayland through Sway.
Artix Linux Colombia

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #12
I'm using what I think is a nicer solution to the "Things That Need D-Bus" problem.

We can actually set the D-Bus session bus address as part of s6-svscan's environment (just as systemd does for systemd --user), and then spawn a supervised dbus-daemon --session process listening at that specific location. It's more reliable than using files or /proc to pass this information to and from unsupervised processes, and it shaves code from run files.

D-Bus addresses have a very specific format, described in the D-Bus specification. For our purposes, I'll be using unix:abstract=/user/dbus which will, put simply, create an invisible socket file.

Put this in your user-services.conf file.

Code: [Select]
USERNAME=capezotte
[...]
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/capezotte/dbus

Now, we want to spawn a dbus-daemon at this location. I recommend the following service definition, with readiness notification support:

.local/s6/sv/dbus/notification-fd
Code: [Select]
3

.local/s6/sv/dbus/run
Code: [Select]
#!/bin/execlineb -P
fdmove -c 2 1
importas -i ADDR DBUS_SESSION_BUS_ADDRESS
dbus-daemon --nofork --nopidfile --session --address=${ADDR} --print-address=3

Now user services no longer need to perform complicated queries on /proc (or on files you create) to get this address - it's known in advance. Just list dbus in the dependencies file, and run can be a single line now.
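For instance, a sketch of what the udiskie service from earlier in the thread could look like under this scheme (hypothetical; the daemon inherits DBUS_SESSION_BUS_ADDRESS from s6-svscan's environment, so nothing needs to be imported):

Code: [Select]
.local/s6/sv/udiskie/dependencies
---------------------------------
dbus

Code: [Select]
.local/s6/sv/udiskie/run
---------------------------------
#!/bin/execlineb -P
fdmove -c 2 1
udiskie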

To make your graphical session use your supervised D-Bus for message passing, the lowest-effort way is to just bring it up and copy-paste the session address into your {xinit,sx}rc (since a supervised Xorg - which would inherit the D-Bus address for "free" - breaks elogind, unfortunately).

Example:
Code: [Select]
#!/bin/bash
s6-rc -l /tmp/capezotte/s6-rc -u change dbus
export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/capezotte/dbus
exec i3

No need for dbus-launch anymore, and both user services and things spawned from your session are "first-class citizens".

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #13
Quote
If you want to do multiple users, you could easily put more variables in there as needed.
Like this?
Code: [Select]
USER=abc
USER2=cde
But wouldn't this require modifying all the execline scripts?
Is it possible to create some s6 adminsv service that brings up the local services just for the user who logs in (and stops them on logout)? That way you wouldn't have to specify each user individually in user-services.conf. I have the feeling this might not be possible as an adminsv service, but perhaps it is possible to put something like this in .bash_profile
Code: [Select]
s6-rc -l /tmp/${USER}/s6-rc -up change default
so that it just works without specifying each user individually?

What is the advantage of plugging the local user database into the root supervision tree?
I have the feeling it might be easier to use the bonus section to implement what I imagine.

Another question: how should logs be handled with this method? Using the example from above, should you create a local udiskie-log service, or are the logs automatically handled by the adminsv services? And if you want to create an individual udiskie-log service, what is the preferred/advised location to save user-specific logs?

Re: How to setup a local s6-rc database and plug it into the supervision tree

Reply #14
Quote
Like this?
Code: [Select]
USER=abc
USER2=cde
But wouldn't this require modifying all the execline scripts?
Is it possible to create some s6 adminsv service that brings up the local services just for the user who logs in (and stops them on logout)? That way you wouldn't have to specify each user individually in user-services.conf. I have the feeling this might not be possible as an adminsv service, but perhaps it is possible to put something like this in .bash_profile
Code: [Select]
s6-rc -l /tmp/${USER}/s6-rc -up change default
so that it just works without specifying each user individually?

Hooking things into logging in and logging out is trickier. Well, logging in isn't too bad, since there are various ways to run things on startup, but you would also need to bring services down on logout, and I don't really know of a good way to do that, unfortunately.
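For the login half, a minimal sketch (assuming the compiled database and scandir from the tutorial already exist, and keeping in mind that these files only run for login shells, so a graphical session that bypasses bash won't trigger them):

Code: [Select]
~/.bash_profile
---------------
# bring this user's services up on login
s6-rc -l /tmp/${USER}/s6-rc -up change default

Code: [Select]
~/.bash_logout
--------------
# best-effort teardown: -d brings services down, -a selects everything active
s6-rc -l /tmp/${USER}/s6-rc -da change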

Quote
What is the advantage of plugging the local user database into the root supervision tree?
I have the feeling it might be easier to use the bonus section to implement what I imagine.

Well, the main advantage is that your local s6-svscan process is also fully supervised: if that process dies for some reason, the supervision tree will just respawn it. You can test this by sending it a kill signal.


Quote
Another question: how should logs be handled with this method? Using the example from above, should you create a local udiskie-log service, or are the logs automatically handled by the adminsv services? And if you want to create an individual udiskie-log service, what is the preferred/advised location to save user-specific logs?

It's entirely up to you. I don't really care about logging for my local user, so I didn't implement it. Of course, you can always implement your own logging pipeline if you want. It would just be like the usual Artix services (daemon-srv + daemon-log), with the logs dumped somewhere your user has write access to.
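As a sketch of what such a pair could look like (hypothetical names and log directory; s6-rc ties a producer to its consumer via producer-for/consumer-for files, and s6-log does the actual writing and rotation):

Code: [Select]
~/.local/share/s6/sv/udiskie/producer-for
-----------------------------------------
udiskie-log

Code: [Select]
~/.local/share/s6/sv/udiskie-log/type
-----------------------------------------
longrun

Code: [Select]
~/.local/share/s6/sv/udiskie-log/consumer-for
-----------------------------------------
udiskie

Code: [Select]
~/.local/share/s6/sv/udiskie-log/run
-----------------------------------------
#!/bin/execlineb -P
# T = timestamp each line; rotate at ~1MB, keep 10 archived files.
# The directory is an example and must exist and be writable by your user.
s6-log n10 s1000000 T /home/youruser/.local/state/log/udiskie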