Running Fedora inside an LXD/LXC System Container
This is Part 2 of my documentation of my experience with EUServ's free VPSs, but this time from the standpoint of running Fedora 30 (for the perspective of these VPSs being IPv6-only, see Part 1: Running an IPv6-only VPS Gotchas!)

The use of EUServ's VPSs allowed me to experience not only the challenges of running an IPv6-only VPS but also to have a first-hand account of running Fedora 30 inside an LXD/LXC container.

1- A Bit of Context
First, I should mention that LXC is a well-established, low-level Linux container runtime dating back to 2008, well before anyone had even heard of the currently widely used alternative, Docker (which saw the light in 2013). The problem with LXC containers was/is(/will always be) their sheer complexity, precisely because they are that low-level. Enter LXD (in 2015), Canonical's effort to make LXC containers accessible by building an API around them in Go; in this sense, LXD is a container orchestration engine with LXC under the hood.

LXD containers can only run Linux operating systems, and being containers means they share the host's kernel and devices rather than getting virtual hardware of their own. I state this explicitly because sysadmins should always keep it in mind when working on such VPSs.

With this out of the way, now is the time to go through all the routines of taking control of a brand new VPS running Fedora 30(+) or CentOS (7, 8+).

2- Running a Properly Managed VPS
In this section, I'll go through all the steps essential to properly administer any RedHat-based system, with a few tips that are specific to the EUServ Fedora 30 template used in their automation process.

2.1- Secure Your VPS
It's always good practice to first check the VPS's date and time and see if they're accurate. In our case here (ie, an LXD container), time settings are the host's prerogative, so we'll just check them and maybe change the timezone if we feel like it.
[root@srvXYZ ~]# timedatectl
              Local time: Tue 2020-03-31 14:01:32 CEST
          Universal time: Tue 2020-03-31 12:01:32 UTC
                RTC time: n/a
               Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
             NTP service: inactive
         RTC in local TZ: no

To modify the timezone:
timedatectl set-timezone XYZ/xyz
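Before setting it, you can sanity-check the zone name against the tzdata database; a small guard sketch, using "Europe/Berlin" (the zone from the timedatectl output above) as a stand-in value:

```shell
# Check the zone exists in the tzdata database before handing it to
# timedatectl ("Europe/Berlin" is just an example value).
zone="Europe/Berlin"
if [ -e "/usr/share/zoneinfo/$zone" ]; then
    echo "valid zone: $zone"
    # timedatectl set-timezone "$zone"   # run this part as root on the VPS
else
    echo "unknown zone: $zone"
fi
```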

You can also change the hostname, but I won't do it here:
[root@srv10120 ~]# hostnamectl status
  Static hostname: srvXYZ
        Icon name: computer-container
          Chassis: container
       Machine ID: *******************************
          Boot ID: ...............................
   Virtualization: lxc
 Operating System: Fedora 30 (Thirty)
      CPE OS Name: cpe:/o:fedoraproject:fedora:30
           Kernel: Linux 4.20.8-1.el7.elrepo.x86_64
     Architecture: x86-64

2.1.1- Disable Root Login
The absolute first thing to do is to disable root login via SSH, but to be able to do that, we first need to create a user with sudo power (ie, a sudoer). For clarity, I'll use the standard 3 steps in Fedora.
adduser <username>
passwd <username>
usermod -aG wheel <username>
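A quick way to double-check that the new account really landed in the wheel group (a small sketch; "username" below is a placeholder):

```shell
# `id -nG` prints the user's group names; grep -w matches "wheel" exactly.
if id -nG username 2>/dev/null | grep -qw wheel; then
    echo "username can use sudo (member of wheel)"
else
    echo "username is not in wheel (or does not exist)"
fi
```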

Now you can test that your 'sudoer' is functioning by logging out, logging back in as your newly created 'super-user', and then trying to switch to root:
sudo su -

If you become 'root' after running the command above, then you're all set and can go ahead and disable root login via SSH by modifying the relevant line in '/etc/ssh/sshd_config'
From:
PermitRootLogin yes

To:
PermitRootLogin no
Then:
sshd -t
systemctl restart sshd
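For the record, that edit can also be made non-interactively. A sketch run here against a scratch copy so nothing touches a real config; on the VPS you'd point sed at /etc/ssh/sshd_config itself, then validate and restart as above:

```shell
# Demo on a scratch file -- on the real system, target /etc/ssh/sshd_config
# instead, then run `sshd -t && systemctl restart sshd`.
printf 'Port 22\nPermitRootLogin yes\n' > /tmp/sshd_config.test
# Flip the directive (GNU sed; also catches a commented-out line):
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /tmp/sshd_config.test
grep '^PermitRootLogin' /tmp/sshd_config.test   # → PermitRootLogin no
```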

2.1.2- Activate the Firewall
I've noticed that Firewalld is down by default:
[root@srvXYZ ~]# firewall-cmd --state
not running
[root@srvXYZ ~]# systemctl start firewalld
[root@srvXYZ ~]# firewall-cmd --state
running
[root@srvXYZ ~]# systemctl enable firewalld
[root@srvXYZ ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
  Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
  Active: active (running) since Tue 2020-03-31 13:21:21 CEST; 1min 7s ago
    Docs: man:firewalld(1)
Main PID: 8609 (firewalld)
   Tasks: 2 (limit: 150)
  Memory: 25.9M
  CGroup: /system.slice/firewalld.service
          └─8609 /usr/bin/python3 /usr/sbin/firewalld --nofork --nopid

Mar 31 13:21:20 srvXYZ systemd[1]: Starting firewalld - dynamic firewall daemon...
Mar 31 13:21:21 srvXYZ systemd[1]: Started firewalld - dynamic firewall daemon.
Mar 31 13:21:23 srvXYZ firewalld[8609]: ERROR: Failed to read file "/proc/sys/net/netfilter/nf_conntrack_helper>
Mar 31 13:21:23 srvXYZ firewalld[8609]: WARNING: Failed to get and parse nf_conntrack_helper setting

Here, I have to pause a second to say that even now I'm still not sure whether that error is expected from an LXD container or is a bug, a permission issue, or a misconfiguration; I can't say, as I didn't dig deep enough!

OK! Now, even though the firewall is running, we still have to activate it by assigning the VPS's public interface to (in this use case) the public zone:

Before the change, there is no interface in the public zone:
[root@srvXYZ ~]# firewall-cmd --zone=public --list-all
public
 target: default
 icmp-block-inversion: no
 interfaces:
 sources:
 services: dhcpv6-client mdns ssh
 ports:
 protocols:
 masquerade: no
 forward-ports:
 source-ports:
 icmp-blocks:
 rich rules:

To learn more about your VPS's network interfaces:
[root@srvXYZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
435: eth0@if436: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether 00:84:ed:5c:ed:dd brd ff:ff:ff:ff:ff:ff link-netnsid 0
   inet6 2a02:180:X:Y::Z/128 scope global
      valid_lft forever preferred_lft forever
   inet6 fe80::284:edff:fe5c:eddd/64 scope link
      valid_lft forever preferred_lft forever

Time to add eth0 to the public zone and reload the Firewalld config:
[root@srvXYZ ~]# firewall-cmd --zone=public --change-interface=eth0 --permanent
success
[root@srvXYZ ~]# firewall-cmd --reload
success

Testing the public zone again after this activation:
[root@srvXYZ ~]# firewall-cmd --zone=public --list-all
public (active)
 target: default
 icmp-block-inversion: no
 interfaces: eth0
 sources:
 services: dhcpv6-client mdns ssh
 ports:
 protocols:
 masquerade: no
 forward-ports:
 source-ports:
 icmp-blocks:
 rich rules:

Now, the only services available through eth0 are the ones listed above; anything else is blocked!
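If you later need to expose another service (say, a web server), the pattern is the same. A hypothetical example, to be run as root on the VPS:

```shell
# Open HTTP in the public zone, persist it across reloads, and confirm.
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --reload
firewall-cmd --zone=public --list-services
```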

2.1.3- Removing Password Login and Setting Up Public Key Authentication
I'll come back to this in the last section.

These are the minimum required steps to take to secure your VPS. You may add other things, like running the fail2ban package or changing the SSH service port, but for me those aren't really important if you do implement the three mechanisms above.


2.2- Update your System
Before updating this system, I'd like to make a few observations:
> When you check the failed services of your VPS at its startup, you'll see this:
[root@srvXYZ ~]# systemctl --failed
 UNIT                          LOAD   ACTIVE SUB    DESCRIPTION                  
● auditd.service                loaded failed failed Security Auditing Service    
● network.service               loaded failed failed LSB: Bring up/down networking
● systemd-journald-audit.socket loaded failed failed Journal Audit Socket        

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

3 loaded units listed.
This is normal for an LXD container (you may add the 'sys-kernel-config.mount' unit to that list too). What's not normal, and an indication of an unstable system, is when other things fail and won't restart. I'm thinking, for example, of one instance where I had the 'sssd.service' (ie, System Security Services Daemon) refusing to start, and it was hell to work on that system.

> Being an LXD container, you don't need, when updating your system, anything related to the kernel, filesystem, etc.; thus we'll exclude them from the update process, like this:
[root@srvXYZ ~]# vi /etc/dnf/dnf.conf
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
exclude=kernel* grub* filesystem* sudo

If you skip this step, expect a lot of pointless warnings during the update process.

You'll also notice that I excluded the 'sudo' package; this is because the latest version in Fedora 30 is buggy!

With all this preparation we are now ready to update our system:
dnf update -y

That's all there is to it!
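As an aside, if you'd rather not persist the exclusions in dnf.conf, dnf accepts the same globs per invocation through its -x/--exclude option:

```shell
# One-off equivalent of the dnf.conf exclude line above (run as root).
dnf update -y -x 'kernel*' -x 'grub*' -x 'filesystem*' -x 'sudo'
```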

Notice specific to EUServ's Fedora 30 template:
Even though the template used is the English version, it ships the 'glibc-langpack-de' package instead of 'glibc-langpack-en', so a warning is issued every time you interact with the shell. You're advised to install the English version and then remove the German one to fix that issue.
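That fix, sketched here with dnf's swap subcommand (run as root; a plain install-then-remove works just as well):

```shell
# Replace the German langpack with the English one in a single transaction.
dnf swap -y glibc-langpack-de glibc-langpack-en
```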

My last section in this OP will be about enabling public-key authentication; it's the most important thing to do security-wise and thus deserves to be treated separately.

2.3- Enabling/Using Public-key Authentication
If you're logged in to your VPS, log out now; then, from your Linux machine (I'm not covering the Windows/PuTTY case), check your system's entropy:
cat /proc/sys/kernel/random/entropy_avail
3925
You really want something decent here (1024+), as we're about to generate an ECDSA key pair for version 2 of the SSH protocol:
ssh-keygen -f ~/.ssh/vps1-key-ecdsa -t ecdsa -b 521
Generating public/private ecdsa key pair.
Enter passphrase (empty for no passphrase): mySuperHardPWD
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/vps1-key-ecdsa.
Your public key has been saved in /home/username/.ssh/vps1-key-ecdsa.pub.
The key fingerprint is:
SHA256:qYABz******************************************************
The key's randomart image is:
+---[ECDSA 521]---+
|OOB. .. .*.oo+.  |
|BO.* +=.+.       |
|= . . o ..  o    |
|=.=+. .=..       |
| o. ......F      |
|    .........    |
|     ..          |
|                 |
|                 |
+----[SHA256]-----+
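Going back to the entropy check for a second, it can be scripted as a small guard to run before any key generation (a sketch; 1024 is the rule of thumb from above, not a hard limit):

```shell
# Guard sketch: warn when the kernel entropy pool looks starved before
# generating keys.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$entropy" -ge 1024 ]; then
    echo "entropy OK: $entropy"
else
    echo "entropy LOW: $entropy -- consider installing haveged or rng-tools"
fi
```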

That's it! Now, what we need to do is send the public key of this key pair to our VPS, which is done like so (with <sudoer>@<VPS-IP> standing for your sudoer's username and your VPS's IP):
ssh-copy-id -i ~/.ssh/vps1-key-ecdsa.pub <sudoer>@<VPS-IP>

Now, double-check that you can indeed log in with that key:
ssh -i ~/.ssh/vps1-key-ecdsa <sudoer>@<VPS-IP>
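To avoid typing the -i flag on every login, you can also add a host alias to your local ~/.ssh/config (a sketch; every value below is a placeholder):

```
Host vps1
    HostName <VPS-IPv6-address>
    User <username-of-the-sudoer>
    IdentityFile ~/.ssh/vps1-key-ecdsa
    IdentitiesOnly yes
```

After which a plain 'ssh vps1' is enough.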

If you succeed in logging in, now is the time to disable password authentication in the SSHD config file and restrict logins to the sudoer user and group (extra-security lines in the /etc/ssh/sshd_config file):
PasswordAuthentication no
AllowUsers <username-of-the-sudoer>
AllowGroups wheel
Validate the config and restart the sshd daemon:
sshd -t
systemctl restart sshd

From now on your system is ALL YOURS!...

NB: Follow up to this OP will land in this thread when it's deemed important.


UPDATED:
A last tip I forgot to mention, given that we've already set up our public-key authentication, is how to make a PPK file out of our key pair for use with FileZilla to have SFTP access to your VPS (or for use with PuTTY, for those on Windows who need it.)

Well, it's as simple as issuing this command:
puttygen ~/.ssh/vps1-key-ecdsa -o ~/.ssh/vps1-key-ecdsa.ppk -O private

I did a bit of googling about the following issue, mentioned above:
(03-31-2020, 01:20 PM)fChk Wrote: (...)
Mar 31 13:21:23 srvXYZ firewalld[8609]: ERROR: Failed to read file "/proc/sys/net/netfilter/nf_conntrack_helper>
Mar 31 13:21:23 srvXYZ firewalld[8609]: WARNING: Failed to get and parse nf_conntrack_helper setting

The issue seems to be well known for systems with kernel 4.7 and up, where "the automatic helper assignment in kernel has been turned off by default". Check 'Automatic Helper Assignment' for more on that.

Also check this discussion:
> https://github.com/lxc/lxd/issues/4006


Messages In This Thread
Running Fedora inside an LXD/LXC System Container - by fChk - 03-31-2020, 01:20 PM
