If the system was installed using French, the machine will probably already have French set as the default language. However, it is good to know what the installer does to set the language, so that later, if the need arises, you can change it.
TOOL The locale command to display the current configuration
The locale command lists a summary of the current configuration of various locale parameters (date format, numbers format, etc.), presented in the form of a group of standard environment variables dedicated to the dynamic modification of these settings.
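A quick illustration (GNU coreutils assumed for date's -d option): locale dumps the current settings, and LC_ALL overrides all of them for a single command:

```shell
# Print the full current locale configuration (one variable per line)
locale

# Override the locale for a single command: LC_ALL takes precedence over
# every LC_* variable and LANG, so this forces the POSIX ("C") locale.
LC_ALL=C date -d 2024-01-01 +%A   # prints the English weekday name, "Monday"
```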
A locale is a group of regional settings. This includes not only the language for text, but also the format for displaying numbers, dates, times, and monetary sums, as well as the alphabetical comparison rules. Although each of these parameters can be specified independently from the others, we generally use a locale, which is a coherent set of values for these parameters corresponding to a “region” in the broadest sense. These locales are usually indicated in the form language-code_COUNTRY-CODE, sometimes with a suffix to specify the character set and encoding to be used. This enables consideration of idiomatic or typographical differences between different regions with a common language.
The locales package includes all the elements required for proper functioning of “localization” of various applications. During installation, this package will ask you to select a set of supported languages. This set can be changed at any time by running dpkg-reconfigure locales as root.
# dpkg-reconfigure locales
The first question invites you to select “locales” to support. Selecting all English locales (meaning those beginning with “en_”) is a reasonable choice. Do not hesitate to also enable other locales if the machine will host foreign users. The list of locales enabled on the system is stored in the /etc/locale.gen file. It is possible to edit this file by hand, but you should run locale-gen after any modifications. It will generate the necessary files for the added locales to work, and remove any obsolete files.
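The format of /etc/locale.gen is simply one locale per line, followed by its character set; commented-out lines are skipped by locale-gen. An illustrative excerpt:

```
# Uncommented lines are generated when locale-gen runs
en_US.UTF-8 UTF-8
en_GB.UTF-8 UTF-8
# fr_FR.UTF-8 UTF-8
```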
The second question, entitled “Default locale for the system environment”, requests a default locale. The recommended choice in the USA is “en_US.UTF-8”. British
English speakers will prefer “en_GB.UTF-8”, and Canadians will prefer either “en_CA.UTF-8” or, for French, “fr_CA.UTF-8”. The /etc/default/locale file will then be modified to store this choice. From there, it is picked up by all user sessions since PAM will inject its content in the LANG environment variable.
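The resulting file is tiny; a typical /etc/default/locale simply assigns LANG (the value shown here is illustrative):

```
LANG=en_US.UTF-8
```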
The /etc/environment file is used by the login, gdm, or even ssh programs to create the correct environment variables.
The /etc/default/locale file works in a similar manner, but contains only the LANG environment variable. Thanks to this split, some PAM users can inherit a complete environment without localization. Indeed, it is generally discouraged to run server programs with localization enabled; on the other hand, localization and regional settings are recommended for programs that open user sessions.
Although the keyboard layout is managed differently in console and graphical mode, the system offers a single configuration interface that works for both: it is based on debconf and is implemented in the keyboard-configuration package. Thus the dpkg-reconfigure keyboard-configuration command can be used at any time to reset the keyboard layout.
$ sudo dpkg-reconfigure keyboard-configuration
The generalization of UTF-8 encoding has been a long-awaited solution to numerous difficulties with interoperability, since it facilitates international exchange and removes the arbitrary limits on characters that can be used in a document. The one drawback is that it had to go through a rather difficult transition phase. Since it could not be completely transparent (that is, it could not happen at the same time all over the world), two conversion operations were required: one on file contents, and the other on filenames. Fortunately, the bulk of this migration has been completed and we discuss it largely for reference.
If Network Manager is not installed, then the installer will configure ifupdown by creating the /etc/network/interfaces file. A line starting with auto gives a list of interfaces to be automatically configured on boot by the networking service. When there are many interfaces, it is good practice to keep the configuration in different files inside /etc/network/interfaces.d/ as described in sidebar BACK TO BASICS Directories ending in .d.
If the computer has an Ethernet card, the IP network that is associated with it must be configured by choosing from one of two methods. The simplest method is dynamic configuration with DHCP, and it requires a DHCP server on the local network. It may indicate a desired hostname, corresponding to the hostname setting in the example below. The DHCP server then sends configuration settings for the appropriate network.
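As a sketch, a DHCP-configured interface in /etc/network/interfaces looks like the following (the interface name enp0s3 is hypothetical; a static configuration is shown commented out for comparison):

```
auto enp0s3
iface enp0s3 inet dhcp

# Static alternative:
# iface enp0s3 inet static
#     address 192.168.0.3/24
#     gateway 192.168.0.1
```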
By default, the kernel assigns generic names such as eth0 (for wired Ethernet) or wlan0 (for WiFi) to the network interfaces. The number in those names is a simple incremental counter representing the order in which they were detected. With modern hardware, that order might (at least in theory) change on each reboot, so the default names are not reliable.
Getting wireless network cards to work can be a bit more challenging. First of all, they often require the installation of proprietary firmwares which are not installed by default. Then wireless networks rely on cryptography to restrict access to authorized users only, this implies storing some secret key in the network configuration. Let’s tackle those topics one by one.
Network Manager knows how to handle various types of connections (DHCP, manual configuration, local network), but only if the configuration is set with the program itself. This is why it will systematically ignore all network interfaces in /etc/network/interfaces and /etc/network/interfaces.d/ that it is not suited to manage. Since Network Manager does not give details when no network connections are shown, the easy way out is to delete from /etc/network/interfaces any configuration for interfaces that should be managed by Network Manager.
The purpose of assigning names to IP numbers is to make them easier for people to remember. In reality, an IP address identifies a network interface associated with a device such as a network card. Since each machine can have several network cards, and several interfaces on each card, one single computer can have several names in the domain name system.
Each machine is, however, identified by a main (or “canonical”) name, stored in the /etc/hostname file and communicated to the Linux kernel by initialization scripts through the hostname command. The current value is available in a virtual filesystem, and you can get it with the cat /proc/sys/kernel/hostname command.
Surprisingly, the domain name is not managed in the same way, but comes from the complete name of the machine, acquired through name resolution. You can change it in the /etc/hosts file; simply write a complete name for the machine there at the beginning of the list of names associated with the address of the machine, as in the following example:
127.0.0.1 localhost
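A fuller sketch, with a hypothetical machine name and address: the fully qualified name comes first on its line, followed by any short aliases:

```
127.0.0.1    localhost
192.168.0.1  arrakis.example.com arrakis
```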
The mechanism for name resolution in Linux is modular and can use various sources of information declared in the /etc/nsswitch.conf file. The entry that involves host name resolution is hosts. By default, it contains “files dns”, which means that the system consults the /etc/hosts file first, then DNS servers. NIS/NIS+ or LDAP servers are other possible sources.
DNS (Domain Name System) is a distributed and hierarchical service mapping names to IP addresses, and vice versa. Specifically, it can turn a human-friendly name such as www.Predator-OS.com into the actual IP address, 213.244.11.247.
To access DNS information, a DNS server must be available to relay requests. Falcot Corp has its own, but an individual user is more likely to use the DNS servers provided by their ISP.
The DNS servers to be used are indicated in /etc/resolv.conf, one per line, with the nameserver keyword preceding an IP address, as in the following example:
nameserver 212.27.32.176
nameserver 212.27.32.177
nameserver 8.8.8.8
Note that the /etc/resolv.conf file may be handled automatically (and overwritten) when the network is managed by NetworkManager or configured via DHCP, or when resolvconf is installed or systemd-resolved(8) is enabled.
If there is no name server on the local network, it is still possible to establish a small table mapping IP addresses and machine hostnames in the /etc/hosts file, usually reserved for local network stations. The syntax of this file as described in hosts(5) is very simple: each line indicates a specific IP address followed by the list of any associated names (the first being “completely qualified”, meaning it includes the domain name).
This file is available even during network outages or when DNS servers are unreachable, but will only really be useful when duplicated on all the machines on the network. The slightest alteration in correspondence will require the file to be updated everywhere. This is why /etc/hosts generally only contains the most important entries. This file will be sufficient for a small network not connected to the Internet, but with 5 machines or more, it is recommended to install a proper DNS server.
The list of users is usually stored in the /etc/passwd file, while the /etc/shadow file stores hashed passwords. Both are text files, in a relatively simple format, which can be read and modified with a text editor. Each user is listed there on a line with several fields separated with a colon (“:”).
Users and groups are used on GNU/Linux for access control—that is, to control access to the system’s files, directories, and peripherals. Linux offers relatively simple/coarse access control mechanisms by default.
Sudoers file
The /etc/sudoers file contains a list of users or user groups with permission to execute a subset of commands while having the privileges of the root user or another specified user. The program may be configured to require a password.
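A minimal sketch of such a file (group and user names are illustrative; always edit the real file with visudo so syntax errors are caught):

```
# Members of group sudo may run any command as any user
%sudo     ALL=(ALL:ALL) ALL

# Hypothetical user "backupop" may run one command as root, without a password
backupop  ALL=(root) NOPASSWD: /usr/bin/rsync
```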
Here is the list of fields in the /etc/passwd file:
o login, for example rhertzog;
o password: this is a password encrypted by a one-way function (crypt), relying on DES, MD5, SHA-256 or SHA-512. The special value “x” indicates that the encrypted password is stored in /etc/shadow;
o uid: unique number identifying each user;
o gid: unique number for the user’s main group;
o GECOS: data field usually containing the user’s full name;
o login directory, assigned to the user for storage of their personal files (the environment variable $HOME generally points here);
o program to execute upon login. This is usually a command interpreter (shell), giving the user free rein. If you specify /bin/false (which does nothing and returns control immediately), the user cannot log in.
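Since the fields are simply colon-separated, they are easy to pull apart with standard tools. A minimal sketch (the entry itself is fabricated for illustration):

```shell
# Split a sample /etc/passwd entry into its interesting fields
line='rhertzog:x:1000:1000:Raphael Hertzog,,,:/home/rhertzog:/bin/bash'
echo "$line" | awk -F: '{print "login=" $1 " uid=" $3 " gid=" $4 " shell=" $7}'
# prints: login=rhertzog uid=1000 gid=1000 shell=/bin/bash
```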
On Linux, the shadow password file is readable only by the superuser and serves to keep encrypted passwords safe from prying eyes and password cracking programs. It also includes some additional account information that wasn’t provided for in the original /etc/passwd format. These days, shadow passwords are the default on all systems.
The shadow file is not a superset of the passwd file, and the passwd file is not generated from it. You must maintain both files or use tools such as useradd that maintain both files on your behalf. Like /etc/passwd, /etc/shadow contains one line for each user. Each line contains nine fields, separated by colons:
• Login name
• Encrypted password
• Date of last password change
• Minimum number of days between password changes
• Maximum number of days between password changes
• Number of days in advance to warn users about password expiration
• Days after password expiration that account is disabled
• Account expiration date
• A field reserved for future use which is currently always empty
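The colon-separated layout can be checked mechanically; the sketch below (with a fabricated hash) shows that even the trailing empty fields count toward the nine:

```shell
# Count the fields of a sample /etc/shadow entry (hash value is fake)
line='rhertzog:$6$examplesalt$fakehashvalue:19000:0:99999:7:::'
echo "$line" | awk -F: '{print NF " fields"}'   # prints: 9 fields
```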
The Hidden and Encrypted Password File: /etc/shadow
The /etc/shadow file contains the following fields:
o login;
o encrypted password;
o several fields managing password expiration.
One can expire passwords using this file or set the time until the account is disabled after the password has expired.
The following commands allow modification of the information stored in specific fields of the user databases: passwd permits a regular user to change their password, which in turn, updates the /etc/shadow file (chpasswd allows administrators to update passwords for a list of users in batch mode); chfn (CHange Full Name), reserved for the super-user (root), modifies the GECOS field. chsh (CHange SHell) allows the user to change their login shell; however, available choices will be limited to those listed in /etc/shells; the administrator, on the other hand, is not bound by this restriction and can set the shell to any program of their choosing.
Finally, the chage (CHange AGE) command allows the administrator to change the password expiration settings (the -l user option will list the current settings). You can also force the expiration of a password using the passwd -e user command, which will require the user to change their password the next time they log in.
Besides these tools, the usermod command allows modification of all the details mentioned above.
Set a password for a new user with
$ sudo passwd newusername
You will be prompted for the actual password.
Some automated systems for adding new users do not require you to set an initial password. Instead, they force the user to set a password on first login. Although this feature is convenient, it’s a giant security hole: anyone who can guess new login names (or look them up in /etc/passwd) can swoop down and hijack accounts before the intended users have had a chance to log in.
You may find yourself needing to “disable an account” (lock out a user), as a disciplinary measure, for the purposes of an investigation, or simply in the event of a prolonged or definitive absence of a user. A disabled account means the user cannot log in or gain access to the machine. The account remains intact on the machine and no files or data are deleted; it is simply inaccessible.
If your site standardizes on the use of sudo, you will have surprisingly little use for actual root passwords. Most of your administrative team will never have occasion to use them.
That fact raises the question of whether a root password is necessary at all. If you decide that it isn’t, you can disable root logins entirely by setting root’s encrypted password to * or to some other fixed, arbitrary string. On Linux, passwd -l “locks” an account by prepending a ! to the encrypted password, with equivalent results. The * and the ! are just conventions; no software checks for them explicitly. Their effect derives from their not being valid password hashes. As a result, attempts to verify root’s password simply fail.
The main effect of locking the root account is that root cannot log in, even on the console. Neither can any user successfully run su, because that requires a root password check as well. However, the root account continues to exist, and all the software that usually runs as root continues to do so. In particular, sudo works normally.
The main advantage of disabling the root account is that you needn’t record and manage root’s password. You’re also eliminating the possibility of the root password being compromised, but that’s more a pleasant side effect than a compelling reason to go passwordless. Rarely used passwords are already at low risk of violation.
It’s particularly helpful to have a real root password on physical computers. Real computers are apt to require rescuing when hardware or configuration problems interfere with sudo or the boot process. In these cases, it’s nice to have the traditional root account available as an emergency fallback.
Debian stable ships with the root account locked, and all administrative access is funneled through sudo or a GUI equivalent. If you prefer, it’s fine to set a root password on Debian stable and then unlock the account with sudo passwd -u root.
Groups are listed in the /etc/group file, a simple textual database in a format similar to that of the /etc/passwd file, with the following fields:
o group name;
o password: the special value “x” indicates that the group password, if any, is stored in /etc/gshadow;
o gid: unique group identification number;
o list of members: list of names of users who are members of the group, separated by commas.
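A couple of illustrative /etc/group entries (names, GIDs, and members are hypothetical):

```
sudo:x:27:rhertzog
lpadmin:x:113:rhertzog,rmas
```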
Creating Accounts
One of the first actions an administrator needs to take when setting up a new machine is to create user accounts. This is typically done with the adduser command, which takes as an argument the username for the new user to be created.
The adduser command asks a few questions before creating the account, but its usage is straightforward. Its configuration file, /etc/adduser.conf, includes all the interesting settings: it can be used to automatically set a quota for each new user by creating a user template, or to change the location of user accounts; the latter is rarely useful, but it comes in handy when you have a large number of users and want to divide their accounts over several disks, for instance. You can also choose a different default shell.
Predator-OS configuration for new user accounts
# /etc/adduser.conf: `adduser' configuration.
# See adduser(8) and adduser.conf(5) for full documentation.

# The DSHELL variable specifies the default login shell on your
# system.
DSHELL=/bin/bash

# The DHOME variable specifies the directory containing users' home
# directories.
DHOME=/home

# If GROUPHOMES is "yes", then the home directories will be created as
# /home/groupname/user.
GROUPHOMES=no

# If LETTERHOMES is "yes", then the created home directories will have
# an extra directory - the first letter of the user name. For example:
# /home/u/user.
LETTERHOMES=no

# The SKEL variable specifies the directory containing "skeletal" user
# files; in other words, files such as a sample .profile that will be
# copied to the new user's home directory when it is created.
SKEL=/etc/skel

# FIRST_SYSTEM_[GU]ID to LAST_SYSTEM_[GU]ID inclusive is the range for UIDs
# for dynamically allocated administrative and system accounts/groups.
# Please note that system software, such as the users allocated by the
# base-passwd package, may assume that UIDs less than 100 are unallocated.
FIRST_SYSTEM_UID=100
LAST_SYSTEM_UID=999
FIRST_SYSTEM_GID=100
LAST_SYSTEM_GID=999

# FIRST_[GU]ID to LAST_[GU]ID inclusive is the range of UIDs of dynamically
# allocated user accounts/groups.
FIRST_UID=1000
LAST_UID=59999
FIRST_GID=1000
LAST_GID=59999

# The USERGROUPS variable can be either "yes" or "no". If "yes" each
# created user will be given their own group to use as a default. If
# "no", each created user will be placed in the group whose gid is
# USERS_GID (see below).
USERGROUPS=yes

# If USERGROUPS is "no", then USERS_GID should be the GID of the group
# `users' (or the equivalent group) on your system.
USERS_GID=100

# If DIR_MODE is set, directories will be created with the specified
# mode. Otherwise the default mode 0755 will be used.
DIR_MODE=0755

# If SETGID_HOME is "yes" home directories for users with their own
# group the setgid bit will be set. This was the default for
# versions << 3.13 of adduser. Because it has some bad side effects we
# no longer do this per default. If you want it nevertheless you can
# still set it here.
SETGID_HOME=no

# If QUOTAUSER is set, a default quota will be set from that user with
# `edquota -p QUOTAUSER newuser'
QUOTAUSER=""

# If SKEL_IGNORE_REGEX is set, adduser will ignore files matching this
# regular expression when creating a new home directory
SKEL_IGNORE_REGEX="dpkg-(old|new|dist|save)"

# Set this if you want the --add_extra_groups option to adduser to add
# new users to other groups.
# This is the list of groups that new non-system users will be added to
# Default:
#EXTRA_GROUPS="dialout cdrom floppy audio video plugdev users"

# If ADD_EXTRA_GROUPS is set to something non-zero, the EXTRA_GROUPS
# option above will be default behavior for adding new, non-system users
#ADD_EXTRA_GROUPS=1

# check user and group names also against this regular expression.
#NAME_REGEX="^[a-z][-a-z0-9_]*\$"

# use extrausers by default
#USE_EXTRAUSERS=1
The creation of an account populates the user’s home directory with the contents of the /etc/skel/ template. This provides the user with a set of standard directories and configuration files.
Creating the home directory and installing startup files
useradd and adduser create new users’ home directories for you, but you will likely want to double-check the permissions and startup files for new accounts. There’s nothing magical about home directories. If you neglected to include a home directory when setting up a new user, you can create it with a simple mkdir. You need to set ownerships and permissions on the new directory as well, but this is most efficiently done after you’ve installed any local startup files.
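The manual steps can be sketched as follows; to stay runnable without root privileges, this example uses a temporary directory in place of /home and skips the chown step (which requires root):

```shell
# Create the directory, copy startup files, and restrict permissions
base=$(mktemp -d)
mkdir -p "$base/newuser"
cp -a /etc/skel/. "$base/newuser"/ 2>/dev/null || true   # startup files, if /etc/skel exists
chmod 700 "$base/newuser"                                # owner-only access
stat -c '%a' "$base/newuser"                             # prints 700
```

On a real system you would use /home/newuser and finish with chown -R newuser:newuser on the new directory.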
Startup files traditionally begin with a dot and end with the letters rc, short for “run command,” a relic of the CTSS operating system. The initial dot causes ls to hide these “uninteresting” files from directory listings unless the -a option is used.
We recommend that you include default startup files for each shell that is popular on your systems so that users continue to have a reasonable default environment even if they change shells. Table 8.2 lists a variety of common startup files.
Target      Filename          Typical uses
all shells  .login_conf       Sets user-specific login defaults (FreeBSD)
sh          .profile          Sets search path, terminal type, and environment
bash        .bashrc           Sets the terminal type (if needed); sets biff and mesg switches
            .bash_profile     Sets up environment variables; sets command aliases; sets the
                              search path; sets the umask value to control permissions; sets
                              CDPATH for filename searches; sets the PS1 (prompt) and
                              HISTCONTROL variables
csh/tcsh    .login            Read by “login” instances of csh
            .cshrc            Read by all instances of csh
vi/vim      .vimrc/.viminfo   Sets vi/vim editor options
emacs       .emacs            Sets emacs editor options and key bindings
git         .gitconfig        Sets user, editor, color, and alias options for Git
GNOME       .gconf            GNOME user configuration via gconf
            .gconfpath        Path for additional user configuration via gconf
KDE         .kde/             Directory of configuration files
If you prefer to not allow guest access to your computer, you can disable the Guest Session feature.
To do so, press
Ctrl + Alt + T to open a terminal window, and then run this command (it’s one long command, even if it may be shown wrapped on the screen - copy and paste to get it right):
sudo sh -c 'printf "[SeatDefaults]\nallow-guest=false\n" >/usr/share/lightdm/lightdm.conf.d/no-guest.conf'
The command creates a small configuration file. To re-enable Guest Session, simply remove that file:
sudo rm /usr/share/lightdm/lightdm.conf.d/no-guest.conf
PAM: Pluggable Authentication Modules
User accounts are traditionally secured by passwords stored (in encrypted form) in the /etc/shadow or /etc/master.passwd file or an equivalent network database. Many programs may need to validate accounts, including login, sudo, su, and any program that accepts logins on a GUI workstation.
These programs really shouldn’t have hard-coded expectations about how passwords are to be encrypted or verified. Ideally, they shouldn’t even assume that passwords are in use at all. What if you want to use biometric identification, a network identity system, or some kind of two-factor authentication? Pluggable Authentication Modules to the rescue!
PAM is a wrapper for a variety of method-specific authentication libraries. Administrators specify the authentication methods they want the system to use, along with the appropriate contexts for each one. Programs that require user authentication simply call the PAM system rather than implement their own forms of authentication. PAM in turn calls the authentication library specified by the system administrator.
Strictly speaking, PAM is an authentication technology, not an access control technology. That is, instead of addressing the question “Does user X have permission to perform operation Y?”, it helps answer the precursor question, “How do I know this is really user X?”
PAM is an important component of the access control chain on most systems, and PAM configuration is a common administrative task.
Like PAM, Kerberos deals with authentication rather than access control per se. But whereas PAM is an authentication framework, Kerberos is a specific authentication method. At sites that use Kerberos, PAM and Kerberos generally work together, PAM being the wrapper and Kerberos the actual implementation.
Kerberos uses a trusted third party (a server) to perform authentication for an entire network. You don’t authenticate yourself to the machine you are using, but provide your credentials to the Kerberos service. Kerberos then issues cryptographic credentials that you can present to other services as evidence of your identity.
Passwords must be complex enough that they cannot easily be guessed from, e.g., personal information, or cracked using methods like social engineering or brute-force attacks.
By default, Arch stores the hashed user passwords in the root-only-readable /etc/shadow file, separated from the other user parameters stored in the world-readable /etc/passwd file; see Users and groups#User database. See also #Restricting root.
Passwords are set with the passwd command, which stretches them with the crypt function and then saves them in /etc/shadow. See also SHA password hashes. The passwords are also salted in order to defend them against rainbow table attacks.
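To see what such a salted hash looks like, openssl can produce one outside of passwd (OpenSSL 1.1.1 or later is assumed; the salt and password here are purely illustrative):

```shell
# Generate a SHA-512 crypt hash with a fixed salt;
# the output begins with "$6$examplesalt$"
openssl passwd -6 -salt examplesalt 'S3cretPw'
```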
pam_pwquality provides protection against dictionary attacks and helps configure a password policy that can be enforced throughout the system. It is based on pam_cracklib.
Predator-OS policy is in the following path:
/etc/pam.d/passwd
/etc/security/limits.conf
On systems with many or untrusted users, it is important to limit the number of processes each can run at once, thereby preventing fork bombs and other denial-of-service attacks. The /etc/security/limits.conf file determines how many processes each user or group can have open, for example:
soft nproc 0
hard nproc 0
Once sudo is properly configured, full root access can be heavily restricted or denied without losing much usability. To disable root logins while still allowing the use of sudo, you can lock the root account (for example with passwd -l, as described above).
Command interpreters (or shells) can be a user’s first point of contact with the computer, and they must therefore be rather friendly. Most of them use initialization scripts that allow configuration of their behavior (automatic completion, prompt text, etc.).
bash, the standard shell, uses the /etc/bash.bashrc initialization script for “interactive” shells, and /etc/profile for “login” shells.
In simple terms, a login shell is invoked when you log in to the console, either locally or remotely via ssh, or when you run an explicit bash --login command. Regardless of whether it is a login shell or not, a shell can be interactive (in an xterm-type terminal for instance) or non-interactive (when executing a script).
For bash, it is useful to install and activate “automatic completion”. The package bash-completion contains these completions for most common programs and is usually enabled if the user’s .bashrc configuration file was copied from /etc/skel/.bashrc. Otherwise it can be enabled via /etc/bash.bashrc (simply uncomment a few lines) or /etc/profile.
Many command interpreters provide a completion feature, which allows the shell to automatically complete a partially typed command name or argument when the user hits the Tab key. This lets users work more efficiently and be less error-prone.
Bash completion.
Bash is an sh-compatible command language interpreter that executes commands read from standard input or from a file. Bash can run most sh scripts without modification. bash-completion is a collection of shell functions that take advantage of the programmable completion feature of bash. This section shows how to install and enable Bash auto-completion on Debian stable.
1. Install bash-completion package on Debian stable by running:
$ sudo apt install bash-completion
2. Ensure the completion file is sourced by your shell startup files (this snippet is usually already present in /etc/bash.bashrc or ~/.bashrc):

if [ -f /usr/share/bash-completion/bash_completion ]; then
    source /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
    source /etc/bash_completion
fi
Environment variables allow storage of global settings for the shell or various other programs. They are contextual (each process has its own set of environment variables) but inheritable. This last characteristic offers the possibility for a login shell to declare variables which will be passed down to all programs it executes. Setting default environment variables is an important element of shell configuration. Leaving aside the variables specific to a shell, it is preferable to place system-wide variables in the /etc/environment file.
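The inheritance can be demonstrated in one line: an exported variable is visible in a child shell, as in this small sketch:

```shell
# Export a variable, then read it back from a child process
export GREETING="hello from the parent"
sh -c 'echo "child sees: $GREETING"'   # prints: child sees: hello from the parent
```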
Printer configuration used to cause a great many headaches for administrators and users alike. These headaches are now mostly a thing of the past, thanks to CUPS, the free print server using IPP, the Internet Printing Protocol.
The command apt install cups will install CUPS and the filters. It will also install the recommended printer-driver-gutenprint package to provide a driver for a wide range of printers, but, unless the printer is being operated in driverless mode, an alternative printer driver might be needed for the particular device.
The printing system is administered easily through a web interface accessible at the local address http://localhost:631/. Members of the lpadmin group can add and remove USB and network printers and administer most aspects of their behavior. Similar administration tasks can also be carried out via the graphical interface provided by a desktop environment or the system-config-printer graphical interface.
It is probably already functional, but it is always good to know how to configure and install the bootloader in case it disappears from the Master Boot Record. This can occur after installation of another operating system, such as Windows. The following information can also help you to modify the bootloader configuration if needed.
Legacy BIOS
Traditional BIOS assumes that the boot device starts with a record called the MBR (Master Boot Record). The MBR includes both a first-stage boot loader (aka “boot block”) and a primitive disk partitioning table. The amount of space available for the boot loader is so small (less than 512 bytes) that it is not able to do much other than load and run a second-stage boot loader.
Neither the boot block nor the BIOS is sophisticated enough to read any type of standard filesystem, so the second-stage boot loader must be kept somewhere easy to find. In one typical scenario, the boot block reads the partitioning information from the MBR and identifies the disk partition marked as “active”. It then reads and executes the second-stage boot loader from the beginning of that partition. This scheme is known as a volume boot record.
Alternatively, the second-stage boot loader can live in the dead zone that lies between the MBR and the beginning of the first disk partition. For historical reasons, the first partition does not start until the 64th disk block, so this zone normally contains at least 32KB of storage: still not a lot, but enough to store a filesystem driver. This storage scheme is commonly used by the GRUB boot loader; see page 35.
To effect a successful boot, all components of the boot chain must be properly installed and compatible with one another. The MBR boot block is OS-agnostic, but because it assumes a particular location for the second stage, there may be multiple versions that can be installed. The second-stage loader is generally knowledgeable about operating systems and filesystems (it may support several of each), and usually has configuration options of its own.
UEFI
The UEFI specification includes a modern disk partitioning scheme known as GPT (GUID Partition Table, where GUID stands for “globally unique identifier”). UEFI also understands FAT (File Allocation Table) filesystems, a simple but functional layout that originated in MS-DOS. These features combine to define the concept of an EFI System Partition (ESP). At boot time, the firmware consults the GPT partition table to identify the ESP. It then reads the configured target application directly from a file in the ESP and executes it.
Because the ESP is just a generic FAT filesystem, it can be mounted, read, written, and maintained by any operating system. No “mystery meat” boot blocks are required anywhere on the disk. In fact, no boot loader at all is technically required. The UEFI boot target can be a UNIX or Linux kernel that has been configured for direct UEFI loading, thus effecting a loader-less bootstrap. In practice, though, most systems still use a boot loader, partly because that makes it easier to maintain compatibility with legacy BIOSes. UEFI saves the pathname to load from the ESP as a configuration parameter. With no configuration, it looks for a standard path, usually /efi/boot/bootx64.efi on modern Intel systems. A more typical path on a configured system (this one for Debian and the GRUB boot loader) would be /efi/debian/grubx64.efi. Other distributions follow a similar convention.
Because UEFI has a formal API, you can examine and modify UEFI variables (including boot menu entries) on a running system. For example, efibootmgr -v shows a summary of the boot configuration:
$ efibootmgr -v
GRUB (GRand Unified Bootloader) is more recent. It is not necessary to invoke it after each update of the kernel; GRUB knows how to read the filesystems and find the position of the kernel on the disk by itself. To install it on the MBR of the first disk, simply type grub-install /dev/sda. This will overwrite the MBR, so be careful not to write to the wrong device. While it is also possible to install GRUB into a partition boot record, beware that this is usually a mistake: grub-install /dev/sda1 does not have the same meaning as grub-install /dev/sda.
GRUB 2 configuration is stored in /boot/grub/grub.cfg,
but this file is generated from others. Be careful not to modify it by hand, since such local modifications will be lost the next time update-grub is run (which may occur upon update of various packages). The most common modifications of the /boot/grub/grub.cfg file (to add command line parameters to the kernel or change the duration that the menu is displayed, for example) are made through the variables in /etc/default/grub. To add entries to the menu, you can either create a /boot/grub/custom.cfg file or modify the /etc/grub.d/40_custom file. For more complex configurations, you can modify other files in /etc/grub.d, or add to them; these scripts should return configuration snippets, possibly by making use of external programs. These scripts are the ones that will update the list of kernels to boot: 10_linux takes into consideration the installed Linux kernels; 20_linux_xen takes into account Xen virtual systems; and 30_os-prober adds other existing operating systems (Windows, OS X, Hurd), kernel images, and BIOS/EFI access options to the menu.
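As an illustration of the custom.cfg route, a hypothetical extra menu entry might look like the following (the device name, partition, and kernel paths are invented for the example, not taken from a real system):

```
menuentry "My rescue entry (example)" {
    set root=(hd0,1)
    linux /vmlinuz root=/dev/sda1 single
    initrd /initrd.img
}
```

Because custom.cfg is read at boot time, entries added this way appear in the menu without regenerating grub.cfg; snippets added via /etc/grub.d/40_custom, by contrast, only take effect after the next update-grub run.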
The config file is called grub.cfg, and it’s usually kept in /boot/grub (/boot/grub2 in Red Hat and CentOS) along with a selection of other resources and code modules that GRUB might need to access at boot time. Changing the boot configuration is a simple matter of updating the grub.cfg file. Although you can create the grub.cfg file yourself, it’s more common to generate it with the grub-mkconfig utility, which is called grub2-mkconfig on Red Hat and CentOS and wrapped as update-grub on Debian. In fact, most distributions assume that grub.cfg can be regenerated at will, and they do so automatically after updates. If you don’t take steps to prevent this, your handcrafted grub.cfg file will get clobbered. As with all things Linux, distributions configure grub-mkconfig in a variety of ways. Most commonly, the configuration is specified in /etc/default/grub in the form of sh variable assignments.
Common GRUB configuration options from /etc/default/grub

Shell variable name       Contents or function
GRUB_BACKGROUND           Background image
GRUB_CMDLINE_LINUX        Kernel parameters to add to menu entries for Linux
GRUB_DEFAULT              Number or title of the default menu entry
GRUB_DISABLE_RECOVERY     Prevents the generation of recovery mode entries
GRUB_PRELOAD_MODULES      List of GRUB modules to be loaded as early as possible
GRUB_TIMEOUT              Seconds to display the boot menu before autoboot
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT="0"
GRUB_TIMEOUT_STYLE="menu"
GRUB_TIMEOUT="15"
GRUB_DISTRIBUTOR="`lsb_release -i -s 2> /dev/null || echo Debian`"
GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off loglevel=0 nowatchdog intel_pstate=false quiet splash"
GRUB_CMDLINE_LINUX="find_preseed=/preseed.cfg auto noprompt priority=critical"
GRUB_DISABLE_OS_PROBER="false"
# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL="console"
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
GRUB_GFXMODE="1024x768x24"
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID="true"
# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"
# Uncomment to get a beep at grub start
GRUB_INIT_TUNE="480 440 1"
#GRUB_HIDDEN_TIMEOUT="0"
GRUB_SAVEDEFAULT="false"
export GRUB_COLOR_NORMAL="white/black"
export GRUB_COLOR_HIGHLIGHT="yellow/black"
export GRUB_MENU_PICTURE="/usr/share/backgrounds/grub.PNG"
The background image must be a .png, .tga, .jpg, or .jpeg file.
After editing /etc/default/grub, run update-grub or grub2-mkconfig to translate your configuration into a proper grub.cfg file.
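As a concrete sketch of that workflow (the file contents here are a stripped-down stand-in, and the commands operate on a temporary copy rather than the real /etc/default/grub):

```shell
# Work on a throwaway copy; on a real system you would edit /etc/default/grub.
tmp=$(mktemp -d)
cat > "$tmp/grub" <<'EOF'
GRUB_DEFAULT="0"
GRUB_TIMEOUT="15"
EOF

# Raise the boot menu timeout from 15 to 30 seconds
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT="30"/' "$tmp/grub"
grep '^GRUB_TIMEOUT=' "$tmp/grub"    # GRUB_TIMEOUT="30"

# On the real file, regenerate grub.cfg afterwards (requires root):
#   update-grub                                  # Debian
#   grub2-mkconfig -o /boot/grub2/grub.cfg       # Red Hat / CentOS
```

The regeneration step is what actually rewrites grub.cfg; editing /etc/default/grub alone changes nothing at boot time.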
Booting a traditional BIOS system (legacy or UEFI-CSM) and booting a UEFI system with GRUB are quite different procedures. Fortunately, the user does not need to know the differences, because Linux provides different packages for each purpose and the installer automatically takes care of choosing the right one(s). The grub-pc package is chosen for legacy systems, where GRUB is installed into the MBR, while UEFI systems require grub-efi-arch, where GRUB is installed into the EFI System Partition (ESP). The latter requires a GPT partition table as well as an EFI partition.

To switch an existing system (one that supports UEFI) from legacy to UEFI boot mode, you not only have to switch the GRUB packages on the system, but also adjust the partition table and create an EFI partition (probably including resizing existing partitions to create the necessary free space). It is therefore quite an elaborate process and we cannot cover it here. Fortunately, there are some manuals by bloggers describing the necessary procedures.
The timezone, configured during initial installation, is a configuration item for the tzdata package. To modify it, use the dpkg-reconfigure tzdata command, which allows you to choose the timezone to be used in an interactive manner. Its configuration is stored in the /etc/timezone file. Additionally, /etc/localtime becomes a symbolic link to the corresponding file in /usr/share/zoneinfo/; that file contains the rules governing the dates when daylight saving time (DST) is active, for countries that use it.
When you need to temporarily change the timezone, use the TZ environment variable, which takes priority over the configured system default:
$ date
Sat Sep 2 22:29:48 CEST 2023
$ TZ="Pacific/Honolulu" date
Sat 02 Sep 2023 10:31:01 AM HST
Since workstations are regularly rebooted (even if only to save energy), synchronizing them by NTP at boot is enough. To do so, simply install the ntpdate package. You can change the NTP server used, if needed, by modifying the /etc/default/ntpdate file.
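As an illustration, /etc/default/ntpdate typically contains sh variable assignments along these lines (treat the exact server list as an example to adapt, not a recommendation):

```
# Use servers listed in /etc/ntp.conf if that file exists?
NTPDATE_USE_NTP_CONF=yes
# Fallback list of NTP servers, space separated
NTPSERVERS="0.debian.pool.ntp.org 1.debian.pool.ntp.org"
# Extra options to pass to ntpdate
NTPOPTIONS=""
```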
Log files can grow fast, and it is necessary to archive them. The most common scheme is a rotating archive: the log file is regularly archived, and only the latest X archives are retained. logrotate, the program responsible for these rotations, follows directives given in the /etc/logrotate.conf file and all of the files in the /etc/logrotate.d/ directory. The administrator may modify these files, if they wish to adapt the log rotation policy. The logrotate(1) man page describes all of the options available in these configuration files. You may want to increase the number of files retained in log rotation, or move the log files to a specific directory dedicated to archiving them rather than delete them. You could also send them by e-mail to archive them elsewhere.
Source of /etc/logrotate.conf:

# see "man logrotate" for details
# global options do not affect preceding include directives

# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
#dateext

# uncomment this if you want your log files compressed
#compress

# packages drop log rotation information into this directory
include /etc/logrotate.d

# system-specific logs may also be configured here.
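Packages typically drop one snippet per service into /etc/logrotate.d/. A hypothetical snippet for an application writing to /var/log/myapp.log (the path and policy are invented for the example) might look like:

```
/var/log/myapp.log {
    monthly
    rotate 12
    compress
    missingok
    notifempty
}
```

This keeps twelve monthly compressed archives, skips the rotation silently if the file is missing, and leaves empty files alone.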
System daemons, the kernel, and custom applications all emit operational data that is logged and eventually ends up on your finite sized disks. This data has a limited useful life and may need to be summarized, filtered, searched, analyzed, compressed, and archived before it is eventually discarded. Access and audit logs may need to be managed closely according to regulatory retention rules or site security policies.
A log message is usually a line of text with a few properties attached, including a time stamp, the type and severity of the event, and a process name and ID (PID). The message itself can range from an innocuous note about a new process starting up to a critical error condition or stack trace. It’s the responsibility of system administrators to glean useful, actionable information from this ongoing torrent of messages. This task is known generically as log management, and it can be divided into a few major subtasks:
• Collecting logs from a variety of sources
• Providing a structured interface for querying, analyzing, filtering, and monitoring messages
• Managing the retention and expiration of messages so that information is kept as long as it is potentially useful or legally required, but not indefinitely
UNIX has historically managed logs through an integrated but somewhat rudimentary system, known as syslog, that presents applications with a standardized interface for submitting log messages. Syslog sorts messages and saves them to files or forwards them to another host over the network. Unfortunately, syslog tackles only the first of the logging chores listed above (message collection), and its stock configuration differs widely among operating systems.
Perhaps because of syslog’s shortcomings, many applications, network daemons, startup scripts, and other logging vigilantes bypass syslog entirely and write to their own ad hoc log files. This lawlessness has resulted in a complement of logs that varies significantly among flavors of UNIX and even among Linux distributions. Linux’s systemd journal represents a second attempt to bring sanity to the logging madness. The journal collects messages, stores them in an indexed and compressed binary format, and furnishes a command-line interface for viewing and filtering logs. The journal can stand alone, or it can coexist with the syslog daemon with varying degrees of integration, depending on the configuration.
A variety of third party tools (both proprietary and open source) address the more complex problem of curating messages that originate from a large network of systems. These tools feature such aids as graphical interfaces, query languages, data visualization, alerting, and automated anomaly detection. They can scale to handle message volumes on the order of terabytes per day. You can subscribe to these products as a cloud service or host them yourself on a private network.
Exhibit A on the next page depicts the architecture of a site that uses all the log management services mentioned above. Administrators and other interested parties can run a GUI against the centralized log cluster to review log messages from systems across the network. Administrators can also log in to individual nodes and access messages through the systemd journal or the plain text files written by syslog. When debugging problems and errors, experienced administrators turn to the logs sooner rather than later. Log files often contain important hints that point toward the source of vexing configuration errors, software bugs, and security issues. Logs are the first place you should look when a daemon crashes or refuses to start, or when a chronic error plagues a system that is trying to boot.
The importance of having a well-defined, site-wide logging strategy has grown along with the adoption of formal IT standards such as PCI DSS, COBIT, and ISO 27001, as well as with the maturing of regulations for individual industries. Today, these external standards may require you to maintain a centralized, hardened, enterprise-wide repository for log activity, with time stamps validated by NTP and with a strictly defined retention schedule.1 However, even sites without regulatory or compliance requirements can benefit from centralized logging.
UNIX is often criticized for being inconsistent, and indeed it is. Just take a look at a directory of log files and you’re sure to find some with names like maillog, some like cron.log, and some that use various distribution- and daemon-specific naming conventions. By default, most of these files are found in /var/log, but some renegade applications write their log files elsewhere on the filesystem.
Table 10.1 compiles information about some of the more common log files on our example systems. The table lists the following:
• The log files to archive, summarize, or truncate
• The program that creates each
• An indication of how each filename is specified
• The frequency of cleanup that we consider reasonable
• The systems (among our examples) that use the log file
• A description of the file’s contents
In accordance with its mission to replace all other Linux subsystems, systemd includes a logging daemon called systemd-journald. It duplicates most of syslog’s functions but can also run peacefully in tandem with syslog, depending on how you or the system have configured it. If you’re leery of switching to systemd because syslog has always “just worked” for you, spend some time getting to know the journal. After a little practice, you may be pleasantly surprised.
Unlike syslog, which typically saves log messages to plain text files, the systemd journal stores messages in a binary format. All message attributes are indexed automatically, which makes the log easier and faster to search. As discussed above, you can use the journalctl command to review messages stored in the journal. The journal collects and indexes messages from several sources:
• The /dev/log socket, to harvest messages from software that submits messages according to syslog conventions
• The device file /dev/kmsg, to collect messages from the Linux kernel. The systemd journal daemon replaces the traditional klogd process that formerly listened on this channel and forwarded the kernel messages to syslog.
• The UNIX socket /run/systemd/journal/stdout, to service software that writes log messages to standard output
• The UNIX socket /run/systemd/journal/socket, to service software that submits messages through the systemd journal API
• Audit messages from the kernel’s auditd daemon
Intrepid administrators can use the systemd-journal-remote utility (and its relatives, systemd-journal-gatewayd and systemd-journal-upload) to stream serialized journal messages over the network to a remote journal. Unfortunately, this feature does not come preinstalled on vanilla distributions. As of this writing, packages are available for Debian but not for Red Hat or CentOS. We expect this lapse to be rectified soon; in the meantime, we recommend sticking with syslog if you need to forward log messages among systems.
The default journal configuration file is /etc/systemd/journald.conf; however, this file is not intended to be edited directly. Instead, add your customized configurations to the /etc/systemd/journald.conf.d directory. Any files placed there with a .conf extension are automatically incorporated into the configuration. To set your own options, create a new .conf file in this directory and include the options you want. The default journald.conf includes a commented-out version of every possible option, along with each option’s default value, so you can see at a glance which options are available. They include the maximum size of the journal, the retention period for messages, and various rate-limiting settings.

/etc/systemd/journald.conf:
# This file is part of systemd.
# systemd is free software; you can redistribute it and/or modify it
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
# See journald.conf(5) for details.
[Journal]
Storage=none
Compress=no
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=10000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#SystemMaxFiles=100
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
#ReadKMsg=yes
Audit=no
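Following the drop-in convention described above, a hypothetical /etc/systemd/journald.conf.d/limits.conf capping disk usage and retention (the filename and values are illustrative) could read:

```
[Journal]
SystemMaxUse=500M
MaxRetentionSec=1month
```

Both options appear, commented out, in the default file above; a drop-in only needs to state the values you want to override.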
Journal logs help you debug your system, but journald can write a lot to your storage, and over time the logs become huge. They are then rotated and truncated; if that is not the behavior you want, you can disable journal storage entirely by editing /etc/systemd/journald.conf and setting:
Storage=none
Capturing the log messages produced by the kernel has always been something of a challenge. It became even more important with the advent of virtual and cloud-based systems, since it isn’t possible to simply stand in front of these systems’ consoles and watch what happens. Frequently, crucial diagnostic information was lost to the ether.
systemd alleviates this problem with a universal logging framework that includes all kernel and service messages from early boot to final shutdown. This facility, called the journal, is managed by the journald daemon.
System messages captured by journald are stored in the /run directory. rsyslog can process these messages and store them in traditional log files or forward them to a remote syslog server. You can also access the logs directly with the journalctl command.
Without arguments, journalctl displays all log entries (oldest first):
$ journalctl
You can configure journald to retain messages from prior boots. To do this, edit /etc/systemd/journald.conf and configure the Storage attribute:
[Journal]
Storage=persistent
Once you’ve configured journald, you can obtain a list of prior boots with:
$ journalctl --list-boots
There are many different log files that all serve different purposes. When trying to find a log about something, you should start by identifying the most relevant file. Below is a list of common log file locations.
System logs deal with exactly that - the Debian stable system - as opposed to extra applications added by the user. These logs may contain information about authorizations, system daemons and system messages.
Location: /var/log/auth.log
Keeps track of authorization systems, such as password prompts, the sudo command and remote logins.
Location: /var/log/daemon.log
Daemons are programs that run in the background, usually without user interaction. For example, display server, SSH sessions, printing services, bluetooth, and more.
Location: /var/log/debug
Provides debugging information from the Debian stable system and applications.
Location: /var/log/kern.log
Logs from the Linux kernel.
Location: /var/log/syslog
Contains more information about your system. If you can’t find anything in the other logs, it’s probably here.
Some applications also create logs in /var/log. Below are some examples.
Location: /var/log/apache2/ (subdirectory)
Apache creates several log files in the /var/log/apache2/ subdirectory. The access.log file records all requests made to the server to access files. error.log records all errors thrown by the server.
Location: /var/log/Xorg.0.log
The X11 server creates a separate log file for each of your displays. Display numbers start at zero, so your first display (display 0) will log to Xorg.0.log. The next display (display 1) would log to Xorg.1.log, and so on.
Not all log files are designed to be read by humans. Some were made to be parsed by applications. Below are some examples.
Location: /var/log/faillog
Contains info about login failures. You can view it with the faillog command.
Location: /var/log/lastlog
Contains info about last logins. You can view it with the lastlog command.
Location: /var/log/wtmp
Contains records of all logins and logouts. You can view it with the last command.
Syslog severity levels (descending severity)

Level     Approximate meaning
emerg     Panic situations; system is unusable
alert     Urgent situations; immediate action required
crit      Critical conditions
err       Other error conditions
warning   Warning messages
notice    Things that might merit investigation
info      Informational messages
debug     For debugging only
Making sense out of logs is not an easy task. Log management solutions gather and accept data from multiple sources. Those sources can have different log event structures and provide different granularity. They may not follow common logging best practices and can be hard to extract meaning from.
Because of that, it is important to follow good practices when we develop an application. One of those is using meaningful log levels, which allow whoever reads the logs to gauge the importance of each message, whether they see it in a text file or in one of those awesome observability tools out there.
A log level or log severity is a piece of information telling how important a given log message is. It is a simple, yet very powerful way of distinguishing log events from each other. If the log levels are used properly in your application, all you need is to look at the severity first. It will tell you if you can continue sleeping during the on-call night or you need to jump out of bed right away and hit another personal best in running between your bedroom and laptop in the living room.
You can think of the log levels as a way to filter the critical information about your system state and the one that is purely informative. The log levels can help to reduce the information noise and alert fatigue.
Before continuing with the description of the log levels themselves, it is worth knowing where they come from. It all started with syslog. In the 1980s, Sendmail, a mailer daemon project developed by Eric Allman, required a logging solution, and syslog was born. It was rapidly adopted by other applications in the Unix-like ecosystem and became a standard.
The console log level can also be changed by the klogd program, or by writing the specified level to the /proc/sys/kernel/printk file.
The kernel log levels are:
0 (KERN_EMERG)
The system is unusable.
1 (KERN_ALERT)
Actions that must be taken care of immediately.
2 (KERN_CRIT)
Critical conditions.
3 (KERN_ERR)
Non-critical error conditions.
4 (KERN_WARNING)
Warning conditions that should be taken care of.
5 (KERN_NOTICE)
Normal, but significant events.
6 (KERN_INFO)
Informational messages that require no action.
7 (KERN_DEBUG)
Kernel debugging messages, output by the kernel if the developer enabled debugging at compile time.
By default, the console log level of Predator-OS is 0 (set via the loglevel=0 kernel command line parameter shown earlier).
printk() is one of the most widely known functions in the Linux kernel. It’s the standard tool we have for printing messages and usually the most basic way of tracing and debugging. If you’re familiar with printf(3) you can tell printk() is based on it, although it has some functional differences:
printk() messages can specify a log level.
the format string, while largely compatible with C99, does not follow the exact same specification. It has some extensions and a few limitations (no %n or floating point conversion specifiers). See How to get printk format specifiers right.
A typical call looks like this:

printk(KERN_INFO "Message: %s\n", arg);

where KERN_INFO is the log level (note that it’s concatenated to the format string; the log level is not a separate argument). The available log levels are the KERN_* constants listed above.
The log level specifies the importance of a message. The kernel decides whether to show the message immediately (printing it to the current console) depending on its log level and the current console_loglevel (a kernel variable). If the message priority is higher (lower log level value) than the console_loglevel the message will be printed to the console.
If the log level is omitted, the message is printed with KERN_DEFAULT level. You can check the current console_loglevel with:
$ cat /proc/sys/kernel/printk
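The comparison can be sketched numerically (values assumed for illustration, not read from a live kernel): a message reaches the console only when its level number is strictly lower than console_loglevel.

```shell
console_loglevel=7   # e.g. the first field reported by /proc/sys/kernel/printk

# KERN_ERR=3, KERN_INFO=6, KERN_DEBUG=7
for msg_level in 3 6 7; do
  if [ "$msg_level" -lt "$console_loglevel" ]; then
    echo "level $msg_level: printed to console"
  else
    echo "level $msg_level: suppressed"
  fi
done
```

With console_loglevel at 7, KERN_ERR and KERN_INFO messages appear but KERN_DEBUG messages do not, which matches the rule described above: lower numbers mean higher priority.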
The locate command can find the location of a file when you only know part of the name. It returns a result almost instantaneously, since it consults a database that stores the location of all the files on the system; this database is updated daily by the updatedb command. There are multiple implementations of the locate command, and Debian picked mlocate for its standard system. If you want to consider an alternative, you can try plocate, which provides the same command line options and can be considered a drop-in replacement. locate is smart enough to only return files which are accessible to the user running the command, even though it uses a database that knows about all files on the system (since its updatedb implementation runs with root rights). For extra safety, the administrator can use PRUNEPATHS in /etc/updatedb.conf to exclude some directories from being indexed.
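A hypothetical exclusion in /etc/updatedb.conf might look like this (the directory list is invented for the example):

```
PRUNEPATHS="/tmp /var/spool /srv/private"
```

Directories listed there, and everything beneath them, are simply never indexed, so they cannot leak into locate results.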
The rsyslogd daemon is responsible for collecting service messages coming from applications and the kernel, then dispatching them into log files (usually stored in the /var/log/ directory). It obeys the /etc/rsyslog.conf configuration file.
# /etc/rsyslog.conf configuration file for rsyslog #
# For more information install rsyslog-doc and see
# /usr/share/doc/rsyslog-doc/html/configuration/index.html
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
#module(load="immark")  # provides --MARK-- message capability

# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")

# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")

# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# Filter duplicated messages
$RepeatedMsgReduction on
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
Each log message is associated with an application subsystem (called “facility” in the documentation):

• auth and authpriv: for authentication;
• cron: comes from task scheduling services, cron and atd;
• daemon: affects a daemon without any special classification (DNS, NTP, etc.);
• ftp: concerns the FTP server;
• kern: message coming from the kernel;
• lpr: comes from the printing subsystem;
• mail: comes from the e-mail subsystem;
• news: Usenet subsystem message (especially from an NNTP — Network News Transfer Protocol — server that manages newsgroups);
• syslog: messages from the syslogd server, itself;
• user: user messages (generic);
• uucp: messages from the UUCP server (Unix to Unix Copy Program, an old protocol notably used to distribute e-mail messages);
• local0 to local7: reserved for local use.

Each message is also associated with a priority level. Here is the list in decreasing order:

• emerg: “Help!” There is an emergency, the system is probably unusable.
• alert: hurry up, any delay can be dangerous, action must be taken immediately;
• crit: conditions are critical;
• err: error;
• warn: warning (potential error);
• notice: conditions are normal, but the message is important;
• info: informative message;
• debug: debugging message.
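In the traditional selector syntax, an rsyslog rule combines facility.priority with an action (a file, a console, a remote host, and so on). A couple of hypothetical rules, using the facilities and priorities above, could be:

```
# kernel messages of priority crit and above go to the console
kern.crit        /dev/console
# everything from the mail subsystem, err and above, to its own file
mail.err         /var/log/mail.err
```

A selector matches the named priority and everything more severe, so mail.err also catches crit, alert, and emerg messages from the mail facility.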
Xfce is a simple and lightweight graphical desktop, which is a perfect match for computers with limited resources. It can be installed with apt install xfce4 (or task-xfce-desktop). Like GNOME, Xfce is based on the GTK+ toolkit, and several components are common across both desktops.
Unlike GNOME and KDE Plasma, Xfce does not aim to become a vast project. Beyond the basic components of a modern desktop (file manager, window manager, session manager, a panel for application launchers and so on), it only provides a few specific applications: a terminal, a calendar (orage), an image viewer, a CD/DVD burning tool, a media player (parole), sound volume control and a text editor (mousepad).
Modern desktop environments and many window managers provide menus listing the available applications for the user. In order to keep menus up-to-date in relation to the actual set of available applications, each package usually provides
a .desktop file in /usr/share/applications:
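A minimal sketch of such a file (the application name, command, and icon are hypothetical) could contain:

```
[Desktop Entry]
Type=Application
Name=My Tool
Comment=Hypothetical example application
Exec=mytool
Icon=mytool
Terminal=false
Categories=Utility;
```

The Categories key is what lets the menu system file the entry under the right submenu.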
AppArmor is a Mandatory Access Control (MAC) system built on Linux’s LSM (Linux Security Modules) interface. In practice, the kernel queries AppArmor before each system call to know whether the process is authorized to do the given operation. Through this mechanism, AppArmor confines programs to a limited set of resources.
AppArmor applies a set of rules (known as a “profile”) to each program. The profile applied by the kernel depends on the installation path of the program being executed.
Contrary to SELinux (discussed in Section 14.5, “Introduction to SELinux”), the rules applied do not depend on the user. All users face the same set of rules when they are executing the same program (but traditional user permissions still apply and might result in different behavior!).
AppArmor profiles are stored in /etc/apparmor.d/ and they contain a list of access control rules on resources that each program can make use of. The profiles are compiled and loaded into the kernel by the apparmor_parser command. Each profile can be loaded either in enforcing or complaining mode. The former enforces the policy and reports violation attempts, while the latter does not enforce the policy but still logs the system calls that would have been denied.
AppArmor is a product of Canonical, Ltd., releasers of the Ubuntu distribution.
It’s supported by Debian and Ubuntu, but has also been adopted as a standard by SUSE distributions. Debian stable and SUSE enable it on default installs, although the complement of protected services is not extensive.
AppArmor implements a form of MAC and is intended as a supplement to the traditional UNIX access control system. Although any configuration is possible, AppArmor is not designed to be a user-facing system. Its main goal is service securement; that is, limiting the damage that individual programs can do if they should be compromised or run amok.
Protected programs continue to be subject to all the limitations imposed by the standard model, but in addition, the kernel filters their activities through a designated and task-specific AppArmor profile. By default, AppArmor denies all requests, so the profile must explicitly name everything the process is allowed to do.
Programs without profiles, such as user shells, have no special restrictions and run as if AppArmor were not installed.
This service securement role is essentially the same configuration that’s implemented by SELinux in Red Hat’s targeted environment. However, AppArmor is designed more specifically for service securement, so it sidesteps some of the more puzzling nuances of SELinux.
AppArmor profiles are stored in /etc/apparmor.d, and they are relatively readable even without detailed knowledge of the system.
AppArmor support is built into the standard kernels provided by Debian. Enabling AppArmor is thus just a matter of installing some packages by executing apt install apparmor apparmor-profiles apparmor-utils with root privileges.
AppArmor is functional after the installation, and aa-status will confirm it quickly:
# aa-status
SELinux (Security Enhanced Linux) is a Mandatory Access Control system built on Linux’s LSM (Linux Security Modules) interface. In practice, the kernel queries SELinux before each system call to know whether the process is authorized to do the given operation.
SELinux uses a set of rules — collectively known as a policy — to authorize or forbid operations. Those rules are difficult to create. Fortunately, two standard policies (targeted and strict) are provided to avoid the bulk of the configuration work.
With SELinux, the management of rights is completely different from traditional Unix systems. The rights of a process depend on its security context. The context is defined by the identity of the user who started the process, the role and the domain that the user carried at that time. The rights really depend on the domain, but the transitions between domains are controlled by the roles. Finally, the possible transitions between roles depend on the identity.
SELinux support is built into the standard kernels provided by Debian. The core Unix tools support SELinux without any modifications. It is thus relatively easy to enable SELinux.
The apt install selinux-basics selinux-policy-default auditd command will automatically install the packages required to configure an SELinux system.
The selinux-policy-default package contains a set of standard rules. By default, this policy only restricts access for a few widely exposed services. The user sessions are not restricted and it is thus unlikely that SELinux would block legitimate user operations.
Given the world’s wide range of computing environments and the mixed success of efforts to advance the standard model, kernel maintainers have been reluctant to act as mediators in the larger debate over access control. In the Linux world, the situation came to a head in 2001, when the U.S. National Security Agency proposed to integrate its Security-Enhanced Linux (SELinux) system into the kernel as a standard facility.
For several reasons, the kernel maintainers resisted this merge. Instead of adopting SELinux or another, alternative system, they developed the Linux Security Modules API, a kernel-level interface that allows access control systems to integrate themselves as loadable kernel modules.
LSM-based systems have no effect unless users load them and turn them on. This fact lowers the barriers for inclusion in the standard kernel, and Linux now ships with SELinux and four other systems (AppArmor, Smack, TOMOYO, and Yama) ready to go.
Developments on the BSD side have roughly paralleled those of Linux, thanks largely to Robert Watson’s work on TrustedBSD. This code has been included in FreeBSD since version 5. It also provides the application sandboxing technology used in Apple’s macOS and iOS.
When multiple access control modules are active simultaneously, an operation must be approved by all of them to be permitted. Unfortunately, the LSM system requires explicit cooperation among active modules, and none of the current modules include this feature. For now, Linux systems are effectively limited to a choice of one LSM add-on module.
SELinux is one of the oldest Linux MAC implementations and is a product of the U.S. National Security Agency. Depending on one’s perspective, that might be a source of either comfort or suspicion.
SELinux takes a maximalist approach, and it implements pretty much every flavor of MAC and RBAC one might envision. Although it has gained footholds in a few distributions, it is notoriously difficult to administer and troubleshoot. This unattributed quote from a former version of the SELinux Wikipedia page vents the frustration felt by many sysadmins:
Intriguingly, although the stated raison d’être of SELinux is to facilitate the creation of individualized access control policies specifically attuned to organizational data custodianship practices and rules, the supportive software tools are so sparse and unfriendly that the vendors survive chiefly on “consulting,” which typically takes the form of incremental modifications to boilerplate security policies.
Despite its administrative complexity, SELinux adoption has been slowly growing, particularly in environments such as government, finance, and health care that enforce strong and specific security requirements. It is also a standard part of the Android platform.
Our general opinion regarding SELinux is that it is capable of delivering more harm than benefit. Unfortunately, that harm can manifest not only as wasted time and as aggravation for system administrators, but ironically, as security lapses. Complex models are hard to reason about, and SELinux is not really a level playing field; hackers that focus on it understand the system far more thoroughly than the average sysadmin.
In particular, SELinux policy development is a complicated endeavor. To protect a new daemon, for example, a policy must carefully enumerate all the files, directories, and other objects to which the process needs access. For complicated software like sendmail or httpd, this task can be quite complex. At least one company offers a three-day class on policy development.
Fortunately, many general policies are available on-line, and most SELinux-enabled distributions come with reasonable defaults. These can easily be installed and configured for your particular environment. A full-blown policy editor that aims to ease policy application can be found at seedit.sourceforge.net.
SELinux is well supported by both Red Hat (and hence, CentOS) and Fedora. Red Hat enables it by default.
Debian and SUSE Linux also have some available support for SELinux, but you must install additional packages, and the system is less aggressive in its default configuration.
Ubuntu inherits some SELinux support from Debian, but over the last few releases, Ubuntu’s focus has been on AppArmor (see page 87). Some vestigial SELinux-related packages are still available, but they are generally not up to date.
/etc/selinux/config is the top-level control for SELinux. The interesting lines are
SELINUX=enforcing
SELINUXTYPE=targeted
The first line has three possible values: enforcing, permissive, or disabled. The enforcing setting ensures that the loaded policy is applied and prohibits violations. permissive allows violations to occur but logs them through syslog, which is valuable for debugging and policy development. disabled turns off SELinux entirely.
SELINUXTYPE refers to the name of the policy database to be applied. This is essentially the name of a subdirectory within /etc/selinux. Only one policy can be active at a time, and the available policy sets vary by system.
The SELinux policy is a modular set of rules, and its installation automatically detects and enables all the relevant modules based on the services already installed. The system is thus immediately operational. However, when a service is installed after the SELinux policy, you must be able to enable the corresponding module manually; that is the purpose of the semodule command. Furthermore, you must be able to define the roles that each user can endorse, which can be done with the semanage command.
Those two commands can thus be used to modify the current SELinux configuration, which is stored in /etc/selinux/default/. Unlike other configuration files that you can find in /etc/, all those files must not be changed by hand. You should use the programs designed for this purpose.
systemd is a suite of basic building blocks for a Linux system. It provides a system and service manager that runs as PID 1 and starts the rest of the system. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic. systemd supports SysV and LSB init scripts and works as a replacement for sysvinit. Other parts include a logging daemon; utilities to control basic system configuration such as the hostname, date, and locale; tools to maintain lists of logged-in users, running containers and virtual machines, system accounts, and runtime directories and settings; and daemons to manage simple network configuration, network time synchronization, log forwarding, and name resolution.
systemd in detail
The configuration and control of system services is an area in which Linux distributions have traditionally differed the most from one another. systemd aims to standardize this aspect of system administration, and to do so, it reaches further into the normal operations of the system than any previous alternative.
To see the target the system boots into by default, run the get-default subcommand:
$ systemctl get-default
graphical.target
Most Linux distributions boot to graphical.target by default, which isn’t appropriate for servers that don’t need a GUI. But that’s easily changed:
$ sudo systemctl set-default multi-user.target
To see all the system’s available targets, run systemctl list-units:
$ systemctl list-units --type=target
Activates a service immediately:
systemctl start your_service.service
Deactivates a service immediately:
systemctl stop your_service.service
Restarts a service:
systemctl restart your_service.service
Shows status of a service including whether it is running or not:
systemctl status your_service.service
Enables a service to be started on bootup:
systemctl enable your_service.service
Disables a service to not start during bootup:
systemctl disable your_service.service
Processor manufacturers release stability and security updates to the processor microcode. These updates provide bug fixes that can be critical to the stability of your system. Without them, you may experience spurious crashes or unexpected system halts that can be difficult to track down. All users with an AMD or Intel CPU should install the microcode updates to ensure system stability. Depending on the processor, one of the following packages comes pre-installed on Predator-OS:
amd-ucode for AMD processors, intel-ucode for Intel processors.
systemd handles some power-related ACPI events, whose actions can be configured in /etc/systemd/logind.conf or /etc/systemd/logind.conf.d/*.conf .
Source of /etc/systemd/logind.conf:
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See logind.conf(5) for details.
[Login]
#NAutoVTs=6
#ReserveVT=6
#KillUserProcesses=no
#KillOnlyUsers=
#KillExcludeUsers=root
#InhibitDelayMaxSec=5
#UserStopDelaySec=10
#HandlePowerKey=poweroff
#HandleSuspendKey=suspend
#HandleHibernateKey=hibernate
#HandleLidSwitch=suspend
#HandleLidSwitchExternalPower=suspend
#HandleLidSwitchDocked=ignore
#HandleRebootKey=reboot
#PowerKeyIgnoreInhibited=no
#SuspendKeyIgnoreInhibited=no
#HibernateKeyIgnoreInhibited=no
#LidSwitchIgnoreInhibited=yes
#RebootKeyIgnoreInhibited=no
#HoldoffTimeoutSec=30s
#IdleAction=ignore
#IdleActionSec=30min
#RuntimeDirectorySize=10%
#RuntimeDirectoryInodes=400k
#RemoveIPC=yes
#InhibitorsMax=8192
#SessionsMax=8192
On systems with no dedicated power manager, this may replace the acpid daemon which is usually used to react to these ACPI events.
The specified action for each event can be one of ignore, poweroff, reboot, halt, suspend, hibernate, hybrid-sleep, suspend-then-hibernate, lock, or kexec. In the case of hibernation and suspension, they must be properly set up. If an event is not configured, systemd will use a default action.
Event handlers, their triggers, and their default actions:

HandlePowerKey
Triggered when the power key/button is pressed. Default action: poweroff.

HandleSuspendKey
Triggered when the suspend key/button is pressed. Default action: suspend.

HandleHibernateKey
Triggered when the hibernate key/button is pressed. Default action: hibernate.

HandleLidSwitch
Triggered when the lid is closed, except in the cases below. Default action: suspend.

HandleLidSwitchDocked
Triggered when the lid is closed if the system is inserted in a docking station, or more than one display is connected. Default action: ignore.

HandleLidSwitchExternalPower
Triggered when the lid is closed if the system is connected to external power. Default action: the action set for HandleLidSwitch.
To apply any changes, send a HUP signal to systemd-logind:
# systemctl kill -s HUP systemd-logind
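For example, a drop-in that keeps a laptop awake when the lid is closed on external power could look like the following; the file name is hypothetical, and only the keys you override need to appear:

```
# /etc/systemd/logind.conf.d/10-lid.conf (hypothetical file name)
[Login]
HandleLidSwitch=suspend
HandleLidSwitchExternalPower=ignore
```

After creating the drop-in, reload systemd-logind with systemctl kill -s HUP systemd-logind so it takes effect.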
By default, these features are disabled in Predator-OS:
sudo systemctl disable sleep.target suspend.target hibernate.target hybrid-sleep.target
sudo systemctl stop sleep.target suspend.target hibernate.target hybrid-sleep.target
To disable bluetooth completely, blacklist the modules.
To turn off bluetooth only temporarily, use rfkill:
# rfkill block bluetooth
Or with a udev rule:
/etc/udev/rules.d/50-bluetooth.rules
# disable bluetooth
SUBSYSTEM=="rfkill", ATTR{type}=="bluetooth", ATTR{state}="0"
By default, PulseAudio suspends any audio sources that have been idle for too long. When using an external USB microphone, recordings may start with a pop sound. As a workaround, comment out the following line in /etc/pulse/default.pa:
### Automatically suspend sinks/sources that become idle for too long
load-module module-suspend-on-idle
Modules can hog memory and may slow down your system. You can list all the modules currently loaded on your system by issuing the lsmod command as a regular or root user. Blacklist modules that you don’t need.
The Linux kernel is modular, which makes it more flexible than a monolithic kernel. New functionality can easily be added to a running kernel by loading the related module. While that is great, it can also be misused: think of loading malicious modules (e.g. rootkits), or of someone gaining unauthorized access to the server and copying data via a USB port. While it is possible to prevent the loading of any module at all, in this case we specifically disallow the ones we don’t want.
Blacklisting modules is one way to disallow them. This defines which modules should no longer be loaded. However, it will only limit the loading of modules during the boot process. You can still load a module manually after booting. Blacklisting a module is simple. Create a file in the /etc/modprobe.d directory and give it a proper name (e.g. blacklist-module.conf).
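The steps above can be sketched as follows; the module chosen here (pcspkr, the PC-speaker beep driver) is just an example, and the file is written to /tmp so the sketch is side-effect free, with the real destination shown in a comment:

```shell
module=pcspkr   # example module to blacklist

# Real location: /etc/modprobe.d/blacklist-pcspkr.conf
cat > /tmp/blacklist-${module}.conf <<EOF
blacklist ${module}
EOF

cat /tmp/blacklist-${module}.conf
```

Remember that blacklisting only affects automatic loading at boot; the module can still be loaded manually afterwards.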
For me the /etc/modprobe.d/blacklist.conf goes like this:
blacklist iTCO_wdt
blacklist pcspkr
blacklist joydev
blacklist mousedev
blacklist mac_hid
blacklist uvcvideo
Source of /etc/modprobe.d/blacklist.conf:
# This file lists those modules which we don’t want to be loaded by
# alias expansion, usually so some other driver will be loaded for the
# device instead.
# evbug is a debug tool that should be loaded explicitly
blacklist evbug
# these drivers are very simple, the HID drivers are usually preferred
#blacklist usbmouse
#blacklist usbkbd
# replaced by e100
blacklist eepro100
# replaced by tulip
blacklist de4x5
# causes no end of confusion by creating unexpected network interfaces
blacklist eth1394
# snd_intel8x0m can interfere with snd_intel8x0, does not seem to support much
# hardware on its own (Ubuntu bug #2011, #6810)
blacklist snd_intel8x0m
# Conflicts with dvb driver (which is better for handling this device)
blacklist snd_aw2
# replaced by p54pci
blacklist prism54
# replaced by b43 and ssb.
blacklist bcm43xx
# most apps now use garmin usb driver directly (Ubuntu: #114565)
blacklist garmin_gps
# replaced by asus-laptop (Ubuntu: #184721)
blacklist asus_acpi
# low-quality, just noise when being used for sound playback, causes
# hangs at desktop session start (Ubuntu: #246969)
blacklist snd_pcsp
# ugly and loud noise, getting on everyone’s nerves; this should be done by a
# nice pulseaudio bing (Ubuntu: #77010)
blacklist pcspkr
# EDAC driver for amd76x clashes with the agp driver preventing the aperture
# from being initialised (Ubuntu: #297750). Blacklist so that the driver
# continues to build and is installable for the few cases where it is
# really needed.
blacklist amd76x_edac
blacklist iTCO_wdt
blacklist joydev
The NMI watchdog is a debugging feature to catch hardware hangs that cause a kernel panic. On some systems it can generate a lot of interrupts, causing a noticeable increase in power usage:
/etc/sysctl.d/disable_watchdog.conf
kernel.nmi_watchdog = 0
or add nmi_watchdog=0 to the kernel line to disable it completely from early boot.
Increasing the virtual memory dirty writeback time helps to aggregate disk I/O together, thus reducing spanned disk writes, and increasing power saving. To set the value to 60 seconds (default is 5 seconds):
/etc/sysctl.d/dirty.conf
vm.dirty_writeback_centisecs = 6000
To do the same for journal commits on supported filesystems (e.g. ext4, btrfs, ...), use commit=60 as a mount option in fstab.
The swappiness sysctl parameter represents the kernel’s preference (or avoidance) of swap space.
Swappiness can have a value between 0 and 200 (max 100 if Linux < 5.8), the default value is 60. A low value causes the kernel to avoid swapping, a high value causes the kernel to try to use swap space, and a value of 100 means IO cost is assumed to be equal. Using a low value on sufficient memory is known to improve responsiveness on many systems.
To check the current swappiness value:
$ sysctl vm.swappiness
Alternatively, the files /sys/fs/cgroup/memory/memory.swappiness (cgroup v1-specific) or /proc/sys/vm/swappiness can be read in order to obtain the raw integer value.
To temporarily set the swappiness value:
# sysctl -w vm.swappiness=10
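To make the value persist across reboots, drop it into a sysctl configuration file. A sketch follows; the file name and the value 10 are illustrative, and the file is written to /tmp here, with the real destination in a comment:

```shell
# Real location: /etc/sysctl.d/99-swappiness.conf; apply with `sysctl --system`
printf 'vm.swappiness = 10\n' > /tmp/99-swappiness.conf

cat /tmp/99-swappiness.conf
```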
Zswap is a Linux kernel feature providing a compressed write-back cache for swapped pages, while ZRAM creates a compressed virtual swap block device in memory as an alternative to a swap partition or file on disk. Both approaches increase swapping performance and decrease disk I/O operations.
The best choice of scheduler depends on both the device and the exact nature of the workload. Also, throughput in MB/s is not the only measure of performance: optimizing for deadlines or fairness can deteriorate overall throughput but may improve system responsiveness. Benchmarking may be useful to compare the performance of each I/O scheduler.
To list the available schedulers for a device and the active scheduler (in brackets):
$ cat /sys/block/sda/queue/scheduler
mq-deadline kyber [bfq] none
To list the available schedulers for all devices:
$ grep "" /sys/block/*/queue/scheduler
To change the active I/O scheduler to bfq for device sda, use:
# echo bfq > /sys/block/sda/queue/scheduler
HDD I/O Scheduler Benchmarks - BFQ
If you are a SATA SSD user, use the mq-deadline I/O scheduler.
If you use an NVMe SSD, use the none I/O scheduler.
To let your system select the scheduler automatically for you, use a udev rule for that!
For SSDs
/etc/udev/rules.d/60-ssd.rules
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
For NVME SSDs
/etc/udev/rules.d/60-nvme.rules
ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
For HDDs
/etc/udev/rules.d/60-hdd.rules
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
These rules set the I/O scheduler for all matching block devices, from sda through sdz and beyond, up to the maximum number of devices supported by your system.
Each of the kernel’s I/O schedulers has its own tunables, such as the latency time, the expiry time, or the FIFO parameters. They are helpful in adjusting the algorithm to a particular combination of device and workload, typically to achieve higher throughput or lower latency for a given utilization. The tunables and their descriptions can be found in the kernel documentation. To list the available tunables for a device, in the example below sda, which is using the deadline scheduler, use:
$ ls /sys/block/sda/queue/iosched
To improve deadline’s throughput at the cost of latency, one can increase fifo_batch with the command:
# echo 32 > /sys/block/sda/queue/iosched/fifo_batch
Overclocking improves the computational performance of the CPU by increasing its peak clock frequency. The ability to overclock depends on the combination of CPU model and motherboard model. It is most frequently done through the BIOS. Overclocking also has disadvantages and risks. It is neither recommended nor discouraged here.
Many Intel chips will not correctly report their clock frequency to acpi_cpufreq and most other utilities, which results in excessive messages in dmesg. This can be avoided by unloading and blacklisting the acpi_cpufreq kernel module. To read the clock speed of these chips, use i7z from the i7z package. To check for correct operation of an overclocked CPU, stress-test it under load and monitor its temperature.
CPU performance scaling enables the operating system to scale the CPU frequency up or down in order to save power or improve performance. Scaling can be done automatically in response to system load, adjust itself in response to ACPI events, or be manually changed by user space programs.
The Linux kernel offers CPU performance scaling via the CPUFreq subsystem, which defines two layers of abstraction:
Scaling governors implement the algorithms to compute the desired CPU frequency, potentially based on the system’s needs.
Scaling drivers interact with the CPU directly, enacting the desired frequencies that the current governor is requesting.
A default scaling driver and governor are selected automatically, but userspace tools like cpupower, acpid, Laptop Mode Tools, or GUI tools provided for your desktop environment, may still be used for advanced configuration.
thermald is a Linux daemon used to prevent the overheating of Intel CPUs. This daemon proactively controls thermal parameters using P-states, T-states, and the Intel power clamp driver. thermald can also be used for older Intel CPUs. If the latest drivers are not available, then the daemon will revert to x86 model specific registers and the Linux “cpufreq subsystem” to control system cooling.
By default, it monitors CPU temperature using available CPU digital temperature sensors and keeps the CPU temperature under control before hardware takes aggressive corrective action. If there is a skin-temperature sensor in thermal sysfs, then it tries to keep the skin temperature under 45 °C.
cpupower-gui is a graphical utility designed to assist with CPU frequency scaling. The GUI is based on GTK and is meant to provide the same options as cpupower. cpupower-gui can change the maximum/minimum CPU frequency and governor for each core.
Setting maximum and minimum frequencies
In some cases, it may be necessary to manually set maximum and minimum frequencies.
To set the maximum clock frequency (clock_freq is a clock frequency with units, e.g. GHz or MHz):
# cpupower frequency-set -u clock_freq
To set the minimum clock frequency:
# cpupower frequency-set -d clock_freq
To set the CPU to run at a specified frequency:
# cpupower frequency-set -f clock_freq
The idea here is to replace the intel_pstate CPU power-management driver with acpi-cpufreq, which in some cases allows for better performance and slightly more efficient power use.
Disable intel-pstate in grub config
To disable the default intel_pstate driver, edit /etc/default/grub:
# also hides the splash screen for people like me who like to see log messages on boot instead of a progress bar
GRUB_CMDLINE_LINUX_DEFAULT="quiet nosplash debug intel_pstate=disable"
After making our edits, we need to refresh grub:
$ sudo update-grub
/sys/devices/system/cpu/cpu0/cpufreq/bios_limit
Some CPU/BIOS configurations may have difficulty scaling to the maximum frequency, or scaling to higher frequencies at all. This is most likely caused by BIOS events telling the OS to limit the frequency, resulting in bios_limit being set to a lower value.
Either you have just made a specific setting in the BIOS setup utility (frequency, thermal management, etc.), you can blame a buggy/outdated BIOS, or the BIOS might have a serious reason for throttling the CPU on its own.
Warning: Do not apply this setting without considering the vulnerabilities it opens up; research the implications of disabling the mitigations for your particular CPU before proceeding.
Turning off CPU exploit mitigations may improve performance. Use the kernel parameter below to disable them all:
mitigations=off
There are several key parameters to tune the operation of the virtual memory subsystem of the Linux kernel and the write out of dirty data to disk. See the official Linux kernel documentation for more information. For example:
vm.dirty_ratio
Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which a process which is generating disk writes will itself start writing out dirty data. Consensus is that setting it to 10 is sane if RAM is, say, 1 GB (so 10% is 100 MB). But if the machine has much more RAM, say 16 GB (10% is 1.6 GB), the percentage may be out of proportion, as it becomes several seconds of writeback on spinning disks. A saner value in this case may be 3 (3% of 16 GB is approximately 491 MB).

vm.dirty_background_ratio = 5
Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data. Similarly, setting this to 5 may be just fine for small memory values, but again, consider and adjust accordingly for the amount of RAM on a particular system.
Decreasing the virtual file system (VFS) cache parameter value may improve system responsiveness:
vm.vfs_cache_pressure = 50
The value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects (VFS cache). Lowering it from the default value of 100 makes the kernel less inclined to reclaim VFS cache (do not set it to 0, this may produce out-of-memory conditions).
All these settings can be made persistent in /etc/sysctl.conf.
You can use any of three basic methods to configure a Linux kernel. Chances are that you will have the opportunity to try all of them eventually. The methods are
• Modifying tunable (dynamic) kernel configuration parameters
• Building a kernel from scratch (by compiling it from the source code, possibly with modifications and additions)
• Loading new drivers and modules into an existing kernel on the fly

These procedures are used in different situations, so learning which approaches are needed for which tasks is half the battle. Modifying tunable parameters is the easiest and most common kernel tweak, whereas building a kernel from source code is the hardest and least often required. Fortunately, all these approaches become second nature with a little practice.
Tuning Linux kernel parameters
Many modules and drivers in the kernel were designed with the knowledge that one size does not fit all. To increase flexibility, special hooks allow parameters such as an internal table’s size or the kernel’s behavior in a particular circumstance to be adjusted on the fly by the system administrator. These hooks are accessible through an extensive kernel-to-userland interface represented by files in the /proc filesystem (aka procfs). In some cases, a large user-level application (especially an infrastructure application such as a database) might require a sysadmin to adjust kernel parameters to accommodate its needs.
# cat /etc/sysctl.conf | grep -v ^# | grep -v ^$ | awk -F"=" '{print $1}' | uniq -c | awk '{print $1}' | grep -v 1
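The intent of that pipeline, spotting keys defined more than once in /etc/sysctl.conf, can be sketched more directly; here a sample file written to /tmp stands in for the real one:

```shell
# Sample stand-in for /etc/sysctl.conf, with one deliberately duplicated key
cat > /tmp/sysctl-sample.conf <<'EOF'
# comment
vm.swappiness = 60
net.ipv4.ip_forward = 1
vm.swappiness = 10
EOF

# Print keys that appear more than once (ignoring comments and blank lines)
grep -v '^#' /tmp/sysctl-sample.conf | grep -v '^$' \
  | awk -F= '{gsub(/ /,"",$1); print $1}' | sort | uniq -d
```

This prints vm.swappiness for the sample above; when a key is duplicated, the last occurrence in the file wins, so duplicates are worth cleaning up.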
LVM is a device mapper framework that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume. The lvm2-monitor service is disabled by default.
# systemctl status lvm2-monitor
The Predator-OS solution was to reduce the timeout, and to do so without modifying the systemd service files installed by packages, so that the change persists across package updates.
To reserve huge pages, set vm.nr_hugepages in the /etc/sysctl.conf file:
vm.nr_hugepages = 126
126 pages × 2 MB = 252 MB
$ cat /sys/kernel/mm/transparent_hugepage/
Motd
motd - message of the day
The welcome message shown to a user upon terminal login, whether via remote SSH login or directly via TTY, is part of motd, also known as the “Message Of The Day” daemon. The motd message can be customized to fit the individual needs of each user or administrator by modifying the /etc/motd file or a script within the /etc/update-motd.d directory.
Modifying the /etc/motd file is a fast and effective way to quickly change the welcome message. However, for more elaborate configuration it is recommended to customize the MOTD via scripts located within the /etc/update-motd.d directory.
/etc/default/motd-news
/var/cache/motd-news
/etc/update-motd.d/*
The default configuration is set during compilation, so configuration is only needed when it is necessary to deviate from those defaults. Initially, the main configuration file in /etc/systemd/ contains commented out entries showing the defaults as a guide to the administrator. Local overrides can be created by editing this file or by creating drop-ins, as described below. Using drop-ins for local configuration is recommended over modifications to the main configuration file.
In addition to the “main” configuration file, drop-in configuration snippets are read from /usr/lib/systemd/*.conf.d/, /usr/local/lib/systemd/*.conf.d/, and /etc/systemd/*.conf.d/. Those drop-ins have higher precedence and override the main configuration file. Files in the *.conf.d/ configuration subdirectories are sorted by their filename in lexicographic order, regardless of in which of the subdirectories they reside. When multiple files specify the same option, for options which accept just a single value, the entry in the file sorted last takes precedence, and for options which accept a list of values, entries are collected as they occur in the sorted files.
When packages need to customize the configuration, they can install drop-ins under /usr/. Files in /etc/ are reserved for the local administrator, who may use this logic to override the configuration files installed by vendor packages. Drop-ins have to be used to override package drop-ins, since the main configuration file has lower precedence. It is recommended to prefix all filenames in those subdirectories with a two-digit number and a dash, to simplify the ordering of the files.
To disable a configuration file supplied by the vendor, the recommended way is to place a symlink to /dev/null in the configuration directory in /etc/, with the same filename as the vendor configuration file.
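The /dev/null trick can be sketched with a throwaway directory standing in for the configuration directories; all paths and file names here are illustrative only:

```shell
# Stand-ins for a vendor drop-in directory and the admin's /etc override directory
mkdir -p /tmp/vendor.conf.d /tmp/etc.conf.d
printf '[Manager]\nDefaultTimeoutStopSec=90s\n' > /tmp/vendor.conf.d/50-vendor.conf

# Masking: a symlink to /dev/null with the same file name in the admin directory
ln -sf /dev/null /tmp/etc.conf.d/50-vendor.conf

readlink /tmp/etc.conf.d/50-vendor.conf
```

Because files in /etc/ take precedence over those in /usr/, the empty /dev/null content wins and the vendor file is effectively disabled.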
/etc/systemd/system.conf
DefaultTimeoutStartSec=10s
DefaultTimeoutStopSec=7s
Then:
$ sudo systemctl daemon-reload
If you are multi-booting with other Linux distributions or Windows, you might run into an issue: after you update or upgrade Debian stable (and perhaps other distributions too), GRUB stops “seeing” the other distros and Windows. The cause is that in GRUB 2.06 the OS-detection feature (os-prober) is disabled by default for security reasons.
What file to edit?
You need to edit the GRUB configuration file, which is located at /etc/default/grub:
sudo nano /etc/default/grub
Make sure that you are root or a user with root privileges who can edit the file.
To disable the OS prober, set the following line:
GRUB_DISABLE_OS_PROBER=true
To enable the OS prober, set the following line:
GRUB_DISABLE_OS_PROBER=false
Once you have set the option, save and exit the file.
Now update GRUB so the change takes effect.
sudo update-grub
That is it; just reboot the system and check.
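The whole edit can also be scripted. The sketch below operates on a copy of the file so nothing real is touched; adapt the path to /etc/default/grub (and run update-grub afterwards) before using it:

```shell
# Work on a copy; the real file is /etc/default/grub
printf 'GRUB_DEFAULT=0\nGRUB_DISABLE_OS_PROBER=true\n' > /tmp/grub

# Flip the setting to re-enable os-prober
sed -i 's/^GRUB_DISABLE_OS_PROBER=.*/GRUB_DISABLE_OS_PROBER=false/' /tmp/grub

grep OS_PROBER /tmp/grub
# prints GRUB_DISABLE_OS_PROBER=false
```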
To check whether your system has Secure Boot enabled or disabled, type:
/usr/bin/mokutil --sb-state
To disable Secure Boot validation:
/usr/bin/mokutil --disable-validation
There is an issues page on GitHub for tracking bugs and reporting problems with Predator-OS.
Please report any bug at the following link:
https://github.com/hosseinseilani/Predator-OS/issues/
There is a script in emergency mode to troubleshoot your system.
password (optional): This is only used to join a group when one is not a usual member (with the newgrp or sg commands).