Wednesday, 17 July 2019

How to configure a user account so that the password will never expire?

# passwd -x -1 <user>
or

# chage -M -1 <user>
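To see what this changes on disk, look at the fifth colon-separated field of the user's /etc/shadow entry, which holds the maximum password age in days. A minimal sketch, using a hypothetical entry for a made-up user "alice" (the hash and dates are illustrative):

```shell
# Hypothetical /etc/shadow line for "alice" after `chage -M -1 alice`.
# Fields: login:hash:lastchg:min:max:warn:inactive:expire:flag
entry='alice:$6$salt$hash:18000:0::7:::'

# Field 5 is the maximum password age in days; an empty field
# (or -1 on some systems) means the password never expires.
max_age=$(printf '%s' "$entry" | cut -d: -f5)
echo "max age: '$max_age'"
```

On a real system, `chage -l <user>` will confirm the setting by reporting "Password expires: never".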
Question : Even though iptables is turned off using ‘chkconfig --level 345 iptables off’, ‘service iptables status’ still displays some iptables rules after every reboot.

Answer :
When libvirtd starts, it adds its own iptables rules, so rules appear even if the iptables service was disabled beforehand. These rules do not impact the firewall configuration for the physical network. If a Xen (or other virtualization) environment is not used, these rules are not needed at all, and it is safe to turn the libvirtd service off by running:

# chkconfig --level 345 libvirtd off
# service libvirtd stop
How to Configure Kdump:

  • kdump is a crash dumping mechanism.
  • When enabled, a small amount of memory is reserved for a second (capture) kernel. If the system crashes, it boots into this kernel, whose only purpose is to capture the core dump image.
  • The core dump is used to determine the exact cause of the system failure.
1. Install the kexec-tools package if not already installed
To use the kdump service, the kexec-tools package must be installed:
# yum install kexec-tools
2. Configuring Memory Usage in GRUB2
To configure the amount of memory reserved for the kdump kernel, edit /etc/default/grub and add the crashkernel=<size> parameter to the GRUB_CMDLINE_LINUX option.

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=128M  vconsole.keymap=us rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Run the command below to regenerate the GRUB configuration:

# grub2-mkconfig -o /boot/grub2/grub.cfg
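The edit above can be scripted. A minimal sketch using sed against a throwaway copy of the file (the /tmp path and option values are illustrative; test on a copy before touching the real /etc/default/grub):

```shell
# Create a small sample of /etc/default/grub to edit safely.
cat > /tmp/grub.sample <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rhgb quiet"
EOF

# Append crashkernel=128M just before the closing quote of the option string.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 crashkernel=128M"/' /tmp/grub.sample

grep GRUB_CMDLINE_LINUX /tmp/grub.sample
```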

3. Configuring Dump Location

To configure kdump, edit the configuration file /etc/kdump.conf. By default, the vmcore file is stored in the /var/crash/ directory of the local file system. To change the local directory in which the core dump is saved, replace the value of the path option with the desired directory path.

For example:

path /var/crash

Optionally, you can also save the core dump directly to a raw partition.

For example:

raw /dev/sdb3

To store the dump to a remote machine using the NFS protocol, remove the hash sign (“#”) from the beginning of the #nfs my.server.com:/export/tmp line, and replace the value with a valid hostname and directory path.

For example:

nfs my.server.com:/export/tmp
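Only one dump target should be uncommented in /etc/kdump.conf at a time; the active target is simply the directive without a leading "#". A quick way to sketch and check this against a sample file (the /tmp path is illustrative):

```shell
# Sample kdump.conf with one active target (path) and two commented ones.
cat > /tmp/kdump.conf.sample <<'EOF'
#raw /dev/sdb3
#nfs my.server.com:/export/tmp
path /var/crash
EOF

# Show the active (uncommented) dump target:
grep -v '^#' /tmp/kdump.conf.sample
```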

4. Configuring Core Collector

To reduce the size of the vmcore dump file, kdump allows you to specify an external application to compress the data, and optionally leave out all irrelevant information. Currently, the only fully supported core collector is makedumpfile.
To enable the core collector, edit the configuration file /etc/kdump.conf, remove the hash sign (“#”) from the beginning of the #core_collector makedumpfile -c --message-level 1 -d 31 line, and edit the command line options as described below.

For example:

core_collector makedumpfile -c
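The -c flag enables compression, while the -d (dump level) option is a bitmask of page types for makedumpfile to exclude from the vmcore. The commonly used value 31 is simply the sum of all five exclusion bits:

```shell
# makedumpfile -d dump level bits:
#   1 = zero pages, 2 = cache pages, 4 = cache private pages,
#   8 = user data pages, 16 = free pages
# -d 31 excludes all five page types, producing the smallest vmcore:
echo $((1 + 2 + 4 + 8 + 16))
```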

5. Changing Default Action

We can also specify the default action to perform when the core dump cannot be saved to the configured location. If no default action is specified, reboot is assumed.

For example:

default halt

6. Start kdump daemon

Check and make sure the kernel command line includes the crashkernel parameter and that memory was reserved for the crash kernel:

# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.8.13-98.2.1.el7uek.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/root crashkernel=128M rd.lvm.lv=rhel/swap vconsole.font=latarcyrheb-sun16 vconsole.keymap=us rhgb quiet nomodeset

Enable the kdump service so that it starts when the system is rebooted:

# systemctl enable kdump.service

To start the service in the current session, use the following command:

# systemctl start kdump.service

7. Testing kdump (manually trigger kdump)

To test the configuration, we can reboot the system with kdump enabled, and make sure that the service is running.

For example:

# systemctl is-active kdump
active

# service kdump status

Redirecting to /bin/systemctl status  kdump.service
kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled)
Active: active (exited) since Mon 2015-08-31 05:12:57 GMT; 1min 6s ago
Process: 19104 ExecStop=/usr/bin/kdumpctl stop (code=exited, status=0/SUCCESS)
Process: 19116 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
Main PID: 19116 (code=exited, status=0/SUCCESS)
Aug 31 05:12:57 ol7 kdumpctl[19116]: kexec: loaded kdump kernel
Aug 31 05:12:57 ol7 kdumpctl[19116]: Starting kdump: [OK]
Aug 31 05:12:57 ol7 systemd[1]: Started Crash recovery kernel arming.
Then type the following commands at a shell prompt:

# echo 1 > /proc/sys/kernel/sysrq

# echo c > /proc/sysrq-trigger

This will force the Linux kernel to crash, and the address-YYYY-MM-DD-HH:MM:SS/vmcore file will be copied to the location you selected in the configuration (that is, to /var/crash/ by default).




Linux Backup Restore Destroy and Install MBR - Master Boot Record


BACKUP MBR

To back up the Master Boot Record (MBR):
# dd if=/dev/sdb of=my.mbr bs=446 count=1
where my.mbr is the file in which we store the MBR backup. This copies only the 446-byte boot code area; use bs=512 to include the partition table and boot signature as well.
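The same dd invocation can be rehearsed safely against a scratch image file instead of a real disk. A sketch (the /tmp paths are illustrative; never point dd at a disk you care about while experimenting):

```shell
# Create a small scratch "disk" image of 4 sectors.
dd if=/dev/zero of=/tmp/disk.img bs=512 count=4 2>/dev/null

# Back up the first 446 bytes (the boot code area of the MBR):
dd if=/tmp/disk.img of=/tmp/my.mbr bs=446 count=1 2>/dev/null

# The backup file is exactly 446 bytes:
wc -c < /tmp/my.mbr
```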

RESTORE MBR

To restore an MBR we just switch the order of the input and output files:
# dd if=my.mbr of=/dev/sdb bs=446 count=1

DESTROY MBR

If for any reason you want to destroy your MBR, use /dev/zero as the input file:
# dd if=/dev/zero of=/dev/sdb bs=446 count=1

INSTALL MBR

Installing an MBR from scratch can be very useful, especially when creating Linux USB boot sticks. To install one, we can use the install-mbr command found in the mbr package:
# install-mbr /dev/sdb

Saturday, 29 September 2018

Manage Services in RHEL 7


systemd defines itself as a system and service manager. The project was initiated in 2010 by Lennart Poettering and Kay Sievers to create an integrated set of tools for managing a Linux system including an init daemon. It also includes device management (udev) and logging, among other things.


  • systemd can monitor services and restart them if needed
  • There are watchdogs for each service and for systemd itself
  • Services are started in parallel, reducing boot time
  • systemd doesn’t use runlevels as SysV init or Upstart do; the equivalent concept in systemd is called a target.

Systemctl

The systemctl command is used to interact with systemd and can be used to list systemd unit information and manage services.

1. List manageable services
List all service units with the command:
# systemctl list-units --type service
2. Start service on boot
This is the equivalent of the SysV command chkconfig <service> on .
# systemctl enable crond.service
3. Don't start service on boot
This is the equivalent of the SysV command chkconfig <service> off .
# systemctl disable crond.service
Managing Targets (Runlevels)

Runlevel   Target Unit
0          poweroff.target
1          rescue.target
2          multi-user.target
3          multi-user.target
4          multi-user.target
5          graphical.target
6          reboot.target
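The mapping above can be sketched as a small shell helper (illustrative only; this function is not part of systemd, which resolves the legacy runlevelN.target aliases itself):

```shell
# Map a SysV runlevel number to its systemd target, per the table above.
runlevel_to_target() {
  case "$1" in
    0) echo poweroff.target ;;
    1) echo rescue.target ;;
    2|3|4) echo multi-user.target ;;
    5) echo graphical.target ;;
    6) echo reboot.target ;;
    *) echo "unknown runlevel: $1" >&2; return 1 ;;
  esac
}

runlevel_to_target 3
```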

1. List Runlevels/Targets 
List all target units with the command:

# systemctl list-units --type target --all
2. Print the default Target

# systemctl get-default
3. Change Runlevels/Targets 
Change the default target to ‘runlevel 3’:

# systemctl set-default multi-user.target
Switch to ‘runlevel 3’:
# systemctl isolate multi-user.target

List all loaded service units :
# systemctl list-units --type service --all
List all installed service unit files:

# systemctl list-unit-files --type service
To check whether a service is running (active) or not (inactive):
# systemctl is-active sshd
active

# systemctl is-enabled sshd
enabled


Friday, 28 September 2018

How to restore/recover a deleted VG in LVM

LVM takes a backup of the on-disk metadata before and after running any LVM operation on a PV/VG/LV. 

These backups are stored in
1. /etc/lvm/archive: contains copies taken before executing a command.
2. /etc/lvm/backup: contains copies taken after executing a command.

Listing available backups

The backup files can also be located using the vgcfgrestore command, which lists all the available metadata backups and archives for a volume group:
# vgcfgrestore --list <vg-name>

Restoring the metadata

Once the correct backup file has been found, the metadata contained in it can be written back to the devices belonging to that Volume Group using the vgcfgrestore command:
# vgcfgrestore -f /etc/lvm/archive/[backup_file] [vg-name]
For example:
# vgcfgrestore -f /etc/lvm/archive/appvg_00_00000-988080.vg appvg
Restored volume group appvg
If the logical volumes in the volume group are not active after the restore, activate them with vgchange -ay appvg. You should now be able to see appvg using the vgs command.

Thursday, 26 September 2013

Linux Booting Process

The Linux boot process completes in 6 stages:
  1. System startup(Hardware )
  2. Boot loader Stage 1 (MBR loading)
  3. Boot loader Stage 2 (GRUB loader)
  4. Kernel
  5. INIT
  6. Runlevel
Stage 1 : System startup
When the system is powered on, the processor looks at the end of system memory for the Basic Input/Output System (BIOS) program and runs it. The BIOS then takes care of two things:
  • Running the POST (Power On Self Test) operation.
  • Selecting the first boot device.
POST operation : POST is the process of checking hardware availability. To check whether a piece of hardware is available for the current boot, the BIOS sends an electric pulse to each device in the list it already has. If an electrical pulse is returned from a device, it concludes the hardware is working and ready for use. If it does not receive a signal from a particular device, it treats that device as faulty or removed from the system. The new list is stored in BIOS memory for the next boot.
Selecting the first boot device : Once POST is completed, the BIOS has the list of available devices. It selects the first boot device and gives control back to the processor (CPU).

Stage 2 : Boot loader Stage 1 (MBR loading)
  • Primary boot loader code(446 Bytes)
  • Partition table information(64 Bytes)
  • Magic number(2 Bytes)




Once the BIOS gives control back to the CPU, the first sector of the boot disk - the MBR - is loaded into memory. The MBR is a small part of the hard disk with a size of just 512 bytes, I repeat, it's just 512 bytes,

which is equal to 512 B (446 + 64 + 2) B.

Primary Boot loader code: This code provides boot loader information and location details of actual boot loader code on the hard disk. This is helpful for CPU to load second stage of Boot loader.

Partition table : The MBR contains 64 bytes of data which store partition table information, such as where each partition starts and ends, its size, and its type (primary, extended, etc.). An MBR-partitioned disk supports only 4 primary partitions because of this limit: each partition entry takes 16 bytes, so 64 bytes hold at most 4 entries.
Magic number : The magic number (0x55AA, the last two bytes of the sector) serves as a validation check for the MBR. If the MBR gets corrupted, this signature is what allows the BIOS to detect that the sector is no longer valid.
Once the CPU knows all these details, it analyses them and executes the boot code at the start of the MBR to load the second stage of the boot loader.
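The 512-byte layout and the boot signature can be demonstrated against a fake MBR image rather than a real disk. A sketch (the /tmp path is illustrative):

```shell
# Build a fake 512-byte MBR image filled with zeros.
dd if=/dev/zero of=/tmp/mbr.img bs=512 count=1 2>/dev/null

# Write the boot signature 0x55 0xAA (octal \125 \252) at offset 510.
printf '\125\252' | dd of=/tmp/mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null

# The BIOS treats the sector as bootable only if bytes 510-511 are 55 aa:
dd if=/tmp/mbr.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1
```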
Stage 3 : Boot loader Stage 2 (GRUB loader)
The MBR loads and executes the second-stage boot loader, GRUB, into memory. The GRUB stage 1.5/2 code lives in the sectors immediately following the MBR (roughly the first 30 KB of the disk).
GRUB reads its configuration and displays the GRUB boot menu, where the user can manually specify boot parameters. GRUB then loads the user-selected (or default) kernel into memory and passes control to it. If the user does not select an entry, GRUB loads the default kernel after a defined timeout.

Stage 4 : Kernel
  • Mounts the root file system as specified in the “root=” in grub.conf
  • Kernel executes the /sbin/init program
  • Since init was the 1st program to be executed by Linux Kernel, it has the process id (PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
  • initrd stands for Initial RAM Disk.

initrd is used by the kernel as a temporary root file system until the real root file system is mounted. It also contains the necessary drivers compiled in, which allow the kernel to access hard drive partitions and other hardware.


Stage 5 : Init


The /sbin/init program (also called init) coordinates the rest of the boot process and configures the environment for the user.

When the init command starts, it becomes the parent or grandparent of all of the processes that start up automatically on the system. First, it runs the /etc/rc.d/rc.sysinit script, which sets the environment path, starts swap, checks the file systems, and executes all other steps required for system initialization.
The init command then reads the /etc/inittab file, which describes how the system should be set up in each runlevel. Among other things, /etc/inittab sets the default runlevel.

Stage 6 : Runlevel programs

init starts the services configured for the selected runlevel.