Useful shortcuts

Power
[codesyntax lang="bash"]

Lock screen: ctrl + shift + eject
Sleep: command + option + eject

[/codesyntax]

Screen
[codesyntax lang="bash"]

Screen capture (whole screen): command + shift + 3
Screen capture (selection): command + shift + 4
Screen capture to clipboard (whole screen): command + control + shift + 3
Screen capture to clipboard (selection): command + control + shift + 4

[/codesyntax]

Finder
[codesyntax lang="bash"]

Get info: command + i

[/codesyntax]

How to set up a NIS slave server on Debian Squeeze

Assumptions

I am assuming that we have two networks linked with a VPN connection (net1: 10.99.0.0/24 and net2: 10.34.132.0/24). I am also assuming that a functional NIS master server already exists on net1.

NIS MASTER: nis1.test.org 10.99.0.10
NIS SLAVE: nis2.test.org 10.34.132.195

How to set up a NIS client

If you put a server name in /etc/yp.conf, make sure the server is also listed in /etc/hosts. Otherwise, if your system boots while the network is not yet up or DNS isn't reachable, ypbind cannot resolve the servers in /etc/yp.conf and will hang!

[codesyntax lang="bash"]

vim /etc/hosts

[/codesyntax]

10.99.0.10    nis1.test.org    nis1
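The warning above can be turned into a quick sanity check. This sketch uses inline sample data mirroring the entries above; on a real system you would point the variables at /etc/yp.conf and /etc/hosts:

```bash
#!/bin/sh
# check that the NIS server named in yp.conf also appears in the hosts file
# (sample data below mirrors the article's entries)
yp_conf='domain test-auth server nis1.test.org'
hosts='10.99.0.10    nis1.test.org    nis1'

# the server name is the last field of the yp.conf line
server=$(echo "$yp_conf" | awk '/server/ {print $NF}')

if echo "$hosts" | grep -qw "$server"; then
    echo "ok: $server is pinned in the hosts file"
else
    echo "WARNING: $server missing from hosts file -- ypbind may hang at boot"
fi
```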

Install the netbase, portmap and nis packages

[codesyntax lang="bash"]

apt-get install nis

[/codesyntax]

Configure NIS servers

[codesyntax lang="bash"]

vim /etc/yp.conf

[/codesyntax]

domain test-auth server nis1.test.org

Make domain binding persistent

[codesyntax lang="bash"]

vim /etc/defaultdomain

[/codesyntax]

test-auth

Set up the 'running' domain

[codesyntax lang="bash"]

nisdomainname test-auth

[/codesyntax]

Update local maps search rules

[codesyntax lang="bash"]

vim /etc/nsswitch.conf

[/codesyntax]

passwd:         db files compat nis
group:          db files compat nis
shadow:         db files compat nis
netgroup:       nis

Restart NIS services

[codesyntax lang="bash"]

/etc/init.d/nis stop
/etc/init.d/nis start

[/codesyntax]

Make the auth process query NIS

[codesyntax lang="bash"]

vim /etc/passwd

[/codesyntax]

+@gods::0:0:::
+::0:0:::/bin/false

[codesyntax lang="bash"]

vim /etc/group

[/codesyntax]

+:::

Test NIS client setup

[codesyntax lang="bash"]

id user
ypwhich

[/codesyntax]

Set up a NIS slave server

[codesyntax lang="bash"]

vim /etc/default/nis

:%s/NISSERVER=false/NISSERVER=slave
:%s/YPPWDDIR=\/etc/YPPWDDIR=\/etc\/yp
:%s/NISMASTER=/NISMASTER=nis1.test.org
:wq

[/codesyntax]
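The same three substitutions can be scripted with sed. The sketch below runs against a scratch copy so nothing real is touched; on the actual slave you would edit /etc/default/nis itself:

```bash
# work on a scratch copy of the relevant defaults
# (the real file is /etc/default/nis)
cat > /tmp/nis.demo <<'EOF'
NISSERVER=false
YPPWDDIR=/etc
NISMASTER=
EOF

# non-interactive equivalent of the vim substitutions above
sed -i \
    -e 's/^NISSERVER=false/NISSERVER=slave/' \
    -e 's|^YPPWDDIR=/etc$|YPPWDDIR=/etc/yp|' \
    -e 's/^NISMASTER=$/NISMASTER=nis1.test.org/' \
    /tmp/nis.demo

cat /tmp/nis.demo
```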

Restart NIS server

[codesyntax lang="bash"]

/etc/init.d/nis stop
/etc/init.d/nis start

[/codesyntax]

Links:
http://lyre.mit.edu/~powell/debian-howto/nis.html
http://www.server-world.info/en/note?os=Debian_6.0&p=nis
http://www.linuxhelp.in/2010/05/how-to-install-and-configure-nis-server.html

Automatically set the hostname during Kickstart Installation

When you want to install Linux on a large number of servers, the kickstart approach is a very good one. But what about the hostname? You have several choices:

  • A separate kickstart file for each server, but come on... what kind of choice is that?
  • A single kickstart file for all servers, setting the hostname after installation (manually on every single server, or using a script)

Fortunately there is a third option: automatically set the hostname during the kickstart installation. I wish I could take credit for this, but that would be unfair to the guy who wrote an article about it.

I won't make the story too long, so... let's get started.

The trick is to pass the kernel a parameter and use it in our kickstart file. What happens if you pass a parameter the kernel doesn't recognize? In most cases it will simply be ignored, but it still appears on the kernel command line. We can check the kernel parameters by issuing the following command:

[codesyntax lang="bash"]

cat /proc/cmdline

[/codesyntax]

So what if we pass a parameter containing the desired hostname to the kernel? With a very simple script we can parse the output of the above command and look for our parameter.

[codesyntax lang="bash"]

#!/bin/sh

echo "network --device eth0 --bootproto dhcp --hostname localhost.localdomain" > /tmp/network.ks

for x in `cat /proc/cmdline`; do
    case $x in
        SERVERNAME*)
            eval $x
            echo "network --device eth0 --bootproto dhcp --hostname ${SERVERNAME}" > /tmp/network.ks
            ;;
    esac
done

[/codesyntax]

Here we look for SERVERNAME and evaluate its value into a variable. We then echo the network line with that variable (which becomes the hostname) and redirect it into a file under /tmp. Later we include that file in the installation section.

You may ask yourself what this line is all about:

[codesyntax lang="bash"]

echo "network --device eth0 --bootproto dhcp --hostname localhost.localdomain" > /tmp/network.ks

[/codesyntax]

in the script above?! Well, without it, if you don't pass SERVERNAME to the kernel, /tmp/network.ks will not be created and your installation will fail.
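The parsing logic can be exercised outside the installer by feeding it a fake command line instead of /proc/cmdline (the SERVERNAME value below is just an example):

```bash
# simulate the kernel command line (normally read from /proc/cmdline)
cmdline="ro root=/dev/sda1 quiet SERVERNAME=web01.test.org"

for x in $cmdline; do
    case $x in
        SERVERNAME*)
            eval $x   # sets the SERVERNAME shell variable
            ;;
    esac
done

echo "hostname will be: ${SERVERNAME}"
```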

So this is my kickstart file for a minimal CentOS 6.3 installation:

install
firewall --disabled
url --url="ftp://ftp.ines.lug.ro/centos/6.3/os/i386"
network --bootproto=dhcp --device=eth0
rootpw --iscrypted YOUR_ENCRYPTED_PASSWORD
text

%include /tmp/network.ks

keyboard us
lang en_US
selinux --disabled
skipx
logging --level=info
reboot
timezone --utc Europe/Bucharest
bootloader --location=mbr --driveorder=sda,sdb --append="console=tty0 console=ttyS0,115200N1"
zerombr
clearpart --all --initlabel
part / --fstype="ext4" --size=10000
part swap --fstype="swap" --size=8000
part pv.01 --fstype="ext4" --grow --size=1
volgroup vg0 pv.01
logvol /data --vgname=vg0 --percent=90 --name=lv0 --fsoptions=noatime --fstype=ext4 --size=1 --grow

%packages
@core
sed
perl
less
dmidecode
bzip2
iproute
iputils
sysfsutils
rsync
nano
mdadm
setserial
man-pages.noarch
findutils
tar
net-tools
tmpwatch
lsof
python
screen
lvm2
curl
ypbind
yp-tools
smartmontools
openssh-clients
acpid
irqbalance
which
bind-utils
ntsysv
ntp
man
mysql
postfix
chkconfig
gzip
%end

%pre
#!/bin/sh

echo "network --device eth0 --bootproto dhcp --hostname localhost.localdomain" > /tmp/network.ks

for x in `cat /proc/cmdline`; do
    case $x in
        SERVERNAME*)
            eval $x
            echo "network --device eth0 --bootproto dhcp --hostname ${SERVERNAME}" > /tmp/network.ks
            ;;
    esac
done
%end

%post

cat > /etc/cron.d/ntpdate <<EOF
# hourly time sync; the ntp server below is an example, use your own
0 * * * * root /usr/sbin/ntpdate pool.ntp.org > /dev/null 2>&1
EOF

chkconfig ntpd on
chkconfig sshd on
chkconfig ypbind on
chkconfig iptables off
chkconfig ip6tables off
chkconfig yum-updatesd off
chkconfig haldaemon off
chkconfig mcstrans off
chkconfig sysstat off

# example MOTD banner; replace the text with your own
cat > /etc/motd <<EOF
Installed via kickstart
EOF

echo >> /etc/motd
%end

Creating backups of running VMs in XenServer

With XenServer it is possible to create backups of VMs, even if they are running. The process is as follows:

Search for the UUIDs of the VMs to back up

First look up the UUID of each VM to back up. We don't want to back up the control domain itself, so we add is-control-domain=false to the vm-list command:
[codesyntax lang="bash"]

xe vm-list is-control-domain=false

[/codesyntax]

Create a snapshot of each (running) VM

Now we create a snapshot of each VM we want to back up, replacing the uuid one by one with the values found with the previous command. Also replace the name of the snapshot if desired:
[codesyntax lang="bash"]

xe vm-snapshot uuid=8d6f9d81-95b5-2ffb-4ecc-b5e442cc5c22 new-name-label=gb-r7n2-snapshot

[/codesyntax]

This command has a return value: the uuid of the created snapshot. Then we transform the snapshot into a VM to be able to save it to a file, replacing uuid with the return value of the previous command:
[codesyntax lang="bash"]

xe template-param-set is-a-template=false ha-always-run=false uuid=4efd0392-8881-176c-012e-a56e9cb2beed

[/codesyntax]

Save the snapshot to file

In the next step we save the snapshot to a file, replacing uuid with the snapshot uuid and providing a meaningful filename:
[codesyntax lang="bash"]

xe vm-export vm=4efd0392-8881-176c-012e-a56e9cb2beed filename=gb-r7n2-snapshot.xva

[/codesyntax]

Remove the created snapshot

In the final step we delete the snapshot:
[codesyntax lang="bash"]

xe vm-uninstall uuid=4efd0392-8881-176c-012e-a56e9cb2beed force=true

[/codesyntax]
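The four steps above can be wrapped in a small shell function. This is a sketch only: backup_vm is my own name, and error handling is omitted.

```bash
#!/bin/bash
# snapshot a running VM, export it to an .xva file, then drop the snapshot
backup_vm() {
    vm_uuid="$1"
    label="$2"

    # xe vm-snapshot prints the uuid of the new snapshot on stdout
    snap_uuid=$(xe vm-snapshot uuid="$vm_uuid" new-name-label="$label")

    # turn the snapshot into an exportable VM
    xe template-param-set is-a-template=false ha-always-run=false uuid="$snap_uuid"

    # save it to a file, then remove the snapshot
    xe vm-export vm="$snap_uuid" filename="${label}.xva"
    xe vm-uninstall uuid="$snap_uuid" force=true
}

# usage (uuid taken from 'xe vm-list is-control-domain=false'):
# backup_vm 8d6f9d81-95b5-2ffb-4ecc-b5e442cc5c22 gb-r7n2-snapshot
```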

Source: http://www.jansipke.nl/creating-backups-of-running-vms-in-xenserver/

How to install DHCP, DNS and PXE on Debian Squeeze

Introduction

This document describes how to install the DHCP, DNS and PXE network services on Debian Squeeze.

For this tutorial I use a machine that has two network interfaces:
eth0: 10.34.132.149/255.255.254.0 (WAN interface)
eth1: 172.20.30.1/255.255.255.0 (LAN interface)

To install a PXE server, you will need the following components:
DHCP server
TFTP server
NFS/FTP/HTTP server (to store the installation files)

Note: the DHCP server will listen only on eth1. In this tutorial I will use the apache2 server.

Install required packages

[codesyntax lang="bash"]

apt-get install tftpd-hpa syslinux dhcp3-server bind9 dnsutils

[/codesyntax]

Configure DHCP Server

[codesyntax lang="bash"]

vim /etc/dhcp/dhcpd.conf

[/codesyntax]

ddns-update-style none; # 'ad-hoc' is no longer supported by isc-dhcp 4.x
log-facility syslog;

option domain-name "test.org";
option domain-name-servers 172.20.30.1;
option subnet-mask 255.255.255.0;
subnet 172.20.30.0 netmask 255.255.255.0 {
    authoritative;
    range 172.20.30.10 172.20.30.90; # ip range
    option routers 172.20.30.1; # gateway for clients
    ######
    # to deny clients that are not explicitly configured in dhcpd, uncomment the following line
    ######
    #deny unknown-clients;
    allow booting;
    allow bootp;
    next-server 172.20.30.1; # tftpd server's IP
    filename "pxelinux.0";

    ######
    # sample of a client that has mac address reserved on dhcp
    ######
    #host guest1 {
    #    hardware ethernet 00:0C:29:14:DA:AD;
    #    fixed-address 172.20.30.15;
    #}
    ######
}

Force DHCP Server to listen only on eth1

[codesyntax lang="bash"]

vim /etc/default/isc-dhcp-server
:%s/INTERFACES=""/INTERFACES="eth1"/g
:wq

[/codesyntax]

Configure the TFTP server. Change its root directory on startup from /srv/tftp to /tftpboot

[codesyntax lang="bash"]

vim /etc/default/tftpd-hpa
:%s/\/srv\/tftp/\/tftpboot/g
:wq

[/codesyntax]

Set up the TFTP server network boot files

[codesyntax lang="bash"]

mkdir -p /tftpboot
chmod 777 /tftpboot

cp -v /usr/lib/syslinux/pxelinux.0 /tftpboot
cp -v /usr/lib/syslinux/menu.c32 /tftpboot
cp -v /usr/lib/syslinux/memdisk /tftpboot
cp -v /usr/lib/syslinux/mboot.c32 /tftpboot
cp -v /usr/lib/syslinux/chain.c32 /tftpboot

mkdir /tftpboot/pxelinux.cfg

[/codesyntax]

Create PXE menu file

[codesyntax lang="bash"]

vim /tftpboot/pxelinux.cfg/default

[/codesyntax]

default menu.c32
prompt 0
timeout 300
MENU TITLE test.org PXE Menu

LABEL centos6.3_i386
    MENU LABEL CentOS 6.3 i386
    KERNEL /netboot/centos/6.3/i386/vmlinuz
    APPEND console=tty0 console=ttyS0,9600N1 initrd=/netboot/centos/6.3/i386/initrd.img ks=http://172.20.30.1/netboot/centos/6.3/i386/centos6.3-ks.cfg  ksdevice=link

Share the internet connection with clients

[codesyntax lang="bash"]

vim /etc/sysctl.conf
:%s/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1
:wq

[/codesyntax]

Apply the settings:
[codesyntax lang="bash"]

sysctl -p

[/codesyntax]

Share internet connection using iptables:
[codesyntax lang="bash"]

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

[/codesyntax]

Configure bind9

[codesyntax lang="bash"]

echo "include \"/etc/bind/bind.keys\"; ">> /etc/bind/named.conf

vim /etc/bind/named.conf.options

[/codesyntax]

options {
        directory "/var/cache/bind";
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { none; };
        forwarders { 8.8.8.8; 8.8.4.4; };
        listen-on port 53 { any; };
        allow-query { any; };
        allow-query-cache { any; };
};

Add the following lines at the end of the named.conf.default-zones
[codesyntax lang="bash"]

vim /etc/bind/named.conf.default-zones

[/codesyntax]

zone "test.org" {
        type master;
        file "/etc/bind/test.org";
};

zone "30.20.172.in-addr.arpa" {
        type master;
        file "/etc/bind/30.20.172.in-addr.arpa";
};

[codesyntax lang="bash"]

vim /etc/bind/test.org

[/codesyntax]

$ORIGIN test.org.

$TTL 1H

test.org.          IN SOA ns.test.org. root.test.org. (
                                2012062600      ; serial
                                12H             ; refresh
                                2H              ; retry
                                1W              ; expiry
                                2D )            ; minimum

test.org.      IN    NS   ns.test.org.

ns.test.org.   IN    A    172.20.30.1

www10          IN    A    172.20.30.10
www11          IN    A    172.20.30.11
www12          IN    A    172.20.30.12
www13          IN    A    172.20.30.13
www14          IN    A    172.20.30.14
www15          IN    A    172.20.30.15

[codesyntax lang="bash"]

vim /etc/bind/30.20.172.in-addr.arpa

[/codesyntax]

$ORIGIN 30.20.172.in-addr.arpa.

$TTL 2D

@          IN SOA ns.test.org. root.test.org. (
                                2012062600      ; serial
                                12H             ; refresh
                                2H              ; retry
                                1W              ; expiry
                                2D )            ; minimum

@     IN    NS     ns.test.org.

1     IN    PTR    ns.test.org.

10    IN    PTR    www10.test.org.
11    IN    PTR    www11.test.org.
12    IN    PTR    www12.test.org.
13    IN    PTR    www13.test.org.
14    IN    PTR    www14.test.org.
15    IN    PTR    www15.test.org.
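The serial in the SOA records above follows the common YYYYMMDDnn convention; when you change a zone, the serial must increase or the update will not propagate. A small helper to compute the next serial (a sketch; pure shell arithmetic):

```bash
# bump a YYYYMMDDnn zone serial: same day -> increment, new day -> restart at 00
serial=2012062600                  # current serial from the zone file
today=$(date +%Y%m%d)

if [ "${serial%??}" = "$today" ]; then
    new_serial=$((serial + 1))
else
    new_serial="${today}00"
fi

echo "$new_serial"
```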

Let's use our DNS server

[codesyntax lang="bash"]

echo "search test.org" > /etc/resolv.conf
echo "nameserver 127.0.0.1" >> /etc/resolv.conf

[/codesyntax]

How to configure bind on CentOS 6.3

DNS stands for Domain Name System and is a hierarchical, distributed naming system for computers, services, or any resource connected to the Internet or a private network. In other words, DNS translates human-readable hostnames such as test.org into machine-readable IP addresses such as 89.36.25.239.

Preliminary notes
- Server Name: ns.test.org
- Server IP: 172.20.30.1/24

Install required software packages
[codesyntax lang="bash"]

yum install bind bind-libs bind-utils

[/codesyntax]

Set BIND service to start on system boot
[codesyntax lang="bash"]

chkconfig named on

[/codesyntax]

Start the named service to generate some default configuration files.
[codesyntax lang="bash"]

/etc/init.d/named start

[/codesyntax]

Note: In case the command above hangs, there is an entropy problem and you should install the haveged daemon (more details in the entropy section below).

If you don't want to install the haveged daemon, there is a workaround:
[codesyntax lang="bash"]

rndc-confgen -a -r /dev/urandom

[/codesyntax]

Edit the main configuration file and add the zone entries for test.org
[codesyntax lang="bash"]

vim /etc/named.conf

[/codesyntax]

options {
        forwarders { 8.8.8.8; 8.8.4.4; };
        listen-on port 53 { any; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { any; };
        allow-query-cache { any; };
};

logging {
        channel default_debug {
            file "data/named.run";
            severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};
zone "test.org" {
        type master;
        file "test.org";
};
zone "30.20.172.in-addr.arpa" {
        type master;
        file "30.20.172.in-addr.arpa";
};

Create the zone files that we referenced in named.conf
[codesyntax lang="bash"]

cd /var/named
vim /var/named/test.org

[/codesyntax]

$ORIGIN test.org.

$TTL 1H

test.org.          IN SOA ns.test.org. root.test.org. (
                                2012062600      ; serial
                                12H             ; refresh
                                2H              ; retry
                                1W              ; expiry
                                2D )            ; minimum

test.org.       IN    NS   ns.test.org.

ns.test.org.    IN    A    172.20.30.1

www10          IN    A    172.20.30.10
www11          IN    A    172.20.30.11
www12          IN    A    172.20.30.12
www13          IN    A    172.20.30.13
www14          IN    A    172.20.30.14
www15          IN    A    172.20.30.15

[codesyntax lang="bash"]

vim /var/named/30.20.172.in-addr.arpa

[/codesyntax]

$ORIGIN 30.20.172.in-addr.arpa.

$TTL 2D

@          IN SOA ns.test.org. root.test.org. (
                                2012062600      ; serial
                                12H             ; refresh
                                2H              ; retry
                                1W              ; expiry
                                2D )            ; minimum

@     IN    NS     ns.test.org.

1     IN    PTR    ns.test.org.

10    IN    PTR    www10.test.org.
11    IN    PTR    www11.test.org.
12    IN    PTR    www12.test.org.
13    IN    PTR    www13.test.org.
14    IN    PTR    www14.test.org.
15    IN    PTR    www15.test.org.

Restart named service
[codesyntax lang="bash"]

/etc/init.d/named restart

[/codesyntax]

Update /etc/resolv.conf file
[codesyntax lang="bash"]

echo "search test.org" > /etc/resolv.conf
echo "nameserver 127.0.0.1" >> /etc/resolv.conf

[/codesyntax]

Source: http://www.broexperts.com/2012/03/linux-dns-bind-configuration-on-centos-6-2/

Entropy on linux

Introduction
How to check entropy level:

[codesyntax lang="bash"]

watch -n1 cat /proc/sys/kernel/random/entropy_avail

[/codesyntax]

If the value is too low (around 100), install the haveged daemon.
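A one-shot version of the same check, with an arbitrary warning threshold:

```bash
# read the current size of the kernel entropy pool (Linux only)
entropy=$(cat /proc/sys/kernel/random/entropy_avail)

# 200 is an arbitrary threshold; tune to taste
if [ "$entropy" -lt 200 ]; then
    echo "low entropy ($entropy) -- consider installing haveged"
else
    echo "entropy looks fine ($entropy)"
fi
```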

How to install haveged on Linux

Debian
[codesyntax lang="bash"]

 apt-get install haveged

[/codesyntax]

CentOS
The haveged daemon is available for CentOS via the EPEL repository.
First, download and import the GPG key for EPEL software packages:

[codesyntax lang="bash"]

wget http://ftp.riken.jp/Linux/fedora/epel/RPM-GPG-KEY-EPEL-6
rpm --import RPM-GPG-KEY-EPEL-6
rm -f RPM-GPG-KEY-EPEL-6

[/codesyntax]

Download EPEL repository for 32-bit CentOS
[codesyntax lang="bash"]

 wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm

[/codesyntax]
Install EPEL repository on 32-bit CentOS
[codesyntax lang="bash"]

 rpm -ivh epel-release-6-7.noarch.rpm

[/codesyntax]
You can now use the EPEL repository to install software packages by invoking yum as follows:
[codesyntax lang="bash"]

 yum --enablerepo=epel install haveged

[/codesyntax]

How to install smartmontools on Citrix XenServer

Introduction

Citrix XenServer 5.5, 5.6 and 6.0 are based on CentOS 5.4, but the smartmontools package is not available in the default Citrix repository, so it can't be installed with yum install smartmontools (as would be possible on any regular CentOS installation).

Installation

It is probably safest to use the same versions that were in the original CentOS 5.4 distribution. Since that release is no longer the latest, we have to go to an archive server to find the packages. The packages listed below are actually used in other CentOS releases too, so these are probably the latest versions anyway.

[codesyntax lang="bash"]

wget http://vault.centos.org/5.4/os/i386/CentOS/mailx-8.1.1-44.2.2.i386.rpm
wget http://vault.centos.org/5.4/os/i386/CentOS/smartmontools-5.38-2.el5.i386.rpm

rpm -hiv smartmontools-5.38-2.el5.i386.rpm mailx-8.1.1-44.2.2.i386.rpm

[/codesyntax]

Checking the disk status

We can now retrieve the disk status using:

[codesyntax lang="bash"]

smartctl -d ata -a /dev/sda

[/codesyntax]

The most important field is SMART overall-health self-assessment test result, which should always have the value PASSED. Other important fields are:

  • Reallocated_Sector_Ct, which counts the number of bad blocks that have been reallocated. It should be a low number. If this value increases, it is an alarm signal. Make a backup and replace the disk drive.
  • Current_Pending_Sector, which is the number of blocks with read errors that are not yet reallocated.
  • Offline_Uncorrectable, which counts sectors that could not be read or corrected during offline testing.

Also check out the columns VALUE WORST THRESH. For each attribute, the current value of the field should never be lower than the threshold defined by the manufacturer.
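As a concrete illustration, the VALUE/THRESH comparison can be scripted. The attribute line below is sample output in smartctl's column layout, not taken from a live disk:

```bash
# sample smartctl attribute line (columns: ID NAME FLAG VALUE WORST THRESH ...)
line='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0'

# pull out the current value and the manufacturer threshold
value=$(echo "$line" | awk '{print $4}')
thresh=$(echo "$line" | awk '{print $6}')

if [ "$value" -le "$thresh" ]; then
    echo "ALERT: value $value at or below threshold $thresh"
else
    echo "ok: value $value above threshold $thresh"
fi
```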

Automatic monitoring of disk drives

The smartd daemon handles automatic testing for all drives, logs any status changes in /var/log/syslog or /var/log/messages and sends a status email in case of a problem (if mail is enabled, see below). It is configured in the file /etc/smartd.conf.

The following lines will run a short test every day between 02:00 and 03:00, and a long test every Saturday between 03:00 and 04:00. If there is a problem, an email is sent to the configured address. The -M test option makes smartd send a test email whenever the daemon is started.

The DEVICESCAN line would normally cause default test runs for all disks that smartd finds, but does not work on my system for some reason. So it is commented out and the tests will only run for explicitly listed devices.

[codesyntax lang="bash"]

/dev/sda -d ata -a -s (S/../.././02|L/../../6/03) -t -m user@example.com
/dev/sdb -d ata -a -s (S/../.././02|L/../../6/03) -t -m user@example.com

#DEVICESCAN -H -m root

[/codesyntax]

After any change to the /etc/smartd.conf file the smartd daemon should be restarted:

[codesyntax lang="bash"]

/etc/init.d/smartd restart

[/codesyntax]

Enabling email on Citrix XenServer

Citrix XenServer is not configured to run a mail server. Therefore, without further configuration, smartd might attempt to send out warning emails in case anything fails, but no mails will actually reach their destination.

It is fortunately not necessary to install a full-blown email package. XenServer comes with ssmtp preinstalled, which simply forwards emails to a real mail server.

To enable mail sending on Citrix XenServer, set up /etc/ssmtp/ssmtp.conf. You need to provide a real mail server and the local domain name.

#
# /etc/ssmtp.conf -- a config file for sSMTP sendmail.
#

root=postmaster
mailhub=relay.example.com
rewriteDomain=nxen01.example.com
hostname=nxen01.example.com

After setting up /etc/ssmtp/ssmtp.conf, send a test email from the console to make sure that the email gets through:

[codesyntax lang="bash"]

echo "this is a test mail" | mailx -s "Test mail" user@example.com

[/codesyntax]

If you are using the -M test option in a /etc/smartd.conf device definition, you can also restart the smartd daemon to have it send out test emails.

Source: http://www.schirmacher.de/display/INFO/Install+smartmontools+on+Citrix+XenServer

MDADM Cheat Sheet

Create a new RAID array

Create (mdadm --create) is used to create a new array:
[codesyntax lang="bash"]

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

[/codesyntax]

or using the compact notation:
[codesyntax lang="bash"]

mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1

[/codesyntax]

/etc/mdadm.conf

/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian) is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:
[codesyntax lang="bash"]

mdadm --detail --scan >> /etc/mdadm.conf

[/codesyntax]

or on Debian
[codesyntax lang="bash"]

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

[/codesyntax]

Remove a disk from an array

We can't remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually failed, it is normally already in the failed state and this step is not needed):
[codesyntax lang="bash"]

mdadm --fail /dev/md0 /dev/sda1

[/codesyntax]

and now we can remove it:
[codesyntax lang="bash"]

mdadm --remove /dev/md0 /dev/sda1

[/codesyntax]

This can be done in a single step using:
[codesyntax lang="bash"]

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

[/codesyntax]

Add a disk to an existing array

We can add a new disk to an array (probably to replace a failed one):
[codesyntax lang="bash"]

mdadm --add /dev/md0 /dev/sdb1

[/codesyntax]

Verifying the status of the RAID arrays

We can check the status of the arrays on the system with:
[codesyntax lang="bash"]

cat /proc/mdstat

[/codesyntax]

or

[codesyntax lang="bash"]

mdadm --detail /dev/md0

[/codesyntax]

The output of cat /proc/mdstat will look like:

[codesyntax lang="bash"]

Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
      19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
      223504192 blocks [2/2] [UU]

[/codesyntax]

Here we can see that both drives are in use and working fine ([UU]). A failed drive is marked with (F), while a degraded array shows the missing disk as an underscore ([U_]).

Note: while a RAID array is rebuilding, monitoring its status using watch can be useful:
[codesyntax lang="bash"]

watch -n1 cat /proc/mdstat

[/codesyntax]
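Detecting a degraded array can be automated by grepping for the underscore marker. The mdstat text below is sample data, not read from a live system:

```bash
# sample /proc/mdstat content with one degraded array ([U_])
# (on a real system: mdstat=$(cat /proc/mdstat))
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md1 : active raid1 sda3[0]
      19542976 blocks [2/1] [U_]'

# a degraded array shows an underscore inside the [..] status field
if echo "$mdstat" | grep -q '\[U*_'; then
    echo "degraded array detected"
else
    echo "all arrays healthy"
fi
```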

Checking a Linux MD RAID array

In this example my RAID array will be /dev/md0

[codesyntax lang="bash"]

cd /sys/block/md0/md
echo check >sync_action
watch -n1 cat /proc/mdstat

[/codesyntax]

OR:

[codesyntax lang="bash"]

/usr/share/mdadm/checkarray -a /dev/md0
watch -n1 cat /proc/mdstat

[/codesyntax]

Note: if you receive the following error message:

[codesyntax lang="bash"]

checkarray: W: array md0 in auto-read-only state, skipping...

[/codesyntax]

Then you should do this:
[codesyntax lang="bash"]

mdadm --readwrite /dev/md0

[/codesyntax]

Stop and delete a RAID array

If we want to completely remove a RAID array, we have to stop it first and then remove it:
[codesyntax lang="bash"]

mdadm --stop /dev/md0
mdadm --remove /dev/md0

[/codesyntax]

and finally we can even delete the superblock from the individual drives:
[codesyntax lang="bash"]

mdadm --zero-superblock /dev/sda

[/codesyntax]

Finally, when using RAID1 arrays, where we create identical partitions on both drives, it can be useful to copy the partition table from sda to sdb:
[codesyntax lang="bash"]

sfdisk -d /dev/sda | sfdisk /dev/sdb

[/codesyntax]

(this dumps the partition table of sda and completely overwrites the existing partitions on sdb, so be sure you want this before running the command, as it will not warn you at all).

There are many other uses of mdadm, particular to each RAID level, and I would recommend consulting the manual page (man mdadm) or the built-in help (mdadm --help) if you need more details. Hopefully these quick examples will put you on the fast track with mdadm.

Source: This info is taken from here and here