How to set up multiple monitors in Linux (using xrandr)

If you only need to set up your displays once a week or so, using the GUI is just fine. I had to do it every morning, and after a while it became really annoying.

Turn off the HDMI display:
[codesyntax lang="bash"]

xrandr --output HDMI1 --off

[/codesyntax]

Turn on the HDMI display and set it as primary display:
[codesyntax lang="bash"]

xrandr --output HDMI1 --mode 1920x1080 --primary --left-of eDP1

[/codesyntax]
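To avoid typing even these two commands every morning, the decision can be scripted. A minimal sketch (the output names HDMI1/eDP1 are from my laptop; check yours with `xrandr -q`):

```bash
# Emit the right xrandr command for the current situation.
# Reads `xrandr -q` output on stdin, so the logic is easy to test.
layout_cmd() {
    if grep -q '^HDMI1 connected'; then
        echo "xrandr --output HDMI1 --mode 1920x1080 --primary --left-of eDP1"
    else
        echo "xrandr --output HDMI1 --off"
    fi
}

# Usage: eval "$(xrandr -q | layout_cmd)"
```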

Marvel indices taking up a lot of space? Delete indices older than 30 days!

It looks like Marvel generates some data every day. Is there a way to reduce the amount of data generated by Marvel? The short answer is: yes! The commands below show, close, and finally delete all .marvel indices older than 30 days.

[codesyntax lang="bash"]

curator --host 127.0.0.1 show indices --older-than 30 --time-unit days --timestring '%Y.%m.%d' --prefix .marvel
curator --host 127.0.0.1 close indices --older-than 30 --time-unit days --timestring '%Y.%m.%d' --prefix .marvel
curator --host 127.0.0.1 delete indices --older-than 30 --time-unit days --timestring '%Y.%m.%d' --prefix .marvel

[/codesyntax]

If we want to remove all indices from February 2015:

[codesyntax lang="bash"]

curator --host 127.0.0.1 show indices --regex '\.marvel-2015\.02\..*'
curator --host 127.0.0.1 close indices --regex '\.marvel-2015\.02\..*'
curator --host 127.0.0.1 delete indices --regex '\.marvel-2015\.02\..*'

[/codesyntax]
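To keep the Marvel indices from piling up again, the close/delete pair can run from cron. A sketch (the curator path and the retention windows are my assumptions; adjust to taste):

```bash
# /etc/cron.d/curator-marvel
0 2 * * *  root /usr/local/bin/curator --host 127.0.0.1 close indices --older-than 30 --time-unit days --timestring '%Y.%m.%d' --prefix .marvel
10 2 * * * root /usr/local/bin/curator --host 127.0.0.1 delete indices --older-than 60 --time-unit days --timestring '%Y.%m.%d' --prefix .marvel
```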

nagios-nrpe-server: Ignores dont_blame_nrpe=1

We have Debian Wheezy running on all our production machines. Debian Jessie has been out for a while now and I wanted to give it a try...
An upgrade like this should be painless and without any major problems.

I installed a machine with Debian Jessie and made it production-ready by running some Ansible playbooks against it. More or less everything was fine, except the Nagios NRPE server. This is what I noticed in the log:

Oct 30 17:08:36 was1 nrpe[14125]: Error: Request contained command arguments!
Oct 30 17:08:36 was1 nrpe[14125]: Client request was invalid, bailing out...

After digging a little bit I figured out what happened:
the Debian nagios-nrpe-server package is compiled without --enable-command-args, because command arguments are a known security risk and the feature is often used incorrectly.

The quickest fix was to recompile the package with --enable-command-args and reinstall it.

  • Get the source and install the build dependencies.

[codesyntax lang="bash"]

apt-get source nagios-nrpe-server
apt-get install libssl-dev dpatch debhelper libwrap0-dev autotools-dev build-essential fakeroot
apt-get build-dep nagios-nrpe-server
ln -s /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/libssl.so

[/codesyntax]

  • Edit nagios-nrpe-2.15/debian/rules and, just after the --enable-ssl \ line, insert --enable-command-args \ (it should be line number 16). Or, with sed:

[codesyntax lang="bash"]

sed -i '16i\\t\t--enable-command-args \\' nagios-nrpe-2.15/debian/rules

[/codesyntax]

  • Build the package

[codesyntax lang="bash"]

cd nagios-nrpe-2.15/
dpkg-buildpackage -rfakeroot

[/codesyntax]

  • Install the newly created package

[codesyntax lang="bash"]

cd ..
dpkg -i nagios-nrpe-server_2.15-1_amd64.deb

[/codesyntax]

Note: in case you're interested in the whole story, please read this.
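One caveat: the next apt upgrade will happily replace the hand-built package with the stock one. Pinning it avoids the surprise (a sketch; the dpkg variant works on any Debian release):

```bash
# keep apt from overwriting the rebuilt package on the next upgrade
apt-mark hold nagios-nrpe-server
# equivalent, using dpkg directly:
# echo "nagios-nrpe-server hold" | dpkg --set-selections
# ... and to release it later:
# apt-mark unhold nagios-nrpe-server
```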

Dumping multiple MySQL tables at once

It can happen that we have a MySQL database with a lot of tables and we want to dump all of them, but we also want the logical dump to be faster. Some of you will say that mydumper is the solution. It could be. But what if we don't have permission to install it, or whatever?

Let's say we have these tables and we need to dump all of them:

[codesyntax lang="bash"]

MariaDB [database]> show tables like "objects_2015%";
+-----------------------------------------------+
| Tables_in_xxxxxxxxxx_logs (objects_2015%)     |
+-----------------------------------------------+
| objects_2015_12_01                            |
| objects_2015_12_02                            |
| objects_2015_12_03                            |
| objects_2015_12_04                            |
| objects_2015_12_05                            |
| objects_2015_12_06                            |
| objects_2015_12_07                            |
| objects_2015_12_08                            |
| objects_2015_12_09                            |
| objects_2015_12_10                            |
| objects_2015_12_11                            |
| objects_2015_12_12                            |
| objects_2015_12_13                            |
| objects_2015_12_14                            |
| objects_2015_12_15                            |
| objects_2015_12_16                            |
| objects_2015_12_17                            |
| objects_2015_12_18                            |
| objects_2015_12_19                            |
| objects_2015_12_20                            |
| objects_2015_12_21                            |
| objects_2015_12_22                            |
| objects_2015_12_23                            |
| objects_2015_12_24                            |
| objects_2015_12_25                            |
| objects_2015_12_26                            |
| objects_2015_12_27                            |
| objects_2015_12_28                            |
| objects_2015_12_29                            |
| objects_2015_12_30                            |
| objects_2015_12_31                            |
+-----------------------------------------------+
74 rows in set (0.00 sec)

[/codesyntax]

Dumping, let's say, 6 tables in parallel will speed up the dump process.
Note: too many mysqldump instances can overload the server and actually make the dump slower (and if the server is a production server, this can have quite an impact on performance).

First, generate the list of commands to execute:

[codesyntax lang="bash"]

for i in $(mysql -h host -uuser -ppassword database -Bsqe "show tables like \"objects_2015%\""); do echo "mysqldump -h host -uuser -ppassword --add-drop-table --quick --quote-names --disable-keys --extended-insert database $i | gzip > /path/to/backup/$i.sql.gz"; done > /tmp/things_to_do/tables.txt

[/codesyntax]

Start the dump process:
[codesyntax lang="bash"]

parallel -j 6 < /tmp/things_to_do/tables.txt

[/codesyntax]
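If GNU parallel is not installed on the box, xargs -P can drive the same fan-out. A sketch that factors the command generation into a function (host/user/password/database and the backup path are placeholders, as above):

```bash
# Turn a list of table names (stdin) into mysqldump|gzip command lines.
dump_cmds() {
    while read -r t; do
        echo "mysqldump -h host -uuser -ppassword --add-drop-table --quick --quote-names --disable-keys --extended-insert database $t | gzip > /path/to/backup/$t.sql.gz"
    done
}

# mysql -h host -uuser -ppassword database -Bse 'show tables like "objects_2015%"' \
#     | dump_cmds | xargs -P 6 -I CMD sh -c CMD
```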

Kill all MySQL queries with a query time greater than 1 minute

There are two approaches to achieve this. One is to use pt-kill from the Percona Toolkit; the other is a bash one-liner with a lot of pipes :)
Why would anyone use the second approach? Perhaps because the Percona Toolkit is not available on the box.

[codesyntax lang="bash"]

for i in $(mysql -e "show processlist" | egrep -v "system user" | grep '^[0-9]' | awk '{print $6" "$1}' | sort -nr -k1 | awk -v threshold="60" '$1 > threshold' | awk '{print $2}'); do mysql -e "kill $i"; done

[/codesyntax]

If you can/want to use pt-kill and don't know how, please read this.
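For the curious, the pipe chain above can be collapsed into a single awk and wrapped in a function, which also makes it easy to test against a canned `show processlist` output (a sketch; the threshold is in seconds, and like the one-liner above it does not distinguish queries from sleeping connections):

```bash
# Print the IDs of processlist entries running longer than $1 seconds.
# Expects `mysql -e "show processlist"` output on stdin.
long_running_ids() {
    grep -v "system user" | grep '^[0-9]' \
        | awk -v threshold="$1" '$6 > threshold {print $1}'
}

# for i in $(mysql -e "show processlist" | long_running_ids 60); do
#     mysql -e "kill $i"
# done
```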

Natural Sorting in MySQL

The data:

select id, name from Object where name like "dbrferrari%";
+-----+--------------+
| id  | name         |
+-----+--------------+
|   8 | dbrferrari5  |
|   9 | dbrferrari6  |
|  25 | dbrferrari1  |
|  26 | dbrferrari2  |
|  35 | dbrferrari3  |
|  64 | dbrferrari4  |
|  80 | dbrferrari7  |
|  99 | dbrferrari11 |
| 101 | dbrferrari8  |
| 102 | dbrferrari10 |
| 133 | dbrferrari12 |
| 134 | dbrferrari15 |
| 135 | dbrferrari14 |
| 199 | dbrferrari9  |
| 200 | dbrferrari16 |
| 202 | dbrferrari18 |
| 211 | dbrferrari13 |
+-----+--------------+

The problem:

select id, name from Object where name like "dbrferrari%" order by name asc;
+-----+--------------+
| id  | name         |
+-----+--------------+
|  25 | dbrferrari1  |
| 102 | dbrferrari10 |
|  99 | dbrferrari11 |
| 133 | dbrferrari12 |
| 211 | dbrferrari13 |
| 135 | dbrferrari14 |
| 134 | dbrferrari15 |
| 200 | dbrferrari16 |
| 202 | dbrferrari18 |
|  26 | dbrferrari2  |
|  35 | dbrferrari3  |
|  64 | dbrferrari4  |
|   8 | dbrferrari5  |
|   9 | dbrferrari6  |
|  80 | dbrferrari7  |
| 101 | dbrferrari8  |
| 199 | dbrferrari9  |
+-----+--------------+

The solution:

select id, name from Object where name like "dbrferrari%" order by LENGTH(name), name asc;
+-----+--------------+
| id  | name         |
+-----+--------------+
|  25 | dbrferrari1  |
|  26 | dbrferrari2  |
|  35 | dbrferrari3  |
|  64 | dbrferrari4  |
|   8 | dbrferrari5  |
|   9 | dbrferrari6  |
|  80 | dbrferrari7  |
| 101 | dbrferrari8  |
| 199 | dbrferrari9  |
| 102 | dbrferrari10 |
|  99 | dbrferrari11 |
| 133 | dbrferrari12 |
| 211 | dbrferrari13 |
| 135 | dbrferrari14 |
| 134 | dbrferrari15 |
| 200 | dbrferrari16 |
| 202 | dbrferrari18 |
+-----+--------------+
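As an aside, the LENGTH trick works here because every name shares the same non-numeric prefix; with mixed prefixes you need a more general natural sort. Outside MySQL, GNU sort has this built in as "version sort":

```bash
# sort -V orders embedded numbers numerically instead of lexically
printf 'dbrferrari10\ndbrferrari2\ndbrferrari1\n' | sort -V
# dbrferrari1
# dbrferrari2
# dbrferrari10
```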

How to add debug symbols for MariaDB Debian/Ubuntu packages

I don't know about other distributions, but I know that the debug symbols are stripped from the Debian/Ubuntu packages. If any crashes are reported, I won't be able to fully analyze them. The only way to fix this problem is to build the packages again.

[codesyntax lang="bash"]

git clone https://github.com/MariaDB/server.git
cd server/
git branch -a
git checkout 10.0

apt-get install libdistro-info-perl
apt-get install fakeroot
apt-get install libreadline-gplv2-dev libpam0g-dev dpatch libjemalloc-dev
apt-get install libboost-all-dev libjudy-dev libjudydebian1
apt-get install build-essential dpkg-dev devscripts hardening-wrapper
apt-get build-dep mysql-server

patch -p1 < /path/to/patch.txt

./debian/autobake-deb.sh

[/codesyntax]

Note: Here is the patch.txt

mysql hot backup with xtrabackup

I want to back up my databases without downtime... Well, there are a couple of approaches to this. One is to add a standby slave and use the LVM snapshot approach. The other is Percona XtraBackup. My problem is that I have some huge databases here, so keeping local backups is out of the question.

But, for the record, this is how to do a local hot backup.

1. Take the backup
[codesyntax lang="bash"]

time innobackupex --user=user --password=password --no-timestamp --rsync --slave-info --safe-slave-backup /path/mysql

[/codesyntax]

2. At this point the data is not ready to be restored: there might be uncommitted transactions to be undone, or transactions in the logs to be replayed. Performing those pending operations makes the data files consistent, and that is the purpose of the prepare stage. Once this has been done, the data is ready to be used.

[codesyntax lang="bash"]

time innobackupex --apply-log --use-memory=12G /path/mysql

[/codesyntax]

Since in my case storing the backup locally (even temporarily) is not an option, I need to ship the whole thing to the storage server.
The lame solution would be NFS... but c'mon... grrrr... no!

The right solution is to stream the backup directly to the storage server; to achieve this I am using netcat. To gain some more performance, I am also compressing the data on the fly.

1. On the target machine
[codesyntax lang="bash"]

nc -l -p 9999 | qpress -dio | xbstream -x -C /path/to/mysql

[/codesyntax]

2. On the source machine
[codesyntax lang="bash"]

time innobackupex --user=user --password=pass --no-timestamp --slave-info --parallel=$((`nproc`-2)) --safe-slave-backup --stream=xbstream /data/db/mysql | qpress -io something | nc destination 9999

[/codesyntax]

3. On the target machine
Apply the transaction log in order to make the backup consistent.

[codesyntax lang="bash"]

time innobackupex --apply-log --use-memory=120G /path/to/mysql

[/codesyntax]

Note: the --use-memory option speeds up the prepare stage by letting it use more memory (the default is 100MB, which is quite a small value).
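If qpress is not available on both ends, gzip (or pigz, its multi-threaded sibling) drops into the same pipeline; only the compression stages change. A sketch, same assumptions as above:

```bash
# on the target machine
nc -l -p 9999 | pigz -d | xbstream -x -C /path/to/mysql

# on the source machine
innobackupex --user=user --password=pass --no-timestamp --slave-info \
    --safe-slave-backup --stream=xbstream /data/db/mysql | pigz | nc destination 9999
```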

Configuring DRAC with ipmitool

  • Load modules

[codesyntax lang="bash"]

modprobe ipmi_devintf
modprobe ipmi_si

[/codesyntax]

  • List of helpful ipmitool commands

Check BMC Firmware Revision
[codesyntax lang="bash"]

ipmitool -I open bmc info | grep -A3 "Firmware Revision"

[/codesyntax]

Check SEL log
[codesyntax lang="bash"]

ipmitool sel

[/codesyntax]

List SEL log
[codesyntax lang="bash"]

ipmitool sel list

[/codesyntax]

Check which node you are in [For Dell Cloud edge]
[codesyntax lang="bash"]

ipmitool raw 0x34 0x11

[/codesyntax]

Reset BMC/DRAC to default
[codesyntax lang="bash"]

ipmitool mc reset cold

[/codesyntax]
Note: this will also fix the "No more sessions are available for this type of connection!" error when trying to connect to the DRAC.

  • Configure DRAC from ipmitool

Set BMC/DRAC static IP
[codesyntax lang="bash"]

ipmitool lan set 1 ipsrc static

[/codesyntax]

Set BMC/DRAC IP Address
[codesyntax lang="bash"]

ipmitool lan set 1 ipaddr <ip-address>

[/codesyntax]

Set BMC/DRAC Subnet Mask
[codesyntax lang="bash"]

ipmitool lan set 1 netmask <netmask>

[/codesyntax]

Set BMC/DRAC Default Gateway
[codesyntax lang="bash"]

ipmitool lan set 1 defgw ipaddr <gateway-ip>

[/codesyntax]

Set BMC/DRAC dhcp IP
[codesyntax lang="bash"]

ipmitool lan set 1 ipsrc dhcp

[/codesyntax]

Display BMC/DRAC network settings
[codesyntax lang="bash"]

ipmitool lan print 1

[/codesyntax]

Change the NIC settings to dedicated
[codesyntax lang="bash"]

ipmitool raw 0x30 0x24 2

[/codesyntax]

Change the NIC settings to shared
[codesyntax lang="bash"]

ipmitool raw 0x30 0x24 0

[/codesyntax]

Check the NIC settings
[codesyntax lang="bash"]

ipmitool raw 0x30 0x25

[/codesyntax]

Restart the BMC/DRAC
[codesyntax lang="bash"]

ipmitool mc reset warm

[/codesyntax]

Example:
[codesyntax lang="bash"]

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.10.17.37
ipmitool lan set 1 netmask 255.255.240.0
ipmitool lan set 1 defgw ipaddr 10.10.16.1
# or, to switch back to DHCP:
ipmitool lan set 1 ipsrc dhcp
watch -n1 ipmitool lan print 1

[/codesyntax]
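While at it, the DRAC user and password can also be set from the OS. A sketch; user ID 2 is usually root on Dell boxes, but verify with the first command before changing anything:

```bash
ipmitool user list 1
ipmitool user set name 2 root
ipmitool user set password 2 MySecretPass
ipmitool user enable 2
ipmitool channel setaccess 1 2 link=on ipmi=on callin=on privilege=4
```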

How to recover the space used by UNDO_LOG in InnoDB tablespaces

The panic started when the monitoring server sent an alert about the storage on one of our MySQL servers, saying that the disk was about to fill up.

I realized that a lot of the disk space was used by InnoDB's shared tablespace, ibdata1. But wait... I have innodb_file_per_table enabled. So what is stored in ibdata1?

According to this serverfault post, ibdata1 contains vital InnoDB information:

  • Table Data Pages
  • Table Index Pages
  • Data Dictionary
  • MVCC Control Data
    • Undo Space
    • Rollback Segments
  • Double Write Buffer (Pages Written in the Background to avoid OS caching)
  • Insert Buffer (Changes to Secondary Indexes)

Next step was to download the InnoDB Ruby Tools made by Jeremy Cole and to check what is being stored in the ibdata1.

[codesyntax lang="bash"]

ruby ~/innodb_ruby/bin/innodb_space -f ibdata1,data1,data2 space-page-type-summary
type                count       percent     description
UNDO_LOG            1969570     83.55       Undo log
INDEX               230460      9.78        B+Tree index
IBUF_FREE_LIST      115168      4.89        Insert buffer free list
INODE               22337       0.95        File segment inode
ALLOCATED           19424       0.82        Freshly allocated
IBUF_BITMAP         143         0.01        Insert buffer bitmap
XDES                142         0.01        Extent descriptor
SYS                 130         0.01        System internal
FSP_HDR             1           0.00        File space header
TRX_SYS             1           0.00        Transaction system header

[/codesyntax]

So, there are 1969570 UNDO_LOG pages, which is almost 84% of the tablespace...

Now, according to some stackoverflow posts, shrinking or purging the ibdata1 file is NOT possible without doing a logical dump/import.

Well... I managed to reclaim that space!

  • I dumped the structure of the databases

[codesyntax lang="bash"]

mysqldump --no-data database1 > database1.sql
mysqldump --no-data database2 > database2.sql

[/codesyntax]

  • I stopped the mysql server

[codesyntax lang="bash"]

/etc/init.d/mysql stop

[/codesyntax]

  • I renamed the mysql dir (/data/db/mysql in my case)

[codesyntax lang="bash"]

mv /data/db/mysql /data/db/mysql.bak
mkdir -vp /data/db/mysql
cp -av /data/db/mysql.bak/mysql /data/db/mysql/
cp -av /data/db/mysql.bak/{aria_log.00000001,aria_log_control} /data/db/mysql/

[/codesyntax]

  • I did some clean up

[codesyntax lang="bash"]

rm -vf /data/db/mysql/mysql/innodb_{index,table}_stats.*
rm -vf /data/db/mysql/mysql/gtid_slave_pos.*

[/codesyntax]

  • Last few things before starting mysql

[codesyntax lang="bash"]

mkdir -vp /data/db/mysql/tmp
chown mysql:mysql /data/ -R

[/codesyntax]

  • I started MySQL and ran mysql_upgrade on the tables (this is really necessary in order to recreate the innodb_{index,table}_stats tables).

[codesyntax lang="bash"]

/etc/init.d/mysql start
mysql_upgrade -uuser -ppassword --force
/etc/init.d/mysql restart

[/codesyntax]

  • No errors in the MySQL log, so I imported the dumps I created before

[codesyntax lang="bash"]

mysql database1 < database1.sql
mysql database2 < database2.sql

[/codesyntax]

  • I discarded the tablespace for every single table

[codesyntax lang="bash"]

alter table database1.table1 discard tablespace;
alter table database1.table2 discard tablespace;
alter table database1.tableX discard tablespace;

alter table database2.table1 discard tablespace;
alter table database2.table2 discard tablespace;
alter table database2.tableX discard tablespace;

[/codesyntax]
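Between the discard and the import, the saved .ibd files have to go back into place. In this layout that means copying them from the renamed datadir (a sketch following the paths above; file ownership matters):

```bash
cp -av /data/db/mysql.bak/database1/*.ibd /data/db/mysql/database1/
cp -av /data/db/mysql.bak/database2/*.ibd /data/db/mysql/database2/
chown -R mysql:mysql /data/db/mysql/database1/ /data/db/mysql/database2/
```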

  • I copied the data files (the .ibd files) back into place and imported the tablespace for every single table

[codesyntax lang="bash"]

alter table database1.table1 import tablespace;
alter table database1.table2 import tablespace;
alter table database1.tableX import tablespace;

alter table database2.table1 import tablespace;
alter table database2.table2 import tablespace;
alter table database2.tableX import tablespace;

[/codesyntax]
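With many tables, typing the discard/import statements by hand gets old; they can be generated from information_schema instead. A sketch (credentials and database names are placeholders):

```bash
mysql -BNe "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' IMPORT TABLESPACE;')
            FROM information_schema.tables
            WHERE engine = 'InnoDB'
              AND table_schema IN ('database1', 'database2')" > import_tablespaces.sql
```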

Good luck!