
nagios-nrpe-server: Ignores dont_blame_nrpe=1

We have Debian Wheezy running on all our production machines. Debian Jessie has been out for a while now and I wanted to give it a try...
An upgrade like this should be painless and without any major problems.

I installed a machine with Debian Jessie and made it ready for production by running some Ansible playbooks against it. More or less everything was fine, except for the Nagios NRPE server. This is what I noticed in the log:

Oct 30 17:08:36 was1 nrpe[14125]: Error: Request contained command arguments!
Oct 30 17:08:36 was1 nrpe[14125]: Client request was invalid, bailing out...

After digging a little bit I figured out what had happened: the nagios-nrpe-server Debian package is compiled without --enable-command-args, because of the security implications and because this feature is often used incorrectly.
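
Just to put things in context, command arguments only matter when nrpe.cfg contains something like the snippet below (the check_disk command definition is just an illustration, adjust it to whatever checks you actually use):

[codesyntax lang="bash"]

# /etc/nagios/nrpe.cfg
# allow the daemon to accept the arguments sent by check_nrpe
dont_blame_nrpe=1

# a command definition that actually uses those arguments
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$

[/codesyntax]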

The quickest fix for this was to recompile it with --enable-command-args and reinstall it.

  • Get the source and the build dependencies.

[codesyntax lang="bash"]

# fetch the source package and everything needed to build it
apt-get source nagios-nrpe-server
apt-get install libssl-dev dpatch debhelper libwrap0-dev autotools-dev build-essential fakeroot
apt-get build-dep nagios-nrpe-server
# work around configure not finding libssl in the multiarch path
ln -s /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/libssl.so

[/codesyntax]

  • Edit nagios-nrpe-2.15/debian/rules and, just after the --enable-ssl \ line, insert --enable-command-args \ (it should be line number 16):

[codesyntax lang="bash"]

# insert the flag in place at line 16 (two tabs plus a trailing backslash, to match the surrounding lines)
sed -i '16i\\t\t--enable-command-args \\' nagios-nrpe-2.15/debian/rules

[/codesyntax]
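
It doesn't hurt to double check that the flag landed where it should:

[codesyntax lang="bash"]

grep -n 'enable-command-args' nagios-nrpe-2.15/debian/rules

[/codesyntax]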

  • Build the package

[codesyntax lang="bash"]

cd nagios-nrpe-2.15/
dpkg-buildpackage -rfakeroot

[/codesyntax]
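
dpkg-buildpackage drops the resulting .deb files one directory up, so a quick listing shows what was built:

[codesyntax lang="bash"]

ls ../nagios-nrpe*.deb

[/codesyntax]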

  • Install the newly created package

[codesyntax lang="bash"]

cd ..
dpkg -i nagios-nrpe-server_2.15-1_amd64.deb

[/codesyntax]
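
After restarting the daemon, a check_nrpe call with arguments should no longer be rejected. The test below assumes check_nrpe is available locally (it comes with the nagios-nrpe-plugin package; normally you would run it from the monitoring server) and that a check_disk command definition like the one above exists. Putting the package on hold also keeps a later apt-get upgrade from silently replacing the recompiled version:

[codesyntax lang="bash"]

service nagios-nrpe-server restart
/usr/lib/nagios/plugins/check_nrpe -H 127.0.0.1 -c check_disk -a 20% 10% /

# keep apt from overwriting the custom build on the next upgrade
apt-mark hold nagios-nrpe-server

[/codesyntax]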

Note: in case you're interested in the whole story, please read this.

Dumping multiple mysql tables at once

It can happen that we have a MySQL database with a lot of tables and we want to dump all of them, but we also want to make the logical dump process faster. Some of you might say that mydumper is the solution. It could be. But what if we don't have the permission to install it, or whatever?

Let's say we have these tables and we need to dump all of them:

[codesyntax lang="bash"]

MariaDB [database]> show tables like "objects_2015%";
+-----------------------------------------------+
| Tables_in_xxxxxxxxxx_logs (objects_2015%)     |
+-----------------------------------------------+
| objects_2015_12_01                            |
| objects_2015_12_02                            |
| objects_2015_12_03                            |
| objects_2015_12_04                            |
| objects_2015_12_05                            |
| objects_2015_12_06                            |
| objects_2015_12_07                            |
| objects_2015_12_08                            |
| objects_2015_12_09                            |
| objects_2015_12_10                            |
| objects_2015_12_11                            |
| objects_2015_12_12                            |
| objects_2015_12_13                            |
| objects_2015_12_14                            |
| objects_2015_12_15                            |
| objects_2015_12_16                            |
| objects_2015_12_17                            |
| objects_2015_12_18                            |
| objects_2015_12_19                            |
| objects_2015_12_20                            |
| objects_2015_12_21                            |
| objects_2015_12_22                            |
| objects_2015_12_23                            |
| objects_2015_12_24                            |
| objects_2015_12_25                            |
| objects_2015_12_26                            |
| objects_2015_12_27                            |
| objects_2015_12_28                            |
| objects_2015_12_29                            |
| objects_2015_12_30                            |
| objects_2015_12_31                            |
+-----------------------------------------------+
74 rows in set (0.00 sec)

[/codesyntax]

Dumping, let's say, 6 tables in parallel will speed up the dump process.
Note: too many mysqldump instances could overload the server and make the dump process slower (if the server is a production server, this could have quite an impact on performance).

First, we build the list with all the mysqldump commands we need to execute:

[codesyntax lang="bash"]

for i in $(mysql -h host -uuser -ppassword database -Bsqe "show tables like \"objects_2015%\""); do echo "mysqldump -h host -uuser -ppassword --add-drop-table --quick --quote-names --disable-keys --extended-insert database $i | gzip > /path/to/backup/$i.sql.gz"; done > /tmp/things_to_do/tables.txt

[/codesyntax]
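
Before launching anything, a quick sanity check of the generated file doesn't hurt (the /tmp/things_to_do directory obviously has to exist):

[codesyntax lang="bash"]

wc -l /tmp/things_to_do/tables.txt
head -n 1 /tmp/things_to_do/tables.txt

[/codesyntax]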

Start the dump process:
[codesyntax lang="bash"]

parallel -j 6 < /tmp/things_to_do/tables.txt

[/codesyntax]
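
If GNU parallel is not installed either, plain xargs can provide the same 6-way parallelism. This is just a sketch relying on the GNU xargs options -a, -d and -P; each line is passed to bash -c because it contains a pipe:

[codesyntax lang="bash"]

# run the generated command lines, at most 6 at a time
xargs -a /tmp/things_to_do/tables.txt -d '\n' -P 6 -I{} bash -c '{}'

[/codesyntax]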

Kill all mysql queries having query time greater than 1 minute

At this point there are two approaches to achieve this. One is using pt-kill from Percona Toolkit, and the other one is a bash script with a lot of pipes :)
Why would someone use the second approach? I don't know, perhaps because Percona Toolkit is not available.

[codesyntax lang="bash"]

# "show processlist" columns: Id User Host db Command Time State Info
# skip the replication ("system user") threads, keep only rows starting with a numeric Id,
# then kill every thread whose Time ($6) is above the 60 second threshold
for i in $(mysql -e "show processlist" | egrep -v "system user" | grep '^[0-9]' | awk '{print $6" "$1}' | sort -nr -k1 | awk -v threshold="60" '$1 > threshold' | awk '{print $2}'); do mysql -e "kill $i"; done

[/codesyntax]
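
Before actually killing anything it may be worth a dry run that only prints the thread ids. And if you prefer to abort just the long-running statement while keeping the client connection open, KILL QUERY can be used instead of KILL. Same selection logic as above, with the 60 second threshold hard-coded:

[codesyntax lang="bash"]

# only print what would be killed ($6 is the Time column, $1 is the thread Id)
for i in $(mysql -e "show processlist" | egrep -v "system user" | grep '^[0-9]' | awk '$6 > 60 {print $1}'); do
    echo "would kill thread $i"
    # to abort only the statement and keep the connection: mysql -e "kill query $i"
done

[/codesyntax]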

If you can/want to use pt-kill and don't know how, please read this.
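
For a quick taste anyway, the pt-kill equivalent looks something along these lines (flags as documented by Percona Toolkit; credentials are taken from the usual .my.cnf here, adjust as needed):

[codesyntax lang="bash"]

# only print the statements that have been busy for more than 60 seconds
pt-kill --busy-time 60 --match-command Query --victims all --print

# actually kill the matching connections, polling the processlist every 10 seconds
pt-kill --busy-time 60 --match-command Query --victims all --interval 10 --kill --print

[/codesyntax]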