Author Archives: jonas

Swap Alt and Windows keys with xmodmap?

The problem: I have a Mac keyboard where the Alt/Win (i.e. Option/Command) keys are inverted compared to a regular PC keyboard, and I'd like to swap them.

The answer:

[codesyntax lang="bash"]

# clear all options
setxkbmap -model "pc105" -layout "us,se" -option ""  

# set the Apple keyboard
setxkbmap -rules "evdev" -model "pc105" -layout "us,se" -option "terminate:ctrl_alt_bksp,lv3:rwin_switch,grp:shifts_toggle,altwin:swap_lalt_lwin"

[/codesyntax]

 

And another one which I call when I'm back on a normal keyboard:

[codesyntax lang="bash"]

# clear settings
setxkbmap -model "pc105" -layout "us,se" -option ""

# pc keyboard
setxkbmap -rules "evdev" -model "pc105" -layout "us,se" -option "terminate:ctrl_alt_bksp,lv3:rwin_switch,grp:shifts_toggle"

[/codesyntax]
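Since the two calls differ only in the altwin:swap_lalt_lwin option, they can be folded into one helper. A minimal sketch (the function names and the split into "build" and "apply" steps are my own, not part of the original setup):

```shell
#!/bin/bash
# Hypothetical helper: rebuild the xkb options from scratch for either keyboard.
BASE_OPTS="terminate:ctrl_alt_bksp,lv3:rwin_switch,grp:shifts_toggle"

# Print the setxkbmap command for the given profile ("mac" or "pc").
build_cmd() {
    local opts=$BASE_OPTS
    # On the Apple keyboard, additionally swap left Alt and left Win (Command).
    [ "$1" = mac ] && opts="$opts,altwin:swap_lalt_lwin"
    printf 'setxkbmap -rules "evdev" -model "pc105" -layout "us,se" -option "%s"\n' "$opts"
}

apply_profile() {
    # clear all options first, then apply the profile
    setxkbmap -model "pc105" -layout "us,se" -option ""
    eval "$(build_cmd "$1")"
}
```

Calling `apply_profile mac` or `apply_profile pc` then replaces the two separate snippets.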

Alter mysql tables on the fly, without locking them

If you need to run ALTER TABLE on live MySQL (InnoDB, TokuDB) tables, a great tool for the job is the Percona Toolkit.

The Percona Toolkit includes a utility, pt-online-schema-change, that performs such changes without write-locking tables and without having to manually create a temporary table and triggers to keep the data in sync during the process.

[codesyntax lang="bash"]

time pt-online-schema-change --host my-awesome-database.example.net --user=user --password=secret --execute --print --no-drop-old-table --alter "DROP INDEX kenny" D=my-database,t=my-table

[/codesyntax]

[codesyntax lang="bash"]

time pt-online-schema-change --host my-awesome-database.example.net --user=user --password=secret --execute --print --no-drop-old-table --alter "add source_id int(10) unsigned DEFAULT NULL" D=my-database,t=my-table

[/codesyntax]
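The command line is long and the D=&lt;database&gt;,t=&lt;table&gt; DSN at the end is easy to get wrong. A small sketch that just assembles and prints the command for review before running it (the wrapper name and the hard-coded user/password placeholders are mine):

```shell
#!/bin/bash
# Hypothetical wrapper that assembles a pt-online-schema-change command line.
# It only prints the command; run it yourself after reviewing the output.
pt_osc_cmd() {
    local host=$1 db=$2 table=$3 alter=$4
    printf 'pt-online-schema-change --host %s --user=user --password=secret ' "$host"
    printf -- '--execute --print --no-drop-old-table '
    # The ALTER fragment is passed without the leading "ALTER TABLE x" part,
    # and the table is addressed through the D=<database>,t=<table> DSN.
    printf -- '--alter "%s" D=%s,t=%s\n' "$alter" "$db" "$table"
}

# Example: pt_osc_cmd my-awesome-database.example.net my-database my-table "DROP INDEX kenny"
```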

Remove Old Job Builds on Jenkins with Groovy Script

Purpose

This article describes a way to use a Groovy script to remove old builds from all jobs, keeping only a set number of the most recent ones. In other words, you set how many of the latest builds to keep and run the script on the Jenkins Master to clean up all older builds for all jobs. The script supports Cloudbees Folders, basic Projects, the GitHub Organizations plugin and the Workflow Multibranch plugin.

Take the following script and either run it from the CLI or from the Jenkins Management Script Console GUI.

Prerequisites

  • Jenkins 2.x+

Steps Using GUI

  1. Set the second argument of listJobObjects(item, 10, 0) to how many builds you wish to keep. In this example everything but the 10 most recent builds will be deleted for each job.
  2. Select Manage Jenkins on the Master you wish to perform the task on
  3. Select Script Console
  4. Paste the Groovy Script in the script window
  5. Select Run

Steps for CLI

  1. First you need an SSH key pair set up for a user that has adequate permissions
    1. Generate SSH Key pair
      [codesyntax lang="bash"]

      ssh-keygen -t rsa -b 4096

      [/codesyntax]

    2. Add public key to local admin Jenkins user
      1. Browse People | <username> | Configure
      2. Paste Public key in SSH Public Keys window and Save
  2. Copy the groovy script content to a file such as /tmp/remove-old-builds.groovy
  3. Copy private RSA key if not already on the system i.e. ~/.ssh/id_rsa
  4. Set permissions on the private key if needed
    [codesyntax lang="bash"]

    chmod 0400 ~/.ssh/id_rsa

    [/codesyntax]

  5. Locate the jenkins-cli.jar
    [codesyntax lang="bash"]

    find . -type f -name 'jenkins-cli.jar'

    [/codesyntax]
    OR
    [codesyntax lang="bash"]

    updatedb && locate 'jenkins-cli.jar'

    [/codesyntax]

  6. Run the Jenkins CLI jar from the Master (or set up another box with the jar and network access to the Master)
    [codesyntax lang="bash"]

    java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -i ~/.ssh/id_rsa -s http://localhost:8080 groovy /tmp/remove-old-builds.groovy

    [/codesyntax]
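The CLI invocation from the steps above can be wrapped so the script path is validated first. A sketch that assumes the jar path, URL and key used above; the JENKINS_CLI_JAR, JENKINS_URL and JENKINS_SSH_KEY override variables are my own invention:

```shell
#!/bin/bash
# Sketch: run a Groovy script on the Jenkins Master over the SSH-authenticated CLI.
run_groovy() {
    local script=$1
    local jar=${JENKINS_CLI_JAR:-/var/cache/jenkins/war/WEB-INF/jenkins-cli.jar}
    local url=${JENKINS_URL:-http://localhost:8080}
    local key=${JENKINS_SSH_KEY:-$HOME/.ssh/id_rsa}
    # fail early instead of letting the CLI error out on a missing file
    [ -r "$script" ] || { echo "no such script: $script" >&2; return 1; }
    java -jar "$jar" -i "$key" -s "$url" groovy "$script"
}

# Example: run_groovy /tmp/remove-old-builds.groovy
```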

Groovy Script

[codesyntax lang="groovy"]

import jenkins.model.*
import hudson.model.*
import com.cloudbees.hudson.plugins.folder.*
import jenkins.branch.*
import org.jenkinsci.plugins.workflow.job.*
import org.jenkinsci.plugins.workflow.multibranch.*

/**
 * Utility function used to delete old builds.
 *
 * @param item the current Jenkins item to process, this can be a Folder or a Project
 * @param numberOfBuildsToKeep the total number of builds to keep. Please note that one more build could be
 *        kept if the first "numberOfBuildsToKeep" builds are all in a failed state.
 * @param numberOfSuccessfulBuildsKept running count of successful builds kept so far; pass 0 when calling
 */

def deleteOldBuilds(item, Integer numberOfBuildsToKeep, Integer numberOfSuccessfulBuildsKept) {
    def count = 1

    println('Checking for Old Builds...')

    for (build in item.getBuilds()) {
        if(count++ >= numberOfBuildsToKeep) {
            if(item.getBuildStatusIconClassName() == 'icon-blue' && numberOfSuccessfulBuildsKept == 0) {
                println('Keep ' + build)
            } else {
                println('Deleting ' + build)
                build.delete()
            }
        } else if(item.getBuildStatusIconClassName() == 'icon-blue') {
            numberOfSuccessfulBuildsKept++
        }
    }
    println('PRIOR BUILD COUNT: (' + count + ')')
    println ''
}

def listJobObjects(item, Integer numberOfBuildsToKeep, Integer numberOfSuccessfulBuildsKept) {
    if(item instanceof Project) {
        println('PROJECT: (' + item.getName() + ')')
        deleteOldBuilds(item, numberOfBuildsToKeep, numberOfSuccessfulBuildsKept)
    } else if(item instanceof Folder) {
        println ''
        println('FOLDER: (' + item.getName() + ')')
        println('*************************************')
        for (subItem in item.items) {
            listJobObjects(subItem, numberOfBuildsToKeep, numberOfSuccessfulBuildsKept)
        }
    } else if(item instanceof WorkflowMultiBranchProject) {
        println('MULTIBRANCH-PROJECT: (' + item.getName() + ')')
        for (subItem in item.items) {
            listJobObjects(subItem, numberOfBuildsToKeep, numberOfSuccessfulBuildsKept)
        }
    }  else if(item instanceof WorkflowJob) {
        println('MULTIBRANCH-JOB: (' + item.getName() + ')')
        deleteOldBuilds(item, numberOfBuildsToKeep, numberOfSuccessfulBuildsKept)
    } else if(item instanceof OrganizationFolder) {
        println('ORG-FOLDER: (' + item.getName() + ')')
        for (subItem in item.items) {
            listJobObjects(subItem, numberOfBuildsToKeep, numberOfSuccessfulBuildsKept)
        }
    } else {
        println('UNKNOWN: (' + item.getName() + ')')
        println('CLASS: (' + item.getClass() + ')')
        println('INSPECT: (' + item.inspect() + ')')
    }
}

for (item in Jenkins.instance.items) {
    println ''
    listJobObjects(item, 10, 0)
    println('*************************************')
}

[/codesyntax]

Source: bonusbits

MariaDB 10.0: How to migrate from InnoDB to TokuDB

This is only a note to remember what I did to migrate from InnoDB to TokuDB on a MariaDB cluster.
The TokuDB settings are specific to my environment, so please don't ask me what/how/why.

root@dbrfoo:~# mysql -u root -p
MariaDB [(none)]> INSTALL SONAME 'ha_tokudb';
Query OK, 0 rows affected (0.07 sec)

MariaDB [(none)]> show engines;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables                  | NO           | NO   | NO         |
| MRG_MyISAM         | YES     | Collection of identical MyISAM tables                                      | NO           | NO   | NO         |
| MyISAM             | YES     | MyISAM storage engine                                                      | NO           | NO   | NO         |
| BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears)             | NO           | NO   | NO         |
| CSV                | YES     | CSV storage engine                                                         | NO           | NO   | NO         |
| TokuDB             | YES     | Tokutek TokuDB Storage Engine with Fractal Tree(tm) Technology             | YES          | YES  | YES        |
| PERFORMANCE_SCHEMA | YES     | Performance Schema                                                         | NO           | NO   | NO         |
| ARCHIVE            | YES     | Archive storage engine                                                     | NO           | NO   | NO         |
| InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
| FEDERATED          | YES     | FederatedX pluggable storage engine                                        | YES          | NO   | YES        |
| Aria               | YES     | Crash-safe tables with MyISAM heritage                                     | NO           | NO   | NO         |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
11 rows in set (0.00 sec)

MariaDB [(none)]> show plugins;
+-------------------------------+----------+--------------------+--------------+---------+
| Name                          | Status   | Type               | Library      | License |
+-------------------------------+----------+--------------------+--------------+---------+
| binlog                        | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| mysql_native_password         | ACTIVE   | AUTHENTICATION     | NULL         | GPL     |
| mysql_old_password            | ACTIVE   | AUTHENTICATION     | NULL         | GPL     |
| MEMORY                        | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| MyISAM                        | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| CSV                           | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| MRG_MyISAM                    | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| PERFORMANCE_SCHEMA            | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| InnoDB                        | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| XTRADB_READ_VIEW              | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| XTRADB_INTERNAL_HASH_TABLES   | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| XTRADB_RSEG                   | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_TRX                    | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_LOCKS                  | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_LOCK_WAITS             | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMP                    | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMP_RESET              | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMPMEM                 | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMPMEM_RESET           | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMP_PER_INDEX          | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CMP_PER_INDEX_RESET    | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_BUFFER_PAGE            | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_BUFFER_PAGE_LRU        | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_BUFFER_POOL_STATS      | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_METRICS                | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_DEFAULT_STOPWORD    | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_DELETED             | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_BEING_DELETED       | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_CONFIG              | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_INDEX_CACHE         | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_FT_INDEX_TABLE         | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_TABLES             | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_TABLESTATS         | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_INDEXES            | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_COLUMNS            | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_FIELDS             | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_FOREIGN            | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_FOREIGN_COLS       | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_TABLESPACES        | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_SYS_DATAFILES          | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| INNODB_CHANGED_PAGES          | ACTIVE   | INFORMATION SCHEMA | NULL         | GPL     |
| ARCHIVE                       | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| Aria                          | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| FEDERATED                     | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| BLACKHOLE                     | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| FEEDBACK                      | DISABLED | INFORMATION SCHEMA | NULL         | GPL     |
| partition                     | ACTIVE   | STORAGE ENGINE     | NULL         | GPL     |
| TokuDB                        | ACTIVE   | STORAGE ENGINE     | ha_tokudb.so | GPL     |
| TokuDB_trx                    | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_lock_waits             | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_locks                  | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_file_map               | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_info      | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_block_map | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
+-------------------------------+----------+--------------------+--------------+---------+
54 rows in set (0.00 sec)

MariaDB [(none)]> \q

root@dbrfoo:~# vim /etc/mysql/conf.d/tokudb.cnf

[mariadb]
# See https://mariadb.com/kb/en/how-to-enable-tokudb-in-mariadb/
# for instructions how to enable TokuDB
#
# See https://mariadb.com/kb/en/tokudb-differences/ for differences
# between TokuDB in MariaDB and TokuDB from http://www.tokutek.com/

#plugin-load=ha_tokudb.so

tokudb_cache_size=300GB
tokudb_commit_sync=off
tokudb_fsync_log_period=1000
tokudb_directio=ON
tokudb_disable_slow_alter=ON
tokudb_disable_hot_alter=OFF
tokudb_load_save_space=ON
tokudb_row_format=tokudb_fast


root@dbrfoo:~# /etc/init.d/mysql stop
root@dbrfoo:~# /etc/init.d/mysql start


----------------
150615 14:35:57 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
150615 14:36:09 mysqld_safe Starting mysqld daemon with databases from /data/db/mysql
150615 14:36:10 [Note] /usr/sbin/mysqld (mysqld 10.0.19-MariaDB-1~wheezy-log) starting as process 20812 ...
150615 14:36:10 [Note] InnoDB: Using mutexes to ref count buffer pool pages
150615 14:36:10 [Note] InnoDB: The InnoDB memory heap is disabled
150615 14:36:10 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
150615 14:36:10 [Note] InnoDB: Memory barrier is not used
150615 14:36:10 [Note] InnoDB: Compressed tables use zlib 1.2.7
150615 14:36:10 [Note] InnoDB: Using Linux native AIO
150615 14:36:10 [Note] InnoDB: Using CPU crc32 instructions
150615 14:36:10 [Note] InnoDB: Initializing buffer pool, size = 100.0G
150615 14:36:16 [Note] InnoDB: Completed initialization of buffer pool
150615 14:36:17 [Note] InnoDB: Highest supported file format is Barracuda.
150615 14:36:20 [Note] InnoDB: 128 rollback segment(s) are active.
150615 14:36:20 [Note] InnoDB: Waiting for purge to start
150615 14:36:20 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.23-72.1 started; log sequence number 38210935651348
150615 14:36:20 [Note] Plugin 'FEEDBACK' is disabled.
150615 14:36:20 [Note] Server socket created on IP: '::'.
150615 14:36:20 [Note] Event Scheduler: Loaded 0 events
150615 14:36:20 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.0.19-MariaDB-1~wheezy-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
----------------

root@dbrfoo:~# mysql -u root -p

MariaDB [(none)]> use xxxxxxxx_ugc;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [xxxxxxxx_ugc]> alter table user_data engine=tokudb;
Stage: 1 of 2 'Fetched about 161900000 rows, loading data still remains'   3.49% of stage done
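Converting tables one at a time gets tedious on a large schema. A sketch that prints the ALTER statements for review before running them (the helper name is mine; in practice the table list would come from information_schema):

```shell
#!/bin/bash
# Sketch: print "ALTER TABLE ... ENGINE=TokuDB" for a list of tables.
# In practice the list would come from information_schema, e.g.:
#   mysql -N -e "SELECT table_name FROM information_schema.tables
#                WHERE table_schema='xxxxxxxx_ugc' AND engine='InnoDB'"
alter_to_tokudb() {
    local db=$1; shift
    local t
    for t in "$@"; do
        printf 'ALTER TABLE `%s`.`%s` ENGINE=TokuDB;\n' "$db" "$t"
    done
}

# Example: alter_to_tokudb xxxxxxxx_ugc user_data | mysql -u root -p
```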

VMware modules, Arch Linux & kernel 4.8.13

After upgrading the kernel to 4.8.13-1-ARCH some of the vmware kernel modules failed to compile:

/tmp/modconfig-6BT70S/vmmon-only/linux/hostif.c:1592:47: error: ‘NR_ANON_PAGES’ undeclared (first use in this function)
/tmp/modconfig-BBuLH6/vmnet-only/netif.c:468:7: error: ‘struct net_device’ has no member named ‘trans_start’; did you mean ‘mem_start’?

The fix:

[codesyntax lang="bash"]

cd /usr/lib/vmware/modules/source
tar xf vmnet.tar
tar xf vmmon.tar
mv vmnet.tar vmnet.old.tar
mv vmmon.tar vmmon.old.tar
sed -i -e 's/dev->trans_start = jiffies/netif_trans_update(dev)/g' vmnet-only/netif.c
sed -i -e 's/unsigned int anonPages = global_page_state(NR_ANON_PAGES);/unsigned int anonPages = global_page_state(NR_ANON_MAPPED);/g' vmmon-only/hostif.c
tar cf vmnet.tar vmnet-only
tar cf vmmon.tar vmmon-only
rm -r vmnet-only
rm -r vmmon-only

vmware-modconfig --console --install-all

[/codesyntax]

Custom Elasticsearch template with custom field mapping

Disclaimer: I am writing this post mostly so I remember how and what I did to fix a wrong field mapping in Elasticsearch.

Quote from Elasticsearch Dynamic Mapping documentation page

When Elasticsearch encounters a previously unknown field in a document, it uses dynamic mapping to determine the datatype for the field and automatically adds the new field to the type mapping.

Sometimes this is the desired behavior and sometimes it isn't. Perhaps you don't know what fields will be added to your documents later, but you want them to be indexed automatically. Perhaps you just want to ignore them. Or - especially if you are using Elasticsearch as a primary data store - perhaps you want unknown fields to throw an exception to alert you to the problem.

I had this annoying problem with a field that was mapped as date when it was supposed to be a string.

It is worth mentioning that existing type and field mappings cannot be updated, because that would invalidate already indexed documents; the right way is to create a new index with the correct mappings and reindex the data into it.

Ok, cool, but how do I tell Elasticsearch to map that particular field to string? Well, we need to create a template that will automatically be applied when new indices are created. The template can include both settings and mappings, plus a simple index-name pattern that controls whether the template is applied to a new index.

Let's get to work!

What is the full path of my field (stats_dates in this case)?
[codesyntax lang="bash"]

curl -s -XGET http://localhost:9200/logstash-2016.10.05/_mappings | jq 'path(recurse(if type|. == "array" or . =="object" then .[] else empty end))'

[/codesyntax]

[
...
  "logstash-2016.10.05",
  "mappings",
  "hemlock",
  "properties",
  "stats_dates",
  "type"
]
...

Ok, so the full path is mappings.hemlock.properties.stats_dates.type. Now that we have the full path, let's create the template.
[codesyntax lang="bash"]

curl -s -XPUT http://localhost:9200/_template/logstash-stats_dates -d '

{
  "order": 0,
  "template": "logstash-*",
  "settings": {
  },
  "mappings": {
    "hemlock": {
      "properties": {
        "stats_dates": {
          "type": "string"
        }
      }
    }
  },
  "aliases": {
  }
}

'

[/codesyntax]

Check if the template was applied to the new index:

[codesyntax lang="bash"]

curl -s http://localhost:9200/logstash-2016.10.06/_mappings | jq '.[].mappings.hemlock.properties.stats_dates'

[/codesyntax]

{
  "type": "string"
}
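Whether a template gets applied to a new index at all is decided by matching its "template" pattern (here logstash-*) against the index name. That is a plain wildcard match, which can be sanity-checked locally with a shell glob (a toy emulation for eyeballing patterns, not the Elasticsearch code):

```shell
#!/bin/bash
# Sketch: emulate Elasticsearch's index-pattern match with a shell glob.
# Returns 0 when the template pattern would select the index.
template_applies() {
    local pattern=$1 index=$2
    # $pattern is deliberately unquoted so it is treated as a glob
    case "$index" in
        $pattern) return 0 ;;
        *)        return 1 ;;
    esac
}

# Example: template_applies 'logstash-*' 'logstash-2016.10.06' && echo applied
```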

Build strongswan v5.5.0 debian package -- with debug symbols

Usually I use the packages from the official repositories. However, sometimes it's necessary to use a newer version; I recently had to do this with strongswan, and I'm sharing the procedure for other people to try.

Get the build dependencies

[codesyntax lang="bash"]

apt-get update
apt-get install devscripts fakeroot
apt-get build-dep strongswan

[/codesyntax]

Obtain and build the package

[codesyntax lang="bash"]

mkdir ~/work
cd ~/work
debcheckout strongswan
cd strongswan
sed -e '/dh_strip/ s/^#*/#/' -i debian/rules
sed -e 's/debhelper.*/debhelper,/g' -i debian/control
dpkg-buildpackage -rfakeroot -uc -b

[/codesyntax]

Full Disk Encryption on Arch Linux: NVMe, GRUB2, LVM and LUKS

Last week I had to install my new work computer. Although I am a Debian guy, I thought I should give Arch Linux a chance as well.
In theory, full disk encryption on Linux is not a big deal (there's a lot of documentation out there) and should be pretty straightforward. In practice, probably because it was my first time doing it on Arch, it took me some time to figure out how to do it right.

My biggest problem was that the current version of grub shipped with Arch doesn't include support for the following combination: NVMe, LVM, LUKS. Apparently the git version has the fix for that issue.

So here is what I did (This post comes without warranty of any kind! I do not issue any guarantee that this will work for you!)

  • Partition the disk - no Windows and no UEFI needed; my disk is only 256G and I need just one primary partition, so there is no reason to use GPT.

[codesyntax lang="bash"]

parted -a optimal -s /dev/nvme0n1 mklabel msdos
parted -s /dev/nvme0n1 mkpart primary 2048s 100%

[/codesyntax]

  • Encrypt the newly created partition

[codesyntax lang="bash"]

cryptsetup luksFormat /dev/nvme0n1p1
cryptsetup luksOpen /dev/nvme0n1p1 disk

[/codesyntax]

  • Setup logical volumes

[codesyntax lang="bash"]

pvcreate /dev/mapper/disk
vgcreate vg0 /dev/mapper/disk
lvcreate -L 32G vg0 -n swap
lvcreate -L 15G vg0 -n root
lvcreate -l +100%FREE vg0 -n home

[/codesyntax]

  • Create filesystem

[codesyntax lang="bash"]

mkswap -L swap /dev/vg0/swap
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home

[/codesyntax]

  • Mount the partitions

[codesyntax lang="bash"]

mount /dev/vg0/root /mnt
mkdir -vp /mnt/home
mount /dev/vg0/home /mnt/home

[/codesyntax]

  • Set mirror list

[codesyntax lang="bash"]

pacman -Sy
pacman -S pacman-contrib
cd /etc/pacman.d/
wget "archlinux.org/mirrorlist/?country=SE" -O mirrorlist.b
sed -i 's/^#//' mirrorlist.b
rankmirrors -n 3 mirrorlist.b > mirrorlist

[/codesyntax]

  • Install base system

[codesyntax lang="bash"]

pacstrap /mnt base base-devel vim less dhclient
genfstab -p -U /mnt > /mnt/etc/fstab

[/codesyntax]

  • Chroot time!

[codesyntax lang="bash"]

arch-chroot /mnt /bin/bash

[/codesyntax]

  • Localization

[codesyntax lang="bash"]

echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
locale-gen
export LANG=en_US.UTF-8
echo LANG=en_US.UTF-8 > /etc/locale.conf
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Europe/Stockholm /etc/localtime
hwclock --systohc --utc

[/codesyntax]

  • Set the hostname

[codesyntax lang="bash"]

echo jonas > /etc/hostname
vim /etc/hosts

[/codesyntax]

...
127.0.0.1	localhost.localdomain	localhost jonas
::1		localhost.localdomain	localhost jonas
...
  • Install various packages

[codesyntax lang="bash"]

pacman -S wget

[/codesyntax]

  • Identify the network card and enable dhcp client

[codesyntax lang="bash"]

ip link
systemctl enable dhcpcd@<interface>.service
systemctl enable dhcpcd.service

[/codesyntax]

  • Add a new user

[codesyntax lang="bash"]

useradd -m -g users -s /bin/bash jonas
passwd jonas
visudo

[/codesyntax]

...
 ##
 ## User privilege specification
 ##
 root ALL=(ALL) ALL
 jonas ALL=(ALL) ALL
 ...
  • Set the root password

[codesyntax lang="bash"]

passwd

[/codesyntax]

  • Create a key so I won't enter the LUKS passphrase twice

[codesyntax lang="bash"]

dd if=/dev/urandom of=/crypto_keyfile.bin bs=512 count=4
cryptsetup luksAddKey /dev/nvme0n1p1 /crypto_keyfile.bin

[/codesyntax]

  • Create the initial ramdisk environment

[codesyntax lang="bash"]

vim /etc/mkinitcpio.conf

[/codesyntax]

MODULES="ext4"
HOOKS="base udev autodetect modconf block encrypt lvm2 resume filesystems keyboard fsck"
FILES=/crypto_keyfile.bin

[codesyntax lang="bash"]

mkinitcpio -p linux

[/codesyntax]

  • The current version of grub shipped with Arch doesn't include support for NVMe, but grub-git seems to

[codesyntax lang="bash"]

vim /etc/pacman.conf

[/codesyntax]

[archlinuxfr]
SigLevel = Optional TrustAll
Server = http://repo.archlinux.fr/$arch

[codesyntax lang="bash"]

pacman -Syu
pacman -S yaourt

# required packages to build grub-git
pacman -S git rsync freetype2 ttf-dejavu python autogen help2man fuse

su -l jonas

# add unifont gpg key (more details here[1])
export KEY="1A09227B1F435A33"
gpg --recv-keys $KEY
gpg --edit-key $KEY

# install grub-git (please note that you will need at least 2G available on your /tmp)
yaourt -S grub-git

[/codesyntax]

  • We don't need to be "jonas" anymore

[codesyntax lang="bash"]

exit

[/codesyntax]

  • Now that we have grub installed, let's configure and install it

[codesyntax lang="bash"]

vim /etc/default/grub

[/codesyntax]

GRUB_CMDLINE_LINUX="cryptdevice=/dev/nvme0n1p1:disk"
GRUB_ENABLE_CRYPTODISK=y

[codesyntax lang="bash"]

grub-mkconfig -o /boot/grub/grub.cfg
grub-install /dev/nvme0n1

[/codesyntax]

  • Security considerations

[codesyntax lang="bash"]

chmod 000 /crypto_keyfile.bin
chmod -R g-rwx,o-rwx /boot

[/codesyntax]

  • Done!

[codesyntax lang="bash"]

exit
reboot

[/codesyntax]

[1]: https://bbs.archlinux.org/viewtopic.php?pid=1488734#p1488734

Mutt: delete duplicate e-mail messages

Disclaimer: at some point I found this page describing how to delete duplicate e-mail messages with mutt. Unfortunately the page is not up anymore (HTTP 404) and I took the liberty to post it on my blog. Anyway, credit goes to Marianne Promberger. The original page was here: http://promberger.info/linux/2008/03/31/mutt-delete-duplicate-e-mail-messages/

Here we go:

Sometimes, if you consolidate different mailboxes where some of the messages are in both mailboxes, you end up with duplicates.
With mutt, it is really easy to delete one copy of each duplicate. I got this tip from here.

You need to have set duplicate_threads = yes; either put it in your ~/.muttrc, or check whether it is on by default (it is for me). To see the value of a variable, while you’re running mutt, say

:set ?duplicate_threads

and it will display the current value (note the leading colon, and of course replace “duplicate_threads” with the variable name you want to see).

You also need to have your mailbox sorted by message threads. This is a nice feature in general, similar to the message threads in Gmail (but mutt’s implementation is much more user friendly, in my opinion). If you haven’t already set sort=threads in your ~/.muttrc, you can sort “on the fly” while you’re in a mailbox: type o (to sort; mutt then asks for the criterion to sort on, and tells you the options) followed by d (for date).

Now say T to tag a certain pattern, and put in ~= as the pattern. Duplicates (one copy of each message that mutt sees twice in the folder) are now tagged. To delete them, either type just d (this will work if you have set auto_tag=yes), or type ; to apply the next command to all tagged messages, then hit d.

Addendum

Actually, it’s much easier: you can skip the tagging step and just do D (for “delete matching pattern”) followed by ~=. If you’re adventurous, you can set mutt to automatically rid your mailboxes of duplicates using a folder-hook, like this (in your ~/.muttrc):

folder-hook . push "<delete-pattern>~=<enter>"

This is handy together with

folder-hook . 'set record="^"'

which always puts your “sent” copy into the current mailbox — handy for developing meaningful threads of incoming mails and replies. However, of course it results in duplicates if you are on the cc or if the mail is going to a mailing list you’re subscribed to. Enter the above folder-hook — no more duplicates.

If you’re even more adventurous, you could add updating your mailbox (i.e., purging messages marked for deletion) to the above folder-hook, like this:

folder-hook . push "<delete-pattern>~=<enter>$"

Be notified when critical battery level is reached

Ever happened to be focused on something and miss the fact that your laptop is running out of battery? And to lose your work?
Yesterday it happened twice! "Really? Hmm... I need to fix this as soon as possible."
I googled a bit and this stackexchange post popped up. Nice, great! But being notified even while the laptop is charging isn't nice, so I had to change the solution a bit:

[codesyntax lang="bash"]

cat .notify-send_setup

[/codesyntax]

#!/bin/bash
# Capture the session's D-Bus address so that cron jobs can send desktop notifications
touch $HOME/.dbus/Xdbus
chmod 600 $HOME/.dbus/Xdbus
env | grep DBUS_SESSION_BUS_ADDRESS > $HOME/.dbus/Xdbus
echo 'export DBUS_SESSION_BUS_ADDRESS' >> $HOME/.dbus/Xdbus

exit 0

[codesyntax lang="bash"]

cat .battnotif

[/codesyntax]

#!/bin/bash
export DISPLAY=:0
XAUTHORITY=$HOME/.Xauthority

if [ -r "$HOME/.dbus/Xdbus" ]; then
    . "$HOME/.dbus/Xdbus"
fi

battery_level=$(acpi -b | grep -P -o '[0-9]+(?=%)')
STATUS=$(acpi -b | awk '{print $3}' | sed -e 's/,//g')

if [ "$battery_level" -le 15 ] && [ "$STATUS" = "Discharging" ]
then
    /usr/bin/notify-send -u critical "Battery low" "Battery level is ${battery_level}%!"
    echo 'batt low' >> $HOME/cron.log
fi

echo 'ran batt' >> $HOME/cron.log
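Both scripts then need to be wired up: .notify-send_setup runs once from the X session startup (so it can capture DBUS_SESSION_BUS_ADDRESS), and .battnotif runs from cron. A crontab entry along these lines (the 5-minute interval is my own choice):

```
# check the battery level every 5 minutes
*/5 * * * * $HOME/.battnotif
```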