<p><em>hanneseichblatt.de · Hannes Eichblatt · 2017-10-05</em></p>
<h1 id="solaris-is-dead">Solaris is Dead</h1>
<p>Most of you probably have heard by now that <a href="https://en.wikipedia.org/wiki/Oracle_Corporation">Oracle</a> has silently EOLed <a href="https://en.wikipedia.org/wiki/Solaris_(operating_system)">Solaris</a>, one of the older commercial Unices.</p>
<p>The first definitive sign was <a href="https://twitter.com/webmink/status/904081073256243201">a tweet</a> by <a href="https://en.wikipedia.org/wiki/Simon_Phipps_(programmer)">Simon Phipps</a> on September 2:</p>
<blockquote>
<p>For those unaware, Oracle laid off ~ all Solaris tech staff yesterday in a classic silent EOL of the product.</p>
</blockquote>
<p>Soon after, TheLayoff.com, a portal for recently or soon-to-be laid-off employees, <a href="https://www.thelayoff.com/oracle">started noticing</a> an <a href="https://twitter.com/TheLayoff/status/903323829199671299">influx of now ex-Oraclers</a>. Some of them reported having been notified by UPS of impending packages, while others reported receiving the first actual termination notices. The packages’ arrival seems to have been planned for <a href="https://en.wikipedia.org/wiki/Labor_Day">Labor Day</a>, which I find particularly egregious.</p>
<p>The Register <a href="https://www.theregister.co.uk/2017/08/31/oracle_stops_prolonging_inevitable_layoffs/">reported on August 31</a>:</p>
<blockquote>
<p>Oracle finally decides to stop prolonging the inevitable, begins hardware layoffs<br />
Pink slips are en route, say staff</p>
</blockquote>
<p>Later, even <a href="https://blogs.oracle.com/author/alan-coopersmith">Alan Coopersmith</a>, at that time Technical Lead for the Solaris 11.4 release, <a href="https://twitter.com/alanc/status/903802951470157825">tweeted ominously</a>:</p>
<blockquote>
<p>Sunset in Santa Clara.</p>
</blockquote>
<p>The Santa Clara office was home to the Solaris and SPARC teams. <a href="http://www.mercurynews.com/2017/09/05/oracle-slashes-more-than-900-santa-clara-jobs-more-worldwide/">Local news</a> picked up the story as well.</p>
<p>Drew Fisher, <a href="https://www.linkedin.com/in/drew-fisher-38309711">Senior Software Engineer for the Solaris Install Team</a>, also <a href="https://twitter.com/drewfisher314/status/903804762373537793">confirmed</a> the bad news:</p>
<blockquote>
<p>For real. Oracle RIF’d most of Solaris (and others) today.</p>
</blockquote>
<p>RIF stands for <a href="https://en.wikipedia.org/wiki/Layoff">Reduction in Force</a> and the Wikipedia page redirects to the entry for layoff.</p>
<p>One of the most in-depth articles in the direct aftermath surely was <a href="https://en.wikipedia.org/wiki/Bryan_Cantrill">Bryan Cantrill’s</a> <a href="http://dtrace.org/blogs/bmc/2017/09/04/the-sudden-death-and-eternal-life-of-solaris/">The sudden death and eternal life of Solaris</a>:</p>
<blockquote>
<p>Solaris may not have been truly born until it was made open source, and — certainly to me, anyway — it died the moment it was again made proprietary. But in that shorter life, Solaris achieved the singular: immortality for its revolutionary technologies. So while we can mourn the loss of the proprietary embodiment of Solaris (and we can certainly lament the coarse way in which its technologists were treated!), we can rejoice in the eternal life of its technologies — in illumos and beyond!</p>
</blockquote>
<p>I recommend this article in particular because of Cantrill’s long commitment to Solaris: he wrote DTrace and held several senior positions in the field, mostly close to Solaris-related technologies, most notably as Vice President of Engineering and later CTO at <a href="https://en.wikipedia.org/wiki/Joyent">Joyent</a>, the home of <a href="https://en.wikipedia.org/wiki/SmartOS">SmartOS</a>, one of Solaris’ open source children.</p>
<p>Again, Oracle’s awful handling of the situation shows through:</p>
<blockquote>
<p>In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF’d” in the words of one employee — is both despicable and cowardly.</p>
</blockquote>
<p>Some sources specifically pointed to the connection between Solaris and SPARC, whose teams had historically been closely linked. <a href="http://www.informit.com/authors/author_bio.aspx?ISBN=9780321334206">Isaac Rabinovitch</a>, a former technical writer at Sun, <a href="https://twitter.com/isaac32767/status/904163673886793729">tweeted</a>:</p>
<blockquote>
<p>It’s interesting that people are viewing the Oracle move as a software EOL. I guess most people don’t even know what SPARC is, much less how closely Solaris is tied to it. Yeah, yeah, Solaris runs on x86 too. I know this all too well, having worked in the x86 division at Sun until just before the Oracle takeover. But x86 Solaris was never a serious thing. If you worked for Sun, you had to pretend it was, but nobody except a few marketing folks believed it. So, Oracle admitting (about a decade late) that SPARC has no future means Solaris has no future. I get that a lot of people think highly of the SPARC/Solaris stack, but I can’t mourn it. That’s because the people at Sun who worshiped this tech destroyed the company trying to preserve it. And with it a lot of good tech I was proud to be associated with.</p>
</blockquote>
<p>Adrian Cockcroft, <a href="http://www.allthingsdistributed.com/2016/10/welcoming-adrian-cockcroft-to-tthe-aws-team.html">formerly Distinguished Engineer at Sun</a> and author of <a href="https://www.amazon.com/Adrian-Cockcroft/e/B000APJAKG/ref=dp_byline_cont_book_1">several books in the field</a>, including <a href="https://www.amazon.com/Sun-Performance-Tuning-Sparc-Solaris/dp/0131496425/">“Sun Performance and Tuning: Sparc &amp; Solaris”</a>, posted <a href="https://medium.com/@adrianco/open-letter-to-my-sun-friends-at-oracle-updated-from-2010-post-1f8b2bcba693">“Open letter to my Sun friends at Oracle (updated from 2010 post)”</a> on August 31.</p>
<blockquote>
<p>That is what I meant when I tweeted that Illumos is as irrelevant as Solaris, and it is legacy computing. I don’t mean Solaris will go away, I’m sure it will be the basis of a profitable business for a long time, but the interesting things are happening elsewhere, specifically in public cloud and “infrastructure as code”.</p>
</blockquote>
<p>Meshed Insights, a technology blog, <a href="https://meshedinsights.com/2017/09/03/oracle-finally-killed-sun/">published a list</a> of other products acquired and run into the ground by Oracle and provided good strategic insight:</p>
<blockquote>
<p>Instead of understanding the real failures at Sun – taking too long to open source Solaris and attempting a marketing-led approach in 2000-2002 instead of Sun’s traditional engineering-led approach – Ellison blamed the man who was landed with the task of rescuing whatever he could from the smouldering ruins left by McNealy, Zander, Tolliver and their clan and their complacent failure. Ellison never understood the pioneering approach Schwartz was taking, instead sneering at blogging and calling all the work-in-progress “science projects” while dismantling the partner channels and alienating the open source community.</p>
</blockquote>
<p>Hacker News inevitably took note of the events and <a href="https://news.ycombinator.com/item?id=15160149">started a thread</a> collecting opinions and fond memories.</p>
<p><a href="https://en.wikipedia.org/wiki/Brendan_Gregg">Brendan Gregg</a>, also an ex-Sun and ex-Joyent employee and currently Senior Performance Architect at Netflix, is probably best known for his work in systems performance (and <a href="http://www.brendangregg.com/linuxperf.html">these diagrams</a>). He published <a href="http://www.brendangregg.com/blog/2017-09-05/solaris-to-linux-2017.html">“Solaris to Linux Migration 2017”</a>, a guide for migrating existing systems to equivalent technologies.</p>
<blockquote>
<p>Many people have contacted me recently about switching from Solaris (or illumos) to Linux, especially since most of the Solaris kernel team were let go this year (including my former colleagues, I’m sorry to hear).</p>
</blockquote>
<p>Fortune.com published an <a href="http://fortune.com/2017/09/05/oracle-layoffs-hardware-solaris/">article focusing on the economic aspects</a>:</p>
<blockquote>
<p>Oracle has laid off what appears to be a significant number of employees working on its hardware and Solaris operating system efforts, according to anonymous posts on TheLayoff.com, the gist of which were confirmed to Fortune by former Oracle employees.</p>
</blockquote>
<p>Heise, one of the major IT-related publications in Germany, had <a href="https://www.heise.de/ix/meldung/Oracle-feuert-SPARC-und-Solaris-Entwickler-3820643.html">an article</a> and <a href="https://www.heise.de/ix/meldung/Kommentar-zum-Solaris-Ende-Ein-roter-Elefant-im-IT-Laden-3824375.html">an editorial</a> on the topic. Even Fefe <a href="https://blog.fefe.de/?ts=a7551208">noticed</a> and mentioned <a href="https://www.theregister.co.uk/2017/08/02/oracle_john_fowler_bails/">John Fowler’s exit a month earlier</a>. Fowler had been the head of Solaris development at Oracle.</p>
<p>Oracle originally <a href="https://www.theregister.co.uk/2017/09/08/oracle_pushes_solaris11_plans_out/">wanted to present its plans for Solaris 11.next</a>, the coming rolling-release model for Solaris post-11.3, at its <a href="https://www.oracle.com/openworld/index.html">OpenWorld 2017</a> conference. However, the session catalog contains <a href="https://events.rainfocus.com/catalog/oracle/oow17/catalogoow17?showEnrolled=false&search.itinfrastructure=1502206130220004NQxi">no talk about the topic but several about moving Solaris workloads into Oracle Cloud</a> and general cloud adoption.</p>
<p>In the words of <a href="https://twitter.com/emtiu/status/807117424957030400">Michael Büker on Twitter</a>:</p>
<blockquote>
<p>We had joy, we had fun
We ran Unix on a Sun,
But the source and the song
Of Solaris have all gone</p>
</blockquote>
<h1 id="systemd-nspawn">Underappreciated Systemd Features: systemd-nspawn</h1>
<p><em>2017-08-17</em></p>
<p><code class="highlighter-rouge">systemd(1)</code> contains something that looks very similar to a minimal container solution. It is <a href="http://0pointer.net/blog/projects/changing-roots.html">not meant to be a full-blown container solution</a> but merely meant to</p>
<blockquote>
<p>cover testing, debugging, building, installing, recovering. That’s what you should use it for and what it is really good at, and where it is a much much nicer alternative to chroot(1).</p>
</blockquote>
<p>You can think of <code class="highlighter-rouge">systemd-nspawn(1)</code> as more like LXC than like Docker: changes to your filesystem are persistent across container boots, and there is no clearly defined repository of operating system images, for example. Nevertheless, it provides a built-in way (since most Linux distributions use <code class="highlighter-rouge">systemd(1)</code> nowadays) to start a Linux system without sprinkling your filesystem with the remnants of your experiments.</p>
<p>Each container lives in its own directory under <code class="highlighter-rouge">/var/lib/machines</code>. Suppose we want to create an <a href="https://www.archlinux.org/">Arch Linux</a> container; we can use <code class="highlighter-rouge">pacstrap(8)</code> to bootstrap our root filesystem:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># cd /var/lib/machines
# mkdir ./archcontainer
# pacstrap -i -c -d /var/lib/machines/archcontainer --noconfirm base base-devel
</code></pre></div></div>
<p>This will install the <code class="highlighter-rouge">base</code> and <code class="highlighter-rouge">base-devel</code> groups, essential for every Arch Linux system, into our directory under <code class="highlighter-rouge">/var/lib/machines</code>. We then need to make a couple of configuration changes. First, we activate <code class="highlighter-rouge">systemd-networkd(8)</code>, <code class="highlighter-rouge">systemd(1)</code>’s built-in network manager. At this point, you could also use <code class="highlighter-rouge">NetworkManager(8)</code>. Additionally, we activate <code class="highlighter-rouge">systemd-resolved(8)</code>, a local DNS resolver. We use <code class="highlighter-rouge">chroot(1)</code> to execute the command within the container.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># chroot /var/lib/machines/archcontainer sh -c "systemctl enable systemd-networkd systemd-resolved"
</code></pre></div></div>
<p>To be able to log in to your container afterwards, we have to set a password for the <code class="highlighter-rouge">root</code> account to “<code class="highlighter-rouge">r00tpw</code>”. We use <code class="highlighter-rouge">chroot(1)</code> again.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># chroot /var/lib/machines/archcontainer sh -c "echo root:r00tpw | chpasswd"
</code></pre></div></div>
<p>Since our login will not happen via <code class="highlighter-rouge">ssh(1)</code> or local login, we need to add <code class="highlighter-rouge">/dev/pts(4)</code> to the list of secure <code class="highlighter-rouge">tty(4)</code>s in <code class="highlighter-rouge">/etc/securetty(5)</code>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># echo "pts/0" >> /var/lib/archcontainer/etc/securetty
</code></pre></div></div>
<p>Since we used the default location for the directory containing our root filesystem, the container is automatically recognized as a machine by <code class="highlighter-rouge">machinectl(1)</code>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># machinectl start archcontainer
# machinectl list
</code></pre></div></div>
<p>The last command should show your container. There are two ways to get access to your running container from the host system.
You can directly start a shell session in your container.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># machinectl shell archcontainer
</code></pre></div></div>
<p>You could also get a console with a login prompt for the machine by issuing the following command.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># machinectl login archcontainer
</code></pre></div></div>
<p>If you chose the latter way, you can disconnect from the console by pressing <code class="highlighter-rouge">Ctrl+]</code> (sent as <code class="highlighter-rouge">Ctrl+5</code> on some keyboard layouts) three times within one second.
You can stop the container by issuing the following command.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># machinectl stop archcontainer
</code></pre></div></div>
<p>There is another way to start the container besides using <code class="highlighter-rouge">machinectl(1)</code>, unsurprisingly also using <code class="highlighter-rouge">systemd(1)</code>, in this case <a href="http://0pointer.de/blog/projects/instances.html">instantiated units</a>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># systemctl start systemd-nspawn@archcontainer
# systemctl status systemd-nspawn@archcontainer
# systemctl stop systemd-nspawn@archcontainer
</code></pre></div></div>
<p>Do not mix the <code class="highlighter-rouge">machinectl(1)</code> and <code class="highlighter-rouge">systemd-nspawn(1)</code> way of controlling the container.</p>
<p>One interesting effect of integrating all these concepts into <code class="highlighter-rouge">systemd(1)</code> is that we can use our host’s <code class="highlighter-rouge">systemctl(1)</code> to control the container’s <code class="highlighter-rouge">systemd(1)</code>. For example, you can inspect the status of all units running in the container by issuing the following command on the host.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># systemctl -M archcontainer status
</code></pre></div></div>
<p>Nevertheless, always keep in mind that this is not a full-blown container technology but a nice way to start a lightweight container to test a task or two.</p>
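<p>As an alternative to both of the approaches above, you can also invoke <code class="highlighter-rouge">systemd-nspawn(1)</code> directly for a one-off interactive boot. A minimal sketch, assuming the <code class="highlighter-rouge">archcontainer</code> root filesystem created above:</p>

```shell
# One-off interactive boot of the container, bypassing machinectl:
# -D points at the root directory, -b boots a full init inside it.
sudo systemd-nspawn -b -D /var/lib/machines/archcontainer
```

<p>This is handy for quick experiments, but for anything longer-lived the <code class="highlighter-rouge">machinectl(1)</code> workflow described above is more convenient.</p>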
<h1 id="systemd-watchdog">Underappreciated Systemd Features: Builtin Watchdog</h1>
<p><em>2017-08-11</em></p>
<p>In my opinion, you massively underestimate <code class="highlighter-rouge">systemd(1)</code> when you think of it as just another <code class="highlighter-rouge">init(1)</code> daemon. It actually provides some features that one would expect to be used quite widely by now but aren’t. I want to try to present some of these in a couple of blog posts. These will only provide simple introductory examples to get you started. In this specific case, Lennart Poettering himself provided a <a href="http://0pointer.de/blog/projects/watchdog.html">comprehensive tutorial on the topic</a>.</p>
<p>Suppose we have a tiny binary we want to run to provide a certain functionality. This binary acts as a <code class="highlighter-rouge">daemon(7)</code> and thus needs to be started by your <code class="highlighter-rouge">init(1)</code>. To achieve this, we create a <code class="highlighter-rouge">systemd.unit(5)</code>. At this point, if the process crashes, <code class="highlighter-rouge">systemd(1)</code> would simply restart it and everything would be fine. However, there are cases in which your process becomes unresponsive and stops providing its intended functionality without actually crashing. In that case, our simple unit from before would not be restarted by <code class="highlighter-rouge">systemd(1)</code> because the latter has no way of knowing that anything is wrong. As long as the process is running, <code class="highlighter-rouge">systemd(1)</code> will deem it alive and leave it be.</p>
<p><code class="highlighter-rouge">systemd(1)</code> provides a mechanism for the process to notify it in defined time intervals and will react in a defined manner if it has not received a signal during the last interval. I will call that signal a heartbeat. There are two ways for the process to notify <code class="highlighter-rouge">systemd(1)</code>:</p>
<ul>
<li>a call to <code class="highlighter-rouge">sd_notify(3)</code> when the binary is linked against the relevant library</li>
<li>calling <code class="highlighter-rouge">systemd-notify(1)</code> with the appropriate command line arguments</li>
</ul>
<p>The following <code class="highlighter-rouge">systemd(1)</code> unit will provide a simple example:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
Description=my daemon
[Service]
Type=simple
ExecStart=/bin/sh -xc "while : ; do date ; sleep 3 ; test -f /tmp/DONT_NOTIFY || systemd-notify \"WATCHDOG=1\" ; done"
NotifyAccess=all
WatchdogSec=6
Restart=on-failure
</code></pre></div></div>
<p>This unit starts a subshell (<code class="highlighter-rouge">sh(1p)</code>) because your <code class="highlighter-rouge">ExecStart</code> directive needs the full path to a binary. This subshell then enters an infinite loop (<code class="highlighter-rouge">while : ; do .. ; done</code>). Every three seconds (<code class="highlighter-rouge">sleep 3</code>) it prints the current date and time (<code class="highlighter-rouge">date</code>) and checks whether a file called <code class="highlighter-rouge">/tmp/DONT_NOTIFY</code> exists (<code class="highlighter-rouge">test -f</code>). If it does not (<code class="highlighter-rouge">||</code>) exist, <code class="highlighter-rouge">/usr/bin/systemd-notify</code> is called with the command line argument <code class="highlighter-rouge">WATCHDOG=1</code>, thus sending a heartbeat to <code class="highlighter-rouge">systemd(1)</code>.</p>
<p>The other directives define how <code class="highlighter-rouge">systemd(1)</code> will react in case of error and how the error state is defined. By setting <code class="highlighter-rouge">WatchdogSec</code> we activate the watchdog functionality and set the interval in which we expect heartbeats to six seconds. However, I recommend sending heartbeats more often than that; half the interval <a href="http://0pointer.de/blog/projects/watchdog.html">seems to be best practice</a>, which is why we <code class="highlighter-rouge">sleep(1)</code> for three seconds. Because we activated the watchdog functionality, <code class="highlighter-rouge">systemd(1)</code> will set the <code class="highlighter-rouge">WATCHDOG_USEC</code> environment variable for the process, which could in turn use it to calculate how often it needs to send a heartbeat.</p>
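<p>A daemon written as a shell script could derive its heartbeat interval from that variable. A minimal sketch; the default value below is only an assumption so that the snippet can be run outside of systemd:</p>

```shell
# WATCHDOG_USEC is set by systemd when WatchdogSec= is active;
# the fallback of 6000000 (6s) here is only for standalone testing.
WATCHDOG_USEC=${WATCHDOG_USEC:-6000000}
# Best practice: send heartbeats at half the watchdog interval.
# Convert microseconds to seconds and halve.
INTERVAL=$(( WATCHDOG_USEC / 2 / 1000000 ))
echo "sending a heartbeat every ${INTERVAL}s"
```

<p>With <code class="highlighter-rouge">WatchdogSec=6</code> this yields an interval of three seconds, matching the <code class="highlighter-rouge">sleep 3</code> in the unit above.</p>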
<p>If <code class="highlighter-rouge">systemd(1)</code> does not receive the heartbeat in time, it will restart the daemon (<code class="highlighter-rouge">Restart=on-failure</code>). In our example we could trigger this behaviour by <code class="highlighter-rouge">touch(1)</code>ing <code class="highlighter-rouge">/tmp/DONT_NOTIFY</code> and waiting a few seconds. Most of the time, your daemon will start new child processes, which <code class="highlighter-rouge">systemd(1)</code> will of course notice and also manage. The directive <code class="highlighter-rouge">NotifyAccess=all</code> tells <code class="highlighter-rouge">systemd(1)</code> to accept the heartbeat from all processes belonging to that specific unit, not just the main (parent) process. In our case this does not change the behaviour; I only include it because most real-world cases need it.</p>
<h1 id="build-aur-packages-in-docker">Build AUR Packages in a Docker Container</h1>
<p><em>2016-12-15</em></p>
<p>I switched back to <a href="https://www.archlinux.org/">Arch Linux</a>, partly because of the amazing <a href="https://wiki.archlinux.org/index.php/Arch_User_Repository">Arch User Repository (AUR)</a>. I use <a href="https://archlinux.fr/yaourt-en"><code class="highlighter-rouge">yaourt</code></a> as my <a href="https://wiki.archlinux.org/index.php/AUR_helpers">AUR helper</a>. We will use <a href="https://hub.docker.com/r/heichblatt/archlinux-yaourt/">my Arch Linux Docker image with yaourt included</a> to sandbox the whole process of building a package from a <a href="https://wiki.archlinux.org/index.php/PKGBUILD"><code class="highlighter-rouge">PKGBUILD</code></a>. We mount a directory <code class="highlighter-rouge">pkgs</code> under our current working directory at <code class="highlighter-rouge">/var/cache/pacman/pkg</code> in the container to store the built packages and all necessary dependencies. <code class="highlighter-rouge">yaourt</code> will keep a copy of all downloaded packages in said directory. We also tell <code class="highlighter-rouge">yaourt</code> to <code class="highlighter-rouge">export</code> the built packages there. Since <code class="highlighter-rouge">yaourt</code> will let us edit the <code class="highlighter-rouge">PKGBUILD</code> and <code class="highlighter-rouge">*.install</code> files, we need to specify an <code class="highlighter-rouge">$EDITOR</code>. The last argument is the name of the package to be built, in this example the simply wonderful <a href="https://lumina-desktop.org/">Lumina desktop environment</a>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run -ti -v $PWD/pkgs:/var/cache/pacman/pkg -e "EDITOR=nano" heichblatt/archlinux-yaourt yaourt -Sy --export /var/cache/pacman/pkg lumina-desktop
</code></pre></div></div>
<p>This way, if you miss any malicious commands in the scripts being run during the build, the damage gets contained in the ephemeral container. I’ll leave it to the reader to find further ways to secure the container. What’s more, you could even write your own wrapper around <a href="https://wiki.archlinux.org/index.php/makepkg"><code class="highlighter-rouge">makepkg</code></a> and put it in your <a href="https://archlinux.fr/man/yaourtrc.5.html"><code class="highlighter-rouge">yaourtrc</code></a>.</p>
<h1 id="openvpn-bridge-libvirt">Access a Remote LibVirt Network via OpenVPN</h1>
<p><em>2015-11-24</em></p>
<p>This is very loosely based on a <a href="http://koofr.net/bridging-two-host-local-virtual-networks-with-openvpn/">how-to by the Koofr team</a>.
We assume the following:</p>
<ul>
<li>You have a remote box with libvirtd running on a RHEL 7-compatible OS.</li>
<li>You have created a network 192.168.100.0/24 in said libvirtd.</li>
<li>You want to access the VMs in said network from your local box.</li>
<li>The bridge interface is called virbr1.</li>
</ul>
<p>We start on the remote server. First, we install the necessary software packages.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yum install openvpn easy-rsa
mkdir -p /etc/openvpn/easy-rsa
cp -rf /usr/share/easy-rsa/2.0/* /etc/openvpn/easy-rsa/
cd /etc/openvpn/easy-rsa/
vim vars # set your vars
cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf
</code></pre></div></div>
<p>Now we create some keys and certificates.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /etc/openvpn/easy-rsa/
source ./vars
./clean-all
./build-ca # build CA key and certificate
./build-dh # build Diffie-Hellman parameters
./build-key-server server
./build-key client # your client key and certificate
cd /etc/openvpn
cp easy-rsa/keys/ca.{crt,key} easy-rsa/keys/dh2048.pem easy-rsa/keys/server.{crt,key} .
</code></pre></div></div>
<p>Put the following in /etc/openvpn/libvirt-bridge.conf, your server config:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mode server
tls-server
local YOUR_SERVERS_PUBLIC_IP
port 1194
proto udp
dev tap0
script-security 2
up "/etc/openvpn/bridgeup.sh virbr1 tap0 1500"
down "/etc/openvpn/bridgedown.sh virbr1 tap0"
persist-key
persist-tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
cipher BF-CBC
comp-lzo yes
ifconfig-pool-persist ipp.txt
server-bridge 192.168.100.1 255.255.255.0 192.168.100.201 192.168.100.201
max-clients 3 # only this many clients can connect simultaneously
user nobody
group nobody
keepalive 10 120
status server.log
verb 3
</code></pre></div></div>
<p>You might have noticed two scripts mentioned in the config: <code class="highlighter-rouge">bridgeup.sh</code> and <code class="highlighter-rouge">bridgedown.sh</code>. These are the scripts OpenVPN calls when it successfully creates and destroys its own network interface. They add or remove the OpenVPN interface from the LibVirt bridge, respectively.</p>
<p><code class="highlighter-rouge">bridgeup.sh</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
<span class="nv">BR</span><span class="o">=</span><span class="nv">$1</span>
<span class="nv">DEV</span><span class="o">=</span><span class="nv">$2</span>
<span class="nv">MTU</span><span class="o">=</span><span class="nv">$3</span>
/sbin/ip <span class="nb">link set</span> <span class="s2">"</span><span class="nv">$DEV</span><span class="s2">"</span> up promisc on mtu <span class="s2">"</span><span class="nv">$MTU</span><span class="s2">"</span>
/sbin/brctl addif <span class="s2">"</span><span class="nv">$BR</span><span class="s2">"</span> <span class="s2">"</span><span class="nv">$DEV</span><span class="s2">"</span>
<span class="nb">exit </span>0
</code></pre></div></div>
<p><code class="highlighter-rouge">bridgedown.sh</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/sh </span>
<span class="nv">BR</span><span class="o">=</span><span class="nv">$1</span>
<span class="nv">DEV</span><span class="o">=</span><span class="nv">$2</span>
/sbin/brctl delif <span class="s2">"</span><span class="nv">$BR</span><span class="s2">"</span> <span class="s2">"</span><span class="nv">$DEV</span><span class="s2">"</span>
/sbin/ip <span class="nb">link set</span> <span class="s2">"</span><span class="nv">$DEV</span><span class="s2">"</span> down
<span class="nb">exit </span>0
</code></pre></div></div>
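<p>On newer distributions that no longer ship <code class="highlighter-rouge">brctl(8)</code> from bridge-utils, both operations can be expressed with iproute2 alone. A sketch of an equivalent <code class="highlighter-rouge">bridgeup.sh</code> body:</p>

```shell
#!/bin/sh
# iproute2-only equivalent of bridgeup.sh: bring the tap device up
# and attach it to the bridge without brctl.
BR=$1
DEV=$2
MTU=$3
/sbin/ip link set "$DEV" up promisc on mtu "$MTU"
/sbin/ip link set "$DEV" master "$BR"   # replaces: brctl addif "$BR" "$DEV"
exit 0
```

<p>Detaching works analogously with <code class="highlighter-rouge">ip link set "$DEV" nomaster</code> in place of <code class="highlighter-rouge">brctl delif</code>.</p>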
<p>Finally, we open the OpenVPN port in <a href="https://fedoraproject.org/wiki/FirewallD">firewalld</a>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>firewall-cmd --get-active-zones # find out your currently active zone
firewall-cmd --zone=public --add-port=1194/udp # change current config
firewall-cmd --zone=public --add-port=1194/udp --permanent # change permanent config
</code></pre></div></div>
<p>After starting the service (careful, <a href="http://0pointer.de/blog/projects/instances.html">systemd instantiated units</a>), we should be done with the server side.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>systemctl enable openvpn@libvirt-bridge
systemctl start openvpn@libvirt-bridge
</code></pre></div></div>
<p>On your local machine, install OpenVPN, download <code class="highlighter-rouge">ca.crt</code>, <code class="highlighter-rouge">client.crt</code> and <code class="highlighter-rouge">client.key</code> from your server and put them in one directory. On SELinux-enabled machines this directory should be under <code class="highlighter-rouge">~/.cert</code>. Then we add our <code class="highlighter-rouge">client.conf</code>, which should look like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>client
remote YOUR_SERVERS_PUBLIC_IP 1194
dev tap
proto udp
resolv-retry infinite
nobind
persist-key
persist-tun
verb 2
ca YOUR_HOME/.cert/myvpn/ca.crt
cert YOUR_HOME/.cert/myvpn/client.crt
key YOUR_HOME/.cert/myvpn/client.key
comp-lzo yes
script-security 2
</code></pre></div></div>
<p>You can now start your client with</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo openvpn --config client.conf
</code></pre></div></div>
<p>After that, you should be able to connect directly to your VMs living in your libvirtd network, e.g.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>nmap -p 22 192.168.100.123
</code></pre></div></div>
<h1 id="tarsnap-on-centos-7">Install Tarsnap on CentOS 7</h1>
<p><em>2015-09-27</em></p>
<p>To use the excellent <a href="https://www.tarsnap.com/">Tarsnap online backup service</a> on a CentOS 7 system, follow these steps:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /usr/src/
wget https://www.tarsnap.com/download/tarsnap-autoconf-1.0.36.1.tgz
tar xf tarsnap-autoconf-1.0.36.1.tgz
wget https://www.tarsnap.com/download/tarsnap-sigs-1.0.36.1.asc
cat tarsnap-sigs-1.0.36.1.asc  # note the SHA256 hash listed in the signature file
sha256sum tarsnap-autoconf-1.0.36.1.tgz  # compare against the hash above
cd tarsnap-autoconf-1.0.36.1
yum install gcc e2fsprogs-devel zlib-devel openssl-devel
./configure
make
make install
cp /usr/local/etc/tarsnap.conf.sample /usr/local/etc/tarsnap.conf
</code></pre></div></div>
<p>Edit <code class="highlighter-rouge">/usr/local/etc/tarsnap.conf</code>. Then create a key:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/usr/local/bin/tarsnap-keygen --keyfile /root/tarsnap.key --user YOUR_USER_ID --machine MACHINE_ID
</code></pre></div></div>
<p>Copy the key to a safe location; if you lose it, you cannot access your backups.</p>
<p>Create cronjobs like this one:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@daily /usr/local/bin/tarsnap -c -f my-root-server-$(date +\%Y\%m\%d) /var/lib/mysql /home /etc /var/log
</code></pre></div></div>
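<p>For completeness, listing and restoring work much like with <code class="highlighter-rouge">tar(1)</code>. The archive name below is hypothetical and merely follows the cronjob’s naming scheme:</p>

```shell
# List all archives stored under this key, then extract one of them
# into the current directory (paths are stored relative, e.g. etc/...).
/usr/local/bin/tarsnap --list-archives
/usr/local/bin/tarsnap -x -f my-root-server-20150927
```
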
<h1 id="install-docker-compose-on-fedora">Install Docker Compose on Fedora</h1>
<p><em>2015-09-07</em></p>
<p>Fedora 22 currently only has <a href="https://apps.fedoraproject.org/packages/fig/overview/">version 1.0.0</a> of <a href="https://docs.docker.com/compose/install/">Docker Compose</a> in its repositories and the instructions on Docker’s website do not work at the moment. I’m here to help.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo dnf install python-devel gcc
sudo pip install -U docker-compose==1.4.0 websocket
docker-compose --version
</code></pre></div></div>
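<p>For a quick smoke test, here is a minimal <code class="highlighter-rouge">docker-compose.yml</code> in the v1 format (no <code class="highlighter-rouge">version:</code> key) that Compose 1.4 expects; the service name and image are arbitrary examples:</p>

```shell
# Minimal v1-format compose file; "hello" and busybox are just examples.
cat > docker-compose.yml <<'EOF'
hello:
  image: busybox
  command: echo "compose works"
EOF
```

<p>With Docker running, <code class="highlighter-rouge">docker-compose up</code> should pull busybox, print the message and exit.</p>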
<h1 id="crack-zip-file-passwords">Crack ZIP Passwords (2015-08-18)</h1>
<p>I will assume that you are allowed to crack the .zips you will be cracking.</p>
<p>We will use <a href="http://oldhome.schmorp.de/marc/fcrackzip.html">fcrackzip</a> for this. I could not get the much better-known <a href="http://www.openwall.com/john/">John The Ripper</a> to work reliably in this case.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://oldhome.schmorp.de/marc/data/fcrackzip-1.0.tar.gz
tar xf fcrackzip-1.0.tar.gz
cd fcrackzip-1.0
./configure
make
sudo make install
</code></pre></div></div>
<p>Suppose we know that the password is a five-digit PIN consisting only of numbers, <a href="http://www.theunixschool.com/2012/04/different-ways-to-zero-pad-number-or.html">padded with zeros</a>.
We create a wordlist of all possible passwords:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>typeset -Z5 i ; for i in {0..99999} ; do echo $i ; done > /tmp/wordlist
</code></pre></div></div>
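<p><code class="highlighter-rouge">typeset -Z5</code> is ksh/zsh syntax; on plain bash, GNU coreutils’ <code class="highlighter-rouge">seq -w</code> produces the same zero-padded list:</p>

```shell
# seq -w pads every number to the width of the largest (here 5 digits)
seq -w 0 99999 > /tmp/wordlist
head -n 1 /tmp/wordlist    # 00000
wc -l < /tmp/wordlist      # 100000
```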
<p>Then we use that wordlist with fcrackzip:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/usr/local/bin/fcrackzip --use-unzip --dictionary -p /tmp/wordlist ~/my-zip-file.zip
</code></pre></div></div>
<h1 id="shrew-soft-ipsec-vpn-client-centos">Installing Shrew Soft IPsec VPN Client on CentOS 7 (2015-08-05)</h1>
<p>To install <a href="https://www.shrew.net/download/ike">Shrew Soft IPsec VPN Client</a> on CentOS 7, follow these steps.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yum groupinstall -y "Development Tools"
yum install -y wget qt qt-devel cmake make openssl-devel libedit-devel
wget -P /usr/src https://www.shrew.net/download/ike/ike-2.2.1-release.tgz
cd /usr/src
tar xf ike-2.2.1-release.tgz
cd ike
cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES
make
make install
</code></pre></div></div>
<h1 id="first-steps-with-runc">First Steps with runC (2015-07-08)</h1>
<p>We will take a look at <a href="https://github.com/opencontainers/runc">runC</a>, a tool</p>
<blockquote>
<p>for spawning and running containers according to the OCF specification.</p>
</blockquote>
<p>You can read more about the <a href="https://github.com/opencontainers/specs">Open Container Format specification</a> at <a href="https://runc.io/">their website</a> or in the <a href="https://blog.docker.com/2015/06/runc/">announcement by Docker, Inc</a>.</p>
<h2 id="installation">Installation</h2>
<p>I use Fedora 22.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export GOPATH=$HOME/golang
mkdir -pv $GOPATH/src/github.com/opencontainers
cd $GOPATH/src/github.com/opencontainers
git clone https://github.com/opencontainers/runc
cd runc
make
sudo make install # forgive me
</code></pre></div></div>
<h2 id="first-usage">First Usage</h2>
<p>We have runC create a template container definition. We then create a directory called <code class="highlighter-rouge">rootfs</code> to keep our filesystem.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>runc spec > config.json
mkdir rootfs
</code></pre></div></div>
<p>We pull the current CentOS image.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull centos:7
</code></pre></div></div>
<p>We export said image to a tarball.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker export $(docker create centos:7) > centos.tar
</code></pre></div></div>
<p>We then expand that tarball to the sub-directory <code class="highlighter-rouge">rootfs</code>, remove the tarball and <code class="highlighter-rouge">chown</code> the directory to root.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tar -C rootfs -xf centos.tar
rm centos.tar
sudo chown root:root -R rootfs
</code></pre></div></div>
<p>Because UTS namespaces are <a href="http://crosbymichael.com/creating-containers-part-1.html">not supported yet</a>, we need to remove the <code class="highlighter-rouge">hostname</code> line.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i '/hostname/d' config.json
</code></pre></div></div>
<p>We then make the <code class="highlighter-rouge">rootfs</code> writable (<code class="highlighter-rouge">readonly: false</code>). This command will remove the tabs before the <code class="highlighter-rouge">readonly</code> line. This is only cosmetic but easier than dealing with <code class="highlighter-rouge">sed</code>’s tab handling.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i '/\"rootfs\"\,/!b;n;c\"readonly\"\:\ false' config.json
</code></pre></div></div>
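<p>To see what the substitution does, here it is applied to a minimal stand-in for the relevant fragment of <code class="highlighter-rouge">config.json</code> (a demo file, not the full <code class="highlighter-rouge">runc spec</code> output):</p>

```shell
# Stand-in for the relevant config.json fragment; the sed matches the
# line containing "rootfs", prints it, then replaces the following line.
printf '\t"root": {\n\t\t"path": "rootfs",\n\t\t"readonly": true\n\t},\n' > /tmp/demo.json
sed -i '/\"rootfs\"\,/!b;n;c\"readonly\"\:\ false' /tmp/demo.json
cat /tmp/demo.json   # the readonly line now reads "readonly": false, untabbed
```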
<p>runC needs to run as root, yet root does not have <code class="highlighter-rouge">/usr/local/bin</code> in her <code class="highlighter-rouge">$PATH</code> so we call the binary with its full path.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/usr/local/bin/runc
</code></pre></div></div>
<p>We’re in.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sh-4.2#
</code></pre></div></div>
<h2 id="notes">Notes</h2>
<p>While I was preparing this post, Georg Kunz of CloudGear wrote a <a href="https://www.cloudgear.net/blog/2015/getting-started-with-runc/">very similar howto</a>. And got featured in <a href="https://blog.docker.com/docker-weekly-archives/">Docker Weekly</a>. I think mine differs enough and adds enough detail to publish it anyway. See also the <a href="https://github.com/opencontainers/runc">official documentation</a>, which at the time of writing is incomplete.
Keep in mind that runC is still pre-alpha.</p>
<h1 id="vagrant-with-libvirt-support-on-centos-7">Vagrant with libvirtd on CentOS 7 (2015-05-31)</h1>
<p><a href="http://vagrantup.com/">Vagrant</a> is a tool to</p>
<blockquote>
<p>[c]reate and configure lightweight, reproducible, and portable development environments.</p>
</blockquote>
<p>It takes a declarative definition (a <a href="https://docs.vagrantup.com/v2/vagrantfile/index.html">Vagrantfile</a>) of a virtual machine and creates that machine for you. Among its supported virtualization technologies are <a href="http://virtualbox.org">VirtualBox</a> and <a href="http://libvirt.org/">libvirt</a>. We want to be able to do this on a server running CentOS 7 and use an <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/">already installed <code class="highlighter-rouge">libvirtd</code></a> as the provider.</p>
<p>First, we remove any previously installed versions of Vagrant.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yum remove vagrant
</code></pre></div></div>
<p>We will install Vagrant from the <a href="https://copr.fedoraproject.org/coprs/jstribny/vagrant1/">jstribny/vagrant1</a> <a href="https://fedorahosted.org/copr/">COPR</a>, which depends on two other COPRs providing the <a href="https://www.softwarecollections.org/en/scls/rhscl/ruby200/">Ruby200</a> and <a href="https://www.softwarecollections.org/en/scls/rhscl/ror40/">Ror40</a> <a href="https://www.softwarecollections.org/">Software Collections</a>. This is the approach <a href="http://fedoramagazine.org/running-vagrant-fedora-22/">recommended by Fedora Magazine</a>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd /etc/yum.repos.d/
wget https://copr.fedoraproject.org/coprs/rhscl/ruby200-el7/repo/epel-7/rhscl-ruby200-el7-epel-7.repo
wget https://copr.fedoraproject.org/coprs/rhscl/ror40-el7/repo/epel-7/rhscl-ror40-el7-epel-7.repo
wget https://copr.fedoraproject.org/coprs/jstribny/vagrant1/repo/epel-7/jstribny-vagrant1-epel-7.repo
yum install vagrant1 vagrant1-vagrant-libvirt
</code></pre></div></div>
<p>Because we want to use our own default provider instead of VirtualBox, we need to tell Vagrant by setting an environment variable. You may append this to your shell configuration file (<a href="http://fabiorehm.com/blog/2013/11/12/set-the-default-vagrant-provider-from-your-vagrantfile/">or your Vagrantfile</a>).</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export VAGRANT_DEFAULT_PROVIDER=libvirt
</code></pre></div></div>
<p>Now we activate the <code class="highlighter-rouge">vagrant1</code> Software Collection for our current shell.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scl enable vagrant1 $SHELL
</code></pre></div></div>
<p>Change into the directory containing our Vagrantfile (for example by cloning and entering <a href="https://github.com/heichblatt/vagrant-fedora22">vagrant-fedora22</a>) and start the VM.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant up
</code></pre></div></div>
<h1 id="dockerized-oracle-linux">Dockerized Oracle Linux (2014-12-02)</h1>
<p>If you ever find yourself in need of a Docker container based on <a href="http://www.oracle.com/technetwork/server-storage/linux/overview/index.html">Oracle Linux</a>, the following command will get you there:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget -O- --no-check-certificate https://public-yum.oracle.com/docker-images/OracleLinux/OL7/oraclelinux-7.0.tar.xz \
| unxz \
| docker load
docker images|grep oraclelinux
docker run -ti oraclelinux:7.0 bash
cat /etc/oracle-release
Oracle Linux Server release 7.0
</code></pre></div></div>
<p>Obviously, running OL under Docker prevents you from taking advantage of the <a href="http://www.oracle.com/technetwork/server-storage/linux/technologies/uek-overview-2043074.html">Unbreakable Enterprise Kernel</a>.</p>
<h1 id="various-thoughts-on-docker">Various Thoughts on Docker (2014-09-02)</h1>
<h2 id="misconceptions-we-could-have-avoided-equal-time-wasted">Misconceptions we could have avoided equal time wasted</h2>
<p>Driven by the current hype surrounding Docker, there have been a couple of articles comparing Docker to KVM or, more generally, hypervisors to containers. It’s a shame for so many intelligent people to waste their precious capacity on what I consider a lame comparison: they are comparing concepts that aren’t even in the same larger category. It is possible to compare those categories, or members of them, but to me this seems futile as well.</p>
<p>There is, however, a second wave of articles, exemplified by Russell Pavlicek’s <a href="http://www.linux.com/news/enterprise/cloud-computing/785769-containers-vs-hypervisors-the-battle-has-just-begun">Containers vs Hypervisors: The Battle Has Just Begun</a>. They dodge said misconception to then compare the two categories mentioned earlier. Personally, I still consider them a mild waste of time, the topic having been discussed at great length over the last decades. This is not a new topic, and especially containerization is not a new concept, cf. BSD jails, Solaris Zones, even OpenVZ and chroot to a certain extent. I understand that there are people working in technology today who did not experience the previous containerization waves (I certainly didn’t), yet we can’t dedicate the mainstream discussion within the interested parts of the technology community to helping newcomers catch up.</p>
<p>One might argue that, while containerization has come into focus before, this time it is driven by a new approach, recent developments in technology and is generally targeted at a new generation of computer scientists, but then the discussion should be focussing on the differences these new developments bring. Otherwise, it seems to me, it may look like the tech community is celebrating something they already had achieved. We already had sandboxing in terms of separated filesystems running on the host kernel. What Docker brought us is integration of newly introduced features of the Linux kernel: cgroups, AuFS, etc into a fresh approach on containerization. Additionally, the people behind it have done many things right in the fields of publicity, quickly integrating community feedback, documentation and responsiveness to their audience. Docker came into existence because new possibilities opened up and amounted to enough potential to trigger what I consider an overhaul of known solutions to known problems, not because we faced new problems.</p>
<h2 id="my-guesses-on-what-might-be-next-for-the-docker-ecosystem">My guesses on what might be next for the Docker ecosystem</h2>
<p>As far as Docker and its beautifully blossoming ecosystem are concerned, I think we’re well past the boom phase, heading for the bust. We’ve seen this before: new ecosystems around shiny new technologies explode, continue to grow, fragment and balkanize before sinking back into oblivion. The only way around these dangers of the post-peak part of such a wave is to consolidate around standard solutions and models, define best practices and maybe even integrate into (naturally evolving) standard stacks.</p>
<p>I think Docker is beyond its peak in terms of publicity. It’s here, it’s great, we know how to use it. What the ecosystem is currently figuring out are ways to orchestrate Docker containers (cf. Fig, Flynn, Dokku, etcd, CoreOS, fleetd). This part of the ecosystem is currently in full bloom and I’m curiously standing by to see what solutions will remain. A great dying is ahead and a lot of dead ends will be abandoned. After that, we are usually left with a handful of field-tested solutions for the handful of broad problem categories the underlying solution is targeting in the first place. Dokku for simple single host scenarios, a fleet of automatically provisioned Docker containers using etcd and fleetd running on OpenStack for more complex scenarios. Even if you can precisely define your needs and the key parameters of your problem, as things stand this will not lead you directly to a go-to solution for similar cases, because at this point there are dozens of them, half of which will be dead by next Friday.</p>
<p>I might sound like a sheep complaining that the herd can’t decide on a common path to blindly follow. My point, though, is that the larger context of containerization is deeply connected to the operations part of the technology industry. In the spirit of DevOps, it’s a nice intersection of developer and system administration work. The larger your system gets, however, the more the focus shifts to the sysadmin part of that junction. Docker is a way to declaratively define a system, which helps developers and admins alike. When the admin deploys 500 containers to form the application in a production environment, the orchestrating layer I described earlier becomes key. The larger your system gets, the more you want reliable, time-tested solutions with a healthy community. These solutions probably already exist, most others just have to move out of the way for us to see them for what they are: the actual, very small part of Docker I’m willing to call revolutionary. We knew containers, we just didn’t think of them as smaller pieces in a largely self-managed network, defined simply and declaratively. And now we do.</p>
<h1 id="openvpn-centos">Installing OpenVPN Server on CentOS/RH/Scientific Linux (2014-02-01)</h1>
<p>There are a couple of How-Tos around the net but none of them worked for me so I just want to dump my command line history here. This is loosely based on <a href="https://www.digitalocean.com/community/articles/how-to-setup-and-configure-an-openvpn-server-on-centos-6">Digital Ocean’s How-To</a> but includes some additions and corrections. I use CentOS 6.5.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yum install openvpn -y
yum install easy-rsa -y
cp /usr/share/doc/openvpn-2.3.2/sample/sample-config-files/server.conf /etc/openvpn/server.conf
vim /etc/openvpn/server.conf # set dh dh2048.pem
mkdir -p /etc/openvpn/easy-rsa
cp -rf /usr/share/easy-rsa/2.0/* /etc/openvpn/easy-rsa/
cd /etc/openvpn/easy-rsa/
vim vars # define your variables
cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf
cd /etc/openvpn/easy-rsa/
source ./vars
./clean-all
./build-ca
time ./build-dh # this takes forever
./build-key-server server
cd /etc/openvpn/easy-rsa/keys
cp dh2048.pem ca.crt server.crt server.key /etc/openvpn/
cd /etc/openvpn/easy-rsa/
./build-key --batch client1
# the next three lines set 10.8.0.10 as static IP for client1
vim /etc/openvpn/server.conf # uncomment client-config-dir ccd
mkdir -p /etc/openvpn/ccd
echo "ifconfig-push 10.8.0.10 10.8.0.11" > /etc/openvpn/ccd/client1
service openvpn start
</code></pre></div></div>
<p>If you have several clients you can use the following script to ease generating keys for them. Substitute USERNAME with your own username.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="o">[</span> <span class="nv">$# </span><span class="nt">-eq</span> 0 <span class="o">]</span> <span class="o">&&</span> <span class="o">{</span> <span class="nb">echo</span> <span class="s2">"Usage: </span><span class="nv">$0</span><span class="s2"> client_id"</span><span class="p">;</span> <span class="nb">exit </span>1<span class="p">;</span> <span class="o">}</span>
<span class="nv">ID</span><span class="o">=</span><span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span>
<span class="nb">cd</span> /etc/openvpn/easy-rsa
<span class="nb">source</span> ./vars
./build-key <span class="nt">--batch</span> <span class="s2">"</span><span class="nv">$ID</span><span class="s2">"</span>
<span class="nb">tar </span>czvf /tmp/<span class="s2">"</span><span class="nv">$ID</span><span class="s2">"</span>.tgz ./keys/<span class="o">{</span><span class="s2">"</span><span class="nv">$ID</span><span class="s2">"</span><span class="k">*</span>,ca.crt<span class="o">}</span>
<span class="nb">chown </span>USERNAME /tmp/<span class="s2">"</span><span class="nv">$ID</span><span class="s2">"</span>.tgz
</code></pre></div></div>
<p>After each run, you can pull the new keys to your client and use them.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scp $YOURSERVER:/tmp/$YOURCLIENT.tgz .
</code></pre></div></div>
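<p>From there, a minimal client config referencing the pulled keys could look like this (a sketch; the server address, port and key/cert file names are placeholders to adjust for your setup):</p>

```shell
# Minimal OpenVPN client config (sketch; YOUR_SERVER_IP, the port and the
# ca/cert/key file names are placeholders for your own setup).
cat > client1.conf <<'EOF'
client
dev tun
proto udp
remote YOUR_SERVER_IP 1194
ca ca.crt
cert client1.crt
key client1.key
comp-lzo
verb 3
EOF
```

<p>Start it with <code class="highlighter-rouge">openvpn --config client1.conf</code>.</p>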
<h1 id="new-job-idiv">New Job: iDiv (2013-12-29)</h1>
<p>Recently, I’ve started to work for the <a href="http://www.idiv.de/idiv-global/?lang=en">German Centre for Integrative
Biodiversity Research (iDiv)</a>.
It is a joint centre of the universities of Halle, Jena and Leipzig. I
am part of a three-person IT support unit. In the near future the
institute will grow to nearly 200 employees, bringing challenging
problems and tasks. I’m looking forward to interesting Linux-related problems and
projects. Looking at the weeks already spent there, I don’t regret my
change of position.</p>
<h1 id="accessing-a-fritzbox-vpn-from-debian-wheezy">Accessing a FritzBox via VPN from Debian Wheezy (2013-10-07)</h1>
<p>Suppose we are on a Debian box running Wheezy: how can we connect to a
FritzBox VPN?</p>
<p>We need to install the Shrew Soft VPN Client locally. Get the tarball on the
<a href="https://www.shrew.net/download/ike">downloads page</a>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget https://www.shrew.net/download/ike/ike-2.2.1-release.tgz
tar xf ./ike-2.2.1-release.tgz
cd ike
sudo apt-get install -y libqt4-dev qt4-dev-tools build-essential
cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES -DLIBDIR=/lib
make qikec qikea
sudo checkinstall make install
</code></pre></div></div>
<p>Start the daemon (<code class="highlighter-rouge">iked</code>) and the client (<code class="highlighter-rouge">qikec</code>).</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo iked &
qikec &
</code></pre></div></div>
<p>In the client, create a new connection with <a href="http://www.avm.de/de/Service/Service-Portale/Service-Portal/VPN_Interoperabilitaet/15729.php">the following settings</a>:</p>
<ul>
<li>General:
<ul>
<li>“Host Name or IP Address” host name of your FritzBox</li>
<li>“Port” 500</li>
<li>“Auto Configuration” “ike config pull”</li>
<li>“Address Method” “Use a virtual adapter and assigned address”</li>
<li>Activate “Obtain Automatically”.</li>
</ul>
</li>
</ul>
<!-- -->
<ul>
<li>“Client” keep defaults</li>
<li>“Name Resolution”
<ul>
<li>Deactivate “Enable WINS”, “Enable DNS” and “Enable Split DNS”</li>
</ul>
</li>
<li>“Authentication”
<ul>
<li>“Authentication Method” “Mutual PSK”.</li>
<li>“Local Identity”
<ul>
<li>“Identification Type” “User Fully Qualified Domain Name”.</li>
<li>“UFQDN String” VPN user’s e-mail address</li>
</ul>
</li>
<li>“Remote Identity”
<ul>
<li>“Identification Type” “IP Address”</li>
<li>“Use a discovered remote host address”.</li>
</ul>
</li>
<li>“Credentials”
<ul>
<li>“Pre Shared Key” your VPN password</li>
</ul>
</li>
</ul>
</li>
<li>“Phase 1”
<ul>
<li>“Exchange Type” “aggressive”</li>
<li>“Dh Exchange” “group 2”</li>
<li>“Cipher Algorithm” “aes”</li>
<li>“Cipher Key Length” “256” bits</li>
<li>“Hash Algorithm” “sha1”.</li>
<li>“Key Life Time limit” “3600”</li>
</ul>
</li>
<li>“Phase 2”
<ul>
<li>“Transform Algorithm” “esp-aes”</li>
<li>“Transform Key Length” “256” bits</li>
<li>“HMAC Algorithm” “sha1”</li>
<li>“PFS Exchange” “group 2”</li>
<li>“Compress Algorithm” “deflate”</li>
</ul>
</li>
<li>“Policy”
<ul>
<li>deactivate “Maintain Persistent Security Associations” and
“Obtain Topology Automatically or Tunnel All”.</li>
<li>click “Add”, window “Topology Entry”
<ul>
<li>“Type” “Include”.</li>
<li>“Address” IP network (192.168.100.0) of the FritzBox</li>
<li>“Netmask” subnet mask (255.255.255.0) of the FritzBox</li>
<li>“OK”.</li>
</ul>
</li>
</ul>
</li>
<li>“Save”</li>
</ul>
<h1 id="ba-thesis-update">Bachelor Thesis Update (2013-08-12)</h1>
<p>I handed in my bachelor’s thesis on 8 August 2013.</p>
<p>I am not satisfied with how it turned out content-wise. Maybe I am
expecting too much, probably not. I somehow managed to fill the
necessary number of pages but not answer all the questions I wanted to
answer, let alone touch on all the aspects I had in mind. It has been
said before and slowly I’m beginning to admit it: the topic was more
suitable for a master’s thesis, the subject is too broad.</p>
<p>It was fun, though. I enjoyed writing for longer periods of time and
being able to immerse myself in work. I ended up using different tools
than expected. Redmine turned out to be too much for a one-person
project so I dropped it quite early on. A TODO file in my repository
proved to be sufficient. LaTeX and BibTeX were great. Despite wanting to
publicly work on the text and keep it in a public GitHub repo, I used a
private one. Somehow the pressure of instantly publishing my
progress kept me from writing. I switched to Eclipse + EGit +
TeXlipse at some point. Despite a feeling of this being completely
overblown, the combination provided useful integration of the various
tools. I particularly enjoyed the way Git is integrated and presented
visually, EGit is simply amazing.</p>
<p>Now we wait, exam regulations say it may take up to six weeks to get my
result. I enjoyed my first weekend after the storm.</p>
<p>Keep your fingers crossed, people.</p>
<p><strong>Update:</strong> It passed. Only five ECTS points to go.</p>
<h1 id="moocs">Online Learning and MOOCs (2013-07-07)</h1>
<p>Aaron Swartz probably <a href="http://www.aaronsw.com/weblog/productivity">said it
best</a>:</p>
<blockquote>
<p>First, you have to make the best of each kind of time. And second, you
have to try to make your time higher-quality.</p>
</blockquote>
<p>There are times when I’m simply not capable of anything that requires
substantial amounts of brain power. Everyone has these times when he or
she is only able to consume but not produce. That’s OK. Most of us
resort to entertainment. We can do better.</p>
<p>For some time now I have taken online courses or enjoyed educational
podcasts. For those speaking German, there are obvious choices:</p>
<ul>
<li><a href="http://cre.fm/">CRE</a>, formerly known as Chaosradio Express, has
explained countless concepts and phenomena to me and I consider <a href="https://en.wikipedia.org/wiki/Tim_Pritlove">Tim
Pritlove</a> one of my
greatest teachers. If you want to have an expert explain his topic
to you, this one is for you.</li>
<li><a href="http://chaosradio.ccc.de/">Chaosradio</a> is similar but less directed
towards a single topic, it’s like having some friends explain a
current topic to you.</li>
<li><a href="http://soziopod.de/feed/podcast/">Soziopod</a> gives easy to
understand explanations of broad and varied topics and fields
related to sociology.</li>
<li><a href="http://alternativlos.org/">Alternativlos</a> shines thanks to the deep
insight the two moderators have into their topics, mostly
conspiracies, politics and technology. For everything else they
get experts.</li>
</ul>
<p>In the last few months I have taken to dedicated
<a href="https://en.wikipedia.org/wiki/Massive_open_online_course">MOOC</a>
(Massive Open Online Course) sites, such as
<a href="https://www.coursera.org/">Coursera</a>. I don’t care about
certifications, but enjoy the high-quality content on there. I’ve taken
great classes about <a href="https://class.coursera.org/networksonline-001">Social
Networks</a> and <a href="https://class.coursera.org/startup-001">Startup
Engineering</a>. While preparing
for linguistics exams I used the awkwardly named <a href="http://linguistics.online.uni-marburg.de/free/information/portal/home.php">Virtual Linguistics
Campus
(VLC)</a>
by Marburg University. While we’re on the subject of linguistics, there
is <a href="http://www.slate.com/articles/podcasts/lexicon_valley.html">Lexicon
Valley</a>, a
very informative podcast trying to explain various topics surrounding
the English language. And finally, BBC’s <a href="http://www.bbc.co.uk/programmes/b006qykl">In Our
Time</a> is a true treasure trove
of knowledge, each one hour episode being a thorough explanation of a
topic. The topics themselves vary greatly, from antiquity to astronomy,
and in the years of its existence, moderator Melvyn Bragg has
accumulated a <a href="http://www.bbc.co.uk/programmes/b006qykl/episodes/player">magnificent archive of episodes and
topics</a>. More
specific in its topics is the feed of <a href="http://rss.oucs.ox.ac.uk/sociology/sociology-audio/itunesu.xml">lectures by the Department of
Sociology at
Oxford</a>.</p>
<p>Keep episodes of these on your mobile player, especially if you have to
commute more than 20 minutes every day. Make the best of each kind of
time.</p>
<h1 id="install-transmission-remote-gtk-on-debian">How to install transmission-remote-gtk on Debian (2013-06-29)</h1>
<p>For those that use
<a href="https://github.com/ajf8/transmission-remote-gtk"><code class="highlighter-rouge">transmission-remote-gtk</code></a>
to control a <a href="http://www.transmissionbt.com/"><code class="highlighter-rouge">transmission-daemon</code></a> and
sorely miss the package in the Debian repos, this will install version
1.1.1 from source.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># apt-get install intltool pkg-config libjson-glib-dev libgtk-3-dev libcurl4-gnutls-dev
$ wget https://transmission-remote-gtk.googlecode.com/files/transmission-remote-gtk-1.1.1.tar.gz
$ tar xf transmission-remote-gtk-1.1.1.tar.gz
$ cd transmission-remote-gtk-1.1.1/
$ ./configure --prefix=/usr/ && make && sudo make install
</code></pre></div></div>
<h1 id="you-need-ops">You need ops (2013-06-09)</h1>
<p>I stumbled across the short blog post <a href="http://joerussbowman.tumblr.com/post/51388938822/who-needs-ops-anyway">Who needs ops
anyway</a>,
which is a sarcastic comment on another company’s failure to acknowledge
its need for ops. Despite my distaste for relabeling administrators as
ops, he absolutely has a point. More times than is healthy, executives
have decided they could do away with dedicated admins and just have
developers run their infrastructure “on the side” or, even worse, do it
themselves. In my experience, something like that happening is a sure
sign of that company’s decay. It’s not that developers would need great
amounts of training or couldn’t do administration at all; it’s
experience that counts (just like in development or any. other. field.),
and they simply aren’t as effective at it. The point
of having specialized people for maintenance and incident response is to
manage recurring or emergency tasks in the most efficient way without
interrupting normal business operations. When your business critical
service goes down at noon, do you really want to send one of your
developers into the wonderful land of
learning-critical-infrastructure-as-you-go or do you want someone who
does nothing else all day besides knowing your systems and keeping them
running?</p>
<p>I know I’m as biased as it gets but think about it: there is a reason
systems administration has been a trade for decades. When the Enterprise
is in trouble, Kirk doesn’t want to read the manual. That’s what Scotty
is for.</p>
<h1 id="new-repo-btsync">New Repo: Bittorrent Sync on 64 bit Linux (2013-06-02)</h1>
<p>I use <a href="http://labs.bittorrent.com/experiments/sync.html">Bittorrent
Sync</a> on a lot of my
machines to keep several directories in sync. In fact, after being
satisfied with Dropbox for a while, I tried nearly everything there is
in synchronization. ownCloud was overkill for what I wanted and did not
offer decentralized sync. I prefer a solution where devices in the same
LAN will sync first, before using the slower connection over the
internet. Dropbox could do that. Simple rsync and self-built solutions
simply did not scale. Sparkleshare does not really work for my mostly
binary files because it keeps a complete history. Git-annex is a beauty,
but it does not work automatically and doesn’t really fit the use case of
many rapidly changing small binary files.</p>
<p>BTSync lacks a web interface for quick file edits, but it is fast,
decentralized and still relatively lightweight.</p>
<p>The <a href="https://github.com/heichblatt/btsync">new repository</a> mainly
consists of a Makefile. After running <code class="highlighter-rouge">make</code>, you get three things:</p>
<p>It downloads the newest 64 bit binary to <code class="highlighter-rouge">/usr/bin</code>. It then copies a
simple init script to <code class="highlighter-rouge">/etc/init.d</code> and puts the current user’s name <a href="https://github.com/heichblatt/btsync/blob/master/init.d/btsync#L4">in
it</a>.
You can put more names in there, if necessary. Upon boot, the script
iterates over the list and starts a <code class="highlighter-rouge">btsync</code> daemon for each of the
users. The individual processes are parameterized with the user’s config
file, found in <code class="highlighter-rouge">~/.sync/config.json</code>. The Makefile will generate the
default version and change (in the current version) three settings:</p>
<ul>
<li>The device ID.</li>
<li>The base directory for BTSync’s data (<code class="highlighter-rouge">~/.sync</code>).</li>
<li>The daemon listens only on localhost:8888 instead of 0.0.0.0:8888.
I cannot imagine why the latter got to be the default.</li>
</ul>
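<p>The startup logic is simple enough to sketch. A dry-run version of the
per-user loop might look roughly like this (the user names are
illustrative, not copied from the actual init script):</p>

```shell
#!/bin/sh
# Dry-run sketch of the init.d loop: one btsync daemon per listed user.
# The names below are examples; the real script carries the names the
# Makefile wrote into it.
USERS="alice bob"

start_all() {
    for u in $USERS; do
        # Each daemon is parameterized with that user's own config file.
        # Echoed instead of executed, so this sketch has no side effects.
        echo "su -s /bin/sh -c '/usr/bin/btsync --config /home/$u/.sync/config.json' $u"
    done
}

start_all
```

<p>Drop the <code class="highlighter-rouge">echo</code> to actually start the daemons.</p>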
<p>All of this is relatively simple and as soon as I get to it, I will
build actual Debian packages. Also, it should be easy to make this
architecture-independent.</p>
Productivity2013-05-23T00:00:00+02:00/personal/2013/05/23/productivity
<p>I wanted to jot down a few quick points about productivity, which has
been one of my interests for quite some time.</p>
<h2 id="people-i-learned-from">People I learned from</h2>
<p>I am a long time follower of <a href="http://calnewport.com/blog/">Cal Newport’s
blog</a> (medium-length introductory talk to
his main points <a href="https://www.youtube.com/watch?v=qwOdU02SE0w">here</a>) and
his motto (taken <a href="http://calnewport.com/blog/2008/02/01/the-steve-martin-method-a-master-comedians-advice-for-becoming-famous/">from Steve
Martin</a>):
<em>Be so good they can’t ignore you.</em></p>
<p>Also, like nearly everyone who has ever come near the topic of
productivity in IT, I have been using my own kind of
<a href="https://en.wikipedia.org/wiki/Getting_Things_Done">GTD-like</a> workflow
and I would recommend Allen’s approach when looking for a good place to
start.</p>
<p>Aaron Swartz’s <a href="http://www.aaronsw.com/weblog/productivity">post on the
topic</a> brought me some
insight, for example that not all kinds of activity have the same priority:</p>
<blockquote>
<p>my list is programming, writing, thinking, errands, reading,
listening, and watching (in that order).</p>
</blockquote>
<p>His words have stayed with me over the weeks since I’ve read them first.
I admire their brevity, simplicity and clarity.</p>
<p>Let’s not forget Lars Wirzenius’ <a href="http://gtdfh.branchable.com/">GTD for
Hackers</a>. It’s a well-written description
of a GTD implementation, giving a practical introduction to the topic.</p>
<h2 id="tools-i-use">Tools I use</h2>
<ul>
<li><strong>Redmine</strong> for everything larger than three items or running longer
than a week.</li>
<li>A <strong>notebook</strong> for errands and for quickly writing down whatever
is thrown at me.</li>
<li><strong>E-mails to myself</strong> for longer ideas when on the move.</li>
<li><strong>Online calendar</strong> for everything that has a concrete date attached
to it.</li>
<li><strong>feed2imap</strong> for fetching a selected few RSS feeds and delivering
them to a special mailbox.</li>
<li><strong>Instapaper</strong> for postponing interesting articles for later reading
when skimming through said mailbox.</li>
<li>An <strong>e-book reader</strong> to actually consume the postponed articles and
mark relevant passages.</li>
<li><strong>Zotero</strong> for organizing academic texts and bibliographies and
larger topical text collections.</li>
</ul>
My Computing Rules2013-04-26T00:00:00+02:00/personal/2013/04/26/computing-rules
<ul>
<li>
<p>Convention over configuration.</p>
<p>Sure, you can use <a href="http://www.zsh.org/"><code class="highlighter-rouge">zsh</code></a> and enjoy all the
little nifty features but you could also just get used to the
defaults of
<a href="https://www.gnu.org/software/bash/manual/bashref.html"><code class="highlighter-rouge">bash</code></a> and
be sure that wherever you login, you already know the environment.
You can customize your tiling window manager until it looks like the
Matrix just coredumped, but if you get accustomed to XFCE’s
defaults, you can get to work minutes after installation.
Customization leads to fiddling.</p>
<p>For example, after using a highly customized
<a href="http://awesome.naquadah.org/"><code class="highlighter-rouge">awesome</code></a> for several years, I
switched to Debian and KDE. I can get a new
computer up and running in less than half an hour and without any
prior configuration, all my machines work and “feel” the same way.
YMMV. I’m not a deskmodder, I’m an admin. Nevertheless, I like to
have my complete set of software and I have a <a href="https://github.com/heichblatt/default-environment-fedora/blob/master/Makefile">Makefile for
that</a>.</p>
</li>
<li>
<p>Use existing technologies.</p>
<p><a href="http://duply.net/"><code class="highlighter-rouge">duply</code></a> for example is a complete backup
solution built on top of <code class="highlighter-rouge">rsync</code>, <code class="highlighter-rouge">gpg</code>, <code class="highlighter-rouge">tar</code> and <code class="highlighter-rouge">gzip</code>. All of
those programs have been around for years. Use existing building
blocks to reach higher while profiting from the proven stability of
those blocks.</p>
</li>
<li>
<p>Open source because it’s transparent.</p>
<p>Someone more intelligent than me once said that you can learn a lot
about a thing when you watch it fall apart. When a software solution
is failing (and it will), I want to be able to look inside and find
out why. Apart from social aspects of the open source movement,
technical transparency is my selling point when it comes to choosing
platforms or products.</p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/KISS_principle">Keep it simple</a></p>
<p>Bash scripts, correct use of version control and the occasional
Makefile can solve more problems than you’d expect. Make it complex
when it has to be, not a second earlier.</p>
</li>
<li>
<p>Encrypt ALL the data.</p>
<p>I use full disk encryption on all my devices. Do you know the sore
feeling when your device gets stolen/lost and you wish you could at
least delete the photos on it? I don’t.</p>
</li>
<li>
<p>Command line over GUI.</p>
<p>Neal Stephenson’s <a href="http://www.cryptonomicon.com/beginning.html">In the Beginning, was the Command
Line</a> was a real eye
opener for me in that it provided a reasonable theory of CLI vs GUI,
“oral history” vs product (Stephenson). Thomas Scoville’s <a href="http://theody.net/elements.html">The
Elements Of Style: UNIX As
Literature</a> complemented this
thought for me:</p>
</li>
</ul>
<blockquote>
<p>Suddenly the overrepresentation of polyglots, liberal-arts types, and
voracious readers in the UNIX community didn’t seem so mysterious, and
pointed the way to a deeper issue: in a world increasingly dominated
by image culture (TV, movies, .jpg files), UNIX remains rooted in the
culture of the word.</p>
</blockquote>
Linkdump2013-02-10T00:00:00+01:00/personal/2013/02/10/linkdump
<ul>
<li>Some interesting thoughts in <a href="https://www.commondreams.org/view/2012/07/02-8">A New Era for Worker
Ownership?</a></li>
<li><a href="http://www.minimallyminimal.com/blog/2012/7/3/the-next-microsoft.html">This</a>
made me wonder how much (re-)branding can change our perception.
Nice work though.</li>
<li>As soon as I have some time on my hands, <a href="http://files.dubfire.net/csoghoian-dissertation-final-version-7-18.pdf">Christopher Soghoian’s
doctoral
thesis</a>
is top of the list.</li>
</ul>
<blockquote>
<p>Telecommunications carriers and service providers now play an
essential role in facilitating modern surveillance by law enforcement
agencies. The police merely select the individuals to be monitored,
while the actual surveillance is performed by third parties: often the
same email providers, search engines and telephone companies to whom
consumers have entrusted their private data. Assisting Big Brother has
become a routine part of business</p>
</blockquote>
<ul>
<li><a href="http://caterina.net/2011/03/15/fomo-and-social-media/">Caterina Fake’s blog
post</a>
introduced me to the concept of
<a href="http://www.urbandictionary.com/define.php?term=fomo">FOMO</a></li>
</ul>
<blockquote>
<p>“FOMO” stands for “Fear of Missing Out” and it’s what happens
everywhere on a typical Saturday night, when you’re trying to decide
if you should stay in, or muster the energy to go to the party.</p>
</blockquote>
<ul>
<li>A <a href="https://bytbox.net//blog/2012/08/leaving-github.html">blog post</a>
that outlines why GitHub is merely built on top of Git and why that
is a bad thing.</li>
</ul>
<blockquote>
<p>Protocols last a long time; ultimately, services like github and
sourceforge are just fads, with very little (I think no) added value</p>
</blockquote>
<ul>
<li>In case you are interested in organizational theory (and who
isn’t?), there is <a href="http://orgtheory.wordpress.com/">this wet dream of a blog about
it</a>.</li>
<li>Anil Dash about <a href="http://dashes.com/anil/2012/12/the-web-we-lost.html">The Web we
lost</a> and
concepts like appification, the sickness that is advertisement and
why communities on the net are becoming increasingly gated.</li>
<li>Some <a href="http://jordanburgess.com/post/35606480702/why-im-starting-a-blog">thoughts
about</a>
publicly speaking your mind in the form of a blog.</li>
</ul>
<blockquote>
<p>I understand that anything I write now opens me up to criticism,
contradiction and mockery (mainly mockery) now and in the future.</p>
</blockquote>
<ul>
<li>Vivek Haldar <a href="http://blog.vivekhaldar.com/post/40460692580/so-good-they-cant-ignore-you-review">reviews and
likes</a>
Cal Newport’s book <a href="http://www.calnewport.com/books/sogood.html">So good they can’t ignore
you</a> (video of Newport
explaining his theses <a href="https://www.youtube.com/watch?v=qwOdU02SE0w">here</a>)</li>
</ul>
I want no local storage anywhere near me2013-01-25T00:00:00+01:00/personal/2013/01/25/i-want-no-local-storage-anywhere-near-me
<p>I have been pondering the <a href="http://usesthis.com/">The Setup</a> interview with
<a href="https://en.wikipedia.org/wiki/Rob_Pike">Rob Pike</a>. He developed <a href="https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs">Plan
9</a> which I was
briefly interested in. He has a very interesting and very radical point
of view on where his data belong:</p>
<blockquote>
<p>I want no local storage anywhere near me other than maybe caches. No
disks, no state, my world entirely in the network. Storage needs to be
backed up and maintained, which should be someone else’s problem, one
I’m happy to pay to have them solve. Also, storage on one machine
means that machine is different from another machine. (..) The
terminal, even though it had a nice color screen and mouse and network
and all that, was just a portal to the real computers in the back.
When I left work and went home, I could pick up where I left off,
pretty much. My dream setup would drop the “pretty much” qualification
from that. (..) As laptops came in, people started carrying computers
around with them everywhere. The reason was to have the state stored
on the computer, not the computer itself. You carry around a computer
so you can access its disk. (..) The world should provide me my
computing environment and maintain it for me and make it available
everywhere. If this were done right, my life would become much simpler
and so could yours.</p>
</blockquote>
<p>It has been a while since I have been as torn about an opinion as I am
about this one. On the one hand, I really like to keep my hands on my
data, which is why I deeply mistrust “The Cloud”, which, as far as pure
storage is concerned, is simply renting storage from someone else and
maybe using their nifty little synchronization daemon. The vision Pike
proposes (and helped bring to life), on the other hand, is a completely
networked world where devices expose their inner workings through
uniform interfaces by which he does not mean the comparatively clumsy
APIs of today, but more in the direction of securely exposing your /dev
and /proc to the net.</p>
<blockquote>
<p>This is 2012 and we’re still stitching together little microcomputers
with HTTPS and ssh and calling it revolutionary. I sorely miss the
unified system view of the world we had at Bell Labs, and the way
things are going that seems unlikely to come back any time soon.</p>
</blockquote>
<p>One question would be how to achieve such uniformity in the first place.
Retrofitting it is hopeless, so how would you have convinced people to
adopt such a radical model back then? But that’s not the main point
here.</p>
<p>At the very least this is one of those rare instances when a person with
genuinely beautiful ideas puts them together and presents a vision.
Refreshing.</p>
There's someone wrong on the internet2013-01-22T00:00:00+01:00/personal/2013/01/22/theres-someone-wrong-on-the-internet
<p>I
<a href="http://caterina.net/2012/11/01/quarrel-not-at-all-or-why-one-shouldnt-engage-in-online-mudslinging/">found</a>
a great quote from Lincoln:</p>
<blockquote>
<p>“Quarrel not at all. No man resolved to make the most of himself can
spare time for personal contention. Still less can he afford to take
all the consequences, including the vitiating of his temper and loss
of self control. Yield larger things to which you can show no more
than equal right; and yield lesser ones, though clearly your own.
Better give your path to a dog than be bitten by him in contesting for
the right. Even killing the dog would not cure the bite.”</p>
</blockquote>
<p>XKCD has, as always, <a href="http://xkcd.com/386/">already covered this years
ago</a>.</p>
<p><strong>Update:</strong> There actually is research about <a href="http://www.motherjones.com/environment/2013/01/you-idiot-course-trolls-comments-make-you-believe-science-less">how internet trolls are
changing other readers’
perception</a>
of the actual content.</p>
Bandwidth Efficiency nowadays2012-12-24T00:00:00+01:00/personal/2012/12/24/bandwidth-efficiency
<p>I stumbled across <a href="http://blog.chriszacharias.com/page-weight-matters">Page Weight
Matters</a> by Chris
Zacharias where he briefly describes how he got a YouTube page down to
100 KB and the pain that caused him. He seems surprised to find that
reducing the fluff on such a widely known site as YT actually brought
more people to the service: mostly those with slow internet connections
or in very remote places, people the average interwebz citizen could
maybe think about a little more often. If that’s too lofty
for you, see it as an exercise in <a href="http://mnmlist.com/w/">minimalism</a>.</p>
<p>However, I was amused by the figures, still having much more restrictive
rules in mind, specifically the <a href="http://web.archive.org/web/20040209204828/http://www.infested.de/old/archive/archive2.txt">HTML Coding
Manifesto</a>
(in German). The authors (ca 2000) were still thinking about
compatibility with terminal browsers! After having thought for several
years that their benchmarks were a bit too strict, I am happy to see
their point made again, more than 10 years later.</p>
<p>Let’s keep it short, shall we?</p>
<p>Now, may I interest you in a website that I always admired for its
completeness and simplicity, given its sole purpose, technical
documentation? <a href="http://openbsd.org">OpenBSD.org</a></p>
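<p>If you want a quick number for a page’s weight, curl can report the
transfer size of a document. A minimal probe (it counts only the HTML
itself, not images, CSS or JS):</p>

```shell
# Print how many bytes a page's HTML document transfers. Note this is
# only the document itself; images, CSS and JS are not counted.
page_weight() {
    curl -s -o /dev/null -w '%{size_download}\n' "$1"
}

# Example: page_weight http://openbsd.org/
```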
My Bachelor Thesis2012-12-09T00:00:00+01:00/study/2012/12/09/my-ba-thesis
<p>I found my first reviewer for my bachelor thesis. It is <a href="http://www.uni-leipzig.de/~culture/mitarb_schwend.htm">Prof. Dr.
Joachim Schwend</a>
and I’m very relieved he accepted. Hopefully, I will find my second
reviewer next week so I can complete the registration by 18 December.</p>
<p>The thesis will fall into cultural studies. There is no concrete (as in
written in stone) question yet but the direction has become very clear
during the last few weeks.</p>
<p>I want to survey the anti-Islamic movement within the New Right.
Specifically, I want to give a short overview of the scenes in the UK
and Germany and subsume them under the overarching model of
<a href="http://www.uni-bielefeld.de/(en)/ikg/projekte/GMF/index.htm">Group-Focused
Enmity</a>
developed by the Institut für interdisziplinäre Konflikt- und
Gewaltforschung at Bielefeld University.</p>
<p>I’m still in the process of developing a detailed outline but one thing
is as sure as can be at this point: I want to tackle this project with
the familiar tools from software development. There is a public <a href="https://github.com/heichblatt/ba-thesis">Github
repository</a> that holds the
public branch of my Git repository. There will be monthly point
releases, then two release candidates before the final version at the
beginning of June. Task management is done in
<a href="https://en.wikipedia.org/wiki/Redmine">Redmine</a>. The resulting document
will be written in <a href="https://en.wikipedia.org/wiki/LaTeX">LaTeX</a> and
bibliography will be managed in
<a href="https://en.wikipedia.org/wiki/BibTeX">BibTeX</a>.</p>
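<p>The release mechanics boil down to annotated Git tags. A sketch, using a
scratch repository and made-up tag names so the commands can be tried
safely:</p>

```shell
# Sketch of the release workflow: point releases and release candidates
# as annotated Git tags. Repository contents and tag names are made up.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
printf '%s\n' '\documentclass{article}' > thesis.tex
git add thesis.tex
git commit -qm "Initial import"

git tag -a v0.3 -m "Monthly point release"
git tag -a v1.0-rc1 -m "First release candidate"
git tag --list
```

<p>A <code class="highlighter-rouge">git push origin --tags</code> then publishes the tags to the public repository.</p>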
<p>Wish me luck as I tend to need it. The motto is: <em>Hoc age. Hora fugit.</em></p>
<p><strong>Update:</strong> <a href="http://www.uni-leipzig.de/~culture/mitarb_singer.htm">Rita
Singer</a> will be my
second reviewer.</p>
<p><strong>Update:</strong> I handed in the registration form. It’s official now, folks,
and the baby has a (vaguely phrased) title: <em>The Anti-Islamic Movement
within the New Right</em></p>
Hope2012-11-08T00:00:00+01:00/personal/2012/11/08/hope
<p>I am always utterly relieved when I read about people like <a href="http://thoughtcrime.org/">Moxie
Marlinspike</a>.</p>
<blockquote>
<p>But I also secretly hate technology, am partially horrified with the
direction “geek” culture has gone, and have little affection for the
weird entrepreneur scene that’s currently devouring the Bay Area.</p>
</blockquote>
<blockquote>
<p>In general, I hope to contribute to a world where we value skills and
relationships over careers and money, where we know better than to
trust cops or politicians, and where we’re passionate about building
and creating things in a self-motivated and self-directed way.</p>
</blockquote>
So I've written my first exam from home2012-08-10T00:00:00+02:00/study/2012/08/10/online-exam
<p>I just <a href="https://github.com/heichblatt/klausur-globalisierung/tags">finished
writing</a> my
first exam from home. We had four hours to answer a few questions
with mini essays. However, as usual after exams, I am completely
unsatisfied with it, but then again the questions were so broad that I
don’t even try to guess what will pass and what won’t. It is remarkable
how exhausted I am, even after this new kind of least-effort exam.</p>
<p>Time to get dressed.</p>
<p><strong>Update:</strong> Passed it, not exactly with flying colours, but still.</p>
New Repo: dotfiles-vim2012-07-14T00:00:00+02:00/personal/2012/07/14/new-repo-dotfiles-vim
<p>I’ve put up a new repository:
<a href="https://github.com/heichblatt/dotfiles-vim">dotfiles-vim</a>. As you might
have guessed, it contains my VIM config. I use
<a href="https://github.com/tpope/vim-pathogen">pathogen</a> to manage my scripts
etc and keep them under <code class="highlighter-rouge">.vim/bundle/</code> as <a href="http://git-scm.com/book/en/Git-Tools-Submodules">Git
submodules</a>.
<a href="http://vimcasts.org/episodes/synchronizing-plugins-with-git-submodules-and-pathogen/">There</a>
<a href="http://www.allenwei.cn/tips-using-git-submodule-keep-your-plugin-up-to-date/">are</a>
<a href="http://mirnazim.org/writings/vim-plugins-i-use/">several</a>
<a href="http://dudarev.com/blog/keep-vim-settings-and-plugins-in-git-repo/">articles</a>
about this.</p>
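<p>The submodule dance itself is short. A sketch, using a scratch stand-in
plugin repository so it can be tried without touching anything real
(with an actual plugin you would pass its GitHub URL to
<code class="highlighter-rouge">git submodule add</code>):</p>

```shell
# Sketch: vendor a Vim plugin under .vim/bundle as a Git submodule, the
# layout pathogen picks up. A local scratch repo stands in for a real
# plugin; normally you would use its GitHub URL instead.
work=$(mktemp -d)

# Stand-in plugin repository.
git init -q "$work/vim-myplugin"
cd "$work/vim-myplugin"
git config user.email you@example.com
git config user.name "You"
mkdir plugin && touch plugin/myplugin.vim
git add . && git commit -qm "Plugin skeleton"

# The dotfiles repo, with the plugin added as a submodule.
git init -q "$work/dotfiles-vim"
cd "$work/dotfiles-vim"
git config user.email you@example.com
git config user.name "You"
git -c protocol.file.allow=always submodule --quiet add \
    "$work/vim-myplugin" .vim/bundle/vim-myplugin
git commit -qm "Add vim-myplugin under .vim/bundle"
```

<p>On a fresh clone of the dotfiles, <code class="highlighter-rouge">git submodule update --init</code> pulls all plugins back in.</p>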
Presentation: Stereotypes in Whiskey Advertisement2012-07-01T00:00:00+02:00/study/2012/07/01/presentation-whiskey
<p><img src="https://github.com/heichblatt/presentation-whiskey/raw/master/i-dont-always.jpg" alt="" /></p>
<p>I am sure you have noticed the <a href="https://github.com/heichblatt/presentation-whiskey">new
repo</a> and my first
steps towards my next presentation. It is pretty amazing: I managed to
start more than three days before the actual date. One problem so far is
the gap between the topic of the seminar (“Transatlantic Migrations”)
and my presentation. To be honest, I did not think too much before
choosing this particular subject. But here’s my reasoning why it is
relevant to the module at large:</p>
<ol>
<li><strong>Let’s go cultural studies</strong> It is hard to define the position of
the seminar between cultural studies and history/politics; it
draws from both. Personally, I found the former much more
interesting because it sheds light on the larger picture, the
connections on abstract levels. I find the fabric of our culture
more interesting than the story of how it (the culture) came to be.</li>
<li><strong>Stereotypes are valid subjects of cultural studies</strong> While being
dangerous, they also help to make sense of things and function as
<a href="http://en.wikipedia.org/wiki/Stereotype#Sense-making_tools">shortcuts</a>.
They also often center around nationalities.</li>
<li><strong>Whiskey ad producers often use national stereotypes</strong> Since the
countries most closely connected to Whiskey already have strong and
well-defined stereotypes and connotations attached to them, it makes
sense to use them to draw a very straight line from, say, whiskey to
Ireland to Irish nature to purity.</li>
</ol>
<p>As my devoted followers will undoubtedly have noticed, these thoughts
are pretty half-baked so far. But then again, I honestly see a valid
argument in the material.</p>
<p><strong>Update</strong> My professor has given me permission to proceed with this
project. One week to go.</p>
<p><strong>Update 2</strong> I held the presentation and <a href="http://knowyourmeme.com/memes/everything-went-better-than-expected">got a
1,3</a>.</p>
<p><strong>Update 3</strong> The final version is
<a href="https://github.com/heichblatt/presentation-whiskey/tags">here</a>.</p>
<p><strong>Update 4</strong> I am thinking about writing an (optional) essay about the
topic, started <a href="https://github.com/heichblatt/essay-migrations">a repo</a>.
We’ll see.</p>
Linkdump2012-07-01T00:00:00+02:00/personal/2012/07/01/linkdump
<ul>
<li><a href="http://en.wikipedia.org/wiki/Richard_Stallman">The Stallman</a> has a
list of <a href="http://stallman.org/amazon.html">Reasons not to do business with
Amazon</a></li>
<li>The Wall Street Journal has a <a href="http://online.wsj.com/article/SB10001424052702303379204577474953586383604.html">nice
article</a>
about what they inhibitedly<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> call “boss-free”.</li>
<li><a href="http://en.wikipedia.org/wiki/Martin_Haase">Martin Haase</a> writes
about the German word
<a href="http://neusprech.org/tauschboerse/">Tauschbörse</a></li>
<li><a href="http://en.wikipedia.org/wiki/Metasploit_Project">Metasploit</a> has a
<a href="https://community.rapid7.com/community/metasploit/blog">blog</a>.</li>
<li>A website that shows Facebook status updates of people unable to
handle their own privacy settings, <a href="http://www.weknowwhatyouredoing.com/">filtered by compromising
keywords</a>. I honestly do not
know what to think about it.</li>
<li><a href="http://arstechnica.com/gadgets/2012/06/minitel-frances-precursor-to-the-web-to-go-dark-on-june-30/">Ars Technica: Minitel goes
down</a>.
Yesterdays future.</li>
<li><a href="http://ecowatch.org/2012/kardashians/">Kardashians Get 40 Times More News Coverage than Ocean
Acidification</a></li>
<li><a href="http://www.mcsweeneys.net/articles/an-open-letter-to-people-who-take-pictures-of-food-with-instagram">McSweeney’s: Open letter to people who take pictures of food with
Instagram</a></li>
</ul>
<blockquote>
<p>So now that you’re a professional photographer, you need to capture
the simpler things in life. All of them. It is your duty as an artist,
after all. And there is nothing simpler than your pretentious foodie
excursions.</p>
</blockquote>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Is that even a real word? <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Migrating to Jekyll2012-06-24T00:00:00+02:00/personal/2012/06/24/migrating-to-jekyll
<p>So I tried <a href="https://github.com/mojombo/jekyll">jekyll</a> and I liked it.</p>
<p>For quite a while now I have been wondering why I don’t use some of the
things I love working with for blogging: Git and Github. Guess I never
got around to actually finding a working solution that did not take ages to
set up. Jekyll does not and since <a href="https://github.com/pages">Github
Pages</a> is based on it, awesomeness ensues.
There, I said it. The other thing is, even if Github is going down
tomorrow, I can still generate HTML and push it to some server. You
know, <a href="http://www.youtube.com/watch?v=zmJTcyqiZ44">freedom</a> and stuff.
Lastly, I get to use my beloved
<a href="http://www.textism.com/tools/textile/">Textile</a>.</p>
<p>So expect some migration of migration-worthy old posts during the next
few days.</p>
<p>Oh yeah, and I switched to <a href="http://ragefac.es/57">English</a>.</p>
Git, my love2012-06-22T00:00:00+02:00/personal/2012/06/22/git-my-love
<p>It’s been a while since I switched from Subversion to Git. Since then
I’ve been using it for everything that’s even slightly more complex than
a text file. I use the command line most of the time, together with the
following tools:</p>
<ul>
<li><a href="http://meldmerge.org/">Meld</a>: graphical diff viewer, built-in Git
support</li>
<li><a href="http://vimdoc.sourceforge.net/htmldoc/diff.html">vimdiff</a>: command
line diff viewer, indispensable classic</li>
<li><a href="http://git.gnome.org/browse/gitg/">GitG</a>: Git GUI, GTK+-based</li>
<li><a href="http://jonas.nitro.dk/tig/">tig</a>: ncurses based Git GUI</li>
<li><a href="http://joeyh.name/code/etckeeper/">etckeeper</a>: keeps <code class="highlighter-rouge">/etc</code> under
version control, comes with several hooks for various package
managers, e.g. auto-committing before and after an <code class="highlighter-rouge">apt-get</code> run.
Produces self-writing quasi-documentation.</li>
</ul>
<p><a href="https://www.github.com">Github</a> takes it a step further. Free accounts
only have public repositories and <a href="https://github.com/plans">paid plans</a>
are a little too expensive in my view. Lately I had to produce several
projects for university. Habitually, I turn to
<a href="http://de.wikipedia.org/wiki/LaTeX">LaTeX</a> for such stuff. The results
can be found in <a href="https://github.com/heichblatt">my Github account</a>. Once
again, the best tool turned out to be: my favourite editor
<a href="http://www.vim.org">Vim</a> <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>, a <a href="https://github.com/heichblatt/template-academic/blob/master/Makefile">minimal
Makefile</a>
and <a href="http://packages.debian.org/en/squeeze/texlive">TeX Live from the Debian
repos</a>.</p>
<p>Next stop: understanding what
<a href="http://git-annex.branchable.com">git-annex</a> can do for me.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Nevertheless, this time I experimented with
<a href="http://www.eclipse.org">Eclipse</a> +
<a href="http://texlipse.sourceforge.net">TeXlipse</a> +
<a href="http://www.eclipse.org">EGit</a>, but the <a href="http://en.wikipedia.org/wiki/KISS_principle">KISS
principle</a> got me
again. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
This weeks serendipity: yourlogicalfallacyis.com2012-05-09T00:00:00+02:00/personal/2012/05/09/this-weeks-serendipity-yourlogicalfallacies-com
<p>The website with the motto “Thou shalt not commit logical fallacies”
presents a nice overview of certain <a href="http://en.wikipedia.org/wiki/Category:Logical_fallacies">logical
fallacies</a> with
simple explanations, ready to be referenced in your next arguments.</p>
<p>Kind of like <a href="http://ragefac.es">ragefac.es</a>, just like, you know,
intelligent and everything.</p>
This Weeks Serendipity Light Table2012-04-23T00:00:00+02:00/2012/04/23/this-weeks-serendipity-light-tableThis weeks serendipity: Solarized2012-04-22T00:00:00+02:00/personal/2012/04/22/this-weeks-serendipity-solarized
<p>I’ve been a long-time admirer of the <a href="http://slinky.imukuppi.org/zenburn/">Zenburn colour
scheme</a>, but I have to admit:
<a href="http://ethanschoonover.com/solarized">Solarized</a> is my new favourite.
It comes in bright and dark variants with configuration files for
various programs. What you don’t find in <a href="https://github.com/altercation/solarized">Schoonover’s
repo</a>, <a href="https://github.com/search?utf8=%E2%9C%93&q=solarized&type=Everything&repo=&langOverride=&start_value=1">can be found in
others</a>:
config files for
<a href="https://github.com/cycojesus/awesome-solarized">awesome</a>,
<a href="https://github.com/ghuntley/terminator-solarized">Terminator</a> and GTK2
themes
(<a href="https://github.com/heichblatt/gtk2-theme-solarizedlight">bright</a> and
<a href="https://github.com/heichblatt/gtk2-theme-solarizeddark">dark</a>).</p>
<p><strong>Update</strong>: I just (20130615) stumbled across the AUR packages for my
repos
(<a href="https://aur.archlinux.org/packages/gtk2-theme-solarizedlight-git/">light</a>
and
<a href="https://aur.archlinux.org/packages/gtk2-theme-solarizeddark-git/">dark</a>);
thanks to <a href="https://github.com/tlvince">tlvince</a> for this.</p>