Uberspace provides a nice German introduction on how to use Redis on their servers and also kindly points out that Redis resides in memory and will (with default settings) lose its state (read: all your data *yikes*) on a crash or restart. So let us fix that:

# Setting up Redis

Just a quick recap of their tutorial. Redis is basically set up in two steps:

test -d ~/service || uberspace-setup-svscan
uberspace-setup-redis


# Enabling persistence

There are several persistence models, which you can read up on here: Redis persistence demystified. But I would recommend sticking with the most common approach shown here, which strikes a good balance between performance and data safety[1].

nano ~/.redis/conf


And then add the following lines:

dir /home/YOUR_UBERSPACE_USER_NAME/.redis/
appendonly yes
appendfsync everysec

# Restart redis
svc -du ~/service/redis


# Verifying persistence

Now run the tool that puts data into Redis and have it write something. You can then check ① that the persistence file grew to a size greater than 0 and ② that, after a restart of Redis, your tool still sees the previously entered data.

# ① Check that the persistence file holds content:
du -sch ~/.redis/appendonly.aof

# ② Restart Redis. This would kill all the contained data
# if the persistence setting were not working. Then check whether
# the data is still available in your Redis-using tool.
svc -du ~/service/redis


Sources
1. Redis persistence demystified (oldblog.antirez.com)

My boss uses Heise's Preisvergleich a lot. It's a (to me somewhat awkward, UI/UX-wise) German price comparison page. Typical German verbosity, you might think. After all, even in law "Germans rarely find a rule they don't want to embrace", as the Telegraph writes. That should give you a pretty good idea of what our price comparison sites look like...

But we're not all that way :-) So we tried to fix it ourselves. It turns out there are some nice ways of doing that, since the web is still free and open (will that die away with WebAssembly?).

We were mainly interested in the specification part of the site, where they list things like CPU and RAM. Since my boss is quite a Firefox fan, I wrote a Greasemonkey script. It starts running once the page has fully loaded and then replaces and transforms specification text passages using regexes. On my first try I made the mistake of actually parsing the HTML. After some time Cpt. Obvious hit me and made me use regular expressions to simply search and replace. Here is a screenshot of the result (red marks changes introduced by my script; a dash means something got erased, just imagine a lot of bloated text where the dashes are):

You can find all the code and some help setting up your development environment in the GitHub repository.
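
The core idea can be sketched like this; note that the replacement rules and the CSS selector below are made-up illustrations, not the actual rules from the script:

```javascript
// Pure string transform via regex search & replace -- the rules here are
// hypothetical examples, not the real ones used on the site:
function shortenSpecs (text) {
  return text
    .replace(/Arbeitsspeicher/g, 'RAM')   // shorten verbose German labels
    .replace(/\s*\(Details\)\s*/g, ' ')   // erase boilerplate passages
    .replace(/ {2,}/g, ' ')               // collapse leftover whitespace
    .trim()
}

// In the Greasemonkey script this would run after the page has fully
// loaded, e.g. over every specification cell (selector is an assumption):
//   document.querySelectorAll('.spec').forEach(function (el) {
//     el.textContent = shortenSpecs(el.textContent)
//   })

console.log(shortenSpecs('Arbeitsspeicher  (Details)  16 GB')) // → "RAM 16 GB"
```

Working on plain text with regexes like this is what made the script simple; the first HTML-parsing attempt had to care about markup structure that the replacements never actually needed.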

Having written Peertransfer last semester, I now had the chance to try myself at another JS-heavy-ish project: Borgweb -- a log file viewer for Borgbackup.

Borgweb's JS talks to a RESTful API to:

1. Get a list of log files (on the left).
2. Display the contents of log files.
3. Paginate between chunks of the log file (only request the part that is being displayed).
4. Display miscellaneous information about the log file.
• Highlight erroneous lines.
• Indicate overall state of the backup that produced the log.
5. React to the height of the browser window and display only a suitable amount of lines.
6. React to height changes and re-render.
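
Points 3, 5 and 6 boil down to a small piece of arithmetic: derive how many lines fit into the current window, then translate a page number into the line range to request. A sketch under assumed pixel values (the constants and function names are illustrative, not Borgweb's real ones):

```javascript
// Assumed pixel metrics -- illustrative values, not Borgweb's real ones:
var LINE_HEIGHT = 18     // height of one rendered log line in px
var CHROME_HEIGHT = 120  // header, pagination controls etc. in px

// How many log lines fit into a window of the given height?
function linesPerPage (windowHeight) {
  return Math.max(1, Math.floor((windowHeight - CHROME_HEIGHT) / LINE_HEIGHT))
}

// Translate a zero-based page number into the line range the REST API
// should be asked for, so only the visible chunk is requested:
function pageBounds (page, windowHeight) {
  var n = linesPerPage(windowHeight)
  return { first: page * n, last: (page + 1) * n - 1 }
}

console.log(pageBounds(1, 660)) // → { first: 30, last: 59 }
```

Reacting to height changes (point 6) then just means recomputing this on the window's resize event and re-rendering the current page.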

The first version I wrote was very complicated code-wise, and it was quite hard and awkward to debug or even introduce new features. Having all the code in one big file did not help. There was some aversion in our company to using JS code structuring tools like Gulp and Browserify, maybe because we underestimated the complexity these few features already introduce (for a beginner like me).

So a few discussions of algorithms later, I decided to rewrite it using a more modular way of structuring my code and also tried hard to keep functions short.

Starting to use ES6 constructs, I needed to transpile and decided on Babelify. It's one of two contenders in that space, but the one that is community-based. Building on previous attempts I could reuse/improve upon old build scripts and pretty much stayed with this version:

var lib = require('./gulpfile.lib.js')
...
var files = {
  js: './index.js',
  jsBndl: '../borgweb/static/bundle.js'
}
...
  newBuildLog()
  browserSync.init({ proxy: 'localhost:5000', open: false })
  lib.generateBundle(files.js, files.jsBndl, true)

  gulp.watch([files.js, 'src/**/*.js'], function (ev) {
    if (ev.type == 'changed') {
      newBuildLog()
      log("Changed: " + ev.path)
      var ret = lib.generateBundle(files.js, files.jsBndl)
      setTimeout(function () { browserSync.reload() }, 150) // todo
      return ret
    }
  })
})


What's neat for me about this is that it watches all the JS files I use and continuously rebuilds the bundle.js that is actually used by the browser. BrowserSync handles reloading the website when files change. Since it spawns a local webserver, I could use another machine to remotely open the website via SSH (almost like shown here) and have BrowserSync handle the reloading automatically - a second monitor without having to plug in cables, yay.

When gluing code together I liked to keep it verbose, so I had a separate export section at the end of every module, like so:

module.exports = {
  noBackupRunning: noBackupRunning,
  pollBackupStatus: pollBackupStatus,
  ...
}


And also a section in my index.js where I made the functions that needed to be callable from the frontend globally available:

window.startBackup = borg.startBackup
window.switchToLog = logViewer.switchToLog
window.nextPage = logViewer.nextPage
...
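
The reason this is necessary: Browserify wraps the whole bundle in a function scope, so nothing is visible to inline HTML handlers unless it is explicitly attached to the global object. A self-contained sketch of the pattern (the function name is illustrative):

```javascript
// Browserify-style wrapper: module code runs inside a closure...
;(function () {
  function startBackup () { return 'backup started' }

  // ...so an inline handler like <button onclick="startBackup()"> can
  // only reach the function if we hang it onto the global object:
  var g = typeof window !== 'undefined' ? window : globalThis
  g.startBackup = startBackup
})()

console.log(startBackup()) // → backup started
```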


Of course there are still open issues on GitHub. But we have a first working version and already have a pip package.

So lessons learned here:

1. Write short functions.
2. Structure using modules.
3. Automate if possible.

Maybe you have bought a "'puter", maybe even a mobile one, and most certainly they have force-sold you an operating system that fits the dull-wittedness of the world. But then somehow reason creeps back in and starts annoying you... and then you just can't help but switch to Linux.

So here you go:

1. Spare yourself the hassle of talking to those "'puter" guys at the local computer-discounter-monster-store in case of warranty issues and back up the disk image of your freshly bought, so-called self-enslavement operating system:
• Plug in a bootable USB stick and boot your favorite Linux.
• Plug in/mount/use the current USB stick to dd your drive:
sudo dd if=[DEVICE_FILE] of=[BACKUP_FILE] bs=4M
2. Get your Windows serial out[1] - remember what they say: "one man's trash..."
3. Hopefully never: restore the disk image if you have a warranty case. Just repeat step one and swap the if= and of= values.

And remember: always have a second device that runs SomaFM's Defcon Radio. 👍

[1]: sudo cat /sys/firmware/acpi/tables/MSDM

Operating systems can generally digest and look for a PAC file (= "proxy auto-config"). They look for it using WPAD (= "web proxy auto-discovery protocol").

Windows 7 and 8 do that. Ubuntu can do it (it can be activated within the Network settings). For Fedora and Korora it's the same.

The PAC file can look like this one, which provides "PAC with network and domain whitelisting".
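
For illustration, a minimal PAC file of that flavor could look like the sketch below. The local domain and the proxy host/port are placeholders; real PAC files can additionally use browser-provided helpers such as dnsDomainIs or isInNet, which are avoided here to keep the sketch self-contained:

```javascript
// Minimal PAC sketch -- proxy address and local domain are placeholders:
function FindProxyForURL (url, host) {
  // Plain intranet names (no dot) and the whitelisted local domain
  // bypass the proxy entirely:
  if (host.indexOf('.') === -1 || /\.example\.local$/.test(host)) {
    return 'DIRECT'
  }
  // Everything else goes through the proxy, falling back to a direct
  // connection if the proxy is unreachable:
  return 'PROXY proxy.example.local:3128; DIRECT'
}
```

The browser calls FindProxyForURL once per request and follows the returned instruction list from left to right.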

For WPAD to work automatically, the file should be served using the MIME type application/x-ns-proxy-autoconfig. Its location should be announced via DHCP and DNS.

The DNS should announce either an A record (= "host record") or a CNAME for the domain name wpad, which should resolve to the IP of the machine that serves the PAC file. All in all it should be possible to access it over port 80 using http://wpad.[local_domain]/proxy.pac.

Via DHCP the file can be made available under any address and port. On a Linux machine the responsible configuration file /etc/dhcp/dhcpd.conf could look like this:

option local-proxy-config code 252 = text;
...

References