Unmount SSHFS

If you can't unmount SSHFS:

umount: /home/user/mountpoint: Permission denied

try fusermount instead. SSHFS is a FUSE filesystem, so a regular user unmounts it via fusermount rather than umount:

fusermount -u /home/user/mountpoint

Planned obsolescence through random misbehaviour

Random misbehaviour could be a viable way to expire a closed-source/proprietary software or hardware product without inflicting too much damage on the brand's name.

If products expire abruptly, it is obvious that resentment will rapidly accumulate in product forums, comment sections, and by word of mouth. But what if the expiration happens gradually? And what if the onset and the severity of the presented misbehaviour are also determined by a function of increasing probability?

I would argue that the consumer, unable to pinpoint the cause of the misbehaviour due to its randomness, will first come up with theories in which he views himself as able to temporarily fix or bypass the malfunctions. The genesis of such beneficial theories could also be nurtured by intelligent algorithms. Because the consumer finds himself in a situation which is, compared to that of sudden expiration, harder to describe, he will, I hold, not so readily spread information that would damage the brand. Done right, the consumer could ideally be given the impression that the malfunctions stem from other, somehow connected or related devices.

I would further argue that the consumer's willingness to talk about the malfunction of a certain product decreases over time. This is why it should be beneficial to nurture the genesis of theories in which the other devices are the cause of the malfunction. That way the initial frustration and most of the negative communication during the "hot phase" of bad-reputation generation are focused on multiple brands and/or technologies. When the frustration regarding the own brand's device eventually reaches the replacement threshold, the willingness to further generate negative communication or discuss malfunctions has already decreased, and the own brand takes less damage.

It is also worth noting that the tech press would find itself in the same situation. Depending on the legal framework of the respective country, it would face the additional obstacle of possible lawsuits when publishing articles based on guesswork.

To give a concrete example of this concept:

Let's say a company produces a Bluetooth headset. The headset needs to be "paired" with other devices once before it can relay audio for them. When a device is switched off and on, the headset needs to connect anew but does not have to be paired again. The proposed planned obsolescence would work in the following way:
On the first day the device gets paired with another device, an internal counter starts. After a one- or two-year period has passed (depending on the guaranteed warranty), the counter activates the random obsolescence function (ROF). Good malfunctions provided by the ROF could be requiring the consumer to start the connection procedure multiple times when switching on the device, or even requiring a new pairing procedure to make it work again. Malfunctions like disconnects could be introduced when several Bluetooth devices are connected at the same time. Unexpected behaviour like connecting successfully but still not relaying audio could also be introduced. The ROF could then, from week to week, increase the probability of such malfunctions in order to continually frustrate the consumer and incentivise the product replacement process. Since the ROF acts only on a few connected devices, the consumer will suspect other devices as possible sources of the malfunctioning and will thus also talk about them when discussing possible solutions or giving product recommendations. When the consumer is eventually frustrated enough and replaces the headset, he will no longer so willingly talk and theorize about the cause of the misbehaviours. The consumer would ideally be left with the impression that the malfunctions stemmed from the technology standards used (e.g. "Bluetooth 3.0") or had to do with the combination of multiple brands.
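
A minimal sketch of such a probability schedule, in JavaScript for readability (all names and constants here are hypothetical; real firmware would of course look different):

// Hypothetical sketch of a random obsolescence function (ROF).
var WARRANTY_MS = 2 * 365 * 24 * 60 * 60 * 1000 // assumed two-year warranty
var WEEK_MS = 7 * 24 * 60 * 60 * 1000

// firstPairedAt: timestamp stored when the headset was first paired.
function malfunctionProbability (firstPairedAt, now) {
  var age = now - firstPairedAt
  if (age < WARRANTY_MS) return 0 // behave perfectly during the warranty period
  var weeksPast = (age - WARRANTY_MS) / WEEK_MS
  return Math.min(0.5, weeksPast * 0.01) // ramp up 1% per week, cap at 50%
}

// Consulted on every connection attempt to decide whether to misbehave.
function shouldMisbehave (firstPairedAt) {
  return Math.random() < malfunctionProbability(firstPairedAt, Date.now())
}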

Redis data persistency on Uberspace

Uberspace provides a nice German introduction on how to use Redis on their servers, and kindly points out that Redis resides in memory and will (with default settings) lose its state (read: all your data, yikes) on a crash or restart. So let us fix that:

Setting up Redis

Just a quick recap of their tutorial. Redis is basically set up in two steps:

test -d ~/service || uberspace-setup-svscan  
uberspace-setup-redis  

Enabling persistency

There are several persistency models, which you can read up on in Redis persistence demystified. I would recommend sticking with the most common one, shown below, which strikes a good balance between performance and data safety[1].

nano ~/.redis/conf  

Then add the following lines. appendonly yes makes Redis log every write to an append-only file, and appendfsync everysec syncs that file to disk once per second, so at most one second of writes can get lost:

dir /home/YOUR_UBERSPACE_USER_NAME/.redis/  
appendonly yes  
appendfsync everysec  

Then restart Redis so the new settings take effect:

svc -du ~/service/redis  

Verifying persistency

Now run whatever tool you use to put data into Redis and have it write something. You can then ① check that the persistency file grew to a size greater than 0 and ② restart Redis and afterwards check whether your tool still sees the previously entered data.
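
If you do not have such a tool at hand, a few lines of Node can play that role. This is a minimal sketch assuming the node_redis package (npm install redis) and Uberspace's default Unix socket path; check the unixsocket line in ~/.redis/conf and adjust both to your setup:

// Minimal sketch: write and read a key through the Redis socket.
var redis = require('redis')
var client = redis.createClient('/home/YOUR_UBERSPACE_USER_NAME/.redis/sock')

client.set('persistency-test', 'still here after a restart?', function (err) {
  if (err) throw err
  client.get('persistency-test', function (err, value) {
    if (err) throw err
    console.log('Redis answered: ' + value)
    client.quit()
  })
})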

# ① Check that the persistency file holds content:
du -sch ~/.redis/appendonly.aof  
# ② Restart Redis. This would kill all the contained data
# if the persistency setting did not work. Then check whether
# the data is still available in your Redis-using tool.
svc -du ~/service/redis  

Sources
1. Redis persistence demystified (oldblog.antirez.com)

Fixing websites with Userscripts: Heise Preisvergleich

My boss uses Heise's Preisvergleich a lot. It's a German price comparison page (to me somewhat awkward UI/UX-wise). Typical German verbosity, you might think. After all, even in law "Germans rarely find a rule they don't want to embrace", as the Telegraph writes. That should give you a pretty good idea of what our price comparison sites look like...

But we're not all that way :-) So we tried to fix it ourselves. It turns out there are some nice ways of doing that since the web is still free and open (will that die away with WebAssembly?).

We were mainly interested in the specification part of the site, where they list things like CPU and RAM. Since my boss is quite a Firefox fan, I wrote a Greasemonkey script. It starts running when the page has fully loaded and then replaces and transforms specification text passages using regexes. On my first try I made the mistake of actually parsing the HTML. After some time Cpt. Obvious hit me and made me simply search and replace with regular expressions. Here is a screenshot of the result (red marks changes introduced by my script; a dash means something got erased, just imagine a lot of bloat text where the dashes are):

Fix Heise Preisvergleich Greasemonkey result

You can find all the code and some help setting up your development environment in the GitHub repository.
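
The core of the approach fits in a few lines. This is a hedged sketch: the metadata block, the selector, and the regexes below are made up for illustration and are not the real ones from the repository:

// ==UserScript==
// @name     Fix Heise Preisvergleich (sketch)
// @include  https://www.heise.de/preisvergleich/*
// @grant    none
// ==/UserScript==
// Run after the page has fully loaded, then search and replace
// verbose specification passages via regexes (no HTML parsing).
window.addEventListener('load', function () {
  var specs = document.querySelectorAll('.productlist__item') // hypothetical selector
  for (var i = 0; i < specs.length; i++) {
    specs[i].innerHTML = specs[i].innerHTML
      .replace(/Arbeitsspeicher:\s*/g, 'RAM: ')    // shorten verbose labels
      .replace(/\s*\([^)]*Details[^)]*\)/g, ' -')  // erase bloat, leave a dash
  }
})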

Borgweb frontend development

Having written Peertransfer last semester, I now had the chance to try my hand at another JS-heavy-ish project: Borgweb, a log file viewer for Borgbackup.

Borgweb screenshot

Borgweb's JS talks to a RESTful API to:

  1. Get a list of log files (on the left).
  2. Display the contents of log files.
  3. Paginate between chunks of the log file (only request the part that is being displayed).
  4. Display miscellaneous information about the log file.
    • Highlight erroneous lines.
    • Indicate overall state of the backup that produced the log.
  5. React to the height of the browser window and display only a suitable number of lines (see the sketch after this list).
  6. React to height changes and re-render.
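
To illustrate points 3 and 5, here is a minimal sketch of height-aware pagination. The endpoint, parameter names, element id, and line height are hypothetical, not Borgweb's actual API:

// Hypothetical sketch: fetch only as many log lines as fit the window.
var LINE_HEIGHT_PX = 16 // assumed rendered height of one log line

function visibleLines () {
  var viewer = document.getElementById('logview') // hypothetical element id
  return Math.max(1, Math.floor(viewer.clientHeight / LINE_HEIGHT_PX))
}

function fetchPage (logFile, page, cb) {
  var lines = visibleLines()
  var req = new XMLHttpRequest()
  req.open('GET', '/logs/' + encodeURIComponent(logFile) +
                  '?offset=' + page * lines + '&lines=' + lines)
  req.onload = function () { cb(JSON.parse(req.responseText)) }
  req.send()
}

function renderLines (chunk) {
  document.getElementById('logview').textContent = chunk.lines.join('\n')
}

// React to height changes: re-fetch a fitting number of lines and re-render.
window.addEventListener('resize', function () {
  fetchPage('backup-2016-01-01.log', 0, renderLines) // hypothetical log file
})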

The first version I wrote was very complicated code-wise, and it was quite hard and awkward to debug or even introduce new features. Having all code in one big file did not help. There was some aversion in our company to using JS code-structuring tools like Gulp and Browserify, maybe because we underestimated the complexity these few features already introduce (for a beginner like me).

So, a few discussions of algorithms later, I decided to rewrite, use a more modular way of structuring my code, and also try hard to keep functions short.

Starting to use ES6 constructs, I needed to transpile and decided on Babelify. It's one of the two contenders in that space, and the one that is community-based. Building on previous attempts I could reuse and improve old build scripts, and pretty much settled on this version:

var gulp = require('gulp')
var browserSync = require('browser-sync')
var lib = require('./gulpfile.lib.js')  
...
var files = {  
  js: './index.js',
  jsBndl: '../borgweb/static/bundle.js'
}
...
gulp.task('watch', function () {  
  newBuildLog()
  browserSync.init({ proxy: 'localhost:5000', open: false })
  lib.generateBundle(files.js, files.jsBndl, true)

  gulp.watch([files.js, 'src/**/*.js'], function (ev) {
    if (ev.type === 'changed') {
      newBuildLog()
      log('Changed: ' + ev.path)
      var ret = lib.generateBundle(files.js, files.jsBndl)
      // todo: wait for the bundle to finish instead of a fixed delay
      setTimeout(function () { browserSync.reload() }, 150)
      return ret
    }
  })
})

What's neat for me about this is that it watches all the JS files I use and continuously rebuilds the bundle.js that is actually served to the browser. BrowserSync handles reloading the website when files change. Since it spawns a local webserver, I could use another machine to remotely open the website via SSH (almost like shown here) and have BrowserSync handle the reloading automatically - a second monitor without having to plug in cables, yay.

When gluing code together I liked to keep it explicit, so I had a separate export section at the end of every module, like so:

module.exports = {  
  noBackupRunning: noBackupRunning,
  pollBackupStatus: pollBackupStatus,
  ...
}
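
On the consuming side this keeps the call sites readable (the module path and file name here are hypothetical):

var logViewer = require('./src/logViewer') // hypothetical module path
logViewer.switchToLog('backup-2016-01-01.log')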

And there is also a section in my index.js where I make the functions that need to be callable from the frontend (e.g. from inline onclick handlers) globally available:

window.startBackup = borg.startBackup  
window.switchToLog = logViewer.switchToLog  
window.nextPage = logViewer.nextPage  
...

Of course there are still open issues on GitHub. But we have a first working version and already have a pip package.

So lessons learned here:

  1. Write short functions.
  2. Structure using modules.
  3. Automate where possible.