Category Archives: Tech Stuff

Super slow write speeds to Synology DS414

By   December 2, 2018

I was successfully running my DS414 with a combination of 4TB+4TB+4TB+1.8TB disks in two volumes.  I wanted to increase capacity, so I bought a 6TB Seagate external disk because the price was right.  I figured I'd pull the disk out of the chassis and slide it into my NAS.  Mistake number one.

The disk inside the chassis was a Seagate ST6000DM003.

After waiting three weeks for the volume to resize, I started using it.  My plan was to move some Proxmox containers and VMs onto the volume as part of a PVE update.  But any read or write to the volume would cause the entire volume to lock up for 30 seconds at a time, which also caused Plex streaming to block.  Something was wrong.

I used the Synology tools to try to figure it out, but there was no help there.  So I did what I do best and dove into the shell: I ssh'd in and started digging around.  Here is what I discovered.

Here’s the configuration:

/dev/sda – Seagate ST4000DX001

/dev/sdb – Seagate ST4000DX001

/dev/sdc – Seagate ST6000DM003

/dev/sdd – Western Digital WD40EZRX

/proc/diskstats showed that the weighted time spent doing I/O for the ST6000DM003 was roughly 160x higher than for the other disks:

sh-4.3# cat /proc/diskstats | grep " sd[abcd] " | awk '{print $3,$14}'
sda 5628640
sdb 5558600
sdc 946825590
sdd 5950130
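Those raw totals depend on how much work each disk has done, so it can also help to normalize by the number of completed requests before pointing fingers. Here's a rough sketch using the standard /proc/diskstats field layout (device name in field 3, completed reads and writes in fields 4 and 8, weighted I/O time in field 14):

# weighted I/O time (ms) per completed request; the +1 avoids a divide-by-zero on idle disks
awk '/ sd[abcd] / {printf "%s %d\n", $3, $14 / ($4 + $8 + 1)}' /proc/diskstats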

I have a bad disk.

More bad news: Seagate's website doesn't appear to be able to pull up the warranty status, because the disk's serial number is not found on Seagate's side.  I probably have to use the serial number of the external chassis instead, which likely means Seagate is not going to honor the disk warranty because I pulled it out of the chassis.  I've sent a support request, so we'll see what I get back on that.

 

Anyone have a use for Yak fur?

By   September 15, 2018

This morning, I find myself with a small corral of naked Yaks. I'm guessing most people know about Yak Shaving. Here's how it went this morning.

Actually, it started yesterday.  I wanted to change the thermostat setting at the Cabin so it'd be nice and toasty when we got there.  For some reason, I can't.  I dig into the HomeAssistant console and discover that the component for my Venstar ColorTouch thermostat isn't disabling the schedule, so changing the temperature fails and generates an error from the thermostat.  I should fix it.

I figure out how to fix the bug but in order to submit a patch, I need to upgrade my HomeAssistant installation.

So I do all the requisite 'git fetch; git merge upstream/….' stuff, and then upgrade everything in the virtual environment.

Unfortunately, I can’t upgrade the virtual environment because my Python is too old.

Can’t ‘apt install’ a new Python because I’m on an old Ubuntu 16.04.  Don’t want to go down that path right now.

Download a new Python, build, install.

Create a new virtual environment and reinstall all of the packages.

Installing packages fails (Twisted) because libbz2-dev wasn’t installed.

Install libbz2-dev.

Rebuild Python and reinstall.

Create a new virtual environment with the new python and reinstall all of the packages.

Installing packages fails due to a build problem with libopenzwave.

Looks like I need to upgrade my toolchain (g++ specifically).

I don't like the look of that particular Yak. Let's try upgrading to Ubuntu 18.04.

My current version of ProxMox doesn’t support Ubuntu 18.04.

I need to upgrade ProxMox first.  Oh, that’s the greasiest Yak yet. It’s a major version upgrade.

I should really just build a new ProxMox from scratch, while running the old one.

I don’t have enough hardware to build a new ProxMox server, even temporarily.

I think I’m on the last Yak. So in order to submit a patch against HomeAssistant, I need to go to the Hardware Store.

Recover your zoneminder data after inadvertent loss of ib_logfile[01]

By   September 3, 2018

In the unlikely event that you've lost your MySQL ib_logfile[01] files, you will google to figure out whether you can get the data back.  All the googling will tell you that your data is in those log files, but that's not true as of any recent version of MySQL.  I was looking around in /var/lib/mysql/zm/ and noticed the .ibd files were sizable, which suggested the data was actually in there instead.  After some more googling, I found you can import that data back into a freshly created Zoneminder DB.

I decided to uninstall/reinstall Zoneminder from scratch and then recreate the DB:

https://stackoverflow.com/questions/18761594/how-do-i-do-a-clean-re-install-of-zoneminder

After doing that, I did the following (the shell commands, shown with a leading '#' prompt, are interspersed with the MySQL statements so you can see the order of operations):

 

lock tables Devices write;
alter table Devices discard tablespace;
# cp -p saved/Devices.ibd /var/lib/mysql/zm/
alter table Devices import tablespace;

lock tables Events write;
alter table Events discard tablespace;
# cp -p saved/Events.ibd /var/lib/mysql/zm/
alter table Events import tablespace;

lock tables Filters write;
alter table Filters discard tablespace;
# cp -p saved/Filters.ibd /var/lib/mysql/zm/
alter table Filters import tablespace;

lock tables Frames write;
alter table Frames discard tablespace;
# cp -p saved/Frames.ibd /var/lib/mysql/zm/
alter table Frames import tablespace;

lock tables Groups write;
alter table Groups discard tablespace;
# cp -p saved/Groups.ibd /var/lib/mysql/zm/
alter table Groups import tablespace;

lock tables Logs write;
alter table Logs discard tablespace;
# cp -p saved/Logs.ibd /var/lib/mysql/zm/
alter table Logs import tablespace;

lock tables MonitorPresets write;
alter table MonitorPresets discard tablespace;
# cp -p saved/MonitorPresets.ibd /var/lib/mysql/zm/
alter table MonitorPresets import tablespace;

lock tables Monitors write;
alter table Monitors discard tablespace;
# cp -p saved/Monitors.ibd /var/lib/mysql/zm/
alter table Monitors import tablespace;

lock tables Servers write;
alter table Servers discard tablespace;
# cp -p saved/Servers.ibd /var/lib/mysql/zm/
alter table Servers import tablespace;

lock tables States write;
alter table States discard tablespace;
# cp -p saved/States.ibd /var/lib/mysql/zm/
alter table States import tablespace;

lock tables TriggersX10 write;
alter table TriggersX10 discard tablespace;
# cp -p saved/TriggersX10.ibd /var/lib/mysql/zm/
alter table TriggersX10 import tablespace;

lock tables Users write;
alter table Users discard tablespace;
# cp -p saved/Users.ibd /var/lib/mysql/zm/
alter table Users import tablespace;

lock tables ZonePresets write;
alter table ZonePresets discard tablespace;
# cp -p saved/ZonePresets.ibd /var/lib/mysql/zm/
alter table ZonePresets import tablespace;

lock tables Zones write;
alter table Zones discard tablespace;
# cp -p saved/Zones.ibd /var/lib/mysql/zm/
alter table Zones import tablespace;
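With fourteen tables, it's easy to miss one. A small shell loop can print the same sequence for every table so you can paste it in order; this is just a sketch that assumes the same table list as above and that your saved copies are in ./saved:

for t in Devices Events Filters Frames Groups Logs MonitorPresets Monitors \
         Servers States TriggersX10 Users ZonePresets Zones; do
  echo "lock tables $t write;"
  echo "alter table $t discard tablespace;"
  echo "# cp -p saved/$t.ibd /var/lib/mysql/zm/"
  echo "alter table $t import tablespace;"
  echo
done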

After that, ‘systemctl restart zoneminder’ and hope for the best.

Linux: add a USB network interface to a bridge

By   January 11, 2018

I have some Raspberry Pi0Ws that I'm connecting to a LAN via the USB gadget network driver on a Raspberry Pi3.  The problem I encountered was that the usb[0-3] interfaces weren't showing up when the Pi3 booted, because the Pi0Ws boot later.  Only 'eth0' appeared on 'br0'.

The fix is to make sure the usb[0-3] interfaces get added to the bridge when the Pi0Ws boot.  This also works if a Pi0W is plugged in after the Pi3 has already booted.  A simple change to /etc/network/interfaces is all it took:

auto br0
iface br0 inet dhcp
    bridge_ports eth0 usb0 usb1 usb2 usb3
    bridge_stp off
    bridge_fd 0
    bridge_waitport 0

allow-hotplug usb0
allow-hotplug usb1
allow-hotplug usb2
allow-hotplug usb3

auto usb0
iface usb0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    up ifconfig usb0 up
    up brctl addif br0 usb0

auto usb1
iface usb1 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    up ifconfig usb1 up
    up brctl addif br0 usb1

auto usb2
iface usb2 inet static
    address 10.0.0.3
    netmask 255.255.255.0
    up ifconfig usb2 up
    up brctl addif br0 usb2

auto usb3
iface usb3 inet static
    address 10.0.0.4
    netmask 255.255.255.0
    up ifconfig usb3 up
    up brctl addif br0 usb3

Note that the IP addresses on the individual usb[0-3] interfaces are irrelevant; they are present only so the interfaces can be brought 'up'.
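To confirm that a Pi0W's interface actually joined the bridge after it comes up, list the bridge members; either of these should show the usbN ports alongside eth0:

# classic bridge-utils view
brctl show br0
# equivalent check with iproute2
bridge link show | grep usb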

Venstar ColorTouch T7900

By   December 11, 2017

The thermostat arrived and I was excited.  I plugged it in on my bench at home, and the first thing I found was that the temperature sensor was off by about 5°F.  I knew I could set a calibration offset in the menus, but wasn't sure whether the error was linear.  I contacted tech support and received a response back almost right away.  After a couple of back-and-forths, they offered to send me a wired temperature sensor to try instead.  That was on November 12th.  Since then, I've updated the firmware on the thermostat, and it has now locked up completely twice; I have to cycle the power to get it to come back.  I fired off another question to tech support and received no response.

It is now December 11th and I’ve not heard anything from tech support since the first instance on November 12th.

In the meantime, I've mounted the thermostat on the wall and hooked it up to my furnace, and it seems to work well enough.  I've also modified a few scripts so I can continue to monitor it remotely, and I've even written a bit of an API for it in Python.
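For anyone curious about the remote monitoring: the thermostat exposes a local HTTP API (it has to be enabled on the unit first), and something along these lines pulls the current readings as JSON. The endpoint name comes from Venstar's published ColorTouch API documentation and the address is a placeholder, so treat the details as assumptions if your firmware differs:

# query current state (space temp, setpoints, mode); replace the IP with your thermostat's address
curl -s http://192.168.1.50/query/info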

It hasn’t hung on me since I mounted it to the wall so I’ll continue to keep an eye on that.

It’s still a great thermostat as long as you don’t want tech support.

Edit: Tech support is actually responsive.  The problem is that my emails don't reach them.  They claim to have never received any email from me, even though I've confirmed in my mail logs that their MX accepted my messages.  If I communicate with them via their web form, tech support is very responsive.  Clearly they have some aggressive anti-spam filtering on their email that is generating false positives.

Use Environment Canada Weather forecast atom feed to decide whether to irrigate

By   August 20, 2017

I have more or less switched to using Home Assistant to automate things at the cabin.  One of the things I’ve had to do is integrate the Etherrain/8 from QuickSmart into HomeAssistant by creating a new component and switch module.  This is currently (as of this writing) on a branch waiting for release integration.

The next thing was to automate the irrigation.  HomeAssistant's automation scripts take a bit of getting used to.  They're certainly not very intuitive, but it is what it is.

First the automation, to water the front beds at 7 AM on Mon/Wed/Fri if the chance of rain is <60%:

    - id: water_front_beds_on
      alias: Start Watering Front Beds mon/wed/fri at 7AM
      initial_state: on
      trigger:
        platform: time
        hours: 7
        minutes: 0
        seconds: 0
      condition:
        condition: and
        conditions:
          - condition: state
            entity_id: binary_sensor.rain_unlikely
            state: 'on'
          - condition: time
            weekday:
              - mon
              - wed
              - fri
      action:
        service: switch.turn_on
        entity_id: switch.front_beds

So the next question is likely “where does rain_unlikely come from”?  It’s here:

    binary_sensor:
      - platform: threshold
        name: rain_unlikely
        threshold: 59
        type: lower
        entity_id: sensor.environment_canada_pop
    
      - platform: mqtt
        state_topic: environment_canada/pop
        name: environment_canada_pop

Essentially, it's an MQTT message carrying a percentage: the "probability of precipitation" (POP).  But who generates this MQTT message?  In fact, nobody really; it's just a placeholder sensor that holds the POP.  Something must populate it, though.  Here's an AppDaemon script, written in Python, that does:

    import appdaemon.appapi as appapi
    import feedparser
    import sys
    import time
    import datetime

    class EnvCanada(appapi.AppDaemon):

      def initialize(self):
        if "locator" in self.args:
          loc = self.args["locator"]
        else:
          loc = "ab-52"  # Default to Calgary
        if "hr" in self.args:
          hr = int(self.args["hr"])
        else:
          hr = 4
        if "ahead" in self.args:
          add=int(self.args["ahead"])
        else:
          add = 0

        myargs={}
        myargs["loc"] = loc
        myargs["add"] = add
        myargs["module"] = self.args["module"]
        # First run immediately
        h = self.run_in(self.get_pop, 1, **myargs)
        # Then re-run it every day at the specified hour.
        runtime = datetime.time(hr, 0, 0)
        h = self.run_daily(self.get_pop, runtime, **myargs)

      def get_pop(self, args):
        loc = args["loc"]
        add = args["add"]
        d=feedparser.parse('http://weather.gc.ca/rss/city/{0}_e.xml'.format(loc))
        weekdays=["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"]
        # modulo keeps the index in range when "ahead" pushes past the end of the week
        today = weekdays[(time.localtime().tm_wday + add) % 7]
        pop = 0
        for entry in iter(d.entries):
          if today in entry.title:
            if "howers" in entry.title:
              pop = 100
            if "POP" in entry.title:
              next=0
              for word in iter(entry.title.split(" ")):
                if next:
                  pop = int(word.rstrip("%"))
                  next=0  # only take the single word following "POP"
                if word == "POP":
                  next=1
        print("{0}: Got POP {1}".format(args["module"], pop))
        self.set_state("sensor.environment_canada_pop", state = pop)

      def terminate(self):
        self.log("Terminating!", "INFO")

This essentially grabs the Atom feed from Environment Canada, looks for the word "Showers"/"showers" or a "POP" figure in today's entry, and extracts that percentage.  It then reaches under the skirt of Home Assistant and populates the above-mentioned 'MQTT sensor'.  I really wish I could think of a better way to do that.

Just for the sake of completeness, here is the AppDaemon configuration to get it all started:

  EnvCanada:
    module: environment_canada
    class: EnvCanada
    hr: 5
    ahead: 0
    locator: ab-53

The 'ahead' attribute is how many days ahead to determine the POP; i.e., at 0 it returns today's probability of precipitation, and at 1 it returns tomorrow's.  The 'hr' attribute is when AppDaemon should run the script, in this case at 05:00.  The 'locator' is the portion of the URL that specifies which Environment Canada weather location to use; 'ab-53' is Sundre, Alberta.
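If you want to sanity-check a locator code, or see the forecast titles the script is parsing (it expects something like "POP 60%" in the entry title), you can pull the feed by hand:

# list the entry titles in the forecast feed for Sundre (ab-53)
curl -s 'http://weather.gc.ca/rss/city/ab-53_e.xml' | grep -o '<title>[^<]*</title>'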

 

Hope that helps someone.

 

NFS client mount within a Proxmox LXC container.

By   December 23, 2016

Another “memo to self” …

 

[ Edit: minor change for Proxmox 5.x at bottom]

Having trouble doing an NFS mount from within a Proxmox LXC container?  A Google search took me here, and it pretty much answers the question, but it doesn't work with Proxmox 4.4-1.  The error I was seeing after following that advice was:

apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=11339 comm="apparmor_parser"

So you also need to edit /etc/apparmor.d/lxc/lxc-container-default-cgns and make it look like this:

# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-cgns flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts. If it does, it
  # will remount the host's devpts. We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=nfs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

and then reload the AppArmor profiles:

service apparmor reload
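After the reload, an NFS mount from inside the container should work. A quick test; the server address and export path here are placeholders, so substitute your own:

# run inside the container
mount -t nfs 192.168.1.10:/export/backups /mnt
df -h /mnt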

Edit: On Proxmox 5.2-1 the file is /etc/apparmor.d/lxc/lxc-default-cgns.  The rest of the above is still correct.

socat on OS X – TCDRAIN returns Invalid Argument.

By   June 26, 2016

When using socat, as installed by ‘brew install socat’ on OS X, you will likely get this error when trying to proxy a serial device to another host via TCP:

TCSADRAIN, 0x7fffffffe148):Invalid argument

This is because OS X uses the FreeBSD termios interface and the bug is explained here:

https://lists.freebsd.org/pipermail/freebsd-ports-bugs/2015-March/304366.html

This is the patch you want to apply to ‘socat’:

https://bz-attachments.freebsd.org/attachment.cgi?id=154044

 

Unfortunately, 'brew install socat' just gives you someone else's precompiled binary, so you need to retrieve the source in order to apply the above patch.

 

Do it like so:

 

cd `brew --cache`
brew unpack socat
cd socat-1.7.3.1
curl https://bz-attachments.freebsd.org/attachment.cgi?id=154044 > patch
patch < patch
./configure
make
make install
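With the patched build installed, the original goal works. Here's the kind of invocation this was all for; the device path, port, and hostname are placeholders:

# share a local serial device over TCP; a remote host can then attach with
# something like "socat TCP:mymac.local:5555 -"
socat TCP-LISTEN:5555,reuseaddr,fork /dev/cu.usbserial-A1B2C3,raw,echo=0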

 

pfSense openvpn client to generic openvpn server in bridge mode

By   May 27, 2016

This should really go into the ‘memo to self’ category but I don’t have one.  Regardless…

I have an Ubuntu VM running OpenVPN in bridge mode (tap).  I wanted to bridge my cottage network to my home network using pfSense out at the cottage.  A fair amount of googling was involved in making this work, so I decided to aggregate all of the information in one place in case I ever need to reproduce it.  My friend Kurt was running up against some of the same issues.

 

First, make sure your OpenVPN server is working and that you have the following client specific files available (filenames will likely vary):

  • site.ovpn
  • ca.crt
  • ta.key
  • client.crt
  • client.key

On the server, I had to make some minor changes to make everything work:

If you can ping from client to server but the connection hangs when you try to edit a file or view a web page, add these to the server config:

mssfix 142
fragment 1200

If the client logs "OpenVPN Bad LZO decompression header byte", I had to comment out "comp-lzo" on the server.  This seems bogus, but it made it work; I need to investigate it later.

If the client says "Authenticate/Decrypt packet error: cipher final failed", the issue is a cipher mismatch.  The default on my server was "BF-CBC" but the pfSense default was "AES-128-CBC".  Change the pfSense side to match the server ("BF-CBC" in my case) and you're good to go.

The general procedure for making this work in pfSense is the following:

    • Go to System->Cert. Manager and add your server’s “ca.crt” to Certificate Authorities. Give it a descriptive name.
    • Then go to System->Cert. Manager->certificates and add your client.crt and client.key.  Give it a descriptive name as well.   Ensure you do this after you’ve added ca.crt so that when you add this certificate, it will reference the above ca.crt.
    • Go to VPN->OpenVPN->Client and click ‘Add’
        • Select Peer-to-Peer under ‘Server mode’
        • Select ‘tap’ under ‘Device Mode’
        • Select ‘WAN’ under ‘Interface’
        • Set your server host/address to your VPN server address.
        • Set the port accordingly.
        • Set description to something you’ll recognize.
        • Under TLS Authentication, set 'Enable authentication of TLS packets'.  It will drop down a text box into which you can paste the contents of 'ta.key'.
        • Set ‘Peer Certificate Authority’ to the one you added above.
        • Set ‘Client Certificate’ to the one you added above.
        • Set the encryption algorithm to whatever your VPN server is using (BF-CBC in my case).
        • Under ‘Custom Options’, I had:

      mssfix 142
      fragment 1200

The final note I'd like to add is about IP addresses. When you set the 'server-bridge' parameter in the server's VPN config, you assign a pool of IP addresses that are not in your DHCP server's range. By default, the IP address assigned is specific to the client certificate, so if all your clients are getting the same IP address, it is because they each need a unique client certificate. You can override this behavior with the 'duplicate-cn' directive in the server's config file, but it's generally not a good idea; just create unique client certificates.
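For reference, the server side of that looks roughly like the excerpt below. The addresses are illustrative only, assuming a 192.168.1.0/24 LAN whose DHCP pool stops below .200:

# server config excerpt: bridge the tap onto the LAN and hand VPN clients
# 192.168.1.200-220, a range outside the DHCP pool
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220
# 'duplicate-cn' would let clients share one certificate, but unique client
# certificates (one address each) are the better option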

iterm2 arrow keys not working in cursor application mode

By   February 3, 2016

(TL;DR at the bottom)

This is one of those things that irritated me for ages.  I generally don't use the arrow/home/end keys for anything, except when I (rarely) run certain applications like 'make menuconfig' where I'm forced to navigate with the arrow keys.

For the longest time, the arrow keys didn’t work on iterm2 in certain applications.  After digging in, I discovered the problem.

Ages ago, I started using OS X, but Terminal.app sucked, so I installed iterm.  Then iterm2 came out and I upgraded.  Sometime thereafter I discovered the arrow keys didn't work.  This morning, I decided enough was enough and got to the bottom of it.  One of the answers on this question posted a handy little script to test whether the keys work in cursor application mode:

 

sh -c "$(cat <<\EOF
noecho_appmode() {
  stty -echo
  printf '\033[?1h'
}
modes="$(stty -g)"
restore_echo_and_appmode() {
  stty "$modes"
  printf '\033[?1l'
}
printf '\nType <Up> <Down> <Right> <Left> <Control-D> <Control-D>\n'
printf '(no output until after the first <Control-D>, please type "blindly")\n\t'
noecho_appmode             ; trap 'restore_echo_and_appmode' 0
cat -v
restore_echo_and_appmode   ; trap ''                         0
printf '\nExpected:\n\t'
printf 'kcu%c1\n' u d f b | /usr/bin/tput -S | cat -v
printf '\n\n'
EOF
)"

This told me that iterm2 wasn’t working correctly. But it obviously works for many other people.

TL;DR:

 

When I upgraded from iterm to iterm2, my settings survived, and Preferences->Profiles->Keys (NOT Preferences->Keys) contained overrides for the arrow keys and home/end.  Once I loaded the "xterm default" preset, exited iterm2, and restarted it, the arrow keys worked fine.