Greetings, Programs!
I've been working on a command-line tool named convey for quite some time now, and it needs to return an exit code. In most programming languages this is pretty straightforward: you just return the exit code from your main function. But in Go it's not quite that simple.
In Go, os.Exit exits immediately and doesn't run any deferred function calls, which makes it impossible to clean up with a deferred function. As many gophers will tell you, deferred cleanup functions are amazing, so this is quite the bummer.
I've experimented with this for a while but never got it fully sorted out until tonight. My previous attempts swallowed panics and just weren't very good. But today I built a proof of concept, tested it, and documented the crap out of it. And, well, it works great! No longer will I find myself scratching my head over why I have an exit code of 0 when the program aborted too early.
This works by renaming your main function to gomain and making it return an int. Then we write a main function that does all of the magic, including handling panics.
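The shape of the trick looks roughly like this (a minimal sketch of my own; the real gomain package is more thorough, and the run helper here is just an illustrative name):

```go
package main

import (
	"fmt"
	"os"
)

// gomain plays the part of the old main: it is free to use defer for
// cleanup and reports its exit status through its return value.
func gomain() int {
	defer fmt.Println("deferred cleanup ran")
	fmt.Println("doing work")
	return 0
}

// run wraps gomain so that a panic becomes a non-zero exit code
// instead of being silently swallowed.
func run() (code int) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Fprintln(os.Stderr, "panic:", r)
			code = 1
		}
	}()
	return gomain()
}

func main() {
	// os.Exit runs last, after every deferred call has already finished.
	os.Exit(run())
}
```

Because os.Exit is only ever called after run (and all of its defers) has returned, the deferred cleanup always happens and the exit code still makes it out.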
You can find the fully documented source code (released to the public domain) at bitbucket.org/rw_grim/gomain.
Happy Hacking!
Saturday, December 30, 2017
Friday, February 3, 2017
docker-machine and qnap container station
So earlier this week woot.com had a QNAP TS-453mini for an awesome price. I've been in the market for a new NAS to replace my self-built one from more than half a decade ago. One of the selling points that convinced me to purchase it was the Container Station, which lets you run Docker containers directly on the NAS. If you've talked to me in the past 3 years, I've probably told you how I've drunk all of the container kool-aid. So of course this alone was reason enough for me to look into it.
The NAS came today, so I put some old drives in it for now while I wait on my secondary drive order (to try to make sure I hit different production runs). I started tinkering with the Container Station and then remembered docker-machine.
docker-machine is used to control multiple Docker engines. Typically it's used to create a virtual machine, or to provision one on a cloud provider. However, there is also a poorly documented driver named "none". The none driver is just the plain Docker API over a TCP socket, and the Container Station exposes exactly that, along with the certificates needed for authentication.
To get the certificates, go to the Preferences page in the Container Station and select the Docker Certificate tab. This page has some instructions on how to install the certs, but that will only work if the NAS is the only Docker host you plan on connecting to. So instead, just hit the download button.
Now that we have the certs, we can create the machine in docker-machine, replacing %nas ip% with the IP address or hostname of the NAS and %name% with whatever you want to call the machine. In the following examples I've named the machine nas.
docker-machine create --driver=none --url=tcp://%nas ip%:2376 %name%
If we try to use this as is right now we'll get the following error.
$ docker-machine ls --filter name=nas
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
nas - none Running tcp://nas:2376 Unknown Unable to query docker version: Get https://nas:2376/v1.15/version: x509: certificate signed by unknown authority
To fix this, we need to install the certs we downloaded earlier and point the machine's config at them. First, cd into the directory that holds the machine's config and extract cert.zip there.
$ cd ~/.docker/machine/machines/nas
$ unzip ~/Downloads/cert.zip
Archive: /home/grim/Downloads/cert.zip
extracting: ca.pem
extracting: cert.pem
extracting: key.pem
Now we just need to modify the AuthOptions section of the machine's config.json to point at the extracted files. It should look like the following:
"AuthOptions": {
"CertDir": "/home/grim/.docker/machine/machines/nas/",
"CaCertPath": "/home/grim/.docker/machine/machines/nas/ca.pem",
"CaPrivateKeyPath": "/home/grim/.docker/machine/machines/nas/key.pem",
"CaCertRemotePath": "",
"ServerCertPath": "/home/grim/.docker/machine/machines/nas/cert.pem",
"ServerKeyPath": "/home/grim/.docker/machine/machines/nas/key.pem",
"ClientKeyPath": "/home/grim/.docker/machine/machines/nas/key.pem",
"ServerCertRemotePath": "",
"ServerKeyRemotePath": "",
"ClientCertPath": "/home/grim/.docker/machine/machines/nas/cert.pem",
"ServerCertSANs": [],
"StorePath": "/home/grim/.docker/machine/machines/nas"
}
Now we can activate the host in docker-machine and run docker ps against it:
$ eval $(docker-machine env nas)
$ docker ps
Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
Oh no! What happened? Well, Docker is usually pretty strict about having the client speak the same API version as the server. Luckily, we can work around this.
$ export DOCKER_API_VERSION=1.23
$ eval $(docker-machine env nas)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
And it works! Now we can treat the NAS like any old Docker host and let docker-machine manage it for us.
Friday, August 26, 2016
Local^wBitbucket Pipelines
So a while ago Bitbucket started a beta program for a new feature called Pipelines or, more appropriately, Bitbucket Pipelines. Being interested in CI/CD, I of course submitted an application right away. I was accepted, but then found out it only had support for Git.
As you may or may not know, I'm not really a big fan of Git (that flamewar is for another time and place) and prefer Mercurial. So much so that nearly all of the 300ish repositories I have access to on Bitbucket are Mercurial. So I was dead in the water and couldn't do anything.
Fast forward a few months and Bitbucket added Mercurial support to Pipelines. SCORE!
I started adding Pipelines support to one of my simpler projects and unfortunately found it extremely tedious to have to push to Bitbucket every time just to see if I had fixed the build. So, like any good Open Source developer, I started working on a solution.
Pipelines is built on top of Docker and uses a YAML file to describe how the build works. I've been using Docker for a very long time now (I gave a talk on it in August 2014, for anyone that's curious), so I'm very comfortable with it. That said, all I really needed to do was take the YAML file and turn it into some docker run commands.
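For anyone who hasn't seen one, a minimal bitbucket-pipelines.yml looks something like this (the image and script lines here are made up for illustration, but the keys follow the Pipelines format):

```yaml
# The Docker image the build runs in.
image: python:3

pipelines:
  default:
    - step:
        script:
          # Each line is run, in order, inside the image.
          - pip install -r requirements.txt
          - pytest
```

Conceptually, local-pipelines just runs each script inside the named image with docker run, with your working copy available inside the container, so you can iterate locally instead of pushing to find out whether the build passes.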
So after a few hours of work I had a working version of what I later named local-pipelines.
It sat that way for a while until Sean Farley ran into the same issue I did with not being able to test until pushing. He proceeded to clean up the VCS interaction code and added support for passing environment variables into the pipeline. His work got me working on it again too, so I finally cleaned up the documentation and got it uploaded to PyPI.
You can find more information on the overview page or if you're using MacPorts you can find it there as well.
Building libpurple3 on OSX with homebrew
I've spent a fair amount of time this week working on making it easier (or, in some cases, even possible) to build libpurple3 on OSX. I haven't gotten to Gtk+ yet, so I haven't even tried compiling Pidgin.
Typically OSX isn't one of the first-party supported platforms, but I'm on vacation this week and only have a MacBook at my immediate disposal. So here we are ;)
Most of the issues were in the generation of the configure script, and those fixes landed in PR #103. But a side effect of an earlier PR became a problem on OSX, since Homebrew does not have farstream. PR #107 has been submitted to fix that.
Once PR #107 is merged, everything should be buildable. But it does not and will not work out of the box. Why, you ask? Politics! (Of course...)
Homebrew attempts to play things very safe, which is admirable, but it's a giant pain when you're actually trying to compile something that isn't already in Homebrew. So we go about making this easier by using a little-known feature of Pidgin's build system.
Years ago I got tired of trying to remember what arguments I was passing to autogen.sh and configure. So like any programmer, I added code that would do it for me! That code sources a shell script named autogen.args from the directory you invoke autogen.sh from. So in the root of my build directory I have a file named autogen.args with the following content:
export PKG_CONFIG_PATH=$(brew --prefix libffi)/lib/pkgconfig:$(brew --prefix libxml2)/lib/pkgconfig
CONFIGURE_FLAGS="--disable-gtkui --disable-consoleui --disable-vv --disable-kwallet --disable-meanwhile --disable-avahi --disable-dbus --disable-gnome-keyring --enable-introspection"
As you can see, I'm disabling a ton of stuff to make this build work, but that's not all. Notice the PKG_CONFIG_PATH environment variable being exported. This deals with Homebrew's policy of staying out of the system's way. OSX has its own versions of libffi and libxml2, so Homebrew will not install any part of those packages where the system will look for them. This has the unfortunate side effect of causing weird build failures until you realize that this is the problem.
At any rate, I hope this has been helpful to someone :)
Tuesday, March 17, 2015
Chromecast, Android, UPnP/DLNA, and Docker
If you're like me, you have at least one machine on your home network with a UPnP/DLNA server up and running on it. You may have used a PS3, Xbox 360, NeoTV, or some other UPnP/DLNA device to watch media from that server. And, like me, you may have grown tired of all these machines and tried to migrate everything to a Chromecast. If so, you soon discovered that the Chromecast has no way to natively play anything via UPnP/DLNA.
This is the situation I found myself in. I searched the Play Store and found BubbleUPnP. It sounded great... until I tried to use it. By default it can only cast to your Chromecast in the formats the Chromecast natively supports, which is probably not what the majority of your media is encoded in.
Luckily, BubbleUPnP has a server component that will transcode media from whatever format it's in into a format that your Chromecast supports. The Android app will even find the server for you automatically. However, the server is kind of tricky to set up and get working. Fortunately, I've been playing with Docker a lot lately.
So if you're trying to cast UPnP/DLNA and you happen to be running a Linux distribution that's capable of running Docker, you can just run the Docker image I've created for the BubbleUPnPServer.
There were existing Docker images for BubbleUPnPServer, but they all had something weird about them, which you can read about in more detail on the official page for the image. To get you started, you just need to run the following command on a machine on your local network with Docker installed.
docker run -P --net=host rwgrim/docker-bubbleupnpserver
Hope you find this as useful as I have!
Thursday, January 9, 2014
Long time no post...
I've been super busy as of late with lots of projects and other stuff, but figured a blog post is long overdue.
As some of you may know, there was a Pidgin Summer of Code project for GObjectification, and I designed and spec'd out a ton of it. I was not the primary mentor for the project, but more of a technical support. Anyway, when the topic of plugins came up, my GPlugin library got chosen.
My last post on GPlugin was announcing version 0.0.2 in April of 2012! Anyway, over the summer GPlugin's functionality grew by leaps and bounds. The Python loader was finished, a Perl loader was started, a Lua loader was completed, and the library itself is more or less feature complete. There are still plenty of things that need to be done, but everything (aside from a bug or two) is completely ready to use!
Last night I released version 0.0.12, which pretty much just fixed an extreme corner-case bug (really more of a memory leak) and added some more unit tests. At any rate, if you're working on a GLib/GObject based application and want plugins, give GPlugin a go, as I could really use some more feedback!
Sunday, April 29, 2012
gplugin 0.0.1^H2 released!
After *MANY* months of on again off again coding I'm very proud to announce version 0.0.2 of GPlugin.
Version 0.0.1 had a broken pkg-config file, which I fixed in 0.0.2... I knew I was forgetting to test something...
GPlugin is a library that gives your program GObject-based plugins. Right now it only supports native (compiled) plugins, but there are plans for at least a Python loader and whatever else anyone wants to write.
During this release I transitioned this project off of guifications.org to bitbucket. All GPlugin related business (bug reporting, file downloads, etc) should be done at the bitbucket site.
Also, documentation is in the source and will eventually be built and posted once the gobject-introspection guys figure out what's going on with g-ir-doctool. Right now you can build the docs and open them in Yelp, but not Devhelp. I'll put something on the Bitbucket site soon about how to do that.
Source tarballs can be found here: zip gz bz2