Technological Inflation

A discussion on Twitter stirred up my neurons, and I felt this is a good time to put in writing things I’ve been discussing with people in corridors for a long while.

Say hello to technological inflation

Whether we like it or not, it is my opinion that we are in an age of technological inflation.

We can’t recruit easily

With a fresh memory of trying to recruit just 4 years ago versus today, the experience is so different that it’s really hard to grasp. And if you understand what I’m talking about, you probably agree.

The amount of shallow experience and skill out there is astonishing, and worse - it has become acceptable; companies are rushing to blitz-recruit just to strengthen the lines.

In big companies, HR presses hard to get that rare and painfully needed check-mark. Eventually, a close-to-100% rejection rate from engineering is hard to explain to higher management, so engineering is pressed to lower the bar, and is left to sort out the pieces later.

In small companies and startups, new recruits are swapped out at an unpleasant frequency (for both sides), after those companies realize they didn’t do a good job clearing the minefield of hype, buzzwords, and awesomeness from those candidates’ CVs.

We can’t make our case properly

As engineers and architects, we feel that every choice should be reasoned about and proven to be the right one, or at least the smart one. And in many cases, if not all, this is justified.

As the saying goes, “we are the choices we make” - I believe this is true for every product we build, and specifically for the sum of technical choices we make.

Because of that, we must understand that the paradox of choice overwhelms engineers as well. Granted, the amplitude varies:

For example, within the Ruby community (or any such stable community, and defining what is a stable community is a subject for another post entirely) you pretty much are safe with most of the choices you make.

However, in a typical front-end community, you are completely lost; and after you’ve made your (by all means correct) choice, chances are it will be the wrong one six months later. If you are just picking up a front-end stack, do you start with Angular? But Angular 2.0 is killing 1.x… How about React? Would that be a smarter move? But Angular already has plenty of talent out there… and it never ends.

The most painful part is, that in our micro universe - hype fuels the paradox of choice.

That is destructive. It becomes a vicious feedback cycle. Not only do you not know what to choose, you may also end up choosing something that is half-baked, poor quality, or unreliable in the long run.

But it gets worse. Even when you know what’s right, you are peer-pressured into that moment’s hype, and everyone keeps parroting the same agenda. Sound familiar?

Behavioral economics applied to tech

I suspect the tech community exhibits some of the same behavioral-economics patterns as any other community.

Here’s one to think about: Open source is ‘free’, and ‘free’ makes people do irrational things.

I love open source, and I am personally fueled by doing open source (for reasons mentioned below). But let’s ask this for a moment:

  • Do we make the same judgement when adopting an open-source product as we do a product which we paid a license for?
  • Do we judge it under the same constraints?

The answer is - probably not. But now that we know how we behave, we can counter it (seriously, that book is great). And we can judge open-source products through the same pair of glasses as any other product, with a healthy engineering approach (more on that at the end).

Funny thing is, I wouldn’t even have thought about that back in ’98 when I built my first open source project; as someone who comes from a modest home with limited resources, my motivation was simply to be able to help out others like me.

But 17 years of doing open source later, open source has become almost an industry; it’s a whole different beast, where altruism and freedom are no longer the sole substance that structures it. Like any rich ecosystem, it is exploding with options - some of them completely awesome, and some of them not.

Sadly, the practice of decision making and rational choice didn’t get that kind of love. In such a vibrant environment we are sometimes overwhelmed, pressured, and pushed in a direction that doesn’t match our intuition.

There is hope

Technological inflation is where our sense of value, and our act of judging value, become useless. Everything is awesome.

Luckily we’re engineers (some say artists?), and all we need to do to resolve that mental overhead is go back to basics:

  • Measure twice, cut once. Every tool and every product should prove itself against your problem domain.
  • Time-proven over just-released. All products need time to mature, no matter what they say. If people are counting on your product, give it serious thought before adopting the bleeding edge.
  • Some things are just the same but in different packaging. Learn to identify those, and stick to patterns and techniques, not tools and frameworks.
  • Test all the f__king time (or really, make a best effort). Prefer platforms that let you do that more easily. Once you have that in place, switching always costs less.
  • Listen to your gut, and test your gut every so often. If you have solid, good experience with a product or a tool, that must amount to something.

The end

At some point in time, you get to a stage where you start seeing history repeat itself (in this case, React IS awesome, as was doing WndProc back in the Win32 days ;).

You see it everywhere. You just don’t bother mentioning it anymore, or writing about it. You conserve energy, smile in acknowledgement, point people gently in the right direction, and keep going. Naturally, these are the voices that don’t get heard as much, sans the occasional blog post - remember that.

If you smiled and acknowledged this, welcome to being old. And wise. And sometimes grumpy. Embrace it :)

Thanks @kensodev for the inspiration and @ketacode for making the first post about it, you guys rock!

A Modern iOS App Icon and Image Workflow With Blade

There are a few steps in a typical designer-developer exchange. It starts either when you (the developer) are part of the team and the designer has finished the design work, or when you (the developer) hire a designer or contract the work out, and now need to integrate the deliverables into your app.

The following is an explanation of how I chose to attack this problem, and of Blade - an open source tool I’ve built to make the solution work.

Current Workflow

You typically need an app icon, plus image assets in various resolutions - a given image would come in 1x, 2x and 3x resolutions, while an app icon is a bit more involved.

Here are the steps:

  1. (Designer) Design
  2. (Designer) Produce - cut sizes, adapt resolutions
  3. (Designer) Handoff - provide assets, and specs
  4. (Developer) Integrate - place each provided asset in its catalog, and resolution. Repeat until it looks good.
  5. (Developer) Identify mismatch - sum all glitches up, provide feedback to designer.
  6. Go back to either (1), (2) or (3) based on step (5)

Here are all the things that can go wrong, and how Blade compensates for them:

  1. (Designer) Producing sizes - the designer can produce wrong sizes for the wrong assets, use the wrong tool for the job, save in the wrong formats, and so on. Errors can also be introduced by tooling, whether the designer cuts manually (using Sketch, for example) or automatically (using a Photoshop script or a dedicated “App Icon” generator website).

    With Blade: the designer produces and delivers just one size of the asset.

  2. (Designer/Developer) Handoff - human miscommunication. The primary cause of a lot of software fiascos. Although not as costly as a spaceship crashing, miscommunication between designers and developers still rots their workflow slowly but surely.

    With Blade: reduce communication, by providing a single asset, of known size - the largest possible.

  3. (Developer) Integration error - placing the wrong assets in the wrong resolutions, fitting assets incorrectly, and so on.

    With Blade: automates asset catalog creation, and takes that concern away from the developer.

  4. (Developer/Designer) Communication error fed back into the feedback loop. Injecting errors into a feedback loop quickly creates an avalanche effect.

    With Blade: even if there is a feedback loop, Blade lets you skip all of the production steps, because it runs them automatically. So basically it is - get a new asset, rerun Blade.

Blade Powered Workflow

With Blade, the previous error-prone workflow is reduced to a safer one:

Between the lines, this new workflow also establishes these:

  1. 1 size to rule them all. 80/20 best case.
  2. Automated steps 2-4
  3. A zero-friction process to repeat for providing feedback. Laziness as a natural optimizer.

Blade Quickstart

Using Homebrew:

 $ brew tap jondot/tap
 $ brew install blade

Now, within your project do:

$ blade --init

And take a look at the Bladefile that was generated for you.

This is how working with Blade looks like:

If this is interesting for you, see the Blade file, for the full power of Blade.

Why Blade Works


Blade was created in the wake of the Apple Watch release. At the time no tool supported the new Apple Watch form factors, which only emphasized the need to be coupled to Xcode itself rather than to an external website or tool. Using a generator website or a Photoshop script, you are at their mercy - waiting for an update in the best case, and in the worst case making all the additions by hand.

Apple saves each image catalog in its own format, in a file called Contents.json. Blade reads and generates this same format, and practically behaves like Xcode itself. This promises slick integration and future-proof tooling - when a new iOS is released with new form factors, Blade doesn’t need an update, nor do you; just rerun it to generate the additional sizes.


Every great solution and every great design has strong trade-offs. In fact, when you can’t identify the trade-offs a product or service makes, that’s where you should be cautious.

Blade makes the following trade-offs:

  • The deliverable is a single big image per catalog. Every other size is a derivative of that. This means you cannot provide a completely custom design for an exceptionally small/big variant of the asset. We also recognize that in this scenario no other tooling will help you, and in any case you should resort to a manual workflow.
  • Blade works with a single file format - PNG. While the first versions of Blade included SVG, we found that Lanczos, as a resampling algorithm for scale reduction, worked perfectly. We removed SVG support to avoid confusing users - SVG renderers out there do a poor job of rendering a pixel-perfect image, and including a headless browser would make rendering a lot slower.
  • Blade works with Xcode, and for Apple technologies only. We chose not to support Android by default; however, it should be easy to add.
  • Blade is meant for the developer. There is no UI, and you are encouraged to find your own convenient way of maintaining assets - storage and versioning.


Blade is now powering all of my iOS apps. Working with designers (one of whom happens to be my wife) has become a lot more productive - and in my wife’s case, it makes for a happier marriage :)

Feel free to submit pull requests or feedback on the Blade repo.

Using Travis CI to Validate Awesome Lists

I maintain several awesome-style lists, namely: awesome-react-native, awesome-devenv, awesome-colorschemes, awesome-weekly, and awesome-beginners.

Some are more complete than others, and me being an OCD curator, some really serve my own needs.

What I (and probably others) didn’t realize is that since the web is constantly changing, these lists are subject to breaking - in direct proportion to the number of links they host.

Travis CI

I’ve been using Travis CI for nearly every serious project I host on GitHub, and it is really valuable - especially when an open-source project is really nothing more than an idea, an execution, and the good intentions of the original author and community.

For my awesome-lists, to make sure they don’t break without my noticing, and to make sure each pull request contains valid links, I thought of using Travis CI along with a custom script that parses and verifies the links in a markdown document.

Surely this is not really testing code, and I hope it is still blessed by Travis CI (thanks for this awesome free service for open-source projects, guys!), but it can still be looked at as a sort of integration test.

The validation script is pretty naive, at the expense of being dead simple and yet effective:


require 'nokogiri'
require 'kramdown'
require 'parallel'
require 'net/http'

BASE_URI = 'https://github.com/'

# render the markdown README to HTML (Kramdown assumed here), pull out links
doc = Nokogiri::HTML(Kramdown::Document.new(File.read('README.md')).to_html)
links = doc.css('a').to_a
puts "Validating #{links.count} links..."

invalids = []
Parallel.each(links, in_threads: 4) do |link|
  uri = URI.join(BASE_URI, link.attr('href'))
  invalids << link unless Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
end

unless invalids.empty?
  puts "\n\nFailed links:"
  invalids.each do |link|
    puts "- #{link.text}"
  end
  puts "Done with errors."
  exit 1
end

puts "\nDone."

And the Travis setup should be something like this:

language: ruby
rvm:
  - 2.2
script: bundle exec ruby validate.rb

And then a build runs on each commit. A failing build looks like this:

Validating 254 links...
Failed links:
- react-native-context-menu
Done with errors.

And it is nicely exposed with a Travis badge on my READMEs.

Check this out live on my awesome-react-native list.

3 Years of Go in Production

For the last 3 years, my microservices in production were divided into the following platforms:

  • Core: Ruby and JRuby with Rails and Sinatra
  • Satellites, scale: Node.js, Clojure and later: Go

A “core” toolset would live long. It would also move fast. It would capture the domain of the business and the core product solution that provides the raw value.

Mostly, its performance profile doesn’t really introduce any infrastructural concerns.

The satellites and “scale” toolset covers use cases where we bumped into scalability issues and had to tear a piece out of the core and rebuild it on top of a more performant stack.

It also represents a pure infrastructural concern, such as a push infrastructure or analytics services. These things don’t change as fast as the problem domain, and they do need to be robust, fast and dependable.

I want to talk about that “scale” toolset and share a bit of my own experience. Let’s start at the end.

While migrating from Node to Go, these are the things I have noticed to be different.


Node liked to crash under unexpected load, mostly due to heavy use of connections and the overhead of managing them and keeping their resources in check.

True, this is mostly solved by proper capacity planning and by hardening patterns like circuit breakers and throttling.

However, using these, my Node services looked like forced concoctions that crashed hard and had a horrible GC profile of minutes per collection (this was the early Node days, when you had to patch V8 to support a larger heap). It kept reminding me that V8 was originally built to run on the desktop.

Even in its early days, Go didn’t have any of that, and that was enough. And when it did crash, it recovered crazy fast - I liked that property and made use of it: failing processes crashed fast.


Around 2011-12, Node was at the apex of performance and Go wasn’t. That is, until Go 1.1 mixed up that equation. I first noticed it through the excellent TechEmpower benchmarks:

  • Round 1, March 2013 (pre Go 1.1): Go at 13k req/s, Node at 10k req/s. No big deal.
  • Round 10 (latest): Go at 380k req/s, Node at 225k req/s - roughly 1.7x in Go’s favor. Compare that to Node with Express at 145k req/s, and Go comes out more than 2.6x ahead.

Although these are merely a specialized variant of a micro-benchmark, think about the overhead of the web framework (Express) superimposed on the host platform (Node).

When Go is saddled with a typical web framework (Gin), it doesn’t react that hysterically - the reduction in performance is in the 1-3% range; Node, however, had a dramatic reaction.

It hints at which stack you’d want to pack infrastructure onto. Think about it. Is this why a lot of full-on infrastructure projects (Docker, Kubernetes, Consul, etc.) were built in Go (hint: yes)?


To deploy Node or Ruby, you have to deal with dependencies. Tools such as Bundler, RubyGems, and npm were created to overcome dependency hell, and they provided us with an ever-helpful layer of abstraction which split our products in two:

  • Product (essence)
  • Product (dependencies)

Essentially, we could snapshot our deps and ship our product. But notice: with Bundler and npm, we snapshot a description of our deps (unless we choose to vendor - and IMHO, with npm, people usually don’t).

Every deploy might have modified the dependency tree of the product, and servers hosting these products had to reflect that.

Those wanting to solve this problem had to ask these questions:

  • Is that the responsibility of the Configuration Management infra?
  • Should the deployment process or framework take care of dependencies?
  • Should you bundle dependencies along with your product?
  • What happens when your dependencies die? (i.e. pulled off from Rubygems)

And their answers would typically be:

  • Configuration management should take care of the servers. Resources are not products.
  • Yes. The deployment process should take care of deps.
  • No, bundling dependencies is an anti-pattern. At worst, make our own local cache or proxy.
  • When dependencies die, we can use a local cache. Or: dependencies never die.


Docker seals these questions shut - everything is snapshotted into an image, and you deploy that. This provides a layer of abstraction on top of the dependencies idea - Snapshot all the things.

But still, everything written here concerns the pre-production-Docker era (which was only a year and a half ago).


Even without Docker, Go packs a self-contained binary. And the answers to the above questions become:

  • Go builds its dependencies into the binary, making a self-inclusive deliverable
  • Whether the deployment framework takes care of dependencies doesn’t matter anymore
  • Bundling dependencies doesn’t matter anymore
  • Dying dependencies don’t matter - dependencies live within your source tree

And even with Docker, there is no drama. A 5MB image plus your binary size makes pulling a new image and starting up (and failing, when needed) crazy fast.

The Surprise Factor

Go makes portable code. Java made that possible too, but Go makes for a different kind of portability. It doesn’t compile to every platform under the sun (yet), but it does build for x86 and ARM.

Building for ARM means building for mobile, and Raspberry Pis.

My tipping point for using Go came when I was looking into Python and C for my several home-project ideas. I had to look at Python because that’s what the entire RPi community seemed to use, and I had to look at C because I found that a typical Python app took 27MB of RAM doing nothing.

Obviously for the first Raspberry Pi model I had, I didn’t have a lot of memory.

So I decided to try Go, and cleared a day to do it, because I guessed cross-compilation and ARM were going to be a nightmare, and I really wanted to use Go (better put, I really didn’t want to use C).

The first 5 minutes passed, and I had cross-compiled a hello-world Go binary, SCP’d it to the Pi, and it printed ‘Hello world’ and exited. This was Go 1.0 or something of the sort.

Not making peace with how smoothly everything went, I spent the following 10 minutes making sure, and double-making-sure, that I had copied the correct binary and that it truly was my own Go program running.

I had a day to spare because everything worked perfectly, so I started working on what eventually became Ground Control.

Go is About Forgiveness

Let me tell you a story about forgiveness, and Go code.

Go is verbose. It lacks generics, it adopts code generation as an escape hatch for many things the core language lacks.

To everyone with experience - code generation is a bad smell, and this is a problem; and it should be.

However, coming from Node.js codebases - with the dreaded callback hell and a particularly low quality bar for community packages (early Node days) - a Go codebase looks like heaven. So we forgive.

Just when you are getting used to punching out very verbose Go code, you start noticing the verbosity issues you had overlooked; they bug you on a daily basis, and they are everywhere.

But then, this kind of codebase usually indicates you’re getting a bit more serious with Go. My guess is that by that stage you want to start doing concurrency work.

You discover Go’s channels, its concurrency model, and its nonblocking I/O.

Once again, you learn to appreciate it and become forgiving.

By this time, the systems you’re developing are complex (in a good way), and so they want to be generic.

You want to start building infrastructure for yourself.

The lack of generics and language abstractions starts to hit you, hard. And yet, at the same time, you notice that your production environment is quiet. So quiet that it even allows you to ponder these things.

You notice that everything you build is very robust and performant, without special effort on your part. Additionally, you remember that the last commits and fixes you made were relatively easy, because everything was super spelled out.

You accept Go

At this point, you accept Go.

You accept code generation, the CSP concurrency model, the simplicity, and the hyper-focused single-purpose tools such as go vet and gofmt, and you make peace with the fact that by using Go, you’re building and getting accustomed to a colorful, vivid tool set.

You become forgiving, because strangely, you doubted Go at every crossroad, and it didn’t let you down.

Ruby and Go Sitting in a Tree

In this post, we’ll see how to build a Ruby gem with a Go based native extension, which I called scatter - enabled by the latest Go 1.5 release and its c-shared build mode.

Feel free to drop me a line or tweet if you have questions or comments.

C Shared Libraries and Go, Ruby, Node.js, Python.

With Go 1.5, we’ve got a sparkling new go build -buildmode=c-shared mode, with which we can build C shared libraries.

Go, Ruby, Node.js and Python (and others) can use C shared libraries as extensions. As an example, if you wanted to parse XML and either your language didn’t have such a parser (as doubtful as that sounds) or wasn’t fast enough at parsing, you could use the robust libxml C library, which has existed almost forever (well, ’99).

The big deal is - Go 1.5 enables you to build libraries such as libxml, in Go instead of C, and then use them from any host language that can use C libraries.

Here’s a crazy thing: Firefox and Go via a Firefox addon.

The Case for Ruby and Go

I don’t believe Ruby is slow for general purpose use cases, but it can be for specialized ones. When you’re using Ruby for the wrong workload, it might appear slow.

With C extensions, and now Go - you don’t have to completely switch away from Ruby to tackle such a workload, when it doesn’t make sense.

Ruby FFI and C Extensions

FFI should be very easy to use in Ruby given that you already have a properly built C library.

Usually, you would use FFI on an existing, well known library and create what is called a “binding”, much like any of these projects.

So, to make Ruby call C code - for performance, or simply to stand on the shoulders of giants - we would:

  • Build a C library and export functions for FFI (which is really just using existing functions), or,
  • Find an existing C library and point FFI at its API

And of course with Go 1.5,

  • Build a Go library and expose it as a C shared library, and point FFI at its API

Getting Started

Building a C shared library in Go is easy; here’s a simple example. The Go part:

// libsq.go
package main

import "C"

//export sq
func sq(num int) int {
  return num * num
}

func main() {}

Here, the //export comment is meaningful - Go uses it to mark which functions to export. Building as a C shared library goes like this:

$ go build -buildmode=c-shared -o libsq.so libsq.go

And now we can tie this up with Ruby and FFI. The Ruby part:

# sq.rb
require 'ffi'
module Sq
  extend FFI::Library
  # libsq.so is the library we built above (the extension is .dylib on OS X)
  ffi_lib File.expand_path("./libsq.so", File.dirname(__FILE__))
  attach_function :sq, [:int], :int
end

# test it out
puts Sq.sq(4) # => 16

We’re done with the basic example. Seeing how easy this was, you probably are starting to have a ton of ideas. Note them down.

But before you get started, let’s take a look at an expanded, more practical, example.

Scatter - Go Powered Parallel HTTP Requests Ruby Gem

Scatter is a showcase gem that performs scatter/gather (or fan-out) style requests on a list of URLs in parallel.

For somewhat pragmatic purposes (there are other C based gems that do this) I’d like this kind of heavy lifting to be performed by Go’s HTTP stack, and modeled with Go’s channels and goroutines for concurrency.


We start, as before, by building our Go based C shared library. We’ll make a single function that coordinates all of the HTTP requests.

// somewhere in func scatter_request...
c := make(chan Response)
for _, uri := range cmd.URIs {
  go func(_uri string) {
    c <- makeRequest(_uri)
  }(uri)
}

result := Result{}
for range cmd.URIs {
  resp := <-c
  result[resp.uri] = resp
}
This is the core of the idea. From this, you see that there are no locks, no complex or dirty coordination to achieve concurrency.

Pitfalls and Patterns

In the simple sq example earlier, I used ints as the parameters and return values. This was intentional - it was hiding some pitfalls.

You might want to pass complex objects down to Go from Ruby - arrays, perhaps - and you might want to return multiple values from Go back to Ruby, like the idiomatic Go result-and-error pair.

ruby-ffi solves the complex-objects case for you, but not always arrays; and when you want to return multiple values, you really should return a complex object (a struct).

In this example, we want to:

  • Pass multiple URLs, or even varargs
  • Return a big result, which is an aggregation of all of the requests (array or hash)
  • Signal an error if it happened

Unfortunately, we hit all of the mentioned pain points of doing FFI: we have to build structs manually, we have to limit array sizes to work with built-in arrays, or we have to start dealing with pointers and unsafe references.

Staying Happy

I want to stay in my happy zone in terms of developer experience for now. This is why in this example we will defer parameter and return value coding to an application-side codec.

We’ll pass a string in and get a string out. We’ll encode JSON in and decode JSON out - and this could just as easily be Protobuf, Thrift, msgpack, or Avro instead; whatever you like.

Remember - this is a trick, but not a dirty one. It may very well take you wherever you want to go, without the headache.

Working With Strings

So strings just became super important. And they should be - regardless of what we’re doing here, strings have always been the workhorse data type of software.

But again, that was a deceptively simple example. I intentionally didn’t use strings, because if you tried simply doing this:

// libsq.go
package main

import "C"

//export prnt
func prnt(str string) {
  println(str)
}

func main() {}

It wouldn’t work.

Although it looks like everything is working, str will be empty. This is because Ruby strings are deceptively different from Go strings. To resolve it, we’ll do this:

// libsq.go
package main

import "C"

//export prnt
func prnt(data *C.char) {
  println(C.GoString(data))
}

func main() {}


If your Go code works on a shared Ruby resource, you need to start juggling the GIL. This can be painful if you don’t have much experience building robust concurrent code. And even if you do - concurrency is hard, and it’s harder when you also need to manage memory without a GC (in C extension land).

So, if we stick to a somewhat naive principle where our Go extension does big things completely, we won’t need to criss-cross between Ruby and Go code and maintain ownership all around.

Shared Memory

You should read about Go and cgo memory management; but hopefully, again, if we stick to the tips in this specific example - as harsh as they may sound - most of it will be irrelevant.

Making a Gem

You now have all of the theory needed to build your own Go powered Ruby gems.

Let’s take a look at the grunt work of laying out such a gem, so that it builds its hybrid of Go and Ruby and runs successfully after install for your users.

Building Native Extensions

Ruby gems support C (CRuby) and Java (JRuby) extensions by way of rake-compiler. It covers several concerns of building native extensions:

  • Locating files and resources for building C or Java
  • Generating a machine-specific context (makefiles, configuration) so that the build will address that specific machine’s architecture and bit width
  • Running the build with the relevant tooling
  • Installing and cleaning up

What you need to do is:

  • In your gemspec, point to an extconf.rb with spec.extensions = %w[ext/extconf.rb], your extconf file will be a simple and minimal descriptor of the build (out of our scope for this post)

And, optionally:

  • Provide instruction per environment (Java, C)
  • Toggle inclusion of prebuilt binaries, that were cross-compiled ahead of time

Building Our Native Go Extension

We don’t have a C codebase, nor do we need the C tooling. So we can do either of these two:

  1. Depend on Go tooling. A Go 1.5+ compiler should exist on the user’s machine; we’ll build the Go library at gem install time.
  2. Cross-compile and include all binary artifacts with our gem. At install time, sniff out the platform and pick the correct binary.

We’ll do (1) because it feels cleaner for our purposes and makes debugging easier.

If you’d like to streamline the workflow, I’d recommend (2). That way, the Ruby and Go codebases are allowed independent (and healthy) release evolution.

Our Slim extconf and Makefile

Since Go makes a great toolset, this is how our extconf looks :)

# this is a cheat
puts "running make"
`make build`

And our makefile:

build:
	go build -buildmode=c-shared -o libscatter.so libscatter.go

# fake out clean and install
clean:
	true
install:
	true

.PHONY: build clean install

That’s it. All the rest is standard Ruby gem, and our FFI glue code.

Summing Up

This idea works well - you can clone the scatter repo, install, run benchmarks and more.

Personally, I had fantasized about a Go powered Ruby gem ever since seeing and trying to build a Rust based gem, after reading Rust and Skylight.

These kinds of concoctions breathe new life into a community and a platform.

Make Sure It’s Worth It

At the time of this writing, this idea is pretty novel. There is not much official advice, and the road is paved with gotchas (again, see Firefox and Go).

I think that, for now, if you limit yourself to what was presented here, you should be on the safe side.

Truth be told, with Go the friction of building native extensions drops dramatically. Setting up a Go extension is fun, where a full-fledged C extension is a chore. Suddenly “is it worth it?” becomes “why not?”.

The Real World

Ruby is fast enough for most things. However, you might find a Go based library useful for packing raw performance and a cleaner concurrent codebase into:

  • An existing project that’s so old it wouldn’t respond well to an architecture refactor
  • An existing team that wouldn’t respond well to swapping an entire tech stack
  • Ruby projects that are missing libraries Go already has, or that have lower-quality equivalents

Low Level Go

Let’s take a look at the size of a Go binary with real-life dependencies.

These are two of my own projects, where I knew they had to run on a command line, and across platforms:

But what about when your work is very simple, and requires doing some classic C with direct operating system calls? Does it “pay off” to build that in Go?

Some would write a small C program and be done with it; being small it would probably be a lot of fun since there’s no abstraction that can stop you anywhere.

However, a one-off C program will probably compromise on portability and perhaps some other arguable properties such as code clarity and ease of maintenance.

Let’s see what it takes to get as close as possible to that C program, while staying within Go.

Reducing the Go Binary Size

Go binaries tend to grow fast with each imported dependency. However, let’s make a very important statement first: in real life, Go binaries are small enough for this not to matter at all. Moreover, at runtime a Go program will consume far fewer resources than, say, Java, Ruby or Python, which is perfect.

That being said, if we still want to get close to a C binary, we don’t want to be in the MB range, but in KBs.

How low can we go?

The Go binary packs in the garbage collector, the goroutine scheduler, and the dependencies you pull in via import.

Take a very minimal CLI program: os covers basic file handling and flag parses and accesses command line arguments. We don’t even do any common I/O operations here.

package main

import (
	"flag"
	"os"
)

func main() {
	// touch a file named by a flag: just enough to pull in os and flag
	name := flag.String("name", "out.txt", "file to create")
	flag.Parse()
	f, _ := os.Create(*name)
	f.Close()
}

Binary size: 1.8MB.

Going forward, let’s remove flag and assume we can get to ARGV via os.Args. It’ll be less elegant, but whatever.

package main

import "os"

func main() {
	// same as before, but read the name straight from os.Args
	name := "out.txt"
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	f, _ := os.Create(name)
	f.Close()
}

Binary size: 893KB. That looks quite good. Can we do better?


Going through the Go docs, we bump into syscall, Go’s interface to the low-level OS primitives:

The primary use of syscall is inside other packages that provide a more portable interface to the system, such as “os”, “time” and “net”. Use those packages rather than this one if you can.

Let’s swap everything with “raw” syscalls:

package main

import "syscall"

func main() {
	// note: permission bits are octal - 0666, not 666
	fd, _ := syscall.Open("foobar", syscall.O_RDWR|syscall.O_CREAT, 0666)
	syscall.Close(fd)
}

Binary size: 544KB. Neat (we’ll stop here - for anything practical, I assure you this is the bare minimum :).

We can shave off around 20KB more with strip, but let’s forget about that for the moment.

Shake off abstractions

As with C, you can do without abstractions in Go. You can get a lot of mileage out of syscall, as do many of the standard packages, which build on it to implement the higher-level, more streamlined Go API.

Here is a snippet from os.File.Chdir, showing it’s a thin wrapper around syscall.

func (f *File) Chdir() error {
	if f == nil {
		return ErrInvalid
	}
	if e := syscall.Fchdir(f.fd); e != nil {
		return &PathError{"chdir", f.name, e}
	}
	return nil
}


For a real-life example, you can take a look at cronlock, a small utility I’ve built with the conclusions from this article. Since it drives a mission-critical component, it had to be small and simple.

Working without abstractions from time to time is nice; it feels like being a kid with LEGO again, and there’s a special place for C in my heart because of that feeling. Go makes it a tad more accessible and portable.

Note: syscall is supposed to be deprecated in favor of a better architecture, but it hasn’t been yet - and the ideas here should still be valid after the transition. See more here.

Open Sourcing Castbox

In January 2014 I gave a talk at the Israeli Devcon in Tel-Aviv, named “Chromecast Internals”. I announced Castbox at the end of that talk.

Getting that exposure brought up interesting ideas which postponed my plan of open sourcing it, but today I have no option but to bury those plans, due to Google’s Chromecast changes.

So, much delayed, I’m open sourcing Castbox. The good news is that this project, 8 months later, is more robust (since I wanted to build a business around it).

Castbox still works with most apps and will continue to work until all Chromecast apps migrate to the new Google protocol (which will probably take time).

You can use it to develop your apps and have a Chromecast without the real Chromecast if you want - on a Raspberry Pi, for example.

Why Go

I have built several other open source projects in Go in the past 2 years, and have been running Go in production for a long while. However, I have never stated my opinion and point of view on Go, and I hope to cover some of it below.

My goals for this project were to:

  • Have a build for Raspberry Pi
  • Develop on OSX and run on Linux and Windows
  • Have a reasonably happy development experience
  • Be confident it would consume few resources and run fast

Parsing Binary Data With Node.js

I’ll start by highlighting some of the pillars of binary data, hopefully in a breeze. If you find yourself very attracted to these topics, I recommend this book (you can skip the HLA/assembly parts). Also note that it’s a bit oldschool (I read it more than 10 years ago, but it left quite an impression), so there may be newer and better resources to learn from.


A “computer” word is a sort of grouping unit for bits. For example, a word can be 8, 16, 32 or 64 bits wide. Typically, a word’s width is coupled to the CPU architecture’s width (i.e. a 64-bit CPU), but in our case we’ll treat a word as “a set of N fixed-size bits”, where N is the number of bits.


The term “endian” comes from “end”. When you look at a sequence of bytes and want to convert a group of them into a plain old number, it denotes which end of the number comes first: in big endian, the big end comes first; in little endian, the little end comes first.

For example, there are two ways to look at the couple of bytes appearing in a binary file: 01 23.
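To make that concrete, here is the same pair of bytes decoded both ways (sketched in Ruby with pack/unpack; in Node.js the equivalent calls are Buffer’s readUInt16BE and readUInt16LE):

```ruby
bytes = "\x01\x23".b  # the raw bytes 01 23

# big endian: the big end comes first, so we read 0x0123
bytes.unpack1("S>")   # => 291

# little endian: the little end comes first, so we read 0x2301
bytes.unpack1("S<")   # => 8961
```

Same two bytes on disk, two very different numbers - which is why every binary format must pin down its endianness.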

Asset Pipeline Internals

Almost a year ago, I wrote about build management for Javascript projects.

In hindsight, a year proved to be a ton of time on the client-side.

Most notably, Grunt (which I only mentioned briefly) took off like a rocket, and in the same manner Yeoman - which I almost instantly adopted as a swiss army knife for my client-side only projects.

Yeoman though, which relies on Grunt, has been going through some fundamental changes, and looks like it has been re-arranged and re-planned for a while now.

For what it’s worth, I do support the new Yeoman changes, but instead of waiting for them to crystallize I thought it was time to re-evaluate what’s out there today and see if Yeoman can be replaced altogether (the answer is ‘Yes’, keep reading :).

ZeroMQ and Ruby a Practical Example

For specific high-performance workloads, I wanted to add a new, highly optimized endpoint to Roundtrip.

If you don’t know what Roundtrip is yet, feel free to quickly check out the previous Roundtrip post and come back once you’ve got the idea of what it does.

I had to select both a wire protocol and an actual transport that will be very efficient. To gain an even higher margin over HTTP, I knew I wanted it to be at least binary and not very chatty.

A good option here would be Thrift, for example. However, I wanted to go as low as I could, because I didn’t really need anything more than the barest, simplest RPC mechanism.

However, going with straight-up TCP wouldn’t gain me much, because I typically hold development ease and maintainability as additional values. There was only one thing I felt offered an awesome development model while being as close to (or even better than, on some occasions) TCP…