Saturday, March 8, 2014

Keeping Track of Unicorn Workers

When you are running infrastructure and hosting Rails, Sinatra, or other Rack applications, there are usually a handful of other technologies in play. A common stack for serving up your Rack application is Nginx and Unicorn. Most people know how to gather request metrics from their Rails application and from Nginx, but there is another metric that really helps you understand the performance of your Rack application.

If you are using Unicorn, you most likely understand that a Unicorn master manages a set of Unicorn workers. Each worker handles one request at a time, and if you do not have enough workers to handle all your requests, requests back up in a queue until a free worker can pick them up.

To ensure you have enough Unicorn workers, and to make sure you are not dropping requests or serving up slow ones, you should gather metrics from your Unicorn workers as well. While there are a few interesting things you can do with Raindrops, I have been using it in my unicorn.rb file and sending stats through Sensu to a Graphite server. Additionally, I notify when the number of free workers drops below a reasonable threshold.

In order to monitor your Unicorn workers, add raindrops to your Gemfile; then you can simply spin up a background thread in your unicorn.rb file to reach into the socket and send stats back out periodically.

Here is the shape of the part of my unicorn.rb file for instrumenting active workers and queued requests.
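A minimal sketch of the idea, assuming Unicorn listens on 127.0.0.1:8080 and a statsd-style collector listens on UDP port 8125 (the addresses, metric names, and interval are placeholders to adapt, and Raindrops' tcp_listener_stats is Linux-only):

    # unicorn.rb (excerpt) -- report listener stats from a background thread
    require 'raindrops'
    require 'socket'

    LISTENER = "127.0.0.1:8080"

    Thread.new do
      statsd = UDPSocket.new
      loop do
        # Raindrops reads active/queued counts straight off the listen socket
        stats = Raindrops::Linux.tcp_listener_stats([LISTENER])[LISTENER]
        statsd.send("unicorn.workers.active:#{stats.active}|g", 0, "127.0.0.1", 8125)
        statsd.send("unicorn.requests.queued:#{stats.queued}|g", 0, "127.0.0.1", 8125)
        sleep 10 # poll sparingly (see the note below)
      end
    end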


Remember not to reach into the socket too frequently, so as not to cause performance problems for your Unicorns.


Sunday, February 10, 2013

What commit/branch contains the change?

Ever have someone show you something in a repository, and you can't remember which branch had the change they showed you? Ever forget which branch has a particular commit you made at some point?

This happens to me more often than I would like to admit. With the following two git tricks, you can always figure out exactly where the change you are looking for lives.
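The two commands look something like this (the search pattern and commit SHA are placeholders):

    # 1. find the commit(s) whose contents match the pattern
    git grep 'some_pattern' $(git rev-list --all)

    # 2. list the branch(es) that contain a given commit
    git branch -a --contains <commit-sha>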



The first command does two things.
  • expands a listing of all the commits using `git rev-list --all`
  • greps across those commits for the pattern.
The second command identifies which branch(es) the commit resides in.

It's these simple things that make me really love git.

Wednesday, November 7, 2012

Find and Open in VIM

Recently, a tweet from Jamison Dance reminded me of one small painful item when using Vim: finding text across files and replacing said text with another value. Here is a classic example of doing this on the command line:
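Something along these lines (a sketch - BSD/macOS sed shown; GNU sed omits the empty quotes after `-i`):

    find . -name '*.rb' -print0 | xargs -0 sed -i '' 's/omc_data_service/service/g'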


That script searches all ruby files and replaces 'omc_data_service' with 'service'. It looks a lot more intimidating than what most text editors offer you, doesn't it? (You can read more about this script from my buddy Joe.)

After you run the script above, you then need to review the code before committing to ensure the replacement worked properly, and have your test suite confirm it. That script can also be time consuming to get right. I prefer to take a different approach.


With this approach, I grep across files for text and then open all files containing that text in Vim. Let's review how I build this command up each time I do something like this.

First, I start with a simple grep command across a set of files, using the `-r` option to search recursively, `--include` to specify the file types to include, and `.` to start in my current directory.
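Reusing the search term from the example above, that first pass looks something like:

    grep -r --include "*.rb" 'omc_data_service' .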


After quickly looking over my results from the command line to make sure I'm matching the files I want, I add the `-l` option to list only file names. This is a slight change from the complicated way I used to do it, but like all unix tools, there are lots of chainsaws of different sizes.
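With the same placeholder term, that becomes:

    grep -rl --include "*.rb" 'omc_data_service' .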

After sanity checking the files, I wrap the command in `$()` and pass it to vim.
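The final command ends up along these lines:

    vim $(grep -rl --include "*.rb" 'omc_data_service' .)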

The `$()` is a shell expansion that takes all the file names as results and expands those as unique args to be passed into vim. Now you can use your normal vim commands to search and find replace in each file. After you are done with a file, use the command `:n` to skip to the next file.

PS. yes - I know I can grep from vim and skip across each reference, but when doing a find and replace, I prefer this method.

To read more about shell expansions, see the following reference:
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html#sect_03_04_04

Saturday, April 28, 2012

Just a Promise

Warning: Code Examples are in CoffeeScript


I have been working on a javascript application built with the Spine.js framework over the past couple of months, and things have gone really well. We hold onto data for the current session, and some of the less volatile data is fetched only once during that session, and only when it's needed.


For example, we have "Discussions" and "Users", and we only fetch these models once during a session; in the case of a Discussion, we only fetch one full discussion at a time. Also, while we batch fetch Users, we cannot guarantee that all of the users are present at rendering time when we render a Discussion along with its users and their avatars. This caused quite a bit of code that looks like this...
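Something like this sketch, where the model names, `@template`, and `@discussionId` are placeholders rather than our real code:

    # in a Spine controller: bail out unless everything has been fetched,
    # and try again on the next refresh event
    render: =>
      unless Discussion.exists(@discussionId) and User.count() > 0
        Discussion.one 'refresh', @render
        User.one 'refresh', @render
        return
      @html @template(discussion: Discussion.find(@discussionId))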

Yeah, it sucks having code like that littered all across the application, and it can get even worse if you need to ensure that multiple pieces of data already exist... so tonight, I cleaned it up with a promise, and I am really happy with the results.
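The cleanup looks roughly like this sketch, using jQuery's Deferred as the promise implementation (class and model names are again placeholders):

    # resolves exactly once, after both models have been fetched
    class SessionData
      constructor: ->
        @loaded = $.Deferred()
        User.one 'refresh', @check
        Discussion.one 'refresh', @check
        @check() # in case both models were already loaded

      check: =>
        @loaded.resolve() if User.count() > 0 and Discussion.count() > 0

      ready: (callback) -> @loaded.done(callback)

    # rendering code now waits on the promise instead of guarding everywhere
    session = new SessionData()
    session.ready -> controller.render()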



Modular Javascript

Complex javascript apps and ecosystems are the norm these days, and javascript packaging and caching solutions encourage all of your javascript to be served up in a single payload. Moving towards this paradigm brings up some interesting questions about how to keep your javascript organized, lean, and fast.

A common pattern in javascript is to wrap your code in a closure to control scope, and to namespace your "objects" to prevent variables from being overwritten.

File structure looks like...

  • /javascript
    • house-app.js
    • house-initializer.js
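The contents would be something like this sketch (the HouseApp names are illustrative):

    // house-app.js -- namespace everything under one global
    (function () {
      this.HouseApp = this.HouseApp || {};

      this.HouseApp.House = function (doors, windows) {
        this.doors = doors;
        this.windows = windows;
      };
    }).call(this);

    // house-initializer.js -- must load after house-app.js
    (function () {
      this.app = new this.HouseApp.House(2, 8);
    }).call(this);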



With this approach, there are a few problems.

  1. Javascript file load order becomes important - if script block (a) depends on something from script block (b), script block (b) must load before (a). In a complex app/site, you can end up with hundreds of javascript objects, and managing one large file can become burdensome. If you break these into multiple files, circular dependencies can become an issue.
  2. All javascript in the closure runs as soon as the files load (when #call is executed), whether or not the page needs it.
  3. Global scope is still relied on heavily to attach all the "objects" to be used.

To solve these problems, I recommend using a module pattern such as CommonJS or AMD when developing your javascript. If you have used node.js, then you have used the CommonJS implementation; if you have used require.js, then you have used the AMD implementation of the module pattern.

I like the premise of the AMD implementation, which provides a convention to asynchronously load required javascript files from a server. This avoids sending a large initial payload to get a page up and running (faster page load time), but the advantage becomes moot once someone has visited the page (with caching). Also, creating a javascript "app" that can easily run in offline mode becomes more challenging, because you have to deal with multiple files in a cache manifest, etc. require.js does have a solution for this - a utility that builds one javascript file - but it relies on node.js (potentially an additional dependency in your development/deploy environment) and introduces another step into your deploy if you are using a separate asset bundling solution (asset pipeline, jammit, etc).

The way I prefer to handle the module pattern is based on the CommonJS spec, by way of a node.js package called "stitch". With this solution, the only load order you have to worry about is your initial library dependencies (stitch-header.js, jquery.js, backbone.js, etc); the rest of your javascript can be loaded in any order. This simplifies specifying your javascript includes for your packaging solution and ensures that javascript only gets executed when necessary.
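For reference, the build side with stitch looks roughly like this (a sketch against stitch's documented node.js API; file paths are assumptions):

    // build.js -- bundle everything under /app into a single file
    var stitch = require('stitch');
    var fs = require('fs');

    var pkg = stitch.createPackage({
      paths: [__dirname + '/javascript/app']
    });

    pkg.compile(function (err, source) {
      if (err) throw err;
      fs.writeFileSync('javascript/app-bundle.js', source);
    });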

Here is the example from above using the CommonJS pattern.

File structure now looks like

  • /javascript
    • stitch-header.js
    • /app
      • house-app.js
      • /models
        • house.js
        • window.js
        • door.js
      • /controllers
        • house_controller
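The modules themselves become something like this sketch (contents illustrative) - nothing in a module executes until it is require()d:

    // app/models/door.js
    module.exports = function Door() {};

    // app/models/house.js
    var Door = require('models/door');

    module.exports = function House(doorCount) {
      this.doors = [];
      for (var i = 0; i < doorCount; i++) this.doors.push(new Door());
    };

    // app/house-app.js
    var House = require('models/house');

    module.exports = function HouseApp() {
      this.house = new House(2);
    };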

Then, to use the house app in the particular page that needs it, put a small script block at the top of the page that reads: `var HouseApp = require('house_app'); window.app = new HouseApp;`


Saturday, December 31, 2011

MySql Views and Rails, The Easy Way

Joe Cannatti and I recently needed to tame an interesting database design in a legacy code base, and we determined that a mysql view would be just the ticket for a simple solution to a complex data problem.

Out of the box with rails (3.2 RC1 as of this writing), you cannot use views very easily. We could not find proper solutions online, and the "view" gem we found is no longer updated, so we did some digging and developed the following simple solution to utilize views in rails.

  1. Update your config/application.rb file to use sql instead of Active Record's schema dumper by adding the following line.
    config.active_record.schema_format = :sql
    
  2. Monkey patch Mysql2Adapter (or the appropriate adapter for your database) to include the views in the export. If you are using rails 3.*, the update is small.
    Create a file called config/initializers/monkey_patch_abstract_mysql_adapter.rb:
    module ActiveRecord
      module ConnectionAdapters
        class Mysql2Adapter

          # keep a reference to the stock table-only dump
          alias :build_table_structure :structure_dump

          def structure_dump
            build_table_structure << build_view_structure
          end

          def build_view_structure
            sql = "SHOW FULL TABLES WHERE Table_type = 'VIEW'"

            # append each view's CREATE VIEW statement to the dump
            select_all(sql).inject("") do |structure, table|
              table.delete('Table_type')
              structure += select_one("SHOW CREATE VIEW #{quote_table_name(table.to_a.first.last)}")["Create View"] + ";\n\n"
            end
          end

        end
      end
    end
    
  3. Create your view in a migration file using "execute" to run raw sql.
    class MyDatabaseView < ActiveRecord::Migration
    
      def self.up
        execute "CREATE OR REPLACE VIEW my_database_view ..."
      end
    
      def self.down
        execute "DROP VIEW my_database_view"
      end
    
    end
    
When creating your active record models against your views, remember to define the `readonly?` method to return true, unless you have created the view as an updatable view.

    def readonly?
      true
    end
Now you are all set up to use mysql views in your rails application. If you are using another database adapter, look for the method called "structure_dump" in your adapter; the rake task uses this method to dump the database structure as sql.

Monday, December 26, 2011

Create Rails Models From JSON

Recently, I had the need to create and save an active record model based on a json object returned to the server. After some digging around, I came across the following solution that fits the bill.

my_model_params = params[:my_model_params] # a raw json string from the client
my_model = MyModel.new(JSON.parse(my_model_params))
my_model.save

JSON.parse, from the ruby json gem, parses a json string into a hash, which can then be used to create your model as you normally would in rails.