10 minutes with Ghost blogging platform: looks promising!

WordPress’ performance frustrates me sometimes, and Ghost received some good attention a while back. Time to take a look at Ghost!

First Impressions

  • really easy to install for dev purposes. It can run with SQLite, no need to set up a database server
  • it runs fast!
  • I like the Markdown editor
  • creating apps and themes looks simple compared to WordPress
  • the WordPress exporter for moving posts into Ghost works most of the time

But it’s still missing some WordPress features I rely on.

Thoughts

I will give Ghost another try when I have time to look at the entire ecosystem and see what add-ons are out there, or if I can add the features I want. I appreciate that a fresh install of Ghost is fast and light, and hopefully adding some features I like will not slow it down much.

The good news is the team is very open about progress and changes. Here’s a blog post about what’s happening now, and here’s the Ghost roadmap on Trello.

What would moving from WordPress to Ghost be like?

  • easy to import most WP post text with a WP plugin, but I did see some errors when moving JS embedded in WP posts into Ghost
  • Ghost should be able to use the same URLs for posts as WP
  • images in posts will have to be moved to another host like S3. The Ghost team recommends Cloudinary to automate that process
  • could host site on Heroku, Digital Ocean, or another VPS provider. Not shared hosting

WordPress on Heroku with HHVM

How to

The instructions worked for me, and I installed WP’s DB and set up my admin user.

Now it gets weird

I added the WordPress Importer plugin (so I could transfer my posts) and the Independent Publisher theme to composer.json like this:

"require": {
    "hhvm": "~3.2",
    "WordPress/WordPress": "*",
    "wpackagist-plugin/jetpack": "~3.1",
    "wpackagist-plugin/wpro": "~1.0",
    "wpackagist-plugin/sendgrid-email-delivery-simplified": "~1.3",
    "wpackagist-plugin/authy-two-factor-authentication": "~2.5",
    "wpackagist-plugin/wordpress-importer": "*",
    "wpackagist-theme/independent-publisher": "1.6"
},

Since composer.lock is included in my repo, just re-deploying the site will not make composer install the new dependencies from composer.json. Why? Composer looks to composer.lock first because it has the exact versions of the packages to install. There are two workarounds:

  1. Remove composer.lock from Git repo
  2. Update composer.json AND composer.lock on some machine before committing to Git and deploying on Heroku
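To see why composer.lock wins, the install decision can be sketched in JavaScript. This is a conceptual illustration only, with made-up package data — it is not Composer’s actual code:

```javascript
// Conceptual sketch of why `composer install` skips dependency
// resolution when a lock file is present. NOT Composer's real code --
// just the idea, with made-up package data.
function composerInstall(project) {
  if (project.lock) {
    // composer.lock path: exact versions are already pinned,
    // so just fetch exactly those
    return project.lock.packages.map(function (p) {
      return p.name + '@' + p.version;
    });
  }
  // composer.json path: every flexible constraint must first be
  // resolved to a concrete version (slow, memory-hungry), and the
  // result would then be written out as a new composer.lock
  return Object.keys(project.require).map(function (name) {
    return name + '@' + project.available[name];
  });
}

var project = {
  require:   { 'wpackagist-plugin/jetpack': '~3.1' },
  available: { 'wpackagist-plugin/jetpack': '3.1.2' }, // newest match today
  lock: { packages: [{ name: 'wpackagist-plugin/jetpack', version: '3.1.1' }] }
};

// With the lock present, the pinned 3.1.1 wins over today's newer 3.1.2
console.log(composerInstall(project)); // [ 'wpackagist-plugin/jetpack@3.1.1' ]
```

This is why only workaround 2 keeps deploys deterministic: the lock branch never looks at composer.json at all.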

Remove composer.lock from Git repo?

This is easy, but not recommended because:

  1. Deploys will take longer and use more memory as the exact versions of all dependencies in composer.json will have to be resolved on every deploy

  2. Developers and production servers may end up with different composer.lock files, potentially resulting in everyone having out-of-sync dependencies

I wrote more about versioning composer.lock here.

Update both files before deploying?

Because "hhvm": "~3.2", is included in the required dependencies, composer update must be run using HHVM instead of PHP. That means I need an install of HHVM outside of Heroku just to update composer.lock.

HHVM requires a 64-bit Unix-like operating system. It will not run on my Windows dev machine, nor in a 32-bit Linux VM, which is all my dev machine is capable of running. Time to upgrade?

Temporary Solution

For now, I have to follow this workflow to add a new dependency in composer.json:

1) Remove "hhvm": "~3.2", from composer.json
2) Run composer update on my dev box without HHVM installed
3) Add "hhvm": "~3.2", back to composer.json for Heroku’s sake
4) Commit composer.lock and composer.json to Git and deploy

HHVM will not appear in composer.lock, but that’s OK. Heroku still runs it.

Update after deploying?

This was tempting:

  1. Use heroku run bash to fire up another dyno with a CLI
  2. Update composer.lock in there
  3. Commit composer.lock to Git and push it
  4. Redeploy the site and get new dependencies when composer install runs

But I found that the command hhvm `which composer` update throws an error:

Loading composer repositories with package information
Updating dependencies (including require-dev)
SlowTimer [5000ms] at curl: http://wpackagist.org/p/providers-old$77702c9f39565428994a020971d129f042db127809c1caa49589ce0862e93278.json
SlowTimer [5000ms] at curl: http://wpackagist.org/p/providers-old$77702c9f39565428994a020971d129f042db127809c1caa49589ce0862e93278.json
 
 
 
  [Composer\Downloader\TransportException]
  The "http://wpackagist.org/p/providers-old$77702c9f39565428994a020971d129f0
  42db127809c1caa49589ce0862e93278.json" file could not be downloaded: Failed
   to open http://wpackagist.org/p/providers-old$77702c9f39565428994a020971d1
  29f042db127809c1caa49589ce0862e93278.json (Operation timed out after 4949 m
  illiseconds with 2592872 out of 4680293 bytes received)

What about composer’s --ignore-platform-reqs?

composer update --ignore-platform-reqs will get the update command to run locally for me, but the resulting composer.lock file is not suitable for Heroku. The app displays an error in the browser, and running heroku logs shows that the deployment did not go smoothly:

2014-12-16T14:29:53.818251+00:00 app[web.1]: app_boot.sh: 23: app_boot.sh: vendor/bin/heroku-hhvm-nginx: not found

How did Heroku + HHVM perform?

Browsing through posts and pages was faster than on my current shared hosting (with neither install using caching). Performance in the admin area was about the same, which is where I wanted to see the most improvement.

Fall Toronto 2014 Node.js Meetup

60+ of us met at One Eleven Richmond for 3 presentations and chatting.


Using Docker for Metrics Dashboards during development

Mario from 360 incentives put up a blog post expanding on his presentation here.

Story time

One time a cache busting mechanism failed, causing 100s of people to not be able to finish an online course. Unit tests did not pick this up. How do you check that business value is being delivered? Use metrics!

Metrics are like a long running acceptance test

Use Metrics in Development

  • Not just production
  • Test that you are capturing the right data for logging
  • Having dashboards in dev speeds up feedback cycles

How to do it

  • 360 uses statsd, graphite, grafana
  • Manage dashboards like code
    • Push updated dashboard layouts to version control and have them deploy automatically to wherever they are needed
  • Use Docker to distribute production-like environments to developer machines quickly with metrics software installed.
  • Mario wrote scripts to automate spinning up multiple Docker containers, destroying them, and filling containers with fake data

Docker makes it easy to run the same operational infrastructure (logging, aggregation, dashboards) in production and on every dev’s machine. Then everyone gets visual feedback in the form of graphs faster.

Docker Features

  • Docker containers are very lightweight compared to virtual machines
    • 1 physical machine can run hundreds of Docker containers
  • Containers can be linked to each other over ports and IP addresses
  • Containers can mount parts of real file systems to themselves, like Vagrant
  • Is Docker missing features compared to Vagrant? Who cares! Docker is waaay faster to share and deploy environments with!

BTW, 360 Incentives is the 11th fastest-growing tech company in Canada. Neato!


Sails.js intro

Lee from OpenCare introduced sails.js

  • Sails adds an ORM, routing features, and more on top of express.js so you write less boilerplate
  • Sails’ ORM ‘waterline’ is DB agnostic and can link objects between different DBs (e.g. relate a row in a PostgreSQL DB to a document in MongoDB)
  • OpenCare uses Sails.js in production with an AngularJS frontend, and they are happy with it

Shortcomings

  • Waterline does not support populating nested objects. Objects must be created and saved one layer at a time
  • Poor error handling
    • Sails returns 500 internal server error for too many types of errors, even for input validation
    • OpenCare built their own error handler to get around this
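That kind of custom error handler boils down to classifying errors before responding instead of letting everything fall through as a 500. Here is a rough sketch of the idea — the error names and mapping are hypothetical, not OpenCare’s actual code:

```javascript
// Sketch of an error classifier: map error types to HTTP status codes
// instead of returning 500 for everything. Error names are hypothetical.
function statusFor(err) {
  if (err.name === 'ValidationError') return 400; // bad input from the client
  if (err.name === 'NotFoundError')   return 404; // missing record
  if (err.name === 'AuthError')       return 401; // not logged in
  return 500;                                     // genuinely unexpected
}

// In an Express/Sails-style error middleware it would be used like:
// app.use(function (err, req, res, next) {
//   res.status(statusFor(err)).json({ error: err.message });
// });

var bad = new Error('email is required');
bad.name = 'ValidationError';
console.log(statusFor(bad)); // 400 instead of a blanket 500
```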

AudienceView transitions to Node.js

I had a hard time keeping up with this presentation. Geoff Wells and Corey discussed how they are transitioning their existing, successful product to Node.js.

  • AudienceView has a 1.5-million-line C++/ASP codebase that they started building 12 years ago, and it is still in use today. Time to try Node.js!
  • How to change platforms without disrupting $2 billion in revenue from international customers (stadiums, concert halls, places that people buy tickets to get into)?

Why Node?

  • Existing ASP codebase is written in JS instead of VB
  • Can integrate with C++ backend via Node addons
  • Node.js has a community

Some things built during the transition

  • Node.js addons for communicating with existing C++ code (the backend stuff)
  • An ASP template parser that runs in Node.js so existing web page templates can be reused
  • Replace ASP’s session handling with something custom-built
  • Replace ASP’s multithreading with queues

Success?

  • An earlier attempt at moving part of the platform to Java has been mostly replaced with Node.js in surprisingly little time. Success!
  • A significant part of the ASP web application was replaced with Node.js in 250 man-days. Big success!

Helpful things

Some decisions made very early on in developing the backend of the product are proving very helpful during the transition:

  • Very simple use of the DB (no views, no stored procedures, no direct communication between ASP and the DB) is making the transition easier
  • The C++ part of the backend is portable and can run on OSes other than Windows
  • Choosing to use JS instead of VB way back means less boring work translating code today

Fall 2014 Node.js Meetup

Tuesday, Nov 18, 2014, 7:00 PM

One Eleven
111 Richmond St West, 5th Floor Toronto, ON


Javascript bind vs call

bind and call take the same arguments, but do different things. call simply lets you execute a function in a specific scope and returns the result of running that function. bind returns a new function locked to the scope passed in, which you may execute as many times as you want.

Here’s an example. Try running it in your browser’s JS console to see the results for yourself.

// create an object with its own scope to work with
var Hat = function(size) {
  this.size = size;
  console.log('hat object created');
}
 
// and another object to compare it to
var Shirt = function(size) {
  this.size = size;
  console.log('shirt object created');
}
 
// this function returns the value of "size" in whatever scope it runs in
var getSize = function() {
  return this.size;
}
 
var size = "I don't know";
var hat = new Hat("M");
var shirt = new Shirt("XL");
 
 
// getSize() usually returns the global value for size
console.log('getSize in global scope:', getSize());
 
// here's an example of running it in scope of hat once
console.log('getSize in `hat` scope:', getSize.call(hat)); // writes out 'M'
 
// run it regularly, and it is back to the global scope
console.log('getSize in global scope again:', getSize()); // writes out 'I don't know'
 
// now make a new version of getSize() that only runs in the scope of hat
var getHatSize = getSize.bind(hat);
 
// writes out 'M' every time!
console.log('getHatSize:', getHatSize());
console.log('getHatSize again:', getHatSize());
 
// what if getHatSize is added to shirt's prototype?
Shirt.prototype.getHatSize = getHatSize;
console.log('getHatSize added to shirt\'s prototype', shirt.getHatSize()); // writes out 'M', so the binding to the scope of hat has not been broken
 
// meanwhile, adding getSize
Shirt.prototype.getSize = getSize;
console.log("getSize added to shirt's prototype:", shirt.getSize()); // writes out 'XL', which means the function is executing in the scope of the shirt object

Why to version your composer.lock file

composer.json often lists dependencies with version numbers that can match a whole range of potential packages. Lines like this:

"require": {
    "laravel/framework": "4.2.*",
    "ruflin/elastica": "1.3.0.0",
    "guzzlehttp/guzzle": "~4.0",
    "sunra/php-simple-html-dom-parser": "1.5.0"
},

When composer install is run without a composer.lock file available, it has to translate the ~, *, >=, and other bits of flexible version numbering into a real version number to download for you. It also has to grab the dependencies of each of the dependencies you specify. This takes time, memory, and bandwidth. When composer finishes, it saves a list of the packages it downloaded to composer.lock, with real version numbers and all the required packages included.
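Here is a toy illustration of what that translation involves. This is a drastic simplification for illustration, not Composer’s real resolver, which handles many more constraint styles:

```javascript
// Toy version-constraint matcher -- NOT Composer's real resolver. It
// shows what "resolving" means: turning a flexible constraint into one
// exact version to download.
function matches(version, constraint) {
  if (constraint.indexOf('*') !== -1) {
    // "4.2.*" matches anything starting with "4.2."
    return version.indexOf(constraint.replace('*', '')) === 0;
  }
  if (constraint.charAt(0) === '~') {
    // "~4.0" roughly means >= 4.0 and < 5.0
    var v = version.split('.').map(Number);
    var c = constraint.slice(1).split('.').map(Number);
    return v[0] === c[0] && v[1] >= c[1];
  }
  return version === constraint; // exact pin like "1.5.0"
}

function resolve(constraint, availableVersions) {
  // take the newest published version that satisfies the constraint
  var ok = availableVersions.filter(function (v) {
    return matches(v, constraint);
  });
  return ok[ok.length - 1];
}

var published = ['4.0.0', '4.1.3', '4.2.7', '4.2.11', '5.0.0'];
console.log(resolve('4.2.*', published)); // '4.2.11'
console.log(resolve('~4.0', published));  // '4.2.11' -- 5.0.0 is excluded
```

The answer depends on what is in `published` at the moment you run it, which is exactly why two installs days apart can disagree without a lock file.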

Example Scenario

You start a new project by yourself. You set up composer.json with some packages, run composer install, and commit all the right stuff to Git, minus composer.lock.

72 hours later

Another developer on your team receives your code, including the composer.json. They run composer install and receive a different set of packages than you did 72 hours earlier because one of the dependencies pushed an update. Will that be a problem? Maybe…

72 hours later

The project is done! Time to deploy. Somewhere in the process composer install runs. All the dependencies will be resolved again, wasting bandwidth, memory, and time, and potentially installing a different set of packages than on either development machine. This won’t introduce bugs if you are lucky, but it isn’t optimal.

What’s inside composer.lock?

You’ll notice composer.lock is a much bigger file than composer.json (72kb vs 1kb for my example). Here’s what it looks like:

{
    "_readme": [
        "This file locks the dependencies of your project to a known state",
        "Read more about it at http://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file",
        "This file is @generated automatically"
    ],
    "hash": "675d6a1de44ac07f57f730785f25142b",
    "packages": [
        {
            "name": "classpreloader/classpreloader",
            "version": "1.0.2",
            "source": {
                "type": "git",
                "url": "https://github.com/mtdowling/ClassPreloader.git",
                "reference": "2c9f3bcbab329570c57339895bd11b5dd3b00877"
            },
            "dist": {
                "type": "zip",
                "url": "https://api.github.com/repos/mtdowling/ClassPreloader/zipball/2c9f3bcbab329570c57339895bd11b5dd3b00877",
                "reference": "2c9f3bcbab329570c57339895bd11b5dd3b00877",
                "shasum": ""
            },
            "require": {
                "nikic/php-parser": "~0.9",
                "php": ">=5.3.3",
                "symfony/console": "~2.1",
                "symfony/filesystem": "~2.1",
                "symfony/finder": "~2.1"
            },
            "bin": [
                "classpreloader.php"
            ],
            "type": "library",
            "extra": {
                "branch-alias": {
                    "dev-master": "1.0-dev"
                }
            },
            "autoload": {
                "psr-0": {
                    "ClassPreloader": "src/"
                }
            },
            "notification-url": "https://packagist.org/downloads/",
            "license": [
                "MIT"
            ],
            "description": "Helps class loading performance by generating a single PHP file containing all of the autoloaded files for a specific use case",
            "keywords": [
                "autoload",
                "class",
                "preload"
            ],
            "time": "2014-03-12 00:05:31"
        },
        {
            "name": "d11wtq/boris",
            "version": "v1.0.8",
            "source": {
                "type": "git",
                "url": "https://github.com/d11wtq/boris.git",
                "reference": "125dd4e5752639af7678a22ea597115646d89c6e"
            },
            "dist": {
                "type": "zip",
                "url": "https://api.github.com/repos/d11wtq/boris/zipball/125dd4e5752639af7678a22ea597115646d89c6e",
                "reference": "125dd4e5752639af7678a22ea597115646d89c6e",
                "shasum": ""
            },
            "require": {
                "php": ">=5.3.0"
            },
            "suggest": {
                "ext-pcntl": "*",
                "ext-posix": "*",
                "ext-readline": "*"
            },
            "bin": [
                "bin/boris"
            ],
            "type": "library",
            "autoload": {
                "psr-0": {
                    "Boris": "lib"
                }
            },
            "notification-url": "https://packagist.org/downloads/",
            "time": "2014-01-17 12:21:18"
        },
 
......[more packages listed here]....
 
    ],
    "aliases": [
 
    ],
    "minimum-stability": "stable",
    "stability-flags": {
        "mockery/mockery": 20
    },
    "prefer-stable": false,
    "platform": [
 
    ],
    "platform-dev": [
 
    ]
}

You’ll find all the packages in composer.json and the resolved dependencies listed, each with their version number. Why is this better? When composer install uses information from composer.lock, it simply downloads the listed packages, and doesn’t have to traverse through the dependency tree searching for packages, or figure out which version of a package to download. This saves time when a developer or server runs composer install, and ensures the same versions of every package are requested every time. Keep composer.lock in version control, and the team and production server will benefit from this when composer install is run.

More info

Read this conversation on GitHub:

KingCrunch commented on Feb 11

@paparts Sounds like you don’t versionize the composer.lock? As a rule of thumb: For applications versionize it, for libraries, don’t. You shouldn’t run update on a live system, because it is quite likely, that sooner or later a package comes in, that breaks your application, without that you’ve tested it locally. The composer.lock and composer.phar install ensures, that exactly that packages in that versions are installed, that you’ve development your application against.

paparts commented on Feb 12

I didn’t notice that the framework I was using has listed the composer.lock on the ignore list. Thanks for pointing that out.

Fatal error: Uncaught exception 'ErrorException' with message 'proc_open(): fork failed - Cannot allocate memory' in phar:///home/ubuntu/somefolder/composer.phar/vendor/symfony/console/Symfony/Component/Console/Application.php:985

Here’s an explanation of composer install and composer update and how they are different: http://adamcod.es/2013/03/07/composer-install-vs-composer-update.html.

Here’s the official composer documentation explaining what the lock file does: https://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file

Interview Cake Problem #11

I was on the right track with breaking down the problem. I was ready to start looking for sequences of matching cards across the deck and 2 piles, but the hint suggested to break the problem down further to matching individual cards. With that hint, I was able to implement a good solution.

I chose to actually pop and push elements from the arrays when doing comparisons between the piles instead of just incrementing an index every time a match was found. It is slower, but I think it is less prone to human off-by-one errors.
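Here is a sketch of that card-by-card matching approach. This is my own reconstruction for illustration, not Interview Cake’s reference solution:

```javascript
// Check whether shuffledDeck could be a single riffle of half1 and
// half2 by matching one card at a time against the top of each pile.
// Copies are made because cards are destructively removed as they match.
function isSingleRiffle(half1, half2, shuffledDeck) {
  var pile1 = half1.slice(0); // shallow copies, so the inputs survive
  var pile2 = half2.slice(0);

  for (var i = 0; i < shuffledDeck.length; i++) {
    var card = shuffledDeck[i];
    if (pile1.length && pile1[0] === card) {
      pile1.shift();  // this card came off the top of pile 1
    } else if (pile2.length && pile2[0] === card) {
      pile2.shift();  // this card came off the top of pile 2
    } else {
      return false;   // card matches neither pile's top card
    }
  }
  return true;
}

console.log(isSingleRiffle([1, 4, 5], [2, 3, 6], [1, 2, 3, 4, 5, 6])); // true
console.log(isSingleRiffle([1, 4, 5], [2, 3, 6], [1, 2, 5, 4, 3, 6])); // false
```

The slice(0) copies are the trade-off mentioned above: slower than walking indexes, but there is no index arithmetic to get wrong.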

Neat JS stuff that came in handy

Need to make a shallow copy of an array quickly? Try slice:

var copy = original.slice(0);

Need to combine 2 arrays without creating a new array (like concat() does)? Try using apply with push. push usually expects to receive a list of values as arguments instead of 1 array of values; apply helps with that.

original.push.apply(original, additional)

This runs the push function in the scope of the array original, and treats the array additional as the list of arguments push usually expects to receive. The result is that the values in additional are added to original.
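Both tricks in one runnable snippet:

```javascript
// Shallow-copy with slice, then merge in place with push.apply.
var original = [1, 2, 3];
var additional = [4, 5];

var copy = original.slice(0); // copy is a new array; original untouched
copy.push(6);
console.log(original);        // [ 1, 2, 3 ] -- the copy is independent

// push.apply spreads `additional` into individual push() arguments,
// so `original` grows in place instead of a new array being created
original.push.apply(original, additional);
console.log(original);        // [ 1, 2, 3, 4, 5 ]
```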

FSTOCO – November 2014 Conference thoughts

The Full Stack Toronto conference had a great vibe and useful talks. 2 weeks later, here are some themes that stick out in my mind:

  1. Agile/Lean Methodologies

    The first non-keynote presentation I saw was “Taking a Lean Approach to Client Projects”, and from then on this style of work came up repeatedly. Either everyone wants to work this way, or everyone already is.

    Here are slides from @HiraJaved10’s talk on doing UX Research without slowing down the agile process

  2. Monolithic Applications become microservices

    This topic came up in a few presentations as well. Everyone wants to break apart the big Ruby on Rails apps they started 4ish years ago into smaller services. Why? To make it easier for teams to divide and conquer. This can speed up QA cycles and deployment, and ease scaling.

    Here are slides from @mchacki’s presentation on Microservices

  3. AngularJS is THE full-featured front-end framework

    Here’s the vibe I get:

    • AngularJS is hot right now
    • React.js + Flux + other stuff is too new
    • Backbone is too old
    • Ember is something else
    • People don’t build big sites with just jQuery. Except for IBM maybe.

     
    Here is code and slides from @ericwgreene’s talk on Angular’s $digest loop.

  4. Responsive Web Design by Ethan Marcotte

    This book got multiple shout-outs, and a new version is coming out in early 2015. Hooray!

Presenter Dejan Glozic has more thoughts on the conference. Tweets in this stream should be relevant until the next monthly meetup in 2015!