Using Nodejs to record microphone input to mp3 files on Ubuntu

https://ubuntuforums.org/archive/index.php/t-224748.html was very helpful.

Install lame mp3 encoder if you don’t have it.

sudo apt-get install lame

You should already have arecord, which records audio and sends it to stdout.

  1. Run the command alsamixer to see your audio inputs and tweak volumes
  2. Run the command arecord -f cd | lame - out.mp3 to record audio to an mp3 file called out.mp3 until you hit ctrl-c

Now do that with Nodejs!

OK! This will record audio until you exit the script with Ctrl+C.

const spawn = require('child_process').spawn;

// prepare 2 child processes
const recordProcess = spawn('arecord', ['-f', 'cd']);
const encodeProcess = spawn('lame', ['-', 'out.mp3']);

// pipe them
recordProcess.stdout.pipe(encodeProcess.stdin);

// get debug info if you want
/*
recordProcess.stdout.on('data', function (data) {
  console.log('Data: ' + data);
});
recordProcess.stderr.on('data', function (data) {
  console.log('Error: ' + data);
});
recordProcess.on('close', function (code) {
  console.log('arecord closed: ' + code);
});
*/

// this seems like a good idea, but might not be needed 
process.on('exit', (code) => {
  console.log(`About to exit with code: ${code}`);
  recordProcess.kill('SIGTERM');
  encodeProcess.kill('SIGTERM');
});
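Save the script (I called mine record.js – the file name is up to you) and run it:

node record.js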

Recording & compressing short screencasts on Windows

Tools I use

  • CamStudio with the lossless codec for screen recording
  • ffmpeg
  • a scripting environment with shell access (I chose node.js) for batch converting

My process

  1. Capture video using CamStudio

    I choose to record 1 window as my region, and compress with the CamStudio lossless codec.

  2. Use ffmpeg via a node.js script to batch convert videos

    This turns them into something that can be played in a web browser

    var fs = require('fs'),
        util = require('util'),
        child_process = require('child_process');
     
    var shellCommand = 'c:\\ffmpeg\\bin\\ffmpeg.exe -i %s -codec:v libx264 -profile:v high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -y %s';
     
    fs.readdir('./', function (err, paths) {

        paths.forEach(function (path) {

            // only convert *.avi files
            if (!/\.avi$/.test(path)) {
                return;
            }

            var command = util.format(shellCommand, path, path.replace('.avi', '.mp4'));

            child_process.exec(command, function (error, stdout, stderr) {
                console.log(path, error, stdout, stderr);
            });

        });

    });

    I base my settings on Jernej’s ffmpeg tutorial. The quality and framerate are low since I usually record things that don’t move much, like terminal windows.

Ideas for improving

  • this is dumb – it runs every .avi file in the folder through ffmpeg whether it needs it or not.
  • are there smarter ways to use all the cores of my CPU?
  • what about watching for .avi file changes/additions with gaze and encoding new .avi files as soon as CamStudio saves them? (see the sketch below)
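A rough sketch of that last idea, assuming gaze’s documented ‘added’ event (untested – CamStudio may still be writing the file when the event fires, so a real version might need to wait before encoding):

var gaze = require('gaze'),
    util = require('util'),
    child_process = require('child_process');

var shellCommand = 'c:\\ffmpeg\\bin\\ffmpeg.exe -i %s -codec:v libx264 -profile:v high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -y %s';

// watch the current folder for new .avi files
gaze('*.avi', function (err, watcher) {
    // 'added' fires when a new matching file appears
    this.on('added', function (filepath) {
        var command = util.format(shellCommand, filepath, filepath.replace('.avi', '.mp4'));
        child_process.exec(command, function (error, stdout, stderr) {
            console.log(filepath, error, stderr);
        });
    });
});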

Toronto JS meetup – March 2015

Shopify hosted us

Why use nodejs to build distributed systems? by Gord Tanner

slides video

  • js was event based from the start – browser vendors didn’t want bad web developer code to stop their browsers from running
  • nodejs’ event-loop makes it easier to handle concurrency. Promises, callbacks, and queues force you to keep things simpler than mutexes, semaphores, locking, and shared memory do
  • Always test your distributed app in a distributed environment. How about multiple Vagrant VMs? This is very different from firing up a few nodejs processes on your local machine, where processes share RAM and a hard drive

Intro to WebPack with Tasveer Singh

slides video

  • old build tools merged all JS into a single file and uglified it
    • too much js parsing up front for mobile browsers when the page loads
    • if 1 character of js changes, the entire merged file must be downloaded by users again
  • requirejs – importing dependencies is easy to mess up with typos

  • so try WebPack! It is big and complicated, but you can use it if you just believe, and the output is great!
    • inlining images to reduce HTTP requests
    • support for compilation tools like CoffeeScript and Babel
    • great support for merging lots of js files into a few modules – developers can find the balance between reducing HTTP requests and downloading too much code at once
    • watch out for asynchronously loading CSS – as more CSS loads and renders, conflicting rules lead to styles changing in unexpected places

Angular 2 with Matias Niemelä

video


March Tech Talk Night – Distributed Computing, WebPack, and Angular 2.0

Thursday, Mar 12, 2015, 6:00 PM

Shopify Toronto
80 Spadina Ave. 4th Floor Toronto, ON

126 Members Went

Did you know that this February was the first February since 1967 where every day was below zero degrees? The reason is that we didn’t warm up the hearts of Toronto with a Tech Talk Night! Join us on March 12th at Shopify’s beautiful Toronto office for another spectacular event. We would like to thank Lighthouse Labs for sponsoring the event: Ligh…

Check out this Meetup →

CreateInTO February 2015 – Gathering v20.0 – The Building Things Edition

Headless.io by Patrick Schroen

  • ever stream music from your home NAS to speakers in a bar via a cellphone-tethered laptop? Patrick Schroen does!

  • headless.io is a framework for using js to make devices chat with each other, plus a server and IDE for running and editing nodejs scripts from anywhere.

  • Chrome is the only browser that allows web socket connections over SSL with self-signed certificates.

  • Raise the Pride (an installation with an actual flag that was raised and lowered via electric motor according to sentiment on Twitter) was Patrick’s first production use of the script. Big success. http://raisethepride.ca

Wattage with Peter

Sweet preview of stuff to come! The content we saw is now on the company website. The goal is to become the GitHub + Etsy of hardware.

Gathering v20.0 – The Building Things Edition

Wednesday, Feb 25, 2015, 7:00 PM

Handlebar
159 Augusta Avenue Toronto, ON

87 Creators Went

Join us at Handlebar for our monthly gathering of creative developers, designers, hackers, makers and more. This month we look at projects that connect the web with the world and that aim to make hardware development even more accessible. Patrick Schroen – Headless Web Part 2: #RaiseThePride Under the Hood. Patrick will take us through a framework h…

Check out this Meetup →

WB Lift Monitor with Kibana and ElasticSearch

View source code on GitHub

I decided to try loading data from https://secure.whistlerblackcomb.com/ls/lifts.aspx into ElasticSearch and viewing it in Kibana 3 for fun. nodejs handles downloading the info and putting it into ElasticSearch, and runs Express to serve the static Kibana pages. You can launch your own instance on Heroku with this button:

Deploy

WB Lift Monitor Screenshot

Stuff I learned making this

Kibana isn’t great for this purpose

Overlapping lines on graphs hide data. Users can’t tell exactly which lifts are closed by looking at the graphs because the last queried lift covers up the other lines at the same Y value.

Kibana rounds numbers to whole numbers, so the speed graph/histogram loses precision. Combine that with the overlapping lines mentioned above, and the graph becomes less useful. I had to multiply speeds by 10 before loading them into ElasticSearch (speeds are in dm/s instead of m/s), otherwise all the lifts appear to have only about 4 different speeds in the graph.

Still, looking at the colourful squiggly lines is fun during opening and closing time on weekends.

Line breaks are weird

The hardest part of this was making .profile.d/kibana-config.sh work. I kept running into issues with line breaks ending up in the login:password string that gets base64 encoded and used in the Authorization HTTP header, which led to lots of 401 Unauthorized errors when Kibana tried to access ElasticSearch.
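If you hit the same problem, the usual culprits are echo appending a trailing newline to the login:password string before it gets encoded, and base64 wrapping long output onto multiple lines. With GNU coreutils, something like this avoids both:

echo -n 'name:password' | base64 --wrap=0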

Some bash commands

curl -X DELETE 'https://name:password@server-name.bonsai.io/index-name'
 
curl -X POST 'https://name:password@server-name.bonsai.io/index-name'

These work great together to remove and recreate an index. Very handy when you want to clear data out of 1 ElasticSearch index.

BONSAI_URL=https://name:password@server-name.bonsai.io npm start

When working on my dev machine, I used this to start node with the same BONSAI_URL environment variable set that Heroku has.

Heroku is fun

How cool is this button?

Deploy

Things to note when you deploy this:

  1. The default time range in the Kibana dashboard is 6 hours, so it takes ~15 minutes to save enough data for the graphs to populate.
  2. If this runs on a free dyno, the process that scrapes data and serves Kibana will sleep if it doesn’t receive any HTTP requests, and no data will be scraped. No data means blank spots in the graphs. You can refresh the page with the browser’s refresh button (not Kibana’s refresh button, which only makes a request to the ElasticSearch server), or set up another service to ping the app’s URL occasionally to stop the process from sleeping (see the cron sketch below).
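One cheap way to do that last part is a cron entry on any always-on machine (the URL here is a placeholder for your own deployment):

# ping the app every 20 minutes so the free dyno never idles out
*/20 * * * * curl -s https://your-app.herokuapp.com/ > /dev/null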

Other stuff to add?

Better colours for queries

This gradient scale might be fun. Valley lifts in green, mid-mountain in white, alpine in blue.

Maybe Whistler and Blackcomb lifts could have slightly different tints?

NodeJS Toronto January 2015

StrongLoop presented their tools for working with REST APIs. Slides are here, and my notes will soon be outdated because the platform is changing quickly. Also, nodejs 0.12 came out this week.

Things with APIs that I forget about

  • watches
  • Nest thermostats
  • cars

Companies using nodejs

There was a good demo of using all your CPU cores with nodejs by managing a cluster of processes with strong-cluster-control.
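I haven’t used strong-cluster-control myself, but it manages the same worker processes you can fork with Node’s built-in cluster module. A minimal sketch using only the core module:

var cluster = require('cluster'),
    http = require('http'),
    os = require('os');

if (cluster.isMaster) {
    // fork one worker per CPU core
    os.cpus().forEach(function () {
        cluster.fork();
    });
    // replace workers that die
    cluster.on('exit', function (worker) {
        console.log('worker ' + + ' died, forking a new one');
        cluster.fork();
    });
} else {
    // workers share the same listening port
    http.createServer(function (req, res) {
        res.end('handled by pid ' +;
    }).listen(3000);
}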

StrongLoop presents: Develop, Deploy, Monitor and Scale REST APIs

Tuesday, Jan 27, 2015, 6:00 PM

One Eleven
111 Richmond St West, 5th Floor Toronto, ON

99 Members Went

We’ve got a special event for you on Tuesday Jan 27 – StrongLoop is coming all the way from SF to deliver a 3 hour training session on REST based APIs with Node. This event is geared towards beginner and intermediate developers with basic understanding of Node.js. Shubhra Kar, Director Products & Education at StrongLoop will be leading the session….

Check out this Meetup →

10 minutes with Ghost blogging platform: looks promising!

WordPress’ performance frustrates me sometimes, and Ghost received some good attention a while back. Time to take a look at Ghost!

First Impressions

  • really easy to install for dev purposes. It can run with SQLite, no need to set up a database server
  • it runs fast!
  • I like the Markdown editor
  • creating apps and themes looks simple compared to WordPress
  • the WordPress exporter for moving posts into Ghost works most of the time

But it’s still missing WordPress features I rely on:

Thoughts

I will give Ghost another try when I have time to look at the entire ecosystem and see what add-ons are out there, or if I can add the features I want. I appreciate that a fresh install of Ghost is fast and light, and hopefully adding some features I like will not slow it down much.

The good news is the team is very open about progress and changes. Here’s a blog post about what’s happening now, and here’s the Ghost roadmap on Trello.

What would moving from WordPress to Ghost be like?

  • easy to import most WP post text with a WP plugin, but I did see some errors when moving JS embedded in WP posts into Ghost
  • Ghost should be able to use the same URLs for posts as WP
  • images in posts will have to be moved to another host like S3. The Ghost team recommends Cloudinary to automate that process
  • could host site on Heroku, Digital Ocean, or another VPS provider. Not shared hosting

Fall Toronto 2014 Node.js Meetup

60+ of us met at One Eleven Richmond for 3 presentations and chatting


Using Docker for Metrics Dashboards during development

Mario from 360 incentives put up a blog post expanding on his presentation here

Story time

One time a cache busting mechanism failed, causing 100s of people to not be able to finish an online course. Unit tests did not pick this up. How do you check that business value is being delivered? Use metrics!

Metrics are like a long running acceptance test

Use Metrics in Development

  • Not just production
  • Test that you are capturing the right data for logging
  • Having dashboards in dev speeds up feedback cycles

How to do it

  • 360 uses statsd, graphite, grafana
  • Manage dashboards like code
    • Push updated dashboard layouts to version control and have them deploy automatically to wherever they are needed
  • Use Docker to distribute production-like environments to developer machines quickly with metrics software installed.
  • Mario wrote scripts to automate spinning up multiple Docker containers, destroying them, and filling them with fake data

Docker makes it easy to run the same operational infrastructure (logging, aggregation, dashboards) in production and on every dev’s machine. Then everyone gets visual feedback in the form of graphs faster

Docker Features

  • Docker containers are very lightweight compared to virtual machines
    • 1 physical machine can run hundreds of docker containers
  • Containers can be linked to each other over ports and IP addresses
  • Containers can mount parts of real file systems to themselves, like Vagrant
  • Is Docker missing features compared to Vagrant? Who cares! Docker is waaay faster to share and deploy environments with!

BTW, 360 Incentives is the 11th fastest growing tech company in Canada. Neato!


Sails.js intro

Lee from OpenCare introduced sails.js

  • Sails adds an ORM, routing features, and more on top of express.js so you write less boilerplate
  • Sails’ ORM ‘waterline’ is DB agnostic and can link objects between different DBs (e.g. relate a row in a PostgreSQL DB to a document in MongoDB)
  • Opencare uses Sails.js in production with an AngularJS frontend, and they are happy with it

Shortcomings

  • Waterline does not support populating nested objects. Objects must be created and saved one layer at a time
  • Poor error handling
    • Sails returns 500 internal server error for too many types of errors, even for input validation
    • OpenCare built their own error handler to get around this

AudienceView transitions to Node.js

I had a hard time keeping up with this presentation. Geoff Wells and Corey discussed how they are transitioning their existing, successful product to Node.js.

  • AudienceView has a 1.5 million line C++/ASP codebase that they started building 12 years ago, and it is still in use today. Time to try Node.js!
  • How to change platforms without disrupting $2 billion in revenue from international customers (stadiums, concert halls, places that people buy tickets to get into)?

Why Node?

  • Existing ASP codebase is written in JS instead of VB
  • Can integrate with C++ backend via Node addons
  • Node.js has a community

Some things built during the transition

  • Node.js addons for communicating with existing C++ code (the backend stuff)
  • An ASP template parser that runs in Node.js so existing web page templates can be reused
  • Replace ASP’s session handling with something custom-built
  • Replace ASP’s multithreading with queues

Success?

  • An earlier attempt at moving part of the platform to Java has been mostly replaced with Node.js in surprisingly little time. Success!
  • A significant part of the ASP web application was replaced with Node.js in 250 man-days. Big success!

Helpful things

Some decisions made very early on in developing the backend of the product are proving very helpful during the transition

  • Very simple use of the DB – no views, no stored procedures, no direct communication between ASP and the DB – is making the transition easier
  • The C++ part of the backend is portable and can run on OSes other than Windows
  • Choosing to use JS instead of VB way back means less boring work translating code today

Fall 2014 Node.js Meetup

Tuesday, Nov 18, 2014, 7:00 PM

One Eleven
111 Richmond St West, 5th Floor Toronto, ON

100 Members Went

Join us Tuesday Nov 18th for the Fall 2014 Node.js meetup. We’ve got three great speakers, and a new venue courtesy of Opencare and One Eleven. Here are the talks for this event: Docker, Dashboards & Node – Mario Pareja from 360incentives will talk about how 360 leveraged Docker to distribute their production application coupled with a customized m…

Check out this Meetup →

The Bowling Game Kata in nodejs

I had to ride a train with a sketchy internet connection this week, which meant it was finally time to try Bob Martin’s Bowling Game Kata. This is a simple walkthrough of building a system for scoring bowling games using Test Driven Development. It is simple (one class, < 10 functions, often less than 200 lines of code), short (50 code examples, less than an hour to complete), and puts you into Bob’s brain as he points out bits of code he doesn’t like and refactors them.

Thoughts!

I completed the kata using Javascript and NodeJS instead of Java. I used Mocha for writing tests. It was easy to translate the exercise to these tools. At the end of the project, I was curious about how to add up scores for incomplete games (< 10 frames), and whether more tests would be necessary to verify the program works because I chose a dynamic language. I was also relieved that none of the suggested refactoring steps seemed out of place to me.

I discovered the --watch argument for Mocha part way through the project, which saved some keystrokes and clicking by automatically running tests when relevant files changed.
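On this project that looked something like:

mocha --watch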

I have only completed this Kata once. A Kata is supposed to be completed many times. It should be habit forming. I haven’t decided if this is a good one to repeat over and over again. Is it too simple? I’m not sure if I would benefit from this one.

My source code is available on GitHub

More Katas!

Things I wish I knew about Node + Express before making my first website with it

Develop with nodemon, deploy with forever

nodemon vs forever? More like nodemon AND forever! nodemon is live reload for Node apps. When it detects a file in your application has changed, it will restart your application for you. It does not require much configuration, and will save you many keystrokes and mouse clicks. My typical nodemon command looks like:

nodemon app.js --debug

It can also be used as part of a grunt command with grunt-nodemon.

Apps that restart well with nodemon will work better with forever. When your live application crashes despite all your unit tests passing, forever will automatically restart it. It offers more flexible logging, and can manage several nodejs apps at once. You can start an app with forever with a command like this:

forever start app.js

Waking up in the morning to see your Apache logs full of PHP errors isn’t fun, but the site may still be up and running. Waking up in the morning to see that 1 error in your Node app brought the site down for hours is less fun. forever can help with that.

How to set environment variables

Environment variables are a good place to store private things like passwords and keys needed by your app. You don’t want those in your source code. I grew up using Windows, and didn’t know this method. Try running this command with nodejs installed:

NODE_ENV=debug DB_URL=mongodb://urlhere.ca STUFF=great node

This will start nodejs with environment variables named NODE_ENV, DB_URL, and STUFF set and ready to use. Type this command into the nodejs console next:

console.log(process.env)

You’ll see NODE_ENV, DB_URL, and STUFF included in the list of other environment variables like PATH, ready to be used by your application. The variables will go away when the nodejs process ends. I find this very helpful when working on Heroku-hosted apps. Heroku uses environment variables for storing authentication info for many add-ons, and being able to duplicate those settings locally while developing brings me one step closer to matching their environment.

I have tested this out with nodemon and forever on Windows with the Git terminal as well as Linux, and it works in all cases.

NODE_ENV=production DB_URL=mongodb://urlhere.ca nodemon index.js
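Inside the app, those values are just properties on process.env. A typical pattern (the variable names match the commands above; the fallback values are made up for local development):

// read config from the environment, with dev-only fallbacks
var dbUrl = process.env.DB_URL || 'mongodb://localhost/dev-db';
var env = process.env.NODE_ENV || 'development';

console.log('connecting to ' + dbUrl + ' in ' + env + ' mode');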

node-inspector will save you time

Look at this thing!

Breakpoints

Debugging Screenshot

Better logging

Console Screenshot

All for Node! You won’t want to use console.log() debugging after seeing this. It will save you time. I’m not even going to put an example here. Watch a screencast to get started.

Use async.js to prevent callback hell

Update 2015: Promises are more powerful, but a bit harder to understand, and Functional Reactive Streams are a bigger step away from imperative programming.

Before you write your callback in a callback in a callback in a callback, try out async.js. It will force you to think harder about the input and output of your functions, reduce the amount of code you write, and probably improve the testability of your code as well.

Here’s how good NOT nesting callbacks 6 levels deep can look:

async.waterfall([
    // make raw Tweet into something useful
    contentTransformer.transformTweet.bind(this, tweet),
 
    // check the hashtags are valid
    entryChecker.hashtagMatch,
 
    // set the user_has_won property
    entryChecker.userHasWon,
 
    // save transformed Tweet to DB
    entriesController.save,
 
    // prepare to send a DM to the user
    twitterDirectMessager.send
], function (err, result) {
    util.log(TAG + 'saved a tweet. Err? ' + err);
});

Here are 5 functions running in a waterfall pattern. I just have to put my functions in an array, make sure each passes along the correct values, and async.waterfall takes care of running them all or stopping when one returns an error, and running one last callback where I can log the result.
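Each function in the array receives whatever the previous one passed along, plus a callback for reporting an error or handing results to the next step. A hypothetical version of entryChecker.hashtagMatch might look like:

// receives the transformed tweet from the previous step
entryChecker.hashtagMatch = function (entry, callback) {
    if (entry.hashtags.indexOf('#contest') === -1) {
        return callback(new Error('no matching hashtag'));
    }
    callback(null, entry); // no error; entry goes on to the next function
};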

util.log() timestamps log messages. console.log() can log multiple objects at once

Use util.log() for messages that are going into logs. You’ll want the timestamps later to diagnose issues. Combine util.log with util.inspect() if you need to log detailed views of objects.
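For example (entry is a stand-in for whatever object you’re logging):

var util = require('util');

var entry = { id: 123, valid: true };

// prints something like: 14 Mar 12:34:56 - saved entry: { id: 123, valid: true }
util.log('saved entry: ' + util.inspect(entry, { depth: null }));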

Use console.log() for quick and dirty checks when you forget you have node-inspector in your toolbox.

Automate running npm update after updating from version control

Hours have been wasted on this. Moving from wild files to package management is rad, but also means forming new habits. You probably won’t always notice if package.json changed in the last svn update or git pull, so why not just start running npm update by default?
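With git, one way to build the habit is a post-merge hook, which runs after every successful git pull (a sketch – remember to make the file executable):

#!/bin/sh
# .git/hooks/post-merge
npm update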

Learn from boilerplate projects like MEAN and Tableau

Someone else made the hard decisions about what to name folders and files already. Tableau is a relatively lightweight example. MEAN is a stack that integrates Grunt, AngularJS, and Passport. Both packages include helpful examples.

Root users and Open File Limits

See my post on running node as a non-root user and raising open file limits to keep NodeJS web applications running smoothly and securely by using iptables and changing some Ubuntu configuration.

More things I wish I knew about

  • File uploads with Express are more work than with PHP and a good framework
  • Separating routes from controllers from business logic (like Laravel and other frameworks do) makes for easier testing.