.NET Unit Testing Tools

Introduction

There are no silver bullets in our chosen profession of software engineering. Select tools and technologies that meet your requirements, and when adopting a tool, be prepared to align your workflow with that tool’s happy path.

If you can’t find a tool that meets your requirements, write one. I’ve done this many times over the years, and so have thousands of our fellow developers.

At work, I spend 100% of my time on C# .NET application design, architecture, and coding. After work, it’s JavaScript and .NET.

I’ve found a sweet spot for my JavaScript development that I’ve written about in these two recent blog posts:

When writing JavaScript applications, I like that my tests run automatically any time I save a code or test file. Automatic is the operative word: not having to run an additional command for my tests to run saves time and effort. The test results appear in a command-line window. Automatic test running enables the simple red-green-refactor workflow written about by so many developers and unit-test advocates.

Additionally, I’ve grown fond of the BDD-style assertions that the Chai library offers.

.NET Red-Green-Refactor Workflow

This section is about setting up a red-green-refactor code-time workflow. None of the below content applies to setting up a CI Server that performs continuous integration and testing services.

I’ve started work on my Ocean Framework, which I originally published back in 2007. Back then I didn’t provide unit tests, a mistake I’m now correcting. I’ll write much more about Ocean and other WPF projects later in the year.

My workflow goal is:

  • Visual Studio automatically running the unit tests
  • BDD style assertions
  • Unit test framework that supports Xamarin, WPF, and C# libraries

I did my research and found that JetBrains ReSharper Ultimate, combined with JetBrains dotCover, gives .NET developers a simple continuous testing workflow. Since I already have a JetBrains Ultimate license, this was an easy decision. I also use the JetBrains WebStorm and dotPeek products.

The JetBrains Continuous Testing workflow lets developers write code and unit tests, and when the files are saved, automatically runs the unit tests. Continuous Testing only runs tests that are impacted by a code or unit test change. When I have a project with a thousand or two thousand unit tests, I’ll write a blog post covering the time savings, as well as the system resources required to support this feature.

A code coverage report is generated when tests are run and is viewed in the same tool window as the test results. The coverage tool can also highlight your code, visually indicating the lines of code covered and not covered by the last unit test run. It’s a nice feature when you’re driving for 100% coverage.

Continuous Testing has several options for triggering when tests are run. I’m currently using the On ‘Save’ Build and Run Dirty Tests option, as it meets my workflow requirements. Choose the option that best meets yours.


Continuous Testing also lets developers deselect assemblies, or parts of assemblies, so that tests and coverage are not run for them. This is a fabulous feature for the red-green-refactor workflow because it cuts the test run time significantly on large projects.

I should note that Visual Studio has these same capabilities to run unit tests and produce a code coverage report, as well as other excellent code analysis tools. You can quickly set up a shortcut key for running your tests on demand; I could not figure out how to make them run automatically. I did think about setting up a file watcher and running the unit tests from the command line when a file is saved. Since JetBrains Continuous Testing does this for me, and only runs the tests affected by the saved files, I’ve selected this tool.

Choose the tools that meet your requirements and align with your workflow. If you like the Visual Studio tools, please use them. There is no need to purchase tools you don’t need.

Tool Choices

I’ve chosen these libraries to round out my tooling selection. Use NuGet to install them into the solution’s test assemblies.

Please do your research and select tools that meet your requirements and enable an effective code-time workflow.

Close

Reviewing daily workflow scenarios for efficiency and accuracy is an important task for developers and developer leads. Forming habits that make you efficient increases your value in the marketplace. Work smarter, not harder.

Have a great day,

Just a grain of sand on the world’s beaches

 

Xamarin Dev Days Cranbury NJ

Infragistics is hosting Xamarin Dev Days on 19 Nov 2016 at our corporate office in Cranbury, NJ.

The very popular Xamarin Dev Days event sold out in one day, but there is a wait list and you can register here.

In addition to the standard curriculum, there will be an optional Bonus Content session given by the Infragistics engineers who wrote the Infragistics Xamarin Forms product.

Attend a fun day of learning with fellow developers and MVPs at our excellent facility.

Have a great day,

Just a grain of sand on the world’s beaches

 

 

 

Open Source – cave lupum

Introduction

I’ve been actively working with open-source JavaScript packages for about 18 months. Developers who are very generous with their time have built tools and frameworks that have enriched the lives of developers all over the world. I too have contributed tools and believe in this beautiful ecosystem.

A few months ago I started to look under the hood of my SPA and Node.js applications and found code and practices that caught my attention. I found packages that other packages depend on that have very few lines of code, packages with dependencies that are out of date, and dependencies carrying warnings such as, “this package version is subject to a denial of service attack.”

Upon further reflection, I became very concerned about the damage a bad actor could inflict on trusting developers who download packages with a dependency that has been replaced by evil code. My system, and the software I write, could be compromised. Now imagine ticking-time-bomb code replicated across Docker containers and placed on servers. The damage could be immeasurable.

cave lupum – Beware the wolf disguised as a lamb.

Publicly articulating details of the many attack scenarios I’ve thought of would be irresponsible. Instead, it’s time to start the conversation around the problem that our international community is currently faced with and how we can protect our precious open-source.

Again, this blog post is about getting the conversation started.

Over the last few weeks, I’ve met with high-profile MVPs and a few corporate executives who share the quality and security concerns I’m raising in this blog post.

For the purpose of this blog post, “packages” refers to open-source JavaScript packages that are added to Nodejs, or JavaScript web applications using a package manager.

I’ll have a section down below for compiled downloads such as NuGet, Visual Studio Gallery, and the Visual Studio Marketplace.

Proposal Goals

  • Not add any burdens to the open-source developer
  • Provide package consumers a measured level of confidence in the package and its dependencies
  • Raise the quality of packages by having them evaluated
  • Have repositories provide evaluation services and reporting for their packages

Proposal

Package evaluation would be performed in the cloud. An MVP friend also suggested a command-line tool that could be used locally.

Package evaluation should be opt-in: developers wanting their packages evaluated can submit them for evaluation. An opt-in approach would not burden developers who don’t want to participate, while driving up the quality of the packages that are evaluated and giving those developers additional credibility for their efforts.

Consumers of packages could choose to use evaluated packages or continue to use non-evaluated packages at their own risk.

Evaluation and Download

Where packages are evaluated (centralized vs. non-centralized) is a topic that will generate much discussion and debate.

Where evaluated packages are downloaded from (centralized vs. non-centralized) is another topic that will generate much discussion and debate.

Evaluation Metrics

A standard set of metrics is applied to JavaScript packages, yielding a consistent evaluation report, enabling consumers to easily compare packages.

Below is a short “starter list” of metrics. Additional metrics should include the warnings such as those that npm emits when packages are installed.

Most evaluation metrics are yes or no.  Some are numeric; others are a simple list. When a package is evaluated, all of its dependencies are also evaluated. A package’s evaluation can only be as good as its weakest dependency.

  • Package is signed
  • Includes a software license
  • Number of dependencies
  • Number of dependencies with fewer than ten lines of JavaScript
  • Package is out of date
  • Package has warnings
  • Has out-of-date dependencies
  • Has dependencies with warnings
  • Has unit tests
  • Has 100% unit test coverage
  • All tests pass
  • Makes network calls
  • Writes to the file system
  • Threat assessment
  • Package capabilities (which APIs are being used)
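To make the yes/no, numeric, and list metric types concrete, here is a hypothetical sketch of what an evaluation report object might look like. All names here are illustrative; no such evaluation tool exists yet.

```javascript
// Hypothetical evaluation report builder; names are illustrative only.
function evaluatePackage(pkg) {
  return {
    name: pkg.name,
    packageSigned: Boolean(pkg.signature),            // yes/no metric
    hasLicense: Boolean(pkg.license),                 // yes/no metric
    dependencyCount: (pkg.dependencies || []).length, // numeric metric
    warnings: pkg.warnings || []                      // simple list metric
  };
}

const report = evaluatePackage({
  name: 'left-pad',
  license: 'MIT',
  dependencies: [],
  warnings: []
});

console.log(report.hasLicense);      // true
console.log(report.dependencyCount); // 0
```

A real report would also roll up the evaluations of every dependency, since a package’s evaluation can only be as good as its weakest dependency.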

NuGet, Visual Studio Gallery, Visual Studio Marketplace

NuGet, Visual Studio Gallery, and Visual Studio Marketplace serve compiled code which is evaluated differently than JavaScript. Microsoft will need to determine the best way to evaluate and report on these packages.

Funding

This proposal affects developers and infrastructures from all over the world.

As a software engineer, I know that while there will be challenges, the problems identified in this proposal are solvable.

Getting big corporations and governments to proactively and cooperatively take on and complete a task because it’s the right thing to do is a challenge, but one that must be initiated.

Waiting until there is a problem and then trying to stem the tide and roll back the damage is a poor strategy. Benjamin Franklin said, “an ounce of prevention is worth a pound of cure,” and he was correct.

I honestly do not believe getting funding for a project of this scope will be a problem.

Next Steps

Big players need to meet and solve this problem.

Developers, start the conversation in your sphere of influence and contact big players and let them know your concerns.  Request that they take proactive action now.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

 

XAML Designer Brush Editor Fixed

Introduction

Since the release of Visual Studio 2015 Update 3, the VS XAML Designer pop-up brush editor quit working in the Properties window when properties were sorted by name, or when properties of type Brush were not in the Brush category. This issue affected both Visual Studio and Blend for Visual Studio 2015.

Fix

Microsoft posted the fix in this cumulative servicing update:

Microsoft Visual Studio 2015 Update 3 (KB3165756)  

https://msdn.microsoft.com/en-us/library/mt752379.aspx

I have installed the update and tested various controls and use cases and can confirm the fix corrected the problem.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

 

Component Generator for AngularJS, Angular 2, Aurelia

Introduction

I believe developers should own their code generation story. The value in owning your code generation is that when platforms change, APIs change, or language grammar is enhanced, you can easily refactor your templates and not miss a beat. I also believe that owning your code generation story is a forcing function for thinking through how your application works, how it is wired up, and how to unit test it.

This tool provides templates that you must edit before generating any code.

What?  No ready-made templates?  Karl, are you nuts?  Why?

Let’s think about code generation templates for a minute. Templates are used to create language-specific output. Developers are using many flavors of JavaScript today: ES5, ES6, ES6 with ES7 experimental features, TypeScript, CoffeeScript, etc. Stylesheet files can be written using LESS, SASS, SCSS, or CSS.

What language should I use to write the templates?  I use ES6, but not everyone does.

Let’s think about how Angular apps are structured and wired up.  Check ten blog posts, and you’ll read about ten valid ways to structure and wire up an Angular SPA app.

Small apps are wired up differently than medium or large apps.  Some put small modules in a single folder, whereas a medium-sized module may be within a single parent folder, with each component in a separate folder.

Developers doing unit testing will structure their component wire up differently to better support testing scenarios without having to load Angular for a component or controller unit test.  Not loading Angular for each unit test significantly speeds up your unit tests.

Based on the above language, structure, and wire up options, providing default templates would provide zero value.

Your Small Part

You’ll edit the empty template files for the component-based SPA frameworks you author applications for, such as AngularJS, Angular 2, or Aurelia. Having you edit your templates is the best way for this tool to support many different JavaScript flavors, AngularJS coding styles, and component wire-up strategies. Additionally, you’ll be the proud owner of your code generation story.

This tool uses Underscore.js templates, which are easy to author, usually requiring only a minute or two. However, if your scenario requires it, you can swap out the template engine.

More Than a Tool

My first version of this tool was written in less than two hours as a single ES6 file and worked great. It didn’t have any tests but worked perfectly for my one scenario: Angular 1.5.9 components and ES6.

Then my mentoring and scope creep genes kicked into high gear, and I spent a few days and several versions to produce this tool.

I didn’t want to miss an opportunity to share a tool that I think will benefit many developers across languages, SPA frameworks, and scenarios. But to accomplish this goal, the tool would require 100% test coverage and would need to work with any JavaScript flavor and any component-based SPA framework.

I hope that others can learn from the architecture, code, and unit tests presented here. I welcome both positive and negative feedback to improve the tool and code.

This tool was also a forcing function for me to dive deep into authoring testable Node.js tools using ES6. It took me a little time, learning the testing tools, but when I refactored the code, having 100% test coverage paid off in spades.

Background

During my career as a Software Engineer and Architect, I’ve always written tools to increase my productivity and accuracy. Tools like XAML Power Toys or this tool came about because I found myself repeating the same task over and over, and I knew the computer is capable of making me vastly more productive.

Two weeks ago I started writing an AngularJS 1.5.9 component based application.  AngularJS components are similar in concept to Angular 2 and Aurelia components although the syntax and implementation are a little different.

After creating my second component, I stopped to write this tool. I’m not going to perform mundane, repetitive tasks like creating folders, components, controllers, templates, tests, and CSS files; this is a perfect assignment for a tool.

In addition to creating folders and files, the tool leverages user-editable templates for generating code that matches your requirements and coding style. The ability to generate boilerplate code yields a significant increase in productivity.

What You’ll Learn

  • Features of the tool
  • Tool Architecture
  • Requirements
  • Installation
  • Local development installation
  • Local production installation
  • Command line usage
  • Editing Templates

Videos

  • Getting Started
  • Modifying the tool
  • Unit testing the tool

Features

  • Node.js command line utility, written using ES6 and TDD methodology
  • Cross-platform Windows, Mac, and Linux (thanks to Node.js)
  • User editable templates drive the generated code
  • Create components for any JavaScript framework like AngularJS, Angular 2, and Aurelia
  • Separating templates by framework enables supporting multiple frameworks
  • Component output folder is optionally created based on a command line option
  • Component controller file is optionally generated based on a command line option
  • Component unit test file is optionally generated based on a command line option
  • Component CSS file is optionally generated based on a command line option
  • You can modify anything about this tool; make it fit like a glove for your workflow

Tool Architecture

This object-oriented tool was written using ES6. I have separated the functionality of the application into small, single-responsibility classes. Besides being good application design, this makes understanding and unit testing much easier. When I think of this tool, a .NET console application immediately comes to mind.

I wrote guard clauses for every constructor and method. I always add guard clauses, even on private methods, irrespective of language, because I am a defensive software engineer. Writing guard clauses requires no extra effort given tools like ReSharper and the ubiquitous code snippet feature in most editors and IDEs. Guard clauses future-proof code in maintenance mode: another developer makes a false assumption while editing, and the next thing you know a customer is filing a bug.
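As an illustration, a guard clause in ES6 can be as simple as the following sketch (the method and messages here are hypothetical, not the tool’s actual code):

```javascript
class IoUtility {
  // Guard clauses validate arguments up front and fail fast with a
  // clear message instead of letting a bad value propagate.
  writeFile(fileName, content) {
    if (!fileName || typeof fileName !== 'string') {
      throw new Error('fileName is required and must be a string.');
    }
    if (content === undefined || content === null) {
      throw new Error('content is required.');
    }
    // ... the actual file writing would go here
    return `${fileName}: ${content.length} chars`;
  }
}

const io = new IoUtility();
console.log(io.writeFile('a.txt', 'hello')); // a.txt: 5 chars
```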

In some of the classes, I have refactored small sections of code into a method. Doing this makes the code easier to read, comprehend, and simplifies unit testing.

Classes

  • ApplicationError – derives from Error; an instance of this class is thrown when a user error caused by improper command line usage occurs. It permits the top-level try/catch block in the ComponentCreator to perform custom handling of the error.
  • CommandLineArgsUtility – provides functions related to command line arguments.
  • ComponentCreator – performs validation and writes the templates.
  • IoUtility – wrapper around the Node.js fs module.
  • Logger – wrapper around Console.
  • TemplateItem – wrapper around a supported framework; exposes all data as immutable.
  • Templates – stores a collection of TemplateItems; provides immutable data about code generation templates.
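As a sketch of the ApplicationError pattern described above (the real class may differ in detail):

```javascript
// Sketch of a custom error type; the actual ApplicationError may differ.
class ApplicationError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ApplicationError';
  }
}

try {
  throw new ApplicationError('Component name is required.');
} catch (e) {
  // A top-level catch can branch on the error type to decide whether
  // this is a usage error (print help) or an unexpected failure.
  if (e instanceof ApplicationError) {
    console.log(`Usage error: ${e.message}`);
  } else {
    throw e;
  }
}
```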

The entry point for the tool is index.js. This module instantiates a TemplateItem for each supported framework, creates all of the required dependencies for the ComponentCreator and injects them, and then invokes the create method on the ComponentCreator instance.

The reason for creating and injecting all external dependencies in the root of the application is to enable unit testing. Dependencies injected using constructor injection can easily be stubbed or mocked.
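A minimal sketch of this constructor-injection pattern follows; the class shapes are illustrative, not the tool’s exact code:

```javascript
// The class under test receives its dependencies rather than creating them.
class ComponentCreator {
  constructor(ioUtility, logger) {
    if (!ioUtility) throw new Error('ioUtility is required.');
    if (!logger) throw new Error('logger is required.');
    this.ioUtility = ioUtility;
    this.logger = logger;
  }

  create(name) {
    this.ioUtility.writeFile(`${name}.component.js`, '// generated');
    this.logger.log(`Created ${name}`);
  }
}

// In a unit test, hand-rolled stubs (or Sinon stubs) replace the real
// file system and console wrappers.
const written = [];
const stubIo = { writeFile: (file, text) => written.push(file) };
const stubLogger = { log: () => {} };

new ComponentCreator(stubIo, stubLogger).create('sales');
console.log(written); // [ 'sales.component.js' ]
```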

I have successfully used this type of architecture for writing other complicated Node.js applications.

Dependencies

  • Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.
  • Chai – is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Istanbul – full featured code coverage tool for Javascript
  • Mocha – is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Sinon.js – Standalone test spies, stubs, and mocks for JavaScript.
    No dependencies and works with any unit testing framework.
  • Sinon-Chai – provides a set of custom assertions for using the Sinon.JS spy, stub, and mocking framework with the Chai assertion library. You get all the benefits of Chai with all the powerful tools of Sinon.JS.

Requirements

  • Node.js (install either the LTS or Current version; personally, I use Current)

On Macs, I don’t recommend installing Node.js from the Node.js website. If you do, upgrading is a pain.

For Mac users, use brew. After installing brew, run this command from a terminal:

brew install node

If you use brew, future upgrading or uninstalling Node.js on your Mac is a breeze.

Recommendations

To help understand how this application is setup and how unit testing is setup and invoked, please read my blog post: https://oceanware.wordpress.com/2016/08/10/easy-tdd-setup-for-nodejs-es6-mocha-chai-istanbul/

Installation

Ensure you have installed the above requirements.

Download or clone the Component Generator repository here.

Open a terminal or command window, and then navigate to the root folder of the tool and run this command.

npm install

Next, run the unit tests and see the coverage report by executing this command:

npm test

Running the unit tests also runs the Istanbul code coverage tool, which outputs a detailed coverage report to the /coverage/lcov-report/index.html file.

Local Development Installation

While developing, testing, and editing your Node.js command-line tools, you’ll need to set up a symbolic link so you can execute your tool from any folder. Setting up a symbolic link instead of installing your Node.js package globally allows you to continue editing the tool’s code while still being able to execute the tool from any folder on your computer.

Mac and Linux

From the root folder of the tool, open a terminal or command window and run this command:

npm link

To test your development installation, run this command:

gencomponent

The tool will display the command line usage.

Navigate to another folder and rerun the gencomponent command.  You should get the same output.

Windows

You MUST give yourself Full Permissions on the /node_modules folder before proceeding.

Follow the above steps for Mac and Linux.

Local Production Installation

This step is optional. If you like the convenience of being able to edit your templates or change the tool’s code while still being able to invoke the tool from any folder, then, by all means, skip this step.

If you want to install the tool globally, or want to install the tool on other machines, then follow these steps.

Before proceeding, you need to remove the symbolic link you created in the above steps.

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm unlink

Windows, Mac, and Linux

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm install -g

You can now invoke the tool from anywhere on your machine.

After installation, if you need to modify the tool or template, uninstall the tool globally, make the changes and reinstall it globally.

Command Line Usage

Before proceeding, ensure you have created a symbolic link for the tool, or installed it globally.

gencomponent (component name) [-(framework)] [--ftsc]

The component name is required to generate a component.

The framework is optional and defaults to ‘ng’ if not supplied. The default framework can be changed by modifying the code. The default valid options are:

  • -ng
  • -ng2
  • -aurelia

Code generation options are prefaced with a double dash (--), are optional, and can be in any order. The valid options are:

  • f = create a component folder
  • t = create a component unit test file (and a controller unit test file when the controller is also generated)
  • s = create a component CSS file
  • c = create a component controller file

If invalid arguments are provided or you attempt to create a component that already exists, an error message will be displayed at the command line.
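A simplified sketch of how arguments in this shape might be parsed (illustrative only; the tool’s actual CommandLineArgsUtility code may differ):

```javascript
// Parses: gencomponent <name> [-framework] [--ftsc]
// Illustrative only; not the tool's real parsing code.
function parseArgs(args) {
  const result = { componentName: null, framework: 'ng', options: [] };
  for (const arg of args) {
    if (arg.startsWith('--')) {
      result.options = arg.slice(2).split(''); // e.g. '--ft' -> ['f', 't']
    } else if (arg.startsWith('-')) {
      result.framework = arg.slice(1);         // e.g. '-aurelia'
    } else {
      result.componentName = arg;
    }
  }
  return result;
}

console.log(parseArgs(['Sales', '-aurelia', '--ft']));
// { componentName: 'Sales', framework: 'aurelia', options: [ 'f', 't' ] }
```

A real implementation would also validate the component name and reject unknown frameworks or option letters.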

Usage Examples

1. Show the command line usage message.

gencomponent

gencomponent ?

2. Create the Sales component in the current folder along with the sales.component.js and sales.template.html files. Templates are selected from the default /templates/ng folder.

gencomponent Sales

3. Create the Sales component in a new component folder named Sales along with the sales.component.js, sales.controller.js, sales.template.html, sales.component.spec.js, sales.controller.spec.js, and sales.template.css files. Templates are selected from the default /templates/ng folder.

gencomponent Sales --ftcs

4. Create the Sales component in the current folder along with the sales.component.js, sales.component.spec.js, and sales.template.html files. The templates are selected from the /templates/aurelia folder.

gencomponent Sales -aurelia --t

5. Create the SalesDetail component in a new component folder named SalesDetail along with the salesDetail.component.js, salesDetail.component.spec.js, salesDetail.controller.js, salesDetail.controller.spec.js, and salesDetail.template.html files. The templates are selected from the /templates/ng2 folder.

gencomponent SalesDetail -ng2 --ftc

You can easily modify this tool to support more or fewer frameworks and to change the file naming conventions.

Editing Templates

Please read the Underscore.js template documentation; it’s very short.

Template engines allow the template author to pass a data object to methods that resolve the template and produce the generated code.
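For illustration, here is roughly what resolving a template looks like. A tiny regex-based resolver stands in for Underscore’s _.template so the sketch has no dependencies:

```javascript
// Simplified stand-in for Underscore's _.template: replaces
// <%= propertyName %> placeholders with values from a data object.
function resolveTemplate(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, (match, key) => data[key]);
}

const template = 'class <%= componentName %>Controller {}';
const generated = resolveTemplate(template, { componentName: 'Sales' });

console.log(generated); // class SalesController {}
```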

This tool passes a rich data object that you can use inside your templates.

gencomponent SalesDetail

The below template data object was hydrated and passed to the template engine when the tool was invoked with SalesDetail as the desired component.

{ componentName: 'SalesDetail',
  componentSelector: 'sales-detail',
  componentImportName: 'salesDetail.component',
  controllerImportName: 'salesDetail.controller',
  componentImportNameEnding: '.component',
  controllerImportNameEnding: '.controller',
  templateFileNameEnding: '.component.html',
  componentFileNameEnding: '.component.js',
  controllerFileNameEnding: '.controller.js',
  componentSpecFileNameEnding: '.component.spec.js',
  templateCssFileNameEnding: '.component.css' }
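Note that componentSelector is the kebab-case form of the component name. One possible derivation is sketched below (not necessarily the tool’s exact implementation):

```javascript
// Converts a PascalCase or camelCase component name to a kebab-case
// selector, e.g. 'SalesDetail' -> 'sales-detail'.
function toSelector(componentName) {
  return componentName
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // insert a dash at case boundaries
    .toLowerCase();
}

console.log(toSelector('SalesDetail')); // sales-detail
```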

This snippet from the below HTML file shows the syntax for injecting the componentName property value into the generated code.

<%= componentName %>

The below HTML can be found in the /templates/ng/componentTemplate.html file. It demonstrates consuming a data object property in a template.

<h2>Component Generator Data Object</h2>
<p>These are the data properties available in all template files.</p>
<p></p>
<p>componentName = <strong><%= componentName %></strong></p>
<p>componentSelector = <strong><%= componentSelector %></strong></p>
<p>componentImportName = <strong><%= componentImportName %></strong></p>
<p>controllerImportName = <strong><%= controllerImportName %></strong></p>
<p>componentImportNameEnding = <strong><%= componentImportNameEnding %></strong></p>
<p>controllerImportNameEnding = <strong><%= controllerImportNameEnding %></strong></p>
<p>templateFileNameEnding = <strong><%= templateFileNameEnding %></strong></p>
<p>componentFileNameEnding = <strong><%= componentFileNameEnding %></strong></p>
<p>controllerFileNameEnding = <strong><%= controllerFileNameEnding %></strong></p>
<p>componentSpecFileNameEnding = <strong><%= componentSpecFileNameEnding %></strong></p>
<p>templateCssFileNameEnding = <strong><%= templateCssFileNameEnding %></strong></p>

Template Folders

  • /ng – AngularJS
  • /ng2 – Angular 2
  • /aurelia – Aurelia

Available Templates

  • Component – always generated
  • Component Template – always generated
  • Controller – optionally generated
  • Component Unit Test – optionally generated
  • Controller Unit Test – automatically generated if the Controller and Component Unit Test is generated
  • Component Template Stylesheet – optionally generated

Workflow for Editing Templates

Before diving into template editing, you need to know exactly what the generated code needs to look like.

You’ll need to decide on your application structure and wire-up. Will you put your controllers inside the component file or keep them in separate files? Are you going to write unit tests?

I recommend writing several components, working out the details, and identifying repeated code such as import or require statements and commonly used constructor-injected objects.

In the below example, I have an existing About controller that represents how I would like my controllers to be generated.

Copy the code to generate into the appropriate template file, and then replace the non-repeating code with resolved values from the template data object.

In the below example, I copied the About controller into the componentTemplate.controller.js file and then replaced the “About” name with the componentName data object property.

class AboutController {
   constructor() {
   }
}

export default AboutController;

The below template will generate the above code.

class <%= componentName %>Controller {
    constructor() {
    }
}

export default <%= componentName %>Controller;

Now repeat the above steps for each template and for each framework you’ll be generating code for.

Note that some templates will be empty; this is normal for .css and possibly .html files. But at least you didn’t have to waste precious time creating the file.

Videos

Getting Started

This 8-minute video explains how to get started with this tool.

Modifying the Tool

This 11-minute video explains how to modify:

  • templates
  • frameworks
  • file naming conventions
  • template engine

Unit Testing the Tool

This 23-minute video explains unit testing this tool.

Close

Writing your own cross-platform, command-line tools using Node.js is fun.

Having 100% test coverage is not easy and takes time. Just know that your customers and fellow developers will appreciate you putting the effort into a release with 100% test coverage.

Have a great day.

Just a grain of sand on the world’s beaches.

 

Xamarin Forms Bindable Picker v2

Introduction

I’ve updated the BindablePicker from a previous blog post, added new features, and created a GitHub repo for the code.

Xamarin Forms is a new and cool API for quickly building native apps for IOS, Android, and Windows UWP in C#.

The Xamarin Forms API comes with a primitive Picker control that lacks the typical bindable properties developers expect a Picker (similar in functionality to a desktop ComboBox) to have.

Xamarin Forms makes it very easy for developers to extend the API, write your own custom controls, or write custom renderers for controls.

This BindablePicker is the result of studying blog and forum posts and receiving feedback and bug reports on the original version.

API Comparison

Xamarin Forms Picker API

  • SelectedIndex (bindable)
  • Items (not bindable)

Bindable Picker API

  • ItemsSource (bindable)
  • SelectedItem (bindable)
  • SelectedValue (bindable)
  • DisplayMemberPath
  • SelectedValuePath

New Features Added

  • Support for collections that implement INotifyCollectionChanged, like ObservableCollection

Bug Fixed

The original BindablePicker did not correctly set the SelectedItem after the ItemsSource was refreshed at runtime.

Bindable Picker Source

This repo contains a project that demonstrates scenarios for using this control, and it includes the source for the BindablePicker.

https://github.com/Oceanware/XamarinFormsBindablePicker

Training Video – XAML Power Toys BindablePicker Scenarios

This short video explains three common use cases for the BindablePicker.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

Easy TDD Setup for Nodejs ES6 Mocha Chai Istanbul

Introduction

I’m working on a command-line tool for AngularJS, Angular 2, and Aurelia that creates components from user templates. It creates the folder, component js file, component template HTML file, optional component template CSS file, and the component spec js file.

The tool generates the code using the Underscore.js template engine.  It’s amazing how much code you’ll no longer have to type: boilerplate component wiring and default unit tests for most components.

As I was writing the tool, I decided to break out the project setup into this small blog post to make the tool blog post simpler and focused. You can use this simple project as a starter or learning tool for your Nodejs ES6 projects.

I wrote this application and the command line tool using the Atom Editor.  I’ve included my Atom snippets below; they give me a big productivity boost when writing unit tests.

This blog post is much more about setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul than how to use these tools. Please refer to the many outstanding blog posts, courses, and tutorials on these tools and ES6.

My Approach To Nodejs ES6

It’s amazing what you can write using Nodejs.  I’ve written complex, multi-process apps that connect IoT devices over MQTT and stream real-time updates to web clients, as well as simple apps like the above command line tool. Nodejs is wonderful and is what enables Electron to be the prodigious cross-platform desktop application tool that it is.

ES6 is a clean, modern language: simple, familiar looking, and fun.  I’ve used ES5 and TypeScript for many projects but settled on ES6. I blogged about my decision here.

Using ES6 with Nodejs does not require Babel for your code or unit tests.  I’m not using ES7 features such as class properties or decorators, but I can live with that for now.
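As a quick sketch (a hypothetical Counter class, not part of the demo app), here are the kinds of ES6 features that plain Node runs without Babel: classes, default parameters, arrow functions, and template literals.

```javascript
'use strict';

// Hypothetical example: ES6 features Node runs natively, no Babel needed.
class Counter {

    constructor (start = 0) {            // default parameter
        this.count = start;
    }

    incrementBy(values) {
        // arrow function mutating state, template literal for the result
        values.forEach((v) => { this.count += v; });
        return `count is ${this.count}`;
    }
}

module.exports = Counter;

let counter = new Counter(10);
console.log(counter.incrementBy([1, 2, 3]));   // count is 16
```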

I structure my Nodejs apps perhaps differently than you’ve seen on other blog posts.  I’m not implying better, just different.

I prefer to write my ES6 Nodejs code like I would any object-oriented app: small classes with discrete functionality. In architecture speak: SOLID, DRY, etc.

I also structure my ES6 so that it can be tested.  Sometimes that requires a little rethinking and possibly some refactoring, but it’s worth it.
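As a concrete sketch of that rethinking (the Mailer and transport names are hypothetical, not part of this project): instead of letting a class create its own dependencies, pass them in so a test can substitute a stub.

```javascript
'use strict';

// Hard to test: the dependency is created inside the class,
// so a test cannot easily replace it.
class MailerHardToTest {
    send(to) {
        console.log(`mail sent to ${to}`);
    }
}

// Testable: the dependency is supplied from outside, so a test
// can pass a stub transport that records calls.
class Mailer {

    constructor (transport) {
        this.transport = transport;
    }

    send(to) {
        this.transport.deliver(to);
        return `mail sent to ${to}`;
    }
}

module.exports = Mailer;
```

A test can now hand Mailer a fake transport that records calls; for the simplest cases you don’t even need Sinon.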

Hello World

It would be madness to not write the ubiquitous “Hello World” app for my Nodejs demo, so here we go.

When this app is executed, index.js is the entry point; it creates an instance of HelloWorld and invokes the run method.

Notice that I’m passing the command line arguments into the constructor. I do this to make testing the HelloWorld class much easier than if I didn’t.

index.js

'use strict'

const HelloWorld = require('./app/helloworld');

let c = new HelloWorld(process.argv.slice(2));
c.run();


HelloWorld is simple.  If no command line args are passed, the run method will log the greeting.  If args are passed, they will be concatenated and then logged.

helloWorld.js

'use strict'

const Logger = require('./logger');

class HelloWorld {

    constructor (commandLineArgs) {
        this.commandLineArgs = commandLineArgs;
        this.greeting = 'Hello World';
        this.logger = new Logger();
    }

    run() {
        if (this.commandLineArgs && this.commandLineArgs.length) {
            this.logger.log(this.commandLineArgs.join(' '));
        } else {
            this.logger.log(this.greeting);
        }
    }
}

module.exports = HelloWorld;


Logger outputs messages to the console. I always create a logger for my Nodejs apps so other classes don’t call console.log directly.  I like the object-oriented approach because it keeps my code clean and familiar. This is a very simple Logger class; enhance it as required for your apps.

logger.js

'use strict'

class Logger {

    log(message) {
        console.log(message);
    }
}

module.exports = Logger;

Unit Testing Setup

For my Nodejs projects I use the following testing tools:

  • Mocha – unit test framework
  • Chai – BDD / TDD assertion library
  • Istanbul – code coverage tool
  • Sinon – standalone test spies, stubs, and mocks framework
  • Sinon Chai – extends Chai with assertions for Sinon.

You can use Mocha by itself, or Mocha with Istanbul to get coverage.  I like the features of Chai, but at the end of the day, it’s personal preference for testing style.  “Actually testing is critical, test style is not.”

I install the test tools locally in my Nodejs projects rather than globally so that I can have multiple versions of a tool if required. Local installs make the command line longer, but that’s not an issue since the command will be in package.json or in a gulp task; bottom line, you don’t have to type it.

Local install example:  npm install mocha --save-dev

Understanding Mocha Setup and Startup

Node and npm commands are executed from your root folder.

When Mocha is invoked, by default it will look in the /test folder for the mocha.opts file, which is the Mocha configuration file. 


mocha.opts

The first line tells Mocha which folder to look in for the tests; if not supplied, Mocha uses the /test folder.  I’ve chosen the /app folder because I like to have my unit tests in the same folder as the JavaScript being tested.

The second line loads the common.js file.

The third line tells Mocha to look not only in the app folder but also all sub-folders.

Finally, the fourth line tells Mocha to stop processing when a test fails.

Note:  When running your full test suite, or when running on a CI server, the bail option is probably not appropriate.

app
--require ./test/common.js
--recursive
--bail


common.js

This setup is optional, but its value is that I don’t have to repeat these require statements for Chai and Sinon in every test js file.

'use strict';

global.chai = require('chai');
global.chai.should();

global.expect = global.chai.expect;
global.sinon = require('sinon');

global.sinonChai = require('sinon-chai');
global.chai.use(global.sinonChai);


package.json scripts section

The scripts section of the package.json file makes it easy to run commands, especially commands with long text.

To run a command from the terminal or command line type:  npm run {script command name}

For example,  npm run example or npm run tdd:mac

The example and example2 commands run the app without and with command line arguments, respectively.

The test command runs all the unit tests and produces a code coverage report.

The tdd:mac command runs Mocha and all your tests, then begins watching the target folder for changes.  When a file changes, it reruns the tests automatically.

Note:  mocha -w does not work on Windows, hence the command name tdd:mac.  Bugs have been logged.  For now, if you’re on Windows, I recommend writing a gulp task that watches the folder and then runs Mocha without the -w option.  Optionally, if you’re a WebStorm user, you can set this up in the WebStorm IDE.

My typical workflow on my Mac is to open Atom or WebStorm, view my spec file and the code being tested in split view, then run npm run tdd:mac in a terminal window, and I’m good to go.  I get instant feedback from my Mocha test runner as I write tests or code.

  "scripts": {
    "example": "node index.js",
    "example2": "node index.js Hey world!",
    "test": "./node_modules/.bin/istanbul cover ./node_modules/mocha/bin/_mocha",
    "tdd:mac": "./node_modules/.bin/mocha -w"
  },

logger.spec.js

This unit test verifies that the Logger class will invoke console.log and pass the correct message to it.

When your unit tests actually write to the console, that text is mixed into the Mocha report output.  To limit the noise, I’ve created the below TestString that blends in nicely with the Mocha report.

The variable ‘sut’ stands for ‘system under test.’  I use ‘sut’ to make it easy for the next person reading my tests to quickly see which object is being tested. Consistent code is much easier to read and maintain.

The Sinon library makes it easy to test class dependencies by spying on, stubbing, or mocking the class or its methods.  The reason I don’t use a stub or mock here for console.log is that it would block the Mocha report from being displayed.  The spy was a good fit, and the TestString gave me the output I wanted.

'use strict'

const Logger = require('./logger');
const TestString = '    ✓';  // nice hack to keep the mocha report clean. LOL.

describe('Logger', () => {
    it('should log a message to the console', () => {
        let sut = new Logger();
        let spy = sinon.spy(console, 'log');

        sut.log(TestString);

        expect(spy).to.have.been.calledOnce;
        expect(spy).to.have.been.calledWithMatch(TestString);

        spy.restore();
    });
});
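To make the spy-versus-stub distinction concrete, here is a minimal hand-rolled illustration in plain Node (no Sinon) of why a spy lets the real console.log keep printing while a stub swallows the output:

```javascript
'use strict';

// Hand-rolled spy: records the call, then calls through to the
// original, so output (like the Mocha report) still appears.
function spyOnLog() {
    const original = console.log;
    const calls = [];
    console.log = (...args) => {
        calls.push(args);
        original(...args);       // call through: output is still printed
    };
    return { calls, restore: () => { console.log = original; } };
}

// Hand-rolled stub: replaces the method entirely, so nothing
// reaches the console while the stub is active.
function stubLog() {
    const original = console.log;
    const calls = [];
    console.log = (...args) => { calls.push(args); };  // swallowed
    return { calls, restore: () => { console.log = original; } };
}

const spy = spyOnLog();
console.log('visible');          // recorded AND printed
spy.restore();

const stub = stubLog();
console.log('silenced');         // recorded but NOT printed
stub.restore();
```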


helloWorld.spec.js

To limit bugs and typos, I use constants for my expected results and method arguments.

In this simple app, the Logger is exposed as a property on HelloWorld, making it accessible for stubbing at test time.  In a larger app, the Logger would be an injected dependency.  Injected dependencies are a no-brainer to stub and mock.

'use strict'

const HelloWorld = require('./helloWorld');
const Logger = require('./logger');
const DefaultGreeting = 'Hello World';
const Arg1 = 'Hello';
const Arg2 = 'there!';

describe('HelloWorld', () => {

    describe('Constructor', () => {

        it('should be created with three properties: commandLineArgs, greeting, and logger', () => {
            let sut = new HelloWorld();
            expect(sut).to.have.property('commandLineArgs');
            expect(sut).to.have.property('greeting');
            expect(sut).to.have.property('logger');
        });

        it('should have default greeting', () => {
            let sut = new HelloWorld();
            expect(sut.greeting).to.equal(DefaultGreeting);
        });

        it('should have command line args set when supplied', () => {
            let sut = new HelloWorld([Arg1, Arg2]);
            expect(sut.commandLineArgs).to.have.lengthOf(2);
            expect(sut.commandLineArgs[0]).to.equal(Arg1);
            expect(sut.commandLineArgs[1]).to.equal(Arg2);
        });
    });

    describe('Run', () => {
        it('should log command line args when supplied', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld([Arg1, Arg2]);
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledOnce;
            expect(logger.log).to.have.been.calledWith(`${Arg1} ${Arg2}`);

            stub.restore();
        });

        it('should log default greeting when no command line args are passed', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld();
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledOnce;
            expect(logger.log).to.have.been.calledWith(DefaultGreeting);

            stub.restore();
        });
    });

});
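As a sketch of the injected-dependency variant mentioned above (a hypothetical HelloWorldDI, not the class in the repo), the logger would arrive through the constructor so a test never has to overwrite a property:

```javascript
'use strict';

// Hypothetical injected-dependency variant of HelloWorld: the logger
// is supplied through the constructor instead of created inside it.
class HelloWorldDI {

    constructor (commandLineArgs, logger) {
        this.commandLineArgs = commandLineArgs;
        this.greeting = 'Hello World';
        this.logger = logger;
    }

    run() {
        if (this.commandLineArgs && this.commandLineArgs.length) {
            this.logger.log(this.commandLineArgs.join(' '));
        } else {
            this.logger.log(this.greeting);
        }
    }
}

module.exports = HelloWorldDI;
```

A test can now pass a recording fake directly, e.g. new HelloWorldDI(['Hi'], fakeLogger), with no stubbing of a real Logger required.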


Test Results

Executing npm test or npm run test produces the following output.

The describe and it blocks are nicely nested in the Mocha report that precedes the Istanbul coverage summary.

The first item in the Logger group is a check mark; that’s the little hack I mentioned above in the logger.spec.js test.

[Screenshot: npm test output showing the nested Mocha report and the Istanbul coverage summary]

Atom Snippets

Atom editor snippets rock.  The very best snippet documentation I’ve read is here; read it and you’ll be a happy camper.

These snippets assist my coding of classes and unit tests.

'.source.js':
  'Fat Arrow':
    'prefix': 'fat'
    'body': '() => {}'
  'describe unit test':
    'prefix': 'dsc'
    'body': """
        describe('$1', () => {
            $2
        });

    """
  'it unit test':
    'prefix': 'itt'
    'body': """
        it('should $1', () => {
            $2
        });

    """
  'Class with Constructor':
    'prefix': 'cctor'
    'body': """
        'use strict'

        class $1 {

            constructor () {
                $2
            }
        }

        module.exports = $1;
    """
  'Method Maker':
    'prefix': 'mm'
    'body': """
        $1($2) {
            $3
        }

    """

Download

https://github.com/Oceanware/nodejs-es6-mocha

Close

I hope this information helps you in setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul.

Just a grain of sand on the world's beaches.