Open Source – cave lupum

Introduction

I’ve been actively working with open-source JavaScript packages for about 18 months. Developers who are very generous with their time have built tools and frameworks that have enriched the lives of developers all over the world. I too have contributed tools and believe in this beautiful ecosystem.

A few months ago I started to look under the hood of my SPA and Node.js applications and found code and practices that caught my attention. I found packages that other packages depend on that have very few lines of code, packages with dependencies that are out of date, and dependencies with warnings such as, “this package version is subject to a denial of service attack.”

Upon further reflection, I became very concerned about the damage a bad actor could inflict on trusting developers who download packages with a dependency that has been replaced by evil code. My system and the software I write could be compromised. Now imagine ticking-time-bomb code replicated across Docker containers and placed on servers. The damage could be immeasurable.

cave lupum – Beware the wolf disguised as a lamb.

Publicly articulating details of the many attack scenarios I’ve thought of would be irresponsible. Instead, it’s time to start the conversation around the problem that our international community is currently faced with and how we can protect our precious open-source.

Again, this blog post is about getting the conversation started.

Over the last few weeks, I’ve met with high-profile MVPs and a few corporate executives who share the quality and security concerns I’m raising in this blog post.

For the purpose of this blog post, “packages” refers to open-source JavaScript packages that are added to Node.js or JavaScript web applications using a package manager.

I’ll have a section down below for compiled downloads such as NuGet, Visual Studio Gallery, and the Visual Studio Marketplace.

Proposal Goals

  • Not add any burdens to the open-source developer
  • Provide package consumers a measured level of confidence in the package and its dependencies
  • Raise the quality of packages by having them evaluated
  • Have repositories provide evaluation services and reporting for their packages

Proposal

Package evaluation is performed in the cloud. An MVP friend also suggested a command-line tool that could be used locally.

Package evaluation should be opt-in: developers who want their packages evaluated submit them for evaluation. An opt-in approach would not add any burden to developers who do not want to participate, while at the same time driving up quality for the packages that are evaluated and giving those developers additional credibility for their efforts.

Consumers of packages could choose to use evaluated packages or continue to use non-evaluated packages at their own risk.

Evaluation and Download

Where packages are evaluated (centralized vs. non-centralized) is a topic that will generate much discussion and debate.

Where evaluated packages are downloaded from (centralized vs. non-centralized) is another topic that will generate much discussion and debate.

Evaluation Metrics

A standard set of metrics is applied to JavaScript packages, yielding a consistent evaluation report, enabling consumers to easily compare packages.

Below is a short “starter list” of metrics. Additional metrics should include warnings such as those npm emits when packages are installed.

Most evaluation metrics are yes or no; some are numeric, and others are a simple list. When a package is evaluated, all of its dependencies are also evaluated, and a package’s evaluation can only be as good as its weakest dependency. A hypothetical example of such a report follows the metrics list below.

  • Package signed
  • Included software license
  • Number of dependencies
  • Number of dependencies with fewer than ten lines of JavaScript
  • Package is out of date
  • Package has warnings
  • Has out-of-date dependencies
  • Has dependencies with warnings
  • Has unit tests
  • Has 100% unit test coverage
  • All tests pass
  • Makes network calls
  • Writes to file system
  • Threat assessment
  • Package capabilities (what APIs are being used)
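
To make the idea of a consistent evaluation report concrete, here is a purely hypothetical sketch of what a report could look like, expressed as a JavaScript object. The field names are illustrative only; they are not part of any existing service or schema.

// Hypothetical evaluation report shape - illustrative only, not an existing schema.
const exampleReport = {
    package: 'example-package',
    version: '1.2.3',
    signed: true,
    license: 'MIT',
    dependencyCount: 4,
    tinyDependencies: 1,            // dependencies with fewer than ten lines of JavaScript
    outOfDate: false,
    hasWarnings: false,
    outOfDateDependencies: 2,
    dependenciesWithWarnings: 0,
    hasUnitTests: true,
    testCoverage: 100,              // percent
    allTestsPass: true,
    makesNetworkCalls: false,
    writesToFileSystem: true,
    threatAssessment: 'low',
    capabilities: ['fs', 'path'],   // APIs the package uses
    weakestDependencyRating: 'fair'
};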

NuGet, Visual Studio Gallery, Visual Studio Marketplace

NuGet, Visual Studio Gallery, and Visual Studio Marketplace serve compiled code which is evaluated differently than JavaScript. Microsoft will need to determine the best way to evaluate and report on these packages.

Funding

This proposal affects developers and infrastructures from all over the world.

As a software engineer, I know that while there will be challenges, the problems identified in this proposal are solvable.

Getting big corporations and governments to proactively and cooperatively take on and complete a task because it’s the right thing to do is a challenge, but it is one that must be initiated.

Waiting until there is a problem and then trying to stem the tide and roll back the damage is a poor strategy. Benjamin Franklin said, “an ounce of prevention is worth a pound of cure,” and he was correct.

I honestly do not believe getting funding for a project of this scope will be a problem.

Next Steps

Big players need to meet and solve this problem.

Developers, start the conversation in your sphere of influence and contact big players and let them know your concerns.  Request that they take proactive action now.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

 

XAML Designer Brush Editor Fixed

Introduction

Since the release of Visual Studio 2015 Update 3, the Visual Studio XAML Designer pop-up brush editor quit working in the Properties window when properties were sorted by name, or when properties of type Brush were not in the Brush category. This issue affected both Visual Studio and Blend for Visual Studio 2015.

Fix

Microsoft posted the fix in this cumulative servicing update:

Microsoft Visual Studio 2015 Update 3 (KB3165756)  

https://msdn.microsoft.com/en-us/library/mt752379.aspx

I have installed the update and tested various controls and use cases and can confirm the fix corrected the problem.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

 

Component Generator for AngularJS, Angular 2, Aurelia

Introduction

I believe developers should own their code generation story. The value in owning your code generation is that when platforms change, APIs change, or language grammar is enhanced, you can easily refactor your templates and not miss a beat. I also believe that owning your code generation story is a forcing function for thinking through how your application works, how it is wired up, and how to unit test it.

This tool provides templates that you must edit before generating any code.

What?  No ready-made templates?  Karl, are you nuts?  Why?

Let’s think about code generation templates for a minute. Templates are used to create language-specific output. Developers are using many flavors of JavaScript today: ES5, ES6, ES6 with ES7 experimental features, TypeScript, CoffeeScript, etc. Stylesheet files can be written using Less, Sass, SCSS, or CSS.

What language should I use to write the templates?  I use ES6, but not everyone does.

Let’s think about how Angular apps are structured and wired up.  Check ten blog posts, and you’ll read about ten valid ways to structure and wire up an Angular SPA app.

Small apps are wired up differently than medium or large apps.  Some put small modules in a single folder, whereas a medium-sized module may be within a single parent folder, with each component in a separate folder.

Developers doing unit testing will structure their component wire-up differently to better support testing scenarios without having to load Angular for a component or controller unit test. Not loading Angular for each unit test significantly speeds up your unit tests.

Based on the above language, structure, and wire-up options, shipping default templates would provide zero value.

Your Small Part

You’ll edit the empty template files for the component-based SPA frameworks you author applications for, such as AngularJS, Angular 2, or Aurelia. Having you edit your templates is the best way for this tool to support many different JavaScript flavors, AngularJS coding styles, and component wire-up strategies. Additionally, you’ll be the proud owner of your code generation story.

This tool uses Underscore.js templates that are easy to author, usually requiring a minute or two. However, if your scenario requires it, you can swap out the template engine.

More Than a Tool

My first version of this tool was written in less than two hours as a single ES6 file and worked great. It didn’t have any tests but worked perfectly for my one scenario: Angular 1.5.9 components and ES6.

Then my mentoring and scope creep genes kicked into high gear, and I spent a few days and several versions to produce this tool.

I didn’t want to miss an opportunity to share a tool that I think will benefit many developers across languages, SPA frameworks, and scenarios. But to accomplish this goal, the tool would need 100% test coverage and would have to work with any JavaScript flavor and any component-based SPA framework.

I hope that others can learn from the architecture, code, and unit tests presented here. I welcome both positive and negative feedback to improve the tool and code.

This tool was also a forcing function for me to dive deep into authoring testable Node.js tools using ES6. It took me a little time, learning the testing tools, but when I refactored the code, having 100% test coverage paid off in spades.

Background

During my career as a Software Engineer and Architect, I’ve always written tools to increase my productivity and accuracy. Tools like XAML Power Toys or this tool came about because I found myself repeating the same task over and over again, and I knew the computer is capable of making me infinitely more productive.

Two weeks ago I started writing an AngularJS 1.5.9 component based application.  AngularJS components are similar in concept to Angular 2 and Aurelia components although the syntax and implementation are a little different.

After creating my second component, I stopped to write this tool. I’m not going to perform mundane, repetitive tasks like creating folders, components, controllers, templates, tests, and CSS files; this is a perfect assignment for a tool.

In addition to creating folders and files, the tool leverages user-editable templates for generating code that matches your requirements and coding style. The ability to generate boilerplate code yields a significant increase in productivity.

What You’ll Learn

  • Features of the tool
  • Tool Architecture
  • Requirements
  • Installation
  • Local development installation
  • Local production installation
  • Command line usage
  • Editing Templates

Videos

  • Getting Started
  • Modifying the tool
  • Unit testing the tool

Features

  • Node.js command line utility, written using ES6 and TDD methodology
  • Cross-platform Windows, Mac, and Linux (thanks to Node.js)
  • User editable templates drive the generated code
  • Create components for any JavaScript framework like AngularJS, Angular 2, and Aurelia
  • Separating templates by framework enables supporting multiple frameworks
  • Component output folder is optionally created based on a command line option
  • Component controller file is optionally generated based on a command line option
  • Component unit test file is optionally generated based on a command line option
  • Component CSS file is optionally generated based on a command line option
  • You can modify anything about this tool, make it fit like a glove for your workflow

Tool Architecture

This object-oriented application tool was written using ES6. I have separated the functionality of the application into small, single-responsibility classes. Besides being good application design, this makes understanding and unit testing much easier. When I think of this tool, a .NET Console application immediately comes to mind.

I wrote guard clauses for every constructor and method. I always add guard clauses, even on private methods, irrespective of language, because I am a defensive software engineer. Writing guard clauses requires no extra effort given tools like ReSharper and the ubiquitous code snippet feature in most editors and IDEs. Guard clauses future-proof code in maintenance mode: another developer makes a false assumption while editing, and the next thing you know a customer is filing a bug.
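
As an illustration only (this is not code from the tool, and the class and parameter names are hypothetical), an ES6 guard clause looks like this:

'use strict';

// Hypothetical class used only to illustrate constructor and method guard clauses.
class ReportWriter {

    constructor(logger) {
        if (!logger) {
            // Fail fast instead of letting a bad argument surface later as a confusing bug.
            throw new Error('logger parameter is required.');
        }
        this.logger = logger;
    }

    write(message) {
        if (!message) {
            throw new Error('message parameter is required.');
        }
        this.logger.log(message);
    }
}

module.exports = ReportWriter;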

In some of the classes, I have refactored small sections of code into a method. Doing this makes the code easier to read and comprehend, and it simplifies unit testing.

Classes

  • ApplicationError – derives from Error; an instance of this class is thrown when a user error caused by improper command line usage occurs. It permits the top-level try/catch block in the ComponentCreator to perform custom handling of the error (see the sketch after this list).
  • CommandLineArgsUtility – provides functions related to command line arguments.
  • ComponentCreator – performs validation and writes the templates.
  • IoUtility – wrapper around Node.js fs.
  • Logger – wrapper around the console.
  • TemplateItem – wrapper around a supported framework; exposes all of its data as immutable.
  • Templates – stores collection of TemplateItems, provides immutable data about code generation templates.
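
To illustrate the ApplicationError idea from the list above, here is a minimal sketch; the actual class in the repository may differ:

'use strict';

// Minimal sketch of an error type for user errors such as improper command line usage.
class ApplicationError extends Error {
    constructor(message) {
        super(message);
        this.name = 'ApplicationError';
    }
}

module.exports = ApplicationError;

A top-level try/catch can then check error instanceof ApplicationError and print friendly usage help instead of a stack trace.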

The entry point for the tool is index.js.  This module instantiates a TemplateItem for each supported framework, creates all of the required dependencies for the ComponentCreator and injects them, then invokes the create method on the ComponentCreator instance.

The reason for creating and injecting all external dependencies in the root of the application is to enable unit testing. Dependencies injected using constructor injection can easily be stubbed or mocked.

I have successfully used this type of architecture for writing other complicated Node.js applications.
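
To make the composition-root idea concrete, here is a hedged sketch of what such an index.js could look like. The require paths, the ComponentCreator constructor signature, and the TemplateItem arguments are assumptions for illustration and may not match the repository exactly.

'use strict';

// Sketch of a composition root: build every dependency here and inject it,
// so the classes themselves stay easy to stub and mock in unit tests.
// Paths, signatures, and arguments below are illustrative assumptions.
const Logger = require('./app/logger');
const IoUtility = require('./app/ioUtility');
const CommandLineArgsUtility = require('./app/commandLineArgsUtility');
const TemplateItem = require('./app/templateItem');
const Templates = require('./app/templates');
const ComponentCreator = require('./app/componentCreator');

const templates = new Templates([
    new TemplateItem('ng', './templates/ng'),
    new TemplateItem('ng2', './templates/ng2'),
    new TemplateItem('aurelia', './templates/aurelia')
]);

const creator = new ComponentCreator(
    new CommandLineArgsUtility(),
    new IoUtility(),
    new Logger(),
    templates
);

creator.create();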

Dependencies

  • Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.
  • Chai – is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Istanbul – full featured code coverage tool for Javascript
  • Mocha – is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Sinon.js – Standalone test spies, stubs, and mocks for JavaScript.
    No dependencies and works with any unit testing framework.
  • Sinon-Chai – provides a set of custom assertions for using the Sinon.JS spy, stub, and mocking framework with the Chai assertion library. You get all the benefits of Chai with all the powerful tools of Sinon.JS.

Requirements

  • Node.js (install either the LTS or Current version; personally, I use the Current version)

On Macs, I don’t recommend installing Node.js from the Node.js website. If you do, upgrading is a real pain.

For Mac users, use Homebrew (brew). After installing Homebrew, run this command from a terminal:

brew install node

If you use brew, future upgrading or uninstalling Node.js on your Mac is a breeze.

Recommendations

To help understand how this application is set up and how unit testing is set up and invoked, please read my blog post: https://oceanware.wordpress.com/2016/08/10/easy-tdd-setup-for-nodejs-es6-mocha-chai-istanbul/

Installation

Ensure you have installed the above requirements.

Download or clone the Component Generator repository here.

Open a terminal or command window, and then navigate to the root folder of the tool and run this command.

npm install

Next, run the unit tests and see the coverage report by executing this command:

npm test

Running the unit tests also runs the Istanbul code coverage tool, which outputs a detailed coverage report to the /coverage/lcov-report/index.html file.

Local Development Installation

During development, testing, and editing of your Node.js command-line tools, you’ll need to set up a symbolic link so you can execute your tool from any folder. Setting up a symbolic link instead of installing your Node.js package globally allows you to continue editing the tool’s code while still being able to execute the tool from any folder on your computer.

Mac and Linux

From the root folder of the tool, open a terminal or command window and run this command:

npm link

To test your development installation, run this command:

gencomponent

The tool will display the command line usage.

Navigate to another folder and rerun the gencomponent command.  You should get the same output.

Windows

You MUST give yourself Full Permissions on the /node_modules folder before proceeding.

Follow the above steps for Mac and Linux.

Local Production Installation

This step is optional. If you like the convenience of being able to edit your templates or change the tool’s code while still being able to invoke the tool from any folder, then, by all means, skip this step.

If you want to install the tool globally, or want to install the tool on other machines, then follow these steps.

Before proceeding, you need to remove the symbolic link you created in the above steps.

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm unlink

Windows, Mac, and Linux

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm install -g

You can now invoke the tool from anywhere on your machine.

After installation, if you need to modify the tool or a template, uninstall the tool globally, make the changes, and reinstall it globally.

Command Line Usage

Before proceeding, ensure you have created a symbolic link for the tool, or installed it globally.

gencomponent (component name) [-(framework)] [--ftsc]

The component name is required to generate a component.

The framework is optional and defaults to ‘ng’ if not supplied. The default framework can be changed by modifying the code. The valid options are:

  • -ng
  • -ng2
  • -aurelia

Code generation options are prefaced with a double dash (--), are optional, and can be in any order. The valid options are:

  • f = create a component folder
  • t = create a component unit test file and controller unit test file for the optional controller
  • s = create a component CSS file
  • c  = create a component controller file

If invalid arguments are provided or you attempt to create a component that already exists, an error message will be displayed at the command line.

Usage Examples

1. Show the command line usage message.

gencomponent

gencomponent ?

2. Create the Sales component in the current folder along with the sales.component.js and sales.template.html files. Templates are selected from the default /templates/ng folder.

gencomponent Sales

3. Create the Sales component in a new component folder named Sales along with the sales.component.js, sales.controller.js, sales.template.html, sales.component.spec.js, sales.controller.spec.js, and sales.template.css files. Templates are selected from the default /templates/ng folder.

gencomponent Sales --ftcs

4. Create the Sales component in the current folder along with the sales.component.js, sales.component.spec.js, and sales.template.html files. The templates are selected from the /templates/aurelia folder.

gencomponent Sales -aurelia --t

5. Create the SalesDetail component in a new component folder named SalesDetail along with the salesDetail.component.js, salesDetail.component.spec.js, salesDetail.controller.js, salesDetail.controller.spec.js, and salesDetail.template.html files. The templates are selected from the /templates/ng2 folder.

gencomponent SalesDetail -ng2 --ftc

You can easily modify this tool to handle more or fewer supported frameworks and to change the file naming conventions.

Editing Templates

Please read the Underscore.js template documentation; it’s very short.

Template engines allow the template author to pass a data object to methods that resolve the template and produce the generated code.
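
For example, here is a minimal sketch of that flow using Underscore.js directly; the tool wraps this behind its own classes, so treat the names below as illustrative:

'use strict';

// Compile a template string once, then resolve it with a data object.
const _ = require('underscore');

const source = 'class <%= componentName %>Controller { }';
const compiled = _.template(source);

console.log(compiled({ componentName: 'SalesDetail' }));
// -> class SalesDetailController { }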

This tool passes a rich data object that you can use inside your templates.

gencomponent SalesDetail

The below template data object was hydrated and passed to the template engine when the tool was invoked with SalesDetail as the desired component.

{ componentName: 'SalesDetail',
 componentSelector: 'sales-detail',
 componentImportName: 'salesDetail.component',
 controllerImportName: 'salesDetail.controller',
 componentImportNameEnding: '.component',
 controllerImportNameEnding: '.controller',
 templateFileNameEnding: '.component.html',
 componentFileNameEnding: '.component.js',
 controllerFileNameEnding: '.controller.js',
 componentSpecFileNameEnding: '.component.spec.js',
 templateCssFileNameEnding: '.component.css' }

This snippet from the below HTML file shows the syntax for injecting the componentName property value into the generated code.

<%= componentName %>

The below HTML can be found in the /templates/ng/componentTemplate.html file. It demonstrates consuming a data object property in a template.

<h2>Component Generator Data Object</h2>
<p>These are the data properties available in all template files.</p>
<p></p>
<p>componentName = <strong><%= componentName %></strong></p>
<p>componentSelector = <strong><%= componentSelector %></strong></p>
<p>componentImportName = <strong><%= componentImportName %></strong></p>
<p>controllerImportName = <strong><%= controllerImportName %></strong></p>
<p>componentImportNameEnding = <strong><%= componentImportNameEnding %></strong></p>
<p>controllerImportNameEnding = <strong><%= controllerImportNameEnding %></strong></p>
<p>templateFileNameEnding = <strong><%= templateFileNameEnding %></strong></p>
<p>componentFileNameEnding = <strong><%= componentFileNameEnding %></strong></p>
<p>controllerFileNameEnding = <strong><%= controllerFileNameEnding %></strong></p>
<p>componentSpecFileNameEnding = <strong><%= componentSpecFileNameEnding %></strong></p>
<p>templateCssFileNameEnding = <strong><%= templateCssFileNameEnding %></strong></p>

Template Folders

  • /ng – AngularJS
  • /ng2 – Angular 2
  • /aurelia – Aurelia

Available Templates

  • Component – always generated
  • Component Template – always generated
  • Controller – optionally generated
  • Component Unit Test – optionally generated
  • Controller Unit Test – automatically generated if the Controller and Component Unit Test is generated
  • Component Template Stylesheet – optionally generated

Workflow for Editing Templates

Before diving into template editing, you need to know exactly what the generated code needs to look like.

You’ll need to decide on your application structure and wire-up. Will you put your controllers inside the component file or keep them in separate files? Are you going to write unit tests?

I recommend writing several components, working out the details, and identifying repeated code such as imports or require statements and commonly used constructor-injected objects.

In the below example, I have an existing About controller that represents how I would like my controllers to be generated.

Copy the code to generate into the appropriate template file, and then replace the non-repeating code with resolved values from the template data object.

In the below example, I copied the About controller into the componentTemplate.controller.js file and then replaced the “About” name with the componentName data object property.

class AboutController {
   constructor() {
   }
}

export default AboutController;

The below template will generate the above code.

class <%= componentName %>Controller {
    constructor() {
    }
}

export default <%= componentName %>Controller;

Now repeat the above steps for each template and for each framework you’ll be performing code generation for.

Note that some templates will be empty; this is normal for .css and possibly .html files. But at least you didn’t have to waste precious time creating the file.

Videos

Getting Started

This 8-minute video explains how to get started with this tool.

Modifying the Tool

This 11-minute video explains how to modify:

  • templates
  • frameworks
  • file naming conventions
  • template engine

Unit Testing the Tool

This 23-minute video explains unit testing this tool.

Close

Writing your own cross-platform, command-line tools using Node.js is fun.

Having 100% test coverage is not easy and takes time. Just know that your customers and fellow developers will appreciate you putting the effort into a release with 100% test coverage.

Have a great day.

Just a grain of sand on the world’s beaches.

 

Xamarin Forms Bindable Picker v2

Introduction

I’ve updated the BindablePicker from a previous blog post, added new features, and created a GitHub repo for the code.

Xamarin Forms is a new and cool API for quickly building native apps for iOS, Android, and Windows UWP in C#.

The Xamarin Forms API comes with a primitive Picker control that lacks the typical bindable properties developers expect a Picker (similar in functionality to a desktop ComboBox) to have.

Xamarin Forms makes it very easy for developers to extend the API, write their own custom controls, or write custom renderers for controls.

This BindablePicker is the result of studying blog and forum posts and receiving feedback and bug reports on the original version.

API Comparison

Xamarin Forms Picker API

  • SelectedIndex (bindable)
  • Items (not bindable)

Bindable Picker API

  • ItemsSource (bindable)
  • SelectedItem (bindable)
  • SelectedValue (bindable)
  • DisplayMemberPath
  • SelectedValuePath

New Features Added

  • Support for collections that implement INotifyCollectionChanged, like the ObservableCollection

Bug Fixed

The original BindablePicker did not correctly set the SelectedItem after the ItemsSource was refreshed at runtime.

Bindable Picker Source

This repo contains a project that demonstrates scenarios for using this control, and it has the source for the BindablePicker.

https://github.com/Oceanware/XamarinFormsBindablePicker

Training Video – XAML Power Toys BindablePicker Scenarios

This short video explains three common use cases for the BindablePicker.

Close

Have a great day.

Just a grain of sand on the world’s beaches.

Easy TDD Setup for Nodejs ES6 Mocha Chai Istanbul

Introduction

I’m working on a command-line tool for AngularJS, Angular 2, and Aurelia that creates components from user templates. It creates the folder, component JS file, component template HTML file, optional component template CSS file, and the component spec JS file.

The tool generates the code using the Underscore.js template engine. It’s amazing how much code you’ll no longer have to type: boilerplate component wire-up and default unit tests for most components.

As I was writing the tool, I decided to break out the project setup into this small blog post to make the tool blog post simpler and focused. You can use this simple project as a starter or learning tool for your Nodejs ES6 projects.

I wrote this application and the command line tool using the Atom editor. I’ve included my Atom snippets down below; they give me a big productivity boost when writing unit tests.

This blog post is much more about setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul than how to use these tools. Please refer to the many outstanding blog posts, courses, and tutorials on these tools and ES6.

My Approach To Nodejs ES6

It’s amazing what you can write using Node.js. I’ve written complex, multi-process apps that connect IoT devices over MQTT and provide real-time communication to web clients. I’ve also written simple apps like the above command-line tool. Node.js is wonderful and is what enables Electron to be the prodigious cross-platform desktop application tool that it is.

ES6 is a clean, modern language; it is simple, familiar looking, and fun. I’ve used ES5 and TypeScript for many projects but settled on ES6. I blogged about my decision here.

Using ES6 with Node.js does not require Babel for your code or unit tests. I’m not using ES7 features such as class properties or decorators, but I can live with that for now.

I structure my Node.js apps perhaps differently than you’ve seen on other blog posts. Not implying better, just different.

I prefer to write my ES6 Node.js code like I would any object-oriented app: small classes with discrete functionality. In architecture speak: SOLID, DRY, etc.

I also structure my ES6 so that it can be tested.  Sometimes that requires a little rethinking and possibly some refactoring, but it’s worth it.

Hello World

It would be madness not to write the ubiquitous “Hello World” app for my Node.js demo, so here we go.

When this app is executed, index.js is the entry point; it creates an instance of HelloWorld and invokes the run method.

Notice that I’m passing the command line arguments into the constructor. I do this to make testing the HelloWorld class much easier than if I didn’t.

index.js

'use strict'

const HelloWorld = require('./app/helloworld');

let c = new HelloWorld(process.argv.slice(2));
c.run();

 

HelloWorld is simple.  If no command line args are passed, the run method will log the greeting.  If args are passed, they will be concatenated and then logged.

helloWorld.js

'use strict'

const Logger = require('./logger');

class HelloWorld {

    constructor (commandLineArgs) {
        this.commandLineArgs = commandLineArgs;
        this.greeting = 'Hello World';
        this.logger = new Logger();
    }

    run() {
        if (this.commandLineArgs && this.commandLineArgs.length) {
            this.logger.log(this.commandLineArgs.join(' '));
        } else {
            this.logger.log(this.greeting);
        }
    }
}

module.exports = HelloWorld;

 

Logger outputs messages to the console. I always create a logger for my Node.js apps so other classes don’t need console.log, etc. I like the object-oriented approach to keep my code clean and familiar. This is a very simple Logger class; enhance it as required for your apps.

logger.js

'use strict'

class Logger {

    log(message) {
        console.log(message);
    }
}

module.exports = Logger;

Unit Testing Setup

For my Node.js projects, I use the following testing tools:

  • Mocha – unit test framework
  • Chai – BDD / TDD assertion library
  • Istanbul – code coverage tool
  • Sinon – standalone test spies, stubs, and mocks framework
  • Sinon Chai – extends Chai with assertions for Sinon.

You can use Mocha by itself, or Mocha and Istanbul to get coverage. I like the features of Chai, but at the end of the day, it’s personal preference for testing style. “Actually testing is critical; test style is not.”

I install the test tools locally in my Node.js projects rather than globally so that I can have multiple versions of a tool if required. Local installs make the command line longer, but that’s not an issue since the command will be in package.json or in a gulp task; bottom line, you don’t have to type it.

Local install example: npm install mocha --save-dev

Understanding Mocha Setup and Startup

Node and npm commands are executed from your root folder.

When Mocha is invoked, by default it will look in the /test folder for the mocha.opts file, which is the Mocha configuration file. 

[Screenshot: the /test folder containing the mocha.opts file]

mocha.opts

The first line tells Mocha which folder to look in for the tests; if not supplied, it will use the /test folder. I’ve chosen the /app folder because I like to have my unit tests in the same folder as the JavaScript being tested.

The second line loads up the common.js file.

The third line tells Mocha to look not only in the app folder but also in all sub-folders.

Finally, the fourth line tells Mocha to quit processing when a test fails.

Note:  When running your full test suite, or when running on a CI server, the bail option is probably not appropriate.

app
--require ./test/common.js
--recursive
--bail

 

common.js

This setup is optional, but its value is that I don’t have to repeat these require statements for Chai and Sinon in every test js file.

'use strict';

global.chai = require('chai');
global.chai.should();

global.expect = global.chai.expect;
global.sinon = require('sinon');

global.sinonChai = require('sinon-chai');
global.chai.use(global.sinonChai);

 

package.json scripts section

The scripts section of the package.json file makes it easy to run commands, especially commands with long text.

To run a command from the terminal or command line, type: npm run {script command name}

For example,  npm run example or npm run tdd:mac

The example and example2 commands run the app with and without command line arguments.

The test command runs all the unit tests and produces the code coverage report.

The tdd:mac command runs Mocha and all your tests. Then it begins to watch the target folder for any changes. When a file changes, it reruns the tests automatically.

Note: mocha -w does not work on Windows, hence the command tdd:mac. Bugs have been logged. For now, if you’re on Windows, I recommend writing a gulp task that watches the folder and then runs Mocha without the -w option. Alternatively, if you’re a WebStorm user, you can set this up in the WebStorm IDE.
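
Here is one possible shape for such a gulp task. It assumes gulp 3.x syntax and the local Mocha install described above, so adjust the task name, paths, and globs for your project:

'use strict';

// gulpfile.js - rerun the local Mocha binary whenever a file in /app changes.
const gulp = require('gulp');
const spawn = require('child_process').spawn;

gulp.task('mocha', (done) => {
    const mocha = spawn('node', ['./node_modules/mocha/bin/_mocha'], { stdio: 'inherit' });
    mocha.on('close', () => done());
});

gulp.task('tdd:win', ['mocha'], () => {
    gulp.watch('app/**/*.js', ['mocha']);
});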

My typical workflow on my Mac is to open Atom or WebStorm, view my spec file and the code being tested in split view, then run npm run tdd:mac in a terminal window, and I’m good to go. I get instant feedback from my Mocha test runner as I write tests or code.

  "scripts": {
    "example": "node index.js",
    "example2": "node index.js Hey world!",
    "test": "./node_modules/.bin/istanbul cover ./node_modules/mocha/bin/_mocha",
    "tdd:mac": "./node_modules/.bin/mocha -w"
  },

logger.spec.js

This unit test verifies that the Logger class will invoke console.log and pass the correct message to it.

When your unit tests actually write to the console, the text will be output into the Mocha report stream written to the console. To limit the noise, I’ve created the below TestString that blends in nicely with the Mocha report.

The variable ‘sut’ is an acronym for ‘system under test.’ I use ‘sut’ to make it easy for the next person reading my tests to quickly see what object is being tested. Consistent code is much easier to read and maintain.

The Sinon library makes it easy to test class dependencies by spying on, stubbing, or mocking the class or its methods. The reason I don’t use a stub or mock here for console.log is that it would block the Mocha report from being displayed. The spy was a good fit, and the TestString gave me the output I wanted.

'use strict'

const Logger = require('./logger');
const TestString = '    ✓';  // nice hack to keep the mocha report clean. LOL.

describe('Logger', () => {
    it('should log a message to the console', () => {
        let sut = new Logger();
        let spy = sinon.spy(console, 'log');

        sut.log(TestString);

        expect(spy.calledOnce);
        expect(spy.calledWithMatch(TestString));

        spy.restore();
    });
});

 

helloWorld.spec.js

To limit bugs and typos, I use constants for my expected results and method arguments.

In this simple app, the Logger is exposed as a property on HelloWorld, making it accessible for stubbing at test time. In a larger app, the Logger would be an injected dependency. Injected dependencies are a no-brainer to stub and mock.

'use strict'

const HelloWorld = require('./helloWorld');
const Logger = require('./logger');
const DefaultGreeting = 'Hello World';
const Arg1 = 'Hello';
const Arg2 = 'there!'

describe('HelloWorld', () => {

    describe('Constructor', () => {

        it('should be created with three properties: commandLineArgs, greeting, and logger', () => {
            let sut = new HelloWorld();
            expect(sut).to.have.property('commandLineArgs');
            expect(sut).to.have.property('greeting');
            expect(sut).to.have.property('logger');
        });

        it('should have default greeting', () => {
            let sut = new HelloWorld();
            expect(sut.greeting).to.equal(DefaultGreeting);
        });

        it('should have command line args set when supplied', () => {
            let sut = new HelloWorld([Arg1, Arg2]);
            expect(sut.commandLineArgs).to.have.lengthOf(2);
            expect(sut.commandLineArgs[0]).to.equal(Arg1);
            expect(sut.commandLineArgs[1]).to.equal(Arg2);
        });
    });

    describe('Run', () => {
        it('should log command line args when supplied', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld([Arg1, Arg2]);
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledOnce;
            expect(logger.log).to.have.been.calledWith(`${Arg1} ${Arg2}`);

            stub.restore();
        });

        it('should log default greeting when no command line args are passed', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld();
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledOnce;
            expect(logger.log).to.have.been.calledWith(DefaultGreeting);

            stub.restore();
        });
    });

});

 

Test Results

Executing npm test or npm run test produces the following output.

The describe and it blocks are nicely nested in this Istanbul coverage report.

The first item in the Logger group is a black check mark; this is the little hack I mentioned above in the logger.spec.js test.

[Screenshot: Mocha test report and Istanbul coverage summary produced by npm test]

Atom Snippets

Atom editor snippets rock. The very best snippet documentation I’ve read is here; read it and you’ll be a happy camper.

These snippets assist my coding of classes and unit tests.

'.source.js':
  'Fat Arrow':
    'prefix': 'fat'
    'body': '() => {}'
  'describe unit test':
    'prefix': 'dsc'
    'body': """
        describe('$1', () => {
            $2
        });

    """
  'it unit test':
    'prefix': 'itt'
    'body': """
        it('should $1', () => {
            $2
        });

    """
  'Class with Constructor':
    'prefix': 'cctor'
    'body': """
        'use strict'

        class $1 {

            constructor () {
                $2
            }
        }

        module.exports = $1;
    """
  'Method Maker':
    'prefix': 'mm'
    'body': """
        $1($2) {
            $3
        }

    """

Download

https://github.com/Oceanware/nodejs-es6-mocha

Close

I hope this information helps you in setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul.

Just a grain of sand on the world’s beaches.

Angular 1.5.7 Components ES6 and jspm

Purpose

The purpose of the blog post and accompanying simple example project is to show you how to:

  • Create an ES6 Angular 1.5.7 super simple web application with navigation
  • Use Angular 1.5.7 Components
  • Use Angular Component Router  (not the constantly changing Angular 2 Router)
  • Bootstrap an ES6 Angular 1.5.7 application
  • Set up ES6 Angular 1.5.7 modules
  • Configure the Component Router
  • Provide a root component that hosts the entire application; providing a placeholder for the Component Router to navigate components into
  • Demonstrate writing super clean ES6 code that is 98% devoid of the word Angular.
  • Provide two components that the app can navigate to.

This sounds like a lot, but it’s accomplished with only a few succinct ES6 files.

Background

I’m a total fan of Angular 1.x and now Angular 1.5.x after watching Scott Allen’s Pluralsight course on Building Components with Angular 1.5.

I’m a fanatic about authoring my JavaScript using ES2015 (ES6, Harmony) and using jspm as my package manager.  This combination of language and package management is so clean and simple.

Scott’s course uses ES5.  Probably a good decision as it keeps the concept count down for Angular 1.x developers who still use ES5.

I highly recommend you watch the course; in about 90 minutes you’ll be another convert to using Angular 1.5.x Components.

I have looked at both Aurelia and Angular 2. They are both still in beta and undergoing API and tooling changes. I’m very keen on Aurelia and am looking forward to adopting this product in the future. What I like most about Aurelia is that the team embraced convention over configuration, which dramatically reduces the boilerplate code for common scenarios. Maybe Angular 2 will one day refactor its API to do the same.

Introduction

Authoring Angular 1.x or 1.5.x apps using ES6 with jspm is simple, and the code is very clean. I have a project that demonstrates using Electron, Angular 1.x, ES6, and jspm. I will be creating a new project that uses Angular 1.5.7, Electron, ES6, and jspm very soon.

When using ES6 in today’s browsers or in Electron, the ES6 must be transpiled to ES5. jspm hides all that complexity and just does it for you.

Gulp also has a module called gulp_jspm with an option, “selfExecutingBundle”, that will essentially pre-compile, bundle, and minify all of your application’s ES6 to ES5. Heck, it even removes all traces of the ES6 libraries from the bundle.

Transpiling, bundling, and minification are part of “real world ES6 development.”  I just like that jspm makes this process simple and almost 100% transparent. 

Please note:  jspm is not the only game in town.  There are many other techniques, frameworks, build systems, etc., that accomplish the same task, producing the same end result.  When I did my study last year, I found that jspm worked best for me.  I recommend that you look at all the options and tools, read many blog posts on the subject just like I did. Then choose the one you understand and can be successful with.

Please note: This application does not take any dependencies on the volatile and changing Angular 2 beta. The Component Router used in this project is the original Angular 2 router, and it works great. I strongly recommend staying away from Angular 2 dependencies until the team has had time to ship RTM bits and ensure they have an approved, good story for Angular 1.5.x integration.

Additionally, I have yet to see a compelling reason to write production code in Angular 2. Like you, I have Angular 1.x projects in production that run every day and perform beautifully.

Application Startup

Before you can run off and write the next awesome app using Angular 1.5.7 components and ES6, we need to learn how the application starts up. As you’ll see, there are differences between the ES6 jspm code I’ll present and the current AngularJS 1.x ES5 apps you’re writing today.

index.html

  • Is loaded by the browser or Electron
  • Loads up system.js and config.js using script tags
  • The bootstrap.js module is imported.  The act of importing a module causes it to execute
  • Notice you don’t see any Angular framework markup as we will be manually bootstrapping Angular.

<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Angular 1.5.7 Components ES6 jspm</title>
</head>
<body>
    <script src="src/jspm_packages/system.js"></script>
    <script src="src/config.js"></script>
    <script>
        System.import('app/bootstrap').catch(console.error.bind(console));
    </script>

    <app-root></app-root>
</body>
</html>

bootstrap.js

  • framework dependencies are loaded
  • application ES6 modules are loaded
  • when the modules are all loaded and the document is ready, then bootstrap Angular
  • Notice how the app module is imported and given the name AppModule; I now have full access to my module and can access properties like name.

// load our framework modules
import angular from  'angular';
import 'ngcomponentrouter';

// load our application ES6 modules 
import AppModule from './app.module';
import './app-root.component';
import './About/app-about.component';
import './Home/app-home.component';

angular.element(document).ready(() => {
    // bootstrap angular now that all modules have been loaded
    angular.bootstrap(document, [AppModule.name], {strictDi: true});  
});

app.module.js

  • framework dependencies are imported so we can use them
  • An Angular module named “app” is created, and the Component Router is injected as a dependency
  • The Component Router root component is configured. Look back at index.html and you’ll see the app-root component in the markup
  • Export the Angular “app” module

import angular from  'angular';
import ngcomponentrouter from 'ngcomponentrouter';

let module = angular.module('app', [ngcomponentrouter]);

// must tell the Component Router which component to navigate components into
module.value('$routerRootComponent', 'appRoot');

export default module;

app-root.component.js

  • Import the above app.module default export, which is the angular.module(‘app’).  Consumers have clean code now.  Angular no longer appears in the code.
  • Register the ‘appRoot’ component with the AppModule and set its template.
  • Configure the root component router
  • The last line configures the default route

import AppModule from './app.module';

AppModule.component('appRoot', {
  templateUrl: '/src/app/app-root.component.html',
  $routeConfig: [
    { path: '/home', component: 'appHome', name: 'Home'},
    { path: '/about', component: 'appAbout', name: 'About'},
    { path: '/**', redirectTo: ['Home']}
  ]
});

app-root.component.html

  • This is my incredibly simple application root object.
  • It provides some navigation links for the Home and About components
  • The ng-outlet directive is where the Component Router will place components as they are navigated to.

<h1>Hello World</h1>

<p>
  <a href="#home">Home</a>
</p>
<p>
  <a href="#about">About</a>
</p>

<ng-outlet></ng-outlet>

app-home.component.js

  • Import the app.module
  • Register the ‘appHome’ component with the AppModule and set its template.
  • See how clean this code is?

import AppModule from '../app.module';

AppModule.component('appHome', {
  templateUrl: '/src/app/Home/app-home.component.html'
});
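
The About component follows the same pattern. Here is a sketch based on the file names imported in bootstrap.js; the code in the repository may differ slightly:

import AppModule from '../app.module';

AppModule.component('appAbout', {
  templateUrl: '/src/app/About/app-about.component.html'
});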

Download

Make sure you have Node.js and jspm installed globally.

You can download or clone the simple repo here: https://github.com/Oceanware/ng157es6jspm

After downloading or cloning, navigate to the folder and open a command prompt (terminal window for OS X or Linux) and execute:

npm install

npm start

Your browser will open and display the application.

Close

You can start to see the simplicity of Angular 1.5.7 and ES6: clean JavaScript files that make it very easy to understand the intent of the code. Fun programming!

Have fun and be productive with Angular 1.5.7 and ES6.

Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.

ES2015 (ES6) or TypeScript

Introduction

I get the question,  “Karl, why do you use ES2015 (ES6)?”

The answer I give depends on the context of the question; in other words, on the scenario we are asking about.

I will answer the question for each of these scenarios:

  • Authoring a JavaScript framework
  • Authoring a large line-of-business application with more than a few developers
  • Authoring a small application with one or a few developers

Authoring a JavaScript Framework

Without equivocation, I would use TypeScript for a JavaScript framework.

Why? Because I can transpile to ES2015 or ES5, so I can deliver my framework in TypeScript, ES2015, or ES5.

Several years from now, I’ll be able to transpile my framework to ES vNext (as long as TypeScript is still around and maintained properly), effectively future-proofing my code.

I don’t have the hassle of 3rd-party .d.ts files that are old or incomplete because my framework probably does not have many 3rd-party dependencies.

If my framework does have 3rd-party dependencies, I have the resources to create the required .d.ts files. I’ll pay this tax because the benefits outweigh the .d.ts file hassles.

Authoring a Large Line of Business Application with More Than a Few Developers

Without equivocation, I would use TypeScript for building a large line-of-business application with more than a few developers.

Why? Because I can leverage the compile-time checking, strong typing, and interfaces that TypeScript offers; additionally, I would use a linter with very strict rules.

I say this for several reasons. First, because in a large team project like this, you need to rein in some developers so that they don’t stray from the path of sensible and maintainable TypeScript (JavaScript). I care much more about creating a maintainable product than I do about someone’s feelings or creative coding desires. The very strict linter rules also help developers sharpen their JavaScript coding skills.

Second, because TypeScript does perform strong type checking at compile time.

Back all this up with unit and integration tests, and you have the basis for a very successful large line of business application.

Authoring a Small Application with One or a Few Developers

Here is where my answer to the original question changes from TypeScript to ES2015.

For all of my personal projects and blog post projects, I’ll use ES2015 (ES6).

For small team projects, I would still like to use ES2015.

Why?

  • Because I write simple ES2015 JavaScript that looks like C#
  • Because I write very clean ES2015 that is very easy to read
  • Because I use an ES6 linter with very strict rules; it helps keep my ES6 clean, and I’ve learned a lot from the linter rules I violated
  • Because I don’t want to pay the 15% tax for authoring TypeScript (adding the type definitions to the code, getting the .d.ts files downloaded, and importing them in the code; this 15% does not count missing or incorrect .d.ts files)
  • Because I don’t want to deal with 3rd party .d.ts files that are either out of date or missing – this can be a real bummer
  • Because I like the dynamic nature of JavaScript and leverage that capability on occasion
  • Because for a long time, basically a single developer was managing the DefinitelyTyped GitHub repo. I looked at it yesterday, and it seems to have gotten a facelift and many new developers helping out.
  • Because the tool Microsoft ships for creating .d.ts files does not render a .d.ts file that can be used; I always found myself having to add more code to get them to work.
  • Just because you’re using a framework that was authored in TypeScript, it does not mean you have to use TypeScript.

Obviously, these are my opinions, and I know that others can easily come back with solutions or comments, but after many projects using TypeScript, this is what I’ve decided to do.

I don’t want to give the impression that there is a huge gap between perfect .d.ts files and the few that I had trouble with. But those few I needed, well, I needed them, and it got old dealing with this problem. Remember, demo-ware does not have this problem. It’s when you’re developing real applications that need libraries for services and features, and those libraries have missing or outdated .d.ts files, that the bummer begins. I think if Microsoft delivered a tool that I could point at a JavaScript library and it would render a .d.ts file that could be used in the project, I might back off on this gripe. But I have tried to create the missing .d.ts files myself and spent precious time messing with this.

All developers need to evaluate languages, tools, frameworks, and 3rd party dependencies for all of their projects, and pick the ones that meet the needs of that specific project.

Select the best tool for the job: not because of what a framework was written in, or because it’s new and shiny, or because other developers use it, but because it is the best fit for the given requirements.

Close

So if you ask me whether I use TypeScript or ES2015, my first question will be: what is the scenario or use case? Then I can answer based on the above criteria.

Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.