Updated Xamarin Forms BindablePicker


On 9/30/2016 I updated the Xamarin Forms BindablePicker, correcting it to handle the view model clearing the Items ObservableCollection.

When the view model exposes an ObservableCollection as the BindablePicker's ItemsSource and the view model clears that collection, the BindablePicker now responds correctly and updates the UI as expected by:

  • Removing all items in its collection
  • Setting SelectedItem to null
  • Setting SelectedValue to null
  • Setting SelectedIndex to -1

I much appreciate the feedback and bug reports I've received; they have made this control much better.


Have a great day,

Just a grain of sand on the world’s beaches

AngularJS 1.5 Input Components


The promise of JavaScript component-based architecture frameworks such as AngularJS 1.5, Angular 2, or Aurelia is enabling developers to write less code, both HTML markup and JavaScript.  The ability to encapsulate HTML and JavaScript into reusable components should greatly simplify the applications that consume them.

Please review this post and code, and then start your own component library for your applications. There are enough components here to get you started.  I have chosen to make these components granular, each with a single responsibility, rather than make a single component with many properties that change its behavior. This approach provides clean and readable HTML markup that clearly describes each component's role and behavior.

This blog post and GitHub repo are written using ES6. I wrote why I use ES6 JavaScript here. I also leverage ES7 experimental class property features in my applications.

AngularJS 1.5 components are uncomplicated element directives. These components can encapsulate business logic and are testable.

If you have time, compare the amount of JavaScript code required by Angular 2 components to the AngularJS 1.5 components. You'll find that AngularJS 1.5 is simpler and currently requires considerably less framework code to register a component. You'll also see that I separate the component registration from the component; this removes Angular from the component and enables very fast unit tests.  See the Review Presentation repo example.
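For example, the registration can live in a module file while the controller classes stay framework-free; the sketch below is illustrative only (the bindings, file names, and template URLs are assumptions, not the exact repo layout):

// input-components.module.js – the only file that references Angular;
// the controller classes themselves (shown later in this post) never import it
import angular from 'angular';
import InputTextComponent from './input-text.component';
import InputEmailComponent from './input-email.component';

export default angular
  .module('inputComponents', [])
  .component('inputText', {
    bindings: { name: '@', label: '@', minimumLength: '@', maximumLength: '@' },
    require: { model: 'ngModel' },        // injected as this.model (see below)
    controller: InputTextComponent,
    controllerAs: 'vm',
    templateUrl: 'input-text.component.html'
  })
  .component('inputEmail', {
    bindings: { name: '@', label: '@', maximumLength: '@' },
    require: { model: 'ngModel' },
    controller: InputEmailComponent,
    controllerAs: 'vm',
    templateUrl: 'input-email.component.html'
  })
  .name;

Because the controller classes never import Angular, unit tests can instantiate them directly without loading the framework.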

The below images show the clean and minimal HTML required to produce a typical data form with comprehensive validation and associated messages, including password strength metrics. The “match” directive (see app.module.js) on the second input-password component handles comparing the two passwords. The corresponding error message is displayed below these components. The input control placeholder is included for instructional purposes only and is probably not needed in a simple form like this one.
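As a rough sketch of that markup (the model property names and the match expression are illustrative, not the exact form from the repo):

<form name="vm.profileForm" ng-submit="vm.save()" novalidate>
  <input-text name="firstName" ng-model="vm.profile.firstName"
              minimum-length="2" maximum-length="50"></input-text>
  <input-email name="email" ng-model="vm.profile.email"></input-email>
  <input-password name="password" ng-model="vm.profile.password"
                  minimum-length="8"></input-password>
  <input-password name="confirmPassword" ng-model="vm.profile.confirmPassword"
                  match="vm.profile.password"></input-password>
  <button type="submit">Save</button>
</form>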




I did my research before embarking on creating these input components. I’m very grateful to the many community authors for their blog posts and answers on StackOverflow. The below links helped me to understand this problem space and I recommend that you study them:

Creating a custom form field with validation using AngularJS 1.5 and TypeScript

Password Strength Meter


The InputBaseComponent is the base class for the text input components. It exposes properties common to all HTML text input controls.

The InputBaseComponent creates the component label from the name attribute if the label attribute is not supplied. Its onInit, onChange, and validate methods are invoked by deriving classes.

Normally an AngularJS 1.5 component does not update its parent model directly. As explained in the above blog post, this component design calls for updating the model instead of using two-way bindings or a one-way binding with a corresponding event.

I agree with this architecture in this specific use case because it mimics the interaction that an input control would have if it was not encapsulated in a separate component. It is assumed that the form these components are hosted on would be adhering to the typical AngularJS component design pattern described in the AngularJS documentation.

By requiring the model in the component registration (see input-components.module.js), the model will be injected and is available to the component controller for reading and writing. When the view needs to be updated, $render is invoked and the below function updates the component value. When the component value changes, the model value is set in the onChange method.

require: {
    model: "ngModel"
}

this.model.$render = () => {
    this.value = this.model.$viewValue;
};


This base class handles setting a sane value for maximumLength and setting isRequired based on minimumLength. The isRequired property is used in the HTML to set a required attribute.

The onChange method provides data input length validation by limiting the value length to the maximumLength.  This code is wrapped in a try-catch block to provide a seam for logging an obvious attempt to hack your site. Remember, always defend in layers and never trust any data from the internet, either on the client or the server.

The validate method checks the input for minimum or maximum length validation rule violations. What is nice about this implementation is that the validation messages can easily be customized to provide the user with good information.  In this application, I’ve chosen to display the validation message next to the label.  If you don’t do this, then be sure to include the label in your validation message.

export default class InputBaseComponent {

  isRequired = false;

  constructor(StringUtility) {
    this.StringUtility = StringUtility;
    if (this.name && !this.label) {
      this.label = this.StringUtility.parseWords(this.name);
    }
  }

  onInit() {
    this.model.$render = () => {
      this.value = this.model.$viewValue;
    };
    if (!this.maximumLength) {
      this.maximumLength = 200;
    }
    if (this.minimumLength) {
      this.isRequired = true;
    }
  }

  onChange() {
    try {
      if (this.value && this.maximumLength &&
          this.value.length > this.maximumLength) {
        this.value = this.value.substring(0, this.maximumLength);
      }
    } catch (e) {
      this.value = '';
      // log this to your system as a security message
    }
  }

  validate() {
    if (this.isRequired && this.minimumLength) {
      if (!this.value || this.value.length < this.minimumLength) {
        this.validationError = `has a minimum length of ${this.minimumLength}`;
        return false;
      }
    }

    if (this.value && this.maximumLength &&
        this.value.length > this.maximumLength) {
      this.validationError = `has a maximum length of ${this.maximumLength}`;
      // log this to your system as a security message
      return false;
    }

    this.validationError = '';
    return true;
  }
}


InputBaseComponent.$inject = ['StringUtility'];


Now that we have all the hard work done, let’s extend InputBase and call the required methods.  For an input of type text, no additional validation is required.

import InputBase from '../input-base';

export default class InputTextComponent extends InputBase {

  constructor(StringUtility) {
    super(StringUtility);
  }

  $onInit() {
    super.onInit();
  }

  onChange() {
    super.onChange();
    super.validate();
  }
}

InputTextComponent.$inject = ['StringUtility'];


The InputEmailComponent extends the InputBase and adds email input validation. Notice that if the call to super.validate() returns false, the email validation does not run.  You can change this logic to display all validation errors at once, or display them one at a time as I do.

import InputBase from '../input-base';

export default class InputEmailComponent extends InputBase {

  constructor(StringUtility) {
    super(StringUtility);
  }

  $onInit() {
    super.onInit();
  }

  onChange() {
    super.onChange();
    if (super.validate()) {
      if (this.value &&
          !this.StringUtility.isEmailValid(this.value)) {
        this.validationError = ' has invalid format';
      }
    }
  }
}

InputEmailComponent.$inject = ['StringUtility'];

InputTextComponent Template

This template has four groups:

  • input-group – apply CSS styles for spacing and alignment
  • form – subform container for the component
  • label-area – displays the label and associated validation messages
  • input – the input control

The validation messages are shown when:

  • the form control is invalid
  • and validation error text has a value
  • and
    • the control has been touched
    • or parent form has been submitted

You can choose when users see validation messages. I’ve used the above logic in my applications and users are happy with the experience.

<div class="input-group">
  <ng-form name="{{vm.name}}Form">
    <div class="label-area">
      <label for="{{vm.name}}">{{vm.label}}</label>
      <error-message ng-show="{{vm.name}}Form.{{vm.name}}.$invalid &&
          vm.validationError &&
          ({{vm.name}}Form.{{vm.name}}.$touched ||
          {{vm.name}}Form.$submitted)">{{vm.validationError}}</error-message>
    </div>
    <input name="{{vm.name}}" type="text"
        placeholder="{{vm.label | lowercase}}"
        ng-model-options="{allowInvalid: true, debounce: 250}" />
  </ng-form>
</div>

HTML Input Control

  • placeholder – optional. I like that I can bind the label and then use an Angular filter to change it to lower case.
  • autocomplete – choose the option that meets your requirements
  • ng-model-options
    • allowInvalid – when true, allows ng-change to fire on keystrokes even when the control is invalid.
    • debounce – the delay, in milliseconds, after the last keystroke before the model updates.
  • ng-model – binds to the component value
  • ng-change invokes the onChange method
  • ng-required – when true, sets the required attribute on the control
  • ng-minlength – sets the minimum length rule for the control
  • ng-maxlength – sets the maximum length rule for the control
  • maxlength – I STRONGLY urge you to ALWAYS set this attribute on any text input control.  Setting this attribute limits the number of characters that can be entered, pasted, or dragged into the control at runtime. Remember that security is always in layers. A combined sketch of these attributes appears below.
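Putting the list above together, the inner input element of the text component looks roughly like this (a sketch following the bindings described above, not the exact shipped template; autocomplete is shown as off only as an example):

<input id="{{vm.name}}" name="{{vm.name}}" type="text"
       placeholder="{{vm.label | lowercase}}"
       autocomplete="off"
       ng-model="vm.value"
       ng-change="vm.onChange()"
       ng-required="vm.isRequired"
       ng-minlength="{{vm.minimumLength}}"
       ng-maxlength="{{vm.maximumLength}}"
       maxlength="{{vm.maximumLength}}"
       ng-model-options="{allowInvalid: true, debounce: 250}" />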

Form Submit Button Options

There are two schools of thought around form submit buttons.  One is to disable the submit button when the form is invalid; the other is to allow the user to click the submit button and then show all input validation messages.

Your choice will depend on the application requirements and stakeholder choices.

The submit button strategy needs to take into account the application's timing of when to render validation messages.

I prefer to only show validation messages after the user visits the control or when the user submits the form and the control is invalid.
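As a quick illustration of the two options, the markup difference is small (the form and button markup below is hypothetical):

<!-- option one: disable submit while the form is invalid -->
<button type="submit" ng-disabled="vm.profileForm.$invalid">Save</button>

<!-- option two: leave the button enabled; submitting via ng-submit sets the
     form's $submitted flag, which the error-message visibility logic above keys off -->
<button type="submit">Save</button>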


Whether you're using AngularJS 1.5, Angular 2, Aurelia, et al., components give the HTML and JavaScript developer a powerful programming paradigm that embraces separation of concerns, testability, maintainability, and readability, and they are downright fun to work with.




Have a great day,

Just a grain of sand on the world’s beaches

.NET Unit Testing Tools


There are no silver bullets in our chosen profession of software engineering. Select tools and technologies that meet your requirements. When adopting tools, be prepared to align your workflow with the tool's happy path.

If you can’t find a tool that meets your requirements, write one.  I’ve done this many times over the years and so have thousands of our fellow developers.

At work, I spend 100% of my time on C# .NET application design, architecture, and coding. After work, it’s JavaScript and .NET.

I’ve found a sweet spot for my JavaScript development that I’ve written about in these two recent blog posts:

When writing JavaScript applications, I like that my tests run automatically any time I save a code or test file. Automatic is the operative word: not having to run an additional command for my tests to run saves time and effort.  The test results appear in a command-line window.  Automatic test running enables the simple red-green-refactor workflow written about by so many developers and unit test advocates.

Additionally, I've grown fond of the BDD-style assertions that the Chai library offers.
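If you haven't used Chai, a trivial Mocha test in the BDD expect style reads like this (a throwaway example, not from any project):

import { expect } from 'chai';

describe('order total', () => {
  it('sums the line items', () => {
    const total = 19 + 6;
    expect(total).to.equal(25);        // the chain reads like a sentence
    expect(total).to.be.a('number');
    expect(total).to.be.above(20);
  });
});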

.NET Red-Green-Refactor Workflow

This section is about setting up a red-green-refactor code-time workflow. None of the below content applies to setting up a CI Server that performs continuous integration and testing services.

I've started work on my Ocean Framework that I originally published in 2007. Back then I didn't provide unit tests, a mistake I'm now correcting. I'll write much more about Ocean and other WPF projects later in the year.

My workflow goal is:

  • Visual Studio automatically running the unit tests
  • BDD style assertions
  • Unit test framework that supports Xamarin, WPF, and C# libraries

I did my research and found that JetBrains ReSharper Ultimate, combined with JetBrains dotCover, provides .NET developers the capability to have a simple Continuous Testing workflow. Since I already have a JetBrains Ultimate license, this was an easy decision.  I also use the JetBrains WebStorm and dotPeek products.

The JetBrains Continuous Testing workflow enables developers to write code and unit tests and have the unit tests run automatically when the files are saved. Continuous Testing only runs tests that are impacted by a code or unit test change. When I have a project with a thousand or two thousand unit tests, I'll write a blog post with the resulting time savings as well as the system resources required to support this feature.

A code coverage report is generated when tests are run and is viewed in the same tool window as the test results. The coverage tool can also highlight your code, visually indicating the lines of code covered and not covered by the last unit test run. It's a nice feature when you're driving for 100% coverage.

Continuous Testing has several options for triggering when tests are run.  I’m currently using the On ‘Save’ Build and Run Dirty Tests option as it meets my workflow requirements. Choose the option that best meets your workflow requirements.


Continuous Testing allows developers to exclude assemblies, or parts of assemblies, from test runs and coverage, which is a fabulous feature for the red-green-refactor workflow as it cuts down the test run time significantly on large projects.

I should note that Visual Studio has these same capabilities to run unit tests and provide code coverage reports, as well as other excellent code analysis tools. You can quickly set up a shortcut key for running your tests on demand.  I could not figure out how to run them automatically. I did think about setting up a file watcher and running the unit tests from the command line when a file is saved. Since JetBrains Continuous Testing does this for me, and only runs the tests affected by the files that are saved, I've selected this tool.

Choose the tools that meet your requirements and align with your workflow. If you like the Visual Studio tools, please use them.  No need to purchase tools if you don’t need them.

Tool Choices

I've chosen these libraries to round out my tooling selection. Use NuGet to install them into the solution's test assemblies.

Please do your research and select tools that meet your requirements and enable an effective code-time workflow.


Reviewing daily workflow scenarios for efficiency and accuracy is an important task for developers and developer leads. Forming habits that make you efficient increases your value in the marketplace. Work smarter not harder.

Have a great day,

Just a grain of sand on the world’s beaches


Xamarin Dev Days Cranbury NJ

Infragistics is hosting Xamarin Dev Days on 19 Nov 2016 at our corporate office in Cranbury, NJ.

The very popular Xamarin Dev Days event sold out in one day, but there is a wait list and you can register here.

In addition to the standard curriculum, there will be an optional Bonus Content session given by the Infragistics engineers who wrote the Infragistics Xamarin Forms product.

Attend the fun day of learning with fellow developers and MVPs at our excellent facility.

Have a great day,

Just a grain of sand on the world’s beaches




Open Source – cave lupum


I've been actively working with open-source JavaScript packages for about 18 months. Developers who are very generous with their time have built tools and frameworks that have enriched the lives of developers all over the world. I too have contributed tools and believe in this beautiful ecosystem.

A few months ago I started to look under the hood of my SPA and Node.js applications and found code and practices that caught my attention. I found packages that other packages depend on that have very few lines of code, packages with dependencies that are out of date, and dependencies with warnings such as, "this package version is subject to a denial of service attack."

Upon further reflection, I became very concerned about the damage a bad actor could inflict on trusting developers who download packages that have a dependency that has been replaced by evil code. My system and the software that I write could be compromised. Now imagine ticking-time-bomb code replicated across Docker containers and placed on servers. The damage could be immeasurable.

cave lupum – Beware the wolf disguised as a lamb.

Publicly articulating details of the many attack scenarios I’ve thought of would be irresponsible. Instead, it’s time to start the conversation around the problem that our international community is currently faced with and how we can protect our precious open-source.

Again, this blog post is about getting the conversation started.

Over the last few weeks, I've met with high-profile MVPs and a few corporate executives who share similar quality and security concerns to the ones I'm raising in this blog post.

For the purpose of this blog post, "packages" refers to open-source JavaScript packages that are added to Node.js or JavaScript web applications using a package manager.

I’ll have a section down below for compiled downloads such as NuGet, Visual Studio Gallery, and the Visual Studio Marketplace.

Proposal Goals

  • Not add any burdens to the open-source developer
  • Provide package consumers a measured level of confidence in the package and its dependencies
  • Raise the quality of packages by having them evaluated
  • Have repositories provide evaluation services and reporting for their packages


Package evaluation is performed in the cloud. An MVP friend also suggested a command-line tool that could be used locally.

Package evaluation should be opt-in.  Allow developers wanting their packages evaluated to submit them for evaluation. An opt-in approach would not add any burdens to developers not wanting to participate, while at the same time driving up quality for the packages that are evaluated and giving those developers additional credibility for their efforts.

Consumers of packages could choose to use evaluated packages or continue to use non-evaluated packages at their own risk.

Evaluation and Download

Where packages are evaluated (centralized vs. non-centralized) is a topic that will generate much discussion and debate.

Where evaluated packages are downloaded from (centralized vs. non-centralized) is another topic that will generate much discussion and debate.

Evaluation Metrics

A standard set of metrics is applied to JavaScript packages, yielding a consistent evaluation report, enabling consumers to easily compare packages.

Below is a short "starter list" of metrics. Additional metrics should include warnings such as those npm emits when packages are installed.

Most evaluation metrics are yes or no.  Some are numeric; others are a simple list. When a package is evaluated, all of its dependencies are also evaluated. A package’s evaluation can only be as good as its weakest dependency.

  • Package signed
  • Included software license
  • Number of dependencies
  • Number of dependencies with fewer than ten lines of JavaScript
  • Package is out of date
  • Package has warnings
  • Has out-of-date dependencies
  • Has dependencies with warnings
  • Has unit tests
  • Has 100% unit test coverage
  • All tests pass
  • Makes network calls
  • Writes to file system
  • Threat assessment
  • Package capabilities (what APIs are being used)

NuGet, Visual Studio Gallery, Visual Studio Marketplace

NuGet, Visual Studio Gallery, and Visual Studio Marketplace serve compiled code which is evaluated differently than JavaScript. Microsoft will need to determine the best way to evaluate and report on these packages.


This proposal affects developers and infrastructures from all over the world.

As a software engineer, I know that while there will be challenges, the problems identified in this proposal are solvable.

Getting big corporations and governments to proactively and cooperatively take on and complete a task because it's the right thing to do is a challenge, but it is one that must be initiated.

Waiting until there is a problem and then trying to stem the tide and roll back the damage is a poor strategy.  Benjamin Franklin said, "an ounce of prevention is worth a pound of cure," and he is correct.

I honestly do not believe getting funding for a project of this scope will be a problem.

Next Steps

Big players need to meet and solve this problem.

Developers, start the conversation in your sphere of influence and contact big players and let them know your concerns.  Request that they take proactive action now.


Have a great day.

Just a grain of sand on the world's beaches.


XAML Designer Brush Editor Fixed


Since the release of Visual Studio 2015 Update 3, the VS XAML Designer pop-up brush editor stopped working in the Properties window when properties were sorted by name, or when properties of type Brush were not in the Brush category.  This issue affected both Visual Studio and Blend for Visual Studio 2015.


Microsoft posted the fix in this cumulative servicing update:

Microsoft Visual Studio 2015 Update 3 (KB3165756)  


I have installed the update and tested various controls and use cases and can confirm the fix corrected the problem.


Have a great day.

Just a grain of sand on the world's beaches.


Component Generator for AngularJS, Angular 2, Aurelia​


I believe developers should own their code generation story. The value in owning your code generation is that when platforms change, APIs change, or language grammar is enhanced, you can easily refactor your templates and not miss a beat. I also believe that owning your code generation story is a forcing function for thinking through how your application works, how it is wired up, and how to unit test it.

This tool provides templates that you must edit before generating any code.

What?  No ready-made templates?  Karl, are you nuts?  Why?

Let's think about code generation templates for a minute. Templates are used to create language-specific output.  Developers are using many flavors of JavaScript today: ES5, ES6, ES6 with ES7 experimental features, TypeScript, CoffeeScript, etc. Stylesheet files can be written using LESS, SASS, SCSS, or CSS.

What language should I use to write the templates?  I use ES6, but not everyone does.

Let’s think about how Angular apps are structured and wired up.  Check ten blog posts, and you’ll read about ten valid ways to structure and wire up an Angular SPA app.

Small apps are wired up differently than medium or large apps.  Some put small modules in a single folder, whereas a medium-sized module may be within a single parent folder, with each component in a separate folder.

Developers doing unit testing will structure their component wire-up differently to better support testing scenarios without having to load Angular for a component or controller unit test.  Not loading Angular for each unit test significantly speeds up your unit tests.
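To illustrate the point, when the controller is a plain ES6 class, the test can simply new it up, with no angular-mocks and no module loading (the names below are hypothetical):

import { expect } from 'chai';
import SalesController from './sales.controller';   // hypothetical plain ES6 class

describe('SalesController', () => {
  it('starts with an empty orders collection', () => {
    // no angular.mock.module, no $componentController – just instantiate it
    const vm = new SalesController();
    expect(vm.orders).to.deep.equal([]);
  });
});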

Given the above language, structure, and wire-up options, providing default templates would add zero value.

Your Small Part

You'll edit the empty template files for the component-based SPA frameworks you author applications for, such as AngularJS, Angular 2, or Aurelia. Having you edit your templates is the best way for this tool to support many different JavaScript flavors, AngularJS coding styles, and component wire-up strategies. Additionally, you'll be the proud owner of your code generation story.

This tool uses Underscore.js templates, which are easy to author, usually requiring a minute or two. However, if your scenario requires it, you can swap out the template engine.

More Than a Tool

My first version of this tool was written in less than two hours as a single ES6 file and worked great. It didn't have any tests but worked perfectly for my one scenario: AngularJS 1.5.9 components and ES6.

Then my mentoring and scope creep genes kicked into high gear, and I spent a few days and several versions to produce this tool.

I didn't want to miss an opportunity to share a tool that I think will benefit many developers across languages, SPA frameworks, and scenarios. But to accomplish this goal, the tool would require 100% test coverage and would need to work with any JavaScript flavor and any component-based SPA framework.

I hope that others can learn from the architecture, code, and unit tests presented here. I welcome both positive and negative feedback to improve the tool and code.

This tool was also a forcing function for me to dive deep into authoring testable Node.js tools using ES6. It took me a little time to learn the testing tools, but when I refactored the code, having 100% test coverage paid off in spades.


During my career as a Software Engineer and Architect, I've always written tools to increase my productivity and accuracy. Tools like XAML Power Toys and this tool came about because I found myself repeating the same tasks over and over, and I knew the computer is capable of making me infinitely more productive.

Two weeks ago I started writing an AngularJS 1.5.9 component based application.  AngularJS components are similar in concept to Angular 2 and Aurelia components although the syntax and implementation are a little different.

After creating my second component, I stopped to write this tool. I'm not going to perform mundane, repetitive tasks like creating folders, components, controllers, templates, tests, and CSS files; this is a perfect assignment for a tool.

In addition to creating folders and files, the tool leverages user-editable templates for generating code that matches your requirements and coding style. The ability to generate boilerplate code yields a significant increase in productivity.

What You’ll Learn

  • Features of the tool
  • Tool Architecture
  • Requirements
  • Installation
  • Local development installation
  • Local production installation
  • Command line usage
  • Editing Templates


  • Getting Started
  • Modifying the tool
  • Unit testing the tool


  • Node.js command line utility, written using ES6 and TDD methodology
  • Cross-platform Windows, Mac, and Linux (thanks to Node.js)
  • User editable templates drive the generated code
  • Create components for any JavaScript framework like AngularJS, Angular 2, and Aurelia
  • Separating templates by framework enables supporting multiple frameworks
  • Component output folder is optionally created based on a command line option
  • Component controller file is optionally generated based on a command line option
  • Component unit test file is optionally generated based on a command line option
  • Component CSS file is optionally generated based on a command line option
  • You can modify anything about this tool, make it fit like a glove for your workflow

Tool Architecture

This object-oriented application tool was written using ES6. I have separated the functionality of the application into small single-responsibility classes. Besides being good application design, this makes the code easier to understand and unit test. When I think of this tool, a .NET Console application immediately comes to mind.

I wrote guard clauses for every constructor and method. I always add guard clauses, even on private methods, irrespective of language, because I am a defensive software engineer. Writing guard clauses requires no extra effort given tools like ReSharper and the ubiquitous code snippet feature in most editors and IDEs. Guard clauses future-proof code in maintenance mode when another developer makes a false assumption while editing; next thing you know, a customer is filing a bug.
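The sketch below shows the general shape of these guard clauses; the constructor parameters and messages are illustrative, not the exact code from the repo:

export default class ComponentCreator {

  constructor(commandLineArgsUtility, ioUtility, logger, templates) {
    // guard clauses: fail fast with a clear message when a dependency is missing
    if (!commandLineArgsUtility) { throw new Error('commandLineArgsUtility is required.'); }
    if (!ioUtility) { throw new Error('ioUtility is required.'); }
    if (!logger) { throw new Error('logger is required.'); }
    if (!templates) { throw new Error('templates is required.'); }

    this.commandLineArgsUtility = commandLineArgsUtility;
    this.ioUtility = ioUtility;
    this.logger = logger;
    this.templates = templates;
  }
}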

In some of the classes, I have refactored small sections of code into a method. Doing this makes the code easier to read, comprehend, and simplifies unit testing.


  • ApplicationError – derives from Error; an instance of this class is thrown when a user error caused by improper command line usage occurs. It permits the top-level try-catch block in the ComponentCreator to perform custom handling of the error (a sketch follows this list).
  • CommandLineArgsUtility – provides functions related to command line arguments.
  • ComponentCreator – performs validation and writes the templates.
  • IoUtility – wrapper around Node.js fs.
  • Logger – wrapper around Console.
  • TemplateItem – wrapper around a supported framework; exposes all data as immutable.
  • Templates – stores collection of TemplateItems, provides immutable data about code generation templates.
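Here is a sketch of the ApplicationError idea described above (the logger call and exact shape are illustrative):

// thrown for user mistakes (bad arguments, component already exists) so the
// top-level catch can show a friendly message instead of a stack trace
export default class ApplicationError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ApplicationError';
  }
}

// inside ComponentCreator's top-level try-catch
try {
  // validate arguments, resolve templates, write files...
} catch (e) {
  if (e instanceof ApplicationError) {
    this.logger.log(e.message);   // user error: print the message and usage
  } else {
    throw e;                      // real bug: let it surface
  }
}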

The entry point for the tool is index.js.  This module instantiates a TemplateItem for each supported framework, creates all of the required dependencies for the ComponentCreator and injects them, then invokes the create method on the ComponentCreator instance.

The reason for creating and injecting all external dependencies in the root of the application is to enable unit testing. Dependencies injected using constructor injection can easily be stubbed or mocked.
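A rough sketch of that composition root (the class names follow the list above; the file paths and constructor signatures are illustrative):

// index.js – the composition root: everything is constructed here and injected,
// so the classes can be unit tested with stubs and mocks
import CommandLineArgsUtility from './commandLineArgsUtility';
import ComponentCreator from './componentCreator';
import IoUtility from './ioUtility';
import Logger from './logger';
import TemplateItem from './templateItem';
import Templates from './templates';

const templates = new Templates([
  new TemplateItem('ng', 'templates/ng'),
  new TemplateItem('ng2', 'templates/ng2'),
  new TemplateItem('aurelia', 'templates/aurelia')
]);

const creator = new ComponentCreator(
  new CommandLineArgsUtility(), new IoUtility(), new Logger(), templates);

creator.create();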

I have successfully used this type of architecture for writing other complicated Node.js applications.


  • Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.
  • Chai – is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.
  • Istanbul – a full-featured code coverage tool for JavaScript.
  • Mocha – is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.
  • Sinon.js – Standalone test spies, stubs, and mocks for JavaScript.
    No dependencies and works with any unit testing framework.
  • Sinon-Chai – provides a set of custom assertions for using the Sinon.JS spy, stub, and mocking framework with the Chai assertion library. You get all the benefits of Chai with all the powerful tools of Sinon.JS.


  • Node.js (install either the LTS or the Current version; personally, I use the Current version)

On Macs, I don't recommend installing Node.js from the Node.js website. If you do, upgrading is a pain.

For Mac users, use brew. After installing brew, run this command from a terminal:

brew install node

If you use brew, future upgrading or uninstalling Node.js on your Mac is a breeze.


To help understand how this application is set up and how unit testing is set up and invoked, please read my blog post: https://oceanware.wordpress.com/2016/08/10/easy-tdd-setup-for-nodejs-es6-mocha-chai-istanbul/


Ensure you have installed the above requirements.

Download or clone the Component Generator repository here.

Open a terminal or command window, and then navigate to the root folder of the tool and run this command.

npm install

Next, run the unit tests and see the coverage report by executing this command:

npm test

Running the unit tests also runs the Istanbul code coverage tool, which outputs a detailed coverage report in the /coverage/lcov-report/index.html file.

Local Development Installation

During development, testing, and editing of your Node.js command-line tools, you'll need to set up a symbolic link so you can execute your tool from any folder. Setting up a symbolic link instead of installing your Node.js package globally allows you to continue editing the tool's code while still being able to execute the tool from any folder on your computer.

Mac and Linux

From the root folder of the tool, open a terminal or command window and run this command:

npm link

To test your development installation, run this command:

gencomponent

The tool will display the command line usage.

Navigate to another folder and rerun the gencomponent command.  You should get the same output.


Windows

You MUST give yourself Full Permissions on the /node_modules folder before proceeding.

Follow the above steps for Mac and Linux.

Local Production Installation

This step is optional.  If you like the convenience of being able to edit your templates or change the tool's code while still being able to invoke the tool from any folder, then by all means skip this step.

If you want to install the tool globally, or want to install the tool on other machines, then follow these steps.

Before proceeding, you need to remove the symbolic link you created in the above steps.

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm unlink

Windows, Mac, and Linux

Navigate to the root folder of the tool, open a terminal or command window and run the following command:

npm install -g

You can now invoke the tool from anywhere on your machine.

After installation, if you need to modify the tool or template, uninstall the tool globally, make the changes and reinstall it globally.

Command Line Usage

Before proceeding, ensure you have created a symbolic link for the tool, or installed it globally.

gencomponent (component name) [-(framework)] [--ftsc]

The component name is required to generate a component.

The framework is optional and defaults to 'ng' if not supplied. The default framework can be changed by modifying the code.  The default valid options are:

  • -ng
  • -ng2
  • -aurelia

Code generation options are prefaced with a double dash (--), are optional, and can be in any order. The valid options are:

  • f = create a component folder
  • t = create a component unit test file and controller unit test file for the optional controller
  • s = create a component CSS file
  • c  = create a component controller file

If invalid arguments are provided or you attempt to create a component that already exists, an error message will be displayed at the command line.

Usage Examples

1. Show the command line usage message.


gencomponent ?

2. Create the Sales component in the current folder along with the sales.component.js and sales.template.html files. Templates are selected from the default /templates/ng folder.

gencomponent Sales

3. Create the Sales component in a new component folder named Sales along with the sales.component.js, sales.controller.js, sales.template.html, sales.component.spec.js, sales.controller.spec.js, and sales.template.css files.  Templates are selected from the default /templates/ng folder.

gencomponent Sales --ftcs

4. Create the Sales component in the current folder along with the sales.component.js, sales.component.spec.js, and sales.template.html files. The templates are selected from the /templates/aurelia folder.

gencomponent Sales -aurelia --t

5. Create the SalesDetail component in a new component folder named SalesDetail along with the salesDetail.component.js, salesDetail.component.spec.js, salesDetail.controller.js, salesDetail.controller.spec.js, and salesDetail.template.html files. The templates are selected from the /templates/ng2 folder.

gencomponent SalesDetail -ng2 --ftc

You can easily modify this tool to handle more or fewer supported frameworks and to change the file naming conventions.

Editing Templates

Please read the Underscore.js template documentation; it's very short.

Template engines allow the template author to pass a data object to methods that resolve the template and produce the generated code.
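Under the hood, resolving an Underscore.js template is a two-step call: compile the template text, then invoke the returned function with the data object. A simplified sketch of that step (the template source shown is a stand-in for a real template file):

import _ from 'underscore';

// templateSource would normally be read from a file such as
// /templates/ng/componentTemplate.controller.js
const templateSource = 'class <%= componentName %>Controller { }';

const compiled = _.template(templateSource);          // compile once
const generated = compiled({ componentName: 'SalesDetail' });

console.log(generated);   // -> class SalesDetailController { }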

This tool passes a rich data object that you can use inside your templates.

gencomponent SalesDetail

The below template data object was hydrated and passed to the template engine when the tool was invoked with SalesDetail as the desired component.

{
  componentName: 'SalesDetail',
  componentSelector: 'sales-detail',
  componentImportName: 'salesDetail.component',
  controllerImportName: 'salesDetail.controller',
  componentImportNameEnding: '.component',
  controllerImportNameEnding: '.controller',
  templateFileNameEnding: '.component.html',
  componentFileNameEnding: '.component.js',
  controllerFileNameEnding: '.controller.js',
  componentSpecFileNameEnding: '.component.spec.js',
  templateCssFileNameEnding: '.component.css'
}

This snippet from the below HTML file shows the syntax for injecting the componentName property value into the generated code.

<%= componentName %>

The below HTML can be found in the /templates/ng/componentTemplate.html file. This demonstrates consuming a data object property in a template.

<h2>Component Generator Data Object</h2>
<p>These are the data properties available in all template files.</p>
<p>componentName = <strong><%= componentName %></strong></p>
<p>componentSelector = <strong><%= componentSelector %></strong></p>
<p>componentImportName = <strong><%= componentImportName %></strong></p>
<p>controllerImportName = <strong><%= controllerImportName %></strong></p>
<p>componentImportNameEnding = <strong><%= componentImportNameEnding %></strong></p>
<p>controllerImportNameEnding = <strong><%= controllerImportNameEnding %></strong></p>
<p>templateFileNameEnding = <strong><%= templateFileNameEnding %></strong></p>
<p>componentFileNameEnding = <strong><%= componentFileNameEnding %></strong></p>
<p>controllerFileNameEnding = <strong><%= controllerFileNameEnding %></strong></p>
<p>componentSpecFileNameEnding = <strong><%= componentSpecFileNameEnding %></strong></p>
<p>templateCssFileNameEnding = <strong><%= templateCssFileNameEnding %></strong></p>

Template Folders

  • /ng – AngularJS
  • /ng2 – Angular 2
  • /aurelia – Aurelia

Available Templates

  • Component – always generated
  • Component Template – always generated
  • Controller – optionally generated
  • Component Unit Test – optionally generated
  • Controller Unit Test – automatically generated if the Controller and the Component Unit Test are generated
  • Component Template Stylesheet – optionally generated

Workflow for Editing Templates

Before diving into template editing, you need to know exactly what the generated code needs to look like.

You'll need to decide on your application structure and wire-up.  Will you put your controllers inside the component file or keep them in separate files? Are you going to write unit tests?

I recommend writing several components, working out the details, and identifying repeated code such as imports or require statements and commonly used constructor-injected objects.

In the below example, I have an existing About controller that represents how I would like my controllers to be generated.

Copy the code to generate into the appropriate template file, and then replace the non-repeating code with resolved values from the template data object.

In the below example, I copied the About controller into the componentTemplate.controller.js file and then replaced the “About” name with the componentName data object property.

class AboutController {
  constructor() {
  }
}

export default AboutController;

The below template will generate the above code.

class <%= componentName %>Controller {
  constructor() {
  }
}

export default <%= componentName %>Controller;

Now repeat the above steps for each template and for each framework you'll be generating code for.

Note that some templates will be empty; this is normal for .css and possibly .html files. But at least you didn't have to waste precious time creating the files.


Getting Started

This 8-minute video explains how to get started with this tool.

Modifying the Tool

This 11-minute video explains how to modify:

  • templates
  • frameworks
  • file naming conventions
  • template engine

Unit Testing the Tool

This 23-minute video explains unit testing this tool.


Writing your own cross-platform, command-line tools using Node.js is fun.

Having 100% test coverage is not easy and takes time. Just know that your customers and fellow developers will appreciate you putting the effort into a release with 100% test coverage.

Have a great day.

Just a grain of sand on the world's beaches.