Xamarin Forms Bindable Picker v2


I’ve updated the BindablePicker from a previous blog post, added new features, and created a GitHub repo for the code.

Xamarin Forms is a new and cool API for quickly building native apps for iOS, Android, and Windows UWP in C#.

The Xamarin Forms API comes with a primitive Picker control that lacks the typical bindable properties developers expect a Picker (functionality similar to a desktop ComboBox) to have.

Xamarin Forms makes it very easy for developers to extend the API, write their own custom controls, or write custom renderers for controls.

This BindablePicker is the result of studying blog and forum posts and receiving feedback and bug reports on the original version.

API Comparison

Xamarin Forms Picker API

  • SelectedIndex (bindable)
  • Items (not bindable)

Bindable Picker API

  • ItemsSource (bindable)
  • SelectedItem (bindable)
  • SelectedValue (bindable)
  • DisplayMemberPath
  • SelectedValuePath

New Features Added

  • Support for collections that implement INotifyCollectionChanged, like ObservableCollection

Bug Fixed

The original BindablePicker did not correctly set the SelectedItem after the ItemsSource was refreshed at runtime.

Bindable Picker Source

This repo contains a project that demonstrates scenarios for using this control, along with the source for the BindablePicker.


Training Video – XAML Power Toys BindablePicker Scenarios

This short video explains three common use cases for the BindablePicker.


Have a great day.

Just a grain of sand on the world’s beaches.

Easy TDD Setup for Nodejs ES6 Mocha Chai Istanbul


I’m working on a command line tool for AngularJS, Angular2, and Aurelia that creates components from user templates.  It creates the folder, component js file, component template HTML file, optional component template CSS file, and the component spec js file.

The tool generates the code using the underscorejs template engine.  It’s amazing how much code you’ll no longer have to type: boilerplate component wiring and default unit tests for most components.

As I was writing the tool, I decided to break out the project setup into this small blog post to make the tool blog post simpler and focused. You can use this simple project as a starter or learning tool for your Nodejs ES6 projects.

I wrote this application and the command line tool using the Atom Editor.  I’ve included my Atom snippets down below; they give me a big productivity boost when writing unit tests.

This blog post is much more about setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul than how to use these tools. Please refer to the many outstanding blog posts, courses, and tutorials on these tools and ES6.

My Approach To Nodejs ES6

It’s amazing what you can write using Nodejs.  I’ve written complex, multi-process apps with IoT devices connected over MQTT and real-time communication to web clients, and I’ve also written simple apps like the above command line tool. Nodejs is wonderful and is what enables Electron to be the prodigious cross-platform desktop application tool that it is.

ES6 is a clean, modern language; it’s simple, familiar looking, and fun.  I’ve used ES5 and TypeScript for many projects but settled on ES6. I blogged about my decision here.

Using ES6 with Nodejs does not require Babel for your code or unit tests.  I’m not using ES7 features such as class properties or decorators, but I can live with that for now.

I structure my Nodejs apps, perhaps differently than you’ve seen on other blog posts.  Not implying better, just different.

I prefer to write my ES6 Nodejs code like I would any object-oriented app: small classes with discrete functionality. In architecture speak, SOLID, DRY, etc.

I also structure my ES6 so that it can be tested.  Sometimes that requires a little rethinking and possibly some refactoring, but it’s worth it.

Hello World

It would be madness to not write the ubiquitous “Hello World” app for my Nodejs demo, so here we go.

When this app is executed, index.js is the entry point; it creates an instance of HelloWorld and invokes the run method.

Notice that I’m passing the command line arguments into the constructor. I do this to make testing the HelloWorld class much easier than if I didn’t.


'use strict';

const HelloWorld = require('./app/helloworld');

let c = new HelloWorld(process.argv.slice(2));
c.run();


HelloWorld is simple.  If no command line args are passed, the run method will log the greeting.  If args are passed, they will be concatenated and then logged.


'use strict';

const Logger = require('./logger');

class HelloWorld {

    constructor(commandLineArgs) {
        this.commandLineArgs = commandLineArgs;
        this.greeting = 'Hello World';
        this.logger = new Logger();
    }

    run() {
        if (this.commandLineArgs && this.commandLineArgs.length) {
            this.logger.log(this.commandLineArgs.join(' '));
        } else {
            this.logger.log(this.greeting);
        }
    }
}

module.exports = HelloWorld;


Logger outputs messages to the console. I always create a logger for my Nodejs apps so other classes don’t need to call console.log directly.  I like the object-oriented approach; it keeps my code clean and familiar. This is a very simple Logger class; enhance it as required for your apps.


'use strict';

class Logger {

    log(message) {
        console.log(message);
    }
}

module.exports = Logger;

Unit Testing Setup

For my Nodejs projects I use the following testing tools:

  • Mocha – unit test framework
  • Chai – BDD / TDD assertion library
  • Istanbul – code coverage tool
  • Sinon – standalone test spies, stubs, and mocks framework
  • Sinon Chai – extends Chai with assertions for Sinon.

You can use Mocha by itself, or Mocha with Istanbul to get coverage.  I like the features of Chai but, at the end of the day, it’s personal preference for testing style.  “Actually testing is critical, test style is not.”

I install the test tools locally in my Nodejs projects rather than globally so that I can have multiple versions of a tool if required. Local installs make the command line longer, but that’s not an issue since the command will live in package.json or in a gulp task; bottom line, you don’t have to type it.

Local install example:  npm install mocha --save-dev

Understanding Mocha Setup and Startup

Node and npm commands are executed from your root folder.

When Mocha is invoked, by default it looks in the /test folder for the mocha.opts file, which is the Mocha configuration file.  Mine looks like this:

./app
--require ./test/common.js
--recursive
--bail


The first line tells Mocha which folder to look in for the tests; if not supplied, it will use the /test folder.  I’ve chosen the /app folder because I like to have my unit tests in the same folder as the JavaScript being tested.

The second line loads up the common.js file.

The third line tells Mocha to look not only in the app folder but also in all sub-folders.

Finally, the fourth line tells Mocha to quit processing when a test fails.

Note:  When running your full test suite, or when running on a CI server, the bail option is probably not appropriate.



This setup is optional, but its value is that I don’t have to repeat these require statements for Chai and Sinon in every test js file.

'use strict';

global.chai = require('chai');
global.expect = global.chai.expect;
global.sinon = require('sinon');
global.sinonChai = require('sinon-chai');

// wire sinon-chai into chai so assertions like calledWith are available
global.chai.use(global.sinonChai);


package.json scripts section

The scripts section of the package.json file makes it easy to run commands, especially commands with long text.

To run a command from the terminal or command line type:  npm run {script command name}

For example,  npm run example or npm run tdd:mac

The example and example2 commands run the app without and with command line arguments.

The test command runs all the unit tests and produces the code coverage report.

The tdd:mac command runs Mocha and all your tests.  Then it begins to watch the target folder for changes.  When a file changes, it reruns the tests automatically.

Note:  mocha -w does not work on Windows, hence the command name tdd:mac.  Bugs have been logged.  For now, if you’re on Windows, I recommend writing a gulp task that watches the folder and then runs mocha without the -w option.  Optionally, if you’re a WebStorm user, you can set this up in the WebStorm IDE.
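For Windows users, a gulp watch task along these lines could stand in for mocha -w. This is a minimal sketch, not code from the repo: the task name, glob, and gulp 3.x API usage are my assumptions.

```javascript
// Hypothetical gulpfile.js task for Windows; assumes gulp 3.x is
// installed locally. Task name and glob are illustrative.
'use strict';

const gulp = require('gulp');
const { spawn } = require('child_process');

gulp.task('tdd:win', () => {
    // Rerun mocha (without -w) every time a file in /app changes.
    gulp.watch('app/**/*.js', () => {
        spawn('node', ['./node_modules/mocha/bin/_mocha'], { stdio: 'inherit' });
    });
});
```

Run it with npm run, the same way as the other script commands, by adding a "tdd:win" entry to the scripts section that invokes gulp tdd:win.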

My typical workflow on my Mac is to open Atom or WebStorm, view my spec file and the code being tested in split view, then in a terminal window run npm run tdd:mac and I’m good to go.  I get instant feedback from my Mocha test runner as I write tests or code.

  "scripts": {
    "example": "node index.js",
    "example2": "node index.js Hey world!",
    "test": "./node_modules/.bin/istanbul cover ./node_modules/mocha/bin/_mocha",
    "tdd:mac": "./node_modules/.bin/mocha -w"
  }


This unit test verifies that the Logger class will invoke console.log and pass the correct message to it.

When your unit tests actually write to the console, the text will appear in the middle of the Mocha report output.  To limit the noise, I’ve created the below TestString that blends in nicely with the Mocha report.

The variable ‘sut’ is an acronym for ‘system under test.’  I use ‘sut’ to make it easy for the next person reading my tests to quickly see which object is being tested. Consistent code is much easier to read and maintain.

The Sinon library makes it easy to test class dependencies by spying, stubbing, or mocking the class or its methods.  The reason I don’t use a stub or mock here for console.log is that it would block the Mocha report from being displayed.  The spy was a good fit and the TestString gave me the output I wanted.

'use strict';

const Logger = require('./logger');
const TestString = '    ✓';  // nice hack to keep the mocha report clean. LOL.

describe('Logger', () => {
    it('should log a message to the console', () => {
        let sut = new Logger();
        let spy = sinon.spy(console, 'log');

        sut.log(TestString);

        spy.restore();
        expect(spy).to.have.been.calledWith(TestString);
    });
});





To limit bugs and typos I use constants for my expected results and method arguments.

In this simple app, the Logger is exposed as a property on HelloWorld, making it accessible for stubbing at test time.  In a larger app, the Logger would be an injected dependency.  Injected dependencies are a no-brainer to stub and mock.
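To make that concrete, here is a minimal sketch of the injected-dependency approach. The GreetingService class and the hand-rolled fake are hypothetical names for illustration, not code from the sample repo:

```javascript
'use strict';

// Hypothetical variation of HelloWorld that receives its logger
// through the constructor instead of creating one itself.
class GreetingService {
    constructor(logger) {
        this.logger = logger;           // injected dependency
        this.greeting = 'Hello World';
    }

    run() {
        this.logger.log(this.greeting);
    }
}

// At test time, just hand the constructor a fake; no property
// reassignment or sinon.stub required.
let messages = [];
let fakeLogger = { log: (message) => messages.push(message) };

new GreetingService(fakeLogger).run();
// messages is now ['Hello World']
```

Because the collaborator arrives through the constructor, the test never has to reach inside the object to swap it out.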

'use strict';

const HelloWorld = require('./helloWorld');
const Logger = require('./logger');
const DefaultGreeting = 'Hello World';
const Arg1 = 'Hello';
const Arg2 = 'there!';

describe('HelloWorld', () => {

    describe('Constructor', () => {

        it('should be created with three properties: commandLineArgs, greeting, and logger', () => {
            let sut = new HelloWorld();

            expect(sut).to.have.property('commandLineArgs');
            expect(sut).to.have.property('greeting');
            expect(sut).to.have.property('logger');
        });

        it('should have default greeting', () => {
            let sut = new HelloWorld();

            expect(sut.greeting).to.equal(DefaultGreeting);
        });

        it('should have command line args set when supplied', () => {
            let sut = new HelloWorld([Arg1, Arg2]);

            expect(sut.commandLineArgs).to.deep.equal([Arg1, Arg2]);
        });
    });

    describe('Run', () => {
        it('should log command line args when supplied', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld([Arg1, Arg2]);
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledWith(`${Arg1} ${Arg2}`);
        });

        it('should log default greeting when no command line args are passed', () => {
            let logger = new Logger();
            let stub = sinon.stub(logger, 'log').returns();
            let sut = new HelloWorld();
            sut.logger = logger;

            sut.run();

            expect(logger.log).to.have.been.calledWith(DefaultGreeting);
        });
    });
});




Test Results

Executing npm test or npm run test produces the following output.

The describe and it blocks are nicely nested in this Istanbul coverage report.

The first item in the Logger group is a black check mark; this is the little hack I mentioned above in the logger.spec.js test.


Atom Snippets

Atom editor snippets rock.  The very best snippet documentation I’ve read is here; read it and you’ll be a happy camper.

These snippets assist my coding of classes and unit tests.

  'Fat Arrow':
    'prefix': 'fat'
    'body': '() => {}'
  'describe unit test':
    'prefix': 'dsc'
    'body': """
        describe('$1', () => {
            $2
        });
    """
  'it unit test':
    'prefix': 'itt'
    'body': """
        it('should $1', () => {
            $2
        });
    """
  'Class with Constructor':
    'prefix': 'cctor'
    'body': """
        'use strict'

        class $1 {

            constructor () {
                $2
            }
        }

        module.exports = $1;
    """
  'Method Maker':
    'prefix': 'mm'
    'body': """
        $1($2) {
            $3
        }
    """





I hope this information helps you in setting up a Nodejs project that uses ES6, Mocha, Chai, and Istanbul.

Just a grain of sand on the world’s beaches.

Angular 1.5.7 Components ES6 and jspm


The purpose of this blog post and the accompanying simple example project is to show you how to:

  • Create an ES6 Angular 1.5.7 super simple web application with navigation
  • Use Angular 1.5.7 Components
  • Use the Angular Component Router (not the constantly changing Angular 2 Router)
  • Bootstrap an ES6 Angular 1.5.7 application
  • Set up ES6 Angular 1.5.7 modules
  • Configure the Component Router
  • Provide a root component that hosts the entire application, providing a placeholder for the Component Router to navigate components into
  • Demonstrate writing super clean ES6 code that is 98% devoid of the word Angular
  • Provide two Components that the app can navigate to

This sounds like a lot, but it’s accomplished with only a few succinct ES6 files.


I’m a total fan of Angular 1.x, and now Angular 1.5.x after watching Scott Allen’s Pluralsight course on Building Components with Angular 1.5.

I’m a fanatic about authoring my JavaScript using ES2015 (ES6, Harmony) and using jspm as my package manager.  This combination of language and package management is so clean and simple.

Scott’s course uses ES5.  Probably a good decision as it keeps the concept count down for Angular 1.x developers who still use ES5.

I highly recommend you watch the course; in about 90 minutes you’ll be another convert to using Angular 1.5.x Components.

I have looked at both Aurelia and Angular 2.  They are both still in beta and undergoing API and tooling changes. I’m very keen on Aurelia and am looking forward to adopting this product in the future.  What I like most about Aurelia is that the team embraced convention over configuration, which dramatically reduces the boilerplate code for common scenarios.  Maybe Angular 2 will one day refactor their API to do the same.


Authoring Angular 1.x or 1.5.x apps using ES6 with jspm is simple and the code is very clean.  I have a project that demonstrates using Electron, Angular 1.x, ES6, and jspm. I will be creating a new project that uses Angular 1.5.7, Electron, ES6, and jspm very soon.

When using ES6 in today’s browsers or in Electron, the ES6 must be transpiled to ES5.  jspm hides all that complexity and just does it for you.

Gulp also has a module called gulp_jspm with an option, “selfExecutingBundle”, that will essentially pre-compile, bundle, and minify all of your application’s ES6 to ES5.  Heck, it even removes all traces of ES6 libraries from the bundle.
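A gulpfile task using that option might look like the sketch below. This is an assumption-laden illustration, not code from my project: the entry file, destination folder, and the minify option are my guesses, so verify the exact option names against the gulp-jspm README.

```javascript
// Hypothetical gulpfile.js sketch; file paths and the minify option
// are illustrative assumptions. Check the gulp-jspm docs before use.
'use strict';

const gulp = require('gulp');
const gulp_jspm = require('gulp-jspm');

gulp.task('bundle', () => {
    // Bundle the app, starting from its entry module, into a single
    // self-executing ES5 file.
    return gulp.src('src/app/bootstrap.js')
        .pipe(gulp_jspm({ selfExecutingBundle: true, minify: true }))
        .pipe(gulp.dest('dist'));
});
```

The resulting bundle can be loaded with a plain script tag, with no system.js required at runtime.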

Transpiling, bundling, and minification are part of “real world ES6 development.”  I just like that jspm makes this process simple and almost 100% transparent. 

Please note:  jspm is not the only game in town.  There are many other techniques, frameworks, build systems, etc., that accomplish the same task, producing the same end result.  When I did my study last year, I found that jspm worked best for me.  I recommend that you look at all the options and tools, read many blog posts on the subject just like I did. Then choose the one you understand and can be successful with.

Please note:  This application does not take any dependencies on the volatile and changing Angular 2 Beta.  The Component Router used in this project is the original Angular 2 router, and it works great.  I strongly recommend staying away from Angular 2 dependencies until the team has shipped RTM bits and has an approved, good story for Angular 1.5.x integration.

Additionally, I have yet to see a compelling reason to write production code in Angular 2.  Like you, I have Angular 1.x projects in production that run every day and perform beautifully.

Application Startup

Before you can run off and write the next awesome app using Angular 1.5.7 Components and ES6, we need to learn how the application starts up.  As you’ll see, there are differences between the ES6 jspm code I’ll present and the current AngularJS 1.x ES5 apps you’re writing today.


index.html:

  • Is loaded by the browser or Electron
  • Loads up system.js and config.js using script tags
  • The bootstrap.js module is imported; the act of importing a module causes it to execute
  • Notice you don’t see any Angular framework markup, as we will be manually bootstrapping Angular

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Angular 1.5.7 Components ES6 jspm</title>
  <script src="src/jspm_packages/system.js"></script>
  <script src="src/config.js"></script>
  <script>
    // import the bootstrap module; importing it causes it to execute
    System.import('src/app/bootstrap');
  </script>
</head>
<body>
  <app-root></app-root>
</body>
</html>



bootstrap.js:

  • Framework dependencies are loaded
  • Application ES6 modules are loaded
  • When the modules are all loaded and the document is ready, Angular is bootstrapped
  • Notice how the app module is imported and given the name “AppModule”; I now have full access to my module and can access properties like “name”
// load our framework modules
import angular from 'angular';
import 'ngcomponentrouter';

// load our application ES6 modules
import AppModule from './app.module';
import './app-root.component';
import './About/app-about.component';
import './Home/app-home.component';

angular.element(document).ready(() => {
    // bootstrap angular now that all modules have been loaded
    angular.bootstrap(document, [AppModule.name], {strictDi: true});
});


app.module.js:

  • Framework dependencies are imported so we can use them
  • An Angular module named “app” is created and the Component Router is injected as a dependency
  • The Component Router root component is configured; look back at index.html and you’ll see the app-root component in the markup
  • The Angular “app” module is exported
import angular from 'angular';
import ngcomponentrouter from 'ngcomponentrouter';

let module = angular.module('app', [ngcomponentrouter]);

// must tell the Component Router which component to navigate components into
module.value('$routerRootComponent', 'appRoot');

export default module;


app-root.component.js:

  • Import the above app.module default export, which is the angular.module(‘app’). Consumers have clean code now; Angular no longer appears in the code
  • Register the ‘appRoot’ component with the AppModule and set its template
  • Configure the root component router
  • The last line configures the default route
import AppModule from './app.module';

AppModule.component('appRoot', {
  templateUrl: '/src/app/app-root.component.html',
  $routeConfig: [
    { path: '/home', component: 'appHome', name: 'Home'},
    { path: '/about', component: 'appAbout', name: 'About'},
    { path: '/**', redirectTo: ['Home']}
  ]
});

app-root.component.html:

  • This is my incredibly simple application root template
  • It provides navigation links for the Home and About components
  • The ng-outlet directive is where the Component Router will place components as they are navigated to
<h1>Hello World</h1>

<a href="#home">Home</a>
<a href="#about">About</a>

<ng-outlet></ng-outlet>



app-home.component.js:

  • Import the app.module
  • Register the ‘appHome’ component with the AppModule and set its template
  • See how clean this code is?
import AppModule from '../app.module';

AppModule.component('appHome', {
  templateUrl: '/src/app/Home/app-home.component.html'
});


Make sure you have node.js and jspm installed globally.

You can download or clone the simple repo here: https://github.com/Oceanware/ng157es6jspm

After downloading or cloning, navigate to the folder and open a command prompt (terminal window for OS X or Linux) and execute:

npm install

npm start

Your browser will open and display the application.


You can start to see the simplicity of Angular 1.5.7 and ES6: clean JavaScript files, and it’s very easy to understand the intent of the code.  Fun programming!

Have fun and be productive with Angular 1.5.7 and ES6.

Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.

ES2015 (ES6) or TypeScript


I get the question,  “Karl, why do you use ES2015 (ES6)?”

The answer I give depends on the context of the question; in other words, on the scenario being asked about.

I will answer the question for each of these scenarios:

  • Authoring JavaScript Framework
  • Authoring Large Line of Business Application with more than a few developers
  • Authoring a small application with one or a few developers

Authoring JavaScript Framework

Without equivocation, I would use TypeScript for a JavaScript framework.

Why? Because I can transpile to ES2015 or ES5, so I can deliver my framework in TypeScript, ES2015, or ES5.

Several years from now, I’ll be able to transpile my framework to ES vNext (as long as TypeScript is still around and maintained properly), effectively future proofing my code.

I don’t have the hassle of 3rd party .d.ts files that are old or incomplete because my framework probably does not have many 3rd party dependencies.

If my framework does have them, I have the resources to create the required .d.ts files.  I’ll pay this tax because the benefits outweigh the .d.ts file hassles.

Authoring Large Line of Business Application with More Than a Few Developers

Without equivocation, I would use TypeScript for building a large line of business application with more than a few developers.

Why? Because I can leverage the compile-time checking, strong typing, and interfaces that TypeScript offers; additionally, I would use a linter with very strict rules.

I say this for several reasons.  First, because in a large team project like this, you need to rein in some developers so that they don’t stray from the path of sensible and maintainable TypeScript (JavaScript).  I care much more about creating a maintainable product than I do about someone’s feelings or creative coding desires.  The very strict linter rules also help developers sharpen their JavaScript coding skills.

Second, because TypeScript does perform strong type checking at compile time.

Back all this up with unit and integration tests, and you have the basis for a very successful large line of business application.

Authoring a Small Application with One or a Few Developers

Here is where my answer to the original question changes from TypeScript to ES2015.

For all of my personal projects and blog post projects, I’ll use ES2015 (ES6).

For small team projects, I would still like to use ES2015.


  • Because I write simple ES2015 JavaScript that looks like C#
  • Because I write very clean ES2015 that is very easy to read
  • Because I use an ES6 linter with very strict rules; it helps keep my ES6 clean, and I’ve learned a lot from the linter rules I violated
  • Because I don’t want to pay the 15% tax for authoring TypeScript (adding the type definitions to the code, getting the .d.ts files downloaded and imported in the code; this 15% does not count time lost to missing or incorrect .d.ts files)
  • Because I don’t want to deal with 3rd party .d.ts files that are either out of date or missing, which can be a real bummer
  • Because I like the dynamic nature of JavaScript and leverage that capability on occasion
  • Because for a long time, basically a single developer was managing the DefinitelyTyped GitHub repo.  I looked at it yesterday and it seems to have gotten a face lift, with many new developers helping out
  • Because the tool Microsoft ships for creating .d.ts files does not render a .d.ts file that can be used as-is; I always found myself having to add more code to get them to work
  • Because using a framework that was authored in TypeScript does not mean you have to use TypeScript

Obviously, these are my opinions, and I know that others can easily come back with solutions or comments, but after many projects using TypeScript, this is what I’ve decided to do.

I don’t want to give the impression that there is a huge gap between the perfect .d.ts files and the few that I had trouble with.  But those few I needed, well, I needed them.  It got old dealing with this problem.  Remember, demo-ware does not have this problem.  It’s when you’re developing real applications that need libraries for services and features, and those libraries have missing or outdated .d.ts files, that the bummer begins.  I think if Microsoft delivered a tool that I could point at a JavaScript library and have it render a usable .d.ts file, I might back off on this gripe.  But I have tried to write the missing .d.ts files myself and spent precious time messing with this.

All developers need to evaluate languages, tools, frameworks, and 3rd party dependencies for all of their projects, and pick the ones that meet the needs of that specific project.

Select the best tool for the job: not because of what the framework was written in, not because it’s new and shiny, and not because other developers use it, but because it is the best fit for the given requirements.


So if you ask me whether I use TypeScript or ES2015, my first question will be: what is the scenario or use case?  Then I can answer based on the above criteria.

Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.

Visual Studio XAML Designer Needs Culture Support


I just got back from a trip to Japan.  It’s amazing when you leave your country and discover how developers around the world are solving problems.

One problem that should be very easy to address is actual globalization and culture support in the XAML Designer.

I met with several customers that ship WPF products supporting Japanese, Chinese, and English.  These customers use the XAML Designer to view their forms in all supported languages.  Currently it is a real pain to quickly display XAML forms in other languages.

One customer actually wrote their own localization system to get around the pain of using the current culture support in the Visual Studio 2015 XAML Designer to render their forms in multiple languages.

There is a very good article on Code Project that helps developers to be productive:  http://www.codeproject.com/Articles/35159/WPF-Localization-Using-RESX-Files


I’m proposing to the XAML Tools Team that they provide a ComboBox at the bottom of the XAML Designer that allows the developer to switch cultures and, when they do, reloads the designer using that culture.

Microsoft is a global company; its customers write software all over the world in many languages. The XAML Designer has an opportunity to come alongside these developers and enable them to be more productive.  I hear the term “productive” in many Microsoft keynotes and presentations.  Thank you for helping international developers all over the world.



Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.

Thin ViewModels


I remember when I first started using MVVM, I found myself putting not only view logic but also small pieces of business logic in my viewmodels.  It was testable and reduced the number of classes related to the front end.  These viewmodels were not blobs, but they did take on more responsibilities than a single-responsibility class would have.

Then in 2013 I changed my application design so that any business logic was relegated to a business logic class that was injected into the viewmodel.  My viewmodels were now leaner and followed the single-responsibility principle much more closely.

Motivation for Thin View Models

Motivation for thin viewmodels is strong and simple: cross-platform.

Given the advent of Xamarin.Forms and its capability to author applications for UWP, iOS, Android, and OS X (Xamarin only), as an application architect I strongly recommend that applications put their business logic in Portable Class Libraries (PCLs) and keep their viewmodels thin.  If you do this, you can also share that same business logic with desktop platforms like WPF or Windows Forms.

If you have platform specific logic you can abstract it behind one or more interfaces and then inject it into the proper layer.

I’m excited about Xamarin.Forms and the potential it has for cross-platform development using .NET programming languages.

While Xamarin.Forms does not currently work with OS X, the PCL libraries you author will.  You’ll need Xamarin Studio to author the OS X UI.

Using this architecture also gives you the advantage of unit and integration testing on a single UI agnostic shared code base.


In the end, it’s about giving yourself, your team, and your company the option and capability to meet market-driven cross-platform requirements without a rewrite.

Exciting times to be an architect and developer.

Hope this helps someone and have a great day.

Just a grain of sand on the world’s beaches.