An Alternative to the Traditional Daily Stand-up Meeting Format

TL;DR: During the daily stand-up meeting, focus on the project, not the individual. Structure your stand-up to reflect that by using the board as the focal point and rephrasing your questions accordingly.

The daily stand-up has become a very common practice in software teams, especially in Agile ones, as a way to communicate current status. These 3 questions (or a similar formulation) are typical in these meetings:

  • What did you do yesterday?
  • What’s your goal for today?
  • Is there anything blocking you from making progress?

The Problem

Our stand-ups were taking over 20 minutes, and most importantly, people would often leave the meeting feeling misaligned. This didn’t help the team feel confident about our focus and short-term progress. For a little while, we tried tweaking some things about the meeting, like the scheduled time, reminding people to just report “what matters”, etc. Regardless of what we did, the result was essentially the same. Over time, this perpetuated the notion that the problem lay in the participants’ communication skills. But what if that wasn’t the case? What if the problem lay somewhere else?

After much thought, while contemplating the meeting questions, this dawned on me:

  • What did you do yesterday?
  • What’s your goal for today?
  • Is there anything blocking you from making progress?

See the pattern? The object of these questions is you, the individual, not the project. I realized these questions are framed in a way that could lead people into a mindset of having to explain how they spent (or will spend) their entire workday. If that were the case, it would explain why people tended to linger on their answers and provide information that was irrelevant to the stand-up: they just needed to defend themselves. If somebody were to ask you, regardless of the context, what you did yesterday, you would feel inclined to recreate your entire day, relevant and irrelevant details included.

Another realization that followed was that our stand-up consisted of everyone on the team answering the previous questions, regardless of whether they had anything to contribute to that day’s status update. Who wants to appear as if they did nothing the previous day, especially when everybody else on the team seems to have plenty to report? Again, if what we wanted was useful information out of this meeting, this protocol did not seem to help.

So we decided to make some changes to our stand-up meeting format.

Our Solution

We concluded we needed to shift the focus more towards the project. Hence, we implemented these 2 changes:

1. Rely on the scrum board for the updates

Since we are a remote team, we now make sure the board is screen-shared every time. We also let the cards that are not done yet (i.e., In Progress, Review or Testing) guide the stand-up by simply going over them one by one. Obviously, each card has an assignee who’s usually the “reporter” for that task, but that doesn’t stop anybody else from providing related information. Another plus is that discussing all tasks at hand puts us all on the same page after each meeting. Moreover, it creates an opportunity to re-prioritize and re-allocate tasks on the spot as unforeseen needs arise.

2. Rephrase the status questions

We also changed the questions to (while focusing on each card on the board):

  • What was done yesterday for this card?
  • What’s the next step to be done on this card?
  • What are the blockers for this card?

This version is much more focused on the project itself and on the tasks at hand. It gives us a common focus that’s more actionable and less defensive. In particular, the second question helps clarify short-term goals for junior developers, as it’s phrased in a way that opens the floor for the entire team to provide feedback.

Results

After three months of using this new format, our average stand-up time is under 10 minutes, and best of all, people report higher satisfaction. We all leave the meeting with a concrete understanding of the day’s priorities and goals, which allows our team to adapt more rapidly to current needs.

Babel and TypeScript Together

tl;dr: If you need to use Babel-generated and TypeScript-generated files together, make sure both the Babel and TypeScript transpilers are targeting the same ES/module technology.

Currently, I’m working on a legacy Angular 1 project that uses TypeScript (TS). The codebase has grown massively, and given Angular 1’s performance issues with complex client apps, the team has decided to look into alternatives. One of them is React.js.

Side note: Part of the challenge we’re facing is ensuring production up-time and ongoing new feature development. This pretty much eliminates the option of a complete rewrite. We need to ensure backwards compatibility with existing code in any re-engineering effort we take on. One of the options we thought of is “componentizing” central parts of the app, which we could pull out of the main project into a separate solution that can be rewritten with better patterns and better overall performance. Given that React has a better data-binding mechanism than Angular 1, we expect a rewrite of these central components in React would make for considerable performance improvements in the app. Moreover, this abstraction of common logic would improve readability, reusability and maintainability of the main app.

Since the main app uses TS, we needed these transpiled React components to be compatible with the TS code. But … how?

The answer is in fact pretty simple: make both the TS and Babel transpilers produce the same kind of artifact. Let me elaborate…

In our case, our TS transpiler was configured to generate ES5, AMD files. So we needed our Babel transpiler to also produce ES5, AMD files. Then, we could easily import Babel-generated files into our TS-generated ones. Babel out of the box generates ES5 files, so all we needed was to take those and output them as AMD modules. It turns out there’s already a plugin for that.
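
For reference, here’s a minimal sketch of what that alignment could look like. The specific preset and plugin names (babel-preset-es2015, babel-preset-react, babel-plugin-transform-es2015-modules-amd) are assumptions for a Babel 6 setup, not details taken from our actual project:

tsconfig.json (make TS emit ES5 + AMD):

{
    "compilerOptions": {
        "target": "es5",
        "module": "amd"
    }
}

.babelrc (make Babel emit ES5 + AMD as well):

{
    "presets": [["es2015", { "modules": false }], "react"],
    "plugins": ["transform-es2015-modules-amd"]
}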

Empowered with that plugin, we’re now able to write React components in ES6, and have them be used by our TS code within our app.

Enabling server-side config values in JS files when using ASP.NET MVC

If you’re working on an ASP.NET MVC app, chances are you’re dealing with a Web.config file to manage your app config settings. If you’re using the latest MVC (6 at the time of this writing), you’re probably using a JSON file to manage that instead. Either way, at some point you’re managing application configs on the server side, and you may want to pass some of those settings on to a piece of front-end code.

Let’s assume you’re dealing with a Web.config for now (although most of this applies to other cases as well). The typical way to handle this involves using System.Configuration.ConfigurationManager to pull the settings from the Web.config into your C# code. From there, you could have your cshtml files read those values to make logic decisions in your views. That’s all very easy, but now let’s assume you have a piece of JavaScript code that needs access to these configs, and that piece of code lives in a separate .js file. Here’s an easy and elegant way to handle it.

To simplify this article, let’s assume you have a single cshtml file in your web app, which gets loaded on the first request to the server (btw, that’s probably the case if you’re using an SPA framework like Angular). And to make it even simpler, let’s assume you have a single config value you need to deal with (e.g., Foo) and it’s in the appSettings section of your Web.config.

First: pull your settings from the Web.config file to your cshtml

Here’s what your config value would look like in your Web.config:

<!-- other stuff here -->
<appSettings>
    <add key="Foo" value="true"/>
</appSettings>
<!-- other stuff here -->

Now, this is what you need to do (or something very similar) to pull this into your cshtml file and make it available to your HTML markup:

<script type="text/javascript"
        src="@Url.Content("~/Scripts/configs.js")"
        data-foo="@ConfigurationManager.AppSettings["Foo"]">
</script>

As you may notice, we’re using the conventional HTML data attributes to initiate the transfer of this config value to the JavaScript environment.

Second: expose your settings to the JavaScript runtime

There’s one more step to complete this transfer. As you may have noticed, the previous snippet references a configs.js file in the Scripts folder to do so. Here’s what that file could look like:

(function(window) {
    "use strict";

    // the currently executing script is the last <script> in the DOM at parse time
    var scripts = document.getElementsByTagName("script"),
        lastScript = scripts[scripts.length - 1];

    // read the data-* attributes off that script tag and expose them globally
    window.configs = {
        foo: JSON.parse(lastScript.getAttribute("data-foo"))
    };
})(window);

What this does is use a bit of JavaScript trickery to grab the data attributes of that script tag from the DOM. It then creates a JavaScript object with the proper setting and makes it available globally via the window.configs property.
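
From that point on, any script loaded after configs.js can read the setting. A quick, hypothetical usage:

// anywhere in your front-end code, after configs.js has loaded
if (window.configs.foo) {
    // turn on whatever behavior the "Foo" setting guards
}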

Conclusion

I find this approach pretty clean/elegant. It could be modified by:

  • making it more dynamic/reusable (e.g., the HTML data attributes could be dynamically created by the cshtml engine, and the properties in the window.configs object could also be created dynamically).
  • extending the window.configs object to manually set up your own Front-End-only config values.
  • integrating it with built-in mechanisms within any Front-End frameworks you may be using.

In any case, it’s a good start.

Bash + crontab for periodic URL content checking

Not long ago, I found myself checking a website every day or so to find out whether a particular product had become available (in stock). After a handful of times, I thought: “why don’t I create a little script to do this for me, and forget about it in the meantime?”

Enter bash scripting + curl + grep

A few years ago, I would have used Selenium (or some other browser-automation tool) to solve this problem. After becoming more familiar with Unix systems and all their handy tools, I now think bash is a much lighter-weight alternative. A combination of bash, curl, and grep would do the trick.

The basic idea is to execute the curl command as such:

curl -s http://www.foo.com | grep -q -i "bar"

What this does is execute a GET request to http://www.foo.com and grep the response for the word bar. Then all we have to do is wrap it in an if statement and perform some sort of action to let us know when the condition is met. In my case, I wanted to leverage Mac OS X’s Notification Center, so I found a way to trigger an AppleScript notification, which shows up in Notification Center (with the added native Growl-like popup). The resulting snippet looks something like this:

if curl -s http://www.foo.com | grep -q -i "bar"; then
    osascript -e 'display notification "Found text in URL" with title "curl/grep test" sound name "default"'
else
    echo "not found"
fi

As you can tell, you can configure the message, title, and sound of the notification. Here’s a useful blog post with more details on AppleScript notifications.

Now that we have the basic script set up, you may be wondering how to run it on some sort of loop so that it gets triggered every so often. You may be inclined to write a while loop or the like, but there’s a native, much better way to do this on Unix systems.

Enter crontab

crontab (short for cron table) is simply a command for managing the list of shell commands that run on a set schedule. People usually refer to these as cron jobs. You can read more about Cron here.

As you can imagine, at this point we can simply (a) save our bash script as a .sh file, and (b) create a cron job to execute it at a set interval. With this, you can just go on with your life using your system regularly, and let this background task alert you when it finds what you want on whatever website.
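
For example, assuming the script was saved as /Users/you/scripts/check-url.sh (a made-up path) and made executable, a crontab entry (added via crontab -e) to run it every 15 minutes could look like this:

# m h dom mon dow  command
*/15 * * * * /bin/bash /Users/you/scripts/check-url.sh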

Final shell script

I’ve put together a more generic form of this bash script in this github repo. It will allow you to customize it even further. You can download it and look at its usage (via -h) for more details.

Enjoy!

Working With StructureMap 3 Profiles


interface ICar { }
public class Brandless : ICar { }
public class Honda : ICar { }
public class Toyota : ICar { }

[TestFixture]
public class WorkingWithProfiles
{
    [Test]
    public void ShouldRespectProfiles()
    {
        var container = new Container(_ =>
        {
            _.Scan(s => s.TheCallingAssembly());
            _.Profile("honda", r => r.For<ICar>().Use<Honda>());
            _.Profile("toyota", r => r.For<ICar>().Use<Toyota>());
        });

        try
        {
            container.GetInstance<ICar>();
        }
        catch (StructureMapConfigurationException)
        {
            Assert.Pass("ICar was not configured in the main container");
        }

        container.GetProfile("honda").GetInstance<ICar>().ShouldBeType<Honda>();
        container.GetProfile("toyota").GetInstance<ICar>().ShouldBeType<Toyota>();
    }

    [Test]
    public void ShouldRespectSingletonsInProfiles()
    {
        var container = new Container(_ =>
        {
            _.Scan(s => s.TheCallingAssembly());
            _.Profile("honda", r => r.For<ICar>().Use<Honda>().Singleton());
            _.Profile("toyota", r => r.For<ICar>().Use<Toyota>());
        });

        container.GetProfile("honda").GetInstance<ICar>().ShouldBeSameAs(container.GetProfile("honda").GetInstance<ICar>());
        container.GetProfile("toyota").GetInstance<ICar>().ShouldNotBeSameAs(container.GetProfile("toyota").GetInstance<ICar>());
    }

    [Test]
    public void ShouldRespectSingletonsInRootContainer()
    {
        var container = new Container(_ =>
        {
            _.Scan(s => s.TheCallingAssembly());
            _.For<ICar>().Use<Brandless>().Singleton();
            _.Profile("honda", r => r.For<ICar>().Use<Honda>().Singleton());
            _.Profile("toyota", r => r.For<ICar>().Use<Toyota>());
        });

        container.GetInstance<ICar>().ShouldBeSameAs(container.GetInstance<ICar>());
        container.GetProfile("honda").GetInstance<ICar>().ShouldBeSameAs(container.GetProfile("honda").GetInstance<ICar>());
        container.GetProfile("toyota").GetInstance<ICar>().ShouldNotBeSameAs(container.GetProfile("toyota").GetInstance<ICar>());
    }
}

public interface ICoffee { }
public interface IBeans { }
public interface IMilk { }

public class Latte : ICoffee
{
    public readonly IBeans Beans;
    public readonly IMilk Milk;

    public Latte(IBeans beans, IMilk milk)
    {
        Beans = beans;
        Milk = milk;
    }
}

public class SpecialtyLatte : ICoffee
{
    public readonly IBeans Beans;
    public readonly IMilk Milk;

    public SpecialtyLatte(IBeans beans, IMilk milk)
    {
        Beans = beans;
        Milk = milk;
    }
}

public class BlendedBeans : IBeans { }
public class KenyaBeans : IBeans { }
public class SuperSecretBeans : IBeans { }
public class WholeMilkGradeA : IMilk { }
public class WholeMilkGradeB : IMilk { }

[TestFixture]
public class MoreComplexProfileDefinitions
{
    [Test]
    public void WorkingWithNestedContainers()
    {
        var container = new Container(c =>
        {
            c.For<ICoffee>().Use<Latte>();
            c.For<IBeans>().Use<BlendedBeans>();
            c.Profile("localshop", r =>
            {
                r.For<IBeans>().Use<KenyaBeans>();
                r.For<IMilk>().Use<WholeMilkGradeA>();
            });
            c.Profile("starbucks", r => r.For<IMilk>().Use<WholeMilkGradeB>());
            c.Profile("specialtyshop", r =>
            {
                r.For<ICoffee>().Use<SpecialtyLatte>();
                r.For<IBeans>().Use<SuperSecretBeans>();
                r.For<IMilk>().Use<WholeMilkGradeA>();
            });
        });

        using (var localshop = container.GetNestedContainer("localshop"))
        {
            var latte = (Latte)localshop.GetInstance<ICoffee>();
            latte.Beans.ShouldBeType<KenyaBeans>();
            latte.Milk.ShouldBeType<WholeMilkGradeA>();
        }

        using (var starbucks = container.GetNestedContainer("starbucks"))
        {
            var latte = (Latte)starbucks.GetInstance<ICoffee>();
            latte.Beans.ShouldBeType<BlendedBeans>();
            latte.Milk.ShouldBeType<WholeMilkGradeB>();
        }

        using (var specialtyshop = container.GetNestedContainer("specialtyshop"))
        {
            var specialtyLatte = (SpecialtyLatte)specialtyshop.GetInstance<ICoffee>();
            specialtyLatte.Beans.ShouldBeType<SuperSecretBeans>();
            specialtyLatte.Milk.ShouldBeType<WholeMilkGradeA>();
        }
    }

    [Test]
    public void WorkingWithProfiles()
    {
        var container = new Container(c =>
        {
            c.For<ICoffee>().Use<Latte>();
            c.For<IBeans>().Use<BlendedBeans>();
            c.Profile("localshop", r =>
            {
                r.For<IBeans>().Use<KenyaBeans>();
                r.For<IMilk>().Use<WholeMilkGradeA>();
            });
            c.Profile("starbucks", r => r.For<IMilk>().Use<WholeMilkGradeB>());
            c.Profile("specialtyshop", r =>
            {
                r.For<ICoffee>().Use<SpecialtyLatte>();
                r.For<IBeans>().Use<SuperSecretBeans>();
                r.For<IMilk>().Use<WholeMilkGradeA>();
            });
        });

        using (var localshop = container.GetProfile("localshop"))
        {
            var latte = (Latte)localshop.GetInstance<ICoffee>();
            latte.Beans.ShouldBeType<KenyaBeans>();
            latte.Milk.ShouldBeType<WholeMilkGradeA>();
        }

        using (var starbucks = container.GetProfile("starbucks"))
        {
            var latte = (Latte)starbucks.GetInstance<ICoffee>();
            latte.Beans.ShouldBeType<BlendedBeans>();
            latte.Milk.ShouldBeType<WholeMilkGradeB>();
        }

        using (var specialtyshop = container.GetProfile("specialtyshop"))
        {
            var specialtyLatte = (SpecialtyLatte)specialtyshop.GetInstance<ICoffee>();
            specialtyLatte.Beans.ShouldBeType<SuperSecretBeans>();
            specialtyLatte.Milk.ShouldBeType<WholeMilkGradeA>();
        }
    }

    [Test]
    public void GettingContainerViaGetProfileIsNotTheSameAsGettingProfileViaGetNestedContainer()
    {
        var container = new Container(c =>
        {
            c.For<IMilk>().Use<WholeMilkGradeB>();
            c.Profile("specialtyshop", r => r.For<IMilk>().Use<WholeMilkGradeA>());
        });

        var specialtyShopProfile = container.GetProfile("specialtyshop");
        var specialtyShopAsNestedContainer = container.GetNestedContainer("specialtyshop");

        specialtyShopProfile.ProfileName.ShouldEqual("specialtyshop");
        specialtyShopAsNestedContainer.ProfileName.ShouldEqual("specialtyshop - Nested");

        container.Role.ShouldEqual(ContainerRole.Root);
        specialtyShopProfile.Role.ShouldEqual(ContainerRole.ProfileOrChild);
        specialtyShopAsNestedContainer.Role.ShouldEqual(ContainerRole.Nested);
    }
}


Testing Q Promises in Node.js


The code

Let’s say you have a Node module (named potentialPartner.js) that returns a promise, as such:

var Q = require("q");

function willYouLoveMe(cond){
	var deferred = Q.defer();

	if (cond === "even if I were out of shape")
		deferred.reject("I only like guys in shape");
	else
		deferred.resolve("I love you unconditionally!");

	return deferred.promise;
}

module.exports = {
	willYouLoveMe: willYouLoveMe
};

How can we test it?

To cover all scenarios, you’ll need a way to exercise all possible outcomes of the promise (in this case, a fulfilled promise and a rejected promise). Using mocha, should, and (obviously) Q, here’s what the tests may look like:

var should = require("should"),
	Q = require("q");

describe("A potential partner", function() {
	var potentialPartner = require("../potentialPartner");

	it("should promise to love me", function() {
		var promise = potentialPartner.willYouLoveMe();

		promise.should.have.property("then");
		promise.should.have.property("fail");
	});

	it("should promise to love me unconditionally", function(done) {
		var promise = potentialPartner.willYouLoveMe("no matter what");

		promise.done(function(){ // onFulfilled
			done();
		}, function() { // onRejected
			should.fail("I expect my partner to love me unconditionally!");
			done();
		});
	});

	it("should not promise to love me ONLY if I'm in shape", function(done) {
		var promise = potentialPartner.willYouLoveMe("even if I were out of shape");

		promise.done(function(){ // onFulfilled
			should.fail("I expect my partner to not just promise me to love me if I'm in shape :(");
			done();
		}, function() { // onRejected
			done();
		});
	});
});

Note that:

  1. The very first test (“should promise to love me”) is probably unnecessary. However, I like to have very specific tests that make the cause obvious when they fail. This makes them more effective, and the failures quicker to debug.
  2. I’m using mocha’s done function to handle the async results from the method under test. Otherwise, the async tests would complete before the assertions are executed, which would cause mocha to report false positives.
  3. I’m using promise.done() to hook up my assertions. This is key! You may feel inclined to chain a then to your promise, or a fail (depending on the condition you want to test), but the problem with that has to do with the way exceptions are handled internally by then/fail. The bottom line is that such handling can swallow failing assertions or muddle the way they get reported by the test framework, which hampers debuggability. promise.done(), on the other hand, plays more nicely with your test reports (see the sketch after this list).
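
To illustrate that last point, here’s a minimal sketch (not from the original tests) of the difference, reusing willYouLoveMe:

// With then(), an assertion that throws inside the handler is captured by the
// promise that then() returns, so done() is never called and the test times out
// instead of reporting the real failure.
potentialPartner.willYouLoveMe("no matter what").then(function(answer) {
	answer.should.equal("some wrong expectation"); // throws, but gets swallowed
	done();
});

// With done(), unhandled errors are rethrown, so the failing assertion
// surfaces in the test output.
potentialPartner.willYouLoveMe("no matter what").done(function(answer) {
	answer.should.equal("I love you unconditionally!");
	done();
});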

Node.js module testing via rewire

When testing Node modules, one of the challenges you face is dealing with other module dependencies (either third-party modules or your own ones). Let’s say you want to test this module:

var electricSaw = require("./electricSaw"),
    table = {
        wood: {}
    };

function makeTable() {
    electricSaw.cut(table.wood);
    table.finished = true;
    return table;
}

module.exports = {
    makeTable: makeTable
};

Notice electricSaw is an external module being referenced by the module you want to test. How can you deal with it?

  1. You could not worry about it and test the whole thing in its entirety. This could be useful in some situations (e.g., acceptance testing), but not so much in others (e.g., unit testing). It depends on what your end goal is.
  2. If your goal is to isolate your module, then you’ll need to somehow figure out how to inject a test double instead of the actual electricSaw module.

If you decide to use test doubles, you have mainly 2 options: either replace Node’s require function yourself (which requires you to know more about Node’s internals), or rely on existing modules like rewire, proxyquire, or SandboxedModule. Let’s look at how we can leverage rewire, along with mocha (test framework), sinon (for mocking) and should (for BDD-style assertions) to accomplish this:

var should = require("should"),
    sinon = require("sinon"),
    rewire = require("rewire");

describe("A carpenter", function() {
    var carpenter = rewire("./carpenter"),
        electricSaw = {
            cut: sinon.spy()
        };

    describe("when using an electric saw", function() {
        before(function() {
            carpenter.__set__("electricSaw", electricSaw);
        });

        it("should be able to make a table", function() {
            var table = carpenter.makeTable();

            electricSaw.cut.calledOnce.should.be.true;
            electricSaw.cut.calledWith({}).should.be.true;
            table.finished.should.be.true;
        });
    });
});

The most important concept in this snippet is the use of rewire instead of require to import the module under test. This is what allows you to replace its dependencies with your own test doubles (via rewire’s __set__ function), and it’s the main key to enabling this whole mocking approach. Eventually, as the complexity of the code under test grows, you’ll also need to get skilled with different mocking concepts.

Here’s a github project I use to play with this Node mocking approach.

Spring.NET AOP MethodBeforeAdvice Example

Assuming we have the following class:

namespace Example
{
	public class Foo
	{
		public void Bar()
		{
			// method body
		}
	}
}

If we wanted to intercept the call to Bar() with an AOP advice triggered before the method call, this is what we’d do using Spring.NET:

1. Implement the Advice:

using System.Reflection;
using Spring.Aop;

namespace Example
{
	public class MyAopAdvice : IMethodBeforeAdvice
	{
		public void Before(MethodInfo method, object[] args, object target)
		{
			// advice body
		}
	}
}

IMethodBeforeAdvice is part of the Spring.Aop namespace.

2. Configure Spring.NET (via xml in this case)

<object id="myAopAdvice" type="Example.MyAopAdvice" />

<object id="foo" type="Spring.Aop.Framework.ProxyFactoryObject">
  <property name="Target">
    <object type="Example.Foo" autowire="autodetect"/>
  </property>
  <property name="InterceptorNames">
    <list>
      <value>myAopAdvice</value>
    </list>
  </property>
</object>

Note that I used Spring’s ProxyFactoryObject to create a proxy for Foo. This is because Foo doesn’t implement any interface, so Spring.NET needs to create a proxy class around it in order to apply the pointcuts.

Last, but not least, we’d need to make Bar virtual so that it can be overridden in the proxy.
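
In other words, the Foo class from the beginning of the post would end up looking like this:

namespace Example
{
	public class Foo
	{
		// virtual so the Spring.NET proxy can override it and apply the advice
		public virtual void Bar()
		{
			// method body
		}
	}
}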

And that’s it. Any time Foo.Bar() is called on an instance resolved from the Spring container, MyAopAdvice will execute before the method’s body is executed.
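
For instance, here’s a minimal sketch (not from the original post) of resolving the proxied instance from the Spring.NET application context, assuming the XML above has been registered with it:

using Spring.Context;
using Spring.Context.Support;

// ...

IApplicationContext ctx = ContextRegistry.GetContext();
var foo = (Foo)ctx.GetObject("foo"); // returns the AOP proxy, not a plain Foo

foo.Bar(); // MyAopAdvice.Before(...) runs first, then the original method body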

Note

The previous XML configuration instructs Spring.NET to create a pointcut for any virtual method in Foo. In our case, since we have only one method (Bar), that’s not a big deal. However, if your class happens to have multiple methods and you only want to create a pointcut for a single one, there are two ways you could accomplish it:

  1. Only make the method where you want the pointcut virtual, and leave the rest non-virtual.
  2. Use a RegEx pointcut advisor to configure your advice, as shown below:
<object id="myAopAdvice" type="Spring.Aop.Support.RegularExpressionMethodPointcutAdvisor">
  <property name="pattern" value="Bar"/>
  <property name="advice">
    <object type="Example.MyAopAdvice" />
  </property>
</object>

Converting C# Razor Models Into JavaScript objects

If you’re using ASP.NET MVC for your web app, then with the popularity of front-end MV* frameworks (Backbone, Knockout, etc.), at some point you’ll probably need to access the properties of your Razor model from your JS code. If you have your JS code embedded in your cshtml (bad!), you could simply do something like:

<script type="text/javascript">
	var foo = @Model.Foo;
	// rest of your JS code now has access to foo
</script>

But if you want to have your JS code in a separate file (and why wouldn’t you?), then you’re faced with having to somehow pass the Razor model into your JS file. In other words, you can’t just write @Model.Foo in your JS file (that’s Razor-only syntax). You could manually map every Razor property into a JS object and pass it on to the JS file, but you obviously want to avoid all that manual work.

A better alternative is to have a simple/reusable way to dynamically serialize your Razor model into a JS object automagically. Here’s an easy way to accomplish that:

  1. Serialize your Razor model into a JSON object (I prefer Json.Net for this).
  2. Convert your JSON object into a JS object (using native JS code or json2.js for better IE support).
  3. Pass the JS object from your cshtml file to your js file.

Code Please!

First add an HtmlHelper extension in your C# code for reusability and easy invocation:

// Note: extension methods must live in a public static class; this also assumes
// usings for System.Web, System.Web.Mvc, Newtonsoft.Json, Newtonsoft.Json.Converters
// and Newtonsoft.Json.Serialization.
public static IHtmlString ToJson(this HtmlHelper helper, object obj)
{
	var settings = new JsonSerializerSettings {
		ContractResolver = new CamelCasePropertyNamesContractResolver()
	};
	settings.Converters.Add(new JavaScriptDateTimeConverter());
	return helper.Raw(JsonConvert.SerializeObject(obj, settings));
}

Again, this assumes you’re using Json.Net. This simply serializes a C# object into JSON (maintaining JS naming conventions by using camel-case names for the properties). Next, add a JS helper function (you probably have a local util library in your JS code) to simplify things a bit:

util.toJS = function (json) {
	// essentially a deep copy; the Razor-emitted JSON is already a valid object literal
	return JSON.parse(JSON.stringify(json));
};

The last thing is to actually perform the serialization in your cshtml:

<!-- somewhere in your cshtml file -->
<script type="text/javascript">
	require(['index'], function (api) {
		var model = util.toJS(@(Html.ToJson(Model)));
		$(function () { api.exec(model); });
	});
</script>

This assumes you’re using RequireJS. The important things here are how we construct the model variable (the util.toJS(@(Html.ToJson(Model))) call) and how we pass it to the JS file (via api.exec(model)). Of course, you don’t have to use RequireJS, in which case you’d need to adapt this code a bit. But you get the idea…

The final behavior is that you’ll have a JS object mirroring your Razor model. So, for example, if your Razor model looks like this: { Foo = "bar", Baz = "bah" }, your JS object will look like this: { foo: "bar", baz: "bah" }

As you can see, with this pattern you can make your cshtml file much cleaner, and move practically all your JS logic out of it.

Unit testing JavaScript modules using RequireJS and Jasmine

One of the things we’ve been doing a lot on my most recent project is JS development (thanks to Joel’s influence). One of the useful JS patterns we’ve taken advantage of to help us organize our front-end code is the Module pattern – and to help us manage/wire-up these modules, we’ve been using RequireJS. We’ve also incorporated Jasmine into our project to support BDD on our front-end code.

In this post, I want to discuss a simple approach to unit testing RequireJS modules using Jasmine. Warning: we’ll be violating TDD by not doing a test-first approach, but it’s all for illustration purposes.

The Example

Let’s assume our front-end functionality consists of taking a list of students that have joined a class and presenting them in an HTML list.

SRP

To make things easier, let’s follow the SRP (I know… even with JS code!) and break down this functionality into two modules: students, to handle retrieving the list of students, and klass (as in school class), to handle building the student roster markup. This separation will also help us test things – you’ll see why.

The Modules

Each module should be placed in its own .js file. First, the students module:

define("students", [], function(){
	var enrolledStudents;

	var self = {
		_init: function(){
			enrolledStudents = {
				math: ['John', 'Carl', 'Joseph'],
				chemistry: ['Rich', 'Alex']
			};
		},
		getEnrolled: function(subject){
			return enrolledStudents[subject];
		}
	};

	self._init();

	return self;
});

As you can tell, it’s a very simple module, and very “non-dynamic”. Don’t get too caught up in this implementation. Keep in mind that in theory it could be replaced with an Ajax call to a server providing the actual students, or some other mechanism. The point is that when you’re testing the modules that depend on this module, this implementation is irrelevant.

Next, the klass module (the one we want to unit test):

define("klass", ["students"], function(students){
	var self = {
		getStudentRoster: function(subject){
			var list = students.getEnrolled(subject);
			var html = $('<ul></ul>');
			$.each(list, function(i, item){
				html.append('<li>' + item + '</li>');
			});
			return html;
		}
	};

	return self;
});

The Jasmine test

Now we’re ready to work on the Jasmine test for our klass module. The key here is to use a stub for the students module so that we can focus only on the unit under test (the klass module in this case) – just like you’d do when unit testing server-side code.

For the stubbing, we’ll rely on Jasmine spies.

Here’s our Jasmine test:

var studentsStub = { getEnrolled: function(subject){} };
define("students", [], studentsStub);

require(["klass"], function(klass){
	describe('klass module', function(){
		it('should get students roster list per class', function(){
			spyOn(studentsStub, 'getEnrolled').andCallFake(function(subject){
				if (subject === 'foo')
					return ['Joe', 'Richard'];
			});
			expect(klass.getStudentRoster('foo')).toHaveHtml('<li>Joe</li><li>Richard</li>');
		});
	});
});

Notice that we’re re-defining the students module in RequireJS for our test, “injecting” our studentsStub object into the RequireJS engine. Then, within the test implementation, we define a fake response for our stub’s getEnrolled() function. This ensures that the klass module uses our stubbed student list whenever it calls students.getEnrolled(). Again, this allows us to focus solely on the klass module’s functionality – i.e., by doing this stubbing, this test is not concerned with the actual students module’s behavior.

One thing I’m not showing here is the HTML needed to run the test. I’ll leave that to you, but just keep in mind that among your application modules you’d only need to import the klass module (i.e., the module under test) in your script tags – the students module gets stubbed out by the spec itself.

In Short

This simple example (although probably not very applicable in real life) shows how universal basic principles can also be applied to JS code to make things more unit-testable. Namely, the SRP, seams, and the module pattern.