Synthetic Transaction Monitoring

As product owners, we need to know if and when an outage happens, and we need to get in front of it rather than learning about it from a customer calling support. The good news is that most teams understand the need and are ramping up their monitoring efforts. Fortunately, most of our modern applications/APIs have some health check that we rely on heavily for monitoring. It is an excellent start; we need to know if the APIs are up and running, but that alone will not help.

Let me introduce a new term, and a must-have as we embark on building new monitoring and alerting: ‘Synthetic Transaction Monitoring.’ Synthetic transaction monitoring is running a script that simulates the most typical user behavior of our application end to end. Think of it as your core smoke test. One kind of check verifies that the site is up and the APIs are all responding; another checks that the application as a whole is behaving as it is supposed to. Find the most critical and most used user path in the application, script that behavior, and run it in production as often as you can.

During a production release, most products run some smoke tests before handing off for manual testing. So we can quickly leverage one or more of those smoke tests as synthetic transactions.

The core principles of Synthetic Transaction Monitoring are:

  • It should mimic the most typical user behavior of our application end to end.
  • There can be more than one.
  • It must be one of the first tests you run right after a production release for validation.
  • We need to run this script as often as we can in production.
  • Alert if and when it fails.
  • These monitors need to run in all environments, not just production.
  • Make these synthetic transactions part of the performance tests in the CI pipeline so that no one accidentally degrades performance.
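As a concrete illustration of the principles above, here is a minimal Python sketch of a synthetic check. The URL, the one-minute budget, and the alerting hook are all assumptions you would replace with your own; a real check would drive the full end-to-end user path, not a single GET.

```python
import time
import urllib.request

def run_synthetic_check(url, timeout=10):
    """Run one synthetic transaction: fetch the critical page and time it.

    Returns (ok, elapsed_seconds). A real monitor would script the whole
    critical user path end to end instead of one request.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def should_alert(ok, elapsed, budget=60.0):
    """Alert when the check fails, or when it exceeds its time budget."""
    return (not ok) or elapsed > budget
```

The separate `should_alert` helper captures the point below: a failing transaction and a too-slow transaction both deserve an alert.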


The fourth point, “We need to run this script as often as we can in production,” is deceptively simple, but it is critical. We should be able to execute the synthetic transaction every minute. If for some reason the synthetic transaction takes more than a minute to run, then we need to ask ourselves:

  • Is that transaction indeed a core user behavior?
  • If it is core behavior, is that performance acceptable?
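To make the once-a-minute cadence concrete, here is a simple loop sketch; a real setup would use a scheduler or a monitoring product rather than a bare loop, and `check` and `alert` are placeholders for your own transaction and alerting hook.

```python
import time

def monitor_loop(check, interval=60.0, alert=print, iterations=None):
    """Run the synthetic check once per interval.

    `check` returns True when the transaction succeeded. Alert on
    failure, and also when the check itself blows the interval budget.
    `iterations=None` loops forever; pass a number for testing.
    """
    count = 0
    while iterations is None or count < iterations:
        start = time.monotonic()
        ok = check()
        elapsed = time.monotonic() - start
        if not ok:
            alert("synthetic transaction FAILED")
        if elapsed > interval:
            alert("synthetic transaction took %.1fs, over the %.0fs budget"
                  % (elapsed, interval))
        # sleep out the remainder of the interval before the next run
        time.sleep(max(0.0, interval - elapsed))
        count += 1
```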


One final note: when we start building synthetic transactions, we rely on domain experts and business analysts to develop these behaviors, but in the long run, we should be able to look at the logs to identify true customer usage and modify the synthetic transactions accordingly.
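The log-mining idea can be as simple as counting request paths. A sketch, assuming a hypothetical "METHOD /path STATUS" access-log format; adapt the parsing to whatever your real logs look like:

```python
from collections import Counter

def top_user_paths(log_lines, n=3):
    """Rank request paths by frequency to find the truly critical paths.

    Assumes each log line looks like "GET /checkout 200"; the format
    is an illustration, not a standard.
    """
    paths = (line.split()[1] for line in log_lines
             if len(line.split()) >= 2)
    return Counter(paths).most_common(n)
```

The most frequent paths are good candidates for the next synthetic transaction.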

Getting started with Python & Selenium

For those coming from the Microsoft world, Python tool sets are available for Visual Studio. Recently I was working on a situation where I had to check a particular website every couple of hours. I did not know at first that I would have to keep doing it, so the first time I did it manually; by the second time, I knew I had to code it. I remembered someone telling me Python is available in Visual Studio, so I thought I would try Python in Visual Studio.


The first thing we need to do is install the Python tools. When you start a new project in Visual Studio, pick Python; if it is not installed, Visual Studio will ask you to install the Python tool set. Acknowledge it and install. It normally takes a couple of minutes.

Now that we have Python installed, open VS again and create a new Python project. In traditional Python programming we normally use pip to install packages, and we need to do the same in the VS environment. Here is the Stack Overflow question and answer; I followed the first answer and was able to install the packages I wanted.

There is a possibility you might not find ‘pip’ in there at first; make sure you follow this comment:

@cal97g When the panel is smaller, you just have to change the dropdown menu from “Overview” to “pip”. I just expanded it in the picture to show it all at once

Now that we have pip, select ‘install selenium’ and that’s it. It will install Selenium, and we are all set to go.


Now that we have all the prerequisites, let’s jump in and start coding. For my testing, I started with the example from the selenium-python documentation. I used the code as-is to see how it works before experimenting with it.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title

# find the search box, search for "pycon", and check the results page
elem = driver.find_element_by_name("q")
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()

I used the code from chapter 2 of that documentation. With no other changes, copy the code into the Python file and try to run it. There is a possibility you will run into this error:

“selenium.common.exceptions.WebDriverException: Message: ‘geckodriver’ executable
needs to be in PATH.”

We can solve this by getting the latest version of geckodriver from here and adding it to the system PATH. Before you continue, restart VS so that it picks up the latest PATH.

Let’s try again. If the PATH was picked up properly, it should open Firefox. In case you run into an error like this:

“Message: Expected browser binary location, but unable to find binary in default location, no ‘moz:firefoxOptions.binary’ capability provided, and no binary flag set on the command line”

It is because it did not find Firefox in the PATH. This can be solved a couple of ways. One way is to use the explicit path of Firefox in the code, like the example here. But I prefer the solution of adding firefox.exe to the system PATH.


MS Orleans 1.2 data persistence

In my opinion, Orleans is an underappreciated framework. It is by far one of the best frameworks to come out of MS. I knew of actor models from Erlang and Scala and what they give you, and it was amazing to finally see one from MS.

Here are the reasons why I like Orleans, or more precisely, actor implementations:

  • As a developer, it is easy to write a single-threaded program. In Orleans, all the code we write is single-threaded.
  • Concurrency is very difficult, and Orleans hides those details from development.
  • Each grain holds its state and handles reading and writing to the backend store. I thought that was pretty cool; it more or less eliminates the data layer/ORM layer from our development mindset.

There are more reasons why one would like Orleans, but I want to stop at the third point and explain a problem I ran into and how the Orleans team helped me fix it.

I was following an example to learn data persistence. Most of the documents on the web are somewhat old; if you get the latest version of Orleans (1.2.0 as of this writing), you will find they changed the template model for how one uses data persistence. Earlier you needed a configuration file to let the Silo know how to build the persistence. From 1.2.0, it is done through code. Based on what I see, there is a method to load from configuration, but I did not try it.

For anyone following the example, everything is the same except for a couple of things, starting with 1.2.0:

  • You should now see a new file in the project called OrleansHostWrapper.cs.
  • This code loads the Silo with the proper configuration.
  • By default, it sets the data provider to “MemoryStore”.
  • If you want to change it, change the default parameters of the “config.AddMemoryStorageProvider();” method call.
  • In the old model, the state objects were defined as interfaces; now change that to a standard class. If you get an undefined-variable error, make sure the field in the class is ‘public’.

I was stuck on this issue for a day or two until I got great help from Sergey. Thanks a lot for the great help.


AngularJS 1.x with Typescript – 1

Everyone knows TypeScript, especially after the AngularJS team started using it to build Angular 2.0. I had been trying to avoid TypeScript, since it is a transcompiler and the underlying language is still relevant and powerful. But over the holidays I spent some time learning TypeScript within the realm of AngularJS and found it interesting and easy to use. It reminds me of C# in many ways. With the majority of developers in the Java or .NET world, TypeScript will let more people move to front-end development easily.

When I was learning Angular the first time, I wrote a series of blogs to educate myself and anyone interested in following along. In that same spirit, I am going to do the same for AngularJS development with TypeScript.

A couple of things:

  • This series is only for Angular 1.x, not Angular 2.0.
  • I am more interested in AngularJS, so I will not go deep into TypeScript.
  • For development I am using Visual Studio Code. It is a free IDE with good support for TypeScript.
  • I use a Win 10 desktop for development.
  • Instead of building a large application, I will focus on concepts and small examples specific to each concept.


Today we will set up our development environment and write a simple hello world program.

1. Install Node/NPM

First and foremost, we need to install Node. Even though we will not use Node right away, the npm tool that comes with it is what we will use to bring in packages. So go to the Node.js website, then download and install Node.

2. Install TypeScript

Open a command prompt and, wherever you are, type the following to install TypeScript:

npm install -g typescript

It doesn’t matter which directory you are in; since we are passing ‘-g’, the TypeScript compiler will be available globally.

Once TypeScript is installed, verify you have the latest version by running the following command:

tsc -v

You should see a version of 1.7 or greater. If you have an older version, you might have VS 2013/2015 installed, which puts an old TypeScript in your PATH that you need to remove.

3. Install Visual Studio Code (Skip if you have a favorite editor)

Go to Visual Studio Code home page, download and install the editor.

4. Install TypeScript Definition Manager (TSD)

Before we start coding, we need to add IntelliSense to our editor. To do that, we install the TypeScript Definition Manager. Similar to how npm installs the packages available under npm, tsd brings in the definition files for the JavaScript libraries we will use. For now, think of it as a way to bring IntelliSense to the editor.

Fortunately TSD is available through npm, so like tsc we install it once globally and do not need to do it again and again.

npm install -g tsd

Once it is installed, we will see how to bring in the definition files during the development of the Hello World program.


Let’s write some code:

On to developing the hello world program in AngularJS/TypeScript. Since we are using Visual Studio Code (VSC), I am going to open a command prompt and create a directory for our hello world program.

I created a folder blog1 under c:\users\unni\learningangularts\. Open VSC and use the open-folder option to point to the directory we created. It will open an empty editor with no files in it.

To do the simple hello world program, we need the following:

index.html <- main HTML file

app.ts <- TypeScript file for the module definition

helloController.ts <- controller definition file

tsconfig.json <- configuration file for TypeScript

scripts/angular.js <- the angular.js file from the Angular website

We will look at each one and discuss what is new with respect to TypeScript. The following is index.html:
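The index.html listing appeared as a screenshot in the original post. A minimal reconstruction consistent with the description below (the title and exact markup are assumptions) might look like:

```html
<!DOCTYPE html>
<html ng-app="helloWorld">
<head>
  <title>Hello World</title>
</head>
<body ng-controller="helloCtrl as vm">
  <!-- one-way binding to the controller's message -->
  {{vm.message}}

  <!-- note: the transcompiled .js files are referenced, not the .ts sources -->
  <script src="scripts/angular.js"></script>
  <script src="app.js"></script>
  <script src="helloController.js"></script>
</body>
</html>
```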


There is nothing new here except that I am using the ‘controller as’ alias. All I am doing is rendering the message from the controller to the view through one-way binding with {{}}. There is nothing new here for TypeScript, as one would expect.

If you look closely, the script files referenced for app.js and helloController.js are JavaScript files, not TypeScript files, even though we will develop in TypeScript.

When I started out, I had them referenced as app.ts instead of app.js, so be mindful of that. Even though we develop in TypeScript files (.ts extension), in the end we transcompile them into JavaScript files, and those are the files we need to reference in HTML files or anywhere else.

Before we start working on controllers, we need to do two things.

  1. First we need to bring in the TypeScript definition for angular.js so that VSC can show IntelliSense. Before we do that, make sure you download the angular.js file and drop it into a separate folder to distinguish the framework script files. In my case, I created a scripts folder and dropped it in there. With the angular script file in place, open a command prompt and run the following command at the root of the application directory, in our case at “c:\users\unni\learningangularts\blog1”:

tsd install angular --resolve --overwrite --save

On running, it should identify angular.js in the directory structure and bring in its type definition along with its dependencies. It seems angular.js still has a dependency on jquery.js, so it will bring in the type definition for jQuery as well. You should expect to see something like the following:


This should bring in the type definitions for the angular files and put them in the typings folder.

2. Next we need to tell TypeScript what type of JavaScript it should create when transcompiling. In our case, we will create ES5-compliant JavaScript. This is done by creating “tsconfig.json”:

{
  "compilerOptions": {
    "target": "es5"
  }
}

Now let’s look at the TypeScript magic, starting with the app.ts file.

angular.module("helloWorld", []);

Even though it is a TypeScript file, there is nothing TypeScript-specific about it. This is a simple Angular module definition, just as you would write in a normal Angular app.

Next, look at the hello controller:

// interface definition for the model
interface IHelloWorldModel {
    message: string;
}

// controller definition
class helloCtrl implements IHelloWorldModel {
    message: string;

    constructor() {
        this.message = "Hello world!";
    }
}

// adding the controller to the angular module
angular.module("helloWorld")
    .controller("helloCtrl", helloCtrl);

When developing TypeScript files, it is recommended to create an interface for each controller and have the controller implement it. In other words, all the items in the interface will be exposed to the view, so by creating the interface, the controller that implements it is forced to provide them.

In our example, index.html only displays {{vm.message}}, so we need only message in the interface. When defining the interface, make sure you also specify the type, which forces the controller to use the same type, thus enforcing strong type checking.

The next part is creating the controller itself. We use a traditional class to define the controller rather than creating a function as you would in a JavaScript file. We make sure the controller implements the interface definition. In our case we need only message, so we define it and, in the constructor, assign a string to it.

At the end, add the controller class to the module. It is very important that adding the controller happens after the class definition.

One thing I forgot to mention: to run the website through VSC, we need a lightweight HTTP server. Fortunately for us, one is available through npm called http-server. Install it with npm like the following:

npm install -g http-server

and then run it from the root directory of the index.html file. This will start the server on a specific port (8080 by default). Once it is running without any errors, we can browse to the site.


So far we have only created the TypeScript files; before we can run the code, we need to compile and generate the JavaScript files. Ctrl+Shift+B will build and generate them. The first time you try this, VSC looks for a task file and, since we did not create one, asks whether you want to create one. Acknowledge yes, and it will create a task file.

Before we compile, we need to make one quick change to that file. Open the tasks.json file under the .vscode folder, as shown in the picture below:


Remove the HelloWorld from line 24 and make it empty, as shown in the above picture. With all that in place, perform the compile (Ctrl+Shift+B); it should compile and generate the JavaScript files. If you look at the file structure on the left, you will see app.js next to app.ts and helloController.js next to helloController.ts.

With http-server running and all the files generated, fire up a browser and go to localhost:8080; it should show hello world as shown below.


You can get the code from

I might have skipped over a few things; if you have any questions, please let me know.

Happy Scripting…

Deploying Yeoman generated app in Azure.

Finally, I decided to take the plunge into the cloud by signing up with Azure. I have been working on a SPA application using AngularJS, and I decided to try hosting it in Azure. There are a lot of materials out there on how to build and deploy a SPA to Azure; I am really impressed with the documentation and help available to accomplish this task. Since I haven’t written a blog in a year, I thought I would take this opportunity to write about my experience and a few things I found along the way.

Build a web application using Yeoman:

I am a lazy programmer, so if there are things that make my development easy, I will take that route. To get started, I took Yeoman’s help in generating the seed project so that I did not need to worry about the directory structure and so on; Yeoman helped wire up everything for me. There is a ton of help on the web in case you haven’t used it; rather than repeat all the steps, here is one of the sites that explains how to get started. Before you go any further, make sure the starter application works locally by running:

grunt serve

If the starter page doesn’t show up, then you have problems with the Yeoman setup; fix them before you proceed.

Source code repository:

Next is the source code repository. I decided to use Bitbucket for this task. It is free, and for me it serves the purpose incredibly well. I highly recommend checking it out in case you haven’t heard of or used it. It has excellent tutorials on how to add an existing project and create a new repo.


The next stop is Azure. Create your account and log into the Azure portal. Here is a good video from Azure Fridays on how to add a new web app in Azure. Follow the video and you will be able to create a new web application in seconds.

Now that we have all the infrastructure and code ready to go, let’s talk about deploying. Deploying was a little tricky, at least for me, because I was using WebStorm for my development (not Visual Studio). Here are a few things I had to do to make this process automated.

Modify deployment Script:

When you create a SPA, you will end up using npm and bower packages. We need to make sure that when we deploy, all those packages get installed in Azure. This is the first hurdle. Azure uses the Kudu engine for deployment, so we need to add two files to our project to enable Azure deployment:

deploy.cmd
.deployment
in the root directory of the project. Even for that there is help: follow this link and it will help you create those files. The script it generates will add all the npm packages (as of this writing) but not bower, so you need to modify deploy.cmd to add the bower packages during deployment. Open deploy.cmd and add the following lines after the “::3 npm install” if-condition:

:: (add before the ":: 4. Select node version" section)
call bower install
IF !ERRORLEVEL! NEQ 0 goto error

This lets the Azure deployment install both npm and bower packages on the web site.

You will not deploy the whole file structure Yeoman created, since it contains a lot of things intended for development, like testing. You do not want to ship that code to production; you also want all the goodies like linting, minification, and uglification to trim the code down to a small distribution package. Fortunately for us, grunt helps with this. To do that, we run the following on the command line:

grunt build

This creates a new directory, ‘dist’, in the directory structure, and that is what we need to deploy to Azure. There are a couple of ways to deploy it; I went the route of creating a branch for deployment and setting up continuous deployment from that branch. But remember, since bower packages will not be installed by default, we need to make sure bower install runs during deployment. That means the modified deployment files and the JSON files must be distributed along with the ‘dist’ directory. So I added the four highlighted statements to the copy task before deploying the dist subtree to the branch.

// Copies remaining files to places other tasks can use
copy: {
  dist: {
    files: [{
      expand: true,
      dot: true,
      cwd: '<%= yeoman.app %>',
      dest: '<%= yeoman.dist %>',
      src: [
        // file globs from the generated Gruntfile (omitted in the original post)
      ]
    }, {
      expand: true,
      cwd: '.tmp/images',
      dest: '<%= yeoman.dist %>/images',
      src: ['generated/*']
    }, {
      expand: true,
      cwd: 'bower_components/bootstrap/dist',
      src: 'fonts/*',
      dest: '<%= yeoman.dist %>'
    },
    // the four statements added so Azure can run bower install against dist
    { cwd: '', src: 'deploy.cmd', dest: 'dist/deploy.cmd' },
    { cwd: '', src: '.deployment', dest: 'dist/.deployment' },
    { cwd: '', src: 'bower.json', dest: 'dist/bower.json' },
    { cwd: '', src: 'package.json', dest: 'dist/package.json' }]
  },
  styles: {
    expand: true,
    cwd: '<%= yeoman.app %>/styles',
    dest: '.tmp/styles/',
    src: '{,*/}*.css'
  }
}
That is it; with all these steps in place, I can now easily build and deploy to Azure.

If you think there is a better way to do this, I would love to hear from you.


Building REST services, rambling …

With recent changes in web development, like SPAs and the shift in device platforms, the backend has changed drastically. Backends no longer just serve up pages; they have become highly scalable, available, and responsive service providers. Over the past year or so, I was purely focused on front-end development and did not touch much of the backend, since we had very talented people there. But some changes in the backend over the last couple of months made me look at its architecture. Just as we build highly testable and manageable front ends, the backend should also be able to sustain changes without breaking the front end. That means when you build services, you need to build them to be self-serving, highly discoverable services that can adapt to changes without breaking their clients.

Our backend is built with Web API. It is easy to put together a simple hello world or student REST service, but when building enterprise web services, you need to put in some time to decide which resources you are going to expose and how. For those who have been developing services using the SOAP protocol, this is a paradigm shift. So if you are going to build REST services, don’t just convert SOAP to REST; think about what you are trying to expose.

I remember reading somewhere that designing REST URLs, resources, and query parameters is 80% art and 20% science, and I totally agree. When I sat in those discussions initially, I thought it was a pure waste of time; it is not rocket science. But later I found that a well-built REST endpoint is a thing of beauty. Take time to look at the resources you are going to expose and ask why. Are they really the resources clients want? Just because we have a table doesn’t mean we need to expose it as a resource. When you expose a resource, it should be meaningful to the end user and easy to use.

I am not going to explain how to build REST services; Vinay Shani wrote one of the best blogs on the subject. Here are some of the major things I went through as I moved from consuming REST services to building them.

  • Cache: Build your REST services so that they leverage caches. Poorly designed resources end up not using caches. Especially when you are building highly responsive services, the cache is an important piece of the puzzle.
  • Versioning: There are three different ways one could version services. I personally do not like versioning in the URL, but on the other hand, we do not have mature enough technology to enable discovery either. I remember reading somewhere that if you start out with versioning, you are already admitting failure. I loved this blog on versioning. Any way you do it, you are doing versioning wrong 🙂
  • Query string/resource hierarchy: Take care in how you set this up; it goes hand in hand with caching. I prefer a resource hierarchy over query strings, and use query strings only for choosing the representation and filtering results.
  • Documentation/discovery: You could create an awesome service, but if you build the Taj Mahal in the middle of a forest where no one can find it, there is no use. Likewise, if you build a mind-blowing machine that is not easy to use, what is the point? Build great API documentation using tools like Swagger or other forms, provide a great onboarding story, and give users excellent ways to report problems and concerns, like forums and discussion groups.
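To make the cache and resource-hierarchy points concrete, here is a small Python sketch of conditional GET handling on a hierarchical resource. The /movies/{id} resource, the payload, and the 60-second max-age are made-up illustrations, not part of any real API.

```python
import hashlib
import json

def make_etag(payload):
    """Derive a weak ETag from the response body so clients can revalidate."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    return 'W/"%s"' % hashlib.sha1(body).hexdigest()

def get_movie(movie_id, if_none_match=None):
    """Handle GET /movies/{id} - a resource hierarchy, not /getMovie?id=...

    Returns (status, headers, body). Responding 304 lets clients and
    intermediaries keep serving their cached copy.
    """
    payload = {"id": movie_id, "title": "example"}  # stand-in for a DB lookup
    etag = make_etag(payload)
    headers = {"ETag": etag, "Cache-Control": "max-age=60, public"}
    if if_none_match == etag:
        return 304, headers, None
    return 200, headers, payload
```

A cacheable URL plus an ETag is what lets "highly responsive" services avoid recomputing and re-sending unchanged representations.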

There are a ton of resources available on the web. Before you start whipping up REST services, change your mindset, read what people went through, ask questions, and be smart about designing resources. As Troy said in his blog, once people start consuming your service, they become a very important part of the equation, so changes have to be handled very carefully. We should not create disruptive services that clients do not want to use. The only way your services will be successful is if your clients consume them and are happy using them.

Building your movie website with Movies DB in angularJS

I am a big fan of movies, and for some time I have been tinkering with building a website to see what is new in movies and TV. Recently I came across the Movies DB website. The website by itself is a wealth of information, but the best part is that if you sign up with them, you can use their REST services. You need to read the legal documents to see if it fits your bill, and they throttle incoming requests as well. I set out to write my own service so that I could use theirs in a couple of other projects I am working on. Well, even that is already solved for me: head over to lub-tmdb on GitHub; they have done most of the work for you already, so it is just a matter of hooking it into your project.

They have an excellent example project in their repo on how to use it. The example project is self-contained, so I thought I would add a couple of lines to show how to add it into your existing project.

1. Download the min file from their build directory and put it in a directory of your choosing. In my test code example, I put it in the root directory.

2. Add the script file to your index file. The really important thing is that it has to be added after the script containing your ‘angular.module’ definition. You would want to do the same if you built your own service.

<script src=""></script>
<script src="app.js"></script>
<script src="lub-tmdb.0.1.1.js"></script>

As you can see, ‘lub-tmdb.0.1.1.js’ comes after my main app.js file.

3. I followed their example and created a simple movie search like the following in index.html:

<form ng-submit="exec('search', 'movie', query)">
    <input type="text" ng-model="query">
    <button type="submit">Search Movie</button>
</form>


4. In the controller, again based on their example, I called the service as follows:

var app = angular.module("main", ['lub-tmdb-api'])
    .value('lubTmdbApiKey', ''); // your API key from the website goes here

var mainCtrl = function ($scope, lubTmdbApi) {

    var suc = function (result) {
        // assign the raw result; do not convert it to a JSON object
        $scope.searchResults = result.data;
    };

    var err = function (result) {
        $scope.searchResults = result;
    };

    $scope.exec = function (type, method, query) {
        // reconstructed call; check lub-tmdb's example for the exact shape
        lubTmdbApi[type][method]({
            query: query
        }).then(suc, err);
    };
};

app.controller("mainCtrl", mainCtrl);

The important points here are:

  • The second line in the above script should have the API key you got from the website.
  • Don’t convert the result to a JSON object in the assignment line inside the success handler.

I personally really enjoyed using both the angular service from lub-tmdb and the Movies DB itself. Both are solid and made my life a little easier. Thanks a lot for both services.