First off, why is it even important to distinguish core from non-core business activities? If you are a B2B entrepreneur, you have probably stumbled on the realization that businesses are much more amenable to outsourcing important but non-core activities to outside vendors like yourself. For a given industry, some activities are evidently core and some are very clearly non-core. However, a large majority of activities fall somewhere in the gray area in between. As a B2B entrepreneur, how do you spot such activities? One indication is whether or not the business has put its production resources (engineering team, manufacturing plant, etc.) to work on that activity. If it has, it considers this a core activity, and it will be hard to convince them to outsource it to you. Of course, it is entirely possible for some other company in the same industry to consider the same activity non-core!

Below is an illustrative example that helps me anchor this concept.

Consider two hypothetical Indian restaurant owners – Mr. Khan and Mr. Singh. Mr. Khan runs a brisk lunch kitchen in the middle of downtown. Mr. Singh runs a high-end dinner restaurant in an exclusive neighborhood. Both have their chef and kitchen staff fully engaged in cooking and serving Indian food. It is a core activity. If I propose supplying pre-cooked Chicken Tikka Masala to their doorstep every day, they’ll refuse. That is, after all, what they do. On the other hand, consider payroll management for their contract employees. It is clearly non-core. They’ll be happy to outsource that task to a company like Zenefits or one of its clones. But the interesting tasks are somewhere in between. Instead of supplying pre-cooked food, let’s say I propose supplying fresh produce to their doorstep every week. What would their reactions be? One way to predict their reactions is to look at how they currently buy their produce. If their chef or kitchen staff does this, they consider it core and will not buy your delivery service. If somebody outside of the core kitchen staff does this, then you have a shot. So let’s find out.

Mr. Khan asks (forces?) his teenage son to buy groceries for the restaurant every week. When lunch hour starts on weekdays, his no-nonsense chef needs the ingredients ready to use. Mr. Singh’s chef, on the other hand, personally goes to the local farmer’s market to hand-pick the freshest produce he can find to prepare the delicate flavors that he is going to charge a fortune for. Who do you think is a better prospect for the grocery delivery service?

Of course the above is a contrived example, but based on my (admittedly limited) experience in B2B sales over the last year, a rough (i.e., not 100% accurate) indicator of a non-core activity is whether or not your prospect is putting their production resources on it.


Let me preface this blog by saying that it is not about Microsoft bashing. To quote an oft-repeated (and infamous) line – “I  L O V E  T H I S  C O M P A N Y”. For me, job satisfaction means three things – (a) work that makes full use of my existing skills and competencies; (b) a place where I can constantly learn – from my peers, my day-to-day work, the industry, etc.; (c) autonomy to chart my own course of action. My career as a Program Manager at Microsoft afforded me all three – albeit in varying degrees. Then one fine morning I walked into my manager’s office and told him that I wanted to leave – to start my own company. After a prolonged period of retention attempts (during which, needless to say, I didn’t change my mind), it was official – I was leaving Microsoft after 7.5 years of a very fulfilling career. And with it went the nice paycheck, the RSUs, the health benefits, etc.

When my friends and family came to know about this decision, they would say with a knowing smile, “You must have something up your sleeve – do you already have funding? Is the product ready to be released?” Some other concerned friends asked me whether I was doing a startup for the right reasons. Apparently some right reasons are wanting to change the world or scratching a personal itch. One wrong reason is simply wanting to do a startup. This ushered in a period of severe self-doubt – none of these things were true for me. I did not have funding; heck, I did not even have a product definition. And I sure am not the “change the world – let’s put a dent in the universe” type.

My reasons for doing a startup are pretty simple – more than a decade ago I read an article by Tim Berners-Lee about Semantic Web Services that really inspired me. I sincerely believe that the promise of web services (a.k.a. cloud computing) combined with statistical learning techniques (a.k.a. machine learning) can really make that vision a reality. I wanted to contribute in my own small way to move that vision one step forward. And the thought of a tightly knit, lean team rapidly executing on this vision was very, very appealing.

Obviously I sought a lot of advice from mentors and other senior people before I actually quit my job. One mentor advised me that I should quit once I had “f**k you” money (enough money that I could walk away from any situation I didn’t like). My wife told me it was very bad advice because the actual amount depends on the individual – for some, $50,000 would be enough; for others, $10MM would not be. Some other senior people said the smart thing would be to work on my startup on nights and weekends while holding my day job. Most of these well-meaning folks assumed that “working” on my startup meant building the software. Given my background, building software is the easy part. Finding the right product/market fit is the more difficult part. And for that I needed to talk to a bunch of potential customers, folks in the industry, etc. Of course, none of the potential customers I wanted to talk to were available during nights and weekends 🙂 If I was to figure out the right product to build, I needed to spend a lot of time in the field talking to people. And that just is not possible while holding down a “day” job. The next time I had an important meeting with a potential customer for my startup and an important meeting with a partner team at Microsoft, which one was I going to choose? Regardless of my choice, I would be doing a disservice to both.

There were only two options in front of me – either I could focus all my energies on being the best possible Program Manager at Microsoft that I could be, or I could try to beat the odds of creating a successful startup (according to popular Internet “statistics”, 90% of startups fail) and create the most awesome product I could. There was no middle ground. At least not for me.

I decided to jump in. Only time will tell if this was the right decision. All I know is that it “feels” right in my heart.

“So…what does a PM actually do?” is a question I usually face, especially from my developer friends, when I tell them that I am a Program Manager. This blog entry explains my understanding of the profession.

I moved from being a developer to being a Program Manager 7 years ago, and I have been practicing this art ever since. Now, PM is an acronym for various different-sounding but similar roles – Project Manager, Product Manager, (Technical) Program Manager, etc. And these roles sometimes overlap with Product Planning or Product Marketing. Recently I even came across the term Production Management 🙂 Over the years, my understanding of the PM role has gone through 4 stages of progression, with each stage building upon the one before it.

  1. Execution
  2. Post-release activities
  3. Planning
  4. Strategizing



Execution
Early on in a PM’s career, their focus is on execution. Now, we all know that the PM does not actually write code. In this stage the PM’s sole karma in life is to facilitate development. At its most basic, this means keeping track of the project. Additionally, it involves being the human interface between different development teams, keeping track of dependencies and making project adjustments in response to shifting realities in dependent teams. I learnt this part by jumping in at the deep end when I became a PM on the Networking team in Windows Phone (or Windows Mobile, as it was known then). This being the Windows Phone product, almost every other component/feature team was dependent on my team, so I had a lot of “upstream” dependencies. On the other hand, my team was dependent on the OEM’s radio driver team, which was an external team. This was my “downstream” dependency. As we onboarded different radios and chip architectures, or wrote a new abstraction layer to enable new scenarios like making a voice call and connecting to the Internet at the same time (yes, it was a long time ago :-)), I needed to ensure that as development progressed, all of these moving parts moved in sync with each other. The last thing I wanted was the UI talking directly to the radio driver because my team had failed to deliver the right interfaces, or some new radio capability not being exposed through my layer and thus being unavailable to the rest of the product.

Post Release Activities

Once a payload has been deployed to production, a PM needs to start raising awareness of the new capabilities with customers and gathering their feedback. As a PM in Windows Azure, I used to regularly give talks about upcoming features to customer communities and get their initial reactions. Windows Azure has active MSDN and StackOverflow forums that I would regularly monitor and contribute to. It also helped that we had a very strong product marketing team. They would set up one-on-one face time with various customers, and it was incredibly helpful to interact directly with the architects and engineers who were building their services on top of Windows Azure. Being part of Windows Azure, I was fortunate to have a lot of supporting infrastructure that made it possible for me to perform these post-release activities. In case your team/company does not have this infrastructure set up, a few simple things that you can start off with are –

  • Set up an email distribution list of your most vocal and passionate customers. This will give you a channel to proactively reach out to them for early feedback on a new feature. It is also very effective at surfacing problems early on.
  • Create a StackOverflow tag for your product and communicate this to all your customers. This provides a good forum for customers to ask technical questions and get help from you as well as the community.
  • Start participating in appropriate subreddits and start your own product’s subreddit.
  • Write regular blog entries describing new features, or use cases of existing features.
  • Set up a Twitter account and a Twitter hashtag for your product. Communicate summaries of your Reddit, StackOverflow, and blog updates on Twitter.


Planning
In theory, planning for the next release is easy: pick the top few features from your prioritized product backlog that fit in the current release and get to work.

In reality this is rarely the case. For starters, maintaining a prioritized product backlog with cost estimates is easier said than done. In order to create and maintain a credible product backlog, PMs have to engage in what I’ll call “continuous planning”. This means that as soon as a new feature or scenario is identified, you get to work on it and form a high-level understanding of what it entails. This usually means having a clear understanding of the following –

  • Priority from the customers’ point of view and from your point of view
  • Value proposition
  • Goals
  • Scenarios
  • High level design
  • T-shirt size cost estimates

Once you start engaging in the continuous planning model, you will discover a curious thing: there are multiple features that are all pri-0 (i.e., the most important). And obviously you will not be able to fit all of them into a single release. Selecting a subset of these is mostly art. Sometimes you’ll have to choose a lower-priority feature over a higher-priority one simply because it fits, or because the developers who have the expertise to deliver that lower-priority feature have time to do it in this release. Make sure you communicate the release payload to your upstream and downstream dependent teams. Over-communication is better than under-communication. Put this up on a team wiki or a SharePoint site so it is readily available on demand.
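To make the “simply because it fits” part concrete, here is a toy Python sketch of release selection. The feature names, T-shirt costs, and capacity number are all invented for illustration – a real backlog is messier – but the mechanic is the same: walk the backlog in priority order and let smaller, lower-priority items slip in when the big ones don’t fit.

```python
# Toy release planner: pick features by priority, but only if they fit.
# All names and numbers are illustrative, not from a real backlog.
TSHIRT_COST = {"S": 1, "M": 3, "L": 8}

def plan_release(backlog, capacity):
    """backlog: list of (name, priority, tshirt_size); lower priority = more important."""
    payload, remaining = [], capacity
    for name, _prio, size in sorted(backlog, key=lambda f: f[1]):
        cost = TSHIRT_COST[size]
        if cost <= remaining:   # a lower-priority feature can get in because it fits
            payload.append(name)
            remaining -= cost
    return payload

backlog = [
    ("sso-login", 0, "L"),
    ("audit-log", 0, "L"),
    ("dark-mode", 1, "S"),
    ("csv-export", 2, "M"),
]
print(plan_release(backlog, capacity=10))   # → ['sso-login', 'dark-mode']
```

Note how the pri-1 dark-mode feature ships while the pri-0 audit-log does not – exactly the “mostly art” trade-off described above, reduced to its arithmetic skeleton.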

Once you are fairly certain that you are going to implement a certain feature, it is time to build the functional specification. This does not mean writing a 30-page document for a single button’s functionality. The medium can be anything that can be communicated easily in your team – post-it notes, a team wiki, a whiteboard, etc. The contents of a functional spec will differ across teams and products and are the topic of another blog. But the one thing all functional specs have in common is that they communicate very clearly, and in great detail, what it is that the team should be building. If there is an ops team or a test team, this spec will let those teams know how to deploy or test the product.


Strategizing
This is by far the most difficult part of the job. Think hard about where you want to take your product/feature 2 or 5 years down the line. Write down in vivid detail how your customers, and possibly their end users, will be using the product/feature in the future. Put on your sci-fi author’s hat when you do this. Make it as vivid as possible – if you feel like describing what your customers will be wearing when using your app on their wall computers, do it. And at all costs avoid the bland vision statements that you see on the plaques of big companies. Unless the vision statement can help you make decisions about what to do next and how to behave in certain situations, it is not of much use. However, ensure that it aligns with the overall company’s vision. There is nothing wrong with having your own unique vision that is different from the company you work for, but in that case go start your own company. Get buy-in on this vision from the team; they are the ones who will make it a reality, so it is important that they believe in it as much as you do. Then create a high-level roadmap to get to this vision. The litmus test of an actionable vision is whether most of the work you do in the next release is aligned to this roadmap.

And at all times remember this PM mantra – “no plan survives contact with reality”. So after all your strategizing and planning, you have to keep a close eye on execution and constantly course-correct based on the ground realities of the moment, which brings us full circle back to the Execution stage!

I have been thinking of doing my own startup for a while now. When I ask established entrepreneurs what kind of startup idea I should pursue, I get advice ranging from “just jump in with something, and you will pivot your way to the right idea” to “jump only when you have some traction with initial customers”. And of course the timeless gem: “scratch your own itch”. Predictably, I have chosen the middle path, where the idea or the market has some hope of success, but I don’t have to wait to set up the business and get traction before I quit my day job. In fact, I have come to the realization that I will not be able to give my startup enough attention to get traction while still holding a day job. In this blog post I’ll talk about the three evaluation criteria I use when deciding whether or not to pursue an idea –

  • Will the business be scalable?
  • Does it have network effects?
  • Does it have either high margins or high volume?

Will the business be scalable?

A very simple way to answer this question is to understand what it takes to sign up one additional customer for the business. Take a traditional dentist, for example. In order to sign up a new patient (a.k.a. customer), the dentist has to spend as much time on the new patient as he does on each of his existing patients. In economic terms, the marginal cost of servicing a new customer is pretty high. Moreover, there are only so many patients the dentist can see in a day, which means there is a physical cap on the number of customers in this business. This is not a scalable business. Contrast this with a dentist selling “do-it-yourself dental checkup” kits. The cost of establishing the business would be pretty high, but the cost of selling each additional unit is small in comparison. Moreover, there is no physical limit to the number of customers that can be served. This is a scalable business. Needless to say, in most cases going after a scalable business is better than going after a non-scalable one.
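A back-of-the-envelope Python sketch makes the contrast concrete. Every number below is invented for illustration: the point is only that one revenue function hits a hard ceiling and the other keeps growing with units sold.

```python
# Illustrative-only numbers: a time-capped service business vs. a
# product business whose marginal cost per unit is small.
def dentist_revenue(patients, minutes_per_patient=45, fee=120,
                    minutes_per_day=480, working_days=250):
    capacity = (minutes_per_day // minutes_per_patient) * working_days
    return min(patients, capacity) * fee   # physical cap on customers

def kit_profit(units, fixed_cost=200_000, unit_cost=4, price=25):
    return units * (price - unit_cost) - fixed_cost   # no cap on units
```

With these made-up numbers the dentist tops out at 2,500 patients a year, so revenue flatlines no matter how much demand there is, while the kit business, once past its high fixed cost, earns more with every additional unit.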

Does it have network effects?

Network effects are what help businesses go viral. Network effects mean that every new person who uses the service makes it more valuable for all other users. Take the telephone, for instance. If only 2 people in a town have a telephone, it is marginally useful to them. But when a third and then a fourth person get a telephone, the value of the telephone increases for all 4 people. A more contemporary example is Facebook: as more and more people signed up, it became more and more valuable to its existing users, in turn causing more and more people to sign up. Apart from promoting viral growth, network effects also help build high barriers to entry and increase switching costs for users. Think again about moving your entire social network from Facebook to Google+. Not gonna happen…at least not very easily. Network effects are often found in multi-sided markets. For example, game consoles like the Xbox have multi-sided networked markets with game players and game providers. The more players there are, the more game companies build games for the Xbox platform; more available games attract more players, and so on.
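A quick way to see why the telephone example works: with n users there are n(n−1)/2 possible pairwise connections, so the network’s “value” (in this Metcalfe’s-law sense) grows roughly quadratically while the user count grows only linearly. A tiny illustration:

```python
# Toy Metcalfe's-law illustration: with n users there are n*(n-1)/2
# possible pairwise connections, so value grows much faster than users.
def pairwise_connections(n):
    return n * (n - 1) // 2

assert pairwise_connections(2) == 1      # two phones: one possible call
assert pairwise_connections(4) == 6      # 2x the users, 6x the connections
assert pairwise_connections(100) == 4950
```

Doubling the town from 2 phones to 4 multiplies the possible conversations by six, which is exactly why each new user makes the service more valuable to everyone already on it.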

Does it have either high margins or high volume?

Making a high profit per unit sold is a good thing. In the online services world, this translates to average profit per user. If this is high, then it is OK to have a smaller number of users. Good examples are enterprise software companies like SAP or Oracle; their high margins enable them to serve only the top Fortune 1000 companies. However, companies like Amazon Web Services have shown us that having a low margin (rumored to be < 5%) per user but a high volume of users is just as profitable. Obviously a business that has potential for neither high margins nor high volume is not attractive. And one that has both is super attractive!
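The trade-off is just profit = users × revenue per user × margin, and either big knob can compensate for the other. A toy comparison (all figures invented, not SAP’s or AWS’s actual numbers):

```python
# profit = users x revenue per user x margin; all figures are invented.
def annual_profit(users, revenue_per_user, margin_pct):
    return users * revenue_per_user * margin_pct // 100

high_margin = annual_profit(1_000, 500_000, 40)    # few huge enterprise deals
high_volume = annual_profit(5_000_000, 1_000, 4)   # many small cloud customers
assert high_margin == high_volume == 200_000_000   # two routes, same profit
```

A thousand fat contracts at a 40% margin and five million thin ones at 4% land in exactly the same place; a business with neither knob available has no route to that number at all.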

Not all startup ideas have to have all three characteristics. Some good startup ideas may not have any of them. But in general, asking these questions gives me a good idea of the potential of a startup.

OAuth 2.0 needs no introduction (read the excellent introduction in the IETF spec if you need one). This blog is an easy-to-read explanation of the OAuth 2.0 protocol, with notes on how Facebook has implemented it. In the interest of making it easier to read, I’ll explain it as if it were a 3-act play.

Let me first set the plot of this play. There is a User, who wants an Application (let’s say yet another super-innovative photo sharing application) to do something_useful. However, and here comes the twist, in order to do something_useful the Application needs access to some of the User’s resources (say some photos) which are held for safekeeping by the Resource Server, played in this drama by Facebook. As you can discern from the plot, there are three main characters in this play –

  1. User
  2. Application (App)
  3. Resource Server (ReS) aka Facebook (FB)

However, as you will soon see, there is a fourth hidden character – the Authorization Server (AuthS) who has the most pivotal part to play!

According to OAuth 2.0, this play (like most plays before it) is composed of three acts. As I mentioned above, the play kicks off with the User asking the App to do something_useful.

USER --- GET /something_useful --> APP

3 Act Overview

This section gives an overview of the 3 acts, without going into the dialogues and the script of each act.

ACT 1: User gives App an authorization grant.

According to the OAuth 2.0 protocol, the App needs an authorization grant from the User when requesting access to the User’s protected resources. In an ideal world, the User would have a stack of such authorization grants lying around on her hard drive that she could just give to the App. The reality is obviously quite different, as we’ll see later in this blog.

ACT 2: AuthS gives the App an access_token in exchange for the authorization grant.

This is where the Authorization Server makes his first appearance. The cool thing about the AuthS is that all three characters in our play – User, App, and Resource Server – trust the AuthS with their secrets. We will see how this plays a pivotal role later in this blog.

ACT 3: ReS gives App access to protected resources in exchange for the access token.

The Resource Server trusts the access_token issued by the AuthS (see how the trust between AuthS and ReS comes into play!). It lets the App have access to the User’s resources for a short while.

The Script

Let us now delve into the details of these acts.

ACT 1: User gives App an authorization grant.

User and App are on the stage.

User: Do something_useful for me.

USER --- GET /something_useful --> APP

App: I cannot, unless you give me your authorization grant.

Now, in the simple version of this play, the User has a stack of these authorization grants lying around. But in this more “real world” enactment, if the App were to say something silly like this, she would just go to another app. So instead our savvy App tells the User how to get an authorization grant, by doing an HTTP redirect to the AuthS. Note all the parameters of the GET request here. The state parameter is what will protect the App from becoming a victim of CSRF further down the road.

USER <-- 302 /oauth?client_id=:app_id&redirect_uri=http://app/something_useful&state=xxx --- APP

Now the AuthS makes his entry, and he is actually the same as the ReS, aka Facebook! Without explicitly meaning to, the User follows this redirect URL and thereby asks the AuthS aka Facebook for the authorization grant.

User (to Facebook): Give me the authorization grant.

USER --> GET /oauth?client_id=:app_id&redirect_uri=http://app/something_useful&state=xxx --- AUTHS (FB)

AuthS: Give me your secret (Facebook login and password); here is the login form.

USER <-- 200 login form --- AUTHS (FB)

User: Now, I trust you, Authorization Server – after all, I have asked you to protect some of my resources for me. Here is my secret.

USER --- POST user fb creds ---> AUTHS (FB)

Now, if the User had to hand-carry the authorization grant from the AuthS back to the App herself, she would get fed up with this nonsense and just stop using the App. So the wise AuthS redirects the User to the App instead. Remember, the AuthS knows where to redirect because it received the redirect_uri from the User, who in turn received it from the App.

AuthS: Go to the App and give him this packet. He’ll know what to do with it.

USER <-- 302 http://app/something_useful?code=:authgrant&state=xxx --- AUTHS

User (to App): Here take this authorization grant. Now, do that something_useful for me.

USER --- GET /something_useful?code=:authgrant&state=xxx ---> APP

Act ends here.
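As an aside, the App’s side of this act can be sketched in a few lines of Python (standard library only). The endpoint, IDs, and URLs below are the play’s illustrative values, not Facebook’s real ones; the point is the round-trip of the state parameter.

```python
# Sketch of Act 1 from the App's side. All URLs/IDs are illustrative.
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

AUTHS = "https://fb.example/oauth"
APP_ID = "app_id"
REDIRECT_URI = "http://app/something_useful"

_pending_states = set()

def begin_auth():
    """Build the 302 Location that sends the User off to the AuthS."""
    state = secrets.token_urlsafe(16)   # the CSRF-protecting `state` parameter
    _pending_states.add(state)
    return AUTHS + "?" + urlencode({
        "client_id": APP_ID,
        "redirect_uri": REDIRECT_URI,
        "state": state,
    })

def handle_callback(url):
    """The User comes back with ?code=...&state=...; verify state first."""
    qs = parse_qs(urlparse(url).query)
    state = qs.get("state", [""])[0]
    if state not in _pending_states:
        raise ValueError("state mismatch: possible CSRF")
    _pending_states.discard(state)      # each state is single-use
    return qs["code"][0]                # the authorization grant
```

If an attacker tricks the User’s browser into hitting the callback with a forged or replayed state, the App refuses the grant – which is exactly the job the state parameter is doing in the dialogue above.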

ACT 2: Authorization Server gives the App an access_token

Authorization Server and App are on the stage.

App: Here, take this package containing the authorization grant and give me the access_token so I can borrow the User’s protected resources. Oh, and I trust you with my own secrets as well. So here are my creds (app id and app secret), so you know it is indeed me, and you can hand me the access_token.

APP --- POST client_id=:app_id&client_secret=:app_secret&code=:authgrant ---> AUTHS (FB)

The AuthS opens the authorization grant package. We can see that it contains a message from the User authorizing the App to get an access_token. And it is signed by the Authorization Server himself. The Authorization Server validates his own signature, makes sure that the App requesting the token is the same App mentioned in the authorization grant, and then hands over the access token.

AuthS: Hmm..everything checks out fine. Here is the access_token.

APP <-- 200 access_token and expiry --- AUTHS (FB)

End of Act 2.
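One way to picture the “signed by the Authorization Server himself” check in Act 2 is an HMAC over the grant: only the AuthS knows the signing key, so only the AuthS could have produced the signature it later verifies. The Python sketch below is a toy – the key, grant layout, and field names are invented and bear no resemblance to Facebook’s actual wire format.

```python
# Toy AuthS for Act 2: sign grants with a key only it knows, then verify
# its own signature when exchanging a grant for an access token.
import hashlib
import hmac
import secrets

SIGNING_KEY = b"auths-private-key"      # known only to the AuthS
APP_CREDS = {"app_id": "app_secret"}    # the App registered its secret earlier

def issue_grant(user, app_id):
    """What the AuthS hands the User at the end of Act 1."""
    payload = f"{user}:{app_id}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def exchange_grant(grant, app_id, app_secret):
    """Act 2: validate our own signature, authenticate the App, mint a token."""
    user, granted_app, sig = grant.split(":")
    expected = hmac.new(SIGNING_KEY, f"{user}:{granted_app}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("grant was not signed by this AuthS")
    if granted_app != app_id or APP_CREDS.get(app_id) != app_secret:
        raise ValueError("App authentication failed")
    return {"access_token": secrets.token_urlsafe(16), "expires_in": 3600}
```

Note that the App must present its own credentials here; this mirrors the point made in Act 3 below that App authentication happens only with the AuthS.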

ACT 3: Resource Server gives App access to protected resources

App and Resource Server are on the stage.

App: Here is the access_token. Let me borrow the User’s protected resources.

APP --- GET /me?access_token=:access_token ---> RS (FB)

The ReS trusts the access_token that was issued by the Authorization Server (obviously, since he was the Authorization Server) and grants temporary access to the App. Note that the App does not have to authenticate himself to the ReS; App authentication only happens with the AuthS.

APP <--- 200 User's resources --- RS (FB)
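Act 3 boils down to the ReS looking the token up and checking its expiry. A toy sketch follows (invented names again; the AuthS and ReS share one token store here, which is easy since both roles are played by Facebook):

```python
# Toy Act 3: the ReS trusts any unexpired token found in the store the
# AuthS writes to. Names and structure are illustrative only.
import time

_token_store = {}   # access_token -> (user, expiry timestamp)

def auths_mint_token(token, user, ttl=3600):
    """Called by the AuthS at the end of Act 2."""
    _token_store[token] = (user, time.time() + ttl)

def res_get_resources(access_token):
    """GET /me?access_token=... - note the App never authenticates here."""
    entry = _token_store.get(access_token)
    if entry is None:
        return 401, "unknown token"
    user, expiry = entry
    if time.time() > expiry:
        return 401, "token expired"
    return 200, f"{user}'s photos"
```

The token alone is the key to the resources, which is also why it is only valid “for a short while”: anyone holding it can use it until it expires.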


And with that, this merry tale comes to an end. Here is a snapshot of all the HTTP requests/responses at a glance.

USER --- GET /something_useful --> APP
USER <-- 302 /oauth?client_id=:app_id&redirect_uri=http://app/something_useful&state=xxx --- APP
USER --> GET /oauth?client_id=:app_id&redirect_uri=http://app/something_useful&state=xxx --- AUTHS (FB)
USER <-- 200 login form --- AUTHS (FB)
USER --- POST user fb creds ---> AUTHS (FB)
USER <-- 302 http://app/something_useful?code=:authgrant&state=xxx --- AUTHS
USER --- GET /something_useful?code=:authgrant&state=xxx ---> APP
APP --- POST client_id=:app_id&client_secret=:app_secret&code=:authgrant ---> AUTHS (FB)
APP <-- 200 access_token and expiry --- AUTHS (FB)
APP --- GET /me?access_token=:access_token ---> RS (FB)
APP <--- 200 User's resources --- RS (FB)

Why I don’t write unit tests

Unit tests have saved me countless times. I know the benefits of TDD. Yet there are times when I do not write unit tests, because writing unit tests takes time, and my good senses have a tendency to desert me whenever I am racing against a deadline. So I cut corners, leave out unit tests, and, of course, pay for it later in production. This got me thinking – which parts of authoring UTs are the most time-consuming? And can I do something about it? It turns out writing mock objects is one of the most time-consuming parts of authoring UTs. And yes, there is something I (and you) can do about it.

Common unit test pattern

Let’s say you have the simple 3-tier architecture that is most common for web applications – in this example, a SnackController in the web tier that calls a CookieService in the middle tier.

Now, I want to write a unit test for the SnackController. I would end up writing 2 pieces of code – the actual unit tests for the SnackController and a mock CookieService.

Why mocks?

  1. I want to test how SnackController behaves when CookieService throws unexpected errors and faults. In other words, I want to inject faults in CookieService.
  2. If CookieService is a remote process, usually running on a different server, I don’t want my SnackController unit tests to take a dependency on network I/O or RPC. I want my unit tests to be “self contained” and fast.
  3. I don’t want bugs in CookieService to affect the testing of SnackController.

Reason #1 is the strongest reason why I write mock objects. For reason #2, I am OK with making calls over the wire in my unit tests. Sure, they will fail occasionally due to network flakiness or other reasons outside the control of my test environment. But if it can happen in a test environment, it will surely happen in production! So I had better have the right recovery actions in place in SnackController. And as long as I run the CookieService unit tests before running the SnackController unit tests, I am reasonably assured that reason #3 will not be an issue.

Proxy instead of mock

Now, instead of writing a mock service for the purpose of injecting faults, wouldn’t it be cool if I were to write some sort of proxy object with delegates (function pointers, for the old-schoolers amongst us :-)) that are wired up to methods in the real CookieService? That way I could re-wire any delegate in the proxy to a faulty method at runtime and inject faults into the SnackController calls.
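The same idea carries over to dynamic languages without any IL magic at all. Here is a minimal Python sketch – my own names, not the actual dynamic_proxy API – of a proxy that forwards every call to the real object but lets you rewire individual methods to faulty ones at runtime:

```python
# Runtime-rewirable proxy (illustrative names, not the dynamic_proxy API).
class Proxy:
    def __init__(self, real):
        self._real = real
        self._overrides = {}           # method name -> replacement callable

    def change_behavior(self, name, fn):
        """Rewire one method to a user-supplied (e.g. faulty) callable."""
        self._overrides[name] = fn

    def restore(self, name):
        self._overrides.pop(name, None)

    def __getattr__(self, name):
        # Called only for attributes not found normally: fall through to
        # the real object unless the method has been rewired.
        return self._overrides.get(name) or getattr(self._real, name)

class CookieService:
    def bake(self, n):
        return ["cookie"] * n

def faulty_bake(n):
    raise TimeoutError("injected fault")

svc = Proxy(CookieService())
assert svc.bake(2) == ["cookie", "cookie"]   # forwarded to the real service
svc.change_behavior("bake", faulty_bake)     # callers now see the injected fault
```

With this in hand, the SnackController tests can flip CookieService between real and faulty behavior per test case, without hand-writing a mock.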

Auto-generated proxy

Sounds cool. Now I can switch between faulty and actual behavior at runtime. But it does not solve my original problem of saving time: I ended up writing a proxy service instead of a mock service, spending just as much time as before. What I really need is a proxy factory that can create – at runtime – the above proxy object, delegates and all. It turns out that with a bit of IL magic, I can do just that. And I did 🙂 With two lines of code, I can create a proxy object for any given object, implementing any given interface.

ProxyFactory pf = new ProxyFactory(false);
ICookieService proxy = pf.Create(realCookieSvc);

By default, the proxy object implements ICookieService and all its methods are wired up to the corresponding methods in the realCookieSvc. ProxyFactory is the class that I wrote that has all the IL magic in it.

ProxyFactory has a ChangeBehavior method that can rewire any method in the proxy to a user-supplied method. So in this case, I would implement a faulty method and ask ProxyFactory to rewire one of the good methods in the proxy to the new faulty method. All of this happens at runtime.

You can find the code and this example (in much more detail) in the dynamic_proxy project on GitHub. It is under a Creative Commons license, so do as you please with it :-).

“But that is not good design…” is a refrain I have heard in countless software design meeting arguments. And usually these arguments devolve pretty quickly into philosophical disagreements based on opinions rather than facts. This obviously raises the question – what is good software design? I’ll add my own opinion to this ongoing discourse with 7 principles of good software design.

1. Functional
A few months ago I was at a Lean Startup Meetup where one of the panelists was the VP of technology of a well-known company that had just acquired a smaller company. Software architects working for her were commenting that the design of the acquired software was terrible, when one of them had the sudden “insight” – “but we just paid millions of dollars for this, because the shi*t works!” And that, to me, is the first and foremost principle of good design. Any piece of software must be able to execute its core functionality well; otherwise it is all for naught. Good software is functional.

2. Robust
I was showing off my mad coding skillz to a friend of mine who happens to be a software architect. He gave me a kind, fatherly look and said, “This is all fun and good, but can you write code that will live on for years and years? This phone that you are using has a piece of code that I wrote 5 years ago.” Which brought me to my second epiphany about good software design: it has got to be robust. But what does robust really mean? To me it means three things – the software is resistant to failures, it is able to recognize when failures occur, and it is able to recover from failures. Good software is robust.

3. Measurable
My third design principle was not a sudden a-ha moment for me. Rather, it grew on me gradually, as a result of working in environments that were extremely data driven, sometimes ruthlessly so. It should be possible to measure how the software is doing out in the wild, outside the confines of my test fixtures. Of course, the actual metrics will differ based on the business and the nature of the software. An oft-used metric for web applications is the number of HTTP 500 Internal Server Errors returned. For UI-based applications, it is usually the number of seconds the UI takes to respond. But if you find yourself writing software without putting any thought into how to measure it (whatever “measure” means for you), know that you are violating a good design principle. Good software is measurable.

4. Debuggable
I remember my days working as a developer at an e-commerce company (yes, that one), where I would get paged because the ordering system was misbehaving. I would be the only developer debugging the live system, all the while surrounded by a bunch of manager-types who would demand “status” every second minute. Walking through the code in that situation was – to put it mildly – not fun. And it brought home to me the realization that any software I write should have logging that is not just there for the heck of it, but that will really help me debug when the need arises. Since then I have put debug APIs on my web services, added the ability to dump debug state on demand, etc. Good software is debuggable.

5. Maintainable
One of the ways in which recruiters try to attract software developers is by telling them that at the new company, they will be writing brand new software from scratch. Which highlights the fact that a lot of the work software developers do is maintaining software that somebody else wrote a while back. Software is easy to maintain if it has consistent styling, good comments, is modular, etc. In fact, there is a lot of literature on good software design that focuses just on principles that make it easy to change parts of the software without breaking its functionality. Good software is maintainable.

6. Reusable
Software developers worry incessantly about writing reusable software. A lot of software developer interviews that I have sat in (as an observer, interviewer, or interviewee) invariably pose an algorithm question – “how can you find the least common ancestor in a tree” – with the follow-up – “how can you make it more general”. But note that this is #6 on my list. That is because generalizing a specific solution is hard and time consuming. And a lot of the time nobody ends up reusing that piece of code anyway (of course you can argue that this is because the code was not reusable enough to begin with…). So unless you are absolutely sure that you are going to reuse this piece of functionality elsewhere, and you can time-box the effort of making it reusable, do not try this at home. Nevertheless, good software is reusable.
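The interview question itself is small enough to sketch. Here is the specific version for a binary tree, with the classic simplifying assumption that both values are actually present – the "make it more general" follow-up (n-ary trees, absent values, parent pointers) is exactly where the time-consuming part begins:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lca(root, a, b):
    """Least common ancestor of values a and b in a binary tree.
    Assumes both a and b are present in the tree."""
    if root is None:
        return None
    if root.val in (a, b):
        return root.val          # an ancestor can be one of the two values
    left = lca(root.left, a, b)
    right = lca(root.right, a, b)
    if left is not None and right is not None:
        return root.val          # a and b sit in different subtrees
    return left if left is not None else right
```

Usage: for a tree rooted at 3 with children 5 and 1, and 5 having children 6 and 2, `lca(root, 6, 2)` returns 5 and `lca(root, 6, 1)` returns 3.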

7. Extensible
Surprisingly, the design principle behind the greatest number of design arguments I have sat through comes last on my list. It usually comes up when you are building infrastructure software, and it usually starts with – “but suppose that tomorrow somebody wants to add X here…”. Be wary of this design principle. Be very wary. Unless “tomorrow” is a real date in the future, and “somebody” is a real person, it can lead you down a deep rabbit hole with nothing but dirt at the bottom. But I have to say this – really good software is extensible.

What is cloud computing – slides by Avilay Parekh

Cloud Computing = Elasticity

At its core, cloud computing is a business model innovation. Traditional datacenters have been renting out IT infrastructure by the month for quite some time now. Cloud computing providers have figured out a way to rent this stuff out by the hour, and in the process they have come up with a much more scalable and superior way of managing IT resources. This lets businesses scale out their infrastructure when they need to, and more importantly, scale back in when the resources are not needed anymore. Let's take the example of an accounting firm. It will typically have a heavy load during tax season, from February to April. That means it needs more disks to store data, higher network bandwidth to communicate with other systems, more servers to process all the tax reports, etc. With a traditional datacenter the firm would typically enter into a long term contract sized for the high capacity season, with all the excess capacity being wasted for the rest of the year. With a cloud computing provider, on the other hand, the firm can scale out its IT infrastructure during tax season and scale back in for the rest of the year.
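A back-of-the-envelope calculation makes the elasticity argument concrete. All the rates and server counts below are made up for illustration – they are not real pricing from any provider:

```python
# Hypothetical rates, purely illustrative.
hourly_rate = 0.50        # $/server-hour from a cloud provider
monthly_rate = 200.0      # $/server-month on a long-term datacenter contract

peak_servers, base_servers = 20, 4   # tax season vs. the rest of the year
peak_months, off_months = 3, 9
hours_per_month = 730

# Traditional contract: pay for peak capacity all twelve months.
contract_cost = peak_servers * monthly_rate * 12

# Cloud: pay only for the capacity actually used each month.
server_months = peak_servers * peak_months + base_servers * off_months
cloud_cost = server_months * hourly_rate * hours_per_month
```

With these made-up numbers the year-round contract costs $48,000 while the elastic option costs $35,040 – and the gap widens the spikier the load is relative to the baseline.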

Datacenter Resources

Owning and operating a datacenter is a pretty expensive proposition. To begin with a datacenter needs large amounts of space to house the equipment. Next it would need powerful heating, ventilating, and air conditioning (HVAC) systems to control the large quantity of heat that would be generated by all the electronic equipment. By some estimates property and HVAC amount to around 52% of the total cost of operating a datacenter. The next big cost component is the electric power.  Then comes the actual hardware – servers, communication and networking infrastructure, data storage equipment, physical security equipment, etc. Last but not least is the software – OS, application software, etc.

  • Physical space
  • HVAC
  • Power
  • Hardware
  • Software

Datacenter Business Models

Clearly, owning and operating a datacenter is expensive and requires specialized knowledge. Co-location facilities or “colos” make IT resource management one step easier by renting out physical space where businesses can install their own hardware and operate their own software. The colo takes care of HVAC and power. Managed datacenters make this even easier by renting out the hardware in addition to the space, HVAC, and power. Most managed datacenter providers are also hosters, that is, they provide web hosting services where businesses can buy the capability of running their web applications without worrying too much about the underlying infrastructure. Cloud computing providers rent out the same set of IT resources, but by the hour. When a company rents “compute capacity” or “CPU cycles”, it is actually renting the entire gamut of resources that make up a datacenter.

Pillars of Cloud Computing – Multi-tenancy and Virtualization

Multi-tenancy and Virtualization are the two pillars upon which cloud computing stands. One of the ways cloud computing is able to provide IT resources by the hour is by offering a fairly uniform (read: non-customized) set of hardware/software that can be easily shared amongst multiple customers. This notion is at the heart of multi-tenancy. To share the same disk or network port or CPU amongst multiple customers, cloud computing providers have evolved better security and data protection mechanisms than traditional datacenters, which didn't need them to the same degree. The biggest enabler of multi-tenancy is virtualization. One of the most talked-about forms is machine virtualization via the hypervisor, where a high-end server can host several smaller virtual machines, which for all intents and purposes look like full computers to the applications running on them. It is far cheaper to have one machine with 8 CPU cores run 8 VMs than to run 8 full single-core servers.

Infrastructure as a Service (IaaS)

IaaS is mostly used by the IT Pro community. IaaS providers offer elastic IT infrastructure resources along with superior IT management. To make a business out of multi-tenancy and virtualization, IaaS providers have come up with better ways of monitoring performance and measuring SLAs, to better meter and eventually bill for usage. All of this innovation has led to better, more automated provisioning and overall management of IT resources. AWS is the most notable IaaS provider. Windows Azure has also started offering a preview version of its IaaS services.

Platform as a Service (PaaS)

PaaS is mostly used by the developer community. PaaS providers provide a platform to develop and run an online service, abstracting away the underlying IT infrastructure to the maximum extent possible. Developing online services is hard. Any service that helps in this development – either by simplifying it, or by providing the plumbing for it – qualifies as a PaaS service. Running an online service is even harder. Running includes deploying, testing, and real time monitoring. Any service that helps in running an online service also qualifies as a PaaS service. As you might imagine, a lot of services fit this broad definition of PaaS. Several companies are coming up with innovative ways to make it increasingly easy to develop and run an online service. Some PaaS elements that I have seen so far are –

Things that make it easy to develop an online service –

  • Programming frameworks: These range from general web application frameworks like Ruby on Rails and ASP.NET to new programming constructs like the web role, worker role, and VM role in Azure.
  •  Data storage abstractions: Most PaaS providers have distributed storage systems that are URL addressable. AWS has S3, Azure has Azure Storage. This talk gives an in-depth explanation of the Azure Storage system’s architecture.
  • Network abstractions: Ability to create and work in a VPN in the cloud, create a virtual IP address (VIP) that fronts multiple instances of a service component – usually a website, etc. are examples of network abstractions in use today.
  • Source control system: Heroku is one of the first PaaS providers I saw with seamless integration with git hosts like GitHub. A git push from your local developer machine results in that code being deployed to the cloud.
  • Add-on services: An online service typically uses other services like some sort of an Identity service, a mailing service, a logging service, etc. These services are usually third party services made available by the PaaS provider.

Things that make it easy to run an online service –

  • Service model: This describes the IT resources the service needs in order to be deployed.
  • Configuration management: Ability to update the configuration on a running service, specifying different configuration values for different environments, etc. are examples.
  • Rolling upgrades and deployment: Ability to upgrade the online service without taking it down entirely.
  • Scaling: Ability to increase and decrease the instances of any service component.
  • Multiple pre-production environments: Ability to define different deployment environments for testing purposes.
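Several of the run-side ideas above can be sketched as plain data. The field names below are invented for illustration – every real PaaS provider has its own service model schema – but they capture declared resources, per-environment configuration overrides, and an instance-count scaling knob:

```python
# Hypothetical service model: all field names are made up.
service_model = {
    "service": "order-frontend",
    "roles": {
        "web":    {"instances": 3, "vm_size": "small"},
        "worker": {"instances": 2, "vm_size": "small"},
    },
    "environments": {                 # multiple pre-production environments
        "staging":    {"web": {"instances": 1}},
        "production": {"web": {"instances": 3}},
    },
}

def instances_for(model, role, env):
    """Effective instance count: environment override, else the role default."""
    override = model["environments"].get(env, {}).get(role, {})
    return override.get("instances", model["roles"][role]["instances"])
```

Scaling then becomes editing one number in the model and letting the platform reconcile running instances against it, rather than provisioning hardware by hand.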

Is Cloud Computing for me?

YES. For small businesses cloud computing provides a low cost alternative to spinning up an entire IT department. For large enterprises it provides a test bed for newer applications and the ability to burst from on-premises infrastructure onto the cloud for periodic spikes in usage. For software startups it provides a low maintenance way to launch new products. As a developer you will need to get up to speed with this new paradigm of software development. As an IT Pro you will want to leverage this technology to run your IT organization more efficiently. You get the idea…