Posted by: jsonmez | April 14, 2013

What Makes Code Readable: Not What You Think

You often hear about how important it is to write “readable code.”

Developers have pretty strong opinions about what makes code more readable. The more senior the developer, the stronger the opinion.

But, have you ever stopped to think about what really makes code readable?

The standard answer

You would probably agree that the following things, regardless of programming language, contribute to the readability of code:

  • Good variable, method and class names
  • Variables, classes and methods that have a single purpose
  • Consistent indentation and formatting style
  • Reduction of the nesting level in code

There are many more standard answers and pretty widely held beliefs about what makes code readable, and I am not disagreeing with any of these.

(By the way, an excellent resource for this kind of information about “good code” is Robert Martin’s book, Clean Code, or Steve McConnell’s book that all developers should read, Code Complete. *both of these are affiliate links, thanks for your support.)

Instead, I want to point you to a deeper insight about readability…

The vocabulary and experience of the reader

I can look at code and in 2 seconds tell you if it is well written and highly readable or not.  (At least in my opinion.)

At the same time, I can take a sample of my best, well written, highly readable code and give it to a novice or beginner programmer, and they don’t spot how it is different from any other code they are looking at.

Even though my code has nice descriptive variable names, short well named methods with few parameters that do one thing and one thing only, and is structured in a way that clearly groups the sections of functionality together, they don’t find it any easier to read than they do code that has had no thought put into its structure whatsoever.

In fact, the complaint I get most often is that my code has too many methods, which makes it hard to follow, and the variable names are too long, which is confusing.

There is a fundamental difference in the way an experienced coder reads code versus how a beginner does

An experienced developer reading code doesn’t pay attention to the vocabulary of the programming language itself.  An experienced developer is more focused on the actual concept being expressed by the code—what the purpose of the code is, not how it is doing it.

A beginner or less experienced developer reads code much differently.

When a less experienced developer reads code, they are trying to understand the actual structure of the code.  A beginner is more focused on the actual vocabulary of the language than what the expression of that language is trying to convey.

To them, a long variable name isn’t descriptive, it’s deceptive, because a name like NumberOfCoins hides the fact that the variable is just an integer, personifying it as something more.  They’d rather see the variable named X or Number, because it’s confusing enough just to remember what an integer is.

An experienced developer doesn’t care about integers versus strings and other variable types.  An experienced developer wants to know what the variable represents in the logical context of the method or system, not what type the variable is or how it works.
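To make the contrast concrete, here is a toy example of my own (sketched in Python for brevity; none of these names come from any real codebase), first with the terse names a beginner might prefer, then with intention-revealing ones:

```python
# A hypothetical coin-counting calculation, written two ways.

# Terse names: to a beginner still tracking types, this is less to hold in the head.
def total(x, y):
    return x * 1 + y * 25

# Intention-revealing names: an experienced reader gets the domain story at a glance.
CENTS_PER_PENNY = 1
CENTS_PER_QUARTER = 25

def total_value_in_cents(number_of_pennies, number_of_quarters):
    return (number_of_pennies * CENTS_PER_PENNY
            + number_of_quarters * CENTS_PER_QUARTER)

print(total_value_in_cents(3, 2))  # prints 53
```

Both functions compute the same thing; only the second tells you it is about coins without reading the body.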

Example: learning to read

Think about what it is like to learn to read.

When kids are learning to read, they start off by learning the phonetic sounds of letters.

When young kids are reading books for the first time, they start out by sounding out each word.  When they are reading, they are not focusing on the grammar or the thought being conveyed by the writing, so much as they are focusing on the very structure of the words themselves.

Imagine if this blog post was written in the form of an early reader.

Imagine if I constrained my vocabulary and sentence structure to that of a “See Spot Run” book.

Would you find my blog to be highly “readable?”  Probably not, but kindergarteners would probably find it much more digestible.  (Although they would most likely still snub the content.)

You’d find the same scenario with musicians: experienced ones can read sheet music easily, while beginners would probably much prefer tablature.

An experienced musician would find sheet music much easier to read and understand than a musical description that said what keys on a piano to press or what strings on a guitar to pluck.

Readability constraints

Just like you are limited in the elegance with which you can express thoughts and ideas using the vocabulary and structure of an early reader book, you are also limited by both the programming language in which you program and the context in which you program it.

This is better seen in an example though.  Let’s look at some assembly language.

.model small
.stack 100h

.data
msg     db      'Hello world!$'

.code
start:
        mov     ax, @data  ; Point DS at the data segment
        mov     ds, ax
        mov     ah, 09h    ; DOS function 09h: display the message at DS:DX
        lea     dx, msg
        int     21h
        mov     ax, 4C00h  ; DOS function 4Ch: terminate the executable
        int     21h

end start

This assembly code will print “Hello World!” to the screen in DOS.

With x86 assembly language, the vocabulary and grammar of the language are quite limited.  It isn’t easy to express complex code in the language and make it readable.

There is an upper limit on the readability of x86 assembly language, no matter how good of a programmer you are.

Now let’s look at Hello World in C#.

public class Hello1
{
   public static void Main()
   {
      System.Console.WriteLine("Hello, World!");
   }
}

It’s not an exact comparison, because this version uses the .NET Framework in addition to the C# language, but for the purposes of this post we’ll consider C# to include the base class libraries as well.

The point though, is that with C#’s much larger vocabulary and more complicated grammar, comes the ability to express more complex ideas in a more succinct and readable way.

Want to know why Ruby got so popular for a while?  Here is Hello World in Ruby.

puts "Hello, world"

That’s it, pretty small.

I’m not a huge fan of Ruby myself, but if you understand the large vocabulary and grammar structure of the Ruby language, you’ll find that you can express things very clearly in the language.

Now, I realize I am not comparing apples to apples here and that Hello World is hardly a good representation of a programming language’s vocabulary or grammar.

My point is, the larger the vocabulary you have, the more succinctly ideas can be expressed, thus making them more readable, BUT only to those who have a mastery of that vocabulary and grammar.

What can we draw from all this?

So, you might be thinking “oh ok, that’s interesting… I’m not sure if I totally agree with you, but I kind of get what you’re saying, so what’s the point?”

Fair question.

There is quite a bit we can draw from understanding how vocabulary and experience affects readability.

First of all, we can target our code for our audience.

We have to think about who is going to be reading our code and what their vocabulary and experience level is.

In C#, developers commonly argue about whether or not the conditional operator should be used.

Should we write code like this:

var nextAction = dogIsHungry ? Actions.Feed : Actions.Walk;

Or should we write code like this:

var nextAction = Actions.None;
if (dogIsHungry)
   nextAction = Actions.Feed;
else
   nextAction = Actions.Walk;

I used to be in the camp that said the second way was better, but now I find myself writing the first way more often.  And if someone asks me which is better, my answer will be “it depends.”

The reason why it depends is that if your audience isn’t used to the conditional operator, they’ll probably find code that uses it confusing.  (They’ll have to parse the vocabulary rather than focusing on the story.)  But, if your audience is familiar with the conditional operator, the long version with an if statement will seem drawn out and like a complete waste of space.

The other piece of information to gather from this observation is the value of having a large vocabulary in a programming language and having a solid understanding of that vocabulary and grammar.

The English language is a large language with a very large vocabulary and a ridiculous number of grammatical rules.  Some people say that it should be easier and have a reduced vocabulary and grammar.

If we made the English language smaller, and reduced the complex rules of grammar to a much simpler structure, we’d make it much easier to learn, but we’d make it harder to convey information.

What we’d gain in reduction of time to mastery, we’d lose in its power of expressiveness.

One language to rule them all?

It’s hard to think of programming languages in the same way, because we typically don’t want to invest in a single programming language and framework with the same fervor as we do a spoken and written language.  But, as repugnant as it may be, the larger we make programming languages, and the more complex we make their grammars, the more expressive they become and ultimately—for those who achieve mastery of the vocabulary and grammar—the more readable they become. (At least the potential for higher readability is greater.)

Don’t worry though, I’m not advocating the creation of a huge complex programming language that we should all learn… at least not yet.

This type of thing has to evolve with the general knowledge of the population.

What we really need to focus on now is programming languages with small vocabularies that can be easily understood and learned, even though they might not be as expressive as more complicated languages.

Eventually when a larger base of the population understands how to code and programming concepts, I do believe there will be a need for a language as expressive to computers and humans alike, as English and other written languages of the world are.

What do you think?  Should we have more complicated programming languages that take longer to learn and master in order to get the benefit of an increased power of expression, or is it better to keep the language simple and have more complicated and longer code?

If you like this post don’t forget to Follow @jsonmez or subscribe to my RSS feed.

Posted by: jsonmez | April 7, 2013

Bad Advice: “Stop Working So Hard”

I’ve been seeing quite a few posts on Hacker News lately about why you should not work too hard and even saying you should work less than 35 hours a week.

(Now, don’t get me wrong.  I think the authors of these articles are awesome people who have accomplished huge things.  I don’t mean to disrespect any of these great entrepreneurs.  I just think some of them have confused where they are now with how they got there.)

Would we ever want to live in a world where working harder didn’t amount to anything more, but rather ended up returning you less?

I know plenty of people who work less than 35 hours a week, and I wouldn’t say they are doing the best work of their life.

In contrast, I know plenty of people who are working 50 to 60 hours per week and they are doing some amazing things.

You have to work hard now to reap the benefits later

At the beginning of every episode of Pat Flynn’s podcast he says

“Welcome to the smart passive income podcast where it’s all about working hard now so you can sit back and reap the benefits later.”

There is no way around this.  It is the principle of sowing and reaping at work.

While many well intentioned bloggers have urged you to not put in those extra hours at night, but rather to take time to do what you want and live a life outside of your work, they have forgotten the very path they took to get to where they are today.

If you are in that coasting season of your life, then please take their advice.  They are 100% right.  There is a point of diminishing returns where you don’t gain much more benefit by spinning the pedals harder.

Ever ridden a bike downhill really fast?


You know how at first you can start pedaling and it will actually make you go down the hill faster, but at some point the pedals just start spinning themselves?

You reach that point where you can’t actually move your legs fast enough to make much of a difference.  Every couple of seconds, your foot will hit that tiny bit of resistance which tells you that you actually did something, but most of the time you are just spinning your loose pedals, not actually adding any speed.

It’s a pretty good feeling zooming down that hill with minimal effort on your part.  There is no need to pedal furiously like you did to get up the hill.  If you are pedaling furiously at that point, not only are you wasting your effort, but you are missing out on taking time to enjoy the best part of the ride.

You have to climb the hill before you can sail down it

When riding a bicycle, there is only one way to reach a point where you can sail down a hill effortlessly—you have to climb up a hill first.

Altitude change down requires previous altitude change up.  No way around it.

Pedaling a bike up a hill is hard work.

Not only do you have to keep working to move the bike up the hill, but every time you stop pedaling, you run the risk of rolling backwards.

The faster you want to get up the hill, the harder you have to pedal and the more you risk tiring out and rolling down the hill.

There is no rest, there are no breaks when pedaling up the hill.  The best you can do is get off the bike for a while and walk it up the hill, but that will surely slow you down.

And so it is with life in general.

My personal hill

I’d like to buy into the story that we can just take it easy and good things will come, but the reality of the situation is that you’ve got to put in work first—hard work.

I started buying real estate when I was 18 years old.  I bought my first house, which is a rental I still have today.

Since then, I’ve been buying properties at a rate of about 1 every couple of years.

It hasn’t been easy.  It has taken huge sacrifices, but from when I started I knew that I was pedaling my bike up the hill.

I had also been working as a developer full time for about the past 15 years.  During that time, I was working nights and weekends to handle my real estate, build apps, and most recently create online courses for Pluralsight.

Only at the beginning of this year was I able to finally quit my regular job working for someone else and start working completely for myself.

It took a lot of extra hours on nights and weekends, week after week for over 2 years to get there.

Just within the last year have all the real estate investments that I have been making for the last 15 years started to actually put some money in my pocket.

I’m still at the point where I am working 60 hour weeks just about every week.  I am still climbing up the hill.

But, the good news is I can see the crest and I know that if I keep pushing down on those pedals, I’ll reach the peak from where I can coast down.

My advice

Don’t buy into the idea that there is some way to get around hard work.

Stop running away from hard work and start embracing it.  I’ve learned from experience that it takes much more effort overall to avoid hard work than it does to do it, and avoiding hard work yields no benefit, long term or short.

Make the right sacrifices.

Don’t sacrifice your marriage or family in order to get ahead.  In the end, it will put you behind.  Remember, there is no more costly pursuit than divorce.

Make time to be with your spouse, set aside time to play with the kids every day, if you have them.  Take a day off to have a family day.

Instead, sacrifice from this list:

  • Watching TV
  • Hanging out with friends
  • Playing games
  • Goofing around
  • Browsing the web

Yeah, it might suck for a while, but if you want to climb that hill now, so that you can cruise down it later, you are going to have to make some sacrifices.

Don’t waste your time.

Here is a list of things I don’t do:

  • Cut my own lawn
  • Wash my car
  • Clean my house
  • Any kind of home improvement work

I pay for these things and instead spend that time—not sitting on the couch watching TV—but working hard at what I do best: doing things that will generate more money than it costs me to pay someone else to do the items on that list.

I use a service called Fancy Hands to handle many of the time consuming tasks I can delegate out.  I have saved tons of time and money by using that service. (Disclosure: that link is my referral link to that site.)

Every time I am doing something, I ask myself if I should be paying someone else to do this.  And if your time is escaping you completely, start tracking it.

Lighten your load.

Want to make it easier to pedal a bike up a hill?

Good, all you have to do is carry less stuff with you.

This means, get your expenses down.  Start being smart with your money.

Pay off debts, don’t go into debt.  Don’t be pennywise and pound foolish, but at the same time learn to live on less.

If you learn to live on 2k a month, guess how much you need to live?  That’s right, 2k a month.

If you have saddled yourself with debt and expenses that make it so you need 10k a month to live, you are going to have to pedal a lot harder… just saying.

(If you want to read a good book that helps you learn this mindset, read Rich Dad Poor Dad by Robert T. Kiyosaki.)

It all comes down to this

Be willing to work hard now in order to have a better, more relaxed tomorrow.

Don’t try to take shortcuts or get rich quick, those roads lead to disaster and wasted time.

Instead, if you are working a full time job now for someone else, give yourself 10 hours a week of “your time,” where you work for yourself.

Put in the time now to build that business on the side.  Make that sacrifice for 2 years or 5 years or however long it takes to get your bike pushed up that hill.

Don’t give up, don’t be afraid to work hard, and don’t be sucked in by any preacher that preaches a fast way to riches and leisure by doing less.

Remember, those who show up everyday eventually beat out both the faster and the smarter.

If you like this post don’t forget to Follow @jsonmez or subscribe to my RSS feed.

The recent free courses from Pluralsight on teaching kids to program really got me thinking about this subject.

There seems to be a big backlash in the development community against the idea that everyone should learn to program.

I’m not sure exactly where it is coming from, but I suspect it has something to do with egos and fear.

Even within the development community, there seems to be a distinction between “real programmers,” and “not real programmers,” based on language or technology choice.


I have to admit, I have been guilty of this type of thinking myself, because a very easy way to increase our own value is to decrease the value of others.

But what I have come to find is that not only is the distinction between “real programmers” and “not real programmers” a false dichotomy, but that the distinction between a programmer at all and a layperson, is also not quite as clear, or at least it shouldn’t be.

Not everyone should be a programmer

It’s true, just as not everyone should be an accountant or a writer.  But I think we can all agree that everyone should understand basic math and be able to write.

Learning how to program and doing it professionally are two distinct things and they should not be lumped together.

It is pretty hard to imagine a working world where no one except writers could write.

Imagine wanting to send an email to your boss, but you don’t know how to write, so you have to ask the company writer to do it for you.

That is what the world would be like if we insisted that only writers needed to learn how to write.

But perhaps you think I am just being silly; after all, the need to write is prevalent in everyday situations, but the need to program isn’t.

But I challenge you to consider whether it is actually true that the need to write is much more prevalent than the need to program, or whether, because everyone knows how to write, the need for writing is simply recognized more.

Imagine if everyone you interacted with on a daily basis knew how to write code.  Imagine that, just like everyone has a word processor on their computer that they know how to use, there was an IDE that allowed them to write simple scripts.

Think about how that changes the world.

APIs everywhere!

The first thought that comes to my mind in that world is that there would be APIs everywhere.

Every single program would have an easily accessible, scriptable API, because every user of that program would want to be able to automate it.

In time, the way we viewed the world would completely change, because just like products today are designed with the thought that users of those products can write, products of that time period would be designed with the assumption that users of those programs can program.

Suddenly everything becomes accessible, everything interfaces with everything else.

Doctors build their own simple tools based around their specific process by combining general purpose software from their equipment.

There is a Pinterest full of code snippets instead of pictures.

Every device and piece of software you interact with has an API you can use to automate it.

The point is that we can’t conceive what the world would look like if programming was as prevalent as writing, but such a world can and should exist.

Computers and technology are such a large part of everyone’s lives that it is becoming more and more valuable to be able to make use of such a common element.

It starts with kids

We have to stop thinking programming is hard and realize that it is one of the easier things we can teach kids to do.

If a person can grasp and use a complex language, such as English, that person can learn how to program.

Programming is much simpler than any spoken or written language.

But, we have to stop erecting these artificial barriers that make programming computers seem more difficult than algebra.

Not only that, but we need to start integrating programming concepts into learning these other subjects.

Is there really much difference between an algebraic variable and a variable in a programming language?

Isn’t most mathematics solved by learning an algorithm already?  Why not at the same time, teach how to program that algorithm?  Not only would it make the subject much more interesting, but it would build a valuable skill as well.
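As a sketch of what that might look like (my own example in Python, not material from any particular curriculum), the greatest-common-divisor procedure students already perform on paper translates almost line for line into code:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly divide and keep the remainder,
    exactly the procedure taught by hand."""
    while b != 0:
        a, b = b, a % b  # the "divide and carry the remainder down" step
    return a

print(gcd(48, 36))  # prints 12
```

A student who can execute the paper algorithm already understands every line here; the code just makes the steps explicit.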

We spend a great deal of time educating kids with knowledge they will never use—basically filling their minds with trivia.  But how much more likely would they be to use the skills that learning to program would give them?

What was hard yesterday is easy today

Calculus, geometry, probability, the structure of a living cell, electricity… What do they all have in common?

These concepts used to be advanced topics that only the most educated in society knew about or discussed, but now have become common knowledge that we teach children in school.  Ok, well maybe not calculus, but it should be.

Over  time, the concepts that only the brightest minds in a field could possibly understand are brought down to the masses and become common knowledge.

It is called “standing on the shoulders of giants,” and it is the only way our society advances as a whole.

Imagine if it was just as difficult for us to grasp the concepts we are taught in school as it was for the pioneers of that knowledge to obtain it… We wouldn’t ever advance as a whole.

But, fortunately, what is hard yesterday ends up being what is easy today.

The same will eventually happen with computer programming, the question is just how long do we need to wait?

It’s all about breaking down walls

I try to never say that something is hard, because the truth is that although there are some things in life that are hard, most things are easy if you have the right instruction.

It is natural for humans to want to think the knowledge or skills they have acquired are somehow special, so we have a tendency to overemphasize the difficulty of obtaining that knowledge or set of skills.  But we’ve got to work through the fear over job security, set aside our egos, remove the veil of complexity from programming, and make it simple.

The value we can bring by helping others to understand the knowledge we have is much greater than the value that using that knowledge alone provides.

If you like this post don’t forget to Follow @jsonmez or subscribe to my RSS feed.

There isn’t a large amount of advice out there on developer job interviews.

I’ve found that many talented developers have difficulty with job interviews, because they spend most of their time focusing on what they are truly passionate about, technology and development, and not much time prepping their interview skills.

It’s unfortunate, because having good job interview skills can really help you advance your career by giving you opportunities you wouldn’t be able to get without being skilled in this area.

1. Hire an expert to create your resume

I’ve mentioned this idea before, but it is so important that I’ll say it again.  Unless you write resumes for a living, you are not a professional resume writer.

There are people who write resumes for a living and those professional resume writers probably don’t try and write their own software to use on their computer.

So, if resume writers don’t write software, why would software developers try and write resumes?

Perhaps you can do a good job, but chances are a professional can do a better job.

My advice, if you want to get the largest number of possible opportunities for a job, bite the bullet and pay the dollars to have your resume written professionally.  It is a relatively small investment for the potential gain of landing a much better job based on the large number of opportunities you are likely to have.

2. Research your interviewer

I’m always amazed when I conduct a developer interview after sending an email ahead of time to the developer I am interviewing, with my full name and my blog address, yet when I speak to them in the interview they seem to know nothing about me.

On the other hand, I’ve had interviews where I’ve interviewed someone and they worked into the interview a mention of a blog post I had written or a course of mine they had watched on Pluralsight.

Guess which developer I was more likely to recommend for a job?

We are all human, we like to know that someone is interested in us.  Dale Carnegie taught me the easiest way to get someone interested in you is to show a genuine interest in them. (Yes, I’m recommending this book again for like the 8th time, and yes, that is an Amazon affiliate link.)

Whether this is fair and objective is beside the point.  If you are interviewing for a job, it is just ludicrous not to research the company you are interviewing at and the interviewer (if you know who it will be) ahead of time.

Today it is easier than ever to find someone’s Facebook page, Twitter handle or blog.  You can learn quite a bit with just a little bit of research and it shows that you actually are detail-oriented and care about your career.

3. Get an inside referral

Want to know the absolute easiest way to get a job?  Get an inside referral.

You will be twice as likely to be interviewed and 40% more likely to be hired.

Yeah, that’s right, it makes that much of an impact!

It isn’t even very difficult to do, if you are willing to plant a few seeds ahead of time to make sure there are plenty of apples on the tree when you need to pick one.

A while back I found a company I wanted to work for.  So, what did I do?

Well, I found a developer at that company whose thoughts and ideas seemed similar to my own, and I started following his blog.

I commented on his blog and showed an interest in his work and the company he was working for, and eventually I had an opportunity from that situation to get an inside referral.

Many developers say, “well, I don’t know anyone in XYZ corp.”  Ok, fine if you want to give up there, go ahead, but I bet, if you try, you can find a way to meet and befriend someone in just about any company.

But the secret is, you have to network before you need a job, so start doing it now!

4. Learn to solve algorithm based problems

I’ve got a 6 step process I use to solve algorithm based problems that often come up in developer interviews.

I go step by step and teach you how to do this in my Pluralsight course on Job Interviews.

It is an important skill that every developer should have and it isn’t really that difficult to learn.

Many tough job interviews include one or more questions where you are asked to solve some programming problem, either on a whiteboard or at a computer, yet many developers, who are otherwise great programmers, become completely paralyzed when asked to do so and flub it.

If you take the time to learn how to solve these kinds of problems, you’ll easily put yourself in the top 10% of developers who interview for most jobs and you’ll be much less nervous about being asked to solve a problem on the spot.

The reason why we get nervous has nothing to do with performance anxiety; it has everything to do with familiarity and confidence in solving these types of problems.

For example, if someone asked you to do 10 jumping jacks, you probably wouldn’t get all nervous and flail around… why?  Because you are confident you can do it.

Build your confidence in this area and you won’t be nervous either.
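The post’s own 6-step process lives in the course, but to give a sense of the scale of these questions, here is a classic warm-up of the kind interviewers ask (my example, sketched in Python): reversing the order of words in a sentence.

```python
def reverse_words(sentence):
    """Whiteboard classic: return the words of a sentence in reverse order."""
    words = sentence.split()          # break the sentence on whitespace
    return " ".join(reversed(words))  # stitch the words back together, reversed

print(reverse_words("the quick brown fox"))  # prints: fox brown quick the
```

Problems like this are rarely hard; they only feel hard when you have never practiced talking through one out loud.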

5. Answer questions with passion

One-word answers to questions or one-sentence textbook definitions may technically be correct, but if that is all you give, you are missing an opportunity to showcase one of the greatest assets a developer can bring to a team—passion.

If I ask you what polymorphism is, I am not just asking to find out if you can read a textbook and memorize a definition to repeat back to me later.  I am trying to find out what you think about polymorphism.  I want you to expound upon the subject and use it as an opportunity to have a conversation.

Now, not all interviewers think the same way, and you have to be a little cognizant of when it is time to shut up, but the point is you should try and show some passion in your answers and expound upon them if possible.

6. Avoid “trap” questions

Why are you looking for a new opportunity?

Name your greatest strength and your greatest weakness.

What was the result the last time you and a coworker disagreed on a technical issue?

You should really know how you are going to answer these types of questions before you are asked them and what the interviewer is looking for when asking these questions.

I’ve got some recommendations on exactly how to answer these questions in my course, but you should at least consider these kinds of questions ahead of time and reason through some of the possible answers you can give.

For example, if we look at just that first question about why you are looking for a new opportunity…

In many cases interviewers are trying to find out if you are going to badmouth your current or previous employers.  Doing so is a sure sign you will do the same to them, so don’t do it.

If you don’t think about this ahead of time, you can easily fall into the trap of saying something negative about your current job and severely hurting your chances of landing that new job.

7. Don’t ever lie!

One of the worst things to do in an interview is to lie.

If you don’t know something, don’t make up an answer.  Don’t pretend like you worked with some technology if you haven’t or make up some story of how you used something in your last job.

Instead, either say that you don’t know, or say that you aren’t 100% sure but can try to give an answer based on what you think.  It also doesn’t hurt to follow up by asking the interviewer what the correct answer is, because you are genuinely interested.

There is a good chance that whatever an interviewer asks you about is something they know well, because interviewers don't want to risk looking foolish if you start talking about the subject.  For that reason, even if you consider yourself a good BS'er, most of your BS will be instantly detected, and you'll immediately lose your integrity, which is very hard to ever gain back.

8. Don’t ever be brutally honest

Many developers go overboard in the other direction and reveal too many personal details about themselves, thinking that honesty and complete transparency are the best policy.

While you shouldn’t lie, you also shouldn’t spill all the messy details of your life and all your personal flaws to your interviewer either.

People are drawn in by a bit of mystery and generally don't like to gamble on whether your OCD or your obsession with World of Warcraft will cause you to be a flop at your job.

Personality is good, character flaws are bad.

Don’t ever lie, but don’t volunteer information that is going to paint you in a bad light.  Not only will that information likely hurt you, but volunteering it also shows a lack of judgment.

9. Know your computer science basics

I also cover this in my Job Interview course, because it is so important and can be learned in less than an hour.

Too many developers claim that they don’t know what linked lists and stacks are, because they don’t have a formal education in computer science or because it has been too long since they graduated college.

I agree that we don’t use deep computer science concepts in most programming jobs, but as a professional software developer, you should at least know the basics.

I seriously doubt you’d want an electrician to rewire your house if that electrician didn’t know the basics of electrical engineering, so don’t assume anyone wants to hire someone who can code but doesn’t understand the fundamentals of the profession.

You don’t have to be a computer science professor, but you should at least know the basics, which I am sure can be taught in an hour, because I do so in my Job Interview course.
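To make “the basics” concrete, here is a minimal sketch, in Python and purely for illustration, of the two data structures mentioned above: a stack built on top of a singly linked list.  The class and method names are my own, not from any particular course or textbook.

```python
class Node:
    """One link in the list: a value plus a reference to the next node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


class Stack:
    """LIFO stack backed by a singly linked list; push and pop are O(1)."""
    def __init__(self):
        self.head = None  # top of the stack

    def push(self, value):
        # The new node points at the old top, then becomes the new top.
        self.head = Node(value, self.head)

    def pop(self):
        if self.head is None:
            raise IndexError("pop from empty stack")
        value = self.head.value
        self.head = self.head.next
        return value


s = Stack()
s.push(1)
s.push(2)
s.push(3)
print(s.pop(), s.pop(), s.pop())  # → 3 2 1
```

Being able to explain why push and pop here are constant-time, and why the last value pushed is the first one popped, is roughly the level of fluency an interviewer is checking for.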

10. Build experience creatively

Last but not least, many developers, especially developers starting out or moving from another career field, lack relevant work experience and have no idea how to get it.

It is a bit like the chicken-or-the-egg problem of which came first.

How do you get experience if you don’t have any?

The answer is to be creative.  There are many ways to get experience that don’t involve working directly for a company as a software developer.

Here are just a few ideas:

  • Join an open source project
  • Start an open source project
  • Build a mobile app and put it in the app store
  • Build a small web app
  • Start a blog
  • Present at code camps or other user groups

There are many ways you can get experience that will look good on your resume and give employers confidence in hiring you; you just may have to be a little creative.

Final words

Hopefully you’ve found these tips helpful.  I’ve found that there isn’t a large amount of good information out there for developers looking for good job interview advice, so I actually ended up creating a Pluralsight course on the subject, which you should check out if you want to find more about the tips I mention here.

If you are astute, you may be thinking to yourself, ah, that John Sonmez character writes a blog post to secretly promote his Pluralsight video, pretending to give free advice.

Well, I definitely got the idea to write this blog post to help promote my Pluralsight video, because, hey, that is what I do: I make Pluralsight videos.  But I hope that you found these tips themselves to be useful as well.

If you like this post don’t forget to Follow @jsonmez or subscribe to my RSS feed.

Posted by: jsonmez | March 10, 2013

7 Reasons Why You Should Tackle Hard Problems Last

I always hear the advice that we should tackle hard problems first.

It seems like pretty legitimate advice, and many of the reasons for saying so make sense, at least at a surface level.

The reasoning usually revolves around the idea that by tackling the hard problems first you can eliminate your biggest risk early and everything will be smooth sailing from there.

When you really think about the reasoning for solving hard problems first, though, most of it is not actually reasoning, but is based on one emotion… fear.

We tend to think it is best to solve hard problems first, because we are thinking about eliminating our fear, not because we are thinking about what approach has the highest chance of success or is the most optimal.

I call this FDD, or Fear Driven Development.

And when I think about it that way, I find myself hard pressed to find a really good reason for tackling hard problems first, besides being able to abort early.  Aborting early might be good in some cases, but I’d rather focus on success.


Here are 7 reasons why it might be a good idea to tackle the hard problems last instead of first.

1. Solving easy problems first gives you momentum

When a large ball starts rolling down a hill, it picks up speed rapidly and that large ball can bust through many barriers that it couldn’t before, simply because of one thing it has after rolling down a hill that it didn’t have before—momentum.

Conversely, trying to push a heavy ball up a hill is pretty hard.  And if there are barriers on the way to the top of the hill, not only do you have to fight gravity, but you have to push extra hard to get through those barriers.


Life is hard enough, why make it harder?

I recently received an email from a developer who was concerned that his team wasn’t gelling and that they didn’t have the expertise in the technology needed to solve the complicated problem ahead of them.

They were going to start the project by trying to integrate this fairly complex technology and he was afraid that it would cause them a large delay before they would be able to get version 1 out.

My advice?

Start with your simple problems; get working software out there as soon as possible.  Not only will the team gel much more easily as they are having success and making real progress, but that momentum will help them when it is time to solve the more difficult problem. 

Even if they have to throw the first version away, when they get to the hard problem, the momentum alone will make them much more likely to reach success in the end.

I could give 100 examples of how solving easy problems to gain momentum can benefit you, but you probably already know intrinsically that this is true.

Long story short, get a running start before taking a big leap.

2. Avoid solving the wrong problem

There are few things worse than spending weeks or months solving a difficult problem, only to find out in the end that you actually solved the wrong problem.

The big problem with solving the hard problems first is that the hard problems usually require a large amount of context in order to fully understand them.

It is very hard to get the right context for a hard problem when we take it out of its natural order of progression and artificially cut it to the front of the line.

You might like the idea of starting a college class by taking the final exam first, so you don’t have to worry about it later, but the problem with doing so is that you’ll lack the context and information needed to understand the questions and to know the answers.

When we tackle problems out of order to avoid leaving the hard problems to the end, we lose all of the learning and context that would help us solve those hard problems at the end, and we are much more likely to end up solving the wrong problem, which is a complete waste of time.

3. Someone else may solve the problem for you

Sometimes procrastination is a good thing.

Sometimes, when you purposely push off solving a hard problem till the end, you end up finding that someone else already solved your problem.

I was working on a Pluralsight video last week, using Camtasia 8 for editing, and I found that one of the video segments I was trying to load up was crashing the application every time.

I spent a few minutes trying to troubleshoot it, but nothing I was trying was working, so I had to make a decision.

I had 3 choices:

  1. Keep trying to solve this hard problem before moving on.
  2. Go on and do other videos and send off a support request to see if they could handle it.
  3. Make a new project and re-edit all the clips.

Choices 1 and 3 involved tackling a hard problem right then and there.

Choice 2 was to tackle easy problems and see if support could solve my hard problem for me, and if not, I would solve it at the end.

I ended up choosing option 2 and it paid off.  It turned out Camtasia support was able to solve my problem.  By the time I needed the project to complete my course, they had solved my hard problem for me and I didn’t waste any time upfront trying to tackle it myself.

Now it could have worked out differently; I might have had to solve the problem myself at the end.  But instead of assuming I would have to, and wasting perhaps a day or two trying to solve the problem myself, I pushed it off, kept working on easy problems, and gave someone else a chance to solve my hard problem for me.

It doesn’t happen all the time, but many times if we push off the hard problems we face, we find that by the time we get to them, someone else has already solved the problem for us.

4. Your own subconscious mind may solve the problem

When I said someone else might solve the problem for you, that someone else might actually be you—at least your subconscious mind.

Have you ever had the experience of thinking about a problem and not being able to figure it out, but then you wake up the next morning and suddenly have the answer?

It seems that our subconscious mind is more powerful than we realize.


In many cases, if we know of the hard problem we need to solve and have thought about it a little bit, our subconscious mind will start working on the solution, even though we are not aware.

Obviously this isn’t going to work all the time, and your subconscious mind isn’t going to write a bunch of code for you, but in many cases there is at least some benefit to throwing the problem off to our internal “worker thread.”

5. You are more committed to solving the hard problem when you risk everything you’ve done so far

One benefit to saving the hard problem for last is that you have some extra motivation in the form of loss aversion.

It has been demonstrated in several experiments that people work harder to avoid losses than to acquire equivalent gains.

We can use this knowledge to our advantage by doing the easy work first and letting our loss aversion help motivate us to solve the harder problems, because we don’t want to lose all the work we put into a project so far.

By starting with easy problems, we put some “skin in the game.”

If we try to solve the hard problems first, we have nothing to lose, so we are much more likely to give up.

6. Hard problems are often easy problems bundled together

I’ve talked many times about breaking things down and trying to keep things as simple as possible.

And it turns out that many hard problems (not all) are decomposable into many small easy problems.

If you strive to never solve hard problems and to always push them off, you may actually find out that you never have to solve hard problems.

Many times we can chip away at hard problems by taking bits of them off a little at a time and solving those easier problems.  Eventually, you may find that you’ve reached the tootsie roll center of your hard-problem lollipop and it is filled with chocolate candy!

Now, some problems aren’t very easily decomposable, but a good majority of problems are.  Once you develop the skills to chip off bits of hard problems into smaller easy problems, the world looks like a much more conquerable place.

So saving hard problems for last, and breaking off little pieces of them as you go, can be a good strategy to wear down your opponent before you have to face him.

7. Some hard problems are never truly solved

One of the big problems with tackling the hard problems first is that they tend to fill up as much time as you’ll give them.

If I give you an easy problem, like write a function to reverse a string, there isn’t much to think about.  You can solve it a number of different ways and there isn’t a huge difference in the efficiency of the different methods of solving it.  It is pretty unlikely you’ll spend weeks revamping your solution and thinking that it’s not quite right.
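To illustrate just how small the solution space of an easy problem is, here are two of those “different ways” to reverse a string, sketched in Python; the function names are my own, purely for illustration.

```python
def reverse_slice(s):
    # Idiomatic Python: a slice with a step of -1 walks the string backwards.
    return s[::-1]


def reverse_loop(s):
    # Explicit approach: prepend each character onto an accumulator string.
    out = ""
    for ch in s:
        out = ch + out
    return out


print(reverse_slice("hello"))  # → olleh
print(reverse_loop("hello"))   # → olleh
```

Either version is a few lines long, easy to verify, and done; there is nothing to keep revisiting for weeks, which is exactly what makes the problem well scoped.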

But, if I give you a problem like, create an in-memory database, not only is it a hard problem, but it has a huge number of possible solutions and can be optimized from now until the end of time.  You could spend 2 weeks working on that task or 2 years.

The problem is that many hard problems don’t have a good scope to them when they are solved in isolation.

If you design an engine for a car that isn’t built yet, you won’t know when you are done.

But if you design an engine for a car and you know how much it weighs and know what kind of transmission it will use and what kind of fuel efficiency it needs to have, you can have a much clearer understanding of when your engine design is done.

If you save the hard problems for last, the scope of those hard problems will be much better defined, which will keep you from wasting valuable time over-solving a problem or, as I mentioned earlier, solving the wrong problem altogether.


Posted by: jsonmez | March 3, 2013

Time Traveling To The Future Of User Interfaces

I really dislike using a keyboard and a mouse to interact with a computer.


Using a mouse is a more universal skill—once you learn to use a mouse, you can use any mouse.  But keyboards are often very different, and it can be frustrating to try to use an unfamiliar one.

When I switch between my laptop and my desktop keyboard, it is a jarring experience.  I feel like I am learning to type all over again.  (Of course I never really learned to type, but that is beside the point—my three-finger typing style seems to work for me.)


When I switch to a laptop, I also have to contend with using a touchpad instead of a mouse, most of the time.  Sure, you can plug in a mouse, but it isn’t very convenient and you can’t do that everywhere.

I also find that no matter how awesome I get at keyboard shortcuts, I still have to pick up that mouse or use the touchpad.  Switching between the two interfaces makes it seem like computers were designed for 3 armed beings, not humans.

Even when I look at a laptop, it is clear that half of the entire design is dedicated to the keyboard and touchpad—that is a large amount of wasted space.

I’m not going to say touch is the answer

You may think I am going in the direction of suggesting that tablets solve all our problems by giving us a touch interface, but that is not correct.

Touch is pretty awesome.  I use my iPad much more than I ever thought I would.  Not having the burden of the keyboard and mouse or touchpad is great.

But, when I go to do some text entry on my tablet or my phone, things break down quite a bit.

On-screen keyboards are pretty decent, but they end up taking up half of the screen and the lack of tactile feedback makes it difficult to type without looking directly at the keyboard itself.  Some people are able to rely on autocorrect and just let their fingers fly, but somehow that seems dirty and wrong to me, as if I am training bad habits into my fingers.


Touch itself is not a great interface for interacting with computers.  Computer visual surfaces are flat and lack texture, so there is no advantage to using our touch sensation on them.  We also have big fingers compared to screen resolution technology, so precision is also thrown out the window when we relegate ourselves to touch interfaces.

It is completely silly that touch technology actually blocks us from viewing the part of the screen we want to touch.  If we had greaseless pointy transparent digits, perhaps touch would make the most sense.

Why did everything move to touch then?  What is the big thing that touch does for us?

It is pretty simple: the only real value of touch is to eliminate the use of a mouse or touchpad and a keyboard.

Not convinced?

I wasn’t either, till I thought about it a bit more.

But, consider this… If you were given the option of either having a touch interface for your tablet, or keeping the mouse-like interface, but you could control the mouse cursor with your mind, which would you prefer?

And that is exactly why touch is not the future; it is a solution to a specific problem: the mouse.

The real future

The good news is there are many entrepreneurs and inventors who agree with me, and they are currently building new and better ways for us to interact with computers.

Eye control

This technology has some great potential.  As the camera technology in hardware devices improves along with their processing power, the possibility of tracking eye movement to essentially replace a mouse is becoming more and more real.

There are two companies that I know are pioneering this technology and they have some pretty impressive demos.

TheEyeTribe has an “EyeDock” that allows for controlling a tablet with just your eyes.


They have a pretty impressive Windows 8 tablet demo which shows some precise cursor control using just your eyes.

Tobii is another company that is developing some pretty cool eye tracking technology.  They seem to be more focused on the disability market right now, but you can actually buy one of their devices on Amazon.

The video demo for PCEye freaks me the hell out though.  I don’t recommend watching it before bed.

But Tobii also has a consumer device that appears to be coming out pretty soon, the Tobii REX.


Subvocal recognition (SVR)

This technology is based on detecting the internal speech that you are generating in your mind right now as you are reading these words.

The basic idea is that when you subvocalize, you actually send electrical signals that can be picked up and interpreted.  Using speech recognition, this would allow a person to control a computer just by thinking the words.  This would be a great way to do text entry to replace a keyboard, on screen or off, when this technology improves.

NASA has been working on technology related to this idea.

And a company called Ambient has a product called Audeo that is already in production.  (The demo is a bit rough though.)  You can actually buy the basic kit for $2000.


Gesture control

You’ve probably already heard of the Kinect, unless you are living under a rock.  And while that technology is pretty amazing, it isn’t exactly the best tool for controlling a PC.

But, there are several other new technologies based off gesture control that seem promising.

There are two basic ways of doing gesture control.  One is using cameras to figure out exactly where a person is and track their movements.  The other is to use accelerometers to detect when a user is moving a device (an example would be the Wii Remote for Nintendo’s Wii).

A company called Leap is very close to releasing a consumer-targeted product called Leap Motion that they are pricing at only $79.  They already have plans to sell it in Best Buy stores, and it looks very promising.

Another awesome technology that I already pre-ordered, because I always wanted an excuse to wear bracers, is the MYO, a gesture controlled armband that works by a combination of accelerometers and sensing electrical impulses in your arm.


What is cool about the MYO is that you don’t have to be right in front of the PC and it can detect gestures like a finger snap.  Plus, like I said, it is a pretty sweet looking arm band—Conan meets Bladerunner!

Obviously video-based gesture controls won’t work well for mobile devices, but wearable devices like the MYO that use accelerometers and electrical impulses could be used anywhere.  You could control your phone while it is in your pocket.

Augmented reality and heads up displays

One burden of modern computing that I haven’t mentioned so far is the need to carry around a physical display.

A user interface is a two-way street: the computer communicates to the user and the user communicates to the computer.

Steve Mann developed a technology called EyeTap all the way back in 1981.  The EyeTap was basically a wearable computer that projected a computer-generated image onto your eye, on top of what you were viewing.

Lately, Google Glass has been getting all the attention in this area, as Google is pretty close to releasing their augmented reality eyewear that will let a user record video, see augmented reality, and access the internet, using voice commands.

Another company you may not have heard of is Vuzix, and they have a product that is pretty close to release as well, the Smart Glasses M100.

Brain-computer Interface (BCI)

Why not skip everything else and go directly to the brain?

There are a few companies that are putting together technology to do just that.

I actually bought a device called the MindWave from NeuroSky, and while it is pretty impressive, it is still more of a toy than a serious way to control a computer.  It is able to detect different brain wave patterns, such as concentration or relaxation.  As you can imagine, this doesn’t give you a huge amount of control, but it is still pretty fascinating.

I haven’t tried the EPOC neuroheadset yet, but it has even more promise.  It has 14 sensors, which is a bit more intrusive, but it can supposedly detect your thoughts regarding 12 different movement directions, as well as emotions, facial expressions, and head rotation.

So where are we headed?

It is hard to say exactly what technology will win out in the end.

I think we are likely to see aspects of all these technologies eventually combined, to the point where they are so seamlessly a part of computer interaction that we forget they even exist.

I can easily imagine a future where we don’t need screens, because we have glasses or implants that directly project images on our retinas or directly interface with the imaging system in our brains.

I can easily see us controlling computers by speech, thought, eye movement, and gesture, transitioning seamlessly between different devices and environments.

There is no reason why eye tracking technology couldn’t detect where our focus is and we could interact with the object of our focus by thinking, saying a command or making a gesture.

What I am sure of though is that the tablet and phone technology of today and the use of touch interfaces is not the future.  It is a great transition step to get us away from the millstone around our necks that is the keyboard and mouse, but it is far from the optimal solution.  Exciting times are ahead indeed.


Posted by: jsonmez | February 24, 2013

Where Is Agile Now?

It seems just yesterday I was trying to push forward the idea of developing software in an Agile way, but somehow now it seems like that battle is over.

As if we won without a fight.


When I look around now, I don’t see software development shops doing big upfront design.  I don’t see consultants knocking down doors to certify you as a Scrum Master.

It seems that we have now entered a phase where Agile is accepted as the default and now instead of everyone trying to pitch the idea of Agile, everyone is trying to fix their broken Agile implementations.

The funny thing is, still no one even knows what Agile is

The big problem with Agile from the beginning has always been trying to define it.

Pretty early on, this problem was solved by calling it Scrum.  Scrum was something that was easily definable, and something you could get certified in.

Scrum was a set of rules you follow that makes you Agile.

At least that is how it was pitched too often.

I predicted that Scrum would die, and I am pretty ready to call that prediction correct.


Sure, there are plenty of development shops still using Scrum today, but it isn’t really growing, and fewer and fewer organizations are following it strictly.  (I can’t back this up; it is just my feel.)

I am a pretty firm believer that most of the value of Scrum lies in its firm set of rules, which doesn’t require debate or judgment calls.  If organizations are just taking the idea of a 2-week sprint and having daily scrum meetings, they are not likely getting much of the value out of Scrum.

But the problem is that Scrum itself was never Agile.  Scrum was a defined set of processes that, if you followed it, would give you the framework you needed to actually be Agile.

To me Agile has always meant stopping the BS about software development.

To me Agile meant: stop building a plan that you know is going to fail, and stop rules-lawyering your customers to death to get them to pay for something they didn’t want just because that is what they agreed to.

To me Agile meant instead trying to develop software as honestly as possible: going in and finding out exactly what the customer wants at the moment, building that thing, and getting further feedback as openly and as quickly as possible to refine and improve.  Focusing on doing a good job, knowing that if you do, everything else will fall into place.  To me that is what Agile has always been.

So when I say where is Agile now, I am probably asking a different question than most people

I have to ask myself: are software development shops doing what I define as Agile?  Has that idea permeated the software development community as a whole?

I don’t think so, but I don’t think it has died, nor will it ever.

But, I have seen some things that make me hopeful.

I’ve seen a large amount of talk about MVP, Minimum Viable Product.

I’ve seen many start-ups launching MVPs and being successful doing so.  And I’ve seen awesome companies like Buffer using this idea to build a product that is exactly what I want, because their plan is completely based on the customer and it adapts to the customer.

Why am I saying all this?

Simple, I think that what the world thought was Agile was two things:

  1. Scrum
  2. Iterative development

For the most part the software development world has ditched Scrum, at least the only useful form of Scrum (strict, by-the-book Scrum), and adopted Scrum meetings and iterative development.  Honestly, I could do without the Scrum meetings, because although they are a good idea, no one actually does them correctly.

So, in essence, we won the wrong battle and we did so with major concessions.  But, that is ok, because what the consultants packaged up, certified people in and sold as Agile, wasn’t really Agile at all. 

Instead, the real Agile movement has been gaining traction, and it isn’t being sold by consultants; it is being championed by small start-up companies that are producing 100 times more results per employee than traditional software development shops, and they are calling it MVP.


This is where the true spirit and ideas of Agile live and thrive.  As more and more of these companies become successful, and more and more researchers dissect their results, they are going to find that these small software boutiques were the ones actually practicing Agile, because they were cutting through all the BS of software development and building exactly what the customer wanted—tools, process, contracts, plans be damned!

Posted by: jsonmez | February 17, 2013

Principles Are Timeless, Best Practices Are Fads

There is a huge difference between a principle and a best practice.

Best practices are subjective and depend largely on context, while principles are eternal and universal.


After writing The More I Know The Less I Know, I received a few emails talking about how there are absolute best practices that should always be followed in software development.

I had already intended to write about principles, but that confusion made it clear to me that there should be a distinction made between best practices and principles.  We don’t want to throw the baby out with the bath water.

Looking at some examples of best practices

First let’s take a look at some software development best practices, then we’ll contrast them to principles to better get an idea of the difference.

One of the most common best practices today in software development is the idea of unit testing.  I’ve written about my doubts about blindly following this best practice in the past, but whether or not we should follow it is not what I am concerned with today.

Unit testing is extremely contextual.  What I mean by this is that almost anyone would agree there is a certain set of circumstances under which unit testing has value.

If you work in an environment where the execution of unit tests takes a really long time, or you are developing your software in a waterfall approach where you have a big upfront design and detailed requirements, unit testing starts to lose value rapidly.

But rather than get trapped into the argument of when unit testing loses its value, it is better to address when it has the highest value—we are much more likely to agree there.

Unit testing has the highest value when we are working in agile environments where changes are being introduced into a software system rapidly and refactoring is taking place.  It also greatly increases in utility when you are able to write and execute the tests quickly, because that fast feedback loop makes it much easier to write the tests in a step-by-step approach, especially when doing TDD.

There are plenty of other best practices that have fallen out of favor, like heavily commenting code and documenting requirements with UML diagrams, but context also greatly played a part in the value of these practices.

When most developers wrote very short variable and method names, comments were really important.  Before Agile processes became prevalent, getting detailed requirements upfront was critical.

But, most best practices are good!

Yes, you are right, most best practices do apply pretty broadly and are generally helpful in a large number of different contexts.

For example, it is considered a best practice to use a source control system and it doesn’t seem like there are many situations where this wouldn’t be the case.

So doesn’t that make it a concrete rule or a principle?

No, it is still too specific to be generally applied in all cases, and the act of putting your code in source control does nothing by itself to improve the quality of your software or software product.

If you were to blindly follow any best practice and not apply that best practice in a way that brings out the underlying principle, you would be very unlikely to actually receive any benefit.

You see, most best practices are actually derived from universally applicable principles that never change.  That is why most best practices are good.

The problem is applying the best practice itself in no way assures the benefit of its underlying principle.

To put it plainly, there is something greater at work that makes it a good idea to check your code into a source control system.  It is entirely possible to follow the action, but completely miss the spirit of the action.

More and more today, I see software development teams that are:

  • Writing unit tests
  • Using continuous integration systems
  • Using source control
  • Having Scrum meetings
  • Pair programming
  • Using IoC containers

Yet they are getting little to no benefit from it.  Just a bunch more pain and hoops to jump through.  The reason is simple…

It’s not the best practice that is effective, it is the principle behind the best practice

Principles are everywhere.  They apply in all aspects of our life.  You cannot go through the day without being affected by the results of hundreds of different principles that have a constant influence on your life, just like the law of gravity does.

Gravity is actually a great way to understand principles.  As far as we know, it is a universal force that is always in effect.  It is impossible to escape the law of gravity, wherever you go in the universe it affects you.

Principles are like laws of nature except bigger.  Principles are more like the laws of reality.  Even though you may not be able to describe them fully or understand how they work, they always work.

Take for instance, the law of the harvest.  Most people are familiar with this particular principle.  It basically goes like this.

You reap what you sow.

How universal is this truth?  How can anyone avoid it?  How many times have you found yourself subject to this inescapable law about how reality works?

Many software development best practices are actually based on this principle.  Think about best practices that have you make efforts to improve the quality of software early on in the process.

TDD, or test-driven development, is one such best practice.  The basis of TDD is to introduce quality into the software development process as early as possible, so that the finished product is better.

If you apply the practice of TDD without understanding this principle, you are just following the motions and you won’t actually gain the benefit of the practice.

If you can’t understand at some level that the point of doing TDD is to sow some good seeds in your software that you will harvest later on, you won’t be writing the right kind of tests.

There is nothing magical about writing tests before writing code, but there is something valuable in purposely investing in upfront quality with the end goal of getting a big yield on that investment in the right season.
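To make the distinction between the motions and the principle concrete, here is a minimal sketch of the TDD mechanics in Python (a hypothetical `slugify` example of my own, not from the post): the test is written first and encodes the quality you are investing in, so it keeps paying off on every later change.

```python
# Step 1 (red): write a failing test first -- it states the intent
# and the quality bar before any implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the simplest code that makes the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_slugify()  # the test now passes, and guards the behavior from here on
print("ok")
```

The seed you sow is not the test file itself; it is the intent captured up front, which you harvest every time the test catches a regression.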

By the way, that is why I like Bob Martin’s book Agile Principles, Patterns and Practices in C#; it discusses many principles of software development that are timeless.  Books like this one and the book I have mentioned probably 10 times in this blog, How to Win Friends and Influence People, are full of principles.

Also, check it out, you just learned what Agile really is.  With principles in mind, now read the Agile manifesto.  It was never designed to be a detailed process and set of best practices for developing software, it was always meant to be a recognition of a set of principles that guide software development.

So, the next time you are arguing with someone over a best practice, or considering applying one to a project you are working on, just remember: if you don’t understand the underlying principle, no amount of ceremony and procedure will bring the smallest benefit.

If you like this post don’t forget to Follow @jsonmez or subscribe to my RSS feed.

Posted by: jsonmez | February 11, 2013

We Can’t Measure Anything in Software Development

Baccarat is an interesting card game that you’ll find at many casinos.  The objective of the game is to correctly predict whether the bank or player will win a hand.


In Baccarat the scoring for a hand is very simple: add up all the cards at face value, with face cards worth 10, and count only the ones digit of the total.

6 + 7 + J = 23 = 3

A + 4 = 5

The highest possible hand is 9 and whoever has the highest hand wins.  If the player and banker have the same hand, it is a tie.

I won’t go into the details of how the number of cards drawn is determined, but if you are interested you can find that information on Wikipedia.  Basically, you end up having pretty close to a 50 / 50 chance of either the player or banker winning a hand.  (Of course the house edge is still about 1.06% in the best case.)

The interesting thing about Baccarat though, is that despite the odds, despite common sense, despite the understanding that the game is completely random, people will still sit there and record every single hand and score trying to use it to look for patterns to predict future results.

These poor deluded souls actually think they are measuring something on these score cards, as if what happened in the last hand will in any way affect what will happen in the next hand.

After many years of trying to find the secret formula for measuring software development activities, I’ve come to the conclusion that trying to measure just about any aspect of software development is like trying to measure the odds of future Baccarat hands based on previous Baccarat hands.

Why we want to measure software development

It’s understandable why we want to measure software development—we want to improve.  We want to find out what is wrong and fix it and we want to know when things go wrong.

After all, who hasn’t heard the famous quote:

“What gets measured gets improved.”

Don’t we all want to improve?

Somehow we get stuck with this awful feeling that the opposite is true—that what doesn’t get measured doesn’t get improved.


And of course we feel guilty about it, because we are not doing a good job of measuring our software development practices.

Just like the avid Baccarat gambler, we want to believe there is some quantifiable thing we can track, which will give us information that can give us the edge.

Sometimes the reason for wanting to measure is more practical (some would say sinister): we want to evaluate the individuals on our team to see who is the best and who is the worst.

If we could figure out how to measure different aspects of software development, a whole world of opportunities would open up for us:

  • We can accurately give customers estimates
  • We can choose the best programming language and technology
  • We can figure out exactly what kind of person to hire
  • We can determine what kind of coffee produces the best code

How we try

I’ve been asked by many managers to come up with good metrics to evaluate a software development team.

I’ve tried just about everything you can think of:

  • Lines of code written
  • Bugs per developer
  • Bugs per line of code
  • Defect turnaround time
  • Average velocity
  • Unit test code coverage percentage
  • Static analysis warnings introduced
  • Build break frequency

I’ve built systems and devised all kinds of clever ways to measure all of these things.

I’ve spent countless hours breaking down backlogs to the smallest level of detail so that I could accurately estimate how long it would take to develop.

I’m sure you’ve probably tried to measure certain aspects of software development, or even tried to figure out what is the best thing to measure.

It’s just too hard

No matter what I measure or how I try to measure it, I find that the actual data is just about as meaningless as a notebook full of Baccarat hands.

One of the biggest issues with measuring something is that as soon as you start measuring it, it does start improving. 

What I mean by this is that if I tell you that I am going to start looking at some metric, you are going to try and improve that metric.  You won’t necessarily improve your overall productivity or quality, but you’ll probably find some way—intentional or not—to “game the system.”

Some managers try to get around this issue by just not telling the team what they are being measured on.  But, in my opinion, this is not a good idea.  Holding someone accountable to some arbitrary standard without telling them what it is, is just not very nice at all, to put it mildly.

But really the biggest reason why it is too hard to measure aspects of software development, is that there are just way too many variables.

  • Each software development project is different
  • Each feature in a project is different
  • Software developers and other team members are different
  • From day to day even the same software developer is different.  Did Jack’s wife just tell him she was cheating on him?  Did Joe just become obsessed with an online game?  Is Mary just sick of writing code this week?
  • As you add more unit tests the build time increases
  • Different team members go on PTO
  • Bob and Jim become better friends and chat more instead of work

The point is everything is changing every day.  Just about every aspect of software development is fluid and changing.

There is not one metric or even a set of metrics you can pick out that will accurately tell you anything useful about a software development project.  (At least I have never seen one at any software development shop I’ve ever worked at or consulted for.)


If you were building widgets in a factory, you could measure many qualities of the widget-making process, because much of it would be the same from day to day.  But with software development, you are always exploring new territory, and a thousand different variables concerning how you are developing the software are changing at the same time.

Measuring without measuring

So am I basically saying that metrics in software development are completely worthless and we shouldn’t bother to track anything?

No, not exactly.

What I am saying is that trying to use metrics in the same way we measure the average rainfall in a city, or a runner’s pace improvement by looking at its average over time, doesn’t really work in software development.

We can track the numbers, but we can’t draw any good conclusions from them. 

For example, say you track defects per line of code and that number goes up one week.  What does it mean?  Any number of things could have caused that to happen, or it could just be a totally random fluke.  You can’t really know, because there isn’t a knob you can turn and say “ah, I see we turned up the coffee bitterness factor to 3 and it resulted in more bugs.”  Instead there are 500 knobs and they all changed in random directions.
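The randomness alone is enough to produce a "trend."  A tiny simulation makes the point (the numbers are invented for illustration, not real project data): even when nothing about the team changes, the week-to-week metric still bounces around.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Ten weeks in which nothing about the team changes: each of ~5,000 new
# lines per week independently has a 0.2% chance of containing a defect.
weekly_rates = []
for week in range(10):
    loc = 5000
    defects = sum(random.random() < 0.002 for _ in range(loc))
    weekly_rates.append(round(defects / loc * 1000, 1))  # defects per KLOC

# The rate moves up and down even though nothing underlying moved.
print(weekly_rates)
```

Stare at those numbers long enough and you will "find" a cause for every bump, just like the Baccarat player finds patterns on his score card.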


So, I am saying: don’t look at how the numbers of any particular metric move from day to day or week to week and expect that it means anything at all; instead, look for huge deviations, especially if they are sustained.

If all of a sudden your average team velocity dropped down to almost nothing from some very high number, you won’t know what caused it, but you’ll know that it is much more likely that there was one single knob that got cranked in some direction and you’ll at least have some idea what to look for.
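One way to encode “ignore the wiggle, flag the cliff” is a simple check against the historical baseline.  This is a hypothetical sketch (the function, window, and threshold are all my invention, purely illustrative):

```python
def flag_sustained_drop(velocities, window=3, threshold=0.5):
    """Flag when the last `window` sprints all fall below `threshold`
    times the average of the sprints that came before them."""
    if len(velocities) <= window:
        return False
    history, recent = velocities[:-window], velocities[-window:]
    baseline = sum(history) / len(history)
    return all(v < baseline * threshold for v in recent)

# Normal sprint-to-sprint noise is not flagged...
print(flag_sustained_drop([40, 35, 42, 38, 33, 41]))     # False
# ...but a sustained collapse is.
print(flag_sustained_drop([40, 35, 42, 38, 12, 10, 9]))  # True
```

The check doesn’t tell you which knob got cranked, only that one probably did, which is exactly as much as the metric can honestly say.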

You really have to treat the software development process more like a relationship than like a factory.

I don’t have a series of metrics I use to evaluate my relationship with my wife or my friends.  I don’t secretly count how many times my wife sighs at me in a day and track it on a calendar to determine our relationship quality factor.

Instead what I do is talk to her and ask her how things are going, or I get a more general idea of the health of the relationship by being involved in it more.

Team retrospectives are a great way to gauge the temperature of the team. Ask the team members how things are going.  They will have a pretty good idea if things are improving or slowing down and what the effectiveness level is.

Measure not, but continuously improve, yes

So kick back, don’t worry so much.  I promise I won’t tell Six Sigma that you aren’t using metrics.

Instead focus on continuously improving by learning and applying what you learn.  If you can’t notice enough of a difference without metrics, metrics wouldn’t have helped you anyway; the difference would just be lost in the variance.


Posted by: jsonmez | February 2, 2013

Leaving the Safety of a Regular Job

My routine is pretty crazy.

I get up in the morning, make my strict bodybuilding diet breakfast, and get to work at my first job by around 8:00 AM.

I’ll take a few short breaks before lunch, usually to cook some fish or chicken, in order to fit in my 6 to 7 meals a day.  (I eat the same exact thing every single day.)

At lunch time I’ll either head to the gym or out on the road for a run.

Around 5:00 I’ll be done with my work for the day at TrackAbout, and take about a 2 hour break to eat dinner and spend some time with the family before heading back to my office to start recording.

Most nights I spend from 7:30ish till 11:00-12:00 planning course work, recording, or editing videos for my Pluralsight courses.

On weekends I usually spend one day finishing up whatever I didn’t get done during the week and writing a blog post.

I’ve been doing this for just about 2 years.

But, that is about to change.

Leaving TrackAbout

February 13th will be my last day working at one of the best companies I have ever worked for, TrackAbout.


It is really difficult to leave a company that is full of so many good people.  In the two and a half years I was at TrackAbout, I cannot recall one heated argument that I have ever been in with any person at that company.  I don’t even know of anyone else having a quarrel either.  That really says a lot about the values and temperance of the employees and owners of the company.

Here are some of the awesome things I liked about working for TrackAbout:

  • Completely remote development team.  Everyone works from home.
  • No bureaucracy!  One layer of management, developers report directly to our CTO.
  • Our CTO, Larry Silverman, is highly technical.  You can’t BS him!  He knows software development and is able to make good choices that are highly relevant to the work being done.  (No death marches, no mandates from on high.)
  • Autonomy.  As long as you are doing your job, how you do it is mostly up to you.  Even what we do to some extent is decided by our teams.
  • Respect.  In the whole time I was at TrackAbout, I never was pushed to lower an estimate or questioned about how I did my job.  TrackAbout empowers its employees by trusting them and believing they are competent.
  • Flexibility.  I always found that if I thought we were doing something the wrong way at TrackAbout, I could say why and how to make it better and things would actually change.
  • Developer free time.  Every 2 weeks we get 4 hours to work on whatever project we want.

I don’t intend to make this an advertisement, but they are hiring an entry level position.  (Web and Mobile .NET Developer – Entry Level – TELECOMMUTE)

So why am I leaving then?

You might be wondering if I enjoyed working at TrackAbout so much, why I would leave.

As I said, it was not an easy decision, but my true passion—the basis of this blog—has always been to make things that seem complicated simple.  I really enjoy being able to take a complex thing and break it down so anyone can understand it.

Pluralsight got $27.5 million in funding this year, with the goal of greatly expanding their course catalog.

I realized that I needed to do everything I could to help with that goal, and that this kind of opportunity would not likely come again in my life.  For me, this represents an opportunity to independently support myself and to devote full time to doing the thing I am most passionate about: taking the complex and making it simple.

Come February 14th of this year, I’ll be devoting almost all my time to producing Pluralsight courses.

Stepping away from stability



I have to admit, it is a bit scary to not have a regular paycheck coming in.

I’ll be a completely independent author making a living off of the courses I produce.  Both exciting, and scary.

I’ve been used to getting that steady two week paycheck and having benefits provided for me, but now my fate is entirely in my own hands.

It is a step I know I had to take; I just had not planned on taking it so soon.

Where to from here?

This year will primarily be focused on Pluralsight course development and possibly a small amount of consulting.

After that, the road is unwritten.  I’ll be keeping this blog going, and I definitely plan to have a redesign this year.
