Rule #1 — Redone

I love it when I’m wrong, because I learn things. As David Woods pointed out yesterday, my example of the minimal effort you should put into a Catch was flawed. My code contained the following:

    Catch ex As Exception
        Throw ex
    End Try

WRONG

The problem with my post is that all of the stack information is lost. If you follow the example that David Woods has on his blog you will see what I mean. The correct code should have been:

    Catch ex As Exception
        Throw
    End Try

Or, if you don’t like typing, this minimalist approach:

    Catch
        Throw
    End Try

In the end, though, the point that needs to be made is that you need to either handle the exception yourself (handle does not mean ignore) or bubble it up to see if someone else wants to handle it. In addition, you need to put information into the exception that will help in debugging the problem.

The Stakeholder Registry has extensive exception information embedded within it. When its code experiences something like a SQL timeout, it displays the stored procedure that was being called and even the parameters that were being passed in. This is tremendously helpful in determining what the problem might be.
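I don’t have their code in front of me, but a sketch of that style (the stored procedure and parameter names here are invented) looks something like this:

    Imports System
    Imports System.Data
    Imports System.Data.SqlClient

    Module ExceptionContextSketch
        Sub LoadStakeholder(ByVal stakeholderId As Integer, ByVal conn As SqlConnection)
            Try
                Dim cmd As New SqlCommand("usp_GetStakeholder", conn)
                cmd.CommandType = CommandType.StoredProcedure
                cmd.Parameters.AddWithValue("@StakeholderId", stakeholderId)

                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        ' ... work with the row ...
                    End While
                End Using
            Catch ex As SqlException
                ' Record what was being attempted, then bubble the original exception up.
                Throw New Exception( _
                    String.Format("usp_GetStakeholder failed for @StakeholderId = {0}", stakeholderId), _
                    ex)
            End Try
        End Sub
    End Module

When that wrapped exception finally gets logged or displayed, whoever is debugging knows the stored procedure and the parameter values that were in play, not just that a timeout occurred.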


Rule #2

I mentioned that Rule #1 was "If you catch an exception, you better record it or re-throw it." Today I will talk about the next rule in the series.

Rule #2: If you throw or catch exceptions, you should use the finally clause to clean up after yourself.

When you throw or catch an exception, one of the biggest things that happens is that you start to move outside the boundaries of the flow you had in place. OK, what this really means is that when you throw or catch, you skip a lot of code. If you are using resources that are not simple .NET objects, you need to clean up after yourself. The last part of the Try … Catch … Finally block is the Finally clause, and this is what helps you clean up.

The code that you put in the Finally clause is always executed at the end of the Try … Catch block. Whether you re-throw the exception, throw a new exception or, regretfully, ignore the exception, the code in the Finally clause will be executed. This gives you the opportunity to clean up after yourself by disposing of those resources and objects that are not simple .NET objects: things like file handles, SQL Server connections, or other resources not necessarily handled by managed .NET code.
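As a minimal sketch (the connection string and stored procedure name are placeholders), the pattern looks like this:

    Imports System
    Imports System.Data
    Imports System.Data.SqlClient

    Module FinallySketch
        Sub UpdateSomething()
            ' The connection is a resource that managed code will not clean up promptly on its own.
            Dim conn As New SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI")
            Try
                conn.Open()
                Dim cmd As New SqlCommand("usp_UpdateSomething", conn)
                cmd.CommandType = CommandType.StoredProcedure
                cmd.ExecuteNonQuery()
            Catch ex As Exception
                Throw New Exception("usp_UpdateSomething failed", ex)
            Finally
                ' This runs whether the Try succeeded, the Catch re-threw, or a new exception was thrown.
                conn.Dispose()
            End Try
        End Sub
    End Module

A Using block gives you the same effect for objects that implement IDisposable, but the Finally clause is the general-purpose tool.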

This does a lot of things, not the least of which is ensuring that your application does not run out of those resources!!! So, like I always tell my girls when they leave the kitchen table, "clean up after yourself, because no one else is going to do it for you."

Try and Catch

Please note that there is an error with this entry. See Rule #1 — Redone for details.

Exceptions are powerful tools that give the developer the ability to understand what is going on with their application, particularly when there is a problem. Sadly, many programmers do not use this feature, or implement it so poorly as to provide no meaningful information.

Rule #1: If you catch an exception, you better record it or re-throw it.

What does this mean? Take a look at the following code:

    Catch ex As Exception

    End Try

What this code does is catch the exception, then let it get away. It’s like putting cheese on a mouse trap, but then gluing down the trigger so that the mouse can get away. I mean, seriously, what are you thinking? At the least, at the very least, you should have the following:

    Catch ex As Exception
        Throw ex
    End Try

This at least re-throws the exception so that something above you can make a decision as to what needs to be done. If I had my druthers, however, it would be more like this:

    Catch ex As NotImplementedException
        Throw New Exception("Blow_Up3 does not implement that functionality", ex)
    Catch es As Exception
        Throw New Exception("Unforeseen error in Blow_Up3 trying to access Don's bank account", es)
    End Try

The method that initially gets the exception has so much more information available to it than the calling method does that it would be a shame to ignore that information and make life more complicated for everyone.

DataSets

DataSets are an amazing construct. They allow you to pass back an entire result set, regardless of how many tables it contains, and let the caller work with the data in whatever fashion they want. Relationships can be established between tables, queries can be run against the dataset, and updates can be made to the tables.
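As a quick sketch of those features (the table, column, and query names here are invented):

    Imports System
    Imports System.Data
    Imports System.Data.SqlClient

    Module DataSetSketch
        Function LoadCustomersAndOrders(ByVal conn As SqlConnection) As DataSet
            Dim ds As New DataSet()

            ' Two result sets, two tables, one DataSet.
            Dim customers As New SqlDataAdapter("SELECT CustomerId, Name FROM Customers", conn)
            customers.Fill(ds, "Customers")

            Dim orders As New SqlDataAdapter("SELECT OrderId, CustomerId, Total FROM Orders", conn)
            orders.Fill(ds, "Orders")

            ' Relate the tables so child rows can be navigated in memory.
            ds.Relations.Add("CustomerOrders", _
                ds.Tables("Customers").Columns("CustomerId"), _
                ds.Tables("Orders").Columns("CustomerId"))

            ' Query the in-memory data without going back to the database.
            Dim bigOrders() As DataRow = ds.Tables("Orders").Select("Total > 1000")
            Console.WriteLine(bigOrders.Length & " orders over 1000")

            Return ds
        End Function
    End Module

That is genuinely convenient, which is exactly why the next few paragraphs matter.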

You remember the saying “If something is too good to be true, it is”? Guess what? It’s true with DataSets as well.

First of all, let me confess up front that I am not now, nor have I ever been, a very big fan of datasets, but probably not for the reasons you’re thinking. I trust you will keep this in mind as you read.

DataSets provide a wealth of functionality to the developer, but that functionality comes at a significant cost. DataSets hide much of the complexity associated with databases, particularly in the area of updating and populating fields on a web page or a report. While this hiding is, in some respects, quite welcome, it lulls the developer into complacency and prevents the developer from truly understanding what is happening. I cannot tell you how many times over the past few years I have seen developers pass around hundreds of megabytes of data in a single dataset simply because they could. DataSets make it easy to be blissfully unaware of the consequences of what you are doing because they hide things so well.

When I was younger I learned IBM 360 Assembler, both at the U of A and at NAIT. That low-level language made me very conscious of the amount of data I was using and the most effective ways of manipulating it. Even earlier versions of Visual Basic (1.0 through 5.0) were fairly good at making you aware of what you were doing and of its impact. As the complexity of the lower levels of programming has been covered up and hidden by successive updates to the languages and frameworks that support them, the ability of developers to understand the impact of what they are doing has diminished. Today we have cases where hundreds of megabytes or even gigabytes of data are routinely moved from process to process because it is so simple to do so. The underlying impact, however, can bring a server to its knees.

While I’m not advocating that everyone learn IBM 360 Assembler, I am advocating that developers fully understand the objects they are using and the impact of using those objects. If you aren’t sure of the impact of what you are doing, experiment a little more, read a little more, learn a little more. The more you understand what you are doing with the languages you are using, the more productive you will be.

Failure

Being old, excuse me, older than many of you gives me an advantage over you in a number of ways. I will be able to get the senior rate at the movies before you and I will be able to get discounts at hotels before you. What it has also done is given me the opportunity to fail more often than you.

One of the best teachers in the world is failure, as it shows you what went wrong and what not to do. All you need to do is learn from that failure and try to prevent the same situation from happening again. As someone who has been in this field for 20 years I have experienced a lot of failures, both on my part and on the part of those with whom I’ve worked. Each failure has been a learning experience that has given me some piece of knowledge, so that I am able either to not fail in the same manner again or at least to recover faster.

Unfortunately, failure is often seen as a bad thing, and from an overall project perspective it most certainly is a bad thing. However, small individual failures are not something that should be frowned upon, but embraced. Scott Berkun, in The Art of Project Management, wrote:

Courageous decision makers will tend to fail visibly more often than those who always make safe and cautious choices.

This applies to everyone who makes decisions, from the project manager down to the developer. If a decision was made that was, at the time, the right decision, celebrate the decision, regardless of whether or not it was a success. If the decision was bad, educate the decision maker so that they can learn from their mistake. (Educate does not mean punish.) By telling people you expect them to be perfect and that you do not expect any problems, you are telling them to play things safe and not try anything new. Mankind didn’t go to the Moon by playing it safe. IBM played it safe with the personal computer and lost. Risks need to be taken at certain points, and we need to train all of our staff, from developers to project managers, to recognize when failure and risk are a good thing.

Writing Enough Code

Test Driven Development talks about writing enough code to pass the test. XP (Extreme Programming) talks about writing just enough code to meet the requirements. In both cases the argument is that you should not code for things that are not required, nor should you code for possibilities that may or may not occur.

This can easily be taken to extremes, however. I worked with a young man who took the idea of “just enough code” and went too far with it. He was writing an application that was designed to accept yearly reports. It was known from the start that the reports needed to be stored by year, searched by year, printed by year, etc. However, in the first release there was no historical data that needed to be kept; everything was in the current year. So, guess what? He omitted years from everything he did, database and code, because “it wasn’t necessary for this release of the application”.

Yes, write just enough code, but also use common sense. If you know for a fact that you are going to need to do something in a future release, don’t ignore that fact just because the current release doesn’t require it. If you know that you need to handle multiple years, design and code for that early on, even if the year is always going to be the same. Writing just enough code and common sense are not mutually exclusive ideas, at least, for most people.
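A hedged illustration of the point (the class and field names are invented): even if release one only ever stores the current year, carrying the year from the start means the schema and code do not have to be reworked when the history finally shows up.

    Public Class YearlyReport
        ' Present from release 1, even though every report is in the current year for now.
        Public ReportYear As Integer
        Public Title As String
        Public SubmittedOn As Date
    End Class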

Branching Guidance

Sometimes there just isn’t a shortcut to the right answer. You know what I mean: instead of researching the answer yourself, you lean over, talk to your buddy for 30 seconds and he gives you the answer you need. In many cases this works when you’re trying to solve a silly little problem or you just can’t remember the name of the runner-up in last year’s American Idol.

Other problems, unfortunately, require that you understand the background behind the solution before you can actually understand the solution itself. String theory is like this. So are some aspects of quantum mechanics. Most business problems don’t fall into this level of complexity, although I have seen the odd case where PhDs would be confounded by the sheer complexity of what has been engineered. Not necessarily what was required, just what was engineered.

In some cases there is some simple help, but it does require a bit of reading. I was recently asked for information about when to do branching and exactly how it should be done. In this case, I went to somebody who needs to do this on a frequent basis: Microsoft. Indeed, the information at CodePlex was excellent in terms of its understanding of the problem and the potential solutions. For those of you who think you understand how branching should be done, and for those of you who are at a loss, I recommend this document as an excellent source of information from which you can retrieve the bits and pieces that are of particular interest to you. It is not a light read, as the amount of information it contains is quite voluminous (approximately 28 pages), but it gives you some interesting insight into an arcane subject.

Soccer Referee

Paraphrasing is a lost art in the IT world, but it is an art that really needs to be emphasized more. When I was younger I was helping a friend referee a soccer match (football to you foreigners). He wanted me to be a linesman, and he told me, “When the red team kicks the ball out of bounds I want you to point the flag in the direction that the blue team will be moving when they get the ball on the throw-in.” This made perfect nonsense to me, as it seemed much too complicated, so I paraphrased it: “You mean, point the flag at the team that kicked the ball out?” This confused him for a moment as he struggled to reconcile what I had repeated back to him with what he had told me, but he gradually agreed that the impact would be the same.

Sometimes when we write up specifications for an application we are too deep in the details and too aware of the intent, but not fully aware of the impact. We need to step back, take a look at what we have said or written, and see if we can rephrase it to make it simpler, yet still retain the same meaning. I do this quite often when writing these one minute comments. You should see some of the stuff that I write and throw out. (Then again, you have seen the stuff that I’ve gone ahead and sent out.) For instance, I’m currently writing this note because the one on testing just doesn’t make any sense when viewed outside its original context, a context most readers will not have.

The same thing is true of specifications. Not everyone reading the specification is going to have the same background as you or be operating with the same context. Not everyone is going to be an expert in the business area involved. (Or in the subtleties of being a soccer referee.) What you write in a specification needs to be easy to understand, even for those who are unfamiliar with the business process. If it isn’t easy to understand, then you need to step back, clear your mind, and try again. If it is hard for someone who knows the business to write the specifications, imagine how hard it is for someone not familiar with the application to understand what you have just written.

Best Practices

There are a ton of best practices floating out there in the infamous nether of cyberspace. Many of these interesting tidbits have actually come to roost in the minds of architects, programmers, testers, and, yes, even Project Managers. The question is, how do these best practices get communicated out to everyone?

Organizations sometimes do this through the creation of standards and templates for people to follow. These can be advantageous in that they prescribe certain actions that must occur. Standards, however, have some disadvantages in that the time from the creation of the standard to its implementation can be quite long. In other cases the standard provides either too much or too little guidance and subsequently causes more confusion than if the standard had not existed. Enforcement is also tricky to implement, as the grandfathering of old projects needs to be weighed against the benefit of following the standard.

Some organizations produce lighter weight guidelines for people to follow. A guideline is a watered-down standard in that it has not followed the same rigorous approval process, but is still considered something that should be followed when possible. Guidelines, however, because they are not standards and subsequently not enforced, do not always provide the structure that is necessary to take full advantage of the material in question.

At the far end of the scale are those organizations that publish standards in a much more informal manner. The mere act of a certain group publishing something gives that tidbit of information the status of an organizationally approved standard that must be followed and will be enforced. This method presupposes that the group publishing the work is granted sufficient authority to make those decisions on behalf of the organization. Sometimes this authority is granted on a wide scale (all IT standards) or on a very narrow scale (all Visual Basic Programming Standards), depending upon the comfort level that management has with the group.

All of these methods, however, share one key thing: communication. Even on a project by project basis these communication mechanisms can be used to disseminate project standards to the rest of the team. Whether this is done through formal documents (standards), informal guidelines, or by having the Application Architect send out emails or write a blog, any method of communicating standards and guidelines is better than none, so get those best practices out of your head and on to paper (electronic or wood pulp) and let other people benefit from your experience.

High Performance Teams

In another life I was busy researching the idea behind “High Performance Teams” (HPT). These teams are not NASCAR fans, nor are they hooked on amphetamines. Instead, they are a group of individuals who work with each other really well and outperform other similar groups in terms of the quality of their work and the speed with which the work gets done. You’ve seen these teams in hockey, where the coach will normally put certain players together and keep them together throughout the season with few changes.

In IT, however, the concept of a high performance team does not always seem to be understood or even implemented in many areas. A team can be as small as two people, or it can be much larger, but there are some key traits that all of these teams share. (OK, here is where I differ from conventional wisdom so if you want you can tune out, even though you may be missing some really cool stuff.)

  • Trust. Perhaps the most important trait is that the members of the team trust each other to make the right decisions or at least a decision that can be lived with by everyone.
  • Communication. Team members communicate with each other effectively. Different people understand things in different ways. Some people like metaphors, others like analogies, while still others love diagrams. In an HPT the appropriate mechanism is used at the right time to maximize the effectiveness of the communication.
  • Commitment. Each team member knows that every other member of the team is just as committed as they are to producing a high quality product.
  • Continuous Improvement. An HPT is not satisfied with the status quo; its members want to do the next job better than they did the last one, and the one before that, by continually improving how things are done.

Some organizations are not ready for HPTs as it means setting a group up as being “special”. Others are not interested as they believe, rightly or wrongly, that if people just follow the process everyone would be part of an HPT. Some larger projects do implement this concept within the overall project and find that the HPT is extremely productive and crucial to the success of the project.

It may not be your cup of tea, but at least you’re aware of the possibilities.