Web services need planning

I get calls all the time at home from “investment planners” and “retirement planners” and all sorts of other people who believe that I need to plan for the future.  Well, they’re not wrong, but planning needs to occur in other areas of your life as well.

Taking on my mantle of IT Philosopher, I’m here to talk to you about planning: future planning for your applications.

I understand that being able to write “Developed and implemented web services” looks good on a resume, but if your application does not need a web service, don’t give it one.  Sorry for the emphasis, but it’s needed.  Quite often I see application architectures that don’t make a lot of sense, other than the fact that they will look good on a resume.

Let’s take a look at a fictitious travel agency, “Jessop Fantasy Tours” (hey, my story, my travel agency).  One thing that probably makes good sense to spin off into a web service is “RetrieveTravelItinerary”.  Based on the name, you would assume that this web service retrieves an itinerary based on a key being passed in.  To determine whether it meets the criteria for a web service, see if it satisfies these points:

  1. There is a need for this information in multiple locations in one or more applications.

OK, there was originally going to be a list of five or six points, but, for the most part, it really comes down to the above question.  Even then I could spend another couple of thousand words just talking about the permutations and combinations of items that make up the little phrase “multiple locations”.

For instance, I don’t mean two different places on the same screen.  I don’t mean in two different functions, but in two different application functions.  (“List of Itineraries” and “Itinerary Details”).  If you plan on commercializing the usage of the function then that would qualify as multiple locations.  By commercializing the usage I don’t necessarily mean external to the organization.  You could have a function, used strictly internally, that you need to promote, maintain and elicit usage within the organization.  Let’s say that my travel agency provides clients with a list of restaurants near a location that they pick.  They can do this from within the pages where they book the travel or, because I am a nice guy and I want to drive traffic to my site, I also put this on the main page.  The email system also needs it because it will generate the itinerary and associated restaurants and send that information to an email address.
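To make the “multiple locations” test concrete, here is a minimal sketch (all of the names are hypothetical) of what such a function might look like as a classic ASP.NET web service: one operation, callable from the booking pages, the main page and the email generator alike.

```csharp
using System.Web.Services;

// Hypothetical sketch for "Jessop Fantasy Tours": a single operation that
// several distinct application functions can all call.
[WebService(Namespace = "http://jessopfantasytours.example/travel")]
public class TravelService : WebService
{
    [WebMethod(Description = "Returns the itinerary for the supplied key.")]
    public string RetrieveTravelItinerary(string itineraryKey)
    {
        // The real database lookup is omitted; this shows the shape of the
        // service, not an actual implementation.
        return "Itinerary " + itineraryKey + ": details go here";
    }
}
```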

There are valid reasons for creating web services (same function, different places) and poor reasons for creating web services (resume padding, bored).  Know which one is which and make sure that the reasons are valid.


Cancelling a Project

Inertia.  Newton described it as:  a body at rest tends to stay at rest and a body in motion tends to stay in motion, unless acted upon by an outside force.  OK, he actually said:

Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare.

But seeing as I don’t understand Latin very well I thought I would translate for you.  You’re welcome.

"Well, that’s all very fascinating, but what does this have to do with IT?"  Glad you asked.  You see, a project is much like a body in motion:  it tends to stay in motion.  Regardless of whether or not the project is required anymore or even if the target has completely changed, the project still moves forward.  There are a few skeletons of this sort in my closet, that I almost ashamed to mention.  (Almost, but not quite.)

Have you ever worked on a project that was headed nowhere, and doing it at breakneck speed?  Imagine a project where you’ve almost finished the design and the technology hasn’t even been chosen yet.  Tough to finish your design, isn’t it?  Imagine a project where business rules are changing, but the design of the project is two versions of rules behind.  Not going to be that successful, is it?  Imagine a project where the developers are asked to work overtime.  For free.  And are then told they need to work more overtime because the project is behind.  Imagine a project where the Project Manager gets promoted, and removed from the project, and yet his line managers are punished for the current status of the project.  Imagine a project where the "technical guru" is unable to comprehend basic technology, yet insists that his technology choices are sound.  Now imagine him in charge of the overall project!

All of these are reasons for a project to stop.  All of these are reasons for a re-assessment of the viability of the project.

But the project kept going.

Inertia kept the project going.  Inertia fuelled by pride and a stubborn reluctance to say "I think we need to stop".  There is no shame in stopping a project if it’s headed in the wrong direction.  There is no shame in saying "Things have changed since we started, let’s stop and re-evaluate things before we go too far".  The objective of any project should be the benefit of the organization.  Sometimes it is better for the organization to stop a project and walk away than it is to let the project continue.  Understanding the long term results of proceeding is more important than finishing the project.

(By the way, the project I was referring to was with a previous employer and should not be confused with any current or previous project of Alberta Education, Alberta Advanced Education & Technology or Alberta Learning.)

Some things should not be “added on”

When building an application there are some things that can be added on afterwards:  new functionality, better graphics and friendlier messages.  These are all things that add more value to the application.

There are some things, however, that should not be added on afterwards:

  • Error Handling.  What?  Don’t add on error handling afterwards?  No.  It needs to be done at the start.  Now, I know what some of you are saying: "But, Don, we’ll add this in if we have a problem."  Face it, every new application has problems, and if you don’t have error handling in at the beginning you are spending needless cycles trying to debug the application and you are causing me to drink Pepto-Bismol as if it were Dr. Pepper.  We recently had to help a project debug their application in the UAT environment, and they had no error handling at all, except the default ASP.NET error handling.  Thank goodness for Avicode, as it helped us pinpoint the problem quickly, just far too late in the development cycle.  (A short sketch of what basic error handling and cleanup look like follows this list.)
  • Object Cleanup.  If you create an object, kill the object.  It’s simple.  It’s so simple that even Project Managers can do it.  By not cleaning up after yourself you raise the potential for memory leaks.  And you know what that means?  Alka-Seltzer, Pepto’s cousin.  I can’t tell you the number of applications in which we recycle the component once it hits a certain limit, because the number would keep you awake at night.  (Lunesta, another cousin.)  Suffice it to say that many of our applications are forced to recycle after a certain period of time or when they exceed a certain size.
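
As a minimal sketch of both items together (the table, column and method names are hypothetical), the using blocks below dispose of the connection and command even when something goes wrong, and the catch block records enough context to diagnose the failure instead of leaving it all to the default ASP.NET error page:

```csharp
using System;
using System.Data.SqlClient;

class ItineraryRepository
{
    public static string LoadItinerarySummary(string connectionString, int itineraryId)
    {
        try
        {
            // Object cleanup: "using" guarantees Dispose is called on the
            // connection and command, even if an exception is thrown.
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Summary FROM Itineraries WHERE ItineraryId = @id", connection))
            {
                command.Parameters.AddWithValue("@id", itineraryId);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
        catch (SqlException ex)
        {
            // Error handling: record what failed and with which input, then
            // rethrow so the caller still knows the operation did not succeed.
            Console.Error.WriteLine("LoadItinerarySummary failed for id {0}: {1}", itineraryId, ex);
            throw;
        }
    }
}
```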

The scary thing is that both of these items are considered best practices for writing code in the first place.  I know the excuses: "…the project manager won’t let me do this…" or "…I don’t have enough budget to do this…" or, the one heard most frequently, "…I don’t have the time to do this…".  Very few project managers tell their staff how to code, so the first excuse is just a cop out.  As for the budget, doing these items does not add significantly to the cost of the application, and it usually makes debugging faster and easier, so the budget excuse is just that, an excuse.  As for the time: if you’re short on time, you need to do this, because it will help you.

One of the things that many Health Organizations are putting in place is prevention of disease so that there is no need to cure the disease.  Prevention is much more cost effective.  Object Cleanup is prevention, pure and simple.  When someone has symptoms that need to be diagnosed, what does the doctor do?  Perform a seance?  Guess?  Or do they use a tool to help them out?  Ever heard of an MRI or even an X-Ray?  Think of Error Handling as a tool to help you diagnose the disease faster.  It’s better than guessing.

So, object cleanup prevents problems and error handling helps diagnose problems.  So, I guess this means that I’ll be seeing more applications with these items as an integral part of the overall application or do I need to go back to the medicine cabinet?

Work Smarter, not Harder

Raise your hand if you have had someone tell you to "work smarter, not harder".  Ah, I see the majority of hands in the air.  (Careful about that.  People might think it strange for you to raise your hand in response to a line in an email.  I won’t tell anyone, though.)  So, how do you work smarter, not harder?  (i.e. increase productivity)  Yes, in the following paragraphs I am going to give you an entire book’s worth of advice, so pay attention, this is pure gold!

The premise behind "smarter not harder" is that you only spend time on the most important things and leave the "back burner" stuff until there is time to do it.

There, that’s it.  That’ll be $19.95 CDN please.  Only PayPal at the moment.

Wow, that was most … unsatisfying.  But, you know what, I think I’ve saved a lot of you $19.95.  Let’s face it, there is no silver bullet for dramatically increasing productivity.  No magic spell is going to make you dramatically more productive.  Nothing you can do right now is going to have a significant impact on your productivity in the next couple of weeks, right when your supervisor wants it most.  You can read books, attend seminars, hire personal development coaches or do a myriad of other things, but the truth is that change takes time.  If you type 30 words a minute, you aren’t going to suddenly start typing 60 words a minute because your supervisor said you should.  If you can run a six minute mile, you aren’t going to get down to a five minute mile just because a book told you that you could.

All of these things, including "smarter not harder" require practice.  A book might tell you what you need to practice.  A seminar might guide you in the right direction for general areas of improvement and a personal development coach might lay out a detailed plan, but the reality is that it all depends on you.  Without the practice, the commitment and the desire to work smarter, it isn’t going to happen.  But even if all of these things are in place, it is going to take time.

So, where does this leave all of the people telling others to work "smarter, not harder"?  Well, the odds are that they are in a supervisory position.  The odds are also in my favour that this person is experiencing a time crunch whereby the amount of work has now exceeded the capacity of the staff.  So, in an effort to increase the capacity of the staff they are asked to work smarter.  This may or may not be used in conjunction with greed ("we’ll give you a bonus if it’s done on time"), fear ("we’ll can you if it’s not done on time"), heroism ("everyone is depending on you to save their butts") or, as I’ve seen in one instance, all three approaches.

Essentially, if you are at the point where you are telling people to work "smarter, not harder", you’ve already lost.  Suck it up, realistically plan the project and either change the target date (sometimes), reduce the scope (sometimes) or add more people (dangerous, as this will also increase the effort required).  If you really want people to work smarter, then help them at the beginning of the project, not when there is a crisis.  Help them plan their work.  Help them organize their inbox.  Help them become better developers prior to you needing them to become better developers.

The Dark Side of Objects

The Dark Side of Objects?  (Luke, I am your father.) 

Sometimes you need reins on developers and designers.  Not because they aren’t doing a good job, but because if you don’t you may end up in a quagmire of objects that no one can understand.  Objects are good, but they can be overdone.  Not everything should be an object and not everything lends itself to being objectified.  Sometimes a developer goes too deep when trying to create objects.

When I was learning about objects I had a great mentor who understood the real world boundaries of objects:  when to use them, how to use them and how far to decompose them into additional objects.  Shortly after having "seen the light" with regard to objects I was helping a young man (okay, at my age everyone is young) write an application which was actually the sequel to the data entry application I mentioned in the previous note.  He needed to do some funky calculations, so he created his own numeric objects.  Instead of using the built-in Integer types he decided that he would create his own Number object.  This Number object would have a collection of digits.  When any calculation needed to be done he would tell one of the digits the operation to be performed and let that digit tell the other digits what to do.  This gave him a method whereby he could perform any simple numeric operation (+, -, *, /) on a number with a precision of his own choosing.  He spent weeks perfecting this so that his number scheme could handle integers and floating point numbers of any size.  It was truly a work of art.

And weeks of wasted time.

What he needed to do was multiply two numbers together or add up a series of numbers.  Nothing ever went beyond two decimal places of precision and no amount was greater than one million.  These are all functions built into the darn language and didn’t need to be enhanced or made better.  The developer got carried away with objects and objectified everything, even when it didn’t need to be done or, in this case, shouldn’t have been done.
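
For contrast, a minimal sketch (the amounts are invented) of what was actually required, using nothing but the language’s built-in types:

```csharp
using System;
using System.Linq;

class InvoiceTotals
{
    static void Main()
    {
        // Hypothetical line amounts: nothing exceeds one million and nothing
        // needs more than two decimal places of precision.
        decimal[] lineAmounts = { 199.99m, 1250.00m, 74.35m };

        // The built-in decimal type already does the adding and multiplying
        // that the hand-rolled Number object was re-implementing.
        decimal total = Math.Round(lineAmounts.Sum() * 1.05m, 2);

        Console.WriteLine("Total with a 5% markup: " + total);
    }
}
```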

Knowing when to stop using objects is just as important as knowing when to use objects.

Solving the Right Problem

One of the hardest things to do is to solve the right problem at the right time.

When investigating a problem you may end up looking at a wide variety of possible solutions.  Some of these solutions are quick fixes while others require a fair amount of effort to implement.  The question is, which one do you propose?

For a crisis, the quick fix is usually the right choice.  Things need to be resolved quickly and the best solution may not be able to solve the problem fast enough.  As a result the quick fix is usually chosen for Production emergencies and rushed through into Production.  Quick fixes are not meant to be permanent solutions, but in many cases they end up being permanent for a variety of reasons.

In less crisis-oriented situations, however, the best solution may actually be the resolution of a deeper, more convoluted problem that is the root cause of the issue.  Unfortunately, resolving the root cause of a problem may be a problem in and of itself.  There may be significant effort and money that needs to be spent in order to resolve the issue the way it should be resolved.  Sometimes the problem is so fundamental to the application that it almost appears you have to re-write the application to make it work as desired.  If this is the case, is this what you should propose?

As with many things in life, it comes down to a business case:  is the cost of implementing the solution less than the cost of living with the quick fix?  If this were strictly a matter of dollars and cents then the answer would be known right away.  Unfortunately the cost of living with the problem is not easily quantifiable.  How do you measure a loss of consumer confidence in terms of cost?  How do you measure consumer satisfaction in terms of cost?  In many cases only the business area affected can even hope to determine the cost.  It is our job to present the facts as we know them, the costs as we know them, and let the business decide the ultimate cost.

n-Tier environment … design

One of the biggest differences, at least in terms of perspective, between a Win32 application (aka Fat Client) and a web browser application is that the Win32 application is used by a single person whereas the web application may be used by hundreds, or thousands of people at one time.  OK, quiet down out there, let me explain.

A Win32 application is run on a desktop and is run by a single person at a time.  The application may be installed on hundreds or thousands of machines, but each machine has a single person running the application.  In this manner the workload is distributed between each workstation and the database server.  In a web based application all of that processing needs to occur somewhere.  While some of it happens on the browser, a good portion of it runs on the web server.  Therein lies the problem.

Many web applications are built in such a manner that they consume large amounts of CPU and memory in order to serve a single client.  Multiply that by the number of people who are going to be accessing the application and you have a nightmare on your hands.  Many years ago servers used to be much more powerful than desktops.  This was due to the advanced architecture used in the servers and the very expensive chips they contained.  As prices have dropped, however, this distinction has almost completely disappeared.  Indeed, a new desktop may, in some circumstances, be faster than the server with which it is communicating.  Because the average developer has a fairly powerful machine, what seems to run quickly on their desktop completely bogs down on the server when multiple people try to access it.
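
One common way to keep that per-request cost down is to build expensive, shared data once and reuse it across requests instead of rebuilding it for every user.  A minimal sketch (the lookup and its names are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical shared lookup: the first request pays the load cost, every
// later request reuses the result instead of repeating the expensive work.
public static class CountryLookup
{
    private static readonly Lazy<IList<string>> Countries =
        new Lazy<IList<string>>(LoadCountriesFromDatabase);

    public static IList<string> GetCountries()
    {
        return Countries.Value;
    }

    private static IList<string> LoadCountriesFromDatabase()
    {
        // Placeholder for the real, expensive database call.
        return new List<string> { "Canada", "United States", "Mexico" };
    }
}
```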

Our current Production servers are quite large and contain a lot of raw horsepower.  We do not have a large number of concurrent users.  I would personally be surprised if we hit 40 on any one server.  This is in comparison to a server at Google that is designed to support hundreds of concurrent users.  On-line games, such as World of Warcraft, support thousands of concurrent users.  While we don’t need to write our applications so that we can support thousands of concurrent users, we should always be cognizant of the fact that our application does not operate alone.  It needs to co-exist with itself and others.

Batch Processing … Part Two

I must be honest, I am sometimes quite surprised by the reactions my notes provoke.  For instance, the note about batch processing generated a number of “high fives” in the hallway and a couple of 500 word responses.  The response was, as I expected, all over the place, with some people telling me I was not in my right mind while other people were saying that this is what they have always believed.  Never one to let the flames of disagreement burn out, I thought I would list my personal rules as to whether or not something should be done in batch.  This is not something you need to follow, but it may help illuminate my comments from the other day.

Interfaces with other systems.  I originally had the word “external” but I replaced it with “other”, as different systems internal to an overall application may not be able to handle a continuous stream of data.  For instance, our interface to IMAGIS is done through a file transfer that happens once a day.  To ensure that we get as much as possible into the system, we do the transfer close to the cut off time that we have arranged with the IMAGIS team.  In this case the target system is just not designed to handle us sending it information multiple times per day.  Now, it may be the case that there are other ways to communicate with IMAGIS that we have not utilized, but with the current set up, we need to do it on a scheduled, batch basis.

Reports.  This seems like a logical item to do in a batch process.  Whether this batch process is done via another tool, such as ReportNet scheduling the report, or whether it is scheduled via Windows Scheduler, reports are good batch residents.  However, in my mind a report does not process and create data, it merely reports on the data.  If your “report” creates and stores data then that portion should be separated out and done in an asynchronous manner.  A report, any report, should be able to be generated quickly from data that is already stored in the database.  In addition, in many cases, reports do not need to be run on a scheduled basis, as long as the report can be generated on demand and will contain the identical content as if it had been generated on a previous day.  For instance, a report on the applications that were approved on July 12th would list all of those approvals, even if one of the applications was subsequently denied.
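
To make that last point concrete, here is a minimal sketch (the table and column names are hypothetical) of a report that only reads data already stored, keyed by the approval date, so it can be generated on demand and returns the same content no matter when it is run:

```csharp
using System;
using System.Data.SqlClient;

class ApprovalReport
{
    public static void PrintApprovals(string connectionString, DateTime approvedOn)
    {
        // The query reads existing rows only; it creates and stores nothing.
        const string sql =
            "SELECT ApplicationId, ApplicantName FROM ApplicationApprovals " +
            "WHERE ApprovedDate = @approvedOn";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@approvedOn", approvedOn.Date);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader["ApplicationId"], reader["ApplicantName"]);
            }
        }
    }
}
```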

And that’s pretty much it.  It’s a very short list of things that need to be done in a batch window.  For me it all comes down to this:

It is our job as IT Professionals, however, to not just do what our clients say, but to educate them as to what can be done, to show them new opportunities, and to give them something better than what they had, not just something newer.

The n-Tier world … hardware

Does an n-tier hardware environment actually make things better?  I was asked this question recently and, to be honest, it made me pause and reflect on the promises of the n-tier world versus the reality of an n-tier world.

The n-tier push really started gaining hold, again, in the ’90s when the Internet was young and foolish and so was Al Gore.  As a result, most people associate the idea of an n-tier environment with one that is web browser based.  While this is not always the case, the majority of applications currently being developed are web based, so we will run with that and assume, for the purposes of this discussion, that our n-tier environment consists of a web browser communicating with a web server that in turn communicates with a database server.

With regard to the web browser, the idea was that with a “light” front end that was downloaded every time you requested it, you did not need as much processing power on the desktop (Internet Computer, anyone?) and you could make changes to the application without having to distribute an application to hundreds, thousands or even millions of customers.  This has proven to be a valuable and worthwhile objective of browser-based deployments and allows for quicker changes with less client impact.

Separating the main logic on to a web server or cluster of web servers (see previous notes about this) then allows the developer to change the application in only a limited number of locations.  While this has allowed the developer to deploy applications quickly, the problem here lies in the fact that developers build the application as if it were the only thing running on the server, when in reality it is usually one of many applications.  Resource contention (memory, CPU) usually means that going to a cluster of servers is a requirement.  It is also a common misconception that adding more servers will make the application run faster.  Adding more servers allows you to support more simultaneous users, but does not necessarily make the application run faster.  As a result, a poorly performing application will perform just as poorly on one machine as on a cluster of 20, although you can annoy more people with a cluster.

By placing all of the database access on one machine, or a cluster of machines, there are fewer connections to the database that need to be monitored and managed on the database server.  This reduces memory usage and CPU usage and allows the database to concentrate on serving data to clients.  Unfortunately, this is where one of the biggest problems in the n-tier world lies.  Developers need to optimize their code when accessing the database.  Whether it is reducing locks and deadlocks, reducing transaction length or simply designing better applications and databases, the database back end, regardless of whether it is an Intel box, a Unix server or an OS/390 behemoth, can only handle so much work.  Web servers can be clustered, but in order to get more than one database server to service requests against the same data you need a much more sophisticated and much more complicated environment.  Just adding another server won’t cut it, as the database server is constrained in terms of the memory and CPU it can use.
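
One small but recurring example of that optimization is keeping transactions short.  A minimal sketch (the table and names are hypothetical) that does the slow work first and holds locks only for the write itself:

```csharp
using System;
using System.Data.SqlClient;

class OrderWriter
{
    public static void SaveOrder(string connectionString, int customerId, decimal total)
    {
        // Do the slow work (calculations, calls to other services) before
        // touching the database, not inside the transaction.
        decimal roundedTotal = Math.Round(total, 2);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (var transaction = connection.BeginTransaction())
            using (var command = new SqlCommand(
                "INSERT INTO Orders (CustomerId, Total) VALUES (@customerId, @total)",
                connection, transaction))
            {
                command.Parameters.AddWithValue("@customerId", customerId);
                command.Parameters.AddWithValue("@total", roundedTotal);
                command.ExecuteNonQuery();

                // Commit immediately; locks are held only for the insert itself.
                transaction.Commit();
            }
        }
    }
}
```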

So, has n-tier lived up to its promise?  Sort of.  The web browser side:  yes.  The web/application server side:  mostly.  The database side:  not as much as expected.  The problem is not the technology, but rather the people.  We have created an infrastructure that can do great things.  What we need to do now is teach people how to create great things with that infrastructure.

Hot Fixes

What is a hot fix?  This question seems to be coming up  more often and I think it needs a bit of discussion in this arena.  Definitions of hot fix that I have seen include:

  • A hotfix is code (sometimes called a patch) that fixes a bug in a product. (Source)
  • Microsoft’s term for a bug fix, which is accomplished by replacing one or more existing files (typically DLLs) in the operating system or application with revised versions. (Source)

I think we can all agree that a hot fix is something that fixes a bug.  The question now arises as to the size of the patch.  The second definition is important in this aspect as it talks about replacing one or more DLLs.  So, a hot fix will fix a bug by replacing an indeterminate number of DLLs.  Darn it, I’ve used that word again: replacing.  That happens to be the crux of the problem that we are experiencing.

Replacing DLLs does not mean uninstalling the entire application and installing a new version of the application which has the bug fix inside.  That is simply a re-install of the application.  A hot fix takes the DLLs that were changed, packages them up and installs them on the affected machines.  This is standard practice used by Microsoft, IBM, Sun, Oracle, Hewlett Packard, PeopleSoft, SAP, Symantec, Trend Micro, Adobe, Electronic Arts, Intuit, AutoDesk, Check Point and, quite literally, millions of other companies.  You don’t re-install Windows every time there is a hot fix for Windows.  You don’t re-install your anti-virus software every time there is an update to the software.  You don’t re-install your entire application because there is a spelling mistake on a page.
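
In code terms, a hot fix amounts to something like this minimal sketch (the folder names are hypothetical): copy only the revised DLLs over the installed ones and leave the rest of the deployment alone.

```csharp
using System;
using System.IO;

class HotFixApplier
{
    public static void Apply(string hotFixFolder, string installFolder)
    {
        foreach (string patchedFile in Directory.GetFiles(hotFixFolder, "*.dll"))
        {
            string target = Path.Combine(installFolder, Path.GetFileName(patchedFile));

            // Replace the existing DLL; nothing else in the install changes.
            File.Copy(patchedFile, target, true);
            Console.WriteLine("Replaced " + target);
        }
    }
}
```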

If you are asking for a migration to a Shared Environment and you are essentially asking us to install a new version of the application, don’t call it a hot fix, as you are disagreeing with the vast majority of the IT world and with the definition that the Deployment Team uses for a hot fix.  A hot fix replaces DLLs.  By packaging everything up into a new install of the application you are potentially including other changes in your fix that are not related to the bug you are trying to fix.