Everyone Else is Better at My Job than I Am. And Yours Too.

I just came across the article ‘Syndromes’ Drive Coders Crazy at Business Insider. It’s an interesting piece about fear and dissatisfaction among high-tech workers, based on the notion that as individuals we’re not good enough, and that “real programmers” do it for the love of coding. So we’re driven to work crazy hours, and job satisfaction suffers.

We’ve known for a long time that software engineers (and, I suspect, other knowledge workers) are less productive when constantly working 60-hour weeks. The article makes that point again.

Posted in Uncategorized | 1 Comment

Good Practices Reduce Bugs Better than Good Brains

Mostly we think that avoiding bugs is the result of a good brain: if there’s a bug, some programmer missed something. That’s certainly one source of bugs. But in my experience many, if not most, bugs can also be traced to how the code is written. The format and structure of the code can make it hard for human eyes, even trained eyes, to see a problem sitting right in front of us.

The latest iOS security bug is the talk of the town at the moment, and it’s a doozy. It left users extremely vulnerable to attack, and the code was based on open source code. What makes it more poignant is that it appears to be a very small error in the code, one that is at the same time easy to understand and difficult to catch. It’s a good example of how the lack of simple coding standards can cause big problems, because it’s easier for the problem to “hide” within the structure of the code.

Below I’ve used this as an example to show how a couple of simple coding standards can help prevent this kind of problem.

An Extraordinary Kind of Stupid is a great article about the bug. The fault is placed on some programmer who missed the fact that there were two identical lines:

OSStatus err; 

if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
   goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
   goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
   goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
   goto fail;
   goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
   goto fail;
   return err;

The result of that duplicate “goto fail” is that we jump to the code after the “fail” label before we make the last error check. All the articles I’ve seen about this problem essentially say “the programmer should have noticed the identical line, which is always executed because it’s actually outside of the if statement”.

True. But this is like saying a driver should have avoided an accident even though the traffic signs were wrong and the side mirror was broken. Yes, maybe you could have avoided that accident if you had paid special attention. But the environment made it very hard to see the accident coming. The environment contributed significantly to the accident; the fault wasn’t just with your lying eyes.

Take a look at the code again. First, there’s a goto statement. I don’t think I’ve used a goto since high school. Gotos have been considered poor programming practice for years because they abruptly alter the logic of the code and can lead to “spaghetti code”. They lead to errors like the one above, and it’s easy to jump to the wrong label. On rare occasions a goto can get you out of a jam you can’t otherwise avoid, but it makes the code harder to follow and understand. If I were reviewing this code I’d ask for a good justification for using goto. In the code above, it’s not necessary at all.

Second, the variable and goto label are poorly named. Why do software developers use such short names? Compilers don’t care how long the names are, but people do. “err” is not a clear explanation of what the value is. An error of what?

“err” is also an incorrect name for that variable, because it might not contain an error status. It’s the return value of the authentication, which could report that the authentication succeeded. Even if it reflects an unsuccessful authentication, that’s not necessarily an error; it could just be a typo in the password. In the context of authentication, it’s not a failure when you don’t authenticate. That actually means the code is doing its job.

By the same token, you get to the “fail” label no matter what, even if none of the failure conditions are met. It’s confusing – the implication from the label name is that if you reach the label (which you always do) you have failed in some way. In fact, if you reach the label you’re either successful or unsuccessful with authentication, and neither of those is a failure. A failure is when something fails in some unexpected or unusual way, such as when you can’t save a file because you’ve run out of disk space.

It’s hard to figure out what’s going on with such terse and incorrect variable and label names. So “err” should be called something like sslReturnValue, and “fail” should be called something like osStatusComplete.

There should also be braces for each if statement. This is one of the big coding failures here, and it’s easy to correct. A single line after an if statement is part of that if statement, but once you have more than one line you need braces. So put them in even if you only have one line when you first write the code. It’s quite possible you’ll have to add more lines later, and braces just make the code easier to read. That’s especially relevant here, because the failure to see something “simple” in the code is what caused this error. It’s important to remember that it’s not just machines reading this text; it’s humans too. And the humans are more important in finding and fixing errors. Code should be easy to read for just that reason.

Let me repeat: code should be easy to read. It makes it easier to find and fix errors. It makes it easier to see the logic.

Sometimes software engineers like to write “impressive” code that’s dense and difficult to read. The code gets a lot done on a single line, or the spacing of the code is dense with little or no whitespace to call out important routines. That doesn’t mean you’re a smart engineer or that the code will run any faster. It means you write code that’s hard to maintain.

Finally, it’s always dicey to perform assignments within conditionals. It’s a shortcut, but it makes it easy to conflate the assignment and the conditional. I hesitate to say “don’t do it”, but avoiding it makes for easier reading and better code comprehension.

So here’s one possible re-write of the code above:

OSStatus authenticationStatus;

authenticationStatus = ReadyHash(&SSLHashSHA1, &hashCtx);

// fall through to the end of the "if" statements if we can't authenticate, otherwise continue

if (0 == authenticationStatus) {
  authenticationStatus = SSLHashSHA1.update(&hashCtx, &clientRandom);

  if (0 == authenticationStatus) {
    authenticationStatus = SSLHashSHA1.update(&hashCtx, &serverRandom);

    if (0 == authenticationStatus) {
      authenticationStatus = SSLHashSHA1.update(&hashCtx, &signedParams);

      if (0 == authenticationStatus) {
        authenticationStatus = SSLHashSHA1.final(&hashCtx, &hashOut);
      }
    }
  }
}

return authenticationStatus; // will be 0 if all checks are OK

So we have nested if statements instead of goto statements. The result is the same as the original example because we always stop as soon as we get a non-zero status code, and we return the status no matter what it is. We also see that we’re returning a status, not an error. We’ll never accidentally assign something in a conditional. There’s no goto statement, so we couldn’t replicate the error in the original code if we tried. It’s more obvious what’s going on just by looking at the structure, which makes for quicker assessment of the code.

The original code was poorly written and is deceptively difficult to comprehend. I would say it’s mostly due to a lack of coding standards, which can be considered lazy software engineering. The boring but important types of practices illustrated above result in big savings in the long term – such as a lack of security vulnerabilities.

Posted in Uncategorized | Leave a comment

How to iterate in RTC: Iterations that enact process

Juliet said “A rose by any other name would smell as sweet.” Shakespeare was arguing (through Juliet’s dialog) that the names of things don’t matter. What matters is what a thing “is”. And what if someone points to a daisy and calls it a rose? That doesn’t make a daisy smell any sweeter. Your “rose” will not impress your date.

And if you call something an iteration that isn’t an iteration, you’ll be less successful in delivering your software than you thought you’d be. If you use iterations, you want to make sure you enact real iterations in your project. Said another way, iterations should have behavior associated with them, not just timeframes.

It’s been my experience that iterations in RTC are often not used like iterations as commonly defined by the vocabulary of a Software Development Lifecycle (SDLC). So what we call iterations in RTC may or may not enact true iterative development. This leads to some mischief when people customize their processes.

One reason this is a problem is that RTC allows you to define an “iteration” any way you want, as long as it has start and end dates. But an iteration is more than just dates.

An RTC iteration can represent a Waterfall phase, a Scrum iteration, a chunk of time with some behavior associated with it, or just a start/end date pair. So we don’t want to think that we’re getting “iterative development” for free just by dropping an RTC iteration into a timeline.

An iteration in RTC is often used for its most obvious function: as a date range within a timeline that you can assign work items to. This is useful to be sure, and you can create “iteration types” in RTC that let you identify what you should be doing during that time period. By default, that’s only a description of the iteration (with some defaults for role permissions). But you CAN define true iteration behavior in RTC by adding operation behavior and customizing role permissions for the iteration.

Why worry about the difference between a chunk of time and a real iteration?

Sometimes a Project Manager tells me that they’ve built a hybrid process in RTC. That makes me worry. Then they say “I have a waterfall project with iterations.”

No, you don’t. That’s impossible. A Waterfall (or Formal) process delivers its value at the end of the process, by definition. An iteration delivers value at the end of the iteration. By definition. The Waterfall model is designed to do all requirements up front, for instance¹. Iterative development details a subset of requirements each iteration. The measurement of value is different. Iterations measure functionality delivered and Waterfall generally uses Earned Value. So Waterfall and true iterations are functionally incompatible.

There are some things you can do in one of those approaches that you can’t do in the other. Trying to mix-and-match iterations and Waterfall phases is like saying “I like steak and I like key lime pie. If I throw both of them in a blender I’ll get something delicious!”²

What the PM usually means is that they’ve defined Waterfall phases, and they’ve defined sub-sections of those phases (using RTC iterations). But they are not using true iterative development in the context of a Waterfall process because, as we saw above, the two are incompatible.

The mischief happens when a PM thinks they’re getting the benefits of iterations, but they’re only defining timeframes within their timeline. The PM may not really understand the practices he or she is using, so just calling something an iteration seems good enough. In fact, you need to enact and enforce iterative behavior to get the benefits of iterative development.

What does iteration behavior look like in RTC?

Take a look at a project created from the Formal Project template. Open the process definition in RTC and go to Process Configuration > Team Configuration > Iteration Types. Here are all the iteration types defined within the project. If you open an iteration type and select Permissions, you’ll see all the permissions for all roles for that iteration type.

[Screenshot: role permissions for an iteration type in the Formal Project template]
For the Requirements phase in a Waterfall (formal) development cycle, the Developer is only allowed to create defects. Developers don’t define risks or business needs when we’re doing requirements, so we restrict their behavior to creating bugs.³

Similarly, we can define operation behavior for specific roles in each iteration. In the screenshot below, we’re saying that anything delivered during the Requirements phase must have a clean workspace and change sets with descriptions associated with them.

[Screenshot: operation behavior for deliveries during the Requirements phase]
Note that iteration behavior overrides the operation behavior defined at the Team or Project level.

You might be wondering why I didn’t show Scrum iterations for my examples. If you look at the RTC Scrum template you’ll see that there are no iteration types defined. That means that the RTC Scrum template does not enforce iteration-specific behavior!

Actually, this behavior is enforced at Project and Team levels (under Operation Behavior). So all iterations in the Scrum template require identical behavior, all of which is defined at the Project/Team levels.

But it would be better, I think, to reflect that not all true iterations are created equal. Early iterations don’t need so much formality. For example, developers might be creating Epics, you might be experimenting with architectures that you will throw away, etc. Keep things loose and don’t make people do a lot of administrative stuff.

In later iterations you don’t want just anyone creating an Epic, so you might prevent developers from doing so. You want to make sure copyright notices are included in the code. And you want your developers to remove warnings and unused imports from the code as you approach final release. So add those mechanisms into the later iterations.

There’s no reason you can’t have iterations named Early Project, Mid Project and Final Release. They can be mostly the same, but allow for different levels of permission and formality.

You can’t expect to reap the benefit of iterations unless you discipline your team to deliver functionality in the way iterative development is defined. Calling a Waterfall phase an iteration doesn’t make it so, though you can use RTC to customize the phase so it focuses on what your Waterfall approach should focus on.

Similarly, you can define iteration-like functionality at the project and team levels of your Scrum project, as RTC currently does. But the focus of iterations changes over the course of a project. And not all projects are created equal; for example, you may want different iteration types for a 3-member Scrum team versus a 7-member team.

Add iteration behavior to your RTC iterations to support whatever process model you need. But remember that just calling something an iteration doesn’t get your team to do true iterative development unless you enact it in your process.


1. The observation from Winston Royce is that you find design flaws when you test, so you go back and re-design the product. Kind of a great big second iteration. He’s not the only SDLC guru who recommends throwing away the first attempt; I’d argue that most modern SDLCs actually endorse the practice of “throw the first one away.” But that’s for another article.

2. Yuk. Some will argue that modern Waterfall methods incorporate Agile techniques, which allows you to introduce iterations into a Waterfall environment. And I believe that you can be both Agile and Waterfall, since Agile is a perspective and Waterfall is a discipline. But you still can’t jump to the left while you’re jumping to the right. You can’t do everything in your process just once (Waterfall) while at the same time repeating everything in your process (iterations).

3. When I first saw this I wondered why a Developer is allowed to do anything at all during the Requirements phase. To do Waterfall formally, you’d toss the requirements over the wall to the developers and they wouldn’t have any code to write or bugs to report until that time. Yeah, I know this is old school. But it’s still a valid (or at least required) approach for certain types of projects and industries.

Posted in Uncategorized | 1 Comment

A Brief Explanation of Why Processes Don’t Scale

I received a comment on an earlier post from Guido Schneider taking some exception to a Scrum example I used. His thoughtful response got me thinking, and the thinking turned my response to his comment into a whole blog post.

Guido makes the point that in the real world, you often have Scrum teams that are part of larger projects. The result is that we try to scale our processes by creating things like scrum-of-scrums (essentially doubling down on the practice). We need to create high-level iteration plans that look at hundreds of work items across multiple Scrum teams. Sometimes we need to roll up utilization numbers and progress through work item hierarchies, for example. The current Scrum template in RTC doesn’t solve all of these scrum-of-scrums, project, or portfolio level issues.

One reason for this is that Scrum isn’t a project-level practice; it’s a team practice (as Guido indicates). But organizations often try to scale Agile techniques like Scrum to larger teams and projects, placing a demand on an RTC Scrum template to do more than just Scrum.

Is it Possible to Scale a Process to a Larger Team Size?

So my concern is: how much can Scrum (or any other Agile process) be scaled to larger organizations? Are we just doing what we’ve always done with rolling up project management information, giving it a new name (Agile), and expecting different results? What evidence is there that you can scale any or all Agile techniques?

Interestingly, Agile was not initially defined as a set of techniques that only small teams can use. But almost all the work that’s been done inventing new Agile techniques focuses on the work of small teams. For example, Scrum uses developers and product owners, not project managers, to prioritize change requests. But larger projects make the project manager responsible for prioritization.

I believe this is because it’s easier to fulfill the Agile Manifesto with a small team. For example, face-to-face interaction is much easier, and gives you much more productivity, on a small co-located team than it does on a large distributed team. More documentation and artifacts are required for the larger team because 20 people can’t have face-to-face communication every day.

When we scale a practice to create scrum-of-scrums, are we really scaling Agile? My experience in recent years is that most Daily Scrums are not actual Scrum meetings. They are status meetings with free-flowing discussion; they don’t keep to the strict questions about what was or will be completed. Not only is this not Agile (as defined by Scrum), it’s not scalable. A scrum-of-scrums is probably worse, since those Daily Scrums will be filled with project managers who are used to discussing status and roadblocks instead of keeping to the Scrum script.

It’s frustrating, because one of my favorite parts of Scrum is the efficient and effective Daily Scrum format.

Agile Isn’t Everything. Neither is Anything Else

I’m not arguing here that we don’t need to roll up estimates and amount of work completed. I’m making the argument that these aren’t necessarily Agile techniques. They’re certainly not traditional Scrum techniques. And we should consider the cost/benefit of using a practice like Scrum (or any other type of process) when we need to make decisions in a way that is not Scrum-like.

For example, the Agile Manifesto says we prefer responding to change over following a plan. That’s an excellent perspective, but it’s a harder perspective to execute on when you have a large project with lots of moving parts. Customers need delivery dates, dependent teams need to accept new code changes, payment is only made after certain functionality is delivered and the business is counting on that money, etc. Four people working in adjacent cubes can coordinate with quick conversations. Larger projects gotta have a plan to follow or they sink under their own weight.

The Agile Scaling Trap

I assert that there are techniques that make you more effective on a small team that don’t scale to a larger team. The techniques lose their effectiveness in larger teams. The dynamics of larger teams are different, not just bigger. There are things you have to worry about with a large team that you don’t need to worry about with a small team.

What’s a small team? Surprisingly, there’s no good research on that in our industry, last I checked. But based on some research with entrepreneurs, and using the n(n-1) formula that illustrates the Law of Diminishing Returns, I believe that a small team is no more than 5 people. Beyond that, the communication overhead added by each new person on the team increases geometrically. It’s also easier to “hide out” and do less work, which is why I join the largest teams I can find.

And the team needs to be relatively isolated. That is, it has few dependencies on other teams and other teams have few dependencies on it. As you depend more on other small teams, the meaning of “team” changes and you find yourself part of a larger team.

So moving from a small to medium sized team involves more than just adding new people. Everyone needs to communicate with each other differently, and the overhead increases dramatically. You can no longer be successful doing the same thing you did when the team had 3 people on it.

By the way, this doesn’t mean you can’t use Agile techniques in larger organizations. You absolutely can. For example, Story Points is something that an organization of almost any size can use. But we need to be wise about what we use and what can be scaled to larger teams.

Adapt and Overcome

So, what’s this all mean? Guido was making the point that at the project level, you still need to deal with lots of planning work items because you have multiple teams on your project. He’s correct. But looking at a WBS rollup is not Scrum, and it’s not even an Agile practice. It was hard to manage this kind of information 30 years ago, it’s hard to manage it now, and it’s going to be hard tomorrow. Tools make it a lot easier, and tools continue to get better. But it’s not a surprise that total automation eludes us.

We still need to adapt processes to our circumstances, history, and goals. Often, adaptations force us to change the way we view or interpret information, which makes us go back and re-think the complex ways we need to evaluate the progress of our teams and projects. It’s no wonder we haven’t come up with a one-size-fits-all process or project management schema.

We still need to understand our own organizational needs and goals, which means we still need to understand how practices (like Scrum, Story Points, etc) give us value so we can choose wisely, adapt appropriately, and get the most value for the energy we put into defining our processes.

Posted in Uncategorized | Tagged , | 2 Comments

RTC Adoption Problems: Tool, Education, or Perspective?

I work with CLM customers a lot, and sometimes I encounter people struggling to adopt RTC. Once in a while the problem is with RTC – a bug, the rare difficult-to-use feature, etc. Once in a while it’s just a matter of technical knowledge. For example, if you’re not aware that your WebSphere JVM heap space should be no more than half your physical memory, then you’re going to have issues.

And sometimes the problem is human nature. It’s our own difficulty with breaking out of an existing perspective, breaking old habits and old ways of looking at things. These are the most difficult problems to solve, but solving them provides the biggest payback.

Example: Scrum Iteration Planning

To illustrate this issue, let’s say you’re using the RTC Scrum process template. You think that it takes too long to load an iteration plan. You perceive a performance problem. Fair enough.

So we look at your plan and see that you’re loading many hundreds of work items when you display your plan. In other words, you have planned to do hundreds of things in the course of a single Scrum iteration. And these are higher-level items, Epics and Stories, not just tasks and bugs. The problem is, a Scrum iteration shouldn’t have that many planned items. You should generally have dozens of plan items, not hundreds. What gives?

It turns out that you have nested iterations (which is fine), and your iteration plan is focused on the highest level iteration (which is not fine). We find that the timeframe of this high-level iteration lasts for more than a year.

Scrum iterations are designed to be short and provide a deliverable at the end. Four to eight weeks is common. If you have longer iterations, it’s harder to do the planning, estimating, and managing that makes Scrum so useful. Instead of looking at all the planned items in your project, you want to look at just those that are planned for the current (or possibly the next) short iteration. Scrum focuses the team on the “now”, partly because Scrum allows for a lot of change to a project in the future. In fact, Scrum embraces and encourages change, so we can’t know, in a detailed way, what’s going to happen past the next iteration.

This means that you are losing a lot of the value of Scrum if you are trying to see the entire project in one iteration plan. And, if you only look at the plan items for a fine-grained iteration, you’re not going to have performance issues because you’re not loading hundreds of plan items.

It also means that in this example, RTC looks like it has a performance problem, but what’s really happening is that the Scrum template is being used in a way it’s not designed for. You’re trying to pound a square peg into a round hole.

Right Process, Wrong Perspective. Or, Right Perspective, Wrong Process.

So now you’re saying “OK smart guy, but I really want to see the entire scope of my project. I’m used to looking at Gantt charts and critical paths in Microsoft Project. It’s how I’m successful at delivering my software!”

This is the crux of the problem. Scrum (or any process) is designed to deliver value in a certain context. For Scrum that context is small teams that can deliver in short cycles and who embrace plenty of changes as you go. But maybe you’re working on a project that can’t do that. Perhaps you have a large project that must try to define all requirements up-front because of contractual constraints. Or you’re required to report progress against a specific total estimate so you can bill the customer. Or your organization is used to working in a way that’s not the least bit Scrummy, and old habits die hard.

We can argue all day about what’s “good” and “bad” about the process you’re using, but that’s not the point here. It may be that the nature of your project prevents you from adopting all of Scrum (or whatever process you’re trying to adopt). A better way to say that is that the nature of your project should define the type of process you adopt. Maybe you actually do need to look at all plan items across the entire project at all times.

That’s OK, I’m not judging. But let’s not fool ourselves into thinking that Scrum will solve our problems because Scrum is all good, hail mighty Scrum. Let’s use RTC to model the process that we can do and must do, the process that we’re contractually obligated to do or that has proven to be effective in our organization for our particular set of circumstances. Or, use RTC to model the process that we need to do, that will solve some problem in the way we develop our software.

The real problem in the scenario described above is that people want to do Scrum “because it’s Agile”. People want to get that Agile mojo and make their projects less painful and more predictable. But first their project needs to be amenable to Scrum practices, and if so they need to change their perspective and the old, ingrained, unconscious habits they’ve developed.

It’s very, VERY hard to change habits and perspectives. You have to have a clear target and keep that in front of you while your body and mind try to yank you back to your habituated way of doing things. The good news is that a tool like RTC can help you by enforcing new process elements that you want to adopt.

But if you try to use the Scrum template (for example) because you want to adopt Scrum, but you want to view reports and plans like you did in your old Waterfall process, you’re going to be frustrated. You’ll be banging a square peg into a round hole. There’s nothing wrong with square pegs or round holes, but we need to pick the right peg for the right hole.

This means we may choose the Traditional process template in RTC instead of Scrum. Or we choose the Simple process template and add Scrum elements to it, along with some custom stuff we like. Or we choose the Scrum template and add some elements from the Traditional template to it.

We do what we need to do to make sure we:

  • Do things that make us successful in delivering software IN OUR PARTICULAR ORGANIZATION.
  • Consciously change what doesn’t work, and enforce what does work.
  • Continuously review our process and incorporate new best practices. In other words, make a habit of creating new good habits.

And when we bump up against a problem in RTC, we want to ask ourselves if:

  • Something is wrong with RTC
  • We lack some knowledge we need to acquire
  • We’re using the chosen process in the way it was meant to be used

I guess the moral of the story is that there’s no such thing as a free lunch. If you want the advantages of Process X, you need to understand how Process X delivers the value that it does. Then embrace that behavior by forming new organizational habits.

In my experience, this is the most difficult part of adopting a process. Teams and organizations that get good at this are much more successful than those that don’t.

Posted in Uncategorized | Tagged , , , | 6 Comments

Jazz Jumpstart Speakers at Innovate

My Jumpstart colleague Rosa Naranjo has posted a list of Innovate sessions presented by Jumpstart team members. Given the high level of expertise across CLM products the Jumpstart group represents, I recommend you attend these sessions if they cover any areas you’re remotely interested in.

Posted in Uncategorized | Tagged , , | Leave a comment

JazzPractices at Innovate 2013

I’ll be attending the IBM Rational Innovate conference this year and leading the Process Enactment Workshop. If you’re going to Innovate, consider attending WKSHP-1116, the Process Enactment Workshop. I’ll be delivering it with my co-creators Ralph Schoon and Jorge Diaz Garcia on Tuesday afternoon.

Ralph is from Germany and Jorge is from Spain, so if your native language is English, German or Spanish we can promise to answer your questions in your mother tongue! Go to the Sessions page and search for “ruehlin” to see the abstract.

Many of us from the Jumpstart Team are going to Innovate. Check out the team bios and see if you want to visit any of their presentations as well.

If you’d like to meet with me, please email me and we’ll try to get something set up during the conference. I’m imagining a group process enactment discussion over Mai Tais at the pool…

Posted in Uncategorized | Tagged , , , | 2 Comments

RATL Perf Land Blog

The Jazz Jumpstart team is lucky enough to have Grant Covell join us. He’s a performance guru, and has done a lot of work calculating and validating performance on CLM. 

I’ve added his blog, RATL Perf Land, to the blogroll. If you’re interested in performance issues, you should check out his work.

Link | Posted on by | Tagged | Leave a comment

What State Are You In? And Where Can You Go?

One of the hidden frustrations people encounter when using work items in Rational Team Concert is knowing the current state of the work item AND how to get to other states. People sometimes get frustrated because they can’t switch to the state they want, or they can’t visualize the flow of the work item.

My colleague Ralph Schoon, Germany's best export, described a way to customize a work item attribute that shows you the current state and available states in your work item. It's based on original work done by Kristina Florea.

Here’s what you need to know about this: IT’S TOTALLY COOL! This should be a built-in feature of RTC. But if you do a little work item customization (and create a simple graphic of your WI states), you can have this extremely cool feature as well.

Here’s a new, unsaved work item. You can see where you are in the flow, the states that are available, and how to get to the other states. The Workflow Information field shows a graphic of all the states, with the current state in green. The Workflow Information field is of type “wiki”.


You can see that the work item is “New”, and it will be “Proposed” after we save it.

After we save the work item, the graphic changes to the following:


We are now in the Proposed state (which is also shown in the field at the top of the window). We can see in green that there is just one possible thing for us to do in this state: research the work item. We can also see that if we want to Reject the work item, we must first Research it.

This makes it much easier to get the work item into the state you want it to be in. It also makes it clear what the entire workflow is, which makes understanding the purpose and direction of work items much easier.

Currently, you can only see the workflow for a work item type (not a specific work item instance), and that can only be viewed from the Process Configuration tab of the project description. Usually the only people who look at that (or know to look there) are the process engineer and the project lead.

Using Ralph’s technique, you can see the workflow for a specific work item (not just the general WI type), and you can see what the current state is in context. From a process perspective, this is a big shortcoming in RTC that Ralph has addressed.

You can do this yourself by using JavaScript to customize work items. Ralph provides easy-to-follow instructions in Lab 5 of the CLM Process Enactment Workshop. I recommend doing the whole workshop, or at least the work item customization labs. But if you want to jump right to the section that describes how to display the work item flow, it’s in section 5.6, Calculated Value to Visualize the State of the Technology Review.

You also might want to request that this feature be added to RTC so it’s easier to implement.

1. Ralph was incorrectly identified as the creator of this feature in the original article. The passage was corrected to show Kristina Florea as the creator.

Posted in Uncategorized | Tagged , | 3 Comments

Why We Need Best Practices

Part 1 in a series.

Every discipline has best practices. These are the behaviors and skills that contribute the most to success. They aren't objectives in themselves. You don't "complete" a best practice and then you're done. Best practices are perspectives and disciplines that you apply almost every day to make you better at what you do.

To illustrate, I heard a story once about salespeople taking training classes at a conference. All the younger sales folks were going to the cool classes about new sales techniques, the latest psychology of sales, etc. But the more experienced salespeople were going to classes on how to negotiate better, how to close the sale, how to do a better job of networking, etc.

You might wonder why the more experienced people were going to those basic sales classes instead of classes about special circumstances or new trends in sales. The reason is because those basics – negotiating, advancing the sale to the next level, connecting with customers – are fundamental practices that contribute the most to being successful with sales. Negotiation and advancing the sale may be 50% or 80% of actually making a sale. Getting better in those areas, even if they’re areas you’re already good at, will give you more “bang for the buck” for your efforts.

Another example: When I was doing karate, the particular school I was in insisted that you memorize a set of tenets and purposes of that particular style. The very first one was "Practice basic techniques all the time". And every class we did so.

Why practice the most basic blocks and kicks all the time after already practicing them for years? Because they focus on balance, on protecting your most vulnerable areas, on improving your core strength and speed. These are the things that not only give you the foundation to do dramatic (but not very effective) jump-spinning kicks. They also continue to make you better at dealing with the kinds of attacks and defenses that you're most likely to encounter. Block, block, kick, punch. Don't worry about flying through the air, just avoid getting hit and disable your opponent. Or better yet, run away!

It’s the same for software development. There are fundamental practices that transcend technology and fads. If you get good at those areas you’ll significantly reduce your risk of failure, you’ll be much more likely to create a product that will solve someone’s problems, and you’ll create more robust and high-quality software. If you don’t focus on those best practices, you’re much more likely to fail to deliver a product on-time that solves someone’s problems.

I started to learn best practices by failing a lot. Like many people, I finally started to migrate to doing what seemed to work and avoiding what didn’t. In fact, after you work on some projects over the years, you can start to “smell failure”. That’s the visceral sense of dread you get when you see too many things happening that just aren’t right. When this happens, you’ve started to get a sense that best practices (even if dimly perceived) are not being performed. Run away!

Rational Software (later purchased by IBM) defined 6 specific best practices of software development and organized their products and solutions around them. That was GREAT! We could actually talk to customers about how to solve their problems and discuss where their shortcomings were. For almost 15 years I’ve used the thinking around those practices to help lots of people get better at delivering software.

These days, it feels to me like we have less understanding of what these best practices are in our industry. Teams do “agile” (whatever that means nowadays), or they repeat what their organization has done for many years, or they just make it up as they go along. While we’ve come a long way over the last 30 years, we’re still addressing the same kinds of problems. And we still need to address the same best practices to be successful.

One more thing about best practices in our industry: they give us a way to help measure the value we deliver. Measuring value can be hard. Is a programmer good because he writes three times as much code as someone else, or is the other person better because it’s hard to find bugs in her code? Is a tester good because she finds a lot of bugs, or is the other tester better because he identifies ways for developers to avoid writing bugs? Is an analyst good because he writes requirements that people can understand? Or is the other person better because she writes requirements that developers can design from?

The fact is, bad software can succeed with good marketing. Good software can fail in the marketplace even if the developers are delivering on their minimum SLOCs. Rewards and punishments for these kinds of results may have little to do with the skills of the practitioners and their contributions. It’s often hard to draw that line between what a practitioner does and the success of a specific product.

So instead of looking at how well a product does in the market as a way to reward practitioners, we want to encourage the behavior that's most likely to make the product successful, and discourage behavior that won't. If we identify best practices, we can focus on encouraging behavior that realizes those practices. That doesn't guarantee success or ensure we'll avoid failure. But it does give us a way of working that makes success more likely. That's what we should be measured by.

I thought I'd do a series of blog posts on the best practices for software development as I see them. There aren't very many practices, but they cover large areas. I would love to see the industry re-dedicate itself to best practices instead of embracing our dearly loved process philosophies. Software development processes should make us better at performing best practices, not be ends unto themselves.

Here are the best practices I’ll be covering in future posts. Stay tuned!

    • Define the problem and a general solution. If you only do one thing right, do this.
    • Manage requirements. At the least, manage the 20% of critical requirements.
    • Test continuously. Like voting in Chicago, do it early and often.
    • Deliver iteratively. Always get back to solid ground, aka stable software.
    • Model the software. The people currently in the room with you are not the only people who need to understand what you’re doing.
    • Define the architecture. No, really. Define the architecture!
Posted in Uncategorized | Tagged , | 1 Comment