DevExpress Newsletter 9: Message from the CTO

ctodx
26 August 2009

My Message from the CTO for the ninth newsletter:

It seems there's a new word hitting the streets: heuristics. Well, OK, it's been around for quite a while, but I'm noticing more and more that it's being seen outside the hallowed halls of academe.

An algorithm is an exact recipe for doing something, usually computational in nature. A heuristic is like an algorithm but it's less exact. If you get an answer you don't like, you get to tweak some assumptions and parameters and try again.

An example of a heuristic is cutting clothing patterns from cloth. There's a simple algorithm for doing this: just place the pieces one after the other down the cloth. Works every time, but it's very wasteful. So you go back and try again by placing the smaller pieces alongside the larger, and fiddle around.
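For what it's worth, that "fiddle around" step is close to a classic packing heuristic: first-fit decreasing. Here's a toy Python sketch of the idea — the piece sizes and strip width are invented for illustration, and real pattern cutting is of course two-dimensional:

```python
# First-fit decreasing: sort the pieces largest-first, then place each
# one into the first strip of cloth that still has room for it.
# All sizes here are made-up, one-dimensional stand-ins for pattern pieces.

def first_fit_decreasing(pieces, strip_width):
    """Pack piece widths into as few fixed-width strips as possible."""
    strips = []  # remaining room in each opened strip
    for piece in sorted(pieces, reverse=True):
        for i, room in enumerate(strips):
            if piece <= room:
                strips[i] = room - piece  # tuck it into an existing strip
                break
        else:
            strips.append(strip_width - piece)  # open a new strip
    return len(strips)

# The naive "one after the other" algorithm uses one strip per piece
# (5 strips for these pieces); the heuristic does much better here,
# though it is not guaranteed optimal in general.
print(first_fit_decreasing([7, 5, 4, 3, 1], strip_width=10))  # 2
```

If the answer looks wasteful, you tweak — change the ordering, the strip width, the placement rule — and try again, which is exactly the "heuristic, not algorithm" loop described above.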

We're trying this out with quality. There is no real algorithm that we (or you) can follow that would produce 100% bug-free software. If there were, we'd all be using it -- duh! (Actually I lie, there probably is such an algorithm, but it would be unusable in reality, much like solving an NP-complete problem exactly.) So, instead, we're trying out some heuristics, either on their own or in combination with each other, to improve on our already high standards for quality. Like all problems that are solvable with heuristics, though, it takes time and iterations, but the company as a whole is committed to succeeding.

If we find some practice that works for us, you can be sure we'll let you know. After all, "practice" means trying something over and over, until perfect. Heuristics!

This post was a cross-pollination between something I've been researching in my spare time (it involves optimization algorithms, another prime area that uses heuristics, a.k.a. WAGs) and our internal discussions about that elusive attribute, quality. It seems that many people view quality as something that's black or white, a step function with a discontinuity: either it's of 100% quality, with nothing wrong at all, or it's worthless; there is no middle ground. I totally disagree: since there is no usable algorithm for quality, achieving higher excellence obviously involves iteration through time and the various shades of grey, from black to white. What the slope of the function looks like, though, I have no idea. :)

9 comment(s)
Jason Short

Do you mean in your testing approach? You are planning to use heuristics to determine when and where to test rather than attempting to test everything? That is a great idea in theory, though of course quite complex as well.

Or did you mean with Quality Management as in customer support?  I could see heuristics coming into play here as well, but in terms of when and how many resources you put on a particular issue based upon some criteria.

Interesting post though, makes me think...

26 August, 2009
Antonio De Donatis

Dear Julian,

It’s easy to agree with you about the fact that quality measured on a scale from 0 to 100 is not likely to be absolute (either in a positive or negative sense).

And yet your post inspired me to formulate a couple of observations.

Sometimes people are in a position that requires them to make a choice, such as whether or not to adopt a certain tool, and their answer cannot be gray, since they will either adopt it or they won't.

Their answer, however, should not be interpreted as a certificate of 100% quality if positive, or the "insult" of 0% appreciation if negative.

My second observation is regarding the iterative process that you intuitively describe as a “movement” from darkness to light via several shades of gray.

My suggestion is not to underestimate what ensures progress rather than regress in such a natural model, and that is, quite simply, a sequence of correct decisions.

So, although we all love to be reasonable and nice (or at least most of us, hopefully), even the most reasonable of us should not be afraid to take decisions, even if they may be wrongly interpreted as total acceptance or total rejection.

Yours

A

26 August, 2009
Julian Bucknall (DevExpress)

Jason: I think my idea was more for the application of heuristics to the testing approach. There are an inordinate number of testing methodologies and patterns we could apply to our software (and, of course, testing UI controls grows dramatically with complexity). We don't have infinite resources or time, and so we have to pick and choose what we do and how "deep" we go. Call it heuristics or call it intuition, we have to work out what gives us the biggest quality bang for our testing buck.
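That "biggest quality bang for our testing buck" idea can itself be sketched as a greedy heuristic: estimate a value and a cost for each test suite, then pick suites by best value-for-cost until the time budget runs out. A minimal Python sketch — every suite name, value, and cost below is hypothetical, invented purely for illustration:

```python
# Greedy test selection under a time budget. The (name, value, cost)
# triples are entirely made-up estimates for the sake of the example.

suites = [
    ("grid rendering",  9, 3),  # (name, estimated defect-finding value, hours)
    ("data binding",    7, 2),
    ("keyboard input",  4, 4),
    ("printing",        3, 1),
    ("skins/themes",    2, 2),
]

def pick_suites(suites, budget):
    """Pick suites cheapest-per-unit-of-value first, within the budget."""
    chosen, spent = [], 0
    for name, value, cost in sorted(suites, key=lambda s: s[2] / s[1]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Greedy-by-ratio is not guaranteed optimal -- like any heuristic, you
# tweak the estimates and try again when the answer looks wrong.
print(pick_suites(suites, budget=6))
# ['data binding', 'grid rendering', 'printing']
```

The heuristic flavour is in the inputs as much as the algorithm: the value and cost numbers are guesses ("WAGs"), refined over iterations as you learn where bugs actually come from.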

Interesting second point, though: I'll have to think about that.

Cheers, Julian

26 August, 2009
Julian Bucknall (DevExpress)

Antonio: Thanks for the feedback.

Actually I kind of think of the whole process more as walking around a hilly landscape than as moving a cursor up a single line graph from dark to light (nice visual there though). We may reach the top of a local "quality" hill and think we're done, but, dang, if there isn't another hill over there that seems higher. To get to it, of course, we have to go down into the valley, regroup, and start climbing again.
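That hilly-landscape picture is essentially hill climbing with random restarts, a standard optimization heuristic. A minimal Python sketch — the landscape function and all the numbers are invented for illustration:

```python
import random

# Hill climbing with random restarts: climb until no neighbour is
# higher (a local peak), then restart from a fresh random point in the
# hope of finding a taller hill. The "landscape" is a made-up function
# with a low peak near x = 2 and a higher one near x = 8.

def height(x):
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 5 - 0.5 * (x - 8) ** 2)

def hill_climb(x, step=0.1):
    while True:
        best = max((x - step, x + step), key=height)
        if height(best) <= height(x):  # no strictly higher neighbour:
            return x                   # we're standing on a local peak
        x = best

def climb_with_restarts(restarts=20, seed=0):
    rng = random.Random(seed)
    return max((hill_climb(rng.uniform(0, 10)) for _ in range(restarts)),
               key=height)

# A climb started near the shorter hill stops on that local peak;
# with enough restarts we usually end up near the taller hill at x = 8.
```

The "go down into the valley and regroup" step is the restart: a plain hill climber never descends, so it stays stuck on whichever local quality peak it reached first.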

Cheers, Julian

26 August, 2009
Antonio De Donatis

Hi Julian,

Firstly thanks for the reply and the interesting conversation.

I appreciated the realism and pragmatism of your hill-view metaphor, and yet I find it challenging to push it further by recognizing that, although our conversation is very abstract, we do have a concrete domain (which is software development).

The challenge is therefore to exploit as much as possible the peculiarities of this domain.

The example of climbing two hills (100m and 200m tall) implies 300m of total climbing.

DevExpress components are a perfect example of how I only have to climb 100m when climbing the taller hill, since the first 100m were an (excellent) effort already made by the DevExpress dev team.

In other terms, software allows for reuse in ways that are unthinkable in other industries.

It follows that at a certain point we should depart from material examples and stay abstract enough to achieve design goals/milestones that can be significantly and efficiently reused.

Thx again for the conversation.

Yours

A

26 August, 2009
Richard Morris (DevExpress)

A formal proof of software is possible, albeit very, very costly.

www.cebit.com.au/.../571-nicta-research-promises-crash-proof-software

My tax AU$ at work :)

26 August, 2009
Adam Leffert

Julian,

As Antonio said, it's difficult to infer customer satisfaction from a binary choice (buy or don't buy).

Wouldn't your ideas on using heuristics for software quality also apply to customer satisfaction?

The much-discussed customer voting system would provide instant, clear, calibrated feedback on any issue DX chose to monitor.

Do you agree?

Adam Leffert

http://www.leffert.com

27 August, 2009
Richard Berman_1

Well, part of it is clearly the feedback loop with your customers. Technical support is actually a part of this heuristic for quality, and in fact if you are focused on improving the quality of the software alone, then you will frankly miss the boat, as you have to improve the quality of your developers, support people (who for all I know are actually the developers) and the administrative people who support *them*.

This is no idle statement. If, when an error occurs, the main actions are to correct the bug, test it, and release it, then you have missed one of the most important aspects of quality and are missing one of the fundamental components of achieving that goal of ever-improving quality.

The missing thing is improved PEOPLE. When an error is made that makes it to your customers, the sequence of actions to repair it should always be this:

1. Get the customer handled and working no matter what. Quick fixes, workarounds or more in-depth fixes as long as it is the shortest path to handling the customer.

2. Investigate a bit further to assure the underlying causes of the problem are understood.

3a. Of course, fix the actual cause of the problem and test it.

3b. Find out how those causes came to be EXACTLY. What processes and/or persons were insufficient or acted incorrectly so as to allow the causes?

4. For each person, find out if their actions were the result of some missing knowledge, or were really due to simply not following known procedures or accepted practices.

5. Handle what you find - get the person to understand and demonstrate they can avoid causing that sort of problem again through whatever educative steps are necessary. Or apply appropriate discipline if it was laxness or "a mistake".

6. Examine the workflow and processes used to see how such problems could be detected and avoided, and make that part of the process. Get the people in the area to use the changed process.

Now just imagine if this were done for every problem - perhaps in a very much abbreviated form for light issues, or a more thorough form for serious problems that make it through to customers?

You'd end up with happier customers because they got first priority (and while not the forum for it, much experience with your support indicates this is not the case currently - it appears that bugfixing is first priority and the customer is not much considered in that).

And in addition to happier customers, there would be improved code and, over time, improved SPEED of improving code, because there'd be fewer errors as both your people and your processes become more competent at producing higher-quality code by the time it gets to the customer.

There's a heuristic for you.

27 August, 2009
Dale Mitchell

richard,

i'm glad we've got you watching things down there so we stay up on the latest.

can't wait for your follow-up post sometime in the future when some hacker takes the challenge and crashes it.

thanks

dale

30 August, 2009
