August 2007 - Posts

  • Perception, deception, reception

    These past few work days have been up and down. It's a bit like the old joke that starts off with "I have some good news and bad news..." and then gives us the bad news, and then the good, the joke being that the good news is worse than the bad but in a different way.

    Well, I'm going to do this the other way round: first the good news and then the bad. By this masterful switch, the good is really, really good, and the bad is simply awful.

    First off was a link that Dale Mitchell provided in a comment to my post on CodeRush and Refactor! Pro 2.5. The article is an interview with James Moore of Red Gate Software, a company that is mainly known for its line of SQL-related tools, but that is starting to diversify into .NET tools with their ANTS profiler. I won't comment too much on the content of the article, but will point out the second question and answer as being particularly relevant to us and our industry.

    Yes, writing your own controls is fun and instructive and means that you can hone and gear them precisely to your particular situation, but in doing so you are increasing the amount of code you have to maintain, you're increasing the cost of the software you really should be writing, and you're not adding to the bottom line. DXperience Enterprise, containing everything that we do for .NET, including all source code for the components, costs $1,300 at present. Say you cost your employer $100 an hour (you know, it'll be close with salary, medical, dental, vacation, 401(k), options, stock, etc, etc). That's 13 hours of your time, less than two days. Are you going to design, write, test, and bug-fix a tree-view, for example, in that time?

    (By the way, Mark Miller gave me a redacted question 2 answer with certain passages marked in bold. I decided not to use it, though I'll leave it as an exercise for the reader to reconstruct his version.)

    Then I saw that Andrew Connell had made the results of his work in writing templates and plug-ins for CodeRush and Refactor! Pro available for all Sharepoint programmers. For free. Zero, nada, zilch. OK, you do have to buy CodeRush for about $250 (Refactor! Pro is included in that price, remember) to use them, but considering the amount of time this is going to save you, it's a mere pittance.

    In reading the blog posts (1, 2, 3, 4, 5) where he talks about it all, frankly I'm stunned at the amount of work in the project and at the magnanimity of Andrew's gesture. Now, in my defense, I'm not a Sharepoint programmer, but Andrew is, and he's distilled his knowledge and experience into these additions to CodeRush and Refactor! Pro that will make your life easier.

    First and foremost, let me say this is awesome work; secondly, and to a lesser extent, it validates the direction Mark and the team took in ensuring that both CodeRush and Refactor! Pro are fully extensible, with a well-thought-out design and great support in the form of project templates and the like.

    After that, I read Chris Bilson's blog entry on modifying a CodeRush template to help program with NHibernate. A nice little article on extending what we provide as part of the product to make it better and easier to use for your normal daily work. In Chris' case, it's declaring virtual properties for use with NHibernate, a well-regarded ORM derived from the Java Hibernate framework. He's got the screenshots, he's got the shaky mouse highlights (I'm there with you, Chris), resulting in a very succinct, tasty article indeed.

    Then after this surfeit of good news, this morning I read this rant about component vendors. I'm certainly not going to dissect who irascian is talking about so vehemently, but instead view it as a rant about all component vendors, of which we are certainly one, and reply on that level. He makes some (qualified) good points and some quite bad ones. I'll look at the ones that could apply to us.

    - Support as an endless loop of responding to a question with another. We've been dinged on this one several times (it annoys me incredibly), and have recently been making changes to some of the answers we give in order to try and cut short this go-around.

    But, having admitted that sometimes we're to blame, I'd invite you to look at the problem from our viewpoint: sometimes the question that's asked of us is ambiguous to the point of obscurity (I remember seeing one of Plato's answers recently: "Are you talking about the ExpressQuantumGrid, the XtraGrid, the ASPxGrid, or the ASPxGridView?"). After all, we can imagine that it's coming from a customer who's been living with the issue they're complaining about for the past several hours. They're fully intimate with the problem and the scenario, and forget in the heat of writing an email to us that we're not. So sometimes, we're forced to respond with a question.

    - View Source (for an aspx page containing some third party controls) provides "total gunk, debris and sheer volume of crap". Heh, message received and understood. I'll admit our editor controls are somewhat guilty, and we're looking into the TGDASVOC produced when they're used in the ASPxGridView and hope to show the new slimmer controls in version 2007.3.

    But having said that, does the end-user care about TGDASVOC in the HTML? So long as it's rendered by the browser they're using, do they really care that it was 3K of HTML or 4K? The images on the page are going to be larger than that anyway. To me it's a little like arguing that XML should always be nicely indented for human readers. Er, no, it doesn't have to be; human readers fall into second place compared to the program readers, surely?

    - "there's no real competition". Do what? The mind boggles. Is he saying that we, Infragistics, ComponentOne, Telerik, etc, are in cahoots? Price-fixing? That we're just four facets of the same shady reclusive shell company in Liechtenstein that doles out the sales? My mind has completely boggled over on this one and I'll have to move on before it comes out my ears.

    - Side rant about Community Server 2007. Er, complain to Telligent. We do, and how. Don't assume that they're using a third-party vendor's components. Maybe they are, maybe they're not, maybe they've taken a commercial product and made lots of changes to suit them. Dunno. But they're the front line of support for Community Server.

    Now having said all that, let me point out a few things. Compared with writing WinForms controls, writing useful ASP.NET controls is hard. Sorry to be the bearer of bad news, etc, but it is. For a start, they're written as C# (say), HTML, and JavaScript. They're stateless, with some decidedly kludgey ways to try and make them stateful (which the component user tends to break quite frequently). HTML was not designed for fancy interactive UI work (although there is work going on in that area to make it so), and so fancy UIs require all this kludgey JavaScript. The browsers are all different (the DOMs aren't the same, and neither is the JavaScript), and even the script routines that determine what the browser supports make me laugh with amazement. (In fact, it was only ASP.NET 2.0 that made a real effort to determine which browser was what, and that work continues with Silverlight.)

    Of course, that's why we're paid the big bucks — partly in order to fund magazines and conferences, it seems — therefore we should spend the time and effort to make sure our super-duper grid works in all the browsers, on all the operating systems, with minimal HTML and JS and Viewstate, with rock-solid state persistence, and blazing performance. Which we do, happily. And in doing so bugs are reported which we fix. And we tweak based on feedback from our customers. And customers can rely on us to do all that, because, as mentioned above, they get a lot of value for two days' worth of work.

    I used to work for a company that decided early on that they would write their own grid for their ASP pages (no, this was before ASP.NET). It was geared to their application, it had loads of special case code everywhere, and nobody, but nobody, in the dev team wanted to touch it. Change one thing in one place and functionality changed somewhere else. With the best will and effort in the world, they ended up with a Frankengrid. More time was spent on the underlying infrastructure of the grid than on the application's UI. They're now rewriting with a third-party vendor's product. I'm sure that it's not optimal compared with the old custom one, but the way they look at it is that they're in the business of selling an application, not a grid.

    And that's what I find troubling about irascian's rant. It's easy to lash out at and blame your vendor for slippages and problems (or to lash out at the unnamed person or persons who decided on this choice of vendor), but I certainly believe that in the end you will save money by using one. Just spend some quality time investigating which one to use. Use real code written by a real programmer, and use real data. A week spent on this investigation will pay dividends.

    As we like to say, Download, Compare, and Decide.

  • Elementary, my dear Watson

    We are constantly striving to improve the quality of our software, be it the controls, the frameworks, or the IDE tools. Obviously we do so by writing test cases and continually running them, and the support team are instrumental in identifying bugs that our customers run across. The issue tends to be that customers use our controls in ways we never thought of; tsk, tsk, them wacky customers, eh?

    Sometimes though we even manage to identify and fix bugs after release but before our customers report them. ESP? No, just Watson.

    I'm sure you've run into the dialog: you're working away when suddenly, bam, the program you're using crashes and the operating system pops up this dialog telling you that there's been some problem, and would you like to send the diagnostic information about this problem to Microsoft? This dialog is known as Watson, after the sidekick to Sherlock Holmes (and in fact the narrator of the stories).

    I wonder, what do you do? Do you automatically click the Don't Send button? When I was at Microsoft, I got into the habit of clicking Send instead. Why? Because the dialog is not sending Microsoft intimate details about your setup as many would lead you to believe, but instead is sending details of the crash that just occurred.

    What happens to this data? It's collected and analyzed. Watson was originally developed by the Office team for Microsoft Office and proved invaluable at enabling the team to identify bugs seen out "in the wild" and then to fix them. Watson was so successful at finding intractable and hard-to-find bugs that the subsystem was appropriated by the Windows team and is now part of the operating system.

    The thing is, Watson does not confine itself to crashes that occur within Windows or Microsoft products; it's a collector of all crashes, including those from programs that you or I would write. And Microsoft makes this data from all the crashes over all the world available to third parties as well.

    A third-party software company, like, say, Developer Express, can register with the Watson team to receive the crash information that pertains to its EXEs and DLLs.

    This happened just recently in fact. We released version 2007.2 of DXperience and within a day, we had seven or eight incidents of the same event. The Watson data we get indicates the DLL inside which the crash occurred (obviously one of ours), the stack trace, and the address at which the crash occurred. It was a matter of a couple of hours for us to set up the debug information, and then work out at which line the crash occurred. Using that and the stack trace information, we were able to work out that the crash occurred in a finalizer: we were assuming that an object was non-null when in fact it was null due to finalization. The fix was easy to do and appeared in the next version.
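    For the curious, the class of bug described is easy to sketch. This is a hypothetical illustration, not our actual code: the point is simply that the garbage collector runs finalizers in no guaranteed order, so a finalizer must not assume that objects it references are still valid or non-null.

    ```csharp
    // Hypothetical sketch of the class of bug described above.
    class Logger
    {
        public void Log(string message)
        {
            // imagine this writes to some unmanaged resource
            Console.WriteLine(message);
        }
    }

    class Worker
    {
        private Logger logger;   // may be null, or its finalizer may
                                 // already have run during collection

        public Worker(Logger logger)
        {
            this.logger = logger;
        }

        ~Worker()
        {
            // BUG: assuming the reference is non-null here:
            //     logger.Log("Worker finalized");
            // FIX: guard the reference before using it.
            if (logger != null)
            {
                logger.Log("Worker finalized");
            }
        }
    }
    ```

    Without a crash report pointing at the finalizer, a bug like this is nearly impossible to reproduce on demand, because it depends entirely on when and in what order the GC decides to finalize things.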

    I cannot stress this enough, though: because seven people did, in fact, click Send, we were notified of the crash and were able to fix it. Without the Watson dialog and crash data collection, we would not have seen this problem as quickly, and, because it was in a finalizer, it would have been extremely difficult to reproduce. We can also provide some help text for the crash, so that someone who clicks Send can be notified that, for example, there is a release available that will fix the problem.

    The Watson data is always worth sending to Microsoft. By doing so, you are ensuring that companies like ours will improve the quality of their software. So please click Send.

  • The Offer of the Year

    Would you like to see the things we're working on before anyone else?

    Well, would you? Duh. Of course you would like to see the things we're working on. And before the general customer base would be sweet indeed.

    If you happen to be in Las Vegas from 5 to 8 November and happen to be an attendee at DevConnections 2007 (especially for the Visual Studio and .NET Connections part), you can pop along to our booth and we'll be only too glad to demonstrate the new things that we'll have been working on throughout the autumn. In fact we'll be so enthusiastic about showing it all off, we'll be dragging you there to watch.

    And what will we be showing? Mmm, this is difficult: we don't pre-announce stuff and yet I have to give you a hint. Hmm, let's see. Here are some keywords, acronyms, and phrases.

    • WPF controls
    • XAF
    • Revolutionary CodeRush stuff
    • New ASP.NET functionality
    • Charts with WPF

    There you have it. There probably will be some other stuff as well, but that'll be enough to be getting on with, and Mark's Black Ops Security are breathing down my neck. It's all eyes-widening, jaw-dropping, hair-standing-on-end stuff. And, no, sorry, I can't say any more about what it all means.

    So if you really want to see this new stuff before the rest of the world, head on down to Las Vegas and Mandalay Bay and DevConnections at the beginning of November and go to booth #131. It's the biggest booth we've ever had at a conference, all because we have so much to show off. There'll be giveaways (mmm, swag), there'll be Black Ops Security, there'll be guys from the dev teams, and there'll be Ray, Mark, Oliver, Mehul, and me. We'll be ecstatic, exuberant, and exhausted, but, hey, it's Vegas: no one sleeps or has a need to.

    This will be a mega Developer Express Extravaganza, and I'm not known for hyperbole. All the in-crowd will be there!

  • Psst! Wanna see some tutorials for our ASP.NET controls?

    To follow on from (and to complement) my post about us providing support for basic ASP.NET programming, it seems that there's a movement afoot in the hallowed halls of Developer Express Towers to provide simple online tutorials.

    Yep, our ASP.NET team have been writing some quick example tutorials on using our controls (mostly geared to our ASPxGridView and associated editors) and posting them online. And then only telling me about it this morning after reading my previous post. My response was, well, duh, guys, don't hide this and hope people find it, let's get on the roof and shout about it!

    Basically, by analyzing various support questions that have come in over the past month or so, they've identified various scenarios that customers are having problems with, and have created little sample programs that describe the problem, show the solution, and include the code needed to implement it. You can reach the tutorials here.

    I warn you: the pages are almost Japanese in their simplicity. Our marketing guys haven't seen this yet and so haven't had a chance to throw in a bunch of images and branding and fancy schmancy stuff and so on. My advice is don't tell them: these tutorials are all about each individual problem and its solution, we don't need no steenkin' frou-frou. Enjoy!

  • Getting ASP.NET programming lessons on the quiet

    A conversation I had recently turned out to be funny; not funny ha-ha per se, more funny in an amazed sort of way.

    I was chatting to Plato, a member of our support team. He's taken on a couple more supervisory responsibilities since Max has been tanning himself on vacation at the seaside, and he showed me a reply that had been written to a customer. (I'm paraphrasing, by the way.)

    "You must set the DataSource property of the ASPxGridView with every call to Page_Load, not just that first call with IsPostBack == false."

    This stunned me: it implied that the customer didn't know the page cycle for ASP.NET applications, that they didn't know that everything has to get reconstructed (and destroyed) for every postback, that ASP.NET programming is just not WinForms programming, and that state is not maintained.
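    To make the advice in that reply concrete, here's a minimal sketch of the pattern. The grid name and the GetOrders() helper are hypothetical stand-ins, but the shape is the standard one for a databound control that is reconstructed on every request:

    ```csharp
    // Code-behind sketch: bind the grid on every request, not just the
    // first one. GetOrders() is a stand-in for whatever fetches the data.
    protected void Page_Load(object sender, EventArgs e)
    {
        // WRONG for a control rebuilt on each request: binding only on
        // the first GET leaves the grid with no data on postbacks.
        //
        // if (!IsPostBack)
        // {
        //     ASPxGridView1.DataSource = GetOrders();
        //     ASPxGridView1.DataBind();
        // }

        // RIGHT: the page (and every control on it) is constructed from
        // scratch on each request, so assign the data source every time.
        ASPxGridView1.DataSource = GetOrders();
        ASPxGridView1.DataBind();
    }
    ```

    The WinForms habit of "set it once and it stays set" is exactly what trips people up here: in ASP.NET there is no "once".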

    I was suddenly hit with a thought, and so I asked Plato: how many times does helping people with our ASP.NET controls turn into helping them with standard ASP.NET concepts instead? He replied: every day. He gave the example of one customer who, despite several example programs and variations thereof, still doesn't seem to get the ASP.NET page cycle or the difference between server-side and client-side processing, but who, because they're using our ASP.NET controls, expects us to keep on helping.

    This situation fills me with amazement and a bit of shock. We have a fixed set of resources for support: the guys in the support team. When one or more of them goes away on vacation or is sick, the others have to take up the slack. And here they are teaching some customers ASP.NET programming by giving answers to questions and by providing example programs. All of this takes time, time away from other support issues and customers, those that perhaps require more advanced answers with longer research times.

    And I'm shocked too: are people really approaching web applications as funny kinds of WinForms programs? Or are they being told by their bosses to "make this program work in a browser" and are thrown in the deep end, and have nowhere else to go but their control vendor's support team?

    I remember when I started ASP.NET programming. I read through Fritz Onion's Essential ASP.NET cover to cover and I still got the cycle wrong in my first few attempts at writing a web app. And I was using Developer Express' ASPxGrid as well. But I persisted and worked it out and finally the page cycle became my friend. Do programmers these days not bother? Or are they hacking away in the hope that something works? And somehow we're getting caught up in this loop of hack, hack, hack?

    I don't know the answers to these questions. I equally also don't know what to do about these under-the-radar ASP.NET tutorials that some customers are getting. Where is the line between helping with our controls and teaching about web programming? Should I even bother getting worried about this? A happy customer is a happy customer after all, even if we're making them happy by helping with some standard beginner stuff.

    I've asked Plato to monitor the situation but I certainly don't want to get all bureaucratic and forbid it. Sigh, just color me amazed.

  • Deconstructing comments

    I've been dithering and thinking about the whole subject of comments to blog posts for a while.

    At first I viewed my blog here as a forum containing a special kind of thread, one in which I, as a kind of lesser god, am the only person able to start a new one. But it's not really a thread as I understand them from newsgroups and forums: there's no hierarchical or tree-like view of the comments (no way to show that this comment is a reply to that comment, while this other comment is a reply to the original post); all comments are merely replies to the original post, and it can be hard to reconstruct the conversation that the post engendered. This annoys the heck out of me, and I wonder if any blog engine actually does do threaded commenting.

    Another issue I battle with is whether I should insist on commenters being "known" to the system. At present, commenters can be anonymous, at least in the sense that they can choose a random nom de blog. Don't get me wrong, I'm perfectly willing to be berated by a customer with a legitimate beef and so I like reading all the comments. But what gets me are the anonymous comments that say something along the lines of "Your product sucks, it's too slow/buggy/badly written/idiotically designed, my grandmother wrote a better one during her Thursday game of bingo." There's no way I can help a commenter like that — there's no way to trace him and to let loose the dogs of support to solve the issue — and of course the commenter knows full well I can't and that I know they know I can't. So the only conclusion I can make is that the commenter is a troll. And so I act as censor and delete those comments, because they don't advance the conversation, such as it is, but I still don't like doing so.

    Which reminds me: I'm not a fan of the "great post!" school of commenting. I'd much rather your comments raised other points, discussed issues, analyzed the content, provided a supporting link, made a joke, anything other than be a "me too" comment. Don't get me wrong, I like praise as much as the next man, preen, preen, but I'd much prefer some good discussion. W. Somerset Maugham once said "People ask for criticism, but they only want praise", but I'm paid to receive the former. I haven't deleted any "great post!" comments yet, but beware...

    And then there's the question of whether I should reply to comments addressed to me. Or should I write a new blog post altogether to answer the point made by a commenter? To see any replies I may have made, you have to navigate to the page of the original post and then scroll down and scan looking for my image. (I suppose in a way this is another facet of the "comment navigation" issue like the one above.) I tend to use my inner Editor to determine if the answer I shall give is important enough for another post, for an update to the post I'm replying to, or for a simple comment. So far I think I'm getting it right, but that's only because no one has complained to me that I'm getting it wrong (and now I await a slew of comments about how I have done so).

    Another issue to which I have no clear answer: should I close down old blog posts for commenting, freezing the content and comments of the post? Or not? For example, people still comment on my blog posts on right-to-left language support. Should I freeze those posts so that they can't? To be honest, the fact that people still do comment means that the topic is forever in my mind, and so it's more likely that I'll build it into a roadmap.

    So what do you think? Tell you what, leave me a comment here to let me know...

  • OLAP, shmOLAP: it slices, dices, and cubes

    In one of the screencasts I narrated recently, I talked about how using the data crunching capabilities of a server versus those of the client can paradoxically make the user experience of using a grid control with large datasets more pleasant.

    I say "paradoxically" because it would seem that having the data on the client locally would result in a better experience, rather than having to scurry back and forth to the server fetching more and more data. In reality, the big time sink and problem is the initial download of all the data (and then having to repeatedly do the same as you analyze it). Replacing that huge engulfing of data with piecemeal sipping turns out to be more pleasant from the user's perspective: the delays the user experiences during data analysis are smeared out across all interactions and not clumped into some.

    There's another place in our component suites where we make use of server processing to alleviate the clumpiness of this downloading of data, and that's with our XtraPivotGrid.

    The XtraPivotGrid is a specialized grid that helps the user organize and analyze statistical, business, and financial data through its ability to summarize and present large amounts of information in a cross-tabular form. For example, the user can analyze revenue data during set periods (months, quarters, etc) by customer or product group, and so on.

    By default the way the pivot grid works is to do the initial big swallowing of all the data. And because, in general, the pivot grid is going to be used on a lot of data (it's after all going to be used to analyze, summarize, organize data, so it needs it all), that's going to take some time, even if your database is local. A huge amount of memory is going to be allocated on the client to hold and process this data.

    So a strategy for reducing these performance and memory issues at the client is to get the data server to do some of the work. After all, a good server would be able to share the results of queries amongst several clients. For an analysis tool like the pivot grid, the type of server that would be best at this work would be an OLAP server (Online Analytical Processing server).

    An OLAP server serves up what are known as data cubes. Instead of a simple relational model as in the standard relational databases, an OLAP server uses a mix of hierarchical and navigational models in order to store and serve data in a multidimensional matrix. Terms used with OLAP servers are 'dimensions', which are the rows and columns of the matrix, and 'measures', which are the values or fields in the matrix. OLAP servers are optimized for financial type data and the principal dimensions used are time, locations, people, products and so on.

    Back to the pivot grid. Ignoring the adornments of the grid itself — for example the borders, the column and row headers — in essence the pivot grid displays a matrix of similar data, exactly the output from an OLAP server. The XtraPivotGrid has a mode wherein it will construct and execute queries on a cube served up by the OLAP server. In this mode, it is the OLAP server that will aggregate data, sort it, group it, calculate summaries and so on, and then return the results to the pivot grid.

    To use a cube on an OLAP server, there are a couple of steps you need to take:

    1. Specify the connection settings for the server as a connection string using the PivotGridControl.OLAPConnectionString property. This connection string defines the names of the server, the data catalog, and the cube to use. At design time, you can build the connection string via the Connection String Editor available with the OLAPConnectionString property in the Properties grid.

    2. Create fields in the XtraPivotGrid control that represent the specific measures and dimensions of the cube you wish to use. At design time, it's easy: after the connection string has been specified, you can open the Fields page of the pivot grid's designer and the Field List pane will contain all the available measures and dimensions of the cube. You can then add a specific measure or dimension to the XtraPivotGrid control by dragging it onto the Fields pane.

    Of course, should you need to, you can do all this in code as well. See the help file for details.
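    As a rough illustration, the two steps might look something like this in code. OLAPConnectionString is the property named above; the connection string values, the member names, and the field-creation members shown here are assumptions on my part, so treat this as a sketch and check the help file for the exact API in your version:

    ```csharp
    // Step 1: point the pivot grid at a cube on the OLAP server.
    // Server, catalog, and cube names here are made up for illustration.
    pivotGridControl1.OLAPConnectionString =
        "Provider=msolap;Data Source=myServer;" +
        "Initial Catalog=MyDataWarehouse;Cube Name=Sales";

    // Step 2: create fields for a dimension and a measure of the cube.
    PivotGridField productField = new PivotGridField();
    productField.FieldName = "[Product].[Category]";     // a dimension
    productField.Area = PivotArea.RowArea;
    pivotGridControl1.Fields.Add(productField);

    PivotGridField salesField = new PivotGridField();
    salesField.FieldName = "[Measures].[Sales Amount]";  // a measure
    salesField.Area = PivotArea.DataArea;
    pivotGridControl1.Fields.Add(salesField);
    ```

    The design-time route does exactly the same thing; the designer simply reads the cube's metadata for you so you can drag the measures and dimensions into place instead of typing their names.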

    After these properties have been set, the pivot grid will query and fetch the required data from the OLAP server. As discussed before, only the data needed for display in the grid is fetched, with the server doing the aggregation, sorting, and summarizing of the data.

    At present we support Microsoft SQL Server Analysis Services (SSAS), both the 2000 and 2005 versions. The client requires the Analysis Services OLE DB provider to be installed as well (the latest release can be downloaded from the Microsoft web site — search for "Feature Pack for SQL Server").

    So if you have a lot of financial or business data you want to crunch, and you want to enable your users to analyze this information — after all, raw data becomes actionable information through analysis — you should be thinking about an OLAP server, and then putting the XtraPivotGrid in the hands of your users.

  • C++Builder 2007 with Update 2 now fully supported with Build 27

    To say this has been an episodic saga to rival Dickens seems to be an understatement. First we had an issue with the initial release of C++Builder 2007: in essence the compiler would produce invalid structures that did not match up to our compiled Delphi code, resulting in a nasty crash at run-time. Then CodeGear released Update 2 for C++Builder 2007 (Update 1 was just for Delphi 2007) and we understood that "our" bug was fixed in it, but before we had downloaded and tried the patched compiler, Nick Hodges, the Delphi Product Manager, indicated in a post on b.p.d.non-tech that the bug hadn't been fixed in time. Then yesterday after a flurry of posts and emails, we learned that in fact the bug had been fixed, but the bug report hadn't been flagged as fixed in time for the release.

    After this mish-mash of what happened versus what didn't happen and trying to sort out the truth, we did what we should have done originally and compiled and ran our tests overnight. Like, duh. And the result is?

    Build 27 of our VCL Subscription Suite fully supports C++Builder 2007 with Update 2.

    There, I've confirmed it. All customers who have been holding back on using C++Builder 2007 can now download and apply Update 2 and know that our Build 27 will work just fine. As usual, if you have any troubles or issues, our support team are the best people to help you, either via email or using our Support Center.

  • NDepend's UI: DXperience to the rescue

    Following on from my post on automated refactoring tools, I was going to talk a bit about how modern IDEs should be providing more analytical type services for your code, rather than the usual "here's an editor, go for it" paradigm with the only analysis being the compile.

    One of the analytic tools for which I have a great deal of affection is NDepend. Everyone I talk to who has also used it waxes poetic about it too (and for us programmers, waxing poetic is not part of the job description). NDepend analyzes your code and produces information about its complexity, quality, and architecture, not as static graphs and reports but through an interactive tool that lets you zoom into and explore areas of complexity, and that gives you a much more in-depth understanding of the architecture of your code. Just using it makes you brainier.

    In this case, a thousand words won't even begin to illustrate what NDepend can do, so I'd point you to their website.

    And then, my subsequent blog post still unwritten, Patrick Smacchia wrote a very interesting article on what they've been doing for the UI in the next version of NDepend, including a great discussion of traditional menus and toolbars, and how they've decided to use DXperience Enterprise to revamp the UI. That decision, incidentally, has given them the ability to support both menus and a ribbon in the same product.

    Patrick provides some great screenshots of the new NDepend UI in his post, such as one with the Office 2007 blue skin.

    Patrick's discussion is very apt and insightful on many fronts (and I'm not just saying that because he's using DXperience, fawn, fawn) and I for one am looking forward to the next version of NDepend. I'm particularly glad that we're supporting this endeavor by providing complimentary copies of our tools to MVPs, of which Patrick is one.

  • Real programmers don't use refactoring tools

    You know, it's funny. I can well remember the day I saw my first syntax-highlighted code in an IDE. It looked a little weird compared to the flat Notepad look of yester-editors, but, from that point on, non-syntax-highlighted code just became harder to read. It was as if we had been shown the z-axis from having lived in Flatland for so long.

    And then came code completion lists. You'd be typing along and if you suddenly became stuck trying to remember a member name, you could wait half a second or so and a list would pop up showing the identifiers you could type in at that particular spot. And again, you were shown another dimension to your code, and there was a split from the past.

    And now I see comments like this: "I have to laugh when I see what these tools do. Any coder worth his salt will be instinctively writing code as good as if not better than this tool produces." In case you hadn't guessed, the writer was talking about refactoring tools. In particular, the fake Yorkshireman macho-ness ("When I were a lad, we were lucky to 'ave _fastcall") that infuses the prose might have indicated to you that he was a C++ programmer, and you'd have been right.

    This just seems nutty to me, and not because I work for a company that puts out a refactoring tool. Yes, we all know what refactorings are; the great geek god Martin Fowler wrote a best-selling eponymous book on them. And we all know the names of the most popular ones: Extract Method, Inline Temp, Extract Interface, Move Method, Rename Identifier, and so on. Their names have become almost a mantra, a paean to the new agilism. But, surely -- he says in a baffled voice -- having an automated tool is better than doing them by hand? Aren't refactoring tools just another small step on the way to visualizing our code better and writing it better and faster? What am I missing that this comment writer implicitly understands, or, rather, vice versa?

    The writer of this comment also seems to imply that Code, Once Written, Is Never Changed. It must have been the eleventh commandment, but God ran out of ink when printing it on the stone tablet. But we all know from our own experience, without recourse to statistics, that most code is changed. No matter how good we are, how much we're worth our salt, no matter what language we use, we introduce bugs when we write code. It may be we change it immediately ("Duh, that should be a less than, not a greater than") or it may be later, after testing. And, sure, when we modify our code, or we modify someone else's (because we are lucky indeed if the only code we ever work on is our own), we make use of refactorings as we make fixes.

    Think of it like this. Since code will get changed (there's a bug; the requirements change; you prefer to code in an organic fashion as if you were playing with Play-Doh), why not take advantage of a tool that helps you do it? Certainly you can do it all by hand and have members of the opposite sex swoon as they watch you manipulate the code with the keyboard, but to be honest who gives a damn.

    I think of the way I write code. I'm an experimentalist developer: someone who develops with TDD (Test-Driven Development). Despite years of programming, I always approach a programming task with a blank sheet of paper. Since I do TDD, my blank sheet of paper always starts off with something I'd like to be true (my first test). Then I write the code that satisfies the test, then I write something else I'd like to be true (another test), and I write more code that makes it so. And so on, and so forth.
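    The rhythm, in a minimal Python sketch (the post is about .NET, and slugify here is a function I've invented purely for illustration): write down something I'd like to be true, then write just enough code to make it so.

```python
import re

# Step 1: something I'd like to be true -- the test comes first,
# before the function it exercises even exists.
def test_slug():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: just enough code to satisfy the test.
def slugify(title):
    # Lower-case the title, keep only runs of letters and digits,
    # and join the resulting words with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slug()  # passes; now on to the next thing I'd like to be true
```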

    During this continual piecemeal advance towards software of value, my code is completely malleable. Just because I wrote it one way for one test doesn't mean it's going to stay that way forever. I refactor ad nauseam and utterly rely on my refactoring tool to do that.

    The refactorings I use most of all are these:

    1. Rename Identifier. This one is the most important refactoring in my ever so humble opinion (it's my blog so what I say goes!). When I'm coding, I never seem to get the names of things right the first time. Indeed, I've given up trying to, since it just interrupts the flow of getting an algorithm down. And yet the names of things are perhaps the most important thing about our code. We can't do anything about the syntax of our language, but, by gum, we can about our identifiers. Name them badly and we obfuscate our code quickly. We've all heard the (apocryphal?) stories about programmers coming across some code where the previous programmer, in an effort to be funny, used names of flowers or mountains for the local variables.

    Compilers don't give a fig about what you name your identifiers. It's your human audience that does, and you can guarantee you'll have one: the hapless maintenance programmer who follows in your footsteps, who, sometimes, is you. You are writing for your human reader first, and your compiler second. Beware if you think it's the other way round, because you'll appear on The Daily WTF. I've learned that the real name for an identifier only becomes obvious once you use it. Sometimes, if you find it hard to name an identifier, it indicates that your code is not expressing the problem domain properly. So, being able to rename an identifier, without worrying about search-and-replace destruction, is a boon beyond compare.
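    To make the point concrete, here's a hypothetical Python example of my own (the mountain names are in the spirit of those apocryphal stories): the refactoring renames every reference at once, with no search-and-replace collateral damage.

```python
# Before: identifiers named after mountains tell the reader nothing.
def everest(kilimanjaro, denali):
    return kilimanjaro * (1 + denali)

# After three Rename Identifier refactorings, the same code explains itself:
def grow_by_rate(amount, rate):
    return amount * (1 + rate)

# Behavior is untouched; only the names changed.
assert everest(100, 0.5) == grow_by_rate(100, 0.5)
```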

    2. Extract Method. I'm an advocate of small, low-cyclomatic-complexity methods that have a single responsibility. Ditto for classes, by the way. But constructing a set of interacting methods as a complete thought exercise and getting them all down in source code is a talent that has always eluded me. Call me a stream-of-consciousness programmer by all means, but that generally means slamming it all down and then sorting out the mess afterwards. This is where Extract Method comes into its own. Hey, that code over there is essentially the same as this over here; let's extract it into a common method that can be called (reused!) from both locations.

    And an automated Extract Method is great: it works out the parameters, constructs the right method signature, and inserts calling code at the location where the code was extracted, in less time than it takes to think about the possibility. Toss in a few Rename Identifier refactorings and your code becomes more readable and simpler to understand.

    Mind you, I can't wait for a tool that will find duplicate code for me. Oooh, at that point the combination may topple Rename Identifier from the top of the list.
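    A sketch of the before and after in Python (the function names are invented for this example): the same averaging logic slammed down twice, then extracted into one method with the parameter worked out for us.

```python
# Before: the same mean-and-round logic appears in two reporting functions.
def report_scores(scores):
    avg = round(sum(scores) / len(scores), 1)
    return "scores: " + str(avg)

def report_times(times):
    avg = round(sum(times) / len(times), 1)
    return "times: " + str(avg)

# After Extract Method: one common, reusable method...
def rounded_mean(values):
    return round(sum(values) / len(values), 1)

# ...called from both of the original locations.
def report_scores_v2(scores):
    return "scores: " + str(rounded_mean(scores))

def report_times_v2(times):
    return "times: " + str(rounded_mean(times))
```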

    3. Create Method Stub. This isn't really a refactoring per se (a true refactoring preserves behavior; this one turns temporarily non-compilable code into compilable code), but it's wonderfully handy for TDD enthusiasts. You're writing a test that defines how you'd like something to read and to work, and you find you've used a method name that doesn't exist yet. Hit the refactoring key, and boom, the tool adds a stub method to the right class. Invaluable.
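    A rough Python sketch of the effect (the Cart class and its members are invented for illustration; the real tool works on your .NET code):

```python
# The test is written first and calls methods that don't exist yet:
#
#     def test_total():
#         cart = Cart()
#         cart.add("apple", 3)
#         assert cart.total() == 3
#
# Create Method Stub then drops placeholders onto the right class,
# roughly like this, leaving the bodies for me to fill in:
class Cart:
    def add(self, name, price):
        raise NotImplementedError("stub generated from the failing test")

    def total(self):
        raise NotImplementedError("stub generated from the failing test")
```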

    4. Decompose Parameter. This one has been rapidly moving up the list ever since I started playing with an early beta version of it in Refactor! Pro. The setup for understanding why this is so good is a little long-winded, but the payoff has all the more impact for it.

    When we use a framework like the .NET Framework we find ourselves writing a lot of event handler code so that we can get called back from the framework when something happens. Oftentimes, we'll get a nice handy object passed in as part of the call containing property values we can use in our handler code. The unfortunate thing is these event handlers tend not to be reusable, because the caller has to construct a new handy object to pass in some values. All in all a pain. Especially so when, in general, we are given a particularly rich or heavy object and we only use a couple of its properties. Far better to extract out the specific code into another method, pass in the property values (and not the rich, heavy object), and reuse this new method, including calling it from our event handler.

    Enter Decompose Parameter: it works out what's actually being used inside the method's implementation and constructs a method that has those properties as parameters. It then patches up the calling code and you're done. So, instead of a non-reusable event handler that takes a PaintEventArgs but only uses its Graphics property, and hence requires a PaintEventArgs object to be set up before calling, you get a method that expects a Graphics object instead and can be called from other painting methods, for example.
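    Here's the shape of that transformation in a Python sketch of my own (PaintEvent stands in for .NET's PaintEventArgs, and the "graphics" object is just a list so the example runs):

```python
class PaintEvent:
    # A rich event object; the handler below only ever touches .graphics.
    def __init__(self, graphics, clip_rect, timestamp):
        self.graphics = graphics
        self.clip_rect = clip_rect
        self.timestamp = timestamp

# Before: reusing the drawing code means building a whole PaintEvent first.
def on_paint(event):
    event.graphics.append("border")

# After Decompose Parameter: the tool sees that only .graphics is used,
# extracts a method taking just that value...
def draw_border(graphics):
    graphics.append("border")

# ...and patches the original handler to call it.
def on_paint_v2(event):
    draw_border(event.graphics)
```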

    5. Move Type to File. One of Refactor! Pro's invaluable refactorings, and every time we demo it, people in the audience "get it" immediately. The way we write code these days is to have one class per source file, the latter named after the former. It makes it easy to find the code for classes, and it's great when you have multiple programmers working on the same team. But writing code that way is a pain in the neck. You want a new class, but you have to stop what you're doing in order to add a new file to the project, name it accordingly, switch to it, add the new class, switch back to where you were, and so on. It's a complete drag on productivity. For the past couple of years, I've just continued writing code to build a class model in the same one source file. When I'm ready to extract the classes, I invoke this refactoring on one class after the other. Boom, boom, boom. Much easier.

    6. Extract Interface. Another great refactoring. You've written a class believing it to be the only one you'll need, and, wham, you discover another exemplar that's slightly different. I much prefer interface inheritance to implementation inheritance, so I'd be doing an Extract Interface so quickly it would make your head spin. Mock me, baby.
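    In Python terms (an invented example; in C# the tool produces a real interface declaration), the extracted interface contains just the members the rest of the code relies on, so the slightly different exemplar -- or a mock in a test -- can implement it too:

```python
from abc import ABC, abstractmethod

# The extracted interface: only the members callers actually use.
class Storage(ABC):
    @abstractmethod
    def save(self, key, value): ...

    @abstractmethod
    def load(self, key): ...

# The original class now implements the interface...
class MemoryStorage(Storage):
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)

# ...and callers can be written against Storage alone, so any
# implementation (including a mock) will do.
def roundtrip(storage, key, value):
    storage.save(key, value)
    return storage.load(key)
```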

    7. Inline Temp. A quick, simple refactoring, but nevertheless very useful. It targets code that declares a temporary variable, sets it, and then uses the variable in an expression: the refactoring gets rid of the temporary and moves its initialization into the expression. Sometimes this makes the code a little clearer. Sometimes it won't, and you'll need the opposite refactoring: Introduce Temporary Variable. I tend to use both as I polish code.
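    For instance (a trivial Python sketch with invented names):

```python
# Before: a temporary that's set once and used once.
def price_with_tax(price):
    tax = price * 0.08
    return price + tax

# After Inline Temp: the initializer moves into the expression.
def price_with_tax_v2(price):
    return price + price * 0.08

# Same behavior, one less name to track while reading.
assert price_with_tax(100) == price_with_tax_v2(100)
```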

    8. Use String.Format. Another refactoring that audiences at demos instinctively "get" and that I use all the time. Face it, a String.Format call is great for localization efforts, but it's a real pain to write. It's much easier to write an expression that's a bunch of string concatenations, but unfortunately that's hopeless from a localization viewpoint. So, do the easy thing and let the refactoring tool do the tedious work. I love writing a string concatenation expression, getting it exactly the way I want it, and then converting it to a String.Format call as if I'd done it that way all along.
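    Python's str.format is the nearest analogue to .NET's String.Format, so a sketch of the transformation looks like this (variable names invented):

```python
user, count = "Alice", 3

# Easy to write, hopeless to localize: the sentence is chopped into
# fragments that a translator can't reorder.
message = "Hello " + user + ", you have " + str(count) + " new messages"

# After Use String.Format: one template string, a single translatable
# resource with placeholders the translator can move around.
message_v2 = "Hello {0}, you have {1} new messages".format(user, count)

assert message == message_v2
```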

    I'll stop there. My intent here is not to list all the refactorings I use, only those I use most of all. In doing so, I wanted to show why I think a refactoring tool deserves a place in your coding environment.

    Sure, the refactorings I've listed can all be done by hand. Indeed if I were stern and hard-core I'd say do them by hand a couple of times so that you get a real feeling for what each refactoring means and how it works. And I'd laugh my evil taking-over-the-world laugh while you did so. Afterwards you'd just use the automated tool and be thankful that you can save time.

    Because in the end, we're not all great programmers who write correct code on the first pass. We're all programmers writing code to enhance/fix/extend existing code (even green-field projects turn into a marathon of altering code). We're all maintenance programmers to a greater or lesser extent. We're all programmers who have to live with other people's code, and who recognize that our code could, after a few months, be someone else's. And because we're all that and more, we deserve the best coding tools we can get. And that includes automated refactoring tools.

