ctodx

Discussions, news and rants from the CTO of DevExpress, Julian M Bucknall

May 2006 - Posts

  • Saving time and expertise

    Last week I saw a couple of mentions about libraries that serve roughly the same technology, one coming from a mega-corp, the other from a single developer, and it got me thinking about other similar technologies.

    Both of them purport to enable you to write code in your favorite high-level language (HLL) which, when compiled, produces not machine code or IL or p-code (showing my age here!) but Javascript for AJAX applications. To which I can only say, wow.

    The first library is the Google Web Toolkit that enables you, to quote, "write your front end in the Java programming language, and the GWT compiler converts your Java classes to browser-compliant JavaScript and HTML." The toolkit takes care, through its own library, of browser quirks so you don't have to. You even have, and this is amazing, debugging abilities using the original Java code.

    The second library is much less grand, but no smaller in scope. It's Script# and enables you to do roughly the same in a C#/.NET world.

    Both of these toolkits enable you to concentrate on the language you know best (Java or C#) and ignore the languages you're not that familiar with (Javascript and HTML). Indeed that's what a normal compiler enables you to do as well: concentrate on C# or Java or Delphi and ignore the languages you don't know (IL and assembly). You trust the compiler to do the right thing so you don't have to worry about it.

    This idiom pervades what we do as developers. We abstract the real world of hardware and the operating system behind some simple yet flexible and expressive language and framework and we gain huge benefits in productivity and functionality.

    It's not the complete and utter panacea though. There are still some times when you need to get under the hood. But it doesn't matter: in the vast majority of cases, the hood can stay resolutely closed.

    So, for example, using an ORM like our XPO (eXpress Persistent Objects) framework makes sense in the general case. You would rather write C# code that accesses an ORM, which then forms the exact SQL statement needed on the fly, than write a generic parameterized stored procedure that can handle the majority of cases. You need the ability to switch database engines more than you need the highly-optimized T-SQL stored procedures that give you the last percentage of speed gain.

    Look also at all the discussions on DSLs (Domain-Specific Languages). All these DSLs are an abstraction of some problem domain in some textual form (sometimes XML) and some "compiler" or "interpreter" that does the right thing given some specification or "program" in the DSL. The developer writes something in a high-high-level language and the compiler converts it to a normal HLL or interprets it directly.

    Ant works the same way, but applied to building applications. You write your build script as a series of tasks that need to be done. The tasks are simply Java classes that you (or, more generally, someone else) have written.

    The thing is, we all have way more software to design and develop than we have time or resources to do it. Any alleviation of this problem is good for us all. I'd suggest trying out these toolkits and libraries in your next project to see if they help or hinder.

  • Talking about inheritance

    I was having an IM chat with an old friend yesterday afternoon (let's call him A). He's starting to pull his hair out (he has more than me, so I'm not worried yet) with regard to the obtuseness of his fellow developers. They're somewhat prone, shall we say, to producing unwieldy class models that can't easily be extended or maintained.

    We chatted a bit about implementation inheritance versus interface inheritance using composition. Implementation inheritance is what programmers normally think about when they hear "inheritance". They think of a class hierarchy, they consider which methods to make virtual (or, if you're using Java, which methods if any to make final), they worry about private versus protected. Interface inheritance is when you write a class that implements one or more interfaces, usually through some kind of composition technique.
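    To make the distinction concrete, here's a minimal C# sketch (the IStore and CachedStore names are invented for this post): instead of subclassing a storage class and overriding its virtual methods, the caching class implements the same interface and delegates to a wrapped instance.

```csharp
using System.Collections.Generic;

// Interface inheritance with composition: CachedStore implements IStore
// by delegating to an inner IStore rather than inheriting from one.
public interface IStore
{
    string Load(string key);
}

public class FileStore : IStore
{
    public string Load(string key)
    {
        return "contents of " + key; // stands in for real file access
    }
}

public class CachedStore : IStore
{
    private readonly IStore inner;
    private readonly Dictionary<string, string> cache =
        new Dictionary<string, string>();

    public CachedStore(IStore inner)
    {
        this.inner = inner;
    }

    public string Load(string key)
    {
        string value;
        if (!cache.TryGetValue(key, out value))
        {
            value = inner.Load(key);  // delegate to the wrapped instance
            cache[key] = value;
        }
        return value;
    }
}
```

    CachedStore never has to wonder when, or whether, to call an ancestor's method: the wrapped IStore is a true black box, reachable only through its interface, and any IStore implementation can be wrapped without reading its code.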

    J: The big problem with implementation inheritance is that you are tightly coupling the descendant to the ancestor. The ancestor is not this black box, it's much more transparent than that.

    A: Right, exactly.

    J: As soon as you have a virtual method you impose this awful problem on the writer of the override. Do I call the ancestor's method? If so, when can I call it? At the beginning of my overridden method? At the end? Anywhere I like? Can I set up some data, call some other ancestor methods first? Or is there some calling dependency I don't understand here? The writer of the descendant is forced to read and understand the code for the ancestor.

    A: And it's not just the virtual methods, it's the ancestor's local data. Should the ancestor make this data available directly as a protected field? If so, what are the ramifications of this?

    J: Indeed.

    A: So I'm thinking about it, and I'm mostly coming to the conclusion that some form of composition results in more flexibility and easier-to-understand code when compared with traditional implementation inheritance. I'm thinking that maybe I should talk about this to the developers at one of the design meetings.

    J: The problem is, will they understand?

    A: The problem is, the ones who need to understand never come to the meetings :(

    J: Aye, true.
    J: You really need a good refactoring tool though.
    J: It's not impossible without, just more difficult and a bit tedious.

    A: Do you have any articles you recommend on composition over inheritance?

    J: Let me check some out. The Design Patterns book by the GOF is a paean to composition over inheritance.
    J: The first thing to drill into their heads is Single Responsibility Principle (SRP), but some really don't get even that.

    A: Good point, maybe I should be focusing on the principles and see what I get from that...

    J: SRP is the biggest because it's the hardest to get across.

    A: I think the problem stems from how programming is taught.

    J: Yep. Or, people come to OOP through procedural (say to C# from VB6) and program in the same way they always did.

    A: It's taught backwards -- you're told to think of the data, so all your classes become nothing more than structs, and then you think of what you do to that data, and you never quite get to how the system interacts, and it's those relationships that make the code work or not.
    A: We should start with discussing interactions and relationships of classes, not data packaging

    J: Yep. Teach behavior, not data. The bible of that is "Object Thinking" by David West. I worry it's above most people's heads.

    A: It will remain there until they are paired with people who can teach it, or find a job where it doesn't really matter :(

    J: I'd say, start with SRP. Take some class and show how it has more than two responsibilities and break it apart. Say, any class in [A's company's main application]. LOL

    A: "now see this class has 36 responsibilities, so we type "del class.cs" and start over"

    J: LOL
    J: Trouble is, SRP requires thinking about design and understanding choices and knowing why A is better than B.

    A: Judgement... hard to teach.

  • Advice from Smalltalk...

    This morning, one of the blogs to which I subscribe gave me this link to a PDF of a scanned book on programming with Smalltalk, Smalltalk With Style, from 1996. One of the authors seems to be one of the Pragmatic Programmers, Dave Thomas.

    Now, I'm sure, like me, most of my readers wouldn't be able to read Smalltalk, let alone write it. And, heck, a ten-year-old book? What content could that have that's applicable to programming today? Nevertheless, this book has a whole bunch of excellent guidelines on writing software, some of which are Smalltalk-only, but a surprising number of which are relevant to any language, including C#, VB, and Delphi.

    As I said the book is scanned, making it a little difficult to quote at length, but there's a nice summary at the end of the book of all 126 of the guidelines (page 105 of the book, page 116 of the PDF). Here's a little selection, though (the numbers refer to the guideline numbers in the book):

    1. Choose names that are descriptive. Well, d'oh, but too often (and I'm guilty of it too) we use brief names over descriptive ones.

    7. Avoid naming a class that implies anything about its implementation. A good one this: I've seen classes named something like FooHashtable. Hashtable is way too specific, whereas List or Collection might be better. The aim here is not to allow the user of the class to make decisions about using a class based on its name. Keep it a black box as much as possible: that way, you can change the implementation with impunity, say for efficiency or memory usage reasons, later on.

    26. Use a temporary variable within a scope for only one purpose. It can be confusing for the maintenance programmer if a variable is used for two different purposes. Also if you do find that you are using a local variable for two different things within a method, it can imply that the method is too large and should be split up into two or more.
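    A contrived C# illustration of guideline 26 (both methods are mine, not from the book): in the first version, n is a running total for half the method and an average for the rest, so the reader has to track which meaning is current at every line.

```csharp
public static class Stats
{
    // Confusing: one variable, two purposes.
    public static int ConfusingAverage(int[] values)
    {
        int n = 0;                // first purpose: running total
        foreach (int v in values)
            n += v;
        n = n / values.Length;    // second purpose: now it's an average
        return n;
    }

    // Clearer: one variable per purpose.
    public static int Average(int[] values)
    {
        int total = 0;
        foreach (int v in values)
            total += v;
        int average = total / values.Length;
        return average;
    }
}
```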

    30. When you need to abbreviate, use a consistent abbreviation strategy. I'm reminded, with this guideline, of the Command Language (CL) on the IBM System/38 and later the AS/400. The command names were tokenized into 3-letter words (or less), so that DSP meant "display", PGM meant "program", DAT for "data" and so on. Once you knew the standard abbreviations, you could read the command names as English and even be able to guess at command names if you were unfamiliar with a particular feature. Now, I'm not recommending that we go back to this way of naming commands, just that if you were to use abbreviations in names, be consistent and if necessary spell it out somewhere so that the whole team can understand that "Str" always means "string" and someone doesn't decide to use it for "setter" or something like that. (This guideline's concept continues for guidelines 29, 31 - 34.)

    36. Do not comment bad code, rewrite it. If you find yourself commenting some code because it's unclear (though do note guideline 47 below), consider the possibility that it's just badly written and that it should be refactored in some way.

    45. Avoid relying on a comment to explain what could be reflected in the code. This goes directly to the readability of the code. If you choose your identifiers well, code can read a little like English and you don't have to write an explanatory comment.

    47. Comment the steps of an algorithm, as needed. An algorithm, unless it's one of the really well-known ones, tends to be fairly obtuse. Document such an algorithm with a reference to a standard algorithms book, or by detailing the steps in the algorithm and why they're done in that order.

    76. Avoid altering the behavior of well-known messages. By "messages" here, Smalltalkers mean methods, and by "well-known" I'm going to assume "what the Framework does". So, as an egregiously bad example, don't write a method called Contains() that adds the key being searched for if it wasn't found: you would be confusing the reader of the code who would make the natural assumption that your Contains() would work in the same way as other classes in the Framework. Rubyists tend to refer to this as the Principle of Least Surprise.
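    As a C# sketch of that egregiously bad example (the KeyRegistry class is invented here), note how asking the same question twice gives two different answers, purely because the first call mutated the collection:

```csharp
using System.Collections.Generic;

public class KeyRegistry
{
    private readonly Dictionary<string, bool> keys =
        new Dictionary<string, bool>();

    // Bad: a reader expects Contains() to be a pure query, like
    // ICollection.Contains, but this one quietly adds missing keys.
    public bool Contains(string key)
    {
        if (keys.ContainsKey(key))
            return true;
        keys.Add(key, true);   // surprise! a query with a side effect
        return false;
    }
}
```

    That second-call behavior is exactly the sort of surprise the guideline forbids: the well-known name promises a query, and the implementation delivers a mutation.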

    79. Write small methods. If I'd been paid in the past by the line for code reviewing every stupidly long method, I'd be sipping margaritas on a white beach in the Caribbean, wondering if my private Lear jet was ready to take me to Whistler. I just don't understand the fixation some developers have for the long method. Do they see them as somehow more macho?

    87. Try to design subtypes instead of subclasses. This is essentially an argument for interface inheritance rather than implementation inheritance.

    104. Test classes as they are developed. Oh. Wow. Cool. Yes.

    As I said, that's just a small selection. I'd at least browse through the guideline summary, jumping back into the main body of the book whenever needed. Although I did say I couldn't read Smalltalk, I was partially joking: a lot of the code in the book is fairly approachable.

    (Hat tip: Ben Griffiths)

  • Thinking about UX and not UI

    So I spent a little while this morning reading through the Windows Vista User Experience Guidelines. You may have noticed that we, Developer Express, do a lot of user interface controls, and we have to understand where UI is going and how it's changing.

    In the old days I used to have a book on my shelf called "The Windows Interface Guidelines for Software Design: An Application Design Guide" (it's now somewhere in the bomb site that is my basement, sigh). It tried to teach you the ins and outs of designing a user interface. Well, it seems that nowadays UI is passé, and we should welcome the User Experience or UX.

    Actually, jesting apart, the Aero platform for Windows Vista is going to be extremely pretty and functional, despite the fact that for the best experience you should have something like a freon-cooled video adapter. The designers at Microsoft have attempted, much as Apple did with OS X, to lay down a set of rules for standardization, consistency, and quality, as well as provide support in the operating system for much if not all of it.

    So we'll be getting a new system font, Segoe UI, that is sans serif and optimized for ClearType. It's a modern, friendly and very legible font. The guidelines even recommend the default font size to be 9 points (ahhhh! lovely). There's also a monospaced font called Consolas that we developers will enjoy (and already do). It also has been optimized for ClearType for better on-screen legibility.

    (In fact, there are a total of seven new non-system fonts for Vista, all of which have been designed for continuous on-screen reading, although they also look good printed. They are: Calibri, Cambria, Candara, Constantia, Corbel, as well as the aforementioned Consolas, and Meiryo, a Japanese font.)

    But the screen font is not the only thing being stressed in the Vista UX. If I can summarize several pages of text in one sentence, Vista UX is about having a restful user interface. Oh, yes, there are all the fabby-dabby transitions and animations, the 3D effects, the fades, and so on, that come with the new integrated video and DirectX APIs, but for me the interesting things are the subtleties.

    So, for instance, there are all the translucency and transparency effects. Bold brash colors are out and more muted ones are in. (In fact, bold and brash is out, period.) The gentle and subtle transitions as you hover with the mouse.

    The new task dialog object has lots of restful whitespace and is nicely divided, with the new Segoe font acting for both the main text and the heading. Backgrounds are understated and help divide up the dialog.

    Small things like the high-quality, high-resolution icons, the translucent window borders, the notifications from the system tray, and so on, provide a better, more restful experience.

    The guidelines also talk about the text and the tone you use in your application's UX. Again the way you "talk" to the user can help promote a more restful, engaging experience. This advice on your application's tone, by the way, is not specific to Vista by any means; you can use it now in applications for Windows XP and Windows Server 2003.

  • The fundamental problem with UI

    When we write a document that's going to be printed we have a plethora of design choices, the most important and fundamental one being the fonts we use. We can choose to be boring and use Arial or Helvetica (here's how to tell the difference) and Times New Roman, or we can select and license a font (or several) that better expresses who we are and that better supports our corporate image.

    The same goes with books. I was very disappointed in the fonts used to print my book, since I thought they looked staid and boring. I wanted to use a more modern font than the ones Wordware eventually used (and I'm not sure what they did use in the end).

    You could say that a hobby of mine is typography. I love looking at letterforms and why they are the way they are. What problems did the designer have to solve, what constraints was he under, and how did he solve them? I'm always interested in how businesses present themselves through the text they provide and want us to read. Have they gone to the trouble of designing their corporate look-and-feel down to the fonts they use? Are they consistent in using them? Did they make a good choice? And so on.

    Unfortunately, one of the biggest problems with typography, especially with regard to a company's look-and-feel, is the fonts used in the software that company writes. Seldom does the UI follow the company's standards, and it's not the programmers' fault. The problems are manifold:

    • you can't assume your customer has your corporate fonts; in fact, all you can assume for Windows is that the customer has the standard fonts that come with the OS, although even then different OSes have different standard fonts
    • although you could ship your corporate fonts with your app and plonk them in the Windows\Fonts folder, any program could then use them gratis with the probable wholesale violation of your font license
    • it's nigh-on impossible (a.k.a. there is no OS support for this anywhere) to embed fonts in an application so that only the application can use those fonts. (However, I wonder how Adobe does it with embedded fonts in PDFs? Does Acrobat Reader have a font rendering engine?)

    Sometimes it can be worse than this. I well remember at a previous job showing in a lunchtime presentation one screen from the company's product that used two very similar fonts (if I remember correctly, MS Sans Serif and Arial). And even worse, at two different font sizes (something like 10 and 11). At my previous company, whose main product's UI was a browser app, I was always complaining that on every page there were two very similar fonts (Verdana and whatever the default font for the PC was set to). To me it just looked jarring and unprofessional, but nothing was ever done about it.

    One day, I hope, it will become easier to design UIs to use specific fonts and then to ship those fonts as part of the application in such a way that only that application can use them. And then one day a little time after that you'll be able to do the same with web applications, but I'm not holding my breath in the interval.

    (Prompted by this article on flow|state.)

  • Lambda is not just a Greek letter

    Sometimes the planets in the heavens align themselves just right and a post you wanted to write is suddenly supported by a whole slew of news articles on the same day. Well, it just happened to me.

    First, there's news of a new CTP of C# and LINQ, or to give it its full title "Microsoft Visual Studio Code Name “Orcas” Language-Integrated Query, May 2006 Community Technology Preview".

    LINQ is interesting to me because not only does it give you an integrated way of doing queries across many sources of data, but it also incorporates a completely new language inside C#, lambda expressions, in order to succinctly express anonymous methods.

    Lambda expressions are derived from lambda calculus, a branch of mathematical logic. A lambda expression is essentially an anonymous method with a set of parameters and a block of code that applies to those parameters, yielding a result. They're interesting because the programming language that encompasses them enables you to use lambda expressions as data and pass them around to other methods as parameters. C# 2.0's anonymous delegates get close, but the syntax is still a little too awkward.
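    Here's a quick side-by-side using List&lt;T&gt;.FindAll, which takes a Predicate&lt;T&gt; delegate, to show how much ceremony the lambda form removes:

```csharp
using System.Collections.Generic;

public static class PredicateDemo
{
    public static List<int> BigNumbersOldStyle(List<int> numbers)
    {
        // C# 2.0: an anonymous delegate -- workable, but noisy.
        return numbers.FindAll(delegate(int n) { return n > 5; });
    }

    public static List<int> BigNumbersNewStyle(List<int> numbers)
    {
        // C# 3.0: the same predicate as a lambda expression.
        return numbers.FindAll(n => n > 5);
    }
}
```

    Both methods pass the same logic as data to FindAll; the lambda simply drops the delegate keyword, the parameter type, and the braces, all of which the compiler can infer.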

    Here's an example. In DLINQ you can write a query in C# like this:

    var q = from o in orders, c in customers
            where (o.ShipCity == "London") && (o.CustomerID == c.CustomerID)
            select new { o.OrderDate, c.CompanyName, c.ContactTitle, c.ContactName };

    Parsing this a little, you can see that it's almost a SQL statement in reverse. But note also that it's not "real" C#: what would the C# 2.0 compiler do with "from o in orders", for example? The result of this expression is q, which is a list of some newly auto-generated type with four properties: OrderDate, CompanyName, ContactTitle, ContactName. (That's the reason for the var keyword, by the way: there's no way for us to know ahead of time what the new type is going to be called.)

    This actually gets compiled to something like this:

    var q = orders
              .Where(o => o.ShipCity == "London")
              .SelectMany(o =>
                 customers
                   .Where(c => o.CustomerID == c.CustomerID)
                   .Select(c => new { o.OrderDate, c.CompanyName, c.ContactTitle, c.ContactName }));

    Getting closer to "real" C# now (well, OK, I just don't know how to split it up on several lines), apart from those expressions like:

    o => o.ShipCity == "London"

    What's this? Well, this is a lambda expression. It takes one parameter (the o, an item in the orders collection, actually an IEnumerable I think) and returns a boolean (the result of comparing o.ShipCity to "London"). The expression is acting as a predicate delegate for the Where() method.

    The reason for using the lambda expression is twofold. First, it's more compact, something that C-languages are famous for, and you don't have to worry about writing all the syntax around creating a new delegate of the right type. Second, it's extraordinarily easy to parse it into an expression tree. So what? Well, the code behind DLINQ is able to analyze these expression trees in order to ascertain the best SQL statement to execute to get the data. The very worst SQL to generate would come from evaluating the expression step by step: "Retrieve all orders. Now call the Where() method to find all those records that have ShipCity as London. Now we have another list of orders, let's execute the SelectMany() method on them." And so on. Instead the DLINQ run-time will evaluate the expression trees and produce something like this:

    exec sp_executesql
    N'SELECT [t1].[CompanyName], [t1].[ContactName], [t1].[ContactTitle], [t0].[OrderDate]
    FROM [Orders] AS [t0], [Customers] AS [t1]
    WHERE ([t0].[ShipCity] = N'London') AND ([t0].[CustomerID] = [t1].[CustomerID])'

    In other words, so that the whole selection thing is done in the database, where it should be.
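    The tree-walking idea can be sketched with the expression tree API itself (using the Expression&lt;T&gt; types as they appear in later previews; details may differ in the May 2006 CTP):

```csharp
using System;
using System.Linq.Expressions;

public static class TreeDemo
{
    public static ExpressionType InspectBody()
    {
        // Assigning a lambda to Expression<...> makes the compiler build a
        // data structure describing the code instead of compiling it directly.
        Expression<Func<string, bool>> filter = city => city == "London";

        // A library like DLINQ can walk this tree and emit SQL; here we
        // just look at the top node, which is an equality comparison.
        return filter.Body.NodeType;
    }

    public static bool RunCompiled(string city)
    {
        Expression<Func<string, bool>> filter = c => c == "London";

        // ...or the tree can be compiled back into a callable delegate.
        Func<string, bool> compiled = filter.Compile();
        return compiled(city);
    }
}
```

    The same lambda text thus leads a double life: as a delegate it's code to run, and as an expression tree it's data to analyze, which is precisely what lets DLINQ push the work into the database.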

    That's just one great thing about lambda expressions in C# 3.0, but to get more understanding about why they're important, it's handy to know a functional programming language. The oldest and most well-known is probably Lisp (or its close cousin Scheme). In comes the second link for today, a very well-written essay on why knowing Lisp, or rather concepts such as "code is data", is important in order to be a better programmer.

    And then the inestimable Don Box is excited about some new lambda expression support in the latest CTP. It won't make much sense until you're very familiar with passing and combining lambda expressions.

  • Consolas

    You've seen Mark Miller demo CodeRush and Refactor! Pro at a show. You've seen the font he's using, how clean and sharp and readable it was. You've wondered which font it was and where to get it.

    Well, wonder no more. It's Consolas, and it's available here.

  • A trying post

    I was looking through some code recently that did this with a Hashtable:

     if (documentCache.ContainsKey(key))
       doc = documentCache[key] as XmlDocument;
     else {
       doc = LoadDocument(key); // via long winded call across a network
       documentCache.Add(key, doc);
     }

    What's the problem here?

    For me, the thing that leaps out is that we search twice for the key. We search for the key the first time when we call ContainsKey, and we search for it again when this returns true and we read the value for the key using the array access syntax.

    Now, this is a hash table. The search algorithm used is O(1), a constant-time operation. So it doesn't really matter how many items are in the hash table (modulo some hand-wavy talk about memory and the hardware cache): we're going to find the key in some constant time. But we are still doing twice the work.

    Better perhaps to do this:

     doc = documentCache[key] as XmlDocument;
     if (doc == null) {
       doc = LoadDocument(key); // via long winded call across a network
       documentCache.Add(key, doc);
     }

    Here we only do the one search for the key value. The nice thing about the Hashtable class is that if the key is not found the array access operator will return null for us. So we've improved the code's efficiency a little bit.

    Unfortunately, if documentCache were an instance of Dictionary instead -- we've perhaps upgraded our code to .NET 2.0 for better efficiency -- this code will break. The reason is that if the key were not found the array operator getter would throw an exception. We could write this, I suppose:

     try {
       doc = documentCache[key] as XmlDocument;
     }
     catch {
       doc = LoadDocument(key); // via long winded call across a network
       documentCache.Add(key, doc);
     }

    But that gives me the complete shivers. An open catch like that? Brrr. We could catch the instance of the proper exception class that's thrown and make it slightly better, but even so. It's pretty nasty to write this kind of code where we can imagine the catch is going to be thoroughly exercised.

    So it looks like we should revert to the previous version again and check that the key exists before we try and retrieve its value and suck up the inefficiency of the double search.

    Not so fast. The BCL team invented a nice design pattern for .NET 2.0 and applied it to a lot of places in the Framework. The pattern, for want of a better phrase, is the Try pattern. Essentially the Try pattern applies in the "try this and catch if it didn't work" situation, by removing the need for the try..catch.

    Our code becomes this:

     if (!documentCache.TryGetValue(key, out doc)) {
       doc = LoadDocument(key); // via long winded call across a network
       documentCache.Add(key, doc);
     }

    And we remove the need for the exception handling completely.

    Another great example: the primitive types now have TryParse methods that try to convert a string to the relevant type. You don't have to wrap your string conversions in try..catch blocks any more, making them ultra-efficient compared with .NET 1.1 and 1.0.

    In other words, instead of writing something like this in the old days:

      DateTime result;
      try {
        result = DateTime.Parse(inputString);
      }
      catch {
        result = defaultValue;
      }
     
    You can write something like this instead:

      DateTime result;
      if (!DateTime.TryParse(inputString, out result))
        result = defaultValue;

  • Birds of a Feather session at TechEd on refactoring

    Our very own Mark Miller (no one else would have him) is leading a Birds of a Feather session on "Next Generation Refactoring in Visual Studio" at TechEd 2006 in Boston (Tuesday, June 13, 2006 at 7:45 PM).

    Here's the summary: "Is refactoring only for the system architects and designers? Is it possible for all developers to refactor continuously as they write code? Can we trust tools that change our source code? Do we need more or less refactoring in our lives? What will refactoring tools be capable of in the future? If you consider source code a company asset, and if you’re interested in making your code easier to read and less costly to maintain and extend, then this session is for you. Stop by, listen in, share your opinions, discuss the state of the art & imagine with us as we consider the future of intelligent machine-assisted code maintenance."

    Mark has some definite ideas about future intelligent automated refactorings (as have I, for that matter) but welcomes more thoughts and ideas. So if you're at TechEd this year, and are free that Tuesday evening, pop in and join us in the discussion. It promises to be a lively one.

  • It was nice in Nice

    Apologies for the lack of posts over the last week and a bit: several of us from Developer Express went to DevConnectionsEurope last week, which this year was being held in Nice on the Côte d'Azur in France. Yes, I know, it's tough but someone's got to do it.

    Unfortunately for us, the hotel we were staying in rejoiced in the prices it could charge for internet connections (it was well over $30 per day). After all, if you already work in an expensive holiday spot, just imagine the cost of your own summer vacation; you need all the money you can rake in. So, I in particular decided to have a connection-free visit; with hindsight, looking at my email inbox, maybe a complete week was too long.

    The conference was a little on the small side, but we met up with some friends in the booth and, of course, were able to demonstrate a lot of our products to (current and potential) customers. I was able to show off my French to all and sundry, especially when we discovered that a couple of boxes of our CDs had gone missing. I think the exhibitor liaison person was quite happy not to have to listen to my mangled grammar and vocabulary any more ("Nous ne trouvons pas deux boîtes de CDs. Où sont-ils?") once we'd managed to track the boxes down.

    We were able to show off to the attendees some of the support for C++ that we've been working on for Refactor! Pro. (And I hasten to add, before you all write in asking for release dates, this stuff is very, very early in its development: you can't imagine the horrors of C++ that we have to deal with. Or maybe you can, in which case I feel for you.) The overall response was great: we're obviously on the right track for this functionality.

    Next on our conference itinerary is TechEd in Boston. See you there!
