Bob on Development

January 26, 2007

How to Use Team Foundation Server Source Control with Visual Studio 2003

Filed under: Products,Techniques — Bob Grommes @ 9:33 pm

I’ve just begun work on extending a product that was authored in VS 2003 and is now three years old. The developers wanted to port it to VS 2005 but ran into a brick wall when it was discovered that the client’s old version of Citrix could not cope with .NET 2.0 being installed on the same machine; apparently it caused a complete meltdown requiring a from-scratch reinstall.

In the meantime the developers had standardized on Team Foundation Server (TFS). Rather than face losing the benefits of TFS source control while being stuck (hopefully temporarily) in the VS 2003 world, they came up with a pretty interesting workaround.

1. Put the VS 2003 project into a TFS workspace.

2. In the VS 2005 client, open the Source Control Explorer (View | Other Windows | Source Control Explorer). Right-click on the root of the VS 2003 project and do a Get Latest. This downloads all the code to the client system without attempting to open the solution or any of its projects. Opening them would trigger the VS 2003 to VS 2005 conversion wizard, which would render the projects unusable in VS 2003.

3. From here, work with the project in an instance of VS 2003 and use the separate VS 2005 instance for your check-outs and check-ins (or script them from the command line, as sketched below).
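
Incidentally, the same get / check-out / check-in dance can also be driven from the tf.exe command-line client that ships with Team Explorer, so you don't have to keep flipping over to the VS 2005 window. A rough sketch, assuming the workspace mapping from step 2 already exists and that C:\src\LegacyApp is the mapped local folder (the paths and file names here are made up):

cd C:\src\LegacyApp
tf get /recursive
tf checkout SomeFile.cs
rem ... edit and build in VS 2003 ...
tf checkin SomeFile.cs /comment:"Describe your change"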

This is not a bad solution, but I wondered … hasn’t someone solved this more elegantly? A quick Google search led me to a Microsoft download that allows you to access TFS not only from VS 2003, but also from VS 6, or even FoxPro, Sybase, or Toad!

This is heartening in a world where Microsoft doesn’t even support running many of these older but still very much in-use tools on the latest release of its own operating system. I was astounded that they’d even consider telling developers that as recent a product as VS 2003 isn’t and won’t be supported under Vista (although, oddly, VB6 is supported). I was even more surprised that VS 2005 support will not really be here for months yet, not until SP2 is released. Yet, somehow they have managed to support a number of old and even 3rd party tools in TFS. Could it be that at least some people at Microsoft have managed to overcome the Pointy-Haired Ones?

Then it struck me that supporting these old environments will help sell significantly more TFS licenses, whereas supporting them in Vista will not sell significantly more copies of Vista. Think about it: development teams are 100% of the market for TFS, but probably just a small percentage of the total market for Vista licenses. And Microsoft’s thinking is that developers know how to run VMs within Vista anyway, to support old products using old operating systems. Penny-wise and pound-foolish, in my view — but no one is asking me.


January 25, 2007

Scrum: Putting a Name on Intelligent Development Practice

Filed under: Management,Methodologies — Bob Grommes @ 4:56 am

If I am pressed to come up with a methodology that is a somewhat close fit for my own thinking on how software development should be managed, it would be Scrum.

Scrum, for once, is not a clever acronym; instead it is slang for a group huddle in the game of rugby, where the players cooperatively strategize about how to move the ball down the field. Although often used with Agile processes, Scrum actually has nothing inherently to do with Agile or eXtreme Programming — or even with software development, as it’s used in other fields of endeavor, too.

Scrum has been (mis)appropriated in various ways, including probably by me, but I like the general concepts.

First, Scrum recognizes a few uncomfortable facts of life:

  • Requirements are never fully understood at the start of a project. Even when you think they are.
  • Requirements (and your understanding of them) always change during a project.
  • The introduction of any new tools or techniques into a project renders it even less predictable.

Scrum is a way to manage projects with lots of unknowns — projects, such as software development, that need an empirical approach, not a fixed methodological approach.

You can find a pretty good 30,000 foot overview of the Scrum process here.

In essence, top management (or the project champion — whoever is the ultimate paying customer) conceives a project, chooses a team, and then gets the heck out of the way so they can do their job. This implies the first principle of Scrum, which is to trust and respect your team; after all, you picked them! This means that Scrum will never happen under a micro-managing, insecure, or abusive champion. The very idea of Scrum will terrify such a person. Managers who confuse leadership with browbeating or puppeteering need not apply.

The product champion who initiates a Scrum project recognizes that it’s his or her job to support the team with resources. In almost all cases, the team will ask for the resources it needs; a good manager need not and generally should not offer to “help”.

Often, resource requests come in the form of reports of impediments to the project’s progress that require management authority to remove. In organizations of any size that have been around for any length of time, there are many well-intentioned procedures, processes and practices that do not serve efficient and effective software development. Product champions in a Scrum setting need to be prepared for the political chaos that can result from removing such impediments. That is why the champion needs sufficient authority (or at least real backing from higher management) to overcome these impediments.

More than one project I’ve been associated with has gone by the wayside when the progress of that project was at odds with the bureaucratic and political inertia of the organization; that is why you always need a champion with sufficient power to get obstacles removed. It’s like qualifying your lead in the realm of sales. There is no point in doing business with a champion who has insufficient authority and clout.

I emphasize the management / sponsor / champion side of Scrum because it’s usually overlooked. The part that captures people’s attention seems to be the daily “scrum meeting” where everyone touches base about what they did yesterday and what they will do today. I personally consider this the least interesting aspect of scrum, as other than the daily frequency it’s a common-sense ritual of any methodology. But I suppose that organizations that don’t understand that meetings are productivity killers consider the scrum meeting to be profound because it’s only 15 minutes long. In addition, most organizations are meeting-addicted and don’t feel right about not having a meeting.

Scrum meetings are run by a “scrum master” who is responsible for making sure every developer answers three questions: what did you do yesterday, what will you do today, and what, if anything, is standing between you and your objectives? This strikes me as a little bit annoying and insulting unless done with considerable finesse; it could make the “scrum master” equivalent to the hated “hall monitor” (aka snitch) of grade school days. I wouldn’t formalize this role, but over time and on average, every team member does need to provide this info.

There is nothing about a scrum meeting that can’t be accomplished via something like a teleconference, IM, Groove, or even email. In fact it doesn’t necessarily require everyone to contribute at the same exact time, though it may well be convenient and logical to do it first thing every work day.

If I have a beef about scrum as it’s traditionally presented, it’s that it is a team sport metaphor that implies a team that is physically present in the same location on a daily basis. I am a believer in virtual teams, and cutting out the Amway-style back-slapping whoo-hoo yeah-team bullshit. For a good developer, doing insanely great work on an interesting project is all the motivation that’s required, and mutual respect of team members for each other’s technical capabilities is far more important than less relevant and less universally accessible bonding experiences such as a weekly game of tag football or meetings with free pizza.

The other key feature of scrum is the concept of “sprints”. A sprint is essentially focused work towards an agreed-upon monthly milestone. A lengthier, more formal monthly scrum meeting is used to present the work in progress. The goal of each sprint is to release a runnable, and as much as possible useful, version of the software under development.

From meeting to meeting, a list of backlogged work is kept — really just a prioritized to-do list; again, nothing profound here.

All of this is very common-sense and easy to understand. I like that sort of thing.

Like any Good Thing, scrum can be mis-used and abused. For example, at http://www.controlchaos.com, a prime scrum promotion site, it’s suggested that you kick off a project by first setting the scope and deadline and then telling your team that they are charged with getting it done in half the time — and don’t be surprised if some people even quit in protest. No kidding!

I reject that sort of arbitrary, manipulative crap. It contradicts other things that site says, such as that the objective is to do as good a job as possible. Any idiot knows that a deadline generated by marketing or just some dart thrown at a calendar is meaningless — things take as long as they take. Longer, in fact. Get over it.

Of course in the real world a project may be useful or valuable only if delivered before a certain date — in that case as long as everyone understands that the date is a constraint on what features and functionality can be implemented, that’s fine. Regardless of the underlying development methodology in use, scrum emphasizes delivering value in milestones at regular intervals (monthly is standard). It encourages discovery, design, implementation and QA in parallel.

So with scrum, what keeps the stakeholders happy is regular and open status meetings, plus early and frequent product deliveries and demos. That, plus the knowledge that people are hard at work doing their level best — whether or not the stakeholder had to compromise any on deadline or features.

January 22, 2007

A Laptop Battery Breakthrough?

Filed under: Hardware — Bob Grommes @ 8:00 am

News comes today of a purported huge breakthrough in battery technology that promises to at least double battery life — my guess is likely more if the form factor stays the same. Although it seems to have been developed primarily with automotive and other heavy-duty applications in mind, it is also said to scale well in both directions.

Various announcements like this are made from time to time and I usually have a “wait and see” attitude but this company seems different. It has attracted some mainstream investors, has been very low-key / low-hype, and availability of commercial product appears imminent, that is, probably within this year. Furthermore, industry skepticism and concerns over the technology’s limitations are much more applicable to the hostile environment under the hood of a car than to the relatively genteel environment of a laptop computer, where operating temperatures are relatively stable and there are not usually huge issues with vibrational stresses. Accidents with laptops can destroy current batteries, too.

If laptops can have a solid 2 to 3-fold boost in battery life then the ball game changes. Many developers do a large chunk — even all — of their work on laptops these days. The ability to work all day, even a couple of days without recharging while having what is likely to be a lighter and faster-charging battery — what’s not to like?

January 19, 2007

Software — Cheap!

Filed under: Methodologies,Projects,Tools — Bob Grommes @ 8:27 pm

Once in a while, for entertainment, I visit online freelance software development exchanges. It’s a fascinating study of the endless human capacity for witless self-deception. I actually saw a posting today from a guy who wants someone to clone an entire accounting system. Other than the name of the system to be cloned and an unexplained desire to have the work done in PHP, that’s the entire specification. And the budget? Two hundred dollars. I kid you not.

Apparently this guy thinks there are people who can knock out an entire accounting system in two hours. Okay, maybe he’s thinking twenty hours with third world labor. But still.

Apparently his reasoning process goes something like this: (1) I can buy Quicken for $79, therefore (2) it will cost a LOT more than $79 to write my own clone of Quicken — maybe as much as three times as much, but (3) I’ll cut it down to an even $200 as a starting point for negotiations.

I recall a couple of years ago getting a contract pitched to me where the objective was essentially to clone the whole Google search engine. The guy seriously believed that I could lead a team of a half dozen others and pull off in six months what Google, a company with thousands of employees, has spent a decade building. We would all work for fifty grand a year but in six months we’d go public and we’d all be millionaires. Hooray for us!

This guy got my award for Most Improbable Personal Delusion of the Year, until a scant week later I saw a very similar posting to a freelance site with a budget of $500!

There are literally hundreds of such postings floating around the Internet at any moment in time. It makes me wonder if these boards exist for no purpose other than to fleece the simple minded. What kind of contractor would respond to such a post? It must be the sort who will take a one-third deposit up front and then disappear.

What disturbs me most about all this is that while it represents an extreme, it seems to reflect that the commoditization of the craft of software development has reached some kind of fever pitch. I turn down many projects that come my way, for the simple reason that the customer has totally unrealistic expectations. Thankfully no one has yet asked me to write an accounting system or a major public web site or a spacecraft control system in one afternoon for under a hundred dollars — yet — but sometimes it seems like things are heading that way.

What this tells me is that the average person no longer values software. We are used to freeware, shareware, open source, and sub-$100 list prices for commercial software. The gargantuan size of the software market hides the gargantuan effort and expense that went into developing all those general-purpose software products.

Additionally, software development remains a painful and challenging process with many pitfalls for the unwary, and it just doesn’t deliver the effortless and instantaneous results we’ve come to demand of every aspect of life. People will jump through rings of fire and eat little pieces of glass before they will make allowances in their business plans for the fact that the mission-critical software they need will take six months and a couple hundred thousand to put together. “Screw it … we’ll use Excel!” they say.

So … what’s the deal? Is software a commodity now? Can we really put together complex systems in minutes from off-the-shelf components with no planning or testing?

According to a recent profile in Technology Review, no less a person than uber-programmer Charles Simonyi (the guy who was once the driving force behind Microsoft Word) plans to give the public what they are clamoring for. He is on a multi-year quest to create something he calls “intentional software”.

The over-simplified summary of what Simonyi wants to create is: “an expert system that will allow non-programmers to put together software specifications.” Then, by pushing a button, all the code will be generated to produce complex applications that fulfill those specifications.

I’ll concede that I probably don’t appear to be the best candidate to be impartial about the feasibility of Charles’ dream. Still, I doff me hat to ‘im, and wish ‘im well, I do. Because if he actually pulls it off, we will be living in a world of luxury and abundance and riding the wave of another “dot boom” and I will find a way to prosper from it, as will you, gentle reader.

However, my personal prediction is that what will come out of it, is something akin to Charles’ original brainchild, the graphical / WYSIWYG word processor. By which I mean that it will be something we will all take for granted and wonder how we could possibly live without, but it will also fall short in ways that will annoy the living bejesus out of us. (In the above referenced article there is a priceless vignette where Charles’ PowerPoint presentation hangs and the much-hated Clippy the paper clip pops up to offer useless advice. Charles confesses ruefully to the interviewer that even he has no idea how to disable the blasted thing).

Why do I think Charles will fall short of his lofty goal?

One of the reasons is well presented in a sidebar to that article in Technology Review, and that is generically known as the “leaky abstraction problem”.

At a more fundamental level, Charles’ earnest vision depends on reductionism, the idea that every problem can be solved if you can break it down far enough. This is an understandable stance for a software developer, as much of what we do involves breaking large problems down into smaller sub-problems and then putting it all back together. But it has its limitations. When developing software, you are ultimately solving problems that involve “wetware” (people) and that is inherently subjective and messy and chaotic. At some point that interferes with doing science and you have to make very subjective judgments to find a way forward. No development methodology or tool will ever fully automate those judgments.

Now I’m going to say something provocative and easily misunderstood: most of the world’s business is conducted by small to medium-sized companies. Most custom development needed by such companies is relatively modest compared to an enterprise-scale “Manhattan project”. In almost every case, by the time you’ve adequately spec’d most of these projects, you have finished implementing them. It is literally a case of “we’ll know when we are done”. I know this will make the Agile folks, the bean counters, and others apoplectic, but it’s the truth. Anyone who tries to paper it over is just patronizing you.

Software development is an iterative process of discovery. It’s done when it’s done. If you have a big enough project you can do proof-of-concept “mini” implementations of various things, run simulations, conduct focus groups, have committee meetings and fill out ISO forms to your heart’s content. And then maybe, assuming all the stakeholders have been honest and diligent (a huge assumption), you will be able to come up with a reasonably close cost estimate / budget.

But in a smaller setting the grim reality is, you’ve got a champion with a fairly vague idea of what they want — more of a vision than a spec — which is communicated in glowing terms to someone like me, who does their best to ask pertinent questions and classify the project as a one, three, six or twelve month project times X number of developers. As the project rolls on the champion has a thousand suggestions for changes, the actual users inform you that for various reasons certain aspects of the vision are completely unrealistic, certain others decide they want the project to fail and employ passive-aggressive tactics to bring that about … and in the end, you have something that is not really what anyone expected up front, in terms of its final form or its cost, other than in very general terms, such as, for instance, “an online credit bureau for trucking companies”. Which, come to think of it, sounds suspiciously like the tiny postings on those freelance boards.

What the online freelance exchanges represent is the earnest desire of many to be able to express a complex system in one sentence, pay a tiny, nay, microscopic fixed amount of money and have it appear magically in a few days or even hours (another common subtext in these postings is “I’ve known about this for two months but just remembered that the deadline is tomorrow”).

Listen to me carefully: IT ISN’T. GOING. TO HAPPEN.

What Charles Simonyi’s more refined vision represents is the understanding that you can’t express complex systems in one sentence. But he still labors under the belief that with the right tools you could express it in terms that people who know nothing about software architecture could comprehend, in ways that will reproducibly result in relatively fast, easy implementations that are also accurate. This, I don’t think is going to happen either, at least not in my lifetime. I strongly suspect it’s a reductionist fantasy.

Check back in fifteen or twenty years and we’ll see!

Update: For more on this popular topic, please refer to the follow-up post.

January 16, 2007

Accepting a Flaky Certificate When Doing an SSL POST

Filed under: C# — Bob Grommes @ 12:56 pm

I needed to communicate with a web server that requires an SSL connection yet does not have a valid SSL cert. This is actually not that uncommon when dealing with various business partners. In this case it’s a large multinational corporation with a web farm. The certificate is issued to www.bigcorp.com but what actually answers is something like www86.bigcorp.com. Even giant, soulless corporations don’t want to buy an SSL certificate for every box in their web farm!

My client’s legacy ASP application used an ActiveX component, and to accept such certificates you simply set a property to true — something like AcceptAllSSLCerts. It was that simple! In porting this to .NET, though, it’s one of those things that gets more involved, because this is an area where the .NET class libraries are a bit on the thin, low-level side. There are no doubt third-party products that simplify this, but the following is the piece of code that did the trick for me:

First, create the following class:

// Certificate policy that accepts every server certificate, valid or not.
// Note that this relaxes certificate checking for the whole application.
internal class AcceptAllCertificatePolicy : System.Net.ICertificatePolicy {

  public AcceptAllCertificatePolicy() {}

  // Returning true tells the framework to accept the certificate
  // regardless of the problem code reported in certProb.
  public bool CheckValidationResult(System.Net.ServicePoint sp,
                                    System.Security.Cryptography.X509Certificates.X509Certificate cert,
                                    System.Net.WebRequest req,
                                    int certProb) {
    return true;
  }

}

Now, execute the following call just before doing the POST:

System.Net.ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();
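
For context, here is roughly how that call fits into a typical HTTPS POST with HttpWebRequest; the URL and form fields below are made up for illustration:

System.Net.ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();

// Hypothetical form data and endpoint -- substitute your partner's real URL.
byte[] body = System.Text.Encoding.ASCII.GetBytes("account=12345&action=status");

System.Net.HttpWebRequest req =
    (System.Net.HttpWebRequest)System.Net.WebRequest.Create("https://www86.bigcorp.com/partner/post.aspx");
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
req.ContentLength = body.Length;

using (System.IO.Stream s = req.GetRequestStream()) {
  s.Write(body, 0, body.Length);
}

using (System.Net.HttpWebResponse resp = (System.Net.HttpWebResponse)req.GetResponse())
using (System.IO.StreamReader reader = new System.IO.StreamReader(resp.GetResponseStream())) {
  string responseText = reader.ReadToEnd();
}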

This works fine in both .NET 1.1 and 2.0. In 2.0 you’ll get a warning that this call is deprecated and replaced with some kind of callback; in the rush of deadlines I have not bothered to track that down. Another thing I did not need to check was whether setting the ServicePointManager.CertificatePolicy property is global for your entire app (I’m pretty sure it is) and whether it needs to be called before every POST (I’m pretty sure not). However, in my case it doesn’t matter because it’s a non-performance-critical routine that usually is only called once per program run anyway. Of course, if I had other SSL connections to make where I didn’t want to relax this requirement, I’d need to figure out how to toggle this on and off.

This actually makes a nice real-world example of how to make decisions about allocating your precious time. Many of us forget that the customer is paying us to solve a problem, not to solve it elegantly. There is a line you don’t want to cross, of course, where you write slovenly code that will come back to haunt you someday. But in this case, if I’d spent an extra 20 minutes making sure I was using the latest API call and tweaking for efficiency gains that would never be noticed, I wouldn’t be serving the customer’s best interests. What serves their interests is to make sure it works right.

I will confess that this has gone on my List of Interesting Things To Check Out Someday in my Spare Time (Ha-ha). No problem fine-tuning your code on your own time … it’s just not always warranted on the customer’s time.
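
For anyone whose Spare Time arrives before mine does: the 2.0 replacement appears to be the ServerCertificateValidationCallback property on ServicePointManager. Something along these lines should be the moral equivalent of the policy class above; consider it an untested sketch rather than production code:

System.Net.ServicePointManager.ServerCertificateValidationCallback =
    delegate(object sender,
             System.Security.Cryptography.X509Certificates.X509Certificate cert,
             System.Security.Cryptography.X509Certificates.X509Chain chain,
             System.Net.Security.SslPolicyErrors errors) {
      // Same "trust everything" behavior as the 1.1 policy class.
      return true;
    };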

January 6, 2007

Writing Polite Code

Filed under: Communication,Techniques — Bob Grommes @ 7:09 pm

I ran across a generally excellent essay by Michael Feathers on what he calls Offensive Coding. The essence of his argument is that if you find yourself frequently writing defensive code (he uses checking for null as an example), the root cause may be that the code you’re working on is called by offensive code. In other words, code that forces you to write defensively.

This resonates with me. Some of my clients have been with me for as long as a decade, so I maintain a heck of a lot of code. Some of it is mine, and some of it is not. I’ve never minded maintenance and refactoring as much as most of my colleagues appear to, but one thing I don’t like about maintaining code is the inordinate amount of time I spend either coding defensively, or doing forensic work to document what the heck is going on so that I don’t have to code defensively.

The opposite of offensive code might be called polite code. And it strikes me that one of the features of polite code is that it’s as self-evident as possible and accurately documented. Good code is self-evident as to the “how” of what it’s doing, and documentation in the form of comments and external documents provides the “why”.
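
To make that concrete, here’s a contrived sketch of my own (the types are hypothetical, not from Feathers’ essay): a design that never hands callers a null collection, so they never have to write defensive checks against it.

// Hypothetical order types.  The point: Lines is guaranteed non-null,
// so callers can iterate it without defensive clutter.
public class OrderLine {
  public decimal Amount;
}

public class Order {
  private readonly System.Collections.Generic.List<OrderLine> lines =
      new System.Collections.Generic.List<OrderLine>();

  // Polite: never null, never needs to be checked by callers.
  public System.Collections.Generic.IList<OrderLine> Lines {
    get { return lines; }
  }
}

public static class OrderMath {
  // No "if (order.Lines == null)" here; the Order class has already
  // promised that can't happen.
  public static decimal Total(Order order) {
    decimal total = 0m;
    foreach (OrderLine line in order.Lines) {
      total += line.Amount;
    }
    return total;
  }
}

The defensive version of Total would be littered with null checks; the polite version pushes that guarantee into the type itself, once.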

I just spent a couple of hours reverse-engineering some code that handles an automatic product version upgrade because I did not want to risk hosing a production system without being sure about how some items in an application config file were set. I shouldn’t have had to do that, but every reference I could find to these configuration items was vaguely worded and not written with this question in mind: “what will be the concerns of the user reading this?”

That happens a lot … documentation frequently answers questions a user would never ask or give a fig about. It states the obvious, or beats the irrelevant to death, but it never addresses a likely real-world question. To be concrete: the meaning of “InstallMemberRole=true” was, I’m sure, quite obvious to the original author at the time he was in the throes of creating it. And in the context of a portal framework’s configuration, this item most likely means that the member role feature will be installed during the installation process.

What isn’t ever stated is whether this applies to version upgrades, and if so, whether it will blow away an existing user base. God bless Google, I found some people discussing it who said that it would in fact blow away the existing user base. But, those people may or may not know what they are talking about, and besides, isn’t it odd to default this setting on and thus put people’s user bases at risk? And maybe it involves upgrades to functionality that I’d want to weigh the benefits of.

Well, no matter … with over 1500 impatient active users at stake I can’t afford to guess, so I plowed into the code and tracked down the called T-SQL scripts and made damned sure. This is a defensive action I should not have had to undertake.

As I have said elsewhere, English is a programming language too, and this is another example of why that’s so.

January 3, 2007

DotNetNuke in the Trenches

Filed under: Products — Bob Grommes @ 2:37 pm

I have not yet figured out whether I’m blessed or cursed to be involved in maintaining an extensive DotNetNuke site. I’ve had a year to develop an opinion of DNN, which is a framework that I really wanted to like.

During that year I’ve been sidetracked with other responsibilities for the same client, such as developing an automated billing system and doing some sysadmin tasks, but I’ve finally had some significant time to get cozy with the DNN architecture.

The site began as a generic ASP.NET 1.1 site, then the developer discovered DNN 2, and bolted that onto the legacy parts of the app. Before long this was upgraded to 3.0.13, and currently, I’m in the throes of moving it to DNN 3.3.7 — with the plan being to get that stable and then switch to ASP.NET 2.0 and finally whatever flavor of DNN 4.x is then current.

You may notice that this is an awful lot of re-tooling in the course of just a couple of years. Today I was looking for answers to some upgrade questions (little is published about upgrading, and what is published is mostly about major DNN version upgrades). I stumbled across the web site of a DNN plug-in module vendor. The info I’m about to quote is in the Google cache — the current live site tones it down considerably, so I won’t provide a link or identify the vendor. However, I have to say, it’s a pretty revealing rant and validates some of the growing suspicions I’ve developed about DNN.

The vendor was addressing a FAQ regarding why they don’t participate in the DNN certification program for module vendors:

… very simply, DNN changes too frequently. Core API changes have occurred in the last 3 to 4 years to versions 2.0, 2.1, 3.0, 3.3 – that is essentially 4 significant API layer changes in less than 4 years time – with the prospect of another major API change forthcoming [he’s referring to DNN 4.x, which is now out and itself has had some significant minor updates since]

These types of certifications are useful with regards to long-term solid API foundations, such as Windows 32 Bit API, [the] .NET platform, Java and other technologies where the core API does not change on a whim. This is not the case with DotNetNuke. In essence, a DotNetNuke certification does not guarantee that the module that you purchase will work past the next release of DNN – or that the module developer will maintain versions for your current version of DNN if you choose not to upgrade.

I don’t know how to define “too frequently” or whether it’s even the real problem here … the real problem is far too many breaking changes. The core developers are not afraid to change interfaces or namespaces. Sometimes the results are annoying (you get eight zillion “deprecated method call” warnings, but the code still works because the old signature is mapped to the new one). In other cases, things just break because they’re in the wrong place. Matters are exacerbated if you have custom code that calls into the framework.

Maybe this is unavoidable, but a portal engine that supports third-party plugins and encourages users to create their own modules probably needs to be more committed to stability than DNN is. I know this is making me and the other person working on this system kind of crazy. It’s also costing the client too many buckolas too early in the game, I think. A good solid person-month or more of labor to move from a 3.0.x to a 3.3.x release seems a bit much. Granted, some of it may be learning curve — the original developers’ and mine in doing the upgrade. But that’s pretty normal turnover on any project these days.

DNN has other warts too — its documentation is lacking in many important ways, for example. It’s infinitely easier to find answers about the .NET Framework, even if you confine yourself to Microsoft resources, than it is to Google up answers about DNN. One of the problems, aside from thin docs, is … there it is again … that if you do get an answer, it’s probably not for the version of DNN you’re currently struggling with. It will be a three-year-old post about DNN2, or a brand new one about DNN4, or it will not tell you whether or not version issues are relevant. And that’s just the core technology, not the many add-ons out there.

I have yet to decide whether this is “Good Enough” or “The Best We Can Do” or “More Trouble Than It’s Worth”. One of these days I’ll settle it in my mind, and post again about it.
