Tuesday, December 30, 2008

Passion is Always Recession Proof

A Good Recession?

A couple of weeks ago, in the midst of more economic doom and gloom, our CEO gave a company address on the benefits of a recession, making the point that recessions can actually be good for some companies.

In short, recessions reward good management. Companies that have had the discipline not to overextend themselves financially, that stay within their circle of expertise, and that are passionate about what they do are excellently positioned to prosper in a recession.

As competition heats up and there are fewer customers to go around, mediocre companies inevitably fall apart or shrink substantially. Meanwhile, great companies relish the opportunity to compete against their counterparts.

A recession separates the pretenders from the contenders. Firms that have built strong relationships with their clients and continue to deliver exceptional value often have little to fear from a recession.

If anything, a recession helps good companies by weeding out all the chaff polluting the business space. So if a recession can help great companies, how about great developers?

For The Love = For The Win

Let's face it, a lot of IT personnel (developers included) aren't exactly passionate about what they do. Somehow, somewhere, a lackluster wave of hires stumbled into the IT workplace.

Maybe it was during the dot-com days when insane salaries were lobbed at anyone who could work a keyboard. Maybe a career advisor or two read an aptitude test upside down. Either way, I'm sure at some point you've bumped into a programmer who isn't passionate about programming.

Not that being apathetic towards your job is a sin per se, but if you're not passionate about something, you'll never be exceptional at it. And if the most you're destined for is average, then both you and your team will be having perpetual dates with mediocrity.

People who are passionate about programming would be doing it whether they're getting paid to or not. They're a lot more likely to be reading technical blogs, showing up at user groups, and helping a team grow deeper technically. And get this, they'll actually enjoy doing it!

So here's the good news about recessions for developers.

The Cull

Don't get it twisted, a lot of good people get let go during recessions. Huge swaths get cut through IT groups, and a lot of great staff ends up on the wrong side of the red line.

But just like great companies, great developers don't have a hard time finding work. Their managers are more than happy to write them glowing letters of reference and refer them on to other firms that can make use of their skills. The same passion that has kept them up to speed with industry changes will help them shine during the interview process.

It's not just good staff that gets laid off; bad hires get let go too, and this helps development teams tremendously. Managers who once had the luxury of keeping bad hires are finally forced to take a hard look at their staff. The end result is often a more proficient and passionate team of developers.

Hiring is also a lot more productive during a recession: not only are there more great candidates to choose from, but bad hires are a lot more likely to occur when there's too much work available, as opposed to when there's too little. The net result is that teams tend to get more competent as only good developers get filtered back in.

Lastly, people who are simply stumbling around an industry are encouraged to take a hard look at their current career path and decide whether it's aligned with their strengths and interests. This isn't just good for the industry; it's essential for individuals to find something they're truly passionate about and can find lasting success in. Keeping distracted people in an industry when their strengths lie elsewhere not only erodes the craft, it steals valuable time they could otherwise ply at a trade more closely coupled to their natural strengths. When a bad hire is made, it's not only unfair to the company, it's wasteful of the employee's time. Recessions help hit the reset button.

Another Take

So depending on how optimistic you're feeling, you could say that recessions are pretty good for you, the developer. They help people find better jobs, stop bad hires, move people to industries where they can find real success, and reward those who truly belong in a space. Ideally that will improve your team, your company, and your industry. If you're lucky you just might hit the trifecta.

Just a thought,
Tyler

Monday, December 22, 2008

Put Down That Ajax Control Toolkit!

Out with the Old

Don't get me wrong, I like old things. Constructs like the abacus, scythes, and slide rules will always be remembered by those unfortunate enough to have used them. But to continue using dated tools in today's ever-changing environment is like running cavalry against a line of tanks. As you post-mortem projects, you should also be auditing your tool belt. This post is about me retiring not just a single tool, but an entire toolkit.

The AJAX Control Toolkit

The AJAX Control Toolkit was released under that name on January 23rd, 2007; prior to that it was known as Atlas. The toolkit is built on top of the ASP.NET AJAX Extensions and includes a series of controls to help ASP.NET developers take their user experience to the next level. It boasts controls like animation extenders, better validators, modal windows, and widgets like accordions and drag panels. A lot of developers jumped at these controls and were all too happy to include the dependencies in their projects in exchange for the near-free functionality. I should know, I was one of them.

The problem with these controls isn't that they suck; the problem is that they're not the best. While Microsoft was developing these controls, other client-side efforts were underway. You've probably heard of these other libraries: jQuery, Prototype, and MooTools. I'm personally ditching the toolkit in favour of jQuery.

jQuery

If you haven't heard of jQuery by now, I'd kindly suggest that you get out more often. I personally think of jQuery as the Firefox of JavaScript libraries. A lot of jQuery's strength comes from the wealth of professional-grade plug-ins that have been contributed and then vetted by the jQuery community. It's this same community, large in number and energized by a passion for better UIs, that makes jQuery such an impressive offering. This is truly a technology with a bright roadmap.

I decided to ditch the ACT (Ajax Control Toolkit) for jQuery for the following reasons:

Dependencies: The ACT has server-side dependencies (both System.Web.Extensions.dll and AjaxControlToolkit.dll need to be present); jQuery has no server-side dependencies at all, just the JavaScript file you include. That script is also smaller than what the ACT emits.

Controls: Most likely, in the time it took you to read this blog post, another jQuery plug-in was submitted to the community. There are a lot of talented JavaScript developers using this framework to make some truly impressive plug-ins. I urge you to check out the plug-ins page; it dwarfs the ACT offering.

Collaboration: Developers may make it functional, but designers make it usable, and sadly the latter almost always trumps the former. Designers need to be able to mock out UIs and style these controls. The more they get involved, the less you have to do when it comes to presentation. For designers to develop style sheets, they often need access to the control.

For the ACT this often means they need to be running Windows (which designers seldom are), install Visual Studio, and upgrade to either .NET v2.0 + System.Web.Extensions, or .NET v3.5 (sound tedious yet?).

Or they could just work with jQuery UI, a site that helps designers familiarize themselves with the markup and JavaScript needed to produce some great interfaces.

The ACT doesn't offer a natural middle ground between developers and designers, and it doesn't make for shared ownership of the UI. Designers should be involved all the way through the project pipeline, not simply handing off a style sheet to a developer.

Passing the style sheet to a developer who didn't create the layout and isn't responsible for it is likely to fast-track your web application to ugly ville.

Worse yet, it's likely the developer will change the style sheet if s/he can't get the markup to be emitted exactly the way the designer planned (a common occurrence when working with the ACT). This makes it increasingly awkward for the designer to maintain the style sheet as it moves to QA and then production. The ACT just isn't very designer friendly.

Don't Take My Word For It

Learning jQuery not only gives you access to a tonne of great controls; it starts you down the path of learning a DOM manipulation framework that is truly platform independent. If you're concerned about support from a big company, Microsoft announced in September 2008 that jQuery will ship with Visual Studio. Consider using jQuery in SharePoint to spruce up a dull UI, or to make ASP.NET MVC more palatable. I think it's time to get on the bus; this is one you don't want to miss.
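
Part of the appeal is that there's nothing to install server side; it's just a script reference. As a rough sketch (the script path and registration key below are whatever you make them), you could wire jQuery into an existing page, master page, or web part from code-behind:

// A sketch only: assumes you've dropped a jQuery file under ~/Scripts (any path works).
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Register the script include once; "jquery" is just an arbitrary key.
    if (!Page.ClientScript.IsClientScriptIncludeRegistered(GetType(), "jquery"))
    {
        Page.ClientScript.RegisterClientScriptInclude(
            GetType(), "jquery", ResolveUrl("~/Scripts/jquery-1.2.6.min.js"));
    }
}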

Best,
Tyler

Friday, December 19, 2008

Advanced Basics: What Causes a JIT?

Kind Of A Black Box

For the longest time, ASP.NET Just In Time compiling has been a pretty gray area for me. Word on the street has it that some things you'll do against a web site cause the site to be re-JIT'd. Other actions will simply cause the application to unload. But which actions precipitate which results?

Just In Time Compiling

As you're probably aware, the code that you put in an ASP.NET web site isn't the actual code that runs. All those assemblies, web pages, user controls, etc. need to be Just In Time compiled before they can actually be used. Visual Studio produces MSIL (Microsoft Intermediate Language), a bytecode that is highly portable. Before the .NET runtime can execute it, it needs to be compiled again into native code. Native code is specifically targeted at the machine it's going to run on and is a lot more efficient. You can do this ahead of time yourself (assuming you don't want your code to be Just In Time compiled) by using either the ASP.NET precompilation tool (aspnet_compiler.exe) or Ngen.exe (the Native Image Generator), another tool that comes with the .NET SDK.

All that's required for MSIL to be JIT'd is that the destination machine has a compiler capable of optimizing your MSIL for the given environment. A common example is a developer building an application on a 32-bit machine and then deploying that MSIL onto a 64-bit machine. At runtime the MSIL gets compiled into native 64-bit code by a JIT compiler that's sensitive to the needs of the destination 64-bit machine.
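
If you ever want to see which flavour of native code you ended up with, a tiny console check makes the point (this is just a sketch, assuming the default "Any CPU" build target): the exact same MSIL reports a 4 byte pointer on a 32-bit box and an 8 byte pointer on a 64-bit one.

using System;

class Program
{
    static void Main()
    {
        // Same MSIL everywhere; the JIT decides what native code to produce.
        // IntPtr.Size is 4 when the JIT produced 32-bit code, 8 when it produced 64-bit code.
        Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
        Console.WriteLine("Runtime version: {0}", Environment.Version);
    }
}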

The native compiler may also do things like (from Wikipedia):

  1. Optimize to the targeted CPU and the operating system model where the application runs. For example JIT can choose SSE2 CPU instructions when it detects that the CPU supports them. With a static compiler one must write two versions of the code, possibly using inline assemblies.
  2. The system is able to collect statistics about how the program is actually running in the environment it is in, and it can rearrange and recompile for optimum performance. However, some static compilers can also take profile information as input.
  3. The system can do global code optimizations (e.g. inlining of library functions) without losing the advantages of dynamic linking and without the overheads inherent to static compilers and linkers. Specifically, when doing global inline substitutions, a static compiler must insert run-time checks and ensure that a virtual call would occur if the actual class of the object overrides the inlined method.
  4. Although this is possible with statically compiled garbage collected languages, a bytecode system can more easily rearrange memory for better cache utilization.

That's pretty much the gist of it. Just In Time compiling adds a lot of flexibility to .NET development. Even though you've built a very generic application (it can run anywhere there's a CLR), what really gets run is a highly optimized version of that code, targeted to the destination machine's architecture, operating system, etc.

How Does This Happen (in ASP.NET)

When you first visit a web application it needs to at least JIT the important stuff. JIT'ing starts with what's called top-level items. The following items are compiled first (pulled from here). 

  • App_GlobalResources: The application's global resources are compiled and a resource assembly is built. Any assemblies in the application's Bin folder are linked to the resource assembly.
  • App_WebReferences: Proxy types for Web services are created and compiled. The resulting Web references assembly is linked to the resource assembly if it exists.
  • Profile properties defined in the Web.config file: If profile properties are defined in the application's Web.config file, an assembly is generated that contains a profile object.
  • App_Code: Source code files are built and one or more assemblies are created. All code assemblies and the profile assembly are linked to the resources and Web references assemblies if any.
  • Global.asax: The application object is compiled and linked to all of the previously generated assemblies.

After the application's top level items have been compiled, ASP.NET compiles folders, pages, and other items as needed. The following table describes the order in which ASP.NET folders and items are compiled.

  • App_LocalResources: If the folder containing the requested item contains an App_LocalResources folder, the contents of the local resources folder are compiled and linked to the global resources assembly.
  • Individual Web pages (.aspx files), user controls (.ascx files), HTTP handlers (.ashx files), and Web services (.asmx files): Compiled as needed and linked to the local resources assembly and the top-level assemblies.
  • Themes, master pages, other source files: Skin files for individual themes, master pages, and other source code files referenced by pages are compiled when the referencing page is compiled.

Conditions

The following conditions will cause your application to be re-JIT'd. If you modify any of the top-level items, any assemblies that reference them will be recompiled as well.

  • Adding, modifying, or deleting assemblies from the application's Bin folder.
  • Adding, modifying, or deleting localization resources from the App_GlobalResources or App_LocalResources folders.
  • Adding, modifying, or deleting the application's Global.asax file.
  • Adding, modifying, or deleting source code files in the App_Code directory.
  • Adding, modifying, or deleting Profile configuration (in the web.config).
  • Adding, modifying, or deleting Web service references in the App_WebReferences directory.
  • Adding, modifying, or deleting the application's Web.config file.

Whenever one of the above conditions is met, the old application domain is "drain stopped": requests already executing are allowed to finish, and once they're done the application domain hosting the old assemblies unloads. As soon as the recompile is finished, a new application domain starts up with the new code and handles all new requests.
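
If you'd like to see which of these conditions actually tripped a recycle, ASP.NET will tell you why the application domain went down. A rough sketch from Global.asax (Log() below is a stand-in for whatever logging you already have):

// In Global.asax.cs. HostingEnvironment lives in System.Web.Hosting.
protected void Application_End(object sender, EventArgs e)
{
    // Why did the application domain unload? Values include ConfigurationChange,
    // BinDirChangeOrDirectoryRename, IdleTimeout, MaxRecompilationsReached, etc.
    ApplicationShutdownReason reason = System.Web.Hosting.HostingEnvironment.ShutdownReason;
    Log("Application domain shutting down. Reason: " + reason);
}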

Exodus

Believe it or not, having this kind of knowledge somewhere on hand will actually help you when you need it the most...when you're troubleshooting. This kind of stuff doesn't need to be kept at the tip of your tongue, but it should be at least semi-salvageable from the recesses of memory. The worst kind of trouble is the kind that doesn't make any sense. Having sophisticated tools that do a lot of leg work for you can also put bullet holes in your feet if you haven't read the instruction manual.

Hope that provided some value. I swear this stuff comes up more than you'd think.

Best,
Tyler

Saturday, December 13, 2008

Application Domains Revisited

I Should Know This

There are a lot of things I've taken the time to learn formally (ie. go to school, take a cert) that escape me when they would actually come in handy. It becomes even more frustrating when we finally figure out what happened...and it's something I already (should) know.

This once again became wildly apparent this week while I was troubleshooting some random web app. As it turns out, poor memory is the gift that keeps on giving, and my own memory is definitely feeling the festive nature of the holidays. In a very small nutshell, I forgot the following.

Every application you set up in a web site (including the root application) ends up in its own application domain. It'll have its own set of assemblies, get JIT'd independently, and of course be completely isolated from the contents of other application domains (Session, statics, Cache, etc. won't be shared).

Refresher

Process: Contains the code and data of a program, along with at least one thread. Represents a boundary that stops it from wreaking havoc on other processes and that requires special techniques to communicate across. Runs in a security context which dictates what it can and can't do. If it's running .NET code, it contains one or more application domains.

An example of a process might be notepad.exe or, if we're talking about IIS, aspnet_wp.exe (Windows XP, Windows 2000) or w3wp.exe (Windows Server 2003).

Application Domain: More efficient than processes. One process can load the .NET framework once and host many application domains. Each app domain might load its own assemblies (or even its own child application domains), but doesn't have to reload the overhead of the .NET framework. Since each application domain runs within the .NET framework, it benefits from the features thereof (things like Code Access Security, managed memory, JIT compiling, etc.).

Application domains also represent a boundary that you'll need special techniques to communicate across. Since more than one can be hosted in a process, it's common for the .NET Framework to load and unload application domains at runtime. This is what happens when you recycle an IIS application pool: the application domains within it are unloaded and new app domains are created to service new requests.
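
To make that concrete, here's a small console sketch (plain .NET, nothing IIS-specific) showing that each application domain gets its own copy of static state and can be torn down without touching the rest of the process:

using System;

class AppDomainDemo
{
    // Statics live per application domain, not per process.
    static int counter = 0;

    static void Main()
    {
        counter = 42;
        Console.WriteLine("Default domain: {0}, counter = {1}",
            AppDomain.CurrentDomain.FriendlyName, counter);

        // Spin up a second, isolated application domain in the same process.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");

        // Run a static method over in the other domain. It sees its own copy
        // of the static field (still 0), not the 42 we set above.
        sandbox.DoCallBack(PrintState);

        // Tear the whole domain down without touching the rest of the process.
        AppDomain.Unload(sandbox);
    }

    static void PrintState()
    {
        Console.WriteLine("Now running in: {0}, counter = {1}",
            AppDomain.CurrentDomain.FriendlyName, counter);
    }
}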

In ASP.NET, an application domain is created for every virtual directory that you configure as an application in IIS. Each can have its own bin folder, web.config, and IIS application pool.

It's worth mentioning that even though there's an application domain boundary between applications, they can still inherit web.config settings from applications above them (if one sits in a parent folder).

A high level view of Processes, the .NET Runtime, Application Domains, and the assemblies they in turn load might look like the following picture.

Application Pools (IIS 6 and above): Host one or more applications (application domains if we're talking about .NET code). If an application pool is running in worker process isolation mode, it can also spread those applications over one or more processes (a web garden)! Application pools allow you a great deal of flexibility when it comes to deciding how much isolation you want to give your applications. You can use them to provide process-level isolation for each application (each application gets its own w3wp.exe, or more if you web garden), or you can combine multiple applications in a single application pool (which saves resources).

Below, an application pool (".NET v2.0") spreading its work over multiple processes.

And We're Back

Every now and then I get a little scared of the complexity that will be in development environments 20 years from now. Yes, the stuff being written will be extremely cool, and the tools that get used will no doubt be just as impressive. That being said, there will be a lot of complexity in play and a tonne to be aware of.

With .NET v4.0 in the works and a lot of its focus seemingly on parallel computing (ie. PLINQ), I can't imagine the computing world is about to get much simpler.

Keep your seat belt on.

Best,
Tyler

Sunday, December 7, 2008

The SharePoint Certification Gauntlet

Finish Line

This week I finally finished the last of the SharePoint certifications. They include:

  • MCTS: 70-542 (MOSS Application Development)
  • MCTS: 70-541 (WSS Application Development)
  • MCTS: 70-630 (MOSS Configuration)
  • MCTS: 70-631 (WSS Configuration)

Doing all four in a single year was a little more time consuming than I thought it would be, and I'm glad it's finally over.

The Value

While certifications themselves don't really get you deep into a subject matter (at least not that I've noticed), they do give you a really good idea of the surface area. At that point you're in a good position to explore the topics on your own and get deep into a few.

Here's a more concrete example. A cert in ASP.NET won't get you deep into HttpModules, but it will tell you what they are and what you might use them for. The next time you hit a problem that involves, say, URL rewriting, you've still got some learning to do, but you're a lot less likely to leave the reservation completely and come up with something home-brewed and semi-exotic.

To me that's one of the biggest values of certifications: they try to clearly define the scope of the technology. After a cert you should at least know what you don't know about a given tech stack. At that point you're a lot less likely to write a bunch of code to solve a problem that your tech stack is naturally geared towards solving.

Rounding The Edges

Of course each of the four SharePoint exams has a slightly different focus. It's also worth mentioning that two of them don't really have anything to do with writing code; they're all about configuration. I thought these were important because of SharePoint's farm footprint. The platform is a lot more than a single web server/database. It's really a series of services working together on many different machines. When all these services act in concert, the resulting ballet provides a pretty decent platform for collaboration.

Because there's so much in play (and so much complexity), it seemed just as important to me to learn the infrastructure nuts and bolts as it was to learn the various APIs.

This sentiment was recently corroborated by the SharePoint Master program, which involves the same four certifications. I'm pretty sure the program's content authors are on the same page: you can't be a "SharePoint Master" unless you have a holistic knowledge of both infrastructure and application development.

Go For It

For those wondering how to go about getting a certification, I'd encourage you to give one a try. The experience alone will tell you whether it was worth your while. The typical certification route looks like this.

  1. Find a technology that interests you and a certification path on the Microsoft Learning web site.
  2. Read about the test requirements. This may involve one or more exams that each have a preparation guide. Be sure to follow the preparation guide.
  3. Study up. This might involve some online learning, some books or just spending a lot of time on the MSDN.
  4. Take a practice test. Real exams cost $125 and a couple hours of your time, and it's doubtful you'd want to write the real exam more than once. Most exams require at least 70% as a passing grade; I've written exams that required as much as 80% to pass. You can often find free practice exams online; failing that, just pay the $70 to a practice test provider (MeasureUp, Transcender, TestKing, etc...). The idea is to make sure you're ready before scheduling the real exam.
  5. Schedule an exam. For Microsoft exams you'll most likely end up going through Prometric or Pearson Vue. Their web site will book you an appointment with a testing provider (most often some IT college or learning center). Exams cost $125 USD and usually need to be scheduled at least 48 hours prior to writing.
  6. Show up and ace the exam!

Everyone learns at a different pace, but expect to spend at least fifteen hours going over material and practice exams for your first certification. After writing a couple exams you'll notice your preparation start to decrease as you start to streamline the process.

After finishing a certification you'll be given an MCP ID (if you don't already have one), a welcome kit, a certificate, and use of a certification logo (below) for business cards and such. I don't personally make use of the last two, but they may help demonstrate to your boss that you're passionate about technology, or convince women you're able to commit to something.

Remember that certs are just a part of your learning continuum, supplement them where need be. They're nice to have, but by no means the finish line.

70-542 (MOSS Dev)   70-630 (MOSS Config)   70-631 (WSS Config)   70-541 (WSS App Dev)

Best,
Tyler

Tuesday, December 2, 2008

Windows XP Unzip Errors With SharpZipLib

Great Utility

SharpZipLib is a great little library that helps you programmatically compress/decompress zip archives. If your needs aren't too exotic (ie. you need to programmatically zip/unzip a series of files/folders) this could very well be your ticket.

You've probably heard of it before (it's been around for quite a while); some sample usage might look like this (it compresses a single file):

// Requires: using System; using System.IO; using ICSharpCode.SharpZipLib.Zip;
using (ZipOutputStream zipStream = new ZipOutputStream(File.Create(zipFilePath)))
{
    // Compression level 0-9 (9 is highest)
    zipStream.SetLevel(GetCompressionLevel());

    // Add an entry to our zip file
    ZipEntry entry = new ZipEntry(Path.GetFileName(sourceFilePath));
    entry.DateTime = DateTime.Now;
    zipStream.PutNextEntry(entry);

    byte[] buffer = new byte[4096];
    int byteCount = 0;

    // Stream the source file into the archive in 4KB chunks
    using (FileStream inputStream = File.OpenRead(sourceFilePath))
    {
        byteCount = inputStream.Read(buffer, 0, buffer.Length);
        while (byteCount > 0)
        {
            zipStream.Write(buffer, 0, byteCount);
            byteCount = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}

This is pretty normal usage: it creates an archive and adds a zip entry to it. What might confuse you, though, is the error you'll get if you try to unpack the archive using a legacy unzip tool (ie. the stock Windows XP decompression tool).

The stock Windows XP extraction tool won't be able to unpack it, and will complain that the archive is "invalid or corrupted" when you try to extract its contents.

The Compressed (zipped) Folder is invalid or corrupted.

Zip64 Extensions

What's happening here is that SharpZipLib is enabling Zip64 extensions for the archive, and some older utilities can't read Zip64 extensions. You could simply turn Zip64 extensions off, but that will cause problems when you start adding files larger than 4GB to your archive.
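
For what it's worth, the SharpZipLib builds I've used expose a UseZip64 property on ZipOutputStream (an enum with Off/On/Dynamic values; double check it exists in your version) that lets you switch the extensions off explicitly, as long as you're positive no file will ever exceed 4GB:

// Only do this if you're sure no file in the archive will ever exceed 4GB.
using (ZipOutputStream zipStream = new ZipOutputStream(File.Create(zipFilePath)))
{
    zipStream.UseZip64 = UseZip64.Off;
    // ...add entries exactly as before...
}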

A better solution is to make a mild tweak to the way we add files to the archive. By specifying the size of the file we're adding, the ZipOutputStream can decide whether or not to use the Zip64 extensions. If we don't need them then they'll be turned off automatically. The mild tweak below fixes the error from the above code:

...
ZipEntry entry = new ZipEntry(Path.GetFileName(sourceFilePath));
entry.DateTime = DateTime.Now;

/* By specifying a size, SharpZipLib will turn on/off UseZip64 based on the file sizes. If Zip64 is ON
 * some legacy zip utilities (ie. Windows XP) who can't read Zip64 will be unable to unpack the archive.
 * If Zip64 is OFF, zip archives will be unable to support files larger than 4GB. */
entry.Size = new FileInfo(sourceFilePath).Length;
zipStream.PutNextEntry(entry);
...
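
And since the post opened with "compress/decompress", here's a rough sketch of the unpack side using ZipInputStream (same usings as above; extractFolderPath is just a placeholder for wherever you want the files to land):

using (ZipInputStream zipStream = new ZipInputStream(File.OpenRead(zipFilePath)))
{
    ZipEntry entry;
    while ((entry = zipStream.GetNextEntry()) != null)
    {
        // Skip directory entries; only write out actual files.
        if (!entry.IsFile)
            continue;

        string destinationPath = Path.Combine(extractFolderPath, entry.Name);
        Directory.CreateDirectory(Path.GetDirectoryName(destinationPath));

        using (FileStream outputStream = File.Create(destinationPath))
        {
            byte[] buffer = new byte[4096];
            int byteCount;
            while ((byteCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                outputStream.Write(buffer, 0, byteCount);
            }
        }
    }
}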

Hope that helps someone. That error drove me crazy for a while and I didn't find much via Google.

Best,
Tyler

Monday, December 1, 2008

Does SharePoint Lend To High Availability?

All This Hardware And No Uptime

SharePoint is pretty heavy. I often think of it as an 800-pound gorilla that stopped exercising and let itself go. To handle all the services that run within a farm and still provide decent response times to users, a fair amount of hardware usually gets provisioned to pick up the slack.

I'm talking about real iron here. Large farms featuring clustered SQL Servers, redundant application servers, and a series of web front ends balanced with either a network load balancer or a Microsoft NLB cluster.

One might look at all this gear and think that, as a result, the farm is almost guaranteed to enjoy some pretty high availability, right? Well, I guess that depends on what you call high availability.

A Desire for High Availability

Total downtime (HH:MM:SS)

Availability    Per day       Per month    Per year
99.999%         00:00:00.4    00:00:26     00:05:15
99.99%          00:00:08      00:04:22     00:52:35
99.9%           00:01:26      00:43:49     08:45:56
99%             00:14:23      07:18:17     87:39:29
For clients that are running SharePoint internally, uptime is probably important but not a huge priority. Clients that use SharePoint for internet-facing applications are another matter. These users are far more likely to use SharePoint to buttress e-commerce offerings or brand efforts. These businesses usually want high availability and may even ask for four to five nines of uptime (99.99%-99.999%). Five nines (sometimes called the holy grail of uptime) equates to being down no more than 5 minutes and 15 seconds a year. It's a bold proposition and requires a great deal of planning and forethought. Still, if it's doable, all this hardware should set you in the right direction, right?
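
If you ever want to sanity-check those numbers, or work out the budget for a different SLA, it's just arithmetic. A quick sketch like this reproduces the per-year column (give or take a little rounding on the length of a year):

// Downtime budget = (1 - availability) x time in the period (here, one year).
double[] availabilities = { 0.99999, 0.9999, 0.999, 0.99 };

foreach (double availability in availabilities)
{
    double minutesPerYear = 365.25 * 24 * 60 * (1 - availability);
    Console.WriteLine("{0:P3} uptime = {1} of downtime per year",
        availability, TimeSpan.FromMinutes(minutesPerYear));
}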

Heavy Patches, And Lots of Them

The biggest hurdle I've had in providing high availability with SharePoint has come from the patching procedures issued by Microsoft. Normally when updating applications/machines it's possible to update one machine at a time, using your load balancer to shelter that machine from production traffic. Once the machine has been updated you can bring it back into production and start updating one of its siblings. With SharePoint this process gets a little more complicated. Here are a couple of the reasons:

  • There's no uninstall/rollback for most SharePoint updates (your best bet for uninstall is a machine level backup).
  • The recommended install procedure dictates that you stop all IIS instances on the Web Front Ends. This makes it difficult to continue to provide service, or at the very least put up a stall/maintenance page.
  • The recommended install procedure asks that you upgrade all machines on the farm at the same time.
  • There's usually at least one machine in the farm that rejects the upgrade and needs to be troubleshot individually. For me this has often meant removing the machine from the farm, upgrading it, and then adding it back to the farm. This adds to downtime, especially if the server was serving a key role (ie: SSP host or the machine hosting the Central Administration web site).

Assuming you manage to make it through all of the above without a lot of downtime, how many times a year do you think you could do it and still maintain a reasonable downtime SLA? Before you answer that, consider all the updates that have come down the pipe for WSS since its RTM (it's SharePoint 2007, remember). And this is just the list of updates for WSS; there's a whole other table for MOSS (although most of the dates and versions coincide).

Update Name                         Version (owssrv.dll)    Date
Windows SharePoint Services v3.0    12.0.4518.1016 (RTM)    November 2006
October public update (2007)        12.0.6039.5000          October 2007
Service Pack 1                      12.0.6219.1000          December 2007
Post-Service Pack 1 rollup          12.0.6300.5000          March 2008
Infrastructure Update               12.0.6320.5000          July 2008
August Cumulative Update            12.0.6327.5000          August 2008
October Cumulative Update           12.0.6331.5000          October 2008

Don't get me wrong, updates are good. In fact, I like it when Microsoft fixes things, especially when the clients who have purchased MOSS have already paid potentially millions in licensing fees. I just wish these updates, which happen many times a year and provide critical fixes to expected functionality, came with better upgrade strategies.

Do SharePoint updates and the way in which SharePoint farms are upgraded make high availability a pipe dream? Does all that hardware do nothing except help the farm scale out?

A Little Transparency

In fact, all I'm really looking for from these updates is a little transparency. I'd be thrilled to get a little more detail about what's going on under the hood and what to do when the error log starts filling up.

I've yet to see a really good troubleshooting strategy or even deployment strategy that gives you good odds of limiting downtime when it comes time to roll out these upgrades.

We have a ticket open with MS support to take up this issue. The wait for SharePoint-related issues is still pretty long, but rest assured, should I come up with a good approach or find a good resource for these kinds of rollouts, you'll find it here.

Best,
Tyler