Friday, September 26, 2008

Five Things You Didn't Know JavaScript Could Do

The Stretch Mark of the Internet

Depending on who you talk to, JavaScript is held in very different light. On one hand, it's responsible for a lot of the browser exploits and popups that make the web irritate like a rash. It's also a language that didn't always have great IDE and debugging support, which made authoring anything of size in JavaScript more than a headache. On the other hand, it's really the only language you can use to write client side web applications (Flex/Silverlight aside).

As the web continues to change, JavaScript is starting to play a more important role. More and more, JavaScript is being viewed as the duct tape of the Internet. This funky interpreted language is slowly getting street cred for having a very expressive syntax that, when combined with a good client side API, can give developers what they need to build highly interactive sites. Without further ado, here are some cool JavaScript features, some of which have only just become available to .NET v3.0/v3.5 languages (like lambda expressions and extension methods).

Object, A Dictionary

JavaScript has a similar inheritance hierarchy to Java in that almost everything is an object or extends Object. Object itself (the class) is actually a dictionary-like data structure. The code snippets below are all equivalent.

var userObject = new Object();
userObject.lastLoginTime = new Date();
alert(userObject.lastLoginTime);
var userObject = {}; // equivalent to new Object()
userObject["lastLoginTime"] = new Date();
alert(userObject["lastLoginTime"]);
var userObject = { "lastLoginTime": new Date() };
alert(userObject.lastLoginTime);

The last one is a lot like C# v3.0 object initializers.
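
For comparison, here's a rough C# v3.0 equivalent (UserInfo is a made-up class, just for illustration):

// A hypothetical class for illustration.
class UserInfo
{
    public DateTime LastLoginTime { get; set; }
}

// C# v3.0 object initializer, much like the JavaScript object literal above.
var userObject = new UserInfo { LastLoginTime = DateTime.Now };

The big difference is that the C# version is statically typed, while the JavaScript object is just a bag of name/value pairs you can grow at runtime.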

Functions as Objects

Functions are also objects: you can assign them to variables, pass them into other functions, or put them into other data structures (e.g. an array of functions).

var addFunction = function(x, y)
{
return x+y;
};

//This is valid;
alert(addFunction(5,3));

function alertOutput(functionDef, input1, input2)
{
alert(functionDef(input1, input2));
}

alertOutput(addFunction, 5, 3);

Adding Functions and The Prototype Chain

Because JavaScript allows you to assign properties and methods at runtime, you can selectively grab an instance of an object and add functionality to it. Consider the following code, which adds an addDays() function to a single instance of a Date object.

var date = new Date();
date.addDays = function(days)
{
this.setDate(this.getDate() + days);
}
date.addDays(5); //works!

var date2 = new Date();
date2.addDays(5); //throws error!

But we can also add functionality to ALL instances of a class by extending the prototype chain. All objects in JavaScript have a root blueprint or original class, dubbed the prototype. By altering the prototype, ALL instances of that class will get the new functionality.

Date.prototype.addDays = function(days)
{
this.setDate(this.getDate() + days);
}

var date = new Date();
date.addDays(5); //works!!
var date2 = new Date();
date2.addDays(10); //also works!!

In the above example we give all Date objects (existing and future) additional functionality (the addDays method). Notice how similar this is to extension methods (except better, because it's not limited to static methods).
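
For comparison, here's what a similar trick looks like as a C# v3.0 extension method (IsWeekend is a made-up example, not a real framework method):

// Extension methods must live in a static class and are static themselves.
public static class DateTimeExtensions
{
    public static bool IsWeekend(this DateTime date)
    {
        return date.DayOfWeek == DayOfWeek.Saturday
            || date.DayOfWeek == DayOfWeek.Sunday;
    }
}

// Usage reads like an instance method, but it's really a static call:
// bool weekend = DateTime.Now.IsWeekend();

The prototype version above gives you a true instance method on every Date, no static helper class required.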

Closures

Closures allow you to create functions that build out expressions (read: functions) programmatically. These expressions can then be attached to an object, used to extend the prototype chain, etc... Here's an example of a closure:

//Takes a function that filters numbers and calls the function on
//it to build up a list of numbers that satisfy the function.
function filter(filterFunction, numbers)
{
var filteredNumbers = [];

for (var index = 0; index < numbers.length; index++)
{
if (filterFunction(numbers[index]))
{
filteredNumbers.push(numbers[index]);
}
}
return filteredNumbers;
}

//Creates a function (closure) that remembers the value "lowerBound"
//that gets passed in and keeps a copy of it.
function buildGreaterThanFunction(lowerBound)
{
return function (numberToCheck) {
return numberToCheck > lowerBound;
};
}

var numbers = [1, 15, 20, 4, 11, 9, 77, 102, 6];

var greaterThan7 = buildGreaterThanFunction(7);
var greaterThan15 = buildGreaterThanFunction(15);

numbers = filter(greaterThan7, numbers);
alert('Greater Than 7: ' + numbers);

numbers = filter(greaterThan15, numbers);
alert('Greater Than 15: ' + numbers);

This functionality bears a striking resemblance to lambda expressions.
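
If it helps, here's a rough C# v3.0 translation of buildGreaterThanFunction (a sketch only):

// The lambda captures (closes over) lowerBound, just like the JavaScript closure.
static Func<int, bool> BuildGreaterThanFunction(int lowerBound)
{
    return numberToCheck => numberToCheck > lowerBound;
}

// var greaterThan7 = BuildGreaterThanFunction(7);
// bool result = greaterThan7(11); // true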

Inheritance

There's probably no real surprise here, but JavaScript can also make use of inheritance. You can extend classes and override behaviors down the inheritance chain (polymorphism). You can also make methods private and hide the details of classes (encapsulation).

// Defines a Pet class constructor
function Pet(name)
{
this.getName = function() { return name; };
this.setName = function(newName) { name = newName; };
}

// Adds the Pet.toString() function for all Pet objects
Pet.prototype.toString = function()
{
return 'This pet\'s name is: ' + this.getName();
};
// end of class Pet

// Define Dog class constructor (Dog : Pet)
function Dog(name, breed)
{
// think Dog : base(name)
Pet.call(this, name);
this.getBreed = function() { return breed; };
}

// this makes Dog.prototype inherit from Pet.prototype
Dog.prototype = new Pet();

// Currently Dog.prototype.constructor (inherited
// from Pet.prototype) points to Pet. We want our
// Dog instances' constructor to point to Dog.
Dog.prototype.constructor = Dog;

// Now we override Pet's toString() for Dog instances
Dog.prototype.toString = function()
{
return 'This dog\'s name is: ' + this.getName() +
', and its breed is: ' + this.getBreed();
};
// end of class Dog

var parrotty = new Pet('Parrotty the Parrot');
var dog = new Dog('Buddy', 'Great Dane');
// test the new toString()
alert(parrotty);
alert(dog);

// Testing instanceof (similar to the `is` operator)
alert('Is dog instance of Dog? ' + (dog instanceof Dog)); //true
alert('Is dog instance of Pet? ' + (dog instanceof Pet)); //true
alert('Is dog instance of Object? '+(dog instanceof Object)); //true

Another Chance

Whether you like it or not, JavaScript may be your most important tool when it comes to client side web development these days. It plays a huge role and still doesn't really have a comparable alternative. Not only does the language deserve your attention, but getting deep into this scripting language will open up huge doors when it comes to writing interactive web applications.

My Best,
Tyler

Wednesday, September 24, 2008

Access Denied For Site Collection Admins

...But I Can't Add Any More Privileges

I was left scratching my head the other day when asked to troubleshoot an Access Denied error from SharePoint. These are normally a dime a dozen, but what set this one apart was that the given user was already a Site Collection administrator for that site collection. In fact, he also had Full Control on the entire Web Application through a Web Application Policy.

Access Denied, but to Site Collection Administrators!

The Delinquent Pages

The pages we were getting locked out of were themselves a clue: they were all User Profile and My Site related pages in the Shared Services Provider.

User Profile/My Site links that were yielding Access Denied.

They were actually all layout pages; their URLs were:

  • /_layouts/ProfMain.aspx (User profiles and properties)
  • /_layouts/PersonalSites.aspx (Profile services policies)
  • /_layouts/ManagePrivacyPolicy.aspx (My Site settings)

Once we dug into these pages it became pretty obvious what the problem was. Each of these layout pages contained an AdminCheck control which required the user to have the ManagePeople right.

<spswc:admincheck requiredright="ManagePeople" runat="server" />

If you look up the AdminCheck class on MSDN, you'll discover that it is "reserved for internal use and not intended to be used in your code", so there's not a lot of great documentation there.

The rights that drive the AdminCheck control come from the Microsoft.SharePoint.Portal.Security.PortalRight enum, which is quite different from the one that drives the more popular SPSecurityTrimmedControl. The SPSecurityTrimmedControl is set with rights from the Microsoft.SharePoint.SPBasePermissions enum. It's SPBasePermissions that you find scattered throughout SPSites, SPWebs, SPLists, and SPItems.
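
For contrast, a typical SPSecurityTrimmedControl looks something like this (a generic example, not taken from the affected pages):

<SharePoint:SPSecurityTrimmedControl PermissionsString="ManageWeb" runat="server">
    <!-- markup that only users with the ManageWeb permission will see -->
</SharePoint:SPSecurityTrimmedControl>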

The Shorter Story

As it turns out, you grant these privileges using the Personalization services permissions link, which is embarrassingly (for us) located in the same place as the layout pages that were throwing the error. This page lets you assign the appropriate Microsoft.SharePoint.Portal.Security.PortalRight rights to a given user. We ended up adding the Manage User Profiles permission, which gave us the appropriate rights and banished our angry Access Denied demons.

Personalization services permissions page.

Hope that helps someone. I guess the moral of the story here is that there are other permissions besides SPBasePermissions in play within SharePoint. So there, you've been warned.

Stumbling along,
Tyler

Friday, September 19, 2008

HttpModules, Your Best Friend

Keeps On Giving

HttpModules and HttpHandlers are probably two of the most versatile tools in ASP.NET. Learning them not only solidifies your understanding of the ASP.NET request life cycle but will definitely save your bacon in years to come. What makes HttpModules even more powerful these days is that IIS 7 now supports HttpModules that you, the developer, can write. A developer can now write an HttpModule that inspects/alters requests for all sites on a web server if need be! Talk about powerful.

Quick Refresher

Remember that HttpModules get called both before and after the end point (which is usually a page or some HttpHandler). They expose a series of events which you can subscribe to and use to alter/enhance the request. Picture a request coming in, passing through multiple HttpModules, hitting an HttpHandler (which could be an .aspx page), then passing back through the same modules. Note that you can "cherry pick" the events you want to subscribe to. Your HttpModule should be as lightweight as possible. In fact, it has to be, since your code will be called for every request in the web application! For more information on HttpHandlers/HttpModules check out this 15 Seconds article.

Real Life Usage

Not too long ago I blogged about locking down application pages in WSS/MOSS. Essentially there was a series of URLs that I didn't want the user to see, lest they get prompted with an NTLM authentication box. At the end of the post I also mentioned that an HttpModule could simply redirect people to another URL completely as a mild hack. Someone asked me to post the code, and I thought this would be as good an example as any for using an HttpModule.

What Does It Do?

This HttpModule is pretty straightforward: it hooks in at BeginRequest (quite early in the pipeline), and if the request matches a regular expression in the web.config, the user gets Response.Redirect-ed to another URL. Let's have a look.

/// <summary>
/// This class remaps all urls that match a set of regular expressions to
/// a given page.
/// </summary>

public class RequestRemapper : IHttpModule
{
//List of regular expressions we're going to use to identify which URLs
//to remap to a dummy page.
private static string[] regularExpressionList =
ConfigurationManager.AppSettings["RequestRemapperRegularExpressions"].
Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

//Page we're going to redirect people to.
private static string redirectUrl = ConfigurationManager.AppSettings
["RedirectUrl"];
private List<Regex> urlRegularExpressions;

/// <summary>
/// A list of compiled regular expressions.
/// </summary>

public List<Regex> UrlRegularExpressions
{
get
{
if (urlRegularExpressions == null)
{
urlRegularExpressions = new List<Regex>();

foreach (string regEx in regularExpressionList)
urlRegularExpressions.Add(new Regex(regEx, RegexOptions.Compiled
| RegexOptions.IgnoreCase));
}
return urlRegularExpressions;
}
}

/// <summary>
/// Inspects the url to see if it matches a list of regular expressions that
/// we don't want users to see.
/// </summary>

private void context_BeginRequest(object sender, EventArgs e)
{
HttpApplication application = sender as HttpApplication;
HttpRequest request = application.Request;
HttpResponse response = application.Response;

//If this isn't the redirectUrl and it matches a url in our filter list,
//redirect them.
if (redirectUrl != request.Url.OriginalString && isUrlFiltered(request.Url))
response.Redirect(redirectUrl);
}

/// <summary>
/// Checks to see if the given url matches any of those in
/// UrlRegularExpressions.
/// </summary>

private bool isUrlFiltered(Uri url)
{
foreach (Regex regEx in UrlRegularExpressions)
{
if (regEx.IsMatch(url.OriginalString))
return true;
}

return false;
}

#region IHttpModule Members

//Nothing to release
public void Dispose() { }

public void Init(HttpApplication context)
{
context.BeginRequest += new EventHandler(context_BeginRequest);
}

#endregion
}

In addition to the above code you also need to register the HttpModule in the web.config and add an appSetting which is a semicolon-delimited list of regular expressions describing all the "shapes" of URLs you want to remap.

<httpModules>
<add type="RequestRemapper.RequestRemapper, RequestRemapper, Version=1.0.0.0,
Culture=neutral, PublicKeyToken=null" name="RequestRemapper" />
</httpModules>
...
<appSettings>
<add key="RequestRemapperRegularExpressions" value="http[s]?://.+((pages[/]?)$|
(documents[/]?)$|(forms[/]?)$|(forms/allitems.aspx?)$);" />
<add key="RedirectUrl" value="http://someUrl"/>
</appSettings>

There are tonnes of variants on this code, but it's not a terrible starting point. The sample regular expression remaps all urls that end with pages, pages/, documents, documents/, forms, forms/, or forms/allitems.aspx. To help you build up and test a regular expression that matches your own URLs, I'd recommend using a regular expression tester tool.

Not A Lot of Alternatives

HttpModules allow you to do things that would be pretty darn difficult using other ASP.NET tools. They're uniquely positioned to give you a lot of control over the request/response pipeline, and that alone makes them worth learning. For the people asking for the MOSS redirect code, I hope this helps.

Take Care,
Tyler Holmes

Tuesday, September 16, 2008

What's the Difference Between Dispose() And Close()?

A Question

I was bouncing around StackOverflow the other day when a user asked an interesting question. What's the difference between calling Dispose() and calling Close()?

At first I got into a talk about Non-Deterministic Finalization and told the developer that you should always call Dispose(). In fact, a lot of the time Dispose() actually calls Close(), but with Dispose() you're assured that the object is cleaning up any unmanaged resources that would otherwise only be released when the Garbage Collector finally came around and called Finalize() on it. I mean, that's how you avoid Non-Deterministic Finalization, right?
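
As a refresher, the classic dispose pattern looks roughly like this (a generic sketch, not any particular class's implementation):

public class ResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // We've cleaned up deterministically, so tell the GC
        // not to bother calling Finalize() later.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // release managed resources here
            }
            // release unmanaged resources here
            disposed = true;
        }
    }

    ~ResourceHolder() // Finalize(): the GC's non-deterministic safety net
    {
        Dispose(false);
    }
}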

But Wait, There's More

I was pretty sure I'd gotten it right and delivered some good advice when I was one-upped by another user's post. I liked his answer more and thought he went one step further in delivering a better rule of thumb. In some cases, like SqlConnection, calling Dispose() actually resets the state of the object. In these cases you can call Close() more than once, but if you call Dispose() multiple times you can actually have an exception thrown. A decent rule of thumb to take away:

  • If you are done with an object then call Dispose() on it.
  • If you are planning on reusing the object but want to temporarily free some resources then use Close().

It's worth mentioning that this is a rule of thumb and there are bound to be exceptions. You're kind of hoping that the author of the code was on the same page as you when s/he wrote it.
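
To make the rule of thumb concrete, here's a rough sketch with SqlConnection (System.Data.SqlClient; connectionString is assumed):

// Sketch only: Close() to temporarily release, Dispose() when you're done for good.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ...do some work...

    connection.Close(); // releases the connection back to the pool
    connection.Open();  // perfectly legal: the object is still usable
} // Dispose() runs here; don't try to reuse the connection after this point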

Supposedly the above rule was pulled from the Framework Design Guidelines. If you really want to know what's going on, use .NET Reflector to fill in the details. When I went thumbing through common classes that implement IDisposable, I actually found a couple whose Close() calls Dispose(), although most that I found have Dispose() methods that end up calling Close() (in addition to doing some other cleanup and suppressing finalization).

That being said, all the Dispose() methods do a great job of cleanup and I would still advocate calling Dispose() when you're done with an object. Consider the following popular classes that implement IDisposable.

System.Data.SqlClient.SqlConnection.Dispose()
protected override void Dispose(bool disposing)
{
if (disposing)
{
this._userConnectionOptions = null;
this._poolGroup = null;
this.Close();
}
this.DisposeMe(disposing);
base.Dispose(disposing);
}

Microsoft.SharePoint.SPWeb.Dispose()

[SharePointPermission(SecurityAction.Demand, ObjectModel=true)]
public void Dispose()
{
this.Close();
}

System.IO.StreamWriter.Dispose()

protected virtual void Dispose(bool disposing)
{
if (disposing)
{
this.OutStream.Close();
}
}

In Close()ing

So that's my two cents on the whole Close() vs Dispose() topic. As long as you're thinking about your objects cleaning up after themselves, you're on the right trail. While garbage collection has come leaps and bounds in the last 10 years, it still needs a little help from you, the developer. Remember to have your objects clean up after themselves and keep it clean.

Best,
Tyler Holmes



Friday, September 12, 2008

Do You Cert?

Another Piece of Paper

It seems like certifications have really taken off over the last ten years. Maybe it's because they're another revenue stream for vendors, or maybe because some employers have started to show interest in them, but either way there's a near endless list of certifications available from every vendor known to man.

I'll preface this by stating I'm a little biased on the topic; I've probably written around ten cert exams myself over the last 4 years. Although I have yet to personally pay for an exam (they've been paid for by employers/schools/user groups), I would definitely say I'm fueling the fire.

What's The Point?

I'll be painfully blunt about this next point. Certifications don't get you deep in a technology. When I see a cert on a resume, all it really tells me is that this guy is at least a newbie (or better) across the entire technology. He's not necessarily a guru or even an intermediate unless he's also been working with the technology for at least a couple of years.

On the other hand, I know he's been exposed to and tested on (ideally) the entire breadth of the technology. That is, it's very likely that even though this guy may not be deep in all areas of the technology, at least he knows its boundaries and what it's supposed to do. A simpler way of putting it: at least this guy knows what he doesn't know. Someone being aware of what they don't know or understand is a great starting point for Google/MSDN/blogs etc...

Someone who's really well versed in a technology is often even better at telling you the areas they're weak in than the areas they're great at.

The Benefit of Certifications

I'm not going to say that all certs are created equal. I'm sure there are some that are empty and cover pretty shallow technologies. However, any time you actually sit down and try to become a student of a technology, you're going to start learning it a different way. There's very much a paradigm shift from "I need this technology to create a widget" to "I want to learn this technology as a topic". No longer are we focused explicitly on task driven tutorials or how to perform something syntactically; instead we start looking at the technology as a subject to be studied.

For me when it comes time to learn a new technology there's some key questions that I always want answers to:

  1. Where is this technology positioned w.r.t. the existing tech stack? What are its competitors/alternatives?
  2. What problem is this supposed to solve?
  3. What is the breadth/scope of this technology?

While a lot of this information is available on the web, following a cert path is almost guaranteed to lead you down a well organized trail that ensures you at least glance at all the technology has to offer.

Always Learning

The most melancholy truth about IT technology is that it has a shelf life. Just like your cert, it will become obsolete over time, and you'll be lucky if you can get 5 years out of it. That being said, I feel I owe a lot of my technical know-how to frequent certifications and, more importantly, to going through the process of learning a new technology in a very deliberate way.

I'll always remember a math teacher who told us that the material we were covering was more or less useless.

"You're not in school to learn random topics. You're here to learn how to learn!"

For me, sitting down once or twice a year and plying myself against a topic is a way for me to continue to get better at learning. It's one skill that you'll always need in this industry.

Sharpening the Saw,
Tyler Holmes

Tuesday, September 9, 2008

How They Danced Around Your Validators

How'd This Data Get In Here?

I see this a lot while doing code reviews and cleaning up older ASP.NET web applications. Often developers will throw down some validator but not include a server side validation function, or simply not call Page.IsValid server side! Consider the following form:

<%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <label for="<%= firstName.ClientID %>">First Name</label>
        <asp:TextBox ID="firstName" runat="server"></asp:TextBox>
        <asp:RequiredFieldValidator ID="firstNameValidator" runat="server" ControlToValidate="firstName" Display="dynamic" ErrorMessage="*"></asp:RequiredFieldValidator>

        <br />
        <asp:Button ID="submit" runat="server" Text="Submit" OnClick="Submit_Click" CausesValidation="true" />
    </div>
    </form>
</body>
</html>

There are really only two things going on here.

  • A required field validator is validating a text box client side (read: javascript) so that the user knows the text box is a required field.
  • A button is posting back the form to a server side event handler Submit_Click (below).

protected void Submit_Click(object sender, EventArgs e)
{
    //do something clever...
}

What I'm really trying to say is that there's a world of difference between the above and below:

protected void Submit_Click(object sender, EventArgs e)
{
    if (Page.IsValid)
    {
        //do something clever...
    }
}

Remember that Page.IsValid will return false if ANY of the validators have controls that contain invalid data. What's even better is that the page will re-evaluate these validators server side during the page lifecycle. Early in the lifecycle, the page walks all the validators (kept in Page.Validators) and asks them if they're valid. All you have to do to take advantage of this wonderful server side checking is call Page.IsValid before you assume your input is good.
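
If you ever want to know which validators failed, you can walk that same collection yourself. A quick sketch:

protected void Submit_Click(object sender, EventArgs e)
{
    if (!Page.IsValid)
    {
        // Page.Validators holds every validator on the page.
        foreach (IValidator validator in Page.Validators)
        {
            if (!validator.IsValid)
            {
                // e.g. log validator.ErrorMessage somewhere useful
            }
        }
        return;
    }

    //do something clever...
}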

Custom Validators

The only caveat to the above rule is that if you're implementing custom validators that do their validating with a custom javascript function on the client side, then you also need to reimplement that logic on the server side. For instance:

<asp:CustomValidator id="customValidator" runat="server" ControlToValidate="firstName" ClientValidationFunction="jsValidationFunction" OnServerValidate="serverValidationFunction">
</asp:CustomValidator>

The above requires that you not only write a client side validation function like below:

function jsValidationFunction(source, args)
{
    //Some bunch of logic
    args.IsValid = true;
}

But that you also write a server side validation function that repeats the same logic (except in C# [or whatever your code behind language is]).

protected void serverValidationFunction(object source, ServerValidateEventArgs args)
{
    //Some bunch of logic.
    args.IsValid = true;
}

Now ASP.NET knows to call your server side function and when you call Page.IsValid you'll be savvy as to whether or not the page's inputs are valid. That is, Page.IsValid will include the output of your custom server side validation function.

Is This Actually Important?

For a UI developer, a good portion of the work is validating input from the user. At the browser level this is often done in javascript, but javascript is pretty unreliable.

Even though it's currently 2008, a lot of users still browse the web with javascript turned off (especially if they want to dodge your pesky validators). Even if they leave javascript turned on, they can still use a tool like Tamper Data (a FireFox extension) to simply alter the request in flight. In fact, doing this is dead simple.

Want to dodge a client side validator?

  1. Download FireFox and Tamper Data.
  2. Visit a web site that has form validation.
  3. Click Tools -> Tamper Data. Then click Start Tamper in the top left.
  4. Fill out and submit the form.
  5. When Tamper Data asks you if you want to tamper, do so and change the request values to anything you want. At this point there's nothing javascript can do to stop you from sending bogus values to the page.

Tools like Tamper Data make it easy for any client to dodge your javascript validation.

Hopefully this reinforces how important it is to validate server side! Don't trust your documents; people are going to tamper with them!

For a complete breakdown of ASP.NET validators and what they can do, check out this great Code Project article written by Paul Riley.

Best,
Tyler Holmes

Friday, September 5, 2008

Google's Awesome New Browser: Chrome

...Another Browser, Are You Kidding Me?

When I first heard that Google had decided to add yet another browser to the world I couldn't help but groan. As a web developer I hate the fact that I'm often developing content that will behave differently in a plethora of idiosyncratic browsers.

I also couldn't see any benefit to another browser. How could this possibly help the web world? Won't another browser just make it worse? But then I read the web comic and I started to believe.

Below is a quick breakdown of Google Chrome from 30,000 feet. In many ways Chrome really is radically different from existing browsers.

About The Browser

Chrome is a lightweight, completely open source browser from Google that boasts the simplest UI I've ever seen. It uses Apple's open source WebKit as a rendering engine and out of the gates is Acid2 compliant. Chrome also has its own JavaScript virtual machine called V8, which is ridiculously fast. At the time of writing Chrome was only available for Windows XP/Vista, but word on the street says that there'll be versions available for Linux and OS X "soon". Here are some of the things that I think make this browser worth your time.

Multi-Process

Not only does every tab inside Chrome run in its own process, but every plug-in does as well. Because of this, not only is there less memory fragmentation, but when a plug-in goes sour (Flash/Silverlight) or a page starts to misbehave, it only kills the tab. Chrome also comes with its own little Task Manager that lets you see how much memory/cpu/bandwidth each tab is consuming. You can also see these processes running around in the Windows Task Manager.

Google Chrome Task Manager View

Google Chrome Processes from Windows Task Manager

V8 - Not the Juice

V8 is Chrome's JavaScript engine. It supposedly boasts better garbage collection and more speed than other JavaScript runtimes. This is partly because V8 actually compiles your JavaScript to native machine code as opposed to interpreting it. I ran Chrome against some JavaScript benchmark sites and found it much faster than FireFox v2 and IE 7. That being said, Celtic Kane claims that it still runs slower than Opera v9.5.2.

Other Features

Chrome combines Google Search with a memory of site specific searches (e.g. searches you did on amazon.com or wikipedia.org) into a single text box dubbed the Omnibox. This box is supposed to help you find what you're looking for a lot faster than you would using the UI elements in traditional browsers.

The browser also supports a built-in privacy mode called Incognito, which is much like Safari's private browsing. It comes with the browser out of the box and it's dead simple to turn on (CTRL+SHIFT+N).

You can also view pages in Chrome's application mode, which basically strips away all the browser chrome and allows you to focus on just the UI of the site you're on. This is ideal for web sites which are really web applications and have comprehensive UIs of their own.

There's also a phishing filter service offered by Chrome which will try to detect phishing urls and protect you from getting duped by these sites. This behaves a lot like IE 7's or FireFox 3's phishing filters.

Chrome Start Page

If you're into splash/home pages, Chrome has an interesting take on home pages: it displays your 9 most visited sites. The Chrome team claims that 70% of your browser activity is revisiting the same content/sites over and over again, so this should speed up your browsing experience.

Chrome supposedly also has better security than most browsers in the sense that the rendering engine (WebKit) runs with a really low security profile AND in a security sandbox. Ideally it will become much more difficult for hackers to tack unwanted add-ons and extensions onto your browser.

The Best Kind of Open Source

The Chrome and V8 source code is released under a BSD license, which is one of the most liberal software licenses out there. In a nutshell, you can do anything you want with the code as long as you keep the copyright message in the code and waive the right to hold the author liable for any issues that come up. You can even repackage and sell the code commercially.

Give it a Try

I spent over two hours going through the web comic and the introduction video as Google explained why the world needed a new browser. I think the easiest way to see if YOU feel it's worth your while is to simply download the beta and browse around. Believe me when I say that it's very likely to become your next default browser.

Best,
Tyler

Thursday, September 4, 2008

Printing From The Web With CSS

All I Want To Do is Print!

Printing from the web has always been kind of a hack. I have yet to find a silver bullet that lets users manage how they print web content. This post will speak to two pretty decent techniques and what they're good at.

There's Wheat, and There's Chaff

When it comes to printing a web page there's traditionally a small section that you want to print and the rest of the page (navigation, footers, etc...) you could really do without. The following two techniques deal with doing just that, allowing the user to print specific sections that you dictate.

Hidden IFrame

The first technique involves putting a hidden IFrame in the page. When it comes time to print we inject the content we want into the hidden IFrame and call print on the IFrame. This is great because it requires very little effort on our part and we can cherry pick content we want to print.

Consider the following example:

The following JavaScript needs to be included in the page:

function printContent(iframeId, elementId)
{
   var element = document.getElementById(elementId);
   if (element != null && window.frames[iframeId] != null)
   {
      var content = "<html><body>"+element.innerHTML+"</body></html>";
      window.frames[iframeId].document.open();
      window.frames[iframeId].document.clear();
      window.frames[iframeId].document.write(content);
      window.frames[iframeId].document.close();
      window.frames[iframeId].focus();
      window.frames[iframeId].print();
   }
   else
   {
      alert("Unable to locate print resources, request aborted.");
   }
}

Then you simply need to throw down a hidden iframe like below:

<iframe id="printIframe" name="printIframe" scrolling="auto" frameborder="0" style="height: 0px; width: 0px;"></iframe>

And then some container that contains your content and a button to call print on the iframe:

<div id="printContent1">A bunch of content that will get printed by Print1</div>

<button onclick="printContent('printIframe','printContent1')">Print1 </button>

And there you have it: the ability to print discrete page sections. What's even better is that most of the CSS is honored during the print. The mild issue is that because the printing is actually done from the iframe as opposed to the original document, the user is left high and dry should they try to do a print preview from the browser. There's also a chance that if your CSS selectors are driven by a hierarchy of html elements (e.g. p div.content) and you don't include all the html in the hierarchy, they won't get applied during the print.

CSS and the @media Rule

Another way to print from the web is to use the @media CSS rule to specify a different set of CSS for the printer. This well supported rule allows you to apply different CSS depending on whether the client is a computer, mobile device, printer, web tv, etc... (more information can be found here). That is, when the user visits your page a normal set of CSS is used, and when they go to print (or print preview), some additional CSS or an entirely different set of CSS is used; it's your call.

The problem with using the @media rule is that without any help you'd need to reauthor the stylesheet, selectively hiding navigation areas while showing others. Fortunately we can take the lazy route by writing some JavaScript.

Here's a JavaScript/@media print technique which I'm going to share. Essentially we do 3 things:

  1. Add a script link to the following JavaScript.
    <script src="PrintHelper.js" type="text/javascript"></script>
  2. Add a CSS style that look like this:
    @media print
    {
       .DontPrintMe
       {
          display: none;
       }
    }
  3. Add a "PrintMe" class to any block of content you want to print:
    <div class="PrintMe">Printable Content</div>

Essentially the JavaScript does two things after the document has finished loading (a rough sketch of the script follows this list).

  • It looks at every element that has the class "PrintMe" and adds that same class to all of that element's ancestors (parents) and all of that element's descendants.
  • After that it takes all other elements that don't have this class and puts a "DontPrintMe" class on them.
  • Now when the user goes to print, only elements (and their children) that you specifically asked to show in the print will. All others will be hidden (because of the CSS rule we added).
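
I won't reproduce PrintHelper.js here, but a bare-bones sketch of that logic might look something like this (names and details are my own assumptions, not the original script):

// Sketch of the PrintHelper.js idea; the real script may differ.
window.onload = function()
{
   var all = document.getElementsByTagName("*");
   var marked = [];

   // Collect the elements explicitly tagged with "PrintMe".
   for (var i = 0; i < all.length; i++)
      if (hasClass(all[i], "PrintMe"))
         marked.push(all[i]);

   // Spread "PrintMe" to each tagged element's descendants and ancestors.
   for (var i = 0; i < marked.length; i++)
   {
      var children = marked[i].getElementsByTagName("*");
      for (var j = 0; j < children.length; j++)
         addClass(children[j], "PrintMe");

      var parent = marked[i].parentNode;
      while (parent != null && parent.nodeType == 1)
      {
         addClass(parent, "PrintMe");
         parent = parent.parentNode;
      }
   }

   // Everything else gets "DontPrintMe" and is hidden by the @media print rule.
   for (var i = 0; i < all.length; i++)
      if (!hasClass(all[i], "PrintMe"))
         addClass(all[i], "DontPrintMe");
};

function hasClass(element, className)
{
   return (" " + element.className + " ").indexOf(" " + className + " ") >= 0;
}

function addClass(element, className)
{
   if (!hasClass(element, className))
      element.className += " " + className;
}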

The best thing about this solution is that it's flexible AND it honors the print preview in browsers so users know exactly what's going to be printed. I liked this solution so much that I implemented it in my blog and now all my posts are print/preview savvy.

Wrap it up

During this post we looked at selective printing on the web with two different techniques.

In the first technique we injected specific pieces of content into an iframe and then called print() on the iframe in JavaScript. This was really straightforward to implement and allowed us to selectively print areas of the screen, as long as the user prints by clicking an HTML button that we created. It doesn't honor print preview, but it is pretty simple to set up.

In the second technique we used JavaScript to walk the document and tag all content that we didn't specifically mark as areas we wanted to print. We then added a rule that hides all those areas with CSS using the @Media print rule. This has the added benefit of working anytime the document is printed and honoring the browser's print preview.

I have to give a special thanks to James Fedor, who taught me about the IFrame printing; some of the code you saw in this post is derived from his work.

That's it, hope it helped someone.

My Best,
Tyler Holmes


Monday, September 1, 2008

Debug Any JavaScript In Your Browser With Visual Studio

What's Actually In This Page?

Web pages are getting pretty complex these days. What used to be your plain old HTML page could now reference multiple CSS files, have globs of XML embedded in it AND reference multiple JavaScript files which could be doing things like extending the DOM and DHTMLing a lot of markup into the document.

So how does one keep track of all this stuff? How does one even begin to put it all together?

While I don't have an easy answer to those questions, I do have a good tip for dealing with JavaScript. Not only can you use Visual Studio and other debuggers to debug JavaScript, but you can also use the Visual Studio Script Explorer to step into any JavaScript emitted in the page. That includes scripts coming from a WebResource.axd or any other type of end point that emits a JavaScript file. Let's have a look.

Debugging JavaScript

Most web developers already have a good handle on this. To debug JavaScript with Internet Explorer perform the following tasks.

  1. Open up Internet Explorer. Click Tools->Internet Options.
  2. Click on the Advanced tab.
  3. Under the Browsing category, ensure that "Disable script debugging (Internet Explorer)" is unchecked (below).

Enabling script debugging in Internet Explorer to enable JavaScript debugging.

Two Ways to Attach

Once this is done, you have two options. The first is to visit the page you want to debug and once the page is done loading, click View->Script Debugger->Open (or Break at Next Statement if you want to debug a specific JavaScript call). Then choose a New instance of Visual Studio 2005 (or 2008 if you prefer).

You'll now be able to debug the document and any JavaScript running on that page. Simply put down a breakpoint on any inline JavaScript or JavaScript function and you'll be able to stop execution of code, pull up immediate windows, quick watches, etc...

Debugging an HTML page with Visual Studio 2005/2008 and Internet Explorer.

Another way to trigger the debugger is to use the special debugger keyword in an inline JavaScript call, which will also allow you to attach Visual Studio (or the Script Debugger). This usually involves injecting some JavaScript into the page.

Consider the following html page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Untitled Page</title>
</head>
<body>
    <p>
        Some random html
    </p>
    <script type="text/javascript">
        var foo = "some string";
        debugger;
    </script>
</body>
</html>

Notice the use of the debugger keyword? This is a special statement which triggers a breakpoint and allows Visual Studio to attach and debug. This is often useful if you want to attach and debug before the page has finished loading in the browser. As an example, you could take the above html and use it to make a test.html page. When you open it in Internet Explorer you will have the option of attaching to it.

To use either option you need to enable script debugging in IE (see above steps 1 to 3).

Script Explorer

The last tip I have in this post is using the Script Explorer window. This is a window buried in Visual Studio that lets you see all the scripts in play.

Script Explorer in Visual Studio 2005/2008.

Consider the screen cap below, which is attached to the default.aspx of a SharePoint Team Site. This page has many JavaScripts in play, and the Script Explorer allows you to open up any of them and set breakpoints. This makes it a lot easier for you to debug a certain section of code, even if that code is buried in some WebResource.axd or coming from some other bizarre location. SharePoint developers will probably recognize WSS' init.js and core.js. We can double click on any of these JavaScript documents in the Script Explorer and set breakpoints to start debugging and troubleshooting.

To pull up the Script Explorer you can either press CTRL+ALT+N while debugging OR you can add it to a toolbar by doing the following:

  1. In Visual Studio click Tools->Customize
  2. Under the Commands tab click the Debug category.
  3. Find Script Explorer and drag it into a toolbar or menu (like Debug).

Using the Script Explorer with option two (the debugger keyword) gives you tremendous flexibility in seeing exactly what is happening JavaScript-wise in your page. This becomes quite powerful because you can attach very early in the document lifecycle, whereas with option one you can only attach after the document has finished loading.

Recap

In this post we looked at various ways to debug JavaScript using IE and Visual Studio. Although Visual Studio 2008 boasts the ability to debug JavaScript as a new feature, you've really been able to do it since at least Visual Studio 2003.

I know there are other JavaScript debuggers out there, like the FireFox extensions JavaScript Debugger and Firebug, but I tend to live in Visual Studio. It's also important to remember that JavaScript is interpreted by the browser, and you may get different results in different browsers since each has its own interpreter. As a result, some JavaScript that behaves correctly in one browser may act differently in another. When this happens you'll probably end up using Visual Studio to fix the script for IE and then use a FireFox JavaScript debugger to figure things out for the Fox.

Take Care,
Tyler Holmes