Sunday, November 30, 2014

Adjustments For SQL Azure – No OLE DB

Same Same, But Different

Lately, I’ve been working with SQL Azure a lot more. For some of our simpler projects the differences aren’t that palpable, but the more data-centric applications have started to tease out some of the more notable differences.

Over the next couple of months I’ll write up some of the more glaring ones and how they’re likely to impact development (e.g. throttling, connection dropping, missing system stored procedures, and statistics on sample slowdowns of some of our ETL on Basic, Standard, Professional, Web, and Business editions vs. on-premise deployments).

Today I’d like to talk about one of the documented, but less obvious differences. SQL Azure does not support OLE DB (at least not at the time of writing).

An OLE DB Diet

Even though connecting to SQL Azure via OLE DB is a documented limitation, it still works (i.e. no error is thrown, commands are processed, and result sets are returned).
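For example, something along the lines of the sketch below (the server, database, and credentials are placeholders) will happily open a connection through the SQL Server Native Client OLE DB provider and return results, despite being unsupported:

// A sketch of the unsupported OLE DB route, shown only because it (currently) still works.
// The server, database, and credentials below are placeholders.
using System;
using System.Data.OleDb;

class UnsupportedOleDbExample
{
    static void Main()
    {
        var connectionString =
            "Provider=SQLNCLI11;Data Source=tcp:yourserver.database.windows.net;" +
            "Initial Catalog=YourDatabase;User ID=youruser@yourserver;Password=YourPassword;";

        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand("SELECT @@VERSION", connection))
        {
            connection.Open();
            // Commands are processed and result sets come back, no errors thrown.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}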

Even commonly used sites like connectionstrings.com list OLE DB connections in their SQL Azure section, so it’s not obvious to most developers new to SQL Azure that this genre of providers is not supported. I wouldn’t be surprised if a lot of teams are using unsupported OLE DB providers.

In addition, most of the SSIS/ETL developers that I’ve worked with tend to favor OLE DB components. A lot of this stems from bad experiences with the initial ADO.NET implementations when they first came out (lack of 3rd-party support, and early ISV implementations that tended to be slower than their OLE DB counterparts), but pretty much all of these issues have since been addressed, and ADO.NET providers are in most cases equally efficient.

As an alternative, when connecting to SQL Azure you’re encouraged to use ADO.NET as a provider. This shouldn’t cause much acid reflux for most .NET developers, but what’s the difference, and should I care?
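For contrast, a minimal ADO.NET connection to SQL Azure looks like the sketch below (again, the server, database, and credentials are placeholders):

// A minimal sketch of the supported route: the ADO.NET provider for SQL Server
// (System.Data.SqlClient). Server, database, and credentials are placeholders.
using System;
using System.Data.SqlClient;

class AdoNetAzureExample
{
    static void Main()
    {
        var connectionString =
            "Server=tcp:yourserver.database.windows.net,1433;Database=YourDatabase;" +
            "User ID=youruser@yourserver;Password=YourPassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM SomeTable", connection))
        {
            connection.Open();
            // ExecuteScalar returns the first column of the first row of the result set.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}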

What are These Technologies Doing For Me Anyways?

Remember that OLE DB and ADO.NET are both data access APIs designed by Microsoft to provide common, reusable paradigms across vendor-specific implementations (e.g. DB2, PostgreSQL, Oracle, etc…).

OLE DB breaks up the API into contracts that should be implemented by OLE DB providers and OLE DB consumers, and is based on COM.

ADO.NET also dictates an API for providers and consumers, asking that they rally around the DbConnection, DbCommand, DbDataReader, and DbDataAdapter classes. The implementations for these are generally written in managed code.
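As a rough sketch of what that buys you (the provider name and connection string below are placeholders), consumer code written against these base classes doesn’t need to know which concrete provider it ends up talking to:

// A sketch of the provider-agnostic ADO.NET pattern: the consumer codes against the
// Db* base classes and the concrete provider is resolved by its invariant name.
// The provider name and connection string below are placeholders.
using System;
using System.Data.Common;

class ProviderAgnosticExample
{
    static void Main()
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");

        using (DbConnection connection = factory.CreateConnection())
        {
            connection.ConnectionString = "Server=.;Database=YourDatabase;Integrated Security=True;";
            connection.Open();

            using (DbCommand command = connection.CreateCommand())
            {
                command.CommandText = "SELECT name FROM sys.tables";
                using (DbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}

Swapping to a different vendor’s provider is then (mostly) a matter of changing the invariant name and the connection string.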

These days, popular software products almost always have equivalently speedy implementations in both flavors (OLE DB and ADO.NET).

But if you’re looking to connect to either an older system or one that has very sparse support, you might have an easier time finding an OLE DB provider (but this scenario should be rare).

So…What’s the Difference?

This is one of those questions that seldom receives a straightforward answer. In the case of SQL Azure…one isn’t supported, so use ADO.NET.

In the general case…it depends. When evaluating the differences between provider architectures, here are some points of interest that help make the decision:

  1. Tool Support: It’s probably still possible to find SSIS components (mostly from 3rd parties) that don’t support ADO.NET and DO support OLE DB, but this is starting to become relatively rare.
  2. Performance: It used to be that ADO.NET provider implementations were just managed facades around existing OLE DB implementations. But again for most well supported data stores and components, this is no longer the case.
  3. 32-bit vs. 64-bit Environments: If you have mixed environments (32-bit and 64-bit), a managed provider (like ADO.NET) is likely to simplify your deployments as you won’t need to worry about which assembly to deploy. With OLE DB you’ll need to ensure that the correct driver is deployed AND that there are both 32-bit and 64-bit implementations from the original vendor. If there aren’t, you can still run the 32-bit driver in a 64-bit environment with WOW64, but you won’t be able to address more than 4GB of memory (a 32-bit limitation) and you can’t mix architectures (i.e. 32-bit and 64-bit) in the same process.

In most cases you’re fine to go with the managed providers, but it’s always worth a cursory web search to get a feel for any advantages or pitfalls with alternative providers (OLE DB, ODBC, etc…).

Hope That Helps,
Tyler

Sunday, October 26, 2014

Abandoned Freebies – HTTP Cache

Lost Art

I’ve always found it odd that very few web developers spend much time getting deep into HTTP and some of its goodies. Even though HTTP is a technology that is at the core of all web apps (and most mobile ones!), a lot of its benefits aren’t fully leveraged. Case in point: ask the next web developer you talk to to name five HTTP request or response headers, and you’re likely to have yourself a short conversation.
This is truly unfortunate, because there’s a LOT of great functionality to be had from this layer that’s built into almost every client that’s consuming your site and services (read: no additional dependencies, it’s already deployed).
The opportunities to get free features out of HTTP are even more numerous these days, especially with the popularization of RESTful services and more common service-oriented architectures.
One of HTTP’s most often overlooked features is caching. I’ve blogged about client side caching before, but below is a tactical example of how it can be used.

You Should Probably Be Doing It Anyways

Here’s a nice blurb from the Wikipedia Representational state transfer article:
As on the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients from reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
Since RESTful services are supposed to describe the cacheability of their responses anyways, it’s not a terrible idea to implement HTTP caching first, and then worry about server-side caching.

Caching Example

Below is a simple example of a WCF web service method that delivers a cacheable response, where the client is instructed to cache the response for 10 seconds. Requests sent within 10 seconds of each other shouldn’t even leave the browser; they should be fulfilled from the browser cache.
Let’s walk through it:
  1. The Date property tells the client the time on the server. The client and server can be in two different time zones and a lot of properties like “max-age” are described in delta seconds. The delta is based off of the “Date” header, so you should supply it.
  2. If there’s a chance that HTTP 1.0 clients may be using your service then consider including the “Expires” header, as 1.0 clients don’t know about Cache-Control (which is an HTTP 1.1 construct).
  3. The Cache-Control header states that the method output is “private”, which tells intermediary proxies not to cache this content. Browsers, on the other hand, should cache this response, but hang on to it for no more than 10 seconds.
public string SomeCacheableWebMethod()
{
    var response = WebOperationContext.Current.OutgoingResponse;

    // 1. Tell the client what time the server thinks it is (deltas like max-age key off of this).
    response.Headers.Add("Date", DateTime.Now.ToUniversalTime().ToString("R"));
    // 2. Absolute expiry for HTTP 1.0 clients that don't understand Cache-Control.
    response.Headers.Add("Expires", DateTime.Now.AddSeconds(10).ToUniversalTime().ToString("R"));
    // 3. Don't cache on proxies, and keep it in the browser for no more than 10 seconds.
    response.Headers.Add("Cache-Control", "private, max-age=10");

    return DateTime.Now.ToString("R");
}

When consumed in a browser, the headers and activity look like the following:

Response headers with caching directives

Responses from the web method

Notice how the second and third requests came (from cache)? This is because browsers respect caching directives, and the same content can be served much faster from cache. The second and third requests didn’t even leave the browser.

If we wait for the 10 seconds to expire, the cache becomes stale and the browser issues a brand new request (the fourth request above).

Busting Through Caches

Well caching is great, but what if I don’t want a cached representation? Turns out HTTP has this construct built in too. Both the server and the client can tell intermediaries whether or not responses should be cached. A cursory web search will yield all of the details so I’ll omit them here with the exception of a client example.

When a client makes a request with the header “pragma: no-cache” or “cache-control: no-cache”, all of the caching intermediaries (from the browser, proxies, and the server) get out of the way and stop serving cached representations.

The first construct, pragma, is the HTTP 1.0 feature, while cache-control was introduced in HTTP 1.1.
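If you want to issue that kind of request from code rather than a browser, a minimal sketch with HttpClient looks like the following (the URL below is a placeholder):

// A sketch of a client that explicitly opts out of cached representations.
// The URL below is a placeholder.
using System;
using System.Net.Http;
using System.Net.Http.Headers;

class NoCacheClientExample
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // HTTP 1.1 directive.
            client.DefaultRequestHeaders.CacheControl =
                new CacheControlHeaderValue { NoCache = true };
            // HTTP 1.0 directive, for older clients and intermediaries.
            client.DefaultRequestHeaders.Pragma.Add(new NameValueHeaderValue("no-cache"));

            var body = client.GetStringAsync("http://example.com/SomeCacheableWebMethod").Result;
            Console.WriteLine(body);
        }
    }
}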

Consider the same request as before, but now with the cache-control: no-cache header added on (by the way, this is exactly what a browser does when you press CTRL+F5 [force refresh] in Windows browsers; check it out yourself).

Request with the no-cache directive

Now the browser doesn’t even look in its cache, even though it has a cached representation that is within 10 seconds of the last request. The browser sends out a whole new request (with a caching directive telling intermediaries to not return cached representations).

When you issue a request with the header cache-control: no-cache, you’re going to bust through all of the caching constructs, regardless of the other caching directives that came with the original response.

The new set of requests looks like the below (notice that it doesn’t come from cache).

Responses with the cache bypassed

Hopefully the above at least piques some interest in client-side caching. Using HTTP proxies and thoughtful caching instructions can scale applications by orders of magnitude, and in a cost-effective way.

Friday, August 29, 2014

Cloud Pricing: The Race to the Bottom Continues

Monthly Slogan: It’s Never Been Cheaper

A lot of our clients use cloud services like Microsoft Azure, Amazon WS, and the Google Cloud Platform. Some are doing BI tasks (think analytics services and storage) while others are hosting their production sites (think 24/7 scaling compute instances).

What continues to surprise me is the cadence of recurring price drops. This year itself was even more notable than 2013, with yet another round of massive price drops announced by Google, Amazon, and Microsoft.

Since 2008, Amazon has announced 43 price drops to its cloud services. That’s an average of over 7 price reductions a year, and 2014 still isn’t done yet! This trend has continued to put money back into the hands of CIOs who have made it a priority to shift infrastructure to cloud services (e.g. Google’s cloud storage got 68% cheaper in March).

Price Drops Across the Board

The price drops haven’t been uniform across the board. Consistent with last year, this year’s most massive price drops seem to be targeted towards compute and storage costs. Since compute costs are normally the highest component of the bill (70-90% of overall cloud costs), this is great news for those already invested in cloud computing infrastructure.

Below are some charts from RightScale speaking to trends in 2013. 2014 is likely to show just as aggressive price decreases.

2013 average price drop size

2013 price drops by service

2013 price drops by provider

Better Tools

It’s not enough just to mention the price drops. The tools for interacting with everything from cloud storage (queues, tables, blobs), hosted databases (SQL, NoSQL), compute/auto-scaling, and even PaaS deployments have also become a LOT easier to use in the last 12 months. This means your developers will likely spend even less time this year coming up to speed in these technologies than they would have last year.

The Water’s Warm

The cloud services story continues to get increasingly attractive. As more services become available, as the tool chain becomes progressively polished, and as more price drops follow; it may soon become a notable decision to not host your services in the cloud.

Sunday, June 15, 2014

Managing the JavaScript “this” keyword in TypeScript

Which This?

If you’ve worked with JavaScript for a while, you’ve likely been made aware of the unique power and confusion that comes with the “this” keyword. We’ve been using TypeScript in some of the single-page applications we’ve been working on, and it has a helpful syntax to help you manage the “this” keyword.

Examples

Consider the following TypeScript:

1 $('div').each(function (index: number, element: Element) {
2     $(this).find('p').each(function (index: number, element: Element) {
3         $(this).addClass('selected');
4     });
5 });

In each of these anonymous functions (lines 1 and 2 respectively) the keyword “this” (on lines 2 and 3) refers to the respective element that’s being iterated over (each div in the first function, each p in the second function).

However, using the => operator (sometimes called the “rocket ship” operator) when declaring an anonymous function changes the context of “this”.

If we rewrite the TypeScript to be:

$('div').each(function (index: number, element: Element) {
    $(this).find('p').each((index: number, element: Element) => {
        $(this).addClass('selected');
    });
});

The second function (that gets applied to each p tag) now has a “this” that refers to the parent scope. The “this” keyword now refers to a div. This is because the resulting JavaScript looks like:

$('div').each(function (index, element) {
    var _this = this;
    $(this).find('p').each(function (index, element) {
        $(_this).addClass('selected');
    });
});

Notice how the scope prior to the second (rocket-shipped) anonymous function becomes accessible inside the anonymous function. This can also be applied multiple times; e.g. if the parent TypeScript function had also been authored with the rocket ship syntax (() => {}), then the “this” keyword in all scopes would refer to the owner of the first function (the containing class or the window object in this case).

Wrapping Up

As applications are becoming increasingly functional, the context in which anonymous functions run becomes a more important tool for writing code. Knowing about some of TypeScript’s syntactic sugar helps us manage the “this” keyword without dropping down into JavaScript’s call, bind, and apply.

Wednesday, April 23, 2014

The Commoditization of Mapping

The Benefits of Doing Nothing

The longer you stay in software, the more technologies you’ll likely see become commoditized. Things that once used to be quite difficult, specialized, and costly inevitably become simple, accessible, and cheap (if not free).

Some examples of capabilities that have been commoditized might include:

  • Antivirus
  • CMSs
  • Source Control
  • Advanced hosting/infrastructure (e.g. Geo-redundant storage, elastic computing, CDNs)
  • Many tools in the Business Intelligence space (i.e. early BI maturity technologies like charting/reporting, analytics, visualization) 

The Sweet Spot

In consulting, this usually means there’s a period of time when technologies are recently commoditized and cost effective to implement, but that still provide strong differentiating value to customers.

You could even say that it’s the job of a good vendor to help you identify some of the high value, low cost capabilities that have recently matured in their space, and help you understand if it’s likely to help you accomplish your goals.

Commoditization is also a bit of a double-edged sword. On one hand, you as the developer have the chance to look like a rock star using new tools and libraries to efficiently solve what used to be intractable problems.

On the other, as these solutions start to permeate the marketplace, customers will eventually cease to see high value in these efforts, and will be more interested in the next greatest thing (which means you have some reading to do!).

Mapping on the Cheap

Some would argue that the emergence of high quality open source libraries is a notable milestone in commoditization. If that’s true, then client side mapping is well on its way. In 2005 when Google Maps first emerged, there wasn’t much competition in this space and it was mostly dominated by ESRI. Now, mapping is teeming with offerings from many providers, and open source components are becoming some of the most popular tools used to solve these once very specialized challenges.

Leaflet

I’d like to use Leaflet as an example. With 27KB of minified/gzipped JavaScript, devs can now solve what used to be some pretty troublesome mapping problems. This includes things like tile layers, markers and popups, vector overlays, and smooth panning and zooming.

And that’s just scratching the surface! Leaflet’s plugin inventory alone offers some compelling evidence that accessible client side mapping is pretty well solved.

Keeping Up

Regardless of how we feel about it, capabilities in IT will continue to commoditize over our careers. No matter how deep we get in a particular skillset, the value it adds will eventually diminish and we’ll have to either get deeper, or pick up some new tricks (ideally both).

Even though our old tricks will eventually cease to impress anyone, hopefully you’ll feel like you’re getting more done with fewer keystrokes.

Friday, January 31, 2014

A Decade of Mono

Still Kicking

For those who track this stuff, Mono will turn 10 years old this summer. Since June 30th, 2004, the Mono team has been relentlessly trailing the ever-evolving CLR (Microsoft’s implementation of the CLI), and hasn’t seemed to stop or get bored with the idea. At the time of writing, they still trail a little bit here and there, but for the most part have near feature parity with .NET 4.5.

It’s Prolific

One of the interesting things about Mono is that even though it may not be used as often as Microsoft’s implementation of the CLI (the CLR, which only runs on Windows), Mono has brought .NET to arguably more platforms, including:

  • Linux
  • Windows
  • OS X
  • BSD
  • Solaris
  • Nintendo Wii
  • PS 3
  • iOS
  • Android

And architecture wise, Mono runs on more instruction sets too:

  • x86
  • x64
  • IA64
  • PowerPC
  • Sparc
  • ARM
  • Alpha
  • S390/x (32/64)

How It Works

Although the details vary from platform to platform (especially for iOS/Android), effectively your application, when executed, is hosted and run inside the Mono runtime (just like apps are hosted within the CLR in the Windows world), and your code makes .NET calls against that runtime. Since Mono is an implementation of the CLI, you get all the things you’d expect from a .NET Framework implementation like:

  • Code Security
  • Garbage Collection
  • Exception Handling
  • Thread Management
  • Code Generation
  • etc…

Embedding

Turns out you can even embed .NET applications into C/C++ applications by hosting the Mono runtime (in your C application) and loading regular .NET assemblies into that runtime. Furthermore, once loaded, those .NET assemblies and the C code can interoperate, making and receiving calls from each other.

In a nutshell it looks like the below.

Hosting a .NET Managed Assembly in a C Application

This is often used in scripting scenarios, allowing you to author applications that benefit from low-level, very fast code for the scenarios that need it, while still allowing your team to write less mission-critical code in highly productive languages like C#, Ruby, Perl, etc… (all of which are .NET-enabled languages).

iOS/Android/OSX

Mono has always had the ability to run on Linux, but getting good integration (how do I call native APIs?) and UI building tools has always been challenging on increasingly exotic platforms. Xamarin aims to fix this. Although the tools are pretty spendy, Xamarin is one of the only solutions that allows you to author cross-platform code that calls against native APIs for iOS, Android, and OS X. They do this by hosting your code in a Mono runtime and providing a set of managed APIs that you can call against (which in turn call those native APIs).

Since .NET runs natively on Win Phone/Surface, it’s also included in the cross-platform reach.

Summary

When Microsoft created the CLI as an open specification, I doubt they dreamed that their idea of a platform-agnostic runtime would end up on so many systems. Even though the “write once, run anywhere” paradigm never really materialized completely for the CLI or the JRE, they continue to cover an incredibly large array of device profiles. With tools like Xamarin and Mono continuing to close the gap, a world of ever emerging devices doesn’t seem as daunting as it did a couple of years ago.