
Atlas October 2005

If you are anything like me, you consider a technology new as long as it's marked as experimental. Atlas still qualifies as new...or even pre-new. In any case, the October 2005 version of Atlas has been released and can be downloaded at the link below.

http://atlas.asp.net/ You can find a list of changes here: http://weblogs.asp.net/atlas/

Note that unlike WCF and WPF, which in my mind are stable for production, Atlas is still experimental and is only in the technical preview stages. Not that it will explode into pieces, but there is little to no documentation on almost all of Atlas and given the early nature of the product, things are almost guaranteed to change.

Cool SMS/VS2005 Integration Feature

Today I discovered a very weird feature involving SQL Server Management Studio 2005 ("SMS") and Visual Studio 2005 (of course, I'm using the FREE Standard Editions from the MSDN/TechNet seminars).

OK so here it is...

  • Open SMS
  • Navigate to a table and modify it.
  • Copy the text of one of the columns
  • Go to an ASPX page in Visual Studio 2005 and paste.

If you did it right you will see the weirdest thing in the world: it pastes a GridView linked to a SqlDataSource, which it also pastes.

<asp:GridView ID="GridView1" runat="server"
     DataSourceID="SqlDataSource1"
     EmptyDataText="There are no data records to display."
     AutoGenerateColumns="False">
    <Columns>
        <asp:BoundField DataField="ContactID"
            SortExpression="ContactID"
            HeaderText="ContactID">
        </asp:BoundField>
    </Columns>
</asp:GridView>
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    SelectCommand="SELECT [ContactID] FROM [Employee]"
    ConnectionString="<%$ ConnectionStrings:AdventureWorksConnectionString1 %>"
    ProviderName="<%$ ConnectionStrings:AdventureWorksConnectionString1.ProviderName %>">
</asp:SqlDataSource>

You will also find that it pastes the appropriate connection string directly into your web.config.

<add
 name="AdventureWorksConnectionString1"
 connectionString="[string omitted; it was LONG]"
/>

Cool, huh?

p.s. If you want to paste that particular name, as I wanted to do, you can always do the old school paste-into-notepad-copy-out-of-notepad trick that is a tried and true way to strip off Web Browser formatting.

User Control Error: The base class includes the field 'xxx', but its type (yyy) is not compatible with the type of control (zzz).

I'm not a huge user of user controls, but today I was porting an ASP app over to the real world and I figured the particular scenario I was working with didn't really warrant the power of a server control. So after I went and took the 12 seconds out of the day to create a user control, all was well. It was working beautifully and all that jazz...until I published the website to the test server!

The base class includes the field 'cltFemaleItems', but its type (Items)
is not compatible with the type of control (ASP.items_ascx).

WHAT!!!! I'm marking this one as an ASP.NET 2.0 bug. After a bit of searching I realized that there aren't really ANY workarounds for this online.

I went back and read the message again and noticed this small hint: "(ASP.items_ascx)". Below is what my user control class declaration and declarative directive looked like. Everything looks fine...

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="Items.ascx.cs" Inherits="Items" %>

public partial class Items : System.Web.UI.UserControl

Based on the hint in the error message, however, I changed the name of the control. Here is what I changed the above code to...

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="Items.ascx.cs" Inherits="items_ascx" %>

public partial class items_ascx : System.Web.UI.UserControl 

Then I republished and voila! All is well... That's one really weird ASP.NET 2.0 bug. Anyhow, this workaround seems to work well.

Data Feed Framework - February 2007

Back in late 2005 I created a system which I'm now calling the Data Feed Framework (DFF) to greatly simplify the creation of RSS feeds and the aggregation of RSS and Atom feeds on web sites. The goal of the feed creation portion of the system was to allow developers and IT professionals to create RSS feeds with little to no work at all, thereby allowing them to use RSS for purposes other than blogging or news. The goal of the aggregator portion of DFF was to simplify the integration of syndicated information into ASP.NET 2.0 pages.

The Data Feed Framework has two primary components:

  • FeedCreation
  • InfoBlocks

Using the FeedCreation mechanism, developers and IT professionals can quickly create RSS feeds to monitor internal SQL databases, create sales RSS feeds for management, or monitor whatever else they want. The aggregator portion of the system, called InfoBlocks, allows web developers to declare a control in ASP.NET 2.0, give it an RSS or Atom feed, and walk away having an InfoBlock on their page containing a list of feed entries for the RSS feed.

The two portions of the system work together, but can also be used separately.

To make explaining this a bit easier, below is a link to my documentation for the system. Below that are the links for the source of the system as well as screen shots. I'll soon be creating a video to go over the basic concepts of this system.

I can't overemphasize how simple this system makes RSS creation and RSS/Atom aggregation. It's literally a matter of writing a SQL statement. Absolutely no more work than that is required. The simplicity is very much like the simplicity in WCF. Where WCF relies primarily on attributes and a config file, I rely simply on a SQL Server 2005 table called "FeedCreation". You literally write a SQL statement and give it a name, and you have a complete RSS feed ready for corporate or public distribution. I'm planning on incorporating this system into the core of Minima in the next CTP.

The license for this is simple: use it all you want for whatever you want, customize it to your own needs, but 1) ALWAYS keep the copyright notice in there and 2) neither I nor my companies are liable for anything. The typical stuff...

Update: This project has been renamed to SQL RSS Services and has been updated for .NET 3.5 using LINQ and the syndication feed support in WCF.


Data Feed Framework Overview Video

To make understanding and usage easier, I recorded a video on my Data Feed Framework. This video is an overview, a demo, and basically video documentation. Everything in the video is covered in the documentation file I posted on the original blog entry.

Data Feed Framework (DFF) can be used to greatly simplify the creation of RSS feeds and the aggregation of RSS and Atom feeds on websites. Please see the original blog entry for more information (link below).

Minima Blog Engine February 2007 CTP Released!

Over the past few months I've been getting various requests for portions (or all) of my blog engine. The project was not open or shared source... until now. Having had enough requests, I figured I should finally go back through my code and do the refactoring I've been putting off for quite some time now. I went ahead and did my cleanup, added a few more features, streamlined the API, simplified the code tremendously, fixed up the database a bit, and made a sample application to ship with it. Now I'm finally ready to release this blog engine under the name Minima as a February 2007 CTP. The files are listed below. One file is the Minima solution and the other is the database (yes, RAR only-- I don't do ZIP).

As far as licensing... it's a shared source project. Meaning, I'm going to continue development on it and release new CTPs as time goes on. I have a list of things I want to implement in future releases and I'll be shipping those in future CTPs. The license does, however, allow for modifications... as that's the entire point! This is a template for your own blog system and you can add to it as you see fit. However, please be warned that I'll be releasing new versions in the future. So, you may want to keep track of your changes or communicate with me about them. You really don't want to get into the situation where you say "Oh man... he released an assembly to do exactly what I wanted... uh oh... I rebuilt this entire section... this is going to get sloppy!" Just be careful with your changes. Furthermore, no matter how much you change it, you must put somewhere on your blog that your blog either uses Minima or is based on Minima. Lastly, the disclaimer is the typical disclaimer: neither I nor my company will be liable for any usage in any way, shape, or form of either this application or derivatives of it.

By the way... this is why my blog was flaky lately. I've been doing constant deployments to production, which caused all kinds of problems as this web site was my QA system.

Now, here are the release notes as seen in the ReleaseNotes.xml in the MinimaLibrary project in the RAR. Please pay special attention to the "Technology" and "As a Training Tool" sections as it explains the technology in this application, which I think will serve as an example for each of us in many areas. This is why I'm labeling this entry with so many labels.

Purpose

Minima is designed to give developers a minimalistic template for creating a feature-rich alternative to Blogger, WordPress, and other large-scale blogging systems in a manner consistent with the technologies and design paradigms of ASP.NET 2.0, XHTML, CSS, ECMAScript, and the Framework Design Guidelines.

Minimalistic?

Minima is minimalistic in a number of respects. First, it does not overload itself with every possible feature in the known universe. Next, it's designed to put extra features in as add-ons, in an effort to keep the code somewhat maintainable. Furthermore, the primary way of interacting with Minima is a single facade (a class designed to make interaction with the internal mechanics easier) with very understandable methods. This facade is actually the API exposed as a WCF service. Finally, in this release there is no client application; however, as I said, there is a very easy-to-use API. It should cover most everything you need.

There are also other dimensions to its minimalism. For example, I put in my mini-exception monitoring system, which notifies me of any exceptions thrown from the web site. I could have used the Application Blocks, but I went the more minimal route instead. Be clear on this: I'm a complete minimalist and purist. I refuse to have multiple computers, never put two toppings on my ice cream, hate putting anything on my sandwiches, never use MODs for games, NEVER wear shirts with logos, and never wear more than 2 colors at a time. I hate stuff that gets overly complex. So, I'm a minimalist and this fits me.

Blog Management Application?

There is no management application in this release. What I personally use is a small interface I wrote in WPF, which communicates via WCF to the primary server. It was the first real WPF application I wrote, and I wrote it before I deeply understood XAML, so I wrote the entire thing using C#. (Those of you who were ASP/PHP masters before learning ASP.NET and therefore wrote your first project in pure C# without any markup will know what I mean.) I'm rebuilding it now in mostly XAML with a little code here and there for WCF interaction.

Having said all that, you can very easily write your own tool. Even still, I find SQL Server Management Studio to be one of the best front-ends ever made.

Windows Communication Foundation

The primary way to communicate with Minima is the MinimaFacade class. This class is used internally to get the information for the web site. It's also what you should use when writing your own management tool. Looking at the class you will ask yourself "Why in the world isn't this thing static!?". I didn't make it static because I wanted to apply a ServiceContract interface to it thereby giving it exposure as a potential WCF service. The web site, however, does use it statically via the MinimaFacadeCache class. Anyway, the point is, you can easily write your own remote management application using WPF, Winforms, or ASP.NET 2.0 by using WCF. Of course, if you want a secure channel with WCF... that you will have to add on your own as I didn't have an SSL certificate for testing purposes.
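That design is easy to sketch in code. Below is a rough, self-contained illustration of the idea (written in Java rather than the project's C#, and with made-up member names, so treat it as a sketch of the pattern rather than the real Minima API): an instance facade implementing a contract interface, consumed statically by the web site through a cache wrapper.

```java
// Stands in for the interface that would carry the WCF ServiceContract.
interface MinimaService {
    String getBlogEntryTitle(int entryId);
}

// The facade is an instance class so it can implement the contract
// interface and be exposed as a service endpoint.
class MinimaFacade implements MinimaService {
    public String getBlogEntryTitle(int entryId) {
        return "Entry " + entryId; // real code would query the DAL
    }
}

// The web site itself consumes the facade statically via a cache wrapper.
final class MinimaFacadeCache {
    private static final MinimaFacade INSTANCE = new MinimaFacade();
    private MinimaFacadeCache() { }
    static MinimaFacade instance() { return INSTANCE; }
}

public class FacadeDemo {
    public static void main(String[] args) {
        System.out.println(MinimaFacadeCache.instance().getBlogEntryTitle(42));
    }
}
```

This is exactly why the facade can't be a static class: a static class can't implement an interface, and the contract interface is what gives it exposure as a service.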

Potential Future Changes

There are some things I would definitely like to change in future CTPs of Minima. I have an entire list of things I want to either change, fix, or add. More information is forthcoming.

Primary Features

The primary features in Minima are just the ones that I just HAD to have. If I didn't absolutely need the feature, I probably didn't add it (but I may in the future!) A few things I needed are: "fake" paths for archives and labels, "fake" URLs for each blog entry, multiple "fake" URLs for each blog entry (sometimes I have a typo in the title of a blog entry, but by the time I find out, the blog entry is already popular--so I can't merely fix it--I need two URLs to point to the same thing), an almost completely database-driven design (including URL mappings), labels (not folders! I wanted many labels per blog entry), pure CSS layout and style, pure XHTML structure, and the ability to add, remove, or change a major feature on a whim! Now that last one is very important... if I want to change something, I can. This ability came in handy when I went from Blogger to my own engine and in the process lost my automatic Technorati ping. That's something I quickly added though.

Technology

The DAL was generated using LLBLGen's Self-Servicing template in a two-class scenario. Everything was written in C# 2.0 using ASP.NET 2.0 with a few bits of custom AJAX functionality (I didn't want to use Atlas on this one). All style and layout is CSS, as only people who are in desperate need of getting fired use tables for layout. The Technorati ping functionality is based on an abridgement of my XML service framework. The RSS feed creation ability is actually a function of the RSS.NET framework. I would have added Atom, but I've had major problems with the Atom.NET framework in the past. Finally, the database is SQL Server 2005 (Express in my case), using one stored procedure (which I would like to refactor into LLBLGen).

As a Training Tool

One of my intentions regarding Minima is to use it as a sample application for .NET training. For example, this is a great way to demonstrate the power and capabilities of HttpHandlers. It's also a good example of how LLBLGen can be effectively utilized. Furthermore, it demonstrates how you can quickly and efficiently use WCF to turn a simple facade into a multi-endpoint service. It also demonstrates manual AJAX, CSS theming, HttpWebRequest, proper use of global.asax, framework design guidelines, and type organization.

The API

For now, just look at the MinimaFacade and everything should become apparent. I'll be posting API samples in the future. See the Samples section below for some examples of using the API.

Update: Minima is now in the NetFXHarmonics Subversion repository at http://svn.netfxharmonics.com/Minima/tags/.

The Universal HttpHandlerFactory Technique

In the past two months I've done more with HttpHandlers than probably anything else. One technique that I find myself using a lot is one that uses a universal HttpHandlerFactory to filter ALL ASP.NET 2.0 traffic. In fact, this is exactly what I'm doing in the next release of Minima. The next release actually has many HttpHandlers, each utilized by a master HttpHandlerFactory. Here's an example of what I'm doing and how you can do it too:

First, I create a wildcard mapping in IIS to:

c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

When doing this you want to make sure to uncheck "Verify that file exists", or else your pages will have to actually exist. For most of the HttpHandlers that I create, the pages being accessed are completely imaginary.

Next, I create my HttpHandlerFactory. The normal way of doing this is to create a class which implements IHttpHandlerFactory, have your logic in the GetHandler method do whatever it must do, and then return an instance of a class implementing IHttpHandler. If I don't want to do anything fancy with the request, I can just process it as an ASPX page. Unfortunately, since we are going to run EVERYTHING in the website through the HttpHandlerFactory, we can't do that. Why? The HttpHandlerFactory that processes ASPX pages is PageHandlerFactory, which we can't simply return from our custom HttpHandlerFactory. Here's what I mean...

public IHttpHandler GetHandler(HttpContext context, String requestType, String url, String pathTranslated) {
    if (url.EndsWith("/feed/")) {
        return new FeedHttpHandler( );
    }
    else {
        return new PageHandlerFactory( );
    }
}

The preceding code in no way compiles. First off, PageHandlerFactory implements IHttpHandlerFactory (well, IHttpHandlerFactory2, which in turn implements IHttpHandlerFactory), NOT IHttpHandler. Secondly, though rendered irrelevant by the first reason, the single constructor for PageHandlerFactory is marked as protected internal.

Fortunately, however, we can get around this by inheriting from PageHandlerFactory and overriding GetHandler, which is marked as virtual.

Here's an example that does compile and works beautifully:

namespace MyProject.Web.HttpExtensions
{
    public class MyProjectHttpHandlerFactory : PageHandlerFactory
    {
        public override IHttpHandler GetHandler(HttpContext context, string requestType, string virtualPath, string path) {
            if (context.Request.Url.AbsoluteUri.EndsWith("/feed/")) {
                return new FeedHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/files/")) {
                return new FileMapperHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/service/endpoint/")) {
                return new XmlServiceHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/images/")) {
                return new ImageProcessorHttpHandler( );
            }
            else {
                return base.GetHandler(context, requestType, virtualPath, path);
            }
        }
    }
}

To complete the solution, you simply have to add the following in your web.config file:

<httpHandlers>
  <add verb="*" path="*" type="MyProject.Web.HttpExtensions.MyProjectHttpHandlerFactory"/>
</httpHandlers>

Now we are filtering all content through our HttpHandler. By doing this, you have full control of what URLs mean and do without having to screw with a ton of strange HttpHandlers in your web.config file. Furthermore, by doing it this way you get better control of what your web.config looks like. In Minima, for example, I have a type which controls all access to various configurations. My primary HttpHandlerFactory looks at this type to determine what paths do what and go where. You can either use the defaults in Minima or you can override them in the web.config file.

Regardless of what you do, the point is you can do anything you want. I often find myself creating virtual endpoints for web services, allowing access in a variety of ways. In one service I recently created, I actually have parameters coming in as part of the URL (http://MyWebsite/Endpoint/ABCCompany/). My HttpHandlerFactory notices that a certain handler is to handle that particular type of request and then returns an instance of that particular handler. The handler then obtains the parameters from the URL and processes the request appropriately. Very little work was done in the actual setup and almost no work was done in IIS; it's all magically done via the master HttpHandlerFactory.
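To make that parameter-extraction step concrete, here's a tiny standalone sketch (in Java rather than C#, so it can run on its own; the URL and the method name are just illustrations of the example above) of pulling a value like "ABCCompany" out of such a URL:

```java
import java.net.URI;

public class UrlParameterDemo {
    // Extract the last path segment, e.g. "ABCCompany" from
    // http://MyWebsite/Endpoint/ABCCompany/ -- the same kind of work
    // a handler would do after the factory routes the request to it.
    static String lastSegment(String url) {
        // Trailing empty strings are dropped by split, so the final
        // element is the last real segment of the path.
        String[] parts = URI.create(url).getPath().split("/");
        return parts.length > 0 ? parts[parts.length - 1] : "";
    }

    public static void main(String[] args) {
        System.out.println(lastSegment("http://MyWebsite/Endpoint/ABCCompany/"));
    }
}
```

In the real ASP.NET handler, you'd do the equivalent against context.Request.Url inside ProcessRequest.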

XmlHttp Service Interop - Part 1 (Simple Service Creation)

This entry is the first in a series on XmlHttp Service Interop.

In my day job I am constantly making diverse systems communicate. I make classic ASP talk to the .NET 2.0 Framework, .NET systems talk with Java systems, a Pascal-based system communicate with a custom .NET mail server, and even make .NET components access systems from the stone age. I love interop, but I'm finding that many people don't. From what I see, it seems to be a lack of understanding more so than a lack of desire. It's actually some pretty cool stuff and you can find great books on various interop topics.

While I do work a lot with COM, ES, and web service interop, my favorite communication mechanism is via straight XmlHttp calls. It's just so easy to do and support for it is just about universal. You can take JavaScript and make a call to ASP.NET, go to a VBS WScript and make a call to a web service, or force the oldest, nastiest product in your company to communicate with WS-* services. In this part of the series, we are going to discuss XmlHttp in general and see a call from JavaScript to a manually created service endpoint.

To start off with, let's make it clear what we are and are not talking about. We are not talking about sockets or direct TCP calls, nor are we talking about a service framework. XmlHttp is a way to transfer XML over HTTP. However, even though we talk about XML, as you use XmlHttp you'll find that XML isn't a requirement at all; so, at root, what we are doing is making simple HTTP calls.

To see what we're talking about in action, let's create a simple XML service endpoint that accepts a well-defined XML format to allow the sending of e-mail. Then, let's access the service via JavaScript. This is very similar to something I recently created to allow a very old system to send e-mails using the .NET Framework. Of course, in that situation I used the programming language of the system in question (Pascal), not JavaScript.

To begin with, let's create the client. I know it seems a bit backwards, but let's look at this from the standpoint of a framework designer: look at how it will be used first, then implement the mechanics. Now, the first thing we need for this is a simple "htm" document. I want the page to be "htm" for the sake of this demonstration, simply to show that there is no server-side processing at all in this page.

Next, we need a way to access our endpoint. I'm not going to get into severe detail about how to do this in every single browser in the world; rather, I'm only going to show the standardized way. You can quickly do a search online to see how to extend this behavior to IE5 and IE6.

Skipping the lame setup and stuff many 6th graders can do, let's get right to the core of what we are going to do. The full implementation of everything seen here is in an accompanying VS2005 solution. It would probably be a good idea to have that open as you go through this.

To send a request to a server, simply use syntax similar to the following:

var xmlhttp = new XMLHttpRequest( );
xmlhttp.open('POST', 'Service.aspx', true);
xmlhttp.onreadystatechange = function( ) {
    if (xmlhttp.readyState == 4) {
        alert(xmlhttp.responseText);
    }
};
xmlhttp.send(data);

This syntax works in any version of Firefox, the newer versions of Opera, and IE7 (or what I like to call "IE6.5"). Basically, what's happening here is this: you are creating a new instance of an HTTP requestor, giving it some connection information, setting a callback function, and sending some data to the service.

The part that you should look at closely is the XMLHttpRequest::open function (note: my double-colon syntax is from C++ and simply means Class::Member). This obviously takes three parameters: the HTTP method, the HTTP endpoint, and a boolean stating whether the call is asynchronous. I want this to be an asynchronous call, so I'm setting the third parameter to true. I'll come back to the HTTP method in a moment; the HTTP endpoint is just the service address.

After that we see that the property XMLHttpRequest::onreadystatechange is being assigned a JavaScript anonymous function. If you are unfamiliar with these, just think of them as anonymous delegates in C# 2.0. This is the function that's going to be called when the state of the XmlHttp call changes. Inside this function, there are a few properties you can look at, but here I'm only testing one: readyState. This property basically states the status of the call. Notice the XMLHttpRequest property is called "onreadystatechange", not "oncomplete". This function is actually called whenever the state of the HTTP request changes. When I test for readyState == 4, I'm looking for round-trip completion. Frankly, you'll probably never touch the values 1, 2, and 3, though you could check for 0, which means that the XMLHttpRequest::open function has not yet been called. In this situation, if the readyState is 4, then I want to display a message box showing the response content, which is accessible via XMLHttpRequest::responseText. One other very important property you will definitely be using a lot is XMLHttpRequest::status. This property gives values like 404, 415, 500, and so on. If the request did a successful round trip, the status will be 200, so that's something you'll probably be testing for quite a bit.

Finally we see the XMLHttpRequest::send method. This simply sends a set of data to the service... well, kind of. In XMLHttpRequest::open, the first parameter, the HTTP method, is very important. Depending on what you want to do, you will set it to either GET or POST. If you are calling a pre-existing page that has no idea what an HTTP stream is, but knows about querystrings, then you will want to use GET. In this situation, you will want to put parameters in the querystring of the HTTP endpoint, that is, in the second parameter of XMLHttpRequest::open. However, if you are creating your own service, you may want to use POST instead, as using POST makes development on both the client and the service simpler. On the client, you don't pack stuff into the querystring (though you still could), and on the server, you can access the data via a stream rather than by parsing the URL or doing any iteration. As that last sentence implies, by using the POST method you send the data you want to submit to the HTTP endpoint as a parameter of the XMLHttpRequest::send function. For those of you who understand WCF terminology, you can think of the HTTP method as being analogous to the WCF binding and the HTTP endpoint as being analogous to the WCF address. The only thing analogous to the WCF contract is the XML schema you use to create your data stream.

Now, since we are sending the information in the data variable to the service, we need to put something in it. For this service, I'm using the following XML, though it doesn't have to be XML at all.

var data = '';
data += '<Mail>';
data += '<ToAddresses>';
data += '<ToAddress>johndoe@tempuri.org</ToAddress>';
data += '</ToAddresses>';
data += '<CcAddresses>';
data += '</CcAddresses>';
data += '<BccAddresses>';
data += '</BccAddresses>';
data += '<FromAddress>no-reply@tempuri.org</FromAddress>';
data += '<Subject>XmlHttp Service Interop - Part 1</Subject>';
data += '<DateTime>03-08-07 2:26PM</DateTime>';
data += '<Body>This is the message body.</Body>';
data += '</Mail>';

Given proper event management in JavaScript, you have everything you need to have a fully functional client. Now onto the server.

As we saw when we looked at the client code, the service endpoint is Service.aspx. To help us focus on the task at hand, we aren't going to do anything fancy like using URL aliasing (aka URL rewriting) to make it look cooler, though in reality you would probably do that.

In the code behind for the Service.aspx, we have code that starts like this:

XmlDocument doc = new XmlDocument( );
doc.Load(Request.InputStream);
XmlNode documentRoot = doc.DocumentElement;
XmlNode mailRoot = documentRoot.SelectSingleNode("//Mail");

Here's what we're doing: creating a new XmlDocument and loading the data streamed from the client into it. Then we are getting the root of the document via XPath. The rest of the code is in the accompanying project and simply consists of a bunch of XPath queries to get the information from the document.
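For illustration, that parsing step is easy to demonstrate end to end. Here's a self-contained sketch (translated to Java, which ships with a comparable DOM and XPath API; the payload is the Mail document the client built earlier) showing the same XPath-driven extraction:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class MailParseDemo {
    // Load the posted XML and evaluate one XPath query against it --
    // the same way the service pulls each field out of the Mail document.
    static String query(String xml, String expression) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return XPathFactory.newInstance().newXPath().evaluate(expression, doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Same shape of payload the client sent; values are from the article.
        String xml = "<Mail>"
            + "<ToAddresses><ToAddress>johndoe@tempuri.org</ToAddress></ToAddresses>"
            + "<FromAddress>no-reply@tempuri.org</FromAddress>"
            + "<Subject>XmlHttp Service Interop - Part 1</Subject>"
            + "</Mail>";
        System.out.println(query(xml, "//Mail/ToAddresses/ToAddress"));
        System.out.println(query(xml, "//Mail/Subject"));
    }
}
```

The C# version in the accompanying project is the same idea: load the stream into an XmlDocument, then run one SelectSingleNode query per field.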

After we've found all the values we need in the XML document, either via XPath or another mechanism, we can do whatever we want with them. The important point is this: all we have to do is a simple Response.Write( ) to send data back to the client, which in turn changes the readyState in the previously seen JavaScript to 4, thus allowing the client to display the output in the alert window. It's really just as simple as this: the client sends stuff to the service, the service does something and sends stuff back.

Now, we could beef this up a bit by adding some HTTP headers. This is something you may find yourself doing often. To do this, use XMLHttpRequest::setRequestHeader to set a key/value pair on the connection. Here's an example.

xmlhttp.setRequestHeader('X-App-Source', 'My Client');

That 'X-App-Source' was completely made up. You could use 'My Awesome Service Caller' if you wanted. That doesn't matter; what does matter, however, is that you put this after the call to XMLHttpRequest::open, or else you will seriously want to throw something across the room, because it's a painfully subtle error that will cause the call to fail every time.

On the server side, to access a header, simply do this:

String xAppSource = Request.Headers["X-App-Source"];

I know. You were expecting something a bit more profound. Okay, I can satisfy that need. If you have many headers and you want them all, here's what you can do.

foreach (String header in Request.Headers) {
    // Do whatever you want...
}

Whatever you do, try to fight the temptation to do this:

Dictionary<String, String> headers = new Dictionary<String, String>( );
foreach (String header in Request.Headers) {
    headers.Add(header, Request.Headers[header]);
}

As nice as that looks, if you really want to have a headers key/value pair collection, you can just do this:

NameValueCollection headers = Request.Headers;

Regardless of what you do with the headers, remember that they are there for a reason and if you do a lot of service calls, you will find yourself using HTTP headers a lot. This is something you will see in the next part of the series.

So, that's a quick overview of XmlHttp. Please see the solution provided with this post for the full code. The next part of the series discusses making manual XmlHttp calls to WCF.

Materials

HnD Customer Support and Forum System

If you are curious about LLBLGen Pro, or have been using it for a while and want to further your skills with it, then you simply must check out Frans Bouma's HnD (Help and Discuss), an ASP.NET-based customer support and forum system.  It was actually released back in December 2006, but I only just now got around to checking it out and I have to say it's really, really nice.  But what else would you expect from the person who created the world's most powerful database abstraction system (LLBLGen Pro)?

You can actually see an example of HnD by going to LLBLGen Pro's support forum.  I've been using their support forum for a while now and I could seriously tell a difference in usability with their own system.  One feature of HnD I definitely want to mention is something that many forums don't have, but all need: when replying, it's critical to see the message you are replying to as you are typing (this is one of the reasons I switched to Gmail from Yahoo!)  Frans thought of this, and HnD has that ability.  HnD even allows for attachments and has an attachment approval system for moderators, which is really nice.  The feature list goes on and on.

Not only is the end product nice and easy to use, it's released with full source code (just as LLBLGen Pro is when you buy it).  However, unlike the commercial product LLBLGen Pro, HnD is released under the GPLv2 license, so... we can have all kinds of fun messing with it.  From my perspective, this is one of the greatest things about it and is exactly why I released Minima (a minimalistic ASP.NET 2.0 blog engine built using LLBLGen Pro).  Simple, to the point, source is provided, and the source is actually easy to navigate.

The solution is split into an SD.HnD.Utility project containing very base-level functionality (much like Minima's General project), an SD.HnD.DAL project containing the LLBLGen Pro DAL (much like Minima's MinimaDAL project), an SD.HnD.BL project containing "business logic" for HnD (much like Minima's MinimaLibrary project), and finally the web site itself.

This is an incredible project for anyone who wants to strengthen their LLBLGen Pro skills.  I can tell you that it has personally already helped me with my own LLBLGen Pro skills.  So, whether you want a really nice ASP.NET-based forum system, want to learn more about ASP.NET data binding, want to learn LLBLGen Pro for the first time, or just want to enhance your LLBLGen Pro skills, you should seriously consider nabbing the source code for HnD.

As a postscript, if you are unfamiliar with Frans Bouma's work, then you should check out his blog at the link below.  His work is always great and he definitely deserves his MVP many times over.

Related Links

Minima and Data Feed Framework Renamed and Explained

As of today I'm renaming any of my CTP releases to simply... "releases". That is, my Minima February 2007 CTP is now "Minima - February 2007 Release" and my Data Feed Framework February 2007 CTP is now "Data Feed Framework - February 2007 Release".

The motivation behind these is different for each. With regard to Minima, I knew it wouldn't be a long term or real production project, so announcing it as a CTP was a mistake on my part. Not a big deal. Lesson learned. Furthermore, I knew from the start that it would be more of a training tool than anything else. With regard to my Data Feed Framework (DFF), after using it in various areas I realized that my initial release was sufficient for most scenarios.

As a reminder... what is Minima? Minima is an ASP.NET 2.0 blog engine built using a SQL Server 2005 database and an LLBLGen Pro 2.0 DAL that provides the base functionality that most technical bloggers would need. Since its initial release I've added some functionality to my own instance of Minima and have used the February 2007 release as a training tool numerous times. Moving forward I want to make it very clear that Minima is primarily a training tool and, as such, it's a blog template that people learning ASP.NET can enhance and upgrade to aid in their own personal learning. Having said that, Minima is a full-fledged blog engine and it does have features such as labels and the ability to have more than one URL point to the same entry. In any case, if you want something to help you learn the various components of ASP.NET, please feel free to take Minima and use it as you please (see attribution/licensing note below).

By using Minima as a training tool you can learn much about base ASP.NET technology as well as manual Ajax principles, CSS theming, HttpWebRequest, proper use of global.asax, framework guidelines, and type organization. Furthermore you can use it to learn WCF, the power of HttpHandlers, and how to effectively utilize LLBLGen Pro. I will try to release versions of Minima to demonstrate the new technologies of the day. For example, when ASP.NET Ajax matures a bit (I find it slower than a dead turtle right now), I'll be adding portions to demonstrate ASP.NET Ajax. However, I will not be adding new functionality for the sake of functionality. If the functionality can be used as a training tool, then I will add it. Also, Minima is a great way of learning WPF. How so? I deliberately did NOT include a client! Why? Because I would rather you use whatever you want to create a simple form to access the API via WCF. The client I use is a very basic WPF client that calls the Minima WCF service. So far, Minima has been a very effective learning tool and I hope you will find it useful as well.

As for my Data Feed Framework (DFF), what is it? It's a self-contained framework that converts SQL statements into RSS feeds. I've used this in a number of places where creating a manual RSS feed and MANAGING the RSS feeds would just be too time consuming. For example, say you have an ASP.NET 2.0 e-commerce website and you have new products released at various intervals. Well, it would be AWESOME if you had an RSS feed to announce new products and sales without having to send out an ethically questionable e-mail blast. With DFF, you simply write something like "select Title=ProductName, Description=ProductDescription from Product where ProductDate > '7/11/07' order by ProductDate desc" and BAM, you have an RSS feed. Since an RSS feed is simply a select statement in a column in a row in a SQL Server table, you could also use it to dynamically create a custom feed for each person who wants to monitor the price of a certain product. It's very flexible. RSS feeds are accessible via their name, their ID, or you can use a "secret feed" to force a feed to be accessible via GUID only. DFF also includes some templating abilities to help customize the output of the RSS feed. In addition to the DFF SQL to RSS engine, DFF also includes an ASP.NET 2.0 control called an InfoBlock that allows you to consume any RSS feed and display it as an XHTML list. You can see an example of how to use an InfoBlock by looking at my blog. The boxes on the right are InfoBlocks which allow me to manage my lists using a SQL Server table (the DFF database contains a Snippet and a SnippetGroup table to store autonomous information like the information in these lists--please see the documentation for more information). DFF creates secret RSS feeds that my own personal version of Minima then consumes. With this as an example, it should be easy to see how DFF can be used in portals. My DFF demonstration video shows a bit more of that.

For more information regarding my Data Feed Framework (DFF), please skim the concise documentation for Data Feed Framework linked below. It would also probably be a good idea for you to watch my short video documentation for DFF as well. Please note that even though DFF is designed to be a production framework, it too can be used as a training tool. The most obvious thing you can learn is how to create data-bound server controls for ASP.NET 2.0 as this is exactly what an InfoBlock is.

You may use either the SQL->RSS engine or the InfoBlock portion or both. It's up to you. Also, as with all the .NET technologies that I create, the source and database files are included for extensibility and so you may use them as training tools (for yourself or for others). Lastly, for both Minima and Data Feed Framework, please remember to keep the license information intact and make it very clear that your work either uses or is based on whichever product you are using.

Minima - Links

Data Feed Framework - Links

NetFXHarmonics SolutionTemplate/E-Book

Recently I started putting together my standard .NET solution template for public release. This template contains the base architecture and functionality that all my projects need. It is also organized in a way that clearly separates each type of element into its own section to maximize expansion and ease of development.

In order to make sure that the solution is understandable by many different types of developers, there are commentaries in each file in the solution. In addition to this, many of the files have full chapter-length lessons on a development topic contained in that file. For example, in the Code/SampleDomManipulator.js file, I wrote a rather extensive introduction to JavaScript browser dynamics through DOM manipulation. Because of these lessons, this solution template is also a living .NET e-book.

Here is a list of some of the topics I've written about in this first release of my solution template; some of them are simple explanations and others are lengthy, detailed lessons.

  • HttpHandler Creation
  • HttpModule Creation
  • HttpHandlerFactory Creation

  • Custom Config Section Creation
  • .NET Tracing

  • MasterPages Concepts
  • Global.asax Usage

  • CSS Theming, Management, and Media-Specific Sheets

  • JavaScript Namespaces
  • JavaScript File Consolidation
  • Firefox Console Usage
  • JavaScript Anonymous Functions
  • JavaScript Multicast Event Handling
  • DOM Element Creation
  • DOM Element Manipulation
  • DOM Element Deletion
  • JavaScript Event Handling with Low Coupling

  • JavaScript GET/POST XmlHttp Service Interop
  • Manual XmlHttp Service Creation

  • JavaScript/CSS/ASP.NET/C# Code Separation
  • Highly Cohesive Type Organization

This solution template could be used as the basis for production projects or as a training utility for people new to ASP.NET; people new to JavaScript, DOM manipulation, or AJAX; or people who just want to learn how to organize their projects more effectively.

As with all my projects, but much more so with this one, I will be updating the solution template over time to account for more AJAX techniques and .NET technologies. I will also continue to expand the commentaries and lessons to the point where this solution itself becomes a case study, a sample application, and a book all wrapped up in one.

Links

Windows Live Writer RULES

Microsoft Windows Live Writer (Beta 2) is far and away one of the coolest tools I've used in a long time.  Since I created Minima, I had been using my own extremely lame WPF app to do all my posting and it made posting a bore.  I've been meaning to put some time into making a more interesting WPF app, but instead Windows Live Writer saved the day.  With this thing I can post new entries, save drafts, set labels, as well as view and edit previous entries.

Having said all that, setting it up wasn't that easy.  Well, the setup was simple, but figuring out what to set up wasn't.  I kept thinking that there was some .NET interface you had to implement, because the documentation kept talking about its API and gave COM and .NET examples.  Well, as it turns out, all you have to do is implement a well-known blogging API and point WLW to it!  In my case, I chose the Metaweblog API.

Setting up this API was actually rather simple, though it took some experimentation as I had never worked with it before.  Also, this API uses XML-RPC calls, and at first I figured I would have to write the XML listener and all XML messages manually.  It turns out that there's a nice API called XML-RPC.NET.  You set it up similarly to how you set up a WCF service: via interfaces.

Here's the basic idea behind the XML-RPC.NET API:

[XmlRpcService(Name = "Minima API", AutoDocumentation = true)]
[XmlRpcUrl("http://www.netfxharmonics.com/xml-rpc/")]
public class XmlRpcApi : XmlRpcService
{
    [XmlRpcMethod("blogger.getUsersBlogs")]
    public BlogInfo[] GetUsersBlogs(String key, String username, String password) {
        // Stuff goes here
    }
}

You just set two class-level attributes and then set a method-level attribute on each method.  Then you expose this class as an HttpHandler; the XmlRpcService class this class inherits from actually implements the IHttpHandler interface, which is rather convenient.

How did I know what methods I had to implement?  Well, the Metaweblog API "specification" is NOT a real specification; it's just an article that only mentions parts of it.  Also, XML-RPC.NET doesn't seem to have any useful tracing abilities, so that was out.  After a while though, I just found someone else's web site that implements the Metaweblog API and looked at their API documentation (you can just look at the sample API below).  It turns out that to use the Metaweblog API means you will be using parts of the Blogger API as well.  Interesting...

Being a minimalist though, I wasn't about to implement ALL functionality.  So I setup an ASPX page that took the Request.InputStream, pointed WLW at the page, and when WLW did a request I got an e-mail from my ASPX page.  When I saw that WLW was calling a specific function, I implemented that specific one.  Of course I also had to implement specific data structures as well.  Really though, all you have to do is use XML-RPC.NET to implement the functions it wants and give it the structures in the Metaweblog API (as you can see in the sample API below) and you're done.
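To give an idea of the data structures involved, here is a sketch of the kind of "Post" class I'm describing.  The member names (dateCreated, title, description, and so on) come from the Metaweblog API's struct definition; which members you actually need depends on which calls WLW makes, and XML-RPC.NET maps public fields to XML-RPC struct members by name:

```csharp
using System;

// Sketch of the Metaweblog "Post" struct modeled as a class (it's just
// XML over the wire; XML-RPC.NET maps these public fields by name).
public class Post
{
    public DateTime dateCreated;    // when the entry was posted
    public String title;            // entry title
    public String description;      // entry body (HTML)
    public String postid;           // server-assigned entry ID
    public String link;             // URL of the entry
    public String[] categories;     // label names
}
```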

[As a side note, if you aren't familiar with what I mean by accessing the Request.InputStream stream, this stream contains the information that comes to the ASPX page in the POST portion of the HTTP request.  You will often access this when you are creating manual XML services (see my XmlHttp Interop article below for an example).  Here is an example of getting the input stream:

Byte[] buffer = new Byte[context.Request.InputStream.Length];
context.Request.InputStream.Read(buffer, 0, (Int32)context.Request.InputStream.Length);
String postData = Encoding.UTF8.GetString(buffer);

You could use something like this to view what information is being sent from WLW.]

In my debugging I found that WLW has a tremendous number of extremely weird bugs.  For example, one of the structures I needed to implement was a structure called "Post" (I'm using the term structure, but it's just XML over the wire and it's a class in my API-- not a struct).  However, WLW would give me errors if some of the fields were null and would give me a different error if they weren't null, and even then, it was only on some functions.  So I had to create two versions of "Post".  One called "Post", which only had a few members, and the other called "FullPost", which had everything.  Strange.  Oh well... I've seen worse (ever use Internet Explorer?)

In the end though, WLW was talking seamlessly with my API.  I was really, really dreading making a better blog client as that felt like such a waste of time (and there was NO way I was going to use a web client-- WPF RULES!). Windows Live Writer (Beta 2) has already been a great help for me in the past week. Not just WLW itself though, but also some of the great plugins you can use with it. For example, in this write-up, I used a Visual Studio pasting plugin to allow me to copy from VS2005 and paste here to get fancy color syntax. Cool!

Related Links

Real World HttpModule Examples

Back when I first discovered HttpHandlers I remember being ecstatic that the control that I thought I lost when moving from PHP to ASP.NET was finally returned, but when I first discovered HttpModules I remember almost passing out at the level of power and control you get over the ASP.NET request pipeline.  Since then, I've used HttpHandlers, HttpHandlerFactories, and HttpModules in many projects, but I'm noticing that while many people have heard of them, many have no idea what you would ever use them for.  So, I would like to give a few examples.

The first example is really simple.  On my blog, I didn't want anyone to alias my web site or access it in any other way than by going to www.netfxharmonics.com.  Since HttpModules allow you to plug into the ASP.NET request pipeline, I was able to quickly write an HttpModule to do exactly what I wanted:

public class FixDomainHttpModule : IHttpModule
{
    public void Dispose( ) {
    }

    public void Init(HttpApplication context) {
        context.BeginRequest += delegate(Object sender, EventArgs ea) {
            HttpApplication ha = sender as HttpApplication;
            if (ha != null) {
                String absoluteUrl = ha.Context.Request.Url.ToString( ).ToLower( );
                if (MinimaConfiguration.ForceSpecifiedDomain) {
                    if (!absoluteUrl.StartsWith(MinimaConfiguration.Domain.ToLower( ))) {
                        context.Response.Redirect(MinimaConfiguration.Domain);
                    }
                }
            }
        };
    }
}

...with this web.config:

<httpModules>
  <add name="FixDomainHttpModule" type="Minima.Web.HttpExtensions.FixDomainHttpModule" />
</httpModules>

By doing this I don't need to put a check in each page or in my MasterPage.  Since HttpModules are for the entire application, even URLs accessing my images are forced to be done by www.netfxharmonics.com.

Another example is a simple authentication system for a system I was working on a while back.  The application allowed anyone logged into Active Directory to access its resources, but only certain people logged into Active Directory would be authorized to use the application (i.e. anyone could access images and CSS, but only a few people could use the system).  Knowing that the .NET Framework is the model for all .NET development, I looked at the machine's web.config to see how ASP.NET implemented its Windows and Forms authentication.  As it turns out, it does so via HttpModules.  So, I figured that the best way to solve this problem was by creating an HttpModule, not by throwing a hack into each of my WebForms or my MasterPages.  Furthermore, since ASP.NET uses the web.config for its configuration, including authentication configuration, I wanted to allow configuration of my authentication module via the web.config.  The general way I wanted to configure my HttpModule would be by a custom configuration section like this:

<Jampad>
    <Security RegistrationPage="~/Pages/Register.aspx" />
</Jampad>

The code for the HttpModule was extremely simple and required only a few minutes to throw together.  If the page being accessed is a WebForm and is not the RegistrationPage set in web.config, then the system's Person table is checked to see if the user logged into the machine has an account in the application.  If not, then there is a redirect to the RegistrationPage.  Simple.  Imagine how insane that would have been if you wanted to test for security on each page.
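The SecurityConfigSection type used in the module below isn't shown in this post, so here is a minimal sketch of what such a section might look like.  The property name comes from the web.config fragment above; note the post's code reads it through a nested `Security` wrapper property, which this flattened sketch omits, and the section still has to be registered under `<configSections>`:

```csharp
using System;
using System.Configuration;

// Hypothetical sketch of a custom configuration section mapping
// <Security RegistrationPage="~/Pages/Register.aspx" /> from web.config.
public class SecurityConfigSection : ConfigurationSection
{
    [ConfigurationProperty("RegistrationPage", IsRequired = false)]
    public String RegistrationPage
    {
        get { return (String)this["RegistrationPage"]; }
        set { this["RegistrationPage"] = value; }
    }
}
```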

public class JampadSecurityModule : IHttpModule
{
    public void Dispose( ) {
    }

    public void Init(HttpApplication context) {
        context.BeginRequest += delegate(Object sender, EventArgs ea) {
            HttpApplication ha = sender as HttpApplication;

            if (ha != null) {
                CheckSecurity(context);
            }
        };
    }

    private void CheckSecurity(HttpApplication context) {
        SecurityConfigSection cs = (SecurityConfigSection)ConfigurationManager.GetSection("Jampad/Security");

        if (cs == null) {
            return;
        }

        if (String.IsNullOrEmpty(cs.Security.RegistrationPage)) {
            throw new SecurityException("Security RegistrationPage is required.");
        }

        if (!context.Request.Url.AbsoluteUri.Contains(cs.Security.RegistrationPage) &&
            context.Request.Url.AbsoluteUri.EndsWith(".aspx")
            ) {
            PersonCollection pc = new PersonCollection( );
            pc.GetMulti(new PredicateExpression(PersonFields.NTLogin==ActiveDirectoryFacade.NTUserName));

            if(pc.Count < 1){
                context.Response.Redirect(cs.Security.RegistrationPage);
            }
        }
    }
}

Again, plugging this into the web.config makes everything automatically happen:

<httpModules>
    <add name="JampadSecurityModule " type="Jampad.Security.JampadSecurityModule" />
</httpModules>

Recently I had to reuse my implementation in an environment that would not allow me to use LLBLGen Pro, so I had to rewrite the above 2 lines of LLBLGen Pro code into a series of strongly-typed DataSet tricks.  That implementation also had a LoginPage and an AccessDeniedPage in the custom configuration section, but other than that it was the same idea.  You could actually take the idea further by checking if the person is currently authenticated and, if they aren't, do a check on the Person table.  If they have access to the application, then set the PersonLastLoginTime column to the current time.  You could do many things with this implementation that would be rather crazy to do in a WebForm or MasterPage.
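As a rough idea of that rewrite, the two LLBLGen Pro lines boil down to a parameterized existence check against the Person table.  A hedged sketch in plain ADO.NET (table and column names taken from the post; the connection string is a placeholder you would supply):

```csharp
using System;
using System.Data.SqlClient;

public static class PersonCheck
{
    // Builds the same "is this NT login registered?" check that the two
    // LLBLGen Pro lines perform, as a parameterized command.
    public static SqlCommand CreateCommand(String ntLogin)
    {
        SqlCommand command = new SqlCommand(
            "select count(*) from Person where NTLogin = @NTLogin");
        command.Parameters.AddWithValue("@NTLogin", ntLogin);
        return command;
    }

    public static Boolean IsRegistered(String connectionString, String ntLogin)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = CreateCommand(ntLogin);
            command.Connection = connection;
            connection.Open();
            return (Int32)command.ExecuteScalar() > 0;
        }
    }
}
```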

Another example of an HttpModule would be the custom access control system I built into my blog.  I'm not going to paste the code here as it's just the same idea as the other examples, but I will explain the concept.  Basically, I created a series of tables in my blog's SQL Server database that hold information on access control.  In the Access table I put columns such as AccessDirection (allow, deny), AccessType (IP Address, UserAgent, HTTP Referral), AccessText, AccessMessage, and AccessRedirect.  My HttpModule would filter every ASP.NET request through this table to figure out what to do with it.  For example, I could block an IP address by creating a table record for 'deny', 'IP Address', '10.2.1.9', 'Your access has been denied.', NULL.  Immediately upon inserting the row, that IP address is blocked.  I could also block certain UserAgents (actually this was the original point of this HttpModule--bots these days have little respect for the robots.txt file).  I could also block requests that came from a different web site.  This would allow me to stop people from leeching images off my web site for use on their own.  With a simple HttpModule I was able to do all this in about an hour.  By the way, one record I was tempted to create was the following: 'deny', 'UserAgent', 'MSIE', NULL, 'http://www.getfirefox.com/'.  But I didn't :)

Now when would you use an HttpModule versus an HttpHandler?  Well, just think about what the difference is.  An HttpHandler handles a specific address pattern for a specific set of HTTP verbs, while every request in an application goes through an HttpModule.  So, if you wanted to have an image creator at /ImageCreator.imgx, then you need to register .imgx with IIS and then register your image creation HttpHandler in your web.config to handle that address (in case you forgot, web browsers care about the Content-Type, not the file extension.  In this example, your HttpHandler would set the Content-Type to 'image/png' or whatever your image type is.  That's how a web browser knows what to do with a file.  It has nothing to do with the file extension; that's just for IIS.)  On the other hand, if you wanted to block all traffic from a specific web site, then you would create an HttpModule, because HttpModules handle all traffic in an application.  So, if you just remember this fundamental difference in purpose between the two, then you shouldn't have any problems in the future.
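To make the .imgx example concrete, here is a minimal sketch of such an HttpHandler.  The class name and the PNG-generation details are hypothetical; the point is that the browser keys off the Content-Type header the handler sets, not the extension:

```csharp
using System;
using System.Web;

// Hypothetical image-creation handler for *.imgx requests; register the
// extension with IIS and map it to this type in web.config <httpHandlers>.
public class ImageCreatorHttpHandler : IHttpHandler
{
    public Boolean IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // The browser decides how to treat the response by this header,
        // regardless of the .imgx extension in the URL.
        context.Response.ContentType = "image/png";
        // ... generate and write the PNG bytes to context.Response.OutputStream ...
    }
}
```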

Simplified Universal HttpHandlerFactory Technique

A few months ago I wrote about the Universal HttpHandlerFactory Technique where you have one HttpHandlerFactory that all your ASP.NET processing goes through and then in that HttpHandlerFactory you then choose what HttpHandler is returned based upon what directory or file is accessed.  I still like this approach for certain scenarios, but my blog got to the point where I was managing 11 different HttpHandlers for about 25 different path and file patterns.  So, it was time to simplify.

What I came up with was basically my go-to card for everything: put it in SQL Server.  From there I just checked the URL being accessed against the patterns in a database table and then looked up which HttpHandler to use for that particular request.  Then, of course, I cached the HttpHandler for future file access.

Here are my SQL Server tables (yes, I do all my design in T-SQL-- SQL GUI tricks are for kids):

create table dbo.HttpHandlerMatchType  (
HttpHandlerMatchTypeId int primary key identity not null,
HttpHandlerMatchTypeName varchar(200) not null
) 

insert HttpHandlerMatchType select 'Contains'
insert HttpHandlerMatchType select 'Starts With'
insert HttpHandlerMatchType select 'Ends With'
insert HttpHandlerMatchType select 'Default'

create table dbo.HttpHandler (
HttpHandlerId int primary key identity not null,
HttpHandlerMatchTypeId int foreign key references HttpHandlerMatchType(HttpHandlerMatchTypeId),
HttpHandlerName varchar(300),
HttpHandlerMatchText varchar(200)
) 

Here is the data in the HttpHandler table:

[Image: HttpHandler table data]

Looking at this image and the SQL Server code, you can see that I'm matching the URL in different ways.  Sometimes I want to use a certain HttpHandler if the URL simply contains the text in HttpHandlerMatchText and other times I'll want to see if the URL ends with it.  I included an option for "starts with" as well, which I may use in the future.  This will allow me to have better control of how paths and files are processed.  Also, notice that one is "base".  This is a special one that basically means that the following HttpHandler will be used (keep in mind we are in a class that inherits from the PageHandlerFactory class-- please see my original blog entry):

base.GetHandler(context, requestType, url, pathTranslated);

Now in my HttpHandlerFactory's GetHandler method I'm doing something like this (also note how LLBLGen Pro helps me simplify my database access):

HttpHandlerCollection hc = new HttpHandlerCollection( );
hc.GetMulti(new PredicateExpression(HttpHandlerFields.HttpHandlerMatchTypeId != 4));

IHttpHandler hh = null;
foreach (HttpHandlerEntity h in hc) {
    hh = MatchHttpHandler(absoluteUrl, h.HttpHandlerName.ToLower( ), h.HttpHandlerMatchTypeId, h.HttpHandlerMatchText.ToLower( ));
    if (hh != null) {
        break;
    }
}

This is basically just going to look through all the HttpHandlers in the table which are not the "default" handler (which will be used when there is no match).  The MatchHttpHandler method basically just passes the buck to another method depending of whether I'm matching the URL based on Contains, StartsWith, or EndsWith.

private IHttpHandler MatchHttpHandler(String url, String name, Int32 typeId, String text) {
    IHttpHandler h = null;
    switch (typeId) {
        case 1:
            h = MatchContains(url, name, text);
            break;

        case 2:
            h = MatchStartsWith(url, name, text);
            break;

        case 3:
            h = MatchEndsWith(url, name, text);
            break;

        default:
            throw new ArgumentOutOfRangeException("Invalid HttpHandlerTypeId");
    }

    return h;
}

Here is an example of one of these methods; the others are similar:

private IHttpHandler MatchContains(String url, String name, String text) {
    if (url.Contains(text)) {
        return GetHttpHandler(name);
    }
    return null;
}

As you can see, it's nothing fancy.  The last method in the chain is the GetHttpHandler, which is basically a factory method that converts text into an HttpHandler object:

private IHttpHandler GetHttpHandler(String text) {
    switch (text) {
        case "base":
            return new MinimaBaseHttpHandler( );

        case "defaulthttphandler":
            return new DefaultHttpHandler( );

        case "minimaapihttphandler":
            return new MinimaApiHttpHandler( );

        case "minimafeedhttphandler":
            return new MinimaFeedHttpHandler( );

        case "minimafileprocessorhttphandler":
            return new MinimaFileProcessorHttpHandler( );

        case "minimapingbackhttphandler":
            return new MinimaPingbackHttpHandler( );

        case "minimasitemaphttphandler":
            return new MinimaSiteMapHttpHandler( );

        case "minimatrackbackhttphandler":
            return new MinimaTrackBackHttpHandler( );

        case "minimaurlprocessinghttphandler":
            return new MinimaUrlProcessingHttpHandler( );

        case "projectsyntaxhighlighterhttphandler":
            return new ProjectSyntaxHighlighterHttpHandler( );

        case "xmlrpcapi":
            return new XmlRpcApi( );

        default:
            throw new ArgumentOutOfRangeException("Unknown HttpHandler in HttpHandlerMatchText");
    }
}

There is one thing in this that stands out:

case "base":
    return new MinimaBaseHttpHandler( );

If the "base" is simply a call to base.GetHandler, then why am I doing this?  Honestly, I just didn't want to pass around all the required parameters for that method call.  So, to make things a bit more elegant, I created a blank HttpHandler called MinimaBaseHttpHandler that did absolutely nothing.  After the original iteration through the HttpHandlerCollection is finished, I then do the following (it's just a trick to make the logic more consistent):

if (hh is MinimaBaseHttpHandler) {
    return base.GetHandler(context, requestType, url, pathTranslated);
}
else if(hh != null){
    if (!handlerCache.ContainsKey(absoluteUrl)) {
        handlerCache.Add(absoluteUrl, hh);
    }
    return hh;
}

One thing I would like to mention is something that sample alludes to: I'm not constantly having everything run through this process; rather, I am caching the URL to HttpHandler mappings.  To accomplish this, I set up a simple cached dictionary to map URLs to their appropriate HttpHandlers:

static Dictionary<String, IHttpHandler> handlerCache = new Dictionary<String, IHttpHandler>( );

Before ANY of the above happens, I check to see if the URL to HttpHandler mapping exists and if it does, then return it:

if (handlerCache.ContainsKey(absoluteUrl)) {
    return handlerCache[absoluteUrl];
}

This way, URLs can be processed without having to touch the database (of course ASP.NET caching helps with performance as well).
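One caveat with that static Dictionary: ASP.NET serves requests on multiple threads, and Dictionary is not thread-safe, so two simultaneous first hits on the same URL can race in the Add.  A sketch of the same cache guarded by a lock (my addition, not from the original code; T stands in for IHttpHandler so the sketch stays self-contained):

```csharp
using System;
using System.Collections.Generic;

// Lock-guarded URL-to-handler cache; all reads and writes take the same
// lock so concurrent requests can't corrupt the dictionary.
public class UrlHandlerCache<T>
{
    private readonly Dictionary<String, T> map = new Dictionary<String, T>();
    private readonly Object sync = new Object();

    public Boolean TryGet(String url, out T handler)
    {
        lock (sync)
        {
            return map.TryGetValue(url, out handler);
        }
    }

    public void Add(String url, T handler)
    {
        lock (sync)
        {
            // Ignore a second Add for the same URL instead of throwing.
            if (!map.ContainsKey(url))
            {
                map.Add(url, handler);
            }
        }
    }
}
```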

Related Links

All my work in this area has been rolled up into my Themelia ASP.NET Framework, which is freely available on CodePlex.  See that for another example of this technique.

Creating JavaScript objects from ASP.NET objects

If you have worked with ASP.NET for any length of time you probably know that the ASP.NET ID you set on a control on the server side changes when it gets to the client side.  For example, if you have a textbox with an ID of "txtUsername" in ASP.NET, you will probably end up with a textbox with an ID of something like "ctl100_txtUsername".  When working only with server-side code, this is fine.  However, I'm a JavaScript programmer as well as a .NET programmer.  Most of my applications are heavily Ajax based, and sometimes the entire application, through all of its screens and uses, will have ZERO postbacks.  So, it's important for me to have the correct ID on the client, and I need to be able to access controls on the client side--not only so I can access the ID from a JavaScript function, but also so I can set loosely-coupled events on objects.

Typically the way people get around this is with simple, yet architecturally blasphemous techniques.  The first technique is to break a foundational rule of software architecture (i.e. low coupling) by putting an event right on the element itself.  That is, they hard-code the event they want to raise right on the control itself.  This is a very strange technique, as the .NET developers who do this are usually those who would never put a server-side event on a control using OnServerClick.  Somehow, they think that putting an event directly on a client-side control via OnClick is less wrong.  This is obviously a case of extreme object coupling, an extremely poor architectural practice.  In case you can't picture it, here's what I'm talking about:

<asp:TextBox id="txtUsername" runat="server" Text="Username" OnClick="ClearBox( );"></asp:TextBox>

A much, much better way of getting around this is to use the ClientID property of an ASP.NET control to assign a multicast JavaScript event to that control.  However, we must be careful with this technique as it too could lead to design problems.  The most obvious problem is that of spaghetti code, the mixing of two or more languages in the same file.  Professional ASP.NET developers know that to have a sound system, you must be using code-behinds.  The ASP.NET development model greatly improves the readability of code by making sure that the C# (or VB) code and the ASP.NET declarations are completely separate.  While reading one page, your brain doesn't need to be flipping all over the place trying to translate multiple languages at the same time.  To be sure, those of us from the PHP world know that with time you can become very proficient in developing in spaghetti code, but, on the other hand, those of us who have taken over a project from another person know the pains of trying to decode that slop.

The typical technique for applying loosely-coupled events (and much other JavaScript functionality) is actually very strange.  Though ASP.NET developers will insist on separating their C# (or VB) from their ASP.NET pages, they have no problem throwing JavaScript into the middle of the page.  This is almost as bad as putting ad-hoc SQL queries in your C# code (very bad) or coupling CSS rules to an element via the HTML "style" attribute, thereby making the solution impossible to theme and breaking any chance of debugging CSS problems (very, very bad).  JavaScript and CSS have had a code-behind model since long before ASP.NET was around.  So, we need to respect the practice of code separation as much as possible.  To this end, we need a better solution than throwing a large block of JavaScript into an ASP.NET page.

Here is an example of the old technique using legacy JavaScript (in contrast to Modern JavaScript shown in a bit):

<script type="text/javascript"> 
function ClearBox( ) {
    document.getElementById('<%=txtUsername.ClientID%>').value = ''; 
} 

document.getElementById('<%=txtUsername.ClientID%>').onclick = ClearBox;
</script>

Typically, however, you will see a TON of JavaScript code simply thrown into the page with no respect for code separation and no possibility of multi-cast events.  (Furthermore, not only is such code raw spaghetti, the function isn't even in a JavaScript namespace.  Please see my link below for more information on JavaScript namespaces; if you are familiar with .NET namespaces, you have a head start on learning them.  Would you ever throw a class into an assembly without putting it in a namespace?  Probably not... it's the same idea in JavaScript.)
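To make the namespace point concrete, here is a minimal sketch; the Validation namespace and its two functions are hypothetical names invented purely for illustration:

```javascript
// A JavaScript "namespace" is just an object literal that scopes
// related functions, keeping them off the global object.
// "Validation" is a hypothetical example namespace.
var Validation = {
    IsEmpty: function (text) {
        return text === null || text === undefined || /^\s*$/.test(text);
    },

    IsEmail: function (text) {
        // Deliberately simple pattern, for illustration only.
        return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(text);
    }
};
```

Callers then say Validation.IsEmpty(...) rather than polluting the page with a bare IsEmpty function, exactly as a .NET class lives inside its namespace.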

Fortunately, there is a better model using a couple of JavaScript files.  The first JavaScript file (Event.js) is one of my standard files you will see in all of my JavaScript applications (update: I no longer use this-- now, I use prototype.js from the Prototype JavaScript Framework to replace a lot of my own code):

var Event = {
    Add: function (obj, evt, func, capture) {
        if(obj.addEventListener) {
            obj.addEventListener (evt, func, capture); 
        }
        else if(obj.attachEvent) {
            obj.attachEvent('on' + evt, func); 
        }
    },
        
    Remove: function (obj, evt, func, capture) {
        if(obj.removeEventListener) {
            obj.removeEventListener (evt, func, capture);
        }
        else if(obj.detachEvent) {
            obj.detachEvent('on' + evt, func);
        }
    }
};

This Modern JavaScript file simply allows you to add or remove events on an object.  It's fairly simple.  Here's a file (AspNet.js) you will find in some of my applications:

var AspNet = {
    Objects: new Object( ), 
    
    RegisterObject: function(clientId, aspNetId, encapsulated) {
        if(encapsulated) {
            eval('AspNet.Objects.' + clientId + ' = $(aspNetId)'); 
        }
        else {
            eval('window.' + clientId + ' = $(aspNetId)'); 
        }
    }
};

This one is where the meat is.  When you call the RegisterObject function, you register an ASP.NET control with JavaScript so that you can use it without needing the mangled ASP.NET ClientID.  Furthermore, it also allows you to use the object directly in JavaScript without relying on document.getElementById( ).  This technique is actually a cleaner version of the one I previously mentioned.  It does require you to put a little JavaScript in your page, but that's OK: it's ASP.NET interop code used to register the page's controls with JavaScript, so you aren't really breaking any rules.
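Here is a quick sketch of what RegisterObject ends up doing, runnable outside the browser.  The $ lookup, the window object, and the mangled id are all stubbed or invented for illustration, and bracket notation stands in for the eval calls in AspNet.js above:

```javascript
// Stubs so the sketch runs anywhere: "$" plays Prototype's by-id
// lookup and "window" plays the page's global object.
var fakeControls = { 'ctl00_txtUsername': { value: 'Username' } };
var $ = function (id) { return fakeControls[id]; };
var window = {};

var AspNet = {
    Objects: {},

    RegisterObject: function (clientId, aspNetId, encapsulated) {
        // Bracket notation: an eval-free way to create the alias.
        if (encapsulated) {
            AspNet.Objects[clientId] = $(aspNetId);
        }
        else {
            window[clientId] = $(aspNetId);
        }
    }
};

// What the inline registration script effectively does; the mangled
// id is whatever <%=txtUsername.ClientID%> happened to render.
AspNet.RegisterObject('txtUsername', 'ctl00_txtUsername');        // global alias
AspNet.RegisterObject('txtUsername', 'ctl00_txtUsername', true);  // encapsulated alias
```

After registration, both window.txtUsername and AspNet.Objects.txtUsername point at the same control object.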

In general, you should never, ever place JavaScript in your ASP.NET system.  There are of course some exceptions, but the exceptions are based on common sense and decades of interop experience from the industry.  The two most common exceptions are control generation and sewing code ("interop code").  Control generation is when a server-side control emits what the browser will use, protecting its users (the developers using the control) from the interop between ASP.NET and JavaScript.  That is, it hides the plumbing, thereby raising the level of abstraction of the system.  The C++ guys deal with the pointers, protecting me from memory management, and the ASP.NET/AJAX control creators deal with the JavaScript plumbing so other developers don't have to.  It's the same idea.  Continuing with this analogy, while C# allows unsafe pointers, they should only be used in extremely rare circumstances.  JavaScript in ASP.NET should be about as rare.  One example of this rarity is the other exception: sewing code.

Sewing code ("interop code"), on the other hand, is exactly what you see in this technique.  It simply connects one technology to another.  One major example of sewing code in the .NET Framework is where ADO.NET connects directly to SQL Server.  At some point there must be a connection to the external system, and the calling system must speak its language (i.e. SQL).  In the technique here, the interop is between ASP.NET and JavaScript and, as with all interop, sewing is therefore required.  Outside these exceptions, though, mixing languages is a very strong sign of poor design skills and a lack of understanding of GRASP patterns.  Many excellent, genius programmers could take their systems to the next level by following this simple, yet profound, time-tested practice.  Martin Fowler, author of the classic computer science text "Refactoring: Improving the Design of Existing Code" (one of my core books, right next to the framework design guidelines!), is often quoted as saying "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."  That's, of course, contextual, as people who are complete fools in software design are often 100x better hardcore programmers than the best software designers.

Now, to use the AspNet JavaScript namespace, you simply put code similar to the following somewhere in your ASP.NET page (or the Event.observe function in the Prototype Framework):

<script type="text/javascript">  
Event.Add(window, 'load', function(evt) { 
    // ASP.NET JavaScript Object Registration

    AspNet.RegisterObject('txtUsername', '<%=txtUsername.ClientID%>');
    AspNet.RegisterObject('txtPassword', '<%=txtPassword.ClientID%>');
    Initialization.Init( ); 
}, false);
</script>

Basically, when the page loads your objects will be registered.  What does this mean?  It means you can use the objects as they are used in this Initialization.js file (another file in all of my JavaScript projects):

<script type="text/javascript">  
var Initialization = {
    Init: function( ) {
        txtUsername.onclick = function(evt) {
            if(!txtUsername.alreadyClicked) {
                txtUsername.value = '';
                txtUsername.alreadyClicked = true; 
            }
        };
        
        txtPassword.onclick = function(evt) {
            if(!txtPassword.alreadyClicked) {
                txtPassword.value = '';
                txtPassword.alreadyClicked = true;
                txtPassword.type = 'password';
            }
        };
    }
};
</script>

As you can see, there is no document.getElementById( ) or $( ) here.  You simply use the object naturally, as if it were strongly typed.  The best part is that to support another ASP.NET page, you simply put a similar JavaScript script block in that page.  That's it.  Furthermore, if you don't want to access the control directly, perhaps because you are worried about potential naming conflicts, you can pass a boolean value of true as the third argument to the AspNet.RegisterObject function; this puts the objects under the AspNet.Objects namespace, thereby making txtUsername accessible as "AspNet.Objects.txtUsername" instead of simply "txtUsername".

There is one catch, though: you have to attach your handlers to the window load event using multi-cast events.  In other words, if at any point you assign a handler directly to window.onload, you will obviously overwrite all other handlers.  For example, the following would destroy this entire technique:

window.onload = function(evt) {
    // Do something...
};

This should not be a shocker to C# developers.  In C#, when we attach an event handler, we are very careful to use the "+=" syntax and not the "=" syntax.  This is the same idea.  It's a very, very poor practice to ever assign directly to window.onload, because you have absolutely no idea when you will need more than one handler calling more than one function.  If your MasterPage needs the load event, your Page needs the load event, and a Control needs the load event, what are you going to do?  If you decide you will never need multi-cast load events and then get a 3rd party tool that relies on them, what will you do when it overrides your handler, or you override its?  Have fun debugging that one.  Therefore, you should always use loosely-coupled, multi-cast JavaScript events for the window load event.  Furthermore, it's very important to follow proper development practices at all times and never let deadlines stop you from professional-quality development.
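The difference is easy to demonstrate.  The sketch below repeats the Event.Add helper from earlier and attaches three handlers to one target (an EventTarget stands in for window here; any DOM element behaves the same way, and Node 15+ provides EventTarget natively).  All three fire, which a direct onload-style assignment could never give you:

```javascript
// Event.Add repeated so the sample is self-contained. Note that this
// namespace shadows the built-in Event constructor, which is why the
// built-in is reached through globalThis further down.
var Event = {
    Add: function (obj, evt, func, capture) {
        if (obj.addEventListener) {
            obj.addEventListener(evt, func, capture);
        }
        else if (obj.attachEvent) {
            obj.attachEvent('on' + evt, func);
        }
    }
};

// Stand-in for window: MasterPage, Page, and Control each attach
// their own load handler without stepping on the others.
var target = new EventTarget();
var calls = [];

Event.Add(target, 'load', function () { calls.push('masterPage'); }, false);
Event.Add(target, 'load', function () { calls.push('page'); }, false);
Event.Add(target, 'load', function () { calls.push('control'); }, false);

// All three handlers fire, in the order they were attached.
target.dispatchEvent(new globalThis.Event('load'));
```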

Related Links

10 Things Most Developers Didn't Know in 2007

To end 2007, I thought I would make a list of things which I found that most developers didn't know.  To make things more interesting, this list is actually a series of 10 mini-articles that I wrote today.  Since this article has several sub-articles, here's a table of contents to help you out (these aren't really in any order of importance):

#1  SQL Server supports powerful subqueries as anonymous sets.

Many developers don't take the time to seriously look at T-SQL or SQL Server internals.  As such, they miss many of SQL Server's more powerful features.  In January 2007, when a co-worker saw me write the following query, he about fell out of his seat:

select MemberName, m.MemberId, count(*) from (select 
    distinct MemberId, 
    VisitUserAgent 
    from VisitSession 
    where MemberId is not null) a 
inner join Member m on a.MemberId = m.MemberId 
group by m.MemberId, MemberName, VisitUserAgent 
having count(*) > 1 
order by count(*) desc 

For starters, the guy didn't know you could filter after a group by, but that's not my point.  He had no idea that SQL Server (2000) allows you to use subqueries or to use subqueries as anonymous sets.  As you can see, you can select from such a set as well as use it in a join.  This tidbit alone should toss many painfully slow cursor-based stored procedures into the trash.  It's a simple SQL feature, but it's a powerful one.
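For anyone who wants to trace the set logic rather than the T-SQL, here is the same query sketched in JavaScript; the sample rows are invented for illustration:

```javascript
// Invented sample rows standing in for VisitSession and Member.
var visitSessions = [
    { memberId: 1,    visitUserAgent: 'Firefox' },
    { memberId: 1,    visitUserAgent: 'Firefox' }, // duplicate, removed by DISTINCT
    { memberId: 1,    visitUserAgent: 'IE6' },
    { memberId: 2,    visitUserAgent: 'Firefox' },
    { memberId: null, visitUserAgent: 'Opera' }    // filtered by IS NOT NULL
];
var members = [
    { memberId: 1, memberName: 'Alice' },
    { memberId: 2, memberName: 'Bob' }
];

// Subquery: SELECT DISTINCT MemberId, VisitUserAgent ... WHERE MemberId IS NOT NULL
var distinct = {};
visitSessions.forEach(function (row) {
    if (row.memberId !== null) {
        distinct[row.memberId + '|' + row.visitUserAgent] = row;
    }
});

// GROUP BY member with COUNT(*) over the anonymous set
var counts = {};
Object.keys(distinct).forEach(function (key) {
    var id = distinct[key].memberId;
    counts[id] = (counts[id] || 0) + 1;
});

// INNER JOIN Member, then HAVING COUNT(*) > 1
var result = members
    .filter(function (m) { return (counts[m.memberId] || 0) > 1; })
    .map(function (m) { return { name: m.memberName, agents: counts[m.memberId] }; });
// result: [ { name: 'Alice', agents: 2 } ]
```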

#2  Firefox has an operating-system style console for web application debugging.

It's incredibly hard to find an ASP.NET web developer who knows this one.  It's a feature that knocks people right off their seats.  Instead of throwing alerts all over your AJAX applications, you can use the Firefox console and the dump( ) function.  Did I mention this has been a native feature since Firefox 1.0?

Step 1 (start Firefox with -console switch)

Step 2 (add the boolean key 'browser.dom.window.dump' to the Firefox configuration and set it to true)

Then simply call dump( ) instead of alert( ) and you're done.  Your output will go to the Firefox console window (which looks almost exactly like a cmd window).

With this technique, you can entirely avoid any possibility of an infinite loop of alerts.  Personally, I like to track all the output of my web applications.  This comes in very handy when I'm using event capturing or need to watch the progressive state of my application.  When I do this, I also like to write an output identifier with each data dump.  Here's a sample of what I usually use for debugging:

var Configuration = { 
    Debug: false
}; 

var Debug = { 
    counter: 0, 
    write: function(text) { 
        if(Configuration && Configuration.Debug) { 
            dump(text); 
        } 
    }, 
    writeLine: function(text) { 
        if(Configuration && Configuration.Debug) { 
            Debug.counter++;        
            dump(Debug.counter + ':'+ text + '\n'); 
        } 
    } 
};

Here's some sample output using the Debug.writeLine( ) abstraction:

Leaves alert( ) in the dust, doesn't it? You can actually learn more about this technique and others from my Firefox for ASP.NET Web Developer video series found on my blog.  These topics are crucial to your understanding of modern web development.
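Since dump( ) only exists in Firefox (with the console enabled), here is the same Debug abstraction exercised outside the browser by stubbing dump( ); the log messages are invented:

```javascript
// Firefox's dump() is stubbed so the sample runs anywhere.
var output = [];
var dump = function (text) { output.push(text); };

var Configuration = { Debug: true };

var Debug = {
    counter: 0,
    writeLine: function (text) {
        if (Configuration && Configuration.Debug) {
            Debug.counter++;
            dump(Debug.counter + ':' + text + '\n');
        }
    }
};

Debug.writeLine('ajax request sent');
Debug.writeLine('ajax response received');

// Flip the switch off and nothing more is written.
Configuration.Debug = false;
Debug.writeLine('ignored');
// output: [ '1:ajax request sent\n', '2:ajax response received\n' ]
```

Shipping with Configuration.Debug set to false means the calls can stay in place without producing output in production.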

#3  JavaScript has natively handled loosely-coupled multi-cast events for years.

This isn't something just for the Firefox, Opera, and Safari world; even IE6 has native support for this feature.  I'm not sure why this is, but in September 2007, when I was designing the AJAX exam for Brainbench, not a single one of the reviewers knew that JavaScript natively supported loosely-coupled multi-cast events.  I actually received comments from almost all of the reviewers telling me that I should "leave server-side questions out of the exam".

JavaScript's loosely-coupled multi-cast events are one of the most important core features of AJAX applications.  They allow you to quickly and efficiently attach multiple event handlers to the same XHTML element.  This becomes critically important when you are working with multiple AJAX components, each of which wants to attach an event handler to the load event of the window object.

I wrote an article about this in September 2007, so I'm not going to go into any kind of detail here.  You may also opt to view this file from my SolutionTemplate, which supplements that blog entry.

#4  Not all image formats are created equal.

A few months ago, I came in as lead architect about halfway through a project.  After having a few people fired for absolute incompetence, I did find a few people (PHP guys) who were ready, willing, and actually able to learn ASP.NET.  Everything was going well until the designer came back with his new theme and my associate, whom I was training, implemented it.  Everyone thought the project was going fine until I stepped into the room.  It didn't take but 10 seconds for a red flag to go up: just looking at the web site, I could tell the theme implementation was a disaster.  There were signs of JPEG compression all over every single one of the images.  However, being a scientist and part-engineer, I knew that measurement was a major key to success.  So, I whipped out Firebug, hit refresh, and felt my jaw drop.  The landing page was 1.5MB.  Ouch.

You absolutely cannot use one single image format for every image on your web site, especially not the deadly JPEG format, which does little more than destroy your images.  There are rules which web developers must follow or a project is doomed to failure.  First off, you need to use PNG24s for the all-important images, while comparing their file sizes and quality against PNG8 compression.  Adobe Photoshop's Save For Web feature is very helpful for this.  If the image is a photo or something with many "real life" colors and shades, you may want to do a size and quality comparison against a JPEG version as well.  If you absolutely need transparent images for IE6, then you need to take extreme care and either make special PNG versions for each background or, if you don't care too much about quality and the image is small with very few colors, use a GIF with transparency.  The same goes for Firefox and printing: Firefox (as of 2.0) does not print transparent PNG images.  So, if you want to support printing in Firefox, you need to either make special PNG images for each background or make low-quality GIF images.

Needless to say, the designer's theme had to undergo severe reconstruction, and not just because of the image sizes: he also felt the need to design special input box, textarea, and button controls.  His design would have worked well for a WPF application, but this is the web (and don't even get me started on the fact that he designed for a wide-screen monitor at over 1300x800; the design was useless anyhow!)  The next project I ran as lead architect went much more smoothly.  Because it was extremely AJAX intensive, everything was minimized to the absolute core.  Each page had the minimal default.css plus its own CSS sheet, and included only the JavaScript it needed.  The web site landing page included barely anything and even had its own extremely stripped-down version of the JavaScript files.  For this project, I went from 350K in development to 80K in production.

#5  Custom server controls are not esoteric, complicated, or take too long to create.

This seems to be a very common misconception amongst ASP.NET developers.  The reality, however, is that creating server controls is often a very trivial task.  Yet many developers will use a GridView or other canned control for everything.  The GridView is awesome for basic tabular data in very simple, data-driven applications, but I can rarely use it.  On the other hand, I love the Repeater and rely on it for almost everything.  It and the Literal are my two favorite controls; I rely on these two to ensure that my AJAX applications are extremely optimized.  One of the beautiful things about .NET is that every ASP.NET control is simply a .NET class, which means you can programmatically reuse them, inherit from them, and override their internals, allowing us to create some powerful and elegant custom server controls.

On the same project with the overly sized image files, we had an interesting meeting about how to show a media play list on a web page.  There was all kinds of talk about using Flash to create one, and the conversation was quickly giving me an allergic reaction.  So, after hearing all kinds of absolutely insane time quotes for creating a Flash play list, I decided to take matters into my own hands.  Two hours later I handed the client a complete play list from A to Z.  To be clear, I had built this on something I already had, but the grand total of time was still only about 3 hours.  It's amazing what you can do when you understand the .NET framework design guidelines and aren't afraid to follow best practices.

Here is how you would use a similar control:

<%@ Register Assembly="Jampad.Web" Namespace="Jampad.Web.Controls" TagPrefix="j" %>

<j:Media id="media01" runat="server" />

In your code behind, you would have something that looked like this:

media01.DataSource = MediaAdapter.GetContent(this.MemberGuid);

Upon loading the page, the data was bound and the output was a perfect XHTML structure that could then be customized in any number of ways using the power of CSS.  How do you make something like this happen?  It's simple; here is a similar control (Media.cs) placed in a class library (WebControls.csproj):

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

namespace Jampad.Web.Controls
{
    [ToolboxData("<{0}:Media runat=\"server\"></{0}:Media>")]
    public class Media : CompositeControl
    {
        private Repeater repeater;

        public Media( ) {
        }

        private Object dataSource;

        public Object DataSource {
            get { return dataSource; }
            set { dataSource = value; }
        }

        protected override void CreateChildControls( ) {
            HtmlGenericControl div = new HtmlGenericControl("div");
            div.Attributes.Add("class", "media-list");

            try {
                repeater = new Repeater( );
                repeater.DataSource = this.DataSource;
                repeater.ItemTemplate = new MediaTemplate(ListItemType.Item);
                repeater.HeaderTemplate = new MediaTemplate(ListItemType.Header);
                repeater.FooterTemplate = new MediaTemplate(ListItemType.Footer);
                div.Controls.Add(repeater);
                repeater.DataBind( );
            }
            catch (Exception ex) {
                Literal error = new Literal( );
                error.Text = "<span class=\"error-message\">" + ex.Message + "</span>";
                div.Controls.Add(error);
            }

            this.Controls.Add(div);
            base.CreateChildControls( );
        }
    }
}

Notice the use of the repeater control.  This is the same control we use in ASP.NET as <asp:Repeater />.  Since this is .NET, we can use it programmatically to create our own powerful controls.  Also notice the various templates that are being set on the Repeater.  These are the same templates you would set declaratively in an ASPX page.  In this case, I'm programmatically assigning to these templates an instance of MediaTemplate (in MediaTemplate.cs).  This MediaTemplate.cs is just another file thrown in a class library, in our case the same WebControls.csproj, though since it's just a class, it could be in a different assembly and namespace altogether. Here's what the MediaTemplate.cs looks like:

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.UI.WebControls;
using System.Web.UI;

namespace Jampad.Web.Controls
{
    internal class MediaTemplate : ITemplate
    {
        private ListItemType type;

        public MediaTemplate(ListItemType type) {
            this.type = type;
        }

        public void InstantiateIn(Control container) {
            Literal lit = new Literal( );
            switch(type) {
                case ListItemType.Header:
                    break;

                case ListItemType.Item:
                    lit.DataBinding += new EventHandler(delegate(Object sender, System.EventArgs ea) {
                        Literal literal = (Literal)sender;
                        RepeaterItem item = (RepeaterItem)literal.NamingContainer;
                        literal.Text += String.Format("<div class=\"media-item\">\n");
                        literal.Text += String.Format("  <div class=\"media-item-inner\">\n");
                        literal.Text += String.Format("    <a href=\"{0}\"><img src=\"{1}\" alt=\"Media\" class=\"media-thumb\" /></a>\n", (String)DataBinder.Eval(item.DataItem, "mediaPath"), (String)DataBinder.Eval(item.DataItem, "thumbPath"));
                        literal.Text += String.Format("  </div>\n");
                        literal.Text += String.Format("  <div class=\"media-item-bottom\"></div>\n");
                        literal.Text += String.Format("</div>\n\n");
                    });
                    break;

                case ListItemType.AlternatingItem:
                    break;

                case ListItemType.Footer:
                    break;
            }
            container.Controls.Add(lit);
        }
    }
}


Simply compile those two together and you're set.  You can even embed (hopefully tiny) images in your project to make things even more seamless.  Using this simple pattern, I've created all kinds of things.  You can see a real example of this, including image embedding, in my SQL Feed Framework (formerly known as Data Feed Framework).  Its InfoBlock controls follow this same pattern.  For much better examples, whip out Reflector and start digging around the System.Web namespaces.

It's actually rather astonishing to learn of the attitudes some developers have about custom controls. When I was one of the editors for an ASP.NET 2.0 exam last year, I noticed one of the questions asked which type of control was "harder" to create. The answers were something like "User Control", "Custom Control", and a few others. They were looking for the answer "Custom Control". Since "harder" is not only a relative term, but also a subjective and abstract one, the question had no actual meaning. Custom controls aren't "harder" than user controls.

#6  Most developers I worked with in 2007 had never heard of an O/R mapper.

Why do most developers still absolutely insist on wasting their time writing chains of SqlConnection, SqlCommand, and SqlDataAdapter?  Perhaps it's an addiction to being busy instead of actually being productive.  I don't know.  I would, however, expect these developers to have some curiosity that there may be an easier way.  ADO.NET is awesome stuff, and it is the foundation of all .NET O/R mappers, but if I'm not throwing around 1,000,000 records at a time with SqlBulkCopy, I'm not interested in working with ADO.NET directly.  We need a system that lets us ask for what we want instead of forcing us to screw about with low-level mechanics.  It's no secret that I'm a huge supporter of Frans Bouma's work with LLBLGen Pro, and I also use LINQ in most of my .NET 3.5 applications.  For a corporate .NET 2.0 project, there's absolutely no excuse not to pay the $300 for LLBLGen Pro.  Managers!  Open the wallets!  It will save you money.

However, it's not always about the money.  Even when the developers know about O/R mapping, and the company isn't in a poverty-stricken 3rd-world country, sometimes extreme pride, lack of personal integrity, and political alignment can destroy any chance of being productive.  A long time ago I worked at a company where I thought I would be productive.  Five or so weeks into the design phase of the project, we received a politically-focused project manager as big brother.  He was absolutely against the use of any modern technology and despised the idea of an O/R mapper.  He instead told us that we were to write a stored procedure for every possible piece of interaction that would happen.  He also wanted us to use Microsoft's data access application block to access the stored procedures.  At one point he said that this was their O/R mapper, showing that he had no idea what an O/R mapper was.

A few days after his reign had started, I took an hour or so to write up a 12-page review document covering various aspects of LLBLGen Pro and how they would work on the project.  I thought it was a very convincing document; in fact, one guy looked at it and was convinced I had taken it from the LLBLGen web site.  The project manager, however, was beginning to be annoyed (this is not uncommon with me and old-school project managers!)  He decided to call together a panel of his "best" offshore developers and put me in what basically amounted to a doctoral defense.  Prior to the meeting I sent out my "dissertation" and asked everyone to read it before they arrived so that they would be prepared for the discussion.  When it was time for the meeting, I was told to sit at one side of a large meeting table while the project manager and his team sat at the other.  Then the disaster began.  First, not one single person on that team had read my document.  Second, for the next 45 minutes they asked me basic questions that the document would have answered.  Even after they admitted that I had answered all of their concerns to their satisfaction, and after their own team said that LLBLGen Pro was obviously a very productive tool, they reached the conclusion that they still weren't going to use it.  It was a waste of my time, and I still want those 45 minutes of my life back.

What was really interesting about my defense was the developers' code.  In the meeting, the developers showed me their [virtually unreadable, anti-.NET-framework-design-guidelines, inefficient, insecure] .NET project code, and I was shocked to see how much time they wasted writing the same stuff over and over and over again.  When they showed me their stored procedures, I about passed out.  It's a wonder how any of their systems run.  They were overridden with crazy dynamic SQL and cursors.  They even had most of the business logic in the data access tier.  The concept of N-tier architecture was not something they understood at all.  I think that's the point where I gave up on my defense.  If a developer doesn't even understand the critical need for N-layer and N-tier architecture, there's just no way they will be able to understand the need for an O/R mapper.  It's actually one of the fastest ways to find a coder hiding amongst professionals.  Their SQL/ADO.NET code was also obviously not strongly typed.  This is one of the core points of an O/R mapper, and these developers could not understand it.  They could not see the benefit of having an entity called Person in place of the string "Persno" (deliberate misspelling).

This project didn't really take off at all, but for the parts I was involved in, I used the next best thing to an O/R mapper: a strongly-typed DataSet.  Read this carefully: there is no shame in using a strongly-typed DataSet if you don't have an O/R mapper.  They are nowhere near as powerful, but they are often good enough to efficiently build your prototypes so that the presentation layer can be built.  You can replace the data access components later.

The training of developers in the use of LLBLGen Pro and LINQ O/R mapping was one of the main reasons I publicly released the source code for both my Minima Blog Engine and my Minima 3.5 Blog Engine in 2007.  You are free to use these examples in your own training as you see fit.

For more information and for some examples of using an O/R mapper, please see some of my resources below:

#7  You don't need to use SOAP for everything.

This is one of the reasons I wrote my XmlHttp Service Interop series in March and May 2007.  Sometimes straight-up HTTP calls are good enough.  They are quick, simple, and lightweight.  If you want more structure, you can use XML serialization to define the smallest data format you can think of.  No SOAP envelope required.
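As a rough illustration of the weight difference, the sketch below builds the same hypothetical "get a passage" request as a plain GET query string and as a SOAP envelope; the service path and parameter names are invented for illustration:

```javascript
// A plain RESTful GET: just a path and a query string.
var params = new URLSearchParams({ passage: 'John 3:16', format: 'plain' });
var restRequest = 'GET /passage?' + params.toString();

// The same information wrapped in a SOAP envelope.
var soapRequest =
    '<?xml version="1.0"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body><GetPassage><passage>John 3:16</passage>' +
    '<format>plain</format></GetPassage></soap:Body>' +
    '</soap:Envelope>';

// The plain call carries the same information in a fraction of the bytes.
console.log(restRequest.length, soapRequest.length);
```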

Here are the parts to my series:

Also keep in mind that you don't need to limit JSON to JavaScript.  It's a beautiful format that could easily be an amazing structured replacement for flat CSV files.  RESTful interfaces using GET or POST with HTTP headers are also a great way to communicate using very little bandwidth.  My AJAX applications rely heavily on these techniques, but I've used them for some behind-the-scenes work as well.
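A tiny sketch of that idea: the same invented records round-tripped through JSON keep their types and survive embedded commas, where a naive flat CSV line does not:

```javascript
// Invented sample records.
var records = [
    { name: 'Smith, John', visits: 12 },
    { name: 'Doe, Jane',   visits: 7 }
];

// Naive CSV: "Smith, John" now spans two columns -- the classic flat-file bug.
var csv = 'name,visits\n' +
          records.map(function (r) { return r.name + ',' + r.visits; }).join('\n');

// JSON: structure and types survive the round trip untouched.
var json = JSON.stringify(records);
var roundTripped = JSON.parse(json);
// roundTripped[0].name is still 'Smith, John' and visits is still a number.
```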

One great example of how you can use RESTful services is by looking at the interface of the ESV Bible Web Service V2. In November 2007, I wrote a .NET 3.5-based framework to abstract the REST calls from the developer. By looking at my freely available source code, you can see how I'm interacting with the very light-weight REST service.

#8  A poor implementation of even the most beautiful database model can lead to a disaster.

For more information on this topic, see my October 2007 post entitled "SQL Server Database Model Optimization for Developers". Here is an abstract:

It's my assessment that most developers have no idea how much a poor database model implementation, or an implementation by a DBA unfamiliar with the data semantics, can affect a system. Furthermore, most developers with whom I have worked don't really understand the internals of SQL Server well enough to make informed decisions for their project. Suggestions concerning the internals of SQL Server are often met with extreme reluctance from developers.

#9  Most web developers have no idea how to build a proper XHTML structure.

XHTML is not HTML and shouldn't be treated like it is.  While HTML is a presentation format, XHTML is a structural format.  You can use HTML for visual formatting, but XHTML simply defines structure.  In August 2007, I wrote an article entitled "Coders and Professional Programmers" in which I discussed some of the differences between coders, who have no clue what's going on but who overwhelm the technology world, and the rare programming professionals.  I didn't go into many specifics in that article, but one of the things I had in mind was the severe lack of XHTML knowledge among coders.  What's profound is that XHTML is probably the single most basic web development topic in existence, yet people just have no idea how to use it properly.

When you come to a web project and you have defined your user experience, you need to materialize that definition.  You do not go about this by dragging and dropping a bunch of visual elements on a screen and nesting 4 tables.  Well, if you do, then you're probably a coder, not a professional.  Building your interface structure is actually rather similar to building a database model in that you need to define your entities and their semantic meaning.  So, when you look at the top of your landing page, you need to avoid thinking "this is a 4em piece of italic black text" and simply say that this is a heading.  What type of heading?  If it's the most important heading, then it would probably internally translate to an h1 element.  In the same way, if you have text on your screen, you should avoid doing this:

Lorem ipsum dolor sit amet.<br/>

Mauris nonummy, risus in fermentum.<br/>

By doing this, you are completely destroying any possibility of text formatting.  You have also fallen into the world of telling a system how to do its job.  Just think about how we work with XML.  Do you go in and tell the system how to parse the information or how to scan for a certain element?  No, the entire point of abstraction is that we can get closer and closer to telling the system what we want instead of telling it how to do its job.  In XML, we simply state an XPath and we're done.  With XHTML, we don't want to say "put text here, break, put more text here, and break again".  You could think of the above HTML code as a "procedural structure".  What if we used a more object-oriented model?  In an object-oriented model, we focus on the semantics of the entities, and this is exactly how we are to design in XHTML.  A proper way to declare our text would be like this:

<p>Lorem ipsum dolor sit amet.</p>

<p>Mauris nonummy, risus in fermentum.</p>

Now, instead of telling the system how to do its job, we state that we want two paragraphs.  Done.  By focusing on the semantic representation, we are closer to focusing on the purpose of the application and letting the system do whatever it does best.  Furthermore, since XHTML is a structural format and not a presentation format, we have another technology for presentation, namely, CSS.  With our new XHTML structure, we can simply attach a default.css document to every page on the web site and, in a centralized manner, state the following to format every single document on our web site in an instant.  You couldn't beat the power of that with quantum parallelism (well... maybe).

p {
    font-family: Georgia, times new roman, serif;
    font-size: 90%;
    line-height: 1.1em;
}

Every single item in XHTML has some purpose and a set of guidelines attached to it to allow you to choose the correct element to match your semantic meaning.  This is a long and fancy way of saying: don't use divs for everything!  A div is a containment unit, not something to hold every single thing in all the world just so that you can use more CSS in a misguided attempt to look modern.  Use divs for containment, not for giving IDs to text.  So, what should you use?  Whatever most closely matches your needs.  For example, when you are adding a single horizontal menu or tab list to your user experience, you should avoid using a bloated HTML table, which will force longer load times and basically kill your chances for mobile support.  You should also avoid throwing together a list of divs in a parent div, which provides no semantic meaning at all.  This would be like declaring all your .NET objects as Object and using reflection every time you wanted to access anything.  Rather, you want to ask yourself "to what data structure does this object most closely map?"  In this case, it's basically a simple list or a collection.  What XHTML element most closely brings out this item's meaning?  Depending on whether the list is an ordered list or an unordered list, your XHTML element will be either an <ol/> or a <ul/>.
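For example, the structure for a horizontal menu is nothing more than a list of links (the link targets and id here are, of course, hypothetical):

```html
<ul id="menu">
    <li><a href="/">Home</a></li>
    <li><a href="/about/">About</a></li>
    <li><a href="/contact/">Contact</a></li>
</ul>
```

Whether it renders horizontally or vertically is entirely a CSS concern; the structure simply says "this is a list".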

What if you wanted to show a column of images where the image metadata (i.e. title, date, description) was to the right of each image?  Do we use a bloated table?  No; your load time will go through the roof, your DOM interaction will become prohibitively complex, and your mobile support will be shot.  Do we use a series of divs with CSS floating?  No, again; this in no way reflects any semantic relation of the entity.  To what data structure does this most closely map?  Think about it.  It's a series of "things" where each thing has two sub-components (image and data).  This is a dictionary.  In XHTML, the closest element you have to a dictionary is a <dl/>.  The dl contains an alternating series of data terms (<dt/>) and data definitions (<dd/>) to allow you to present your data in a way that makes sense.  You don't have a full semantic representation as you would with an "imagelist" element, but you are accurately representing the fact that this is a dictionary.
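A sketch of what such a structure might look like (the image files and captions here are hypothetical):

```html
<dl id="imageList">
    <dt><img src="photo1.jpg" alt="Sunset" /></dt>
    <dd>
        <h3>Sunset</h3>
        <p>Taken in October 2007.</p>
    </dd>
    <dt><img src="photo2.jpg" alt="Harbor" /></dt>
    <dd>
        <h3>Harbor</h3>
        <p>Taken in November 2007.</p>
    </dd>
</dl>
```

Each term/definition pair is one entry in the dictionary: the image is the term and its metadata is the definition.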

After you have defined your complete structure, you may then start to modify the elements using CSS.  Your headings and titles (mapped to h1 through h6) will be formatted according to their requirements, as will all your paragraphs (mapped to p).  You will also modify your ol, ul, and dl lists to match your visual requirements.  Your <ul/> or <ol/> lists will probably have something like the following:

ul {
    list-style-type: none;
    padding: 0;
}

ul li {
    display: inline;
    /* or float: left; depending on how you want to format them. */
    /* Floating would keep the list items as block elements, thereby */
    /* allowing more padding and margin tweaking. */
}

Your dl may have something similar to this:

dl {
    width: 300px;
}

dl dt,
dl dd {
    float: left;
}

dl dt {
    clear: both;
}

This technique of semantically mapping entities to visual elements is neither new nor isolated to web development.  The Windows Presentation Foundation (WPF) allows a similar technique.  You can define a simple ListBox which contains your raw data for your series of elements (e.g. icon list, list of names, menu).  Then you apply a Style to enhance the user experience.  Elements in XHTML and elements in XAML don't really have their own look and feel but are, rather, structural entities which allow you to define the element's semantic representation of reality, to which the look and feel can later be applied.

#10  CSS is not simply a technology to allow simple font size, color, and style changes.

It's a very powerful technology which allows us to efficiently create powerful web solutions.  It can help us preserve ASP.NET caching and help us to avoid page recompilation.  Furthermore, a proper CSS architecture can bring us media-specific CSS to enable us to efficiently customize our web pages for print and for mobile devices.  As if that weren't enough, CSS themes can allow us to quickly deploy branded web sites.

Unfortunately, however, CSS architecture is not something known by too many web developers, especially ASP.NET developers.  Back in May 2007, I wrote an article on my blog entitled "CSS Architecture Overview", so I won't go into any more details here.

Those were the top 10 things of which I found developers to be the most ignorant in 2007.  It's not really an exhaustive list and doesn't cover things like the lack of understanding of how MSIL maps to reality, how JavaScript deals with variable scope, or how you may not need to waste company resources on SQL Server 2005 Standard when Express may work just fine for your requirements.  Some of these topics are closer to my core specialty than others, but each of them represents an incredibly important segment of technology which web solution architects must take into account.  Looking back over the list of articles I wrote, the open source projects I released, and the various clients and developers I worked with in 2007, this has easily been my busiest year ever.  This won't be stopping in 2008.  Hopefully an increased knowledge base and a stronger adherence to best practices will turn more and more coders into professionals in 2008 and beyond.

To learn more about some of these topics or even related ones, be sure to walk around my blog a bit and to subscribe to my RSS feed.

ASP.NET 3.5 Web Site and Application

One of the most horrendous things about ASP.NET 1.1 was that the developers confused a web site and a project.  All that did was allow a severe influx of desktop developers into the web world who had no right to call themselves web developers.  ASP.NET 1.1 even added resx files for web forms and, of course, since the file was there, many developers (senior level!) actually thought they were required files.  That didn't stop me from regularly going into CVS and DELETING them.  Worthless.

Fortunately, ASP.NET 2.0 fixed this problem by making sure that people realized that a web site was NOT a project.  This made everything so much easier to work with.  Furthermore, now we had the beautiful CodeFile page directive attribute so that we didn't have to rely on VS for everything.  There was also no need for absolutely ridiculous and redundant designer or resource files for web forms.  The ASP.NET guys were finally conforming to the preexisting conventions of the web, instead of trying to come up with a new [flawed] paradigm.

HOWEVER! Apparently the ASP.NET 3.5 team fell asleep at the wheel because I'm having horrendous flashbacks to the slop of ASP.NET 1.1.  First of all, when you add a web site, you are adding a project.  I don't WANT a csproj file for my web site!  Secondly, web forms have returned to using the completely useless CodeBehind attribute.  It took me QUITE a bit of debugging to finally realize this.  Third, every single web form now has a completely meaningless X.designer.cs file.  This also took me a while to realize.

I realized this when I kept getting an error telling me that type X.Y didn't match type X.Y.  What?  Yes it does!  After I finally fixed that error (I can't even remember how), I kept getting that one stupid error telling you that your type is in two separate places.  HOW?  This was a new project!  I hadn't done anything yet!  It turns out that the designer.cs file had become out of date between the time I added my custom control to the page and the time I ran it.  Err... what?  This is beyond frustrating.

There's good news though.  The ASP.NET team wasn't completely asleep.  You can add an ASP.NET web site or an ASP.NET web application.  Yes, I realize there's no REAL difference, but for some reason they decided to make a whimsical split (I suspect it was a political or PM decision-- the ASP.NET team is smarter than that).  Perhaps they wanted to aid the old VB developers, who I would argue have no right to put things on the web anyhow (i.e. they are web coders, not web development professionals!)

If you add an ASP.NET web application, you get the old ASP.NET 1.1 style of hard-to-use nonsense.  On the other hand, if you add an ASP.NET web site, you get the appropriate ASP.NET 2.0 style.  Personally, I say forget both.  I always just create a folder and then "open web site".  Done.  Most of the time, however, I just start a project by checking my continually changing solution template out of Subversion.  Again, DONE.  This is why it took me 8 months to finally notice this.  I don't even want to think about how many sloppy intern or VB6-developer created applications I'm going to have to clean up based on this painfully flawed design.

Free Templated Data Bound Custom Controls Chapter

Google Book Search, much like most Google products, is a great gift to humanity.  I often find myself going there to read a chapter in a book to quickly get up to speed or to review a topic.  Today, while I was reviewing a few new ASP.NET books, I came across the book ASP.NET AJAX Programming Tricks on Google Books.  The first two chapters are "Http Modules Demystified" and "Templated Data Bound Custom Controls" and are freely viewable.  This is a great reference for anyone looking to learn how to build more powerful custom controls or for anyone who needs a quick refresher.

One thing I did notice is that the chapter looks very much like chapter 29 in ASP.NET 3.5 Unleashed.  In fact, not only is the content the same, the order of the content is the same.  Furthermore, they use almost the same "tab control" example.  Ouch.  Before anyone says the P word, I would like to mention that ASP.NET AJAX Programming Tricks was released first.

Links

Squid Micro-Blogging Library for .NET 3.5

A few years ago I designed a system that would greatly ease data syndication, data aggregation, and reporting.  The first two components of the system were repackaged and released early last year under the incredibly horrible name "Data Feed Framework".  The idea behind the system was twofold.  The first concept was that you write a SQL statement and you immediately get a fully functional RSS feed with absolutely no more work required.  Here's an example of a DFF SQL statement that creates an RSS feed of SQL Server jobs:

select Id=0,
Title=name,
Description=description
from msdb.dbo.sysjobs
where enabled = 1

The second part of DFF was its ASP.NET control named InfoBlock that would accept an RSS or ATOM feed and display it in a mini-reader window.  The two parts of DFF combine to create the following:

Given the following SQL statement (or more likely a stored procedure)...

select top 10
Id=pc.ContactID, 
Title=pc.FirstName + ' ' + pc.LastName + ': $' + convert(varchar(20), convert(numeric(10,2), sum(LineTotal))), 
Description='', 
LinkTemplate = '/ShowContactInformation/'
from Sales.SalesOrderDetail sod
inner join Sales.SalesOrderHeader soh on soh.SalesOrderID = sod.SalesOrderID
inner join Person.Contact pc on pc.ContactID = soh.SalesPersonID
group by pc.FirstName, pc.LastName, pc.ContactID
order by sum(LineTotal) desc

...we have an automatically updating RSS feed and when that RSS feed is given to an InfoBlock, you get the following:

[image: InfoBlock rendering of the feed]

InfoBlocks could be placed all over a web site or intranet to give quick and easy access to continually updating information.  The InfoBlock control would also register the feed with modern web browsers that had integrated RSS support.  Furthermore, since it was styled properly in CSS, there's no reason for it to be a block at all.  It could be a horizontal list, a DOM-based window, or even a ticker as CSS and modern AJAX techniques allow.

DFF relied on RSS.NET for syndication feed creation and on both RSS.NET and Atom.NET for aggregation.  It also used LLBLGen Pro a bit to access the data from SQL Server.  As I've promised with all my projects, they will be updated as new technologies are publicly released.  Therefore, DFF has been completely updated for .NET 3.5 technologies including LINQ and WCF.

I've also decided to continue down my slippery slope of a change in product naming philosophy.  I've moved from the Microsoft marketing philosophy of "add more words to the title until it's so long to say that you require an acronym" to the more Linux and O'Reilly approaches of "choose a random weird-sounding word and leave it be" and "pick a weird animal", respectively.  I've also been moving more towards the idea of picking a cool name and leaving it as is.  This is in contrast to Microsoft's idea of picking an awesome name and then changing it to an impossibly long name right before release (i.e. Sparkle, Acrylic, and Atlas).  Therefore, I decided to rename DFF to Squid.  I think this rivals my Dojr.NET and Prominax (to be released-- someday) projects as having the weirdest and most random name I've ever come up with.  I think it may have something to do with SQL and uhhhh... something about a GUID.  Dunno.

Squid follows the same everything as DFF; however, the dependencies on RSS.NET and ATOM.NET were completely removed.  This was possible due to the awesome syndication support in WCF 3.5.  Also, all reliance on LLBLGen Pro was removed.  LLBLGen Pro (see my training video here) is an awesome system and is the only enterprise-class O/R mapping solution in existence.  NHibernate should not be considered enterprise-class, and its usability is almost through the floor.  Free in terms of up-front costs does not mean free in terms of usability (something Linux geeks don't seem to get).  However, given that LINQ is built into .NET 3.5, I decided that all my shared and open-source projects should be using LINQ, not LLBLGen Pro.  The new LLBLGen Pro uses LINQ and, when it's released, should absolutely be used as the primary solution for enterprise-class O/R mapping.

Let me explain a bit about the new syndication feature in WCF 3.5 and how it's used in Squid.  Creating a syndication feed in WCF requires a WCF endpoint just like everything else in WCF.  This endpoint will be part of a service and will have an address, binding, and contract.  Nothing fancy yet, as the sweetness is in the details.  Here's part of the contract Squid uses for its feed service (don't be jealous of the VS2008 theme -- see Scott Hanselman's post on VS2008 themes):

namespace Squid.Service
{
    [ServiceContract(Namespace = "http://www.netfxharmonics.com/services/squid/2008/03/")]
    public interface ISquidService
    {
        [OperationContract]
        [WebGet(UriTemplate = "GetFeedByTitle/")]
        Rss20FeedFormatter GetFeedByTitle(String title);

        //+ More code here
    }
}

Notice the WebGet attribute.  This is applied to signify that this will be part of an HTTP GET request.  This relates to the fact that we are using a new WCF 3.5 binding called the WebHttpBinding.  This is the same binding used by JSON and POX services.  There are actually a few new attributes, each of which provides its own treasure chest (see later in this post, where I mention a free chapter on the topic).  The WebGet attribute has an awesome property on it called UriTemplate that allows you to match parameters in the request URI to parameters in the WCF operation contract.  That's beyond cool.

The service implementation is extremely straightforward.  All you have to do is create a SyndicationFeed object, populate it with SyndicationItem objects, and return it in the constructor of the Rss20FeedFormatter.  Here's a non-Squid example:

SyndicationFeed feed = new SyndicationFeed();
feed.Title = new TextSyndicationContent("My Title");
feed.Description = new TextSyndicationContent("My Desc");
List<SyndicationItem> items = new List<SyndicationItem>();
items.Add(new SyndicationItem()
{
    Title = new TextSyndicationContent("My Entry"),
    Summary = new TextSyndicationContent("My Summary"),
    PublishDate = new DateTimeOffset(DateTime.Now)
});
feed.Items = items;
//+
return new Rss20FeedFormatter(feed);

You may want to make note that you can create an RSS or ATOM feed directly from a SyndicationFeed instance using the SaveAsRss20 and SaveAsAtom10 methods.

As with any WCF service, you need a place to host it and you need to configure it.  To create the host, I simply throw down a FeedService.svc file with the following page directive (I'm really not trying to have the ugliest color scheme in the world-- it's just an added bonus):

<%@ ServiceHost Service="Squid.Service.SquidService" %>

The configuration is also fairly straightforward; all we have is our previously mentioned endpoint with an address (blank, to use FeedService.svc directly), binding (webHttpBinding), and contract (Squid.Service.ISquidService).  However, you also need to remember to add the webHttp behavior or else nothing will work for you.

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="FeedEndpointBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Squid.Service.SquidService">
      <endpoint address=""
                binding="webHttpBinding"
                contract="Squid.Service.ISquidService"
                behaviorConfiguration="FeedEndpointBehavior"/>
    </service>
  </services>
</system.serviceModel>

That's seriously all there is to it: write your contract, write your implementation, create a host, and set configuration.  In other words, creating a syndication feed in WCF is no different than creating a WsHttpBinding or NetTcpBinding service.  However, what about reading an RSS or ATOM feed? This is even simpler.

To read a feed, all you have to do is create an XML reader with the data source of the feed and pass that off to the static Load method of the SyndicationFeed class.  This will return an instance of SyndicationFeed which you may iterate or, as I'm doing in Squid, transform with LINQ.  I actually liked how my server control used an internal repeater instance and therefore wanted to continue to use it.  So, I kept my ITemplate object (RssListTemplate) the same and used the following LINQ to transform a SyndicationFeed to what my ITemplate was already using:

Object bindingSource = from entry in feed.Items
                       select new SimpleFeedEntry
                       {
                           DateTime = entry.PublishDate.DateTime,
                           Link = entry.Links.First().Uri.AbsoluteUri,
                           Text = entry.Content != null ? entry.Content.ToString() : entry.Summary.Text,
                           Title = entry.Title.Text
                       };

Thus, with .NET 3.5 I was able to remove RSS.NET and ATOM.NET completely from the project.  LINQ also, of course, helped me with my database access and therefore allowed me to remove my dependency on my LLBLGen Pro-generated DAL:

using (DataContext db = new DataContext(Configuration.DatabaseConnectionString))
{
    var collection = from p in db.FeedCreations
                     where p.FeedCreationTitle == title
                     select p;
    //+ More code here
}

Thus, you can use Squid in your existing .NET 3.5 system with little impact to anything.  Squid is what I use in my Minima blog engine to provide the boxes of information in the sidebar.  I'm able to modify the data in the Snippet table in the Squid database to modify the content and order in my sidebar.  Of course I can also easily bring in RSS/ATOM content from the web with this as well.

You can get more information on the new web support in WCF 3.5 by reading the chapter "Programmable Web" (free chapter) in the book Essential WCF for .NET 3.5 (click to buy).  This is an amazing book that I highly recommend to all WCF users.

Links

Creating JavaScript Components and ASP.NET Controls

Every now and again I'll actually meet someone who realizes that you don't need a JavaScript framework to make full-scale AJAX applications happen... but rarely in the Microsoft community.  Most people think you need Prototype, jQuery, or the ASP.NET AJAX framework in order to do anything from networking calls to DOM building to component creation.  Obviously this isn't true.  In fact, when I designed the Brainbench AJAX exam, I specifically designed it to test how effectively you can create your own full-scale JavaScript framework (now, how well the AJAX developer did on following my design, I have no idea).

So, today I would like to show you how you can create your own strongly-typed ASP.NET-based JavaScript component without requiring a full framework.  Why would you not have Prototype or jQuery on your web site?  Well, you wouldn't.  Even Microsoft-oriented AJAX experts recognize that jQuery provides an absolutely incredible boost to their applications.  However, when it comes to my primary landing page, I need that to be extremely tiny.  Thus, I rarely include jQuery or Prototype on that page (remember, Google makes EVERY page a landing page, but I mean the PRIMARY landing page).

JavaScript Component

First, let's create the JavaScript component.  When dealing with JavaScript, if you can't do it without ASP.NET, don't try it in ASP.NET.  You only use ASP.NET to help package the component and make it strongly-typed.  If the implementation doesn't work, then you have more important things to focus on.

Generally speaking, here's the template I follow for any JavaScript component:

window.MyNamespace = window.MyNamespace || {};
//+
//- MyComponent -//
MyNamespace.MyComponent = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.myParameter) {
                this._myParameter = init.myParameter;
            }
            else {
                throw 'myParameter is required.';
            }
        }
    }
    ctor.prototype = {
        //- myfunction -//
        myfunction: function(t) {
        }
    };
    //+
    return ctor;
})( );

You may then create the component like the following anywhere in your page:

new MyNamespace.MyComponent({
    host: 'hostName',
    myParameter: 'stuff here'
 });

Now on to a sample component, but, first, take note of the following shortcuts, which allow us to save a lot of typing:

var DOM = document;
var $ = function(id) { return document.getElementById(id); };

Here's a sample Label component:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- myfunction -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

With the above JavaScript code and "<div id="host"></div>" somewhere in the HTML, we can use the following to create an instance of a label:

window.lblText = new Controls.Label({
    host: 'host',
    initialText: 'Hello World'
});

Now, if we had a button on the screen, we could handle its click event and use that to set the text of the label, as follows:

<div>
    <div id="host"></div>
    <input id="btnChangeText" type="button" value="Change Value" />
</div>
<script type="text/javascript" src="Component.js"></script>
<script type="text/javascript">
    //+ in reality you would use the dom ready event, but this is quicker for now
    window.onload = function( ){
        window.lblText = new Controls.Label({
            host: 'host',
            initialText: 'Hello World'
        });
         window.btnChangeText = $('btnChangeText');
         //+ in reality you would use a multi-cast event
         btnChangeText.onclick = function( ) {
            lblText.setText('This is the new text');
         };
    };
</script>

Thus, components are simple to work with.  You can do this with anything from a simple label to a windowing system to a marquee to any full-scale custom solution.

ASP.NET Control

Once the component works, you may then package the HTML and strongly-type it for ASP.NET.  The steps to doing this are very simple and once you do it, you can just repeat the simple steps (sometimes with a simple copy/paste) to make more components.

First, we need to create a .NET class library and add a reference to the System.Web assembly.   Next, add the JavaScript component to the .NET class library.

Next, in order to make the JavaScript file usable by your class library, you need to make sure it's set as an Embedded Resource.  In Visual Studio 2008, you do this by going to the properties window of the JavaScript file and changing the Build Action to Embedded Resource.

Then, you need to bridge the gap between the ASP.NET and JavaScript world by registering the JavaScript file as a web resource.  To do this you register an assembly-level WebResource attribute with the location and content type of your resource.  This is typically done in AssemblyInfo.cs.  The attribute pattern looks like this:

[assembly: System.Web.UI.WebResource("AssemblyName.FolderPath.FileName", "ContentType")]

Thus, if I were registering a JavaScript file named Label.js in the JavaScript.Controls assembly, under the _Resource/Controls folder, I would register my file like this:

[assembly: System.Web.UI.WebResource("JavaScript.Controls._Resource.Label.js", "text/javascript")]

Now, it's time to create a strongly-typed ASP.NET control.  This is done by creating a class which inherits from the System.Web.UI.Control class.  Every control in ASP.NET, from the TextBox to the GridView, inherits from this base class.

When creating this control, we want to remember that our JavaScript control contains two required parameters: host and initialText.  Thus, we need to add these to our control as properties and validate these on the ASP.NET side of things.

Regardless of your control though, you need to tell ASP.NET what files you would like to send to the client.  This is done with the Page.ClientScript.RegisterClientScriptResource method, which accepts a type and the name of the resource.  Most of the time, the type parameter will just be the type of your control.  The name of the resource must match the web resource name you registered in AssemblyInfo.  This registration is typically done in the OnPreRender method of the control.

The last thing you need to do with the control is the most obvious: do something.  In our case, we need to write the client-side initialization code to the client.

Here's our complete control:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);

        //+
        //- @HostName -//
        public String HostName { get; set; }

        //- @InitialText -//
        public String InitialText { get; set; }

        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }

        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(HostName))
            {
                throw new InvalidOperationException("HostName must be set");
            }
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            host: '" + HostName + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

The code written to the client may look kind of crazy, but that's because it's written very carefully.  First, notice it's wrapped in a script tag; this is required.  Next, notice all the code is wrapped in a (function( ) { })( ) block.  This is a JavaScript containment technique: anything defined inside it exists only for the duration of execution.  In this case, the onLoad variable exists inside the function and only inside the function, and thus will never conflict with anything outside of it.  Next, notice I'm attaching the onLoad logic to the window load event.  This isn't technically the correct way to do it, but it's the way that requires the least code and is only here for the sake of the example.  Ideally, we would write (or reuse a prewritten) event helper that lets us bind handlers to events without having to check whether we're dealing with the lameness known as Internet Explorer (it uses window.attachEvent, while real web browsers use addEventListener).
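For illustration, here's a minimal sketch of the kind of event helper alluded to above (the name addEvent and its exact shape are my own, not from any particular framework):

```javascript
// Minimal cross-browser event binding helper (a sketch; the name addEvent
// is hypothetical, not part of any framework referenced in this post).
function addEvent(target, name, handler) {
    if (target.addEventListener) {
        // standards-compliant browsers
        target.addEventListener(name, handler, false);
    }
    else if (target.attachEvent) {
        // older Internet Explorer
        target.attachEvent('on' + name, handler);
    }
    else {
        // last resort: DOM0 property assignment
        target['on' + name] = handler;
    }
}
```

With such a helper in place, the rendered block above could simply call addEvent(window, 'load', onLoad) and skip the branching entirely.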

Now, having this control, we can compile our assembly, add a reference to it from our web site, and register the control with our page or our web site.  Since this is a "Controls" namespace, it has the feel that it will contain multiple controls, so it's best to register it in web.config for the entire web site to use.  Here's how this is done:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="c" assembly="JavaScript.Controls" namespace="JavaScript.Controls" />
      </controls>
    </pages>
  </system.web>
</configuration>

Now we are able to use the control in any page on our web site:

<c:Label id="lblText" runat="server" HostName="host" InitialText="Hello World" />

As mentioned previously, this same technique for creating, packaging, and strongly-typing JavaScript components can be used for anything.  Having said that, the example I have just provided borders on the very definition of useless.  No one cares about a stupid host-controlled label.

If you don't want a host-model, but prefer the in-place model, you need to change a few things.  After the changes, you'll have a template for creating any in-place control.

First, remove anything referencing a "host".  This includes client-side validation as well as server-side validation and the Control's HostName property.

Next, put an ID on the script tag.  This ID will be the ClientID suffixed with "ScriptHost" (or whatever you want).  Then, you need to inform the JavaScript control of the ClientID.

Your ASP.NET control should basically look something like this:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);

        //+
        //- @InitialText -//
        public String InitialText { get; set; }

        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }

        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"" id=""" + this.ClientID + @"ScriptHost"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            id: '" + this.ClientID + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

Now you just need to make sure the JavaScript control knows to place itself where it has been declared.  To do this, you create a new element and insert it into the browser DOM immediately before the current script block.  Since we gave the script block an ID, this is simple.  Here's basically what your JavaScript should look like:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            if (init.id) {
                this._id = init.id;
                //+
                this.DOMElement = DOM.createElement('span');
                this.DOMElement.setAttribute('id', this._id);
            }
            else {
                throw 'id is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        else {
            throw 'init is required.';
        }
        //+
        var scriptHost = $(this._id + 'ScriptHost');
        scriptHost.parentNode.insertBefore(this.DOMElement, scriptHost);
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- setText -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

Notice that the JavaScript control constructor creates a span with the specified ID, grabs a reference to the script host, inserts the element immediately before the script host, then sets the text.
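In terms of the standard DOM API (the DOM and $ helpers used above are assumed here to be thin wrappers over document.createElement/createTextNode and document.getElementById), the core of that constructor boils down to something like this sketch (placeBeforeScriptHost is a hypothetical name for illustration):

```javascript
// Sketch of the insert-before-script-host technique using the raw DOM API.
// (placeBeforeScriptHost is a hypothetical helper name, for illustration only.)
function placeBeforeScriptHost(doc, id, text) {
    // create the element that will visually represent the control
    var span = doc.createElement('span');
    span.setAttribute('id', id);
    span.appendChild(doc.createTextNode(text));
    // find the script block that declared the control...
    var scriptHost = doc.getElementById(id + 'ScriptHost');
    // ...and insert the new element immediately before it
    scriptHost.parentNode.insertBefore(span, scriptHost);
    return span;
}
```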

Of course, now that we have made these changes, you can use your in-place JavaScript control without ASP.NET by throwing something like the following into your page:

<script type="text/javascript" id="lblTextScriptHost">
    window.lblText = new Controls.Label({
        id: 'lblText',
        initialText: 'Hello World'
    });
</script>

So, you can create your own JavaScript components without requiring jQuery or Prototype dependencies.  But if you are using jQuery or Prototype (and you should be, even if you are using ASP.NET AJAX; that's not a full JavaScript framework), you can use this same ASP.NET control technique to package all your controls.
