
New XAG Feature - Better Generics

In the original release of XAG, properties could be generic only by using aliases. That is, if you wanted to have a property with a type of "Collection<Item>" you would have to create an alias like this:

using LineItemsCollection = System.Collections.ObjectModel.Collection<XagNewGenericsExample.Item>;

...

private LineItemsCollection lineitems;

public LineItemsCollection LineItems {
    get { return lineitems; }
    set { lineitems = value; }
}

which in XAG's markup looked like this:

<Aliases>
  <Alias x:Key="LineItemsCollection" Name="Whatever" Type="System.Collections.ObjectModel.Collection[{Type Item}]" />
</Aliases>
<Properties>
  <LineItems Type="{Type LineItemsCollection}" />
</Properties>

That's not really a bad deal as it really helps to enforce proper type design. For example, many times I will see people doing this:

public class MyAlias : Collection<Item> {}

MyAlias is NOT an alias for Collection<Item>. Try to compare them in code, they won't match up. However, in the first example "LineItemsCollection" and "Collection<Item>" are the same.
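
To see the difference concretely, here's a quick sketch (assuming Item, MyAlias, and the alias are declared as above, and that this code sits inside the XagNewGenericsExample namespace):

using System;
using System.Collections.ObjectModel;
using LineItemsCollection = System.Collections.ObjectModel.Collection<XagNewGenericsExample.Item>;

...

// The using-alias denotes the exact same constructed type, so this prints True.
Console.WriteLine(typeof(LineItemsCollection) == typeof(Collection<Item>));

// The subclass is a brand new type, so this prints False.
Console.WriteLine(typeof(MyAlias) == typeof(Collection<Item>));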

That said, I wanted to add something a bit more flexible. So I wrote an extension markup subsystem for XAG that allows for more complex generics. Here's an example of a property using the new generics system (assuming you have the System.Collections.ObjectModel namespace imported):

<LineItems Type="{GenericType Name=Collection, Types=({Type Item})}" />

As you can see it's completely inline. Now here's a more complex example:

<ComplexItems Type="{GenericType Name=Dictionary, Types=(Int32, {GenericType Name=Collection, Types=({Type Item})})}" />

This XAG property translates to the following C# property:

private Dictionary<Int32, Collection<Item>> complexitems;

public Dictionary<Int32, Collection<Item>> ComplexItems {
    get { return complexitems; }
    set { complexitems = value; }
}

Here's a complete example you can try on your own.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/" DefaultNamespace="XagNewGenericsExample">
  <Item x:Key="Item" Type="Class" AccessModifier="Public" />
  <ItemSet x:Key="ItemSet" Type="Class" AccessModifier="Public">
    <Imports>
      <Import Name="System.Collections.Generic" />
      <Import Name="System.Collections.ObjectModel" />
    </Imports>
    <Properties>
      <Items Type="{GenericType Name=Collection, Types=({Type Item})}" />
      <ComplexItems Type="{GenericType Name=Dictionary, Types=(Int32, {GenericType Name=Collection, Types=({Type Item})})}" />
    </Properties>
  </ItemSet>
</Assembly>

So now you can have more complex generic types in your XAG structure.

Links

Data Feed Framework - February 2007

Back in late 2005 I created a system which I'm now calling the Data Feed Framework (DFF) to greatly simplify the creation of RSS feeds and the aggregation of RSS and Atom feeds on web sites. The goal of the feed creation portion of the system was to allow developers and IT professionals to create RSS feeds with little to no work at all, thereby allowing them to use RSS for purposes other than blogging or news. The goal of the aggregator portion of DFF was to simplify the integration of information into ASP.NET 2.0 pages.

The Data Feed Framework has two primary components:

  • FeedCreation
  • InfoBlocks

Using the FeedCreation mechanism, developers and IT professionals can quickly create RSS feeds to monitor internal SQL databases, create sales RSS feeds for management, or monitor whatever else they want. The aggregator portion of the system, called InfoBlocks, allows web developers to declare a control in ASP.NET 2.0, give it an RSS or Atom feed, and walk away having an InfoBlock on their page containing a list of feed entries for the RSS feed.

The two portions of the system work together, but can also be used separately.

To make explaining this a bit easier, below is a link to my documentation for the system. Below that are the links for the source of the system as well as screen shots. I'll soon be creating a video to go over the basic concepts of this system.

I can't overemphasize how simple this system makes RSS creation and RSS/Atom aggregation. It's literally a matter of writing a SQL statement. Absolutely no more work than that is required. The simplicity is very much like the simplicity of WCF. Where WCF relies primarily on attributes and a config file, I rely simply on a SQL Server 2005 table called "FeedCreation". You literally write a SQL statement and give it a name and you have a complete RSS feed ready for corporate or public distribution. I'm planning on incorporating this system into the core of Minima in the next CTP.
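
To give you an idea, a feed definition is little more than a row like this (the column names here are illustrative, not the exact schema):

insert into FeedCreation (Name, SqlStatement)
values ('DailySales', 'select Title, Description from SalesSummary order by CreateDate desc');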

The license for this is simple: use it all you want for whatever you want, customize it to your own needs, but 1) ALWAYS keep the copyright notice in there and 2) neither I nor my companies are liable for anything. The typical stuff...

Update: This project has been renamed to SQL RSS Services and has been updated for .NET 3.5 using LINQ and the syndication feed support in WCF.

Related Links

Data Feed Framework Overview Video

To make understanding and usage easier, I recorded a video on my Data Feed Framework. This video is an overview, a demo, and basically video documentation. Everything in the video is covered in the documentation file I posted on the original blog entry.

Data Feed Framework (DFF) can be used to greatly simplify the creation of RSS feeds and the aggregation of RSS and Atom feeds on websites. Please see the original blog entry for more information (link below).

Introductory 3D WPF Video Demo

The other night I remembered a video I recorded around July 2006 about 3D XAML development in WPF. I never publicly published the video as it wasn't really all that great, so I just used it for internal training.

The video is an accelerated demo of how you can start using WPF's 3D functionality. There's some errata in the video, so don't go taking notes... just take it as a proof-of-concept and an overall how-to.

Here are the videos...

Minima Blog Engine February 2007 CTP Released!

Over the past few months I've been getting various requests for portions (or all) of my blog engine. The project was not open or shared source... until now. Having had enough requests, I figured I should finally go back through my code and do the refactoring I've been putting off for quite some time now. I went ahead and did my cleanup, added a few more features, streamlined the API, simplified the code tremendously, fixed up the database a bit, and made a sample application to ship with it. Now I'm finally ready to release this blog engine under the name Minima as a February 2007 CTP. The files are listed below. One file is the Minima solution and the other is the database (yes, RAR only-- I don't do ZIP).

As far as licensing... it's a shared source project. Meaning, I'm going to continue development on it and release new CTPs as time goes on. I have a list of things I want to implement in future releases and I'll be shipping those in future CTPs. The license does however allow for modifications... as that's the entire point! This is a template for your own blog system and you can add to it as you see fit. However, please be warned that I'll be releasing new versions in the future. So, you may want to keep track of your changes or communicate with me about them. You really don't want to get into the situation where you say "Oh man... he released an assembly to do exactly what I wanted... uh oh... I rebuilt this entire section... this is going to get sloppy!" Just be careful with your changes. Furthermore, no matter how much you change it, you must put somewhere on your blog that your blog either uses Minima or is based on Minima. Lastly, the disclaimer is the typical disclaimer: neither I nor my company will be liable for any usage in any way, shape, or form of either this application or derivatives of it.

By the way... this is why my blog was flaky lately. I've been doing constant deployments to production, which caused all kinds of problems as this web site was my QA system.

Now, here are the release notes as seen in the ReleaseNotes.xml in the MinimaLibrary project in the RAR. Please pay special attention to the "Technology" and "As a Training Tool" sections as it explains the technology in this application, which I think will serve as an example for each of us in many areas. This is why I'm labeling this entry with so many labels.

Purpose

Minima is designed to give developers a minimalistic template for creating a feature rich alternative to Blogger, Wordpress, and other large-scale blogging systems in a manner consistent with the technologies and design paradigms of ASP.NET 2.0, XHTML, CSS, ECMAScript, and the Framework Design Guidelines.

Minimalistic?

Minima is minimalistic in a number of respects. First, it does not overload itself with every possible feature in the known universe. Next, it's designed to put extra features as add-ons in an effort to keep the code somewhat maintainable. Furthermore, the primary way of interacting with Minima is a single facade (a class designed to make interaction with the internal mechanics easier) with very understandable methods. This facade is actually the API exposed as a WCF service. Finally, in this release there is no client application; however, as I say, there is a very easy to use API. It should cover most everything you need.

There are also other dimensions to its minimalism. For example, I put in my mini-exception monitoring system, which notifies me of any exceptions thrown from the web site. I could have used the Application Blocks, but I went the more minimal route instead. Be clear on this: I'm a complete minimalist and purist. I refuse to have multiple computers, never put two toppings on my ice cream, hate putting anything on my sandwiches, never use MODs for games, NEVER wear shirts with logos, and never wear more than 2 colors at a time. I hate stuff that gets overly complex. So, I'm a minimalist and this fits me.

Blog Management Application?

There is no management application in this release. What I personally use is a small interface I wrote in WPF, which communicates via WCF to the primary server. It was the first real WPF application I wrote and I wrote it before I deeply understood XAML, so I wrote the entire thing using C#. (Those of you who were ASP/PHP masters before learning ASP.NET and therefore wrote your first project in pure C# without any markup will know what I mean.) I'm rebuilding it now in mostly XAML with a little code here and there for WCF interaction.

Having said all that, you can very easily write your own tool. Even still, I find SQL Server Management Studio to be one of the best front-ends ever made.

Windows Communication Foundation

The primary way to communicate with Minima is the MinimaFacade class. This class is used internally to get the information for the web site. It's also what you should use when writing your own management tool. Looking at the class you will ask yourself "Why in the world isn't this thing static!?". I didn't make it static because I wanted to apply a ServiceContract interface to it thereby giving it exposure as a potential WCF service. The web site, however, does use it statically via the MinimaFacadeCache class. Anyway, the point is, you can easily write your own remote management application using WPF, Winforms, or ASP.NET 2.0 by using WCF. Of course, if you want a secure channel with WCF... that you will have to add on your own as I didn't have an SSL certificate for testing purposes.
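
To illustrate the pattern, here's a minimal sketch (hypothetical names, not Minima's actual contract):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IBlogService
{
    [OperationContract]
    String GetBlogEntryTitle(String entryGuid);
}

// The facade stays an instance class so it can implement the contract
// and be exposed as a WCF endpoint (or be called directly).
public class BlogFacade : IBlogService
{
    public String GetBlogEntryTitle(String entryGuid) {
        // Look the entry up through the DAL and return its title.
        return String.Empty;
    }
}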

Potential Future Changes

There are some things I would definitely like to change in future CTPs of Minima. I have an entire list of things I want to either change, fix, or add. More information is forthcoming.

Primary Features

The primary features in Minima are just the ones that I just HAD to have. If I didn't absolutely need the feature, I probably didn't add it (but may in the future!) A few things I needed are: "fake" paths or archives and labels, "fake" URLs for each blog entry, multiple "fake" URLs for each blog entry (sometimes I have a typo in the title of a blog entry, but by the time I find out the blog entry is already popular--so I can't merely fix it--I need two URLs to point to the same thing), almost completely database driven (including URL mappings), labels (not folders!, I wanted many labels per blog entry), pure CSS layout and style, pure XHTML structure, and the ability to add, remove, or change a major feature on a whim! Now that last one is very important... if I want to change something, I can. This ability came in handy when I went from Blogger to my own engine and in the process lost my automatic Technorati ping. That's something I quickly added though.

Technology

The DAL was generated using LLBLGen using the Self-Servicing template in a two-class scenario. Everything was written in C# 2.0 using ASP.NET 2.0 with a few bits of custom AJAX functionality (I didn't want to use Atlas on this one). All style and layout is CSS as only people who are in desperate need of getting fired use tables for layout. The Technorati ping functionality is based on an abridgement of my XML service framework. The RSS feed creation ability is actually a function of the RSS.NET framework. I would have added Atom, but I've had major problems with the Atom.NET framework in the past. Finally, the database is SQL Server 2005 (Express in my case), using one stored procedure (which I would like to refactor into LLBLGen).

As a Training Tool

One of my intentions regarding Minima is to use it as a sample application for .NET training. For example, this is a great way to demonstrate the power and capabilities of HTTPHandlers. It's also a good example of how LLBLGen can be effectively utilized. Furthermore, it also demonstrates how you can quickly and efficiently use WCF to turn a simple facade into a multi-endpoint service. It also demonstrates manual AJAX, CSS theming, HttpWebRequest, proper use of global.asax, framework design guidelines, and type organization.

The API

For now, just look at the MinimaFacade and everything should become apparent. I'll be posting API samples in the future. See the Samples section below for some examples on using the API.

Update: Minima is now in the NetFXHarmonics Subversion repository at http://svn.netfxharmonics.com/Minima/tags/.

New XAG Feature - Simplified DTO Creation

After finishing up the first CTP of Minima I have to say that creating my own objects for data transfer (called DTOs - Data Transfer Objects) is painfully lame and using XAG to create an entire project for everything is a bit overkill. So, I added a middle ground feature into XAG to allow all of us to very efficiently generate DTOs.

So, now, you can go to XAG, select the DTO Class Template (or write your own), select "Single Type Only" and it will show you the class ON SCREEN in a text area for easy copy/paste into your own project. This should make data transfer in the pre-C# 3.0 world MUCH easier.

Here's an example of what you can do.... you simply put the following in (you could have XAG put the DataContract and DataMember attributes on there too-- see the XAG WCF template), select Single Type Only mode, hit Create, and you get the below code instantaneously on the screen. I've been an advocate of AJAX for 8 years now and this should serve as an example of why you should use it too. Link to XAG is below.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
    <Person x:Key="Person" Type="Class" AutoGenerateConstructorsByProperties="True" AccessModifier="Public" Namespace="AcmeCorp.Sales">
      <Properties>
        <FirstName Type="String" />
        <LastName Type="String" />
        <Address1 Type="String" />
        <Address2 Type="String" />
        <City Type="String" />
        <State Type="String" />
        <PostalCode Type="String" />
      </Properties>
    </Person>
</Assembly>
using System;
using System.Runtime.Serialization;

namespace AcmeCorp.Sales
{
    public class Person
    {
        private String firstname;
        private String lastname;
        private String address1;
        private String address2;
        private String city;
        private String state;
        private String postalcode;

        public String FirstName {
            get { return firstname; }
            set { firstname = value; }
        }

        public String LastName {
            get { return lastname; }
            set { lastname = value; }
        }

        public String Address1 {
            get { return address1; }
            set { address1 = value; }
        }

        public String Address2 {
            get { return address2; }
            set { address2 = value; }
        }

        public String City {
            get { return city; }
            set { city = value; }
        }

        public String State {
            get { return state; }
            set { state = value; }
        }

        public String PostalCode {
            get { return postalcode; }
            set { postalcode = value; }
        }

        public Person(String firstName, String lastName, String address1, String address2, String city, String state, String postalCode) {
            this.FirstName=firstName;
            this.LastName=lastName;
            this.Address1=address1;
            this.Address2=address2;
            this.City=city;
            this.State=state;
            this.PostalCode=postalCode;
        }

        public Person( ) {
        }
    }
}
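
For reference, had I selected the WCF template mentioned above, the generated type would also carry the serialization attributes, along these lines (only the first property shown):

using System;
using System.Runtime.Serialization;

namespace AcmeCorp.Sales
{
    [DataContract]
    public class Person
    {
        private String firstname;

        [DataMember]
        public String FirstName {
            get { return firstname; }
            set { firstname = value; }
        }

        // ...the remaining properties follow the same pattern.
    }
}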

Links

Strange SQL Server Column Names

OK, this is strange stuff... Today I was working with Frans Bouma on an LLBLGen problem I was having and we came to an interesting realization that surprised us both. This sample table should explain it...

create table Cursing (
[@)^!@*&%';] int primary key identity(1, 1) not null,
[%!@#$%%%] varchar(20),
[\234@#$+_!@#$=] bit,
[@#%*@#$%#$%%] datetime
)

That actually works!

We actually found a column with an odd name similar to the above in a table in a database at the company I work at. The table literally had a column named "%". Seriously. I'm only partly surprised as the table in question was created by Microsoft Navision 3.x. That thing has some REALLY weird table names... and now I see it has even weirder column names.

Why in the WORLD can we have weird characters as field names? Who in the WORLD requested that feature? Why? That's partly rhetorical, but it seriously makes me wonder!

WCF Relative Binding Speed Demo Updated

Last year I released a simple demo showing the relative binding speeds of the various native bindings in WCF. Since that time, the WCF syntax has changed a bit, making my original code obsolete. Now I'm posting the source code based on the final version of WCF.

Here are the changes I made (most are simply to update):

  • The client proxies changed from "Proxy" to "Client"
  • <service type="..."> changed to <service name="...">
  • The 'mex' binding was added.
  • ServiceEndpoint was moved to System.ServiceModel.Description
  • On the service interface, the ServiceContract attribute ProtectionLevel was set to ProtectionLevel.None so that security is not required, thus giving a more accurate benchmark.
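
That last change, for reference, looks something like this on the contract (the interface and method here are just for illustration):

using System;
using System.Net.Security;
using System.ServiceModel;

[ServiceContract(ProtectionLevel = ProtectionLevel.None)]
public interface IBenchmarkService
{
    [OperationContract]
    String Echo(String message);
}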

You can use these as a basis for your own services, but you may also wish to look at my XML Assembly Compiler, which has a WCF service template. You can simply select the template, perhaps design a few types and interfaces, and let it create your entire WCF project for you.

Links

Comments Fixed

A few weeks ago I started to flood my public blog with parts of the next CTP of Minima. Then I took a break for a week or so, but in my flooding I carelessly broke comments and didn't find out about it until today. Oops!

So, if you left a comment anywhere you'll have to resubmit the comment.

Sorry!

English Standard Version Bible Web Service via WCF

The WCF version of this died years ago. The new versions are at https://bitbucket.org/davidbetz/esv-bible-services-for-.net

Many of the applications that I create are Bible based applications and my designs are made easier thanks to the English Standard Version of Scripture. Long before I used it for development, I fell in love with it and made it my primary Bible version, not only because it's an incredible Bible translation with incredible accuracy and amazing readability, but also because it's so widely accessible. Their website at http://www.gnpcb.org/esv/ is by far one of my favorite websites. I can read it on my screen, adjusting the font-size on a whim, or click on an audio link to listen to any chapter in a variety of formats. Furthermore, they have great Firefox support and great technical articles explaining the internals of the system.

Not only that, you can also access the ESV text through e-mail and through a SOAP interface. Being someone deeply involved with interop and someone who has committed myself to the study of Scripture, I am overjoyed to have the ability to directly access the ESV Bible data source. That said, nowadays I'm a WCF user and the ESV Bible Web Service interface has methods and properties which, when used in WCF, are very Java-ish. Not a problem... I went ahead and got the svcutil code and modified it a bit to make the proxy follow the .NET Framework Design Guidelines. I cleaned up the svcutil output, added XML comments, added an enumeration class, and then split it all into different files for easier modification. Feel free to take the classes and combine them back into one proxy file as you wish. Here is a sample application using the fixed proxy:

using (var client = new ESVBibleClient( )) {
    var options = new OutputOptions( );
    options.IncludeFootnotes = true;
    options.IncludeCopyright = true;
    options.OutputFormat = OutputFormat.PlainText;

    String text = client.DoPassageQuery("TEST", "1 John 2", options);

    Console.WriteLine(text);
    Console.ReadLine( );
}

More Flexible XAG Properties

Recently, I found myself constantly writing classes that have many properties with no backing fields (i.e. use ViewState) or are get-only. So, I added this feature to XAG. Now, you can set the "Mode" to "GetOnly" or "SetOnly" and "Backing" to either "true" or "false". Keep in mind that set-only (write-only) properties aren't exactly the best thing to use. If you find yourself wanting to do that, you should really reexamine what you're doing.

Furthermore, I wanted the ability to have values cascade to others. So, now you can set things like the Mode, Backing, Static, and AccessModifier on the entire Properties or Methods group instead of the individual properties or methods.

Here's an example of all this:

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2007/02/" DefaultNamespace="MyCompany.MyProject.Configuration">
  <ProjectConfiguration Type="Class" AccessModifier="Internal" Static="True">
    <Properties Mode="GetOnly" Backing="False" AccessModifier="Internal">
      <ProjectTitle Type="String" />
      <DatabaseConnectionString Type="String" />
      <ErrorEmailToAddress Type="String" />
      <ErrorEmailFromAddress Type="String" />
      <AutoNotifyOnError Type="Boolean" />
    </Properties>
  </ProjectConfiguration>
  <Configuration>
    <appSettings>
      <add key="ProjectTitle" value="My Title" />
      <add key="DatabaseConnectionString " value="..." />
      <add key="ErrorEmailToAddress " value="dfb@davidbetz.net" />
      <add key="ErrorEmailFromAddress " value="no-reply@tempuri.org" />
      <add key="AutoNotifyOnError " value="False" />
    </appSettings>
  </Configuration>
</Assembly>
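
In case you're wondering, a get-only, no-backing property generated from the markup above would presumably read straight from configuration, along these lines (a sketch of a single property; the exact generated code may differ):

using System;
using System.Configuration;

internal static class ProjectConfiguration
{
    internal static String ProjectTitle {
        get { return ConfigurationManager.AppSettings["ProjectTitle"]; }
    }

    // ...the other properties follow the same pattern.
}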

In addition, I changed the XAG interface a bit to make it much more user friendly. Personally, I use my WPF version, which I'll probably be releasing soon (as well as documentation for the Web Service).

.NET Slides

Back in 2005 I taught a class where I gave an overview of various parts of the .NET universe (at the time, the WinFX universe). I covered everything from C# to data binding to ASP.NET to service-oriented architecture to web standards. My goal was to give a familiarization with each of the topics so that the students could then learn the topics on their own in a way that best fit them.

My first session, however, was a presentation experiment. Instead of using really nasty verbose slides, I had just a few words on a slide and on one slide I had simply the number 42. Don Box would have loved my presentation. The point of the experiment was to make sure that people were paying attention to the topic and not simply reading a useless slide. I really think that PowerPoint is a massive hindrance to education. You can access my experimental presentation, named .NET Overview, below.

My next to last session was on web standards and in this session I simply told a story and then gave a few examples showing how web standards development is the only web development that is acceptable. You can access this presentation, named Web Standards Presentation, below.

Below is also a link to a video where Don Box talks about giving great technical presentations. I would recommend this video to anyone who gives talks on really any topic. Don't just watch it, study it. It takes practice.

The Universal HttpHandlerFactory Technique

In the past two months I've done more with HttpHandlers than probably anything else. One technique that I find myself using a lot is one that uses a universal HttpHandlerFactory to filter ALL ASP.NET 2.0 traffic. In fact, this is exactly what I'm doing in the next release of Minima. The next release actually has many HttpHandlers, each utilized by a master HttpHandlerFactory. Here's an example of what I'm doing and how you can do it too:

First, I create a wildcard mapping in IIS to:

c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

When doing this you want to make sure to uncheck the "Verify that file exists" option, or else your pages will have to actually exist. For most of the HttpHandlers that I create, the pages that are being accessed are completely imaginary.

Next, I create my HttpHandlerFactory. The normal way of doing this is to create a class which implements the IHttpHandlerFactory interface, have your logic in the GetHandler method do whatever it must do, and then return an instance of a class implementing IHttpHandler. If I don't want to do anything fancy with the request, I can just process it as an ASPX page. Unfortunately, since we are going to run EVERYTHING in the website through the HttpHandlerFactory, we can't do that. Why? The HttpHandlerFactory that processes ASPX pages is PageHandlerFactory, which you can't return from your custom HttpHandlerFactory. Here's what I mean...

public IHttpHandler GetHandler(HttpContext context, String requestType, String url, String pathTranslated) {
    if (url.EndsWith("/feed/")) {
        return new FeedHttpHandler( );
    }
    else {
        return new PageHandlerFactory( );
    }
}

The preceding code in no way compiles. First off, PageHandlerFactory implements IHttpHandlerFactory (well, IHttpHandlerFactory2, which in turn implements IHttpHandlerFactory), NOT IHttpHandler. Secondly, though completely irrelevant given the first reason, the single constructor for PageHandlerFactory is marked as internal.

Fortunately, however, we can get around this by actually inheriting from PageHandlerFactory and overriding the GetHandler method, which is marked as virtual.

Here's an example that does compile and works beautifully:

namespace MyProject.Web.HttpExtensions
{
    public class MyProjectHttpHandlerFactory : PageHandlerFactory
    {
        public override IHttpHandler GetHandler(HttpContext context, string requestType, string virtualPath, string path) {
            if (context.Request.Url.AbsoluteUri.EndsWith("/feed/")) {
                return new FeedHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/files/")) {
                return new FileMapperHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/service/endpoint/")) {
                return new XmlServiceHttpHandler( );
            }
            if (context.Request.Url.AbsoluteUri.Contains("/images/")) {
                return new ImageProcessorHttpHandler( );
            }
            else {
                return base.GetHandler(context, requestType, virtualPath, path);
            }
        }
    }
}

To complete the solution, you simply have to add the following in your web.config file:

<httpHandlers>
  <add verb="*" path="*" type="MyProject.Web.HttpExtensions.MyProjectHttpHandlerFactory"/>
</httpHandlers>

Now we are filtering all content through our HttpHandler. By doing this, you have full control of what URLs mean and do without having to screw with a ton of strange HttpHandlers in your web.config file. Furthermore, by doing it this way you get better control of what your web.config looks like. In Minima, for example, I have a type which controls all access to various configurations. My primary HttpHandlerFactory looks at this type to determine what paths do what and go where. You can either use the defaults in Minima or you can override them in the web.config file.

Regardless of what you do, the point is you can do anything you want. I often find myself creating virtual endpoints for web services allowing access in a variety of ways. In one service I recently created, I actually have parameters coming in as part of the URL (http://MyWebsite/Endpoint/ABCCompany/). My HttpHandlerFactory notices that a certain handler is to handle that particular type of request and then returns an instance of that particular handler. The handler then obtains the parameters from the URL and processes the request appropriately. Very little work was done in the actual setup and almost no work was done in IIS; it's all magically done via the master HttpHandlerFactory.
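
To sketch that last scenario (the names here are hypothetical), such a handler can simply pull its parameter out of the URL segments:

using System;
using System.Web;

public class EndpointHttpHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context) {
        // For http://MyWebsite/Endpoint/ABCCompany/ the last segment is the parameter.
        String[] segments = context.Request.Url.Segments;
        String company = segments[segments.Length - 1].Trim('/');

        context.Response.Write("Handled request for " + company);
    }

    public Boolean IsReusable {
        get { return true; }
    }
}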

XmlHttp Service Interop - Part 1 (Simple Service Creation)

This entry is the first in a series on XmlHttp Service Interop.

In my day job I am constantly making diverse systems communicate. I make classic ASP talk to the .NET 2.0 Framework, .NET systems talk with Java systems, a Pascal-based system communicate with a custom .NET mail server, and even make .NET components access systems from the stone age. I love interop, but I'm finding that many people don't. From what I see, it seems to be a lack of understanding more so than a lack of desire. It's actually some pretty cool stuff and you can find great books on various interop topics.

While I do work a lot with COM, ES, and web service interop, my favorite communication mechanism is via straight XmlHttp calls. It's just so easy to do and support for it is just about universal. You can take JavaScript and make a call to ASP.NET, go to a VBS WScript and make a call to a web service, or force the oldest, nastiest product in your company to communicate with WS-* services. In this part of the series, we are going to discuss XmlHttp in general and see a call from JavaScript to a manually created service endpoint.

To start off with, let's make it clear what we are and are not talking about. We are not talking about sockets or direct TCP calls, nor are we talking about a service framework. XmlHttp is a way to transfer XML over HTTP. However, even though we talk about XML, as you use XmlHttp you'll find that XML isn't a requirement at all; so, at root, what we are doing is making simple HTTP calls.

To see what we're talking about in action, let's create a simple XML service endpoint that accepts a well-defined XML format to allow the sending of e-mail. Then, let's access the service via JavaScript. This is very similar to something I recently created to allow a very old system to send e-mails using the .NET Framework. Of course, in that situation I used the programming language of the system in question (Pascal), not JavaScript.

To begin with, let's create the client. I know it seems a bit backwards, but let's look at this from the standpoint of a framework designer: look at how it will be used first, then implement the mechanics. Now, the first thing we need for this is a simple "htm" document. I want the page to be "htm" for the sake of this demonstration, simply to show that there is no server-side processing at all in this page.

Next, we need a way to access our endpoint. I'm not going to get into severe detail about how to do this in every single browser in the world, but, rather, I'm only going to show the standardized way. You can quickly do a search online to see how to extend this behavior to IE5 and IE6.

Skipping the lame setup and stuff many 6th graders can do, let's get right to the core of what we are going to do. The full implementation of everything seen here is in an accompanying VS2005 solution. It would probably be a good idea to have that open as you go through this.

To send a request to a server, simply use syntax similar to the following:

var xmlhttp = new XMLHttpRequest( );
xmlhttp.open('POST', 'Service.aspx', true);
xmlhttp.onreadystatechange = function ( ) {
    if(xmlhttp.readyState == 4) {
        alert(xmlhttp.responseText);
    }
};
xmlhttp.send(data);

This syntax works in any version of Firefox, the newer versions of Opera, and IE7 (or what I like to call "IE6.5"). Basically, what's happening here is this: you are creating a new instance of an HTTP requestor, giving it some connection information, setting a callback function, and sending some data to the service.

The part that you should look at closely is the XMLHttpRequest::open function (note: my double colon syntax is from C++ and simply means Class::Member). This obviously takes three parameters: the HTTP method, the HTTP endpoint, and a boolean stating whether the call is asynchronous. I want this to be an asynchronous call, so I'm setting the third parameter to true. I'll come back to the HTTP method in a moment and the HTTP endpoint is just the service address.

After that we see that the property XMLHttpRequest::onreadystatechange is being assigned a JavaScript anonymous function. If you are unfamiliar with these, just think of them as anonymous delegates in C# 2.0. This is the function that's going to be called when the state of the XmlHttp call changes. Inside this function there are a few properties you can look at, but here I'm only testing one: readyState. This property basically states the status of the call. Notice the XMLHttpRequest property is called "onreadystatechange", not "oncomplete". This function is actually called whenever the state of the HTTP request changes. When I test for readyState == 4 I'm looking for round trip completion. Frankly, you'll probably never touch the values 1, 2, and 3, though you could check for 0, which means that the XMLHttpRequest::open function has not yet been called. In this situation, if the readyState is 4, then I want to display a message box showing the response content, which is accessible via XMLHttpRequest::responseText. One other very important property you will definitely be using a lot is XMLHttpRequest::status. This property gives values like 404, 415, 500, and so on. If the request did a successful round trip the status will be 200, so that's something you'll probably be testing for quite a bit.

Finally we see the XMLHttpRequest::send method. This simply sends a set of data to the service... well, kind of. In XMLHttpRequest::open, the first parameter, the HTTP method, is very important. Depending on what you want to do you will either set it to GET or POST. If you are calling a pre-existing page that has no idea what an HTTP stream is, but knows about querystrings, then you will want to use GET. In this situation, you will want to put parameters in the querystring of the HTTP endpoint, that is, in the second parameter of XMLHttpRequest::open. However, if you are creating your own service, you may want to use POST instead, as using POST makes the development on both the client and service simpler. On the client, you don't pack stuff in the querystring (though you still could) and on the server, you can access the data via a stream rather than via parsing the URL or doing any iteration. As that last sentence implies, by using the POST method you send the data you want to submit to the HTTP endpoint as a parameter of the XMLHttpRequest::send function. For those of you who understand WCF terminology, you can think of the HTTP method as being analogous to the WCF binding and the HTTP endpoint as being analogous to the WCF address. The only thing analogous to the WCF contract is the XML schema you use to create your data stream.
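
To make the server side of that distinction concrete, here's a quick sketch of what each looks like in an ASP.NET code-behind (the parameter name is hypothetical):

// GET: the parameters arrive in the querystring.
String name = Request.QueryString["name"];

// POST: the data arrives as a stream, ready to be loaded directly.
XmlDocument doc = new XmlDocument( );
doc.Load(Request.InputStream);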

Now, since we are sending the information in the data variable to the service, we need to put something in it. For this service, I'm using the following XML, though it doesn't have to be XML at all.

var data = '';
data += '<Mail>';
data += '<ToAddresses>';
data += '<ToAddress>johndoe@tempuri.org</ToAddress>';
data += '</ToAddresses>';
data += '<CcAddresses>';
data += '</CcAddresses>';
data += '<BccAddresses>';
data += '</BccAddresses>';
data += '<FromAddress>no-reply@tempuri.org</FromAddress>';
data += '<Subject>XmlHttp Service Interop - Part 1</Subject>';
data += '<DateTime>03-08-07 2:26PM</DateTime>';
data += '<Body>This is the message body.</Body>';
data += '</Mail>';

Given proper event management in JavaScript, you have everything you need to have a fully functional client. Now onto the server.

As we saw when we looked at the client code, the service endpoint is Service.aspx. To help us focus on the task at hand, we aren't going to do anything fancy like using URL aliasing (aka URL rewriting) to make it look cooler, though in reality you would probably do that.

In the code behind for the Service.aspx, we have code that starts like this:

XmlDocument doc = new XmlDocument( );
doc.Load(Request.InputStream);
XmlNode documentRoot = doc.DocumentElement;
XmlNode mailRoot = documentRoot.SelectSingleNode("//Mail");

Here what we're doing is creating a new XmlDocument and loading the data streamed from the client into it. Then we are getting the root of the document via XPath. The rest of the code is in the accompanying project and simply consists of a bunch of XPath queries to get the information from the document.

After we find all the values we need in the XML document, either via XPath or another mechanism, we can do whatever we want with what we found. The important point is this: all we have to do is a simple Response.Write( ) to send data back to the client, which in turn changes the readyState in the previously seen JavaScript to 4, thus allowing the client to display the output in the alert window. It's really just as simple as this: the client sends stuff to the service, the service does something and sends stuff back.
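
In this case, once the e-mail is sent, the reply can be as simple as this (a sketch; see the accompanying solution for the real code-behind):

// Reply to the client; this becomes XMLHttpRequest::responseText on the other end.
Response.ContentType = "text/xml";
Response.Write("<MailResponse>Message sent.</MailResponse>");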

Now, we could beef this up a bit by adding some HTTP headers. This is something you may find yourself doing often. To do this, use XMLHttpRequest::setRequestHeader to set a key/value pair on the connection. Here's an example.

xmlhttp.setRequestHeader('X-App-Source', 'My Client');

That 'X-App-Source' was completely made up. You could use 'My Awesome Service Caller' if you wanted. That doesn't matter; what does matter, however, is that you put this after the call to XMLHttpRequest::open, or else you will seriously want to throw something across the room, because it's a painfully subtle error that will cause the call to fail every time.

On the server side, to access a header, simply do this:

String xAppSource = Request.Headers["X-App-Source"];

I know. You were expecting something a bit more profound. Okay, I can satisfy that need. If you have many headers and you want them all, here's what you can do.

foreach (String header in Request.Headers) {
    // Do whatever you want...
}

Whatever you do, try to fight the temptation to do this:

Dictionary<String, String> headers = new Dictionary<String, String>( );
foreach (String header in Request.Headers) {
    headers.Add(header, Request.Headers[header]);
}

As nice as that looks, if you really want to have a headers key/value pair collection, you can just do this:

NameValueCollection headers = Request.Headers;

Regardless of what you do with the headers, remember that they are there for a reason and if you do a lot of service calls, you will find yourself using HTTP headers a lot. This is something you will see in the next part of the series.

So, that's a quick overview of XmlHttp. Please see the solution provided with this post for the full code. The next part of the series discusses making manual XmlHttp calls to WCF.

Materials

XmlHttp Service Interop - Part 2 (Utilizing WCF)

Today I'm publishing the second in an article series regarding XmlHttp Service Interop. In this article, entitled "Utilizing WCF", I explain how to build WCF services, how to trace their XML messages, and how to communicate with them and other SOAP services via raw XmlHttp.

Instead of posting the entire thing on my blog, from here on out I'm going to be posting articles in their own pages to make them easier to read and easier to access. This will also make it easier for me to update and edit. You may access this article as well as others in the series by the links below.

Converting JSON to XAML

For reasons beyond human comprehension, the world felt like making a huge deal about Microsoft "revealing" Silverlight, even though WPF/E has been known about for some time now and nothing technical has changed in the slightest with the simple change of a name. Having said that... it will probably be an awesome technology and I'm sure I'll be marinating many of my future applications in it. Editing XAML in notepad is much more appealing to me than being forced to use the overpriced and overly complicated Flash.

As cool as that is, however, since most of the work I do involves pure Ajax/JavaScript clients with almost all .NET coding at the service level, I definitely find JSON (which IS a JavaScript object) easier to manage than XML (which CAN BE a JavaScript object). So, in one of my applications I have a service that provides graphical information to Silverlight in the form of JSON-serialized XAML, which is then converted into XAML on the client.

Here is an example of something the service would provide:

var jsonElement1 = {
        'Ellipse': {
        'Canvas.Left': '130',
        'Canvas.Top': '130',
        Height: '200',
        Width: '200',
        Stroke: 'Red',
        StrokeThickness: '10',
        Fill: 'SlateBlue'
    }
};

No, it's not human readable like XML is, but it's what JavaScript loves to see and it's what my service creates. Also, no, this isn't done for efficiency purposes. It doesn't lower the service "message" size in the slightest, but it does help to keep a consistent programming model across all my service calls. Furthermore, given that the data was in a database and not in XAML on the server, there's no real overhead. If, however, the data was in XAML on the server, it would be a sign of pure stupidity for me to convert that to JSON and then back to XAML.

The parsing for something like this is actually really simple: just iterate through the JSON objects and arrays and create an XML tree from them. As a reminder or reference, with regard to XML in a web browser, Firefox uses document.implementation.createDocument for XML while IE uses the MSXML2.DOMDocument COM object. Furthermore, Firefox is more strict in its usage of XML than the more familiar COM model that IE uses.

Here is an example of what I mean:

var doc = null;
if(DOM.implementation && DOM.implementation.createDocument) {
    doc = DOM.implementation.createDocument('', '', null);
    Xaml.ScanJSONElements(doc, data, doc);

    var xmlSerializer = new XMLSerializer( );
    return xmlSerializer.serializeToString(doc);
}
else {
    doc = new ActiveXObject("MSXML2.DOMDocument");
    Xaml.ScanJSONElements(doc, data, doc);
    return doc.xml;
}

As you look through the Xaml.js file provided, you will also see that Firefox is very explicit about its namespaces, while the COM model figures you will take care of them. There's nothing wrong with either approach; it's just something you will want to be aware of if you ever create XML in JavaScript.

Links

JavaScript Repeater Data Binding

In a previous post, I wrote about a way to convert JSON to XAML. The sample I provided with that post didn't really stop with a simple conversion example; it also demonstrated how to create and use a simple JavaScript data binding repeater. It's just something I threw together in two minutes to bind an array of objects to a div, to basically do in JavaScript/XHTML what ASP.NET does with the Repeater control. I think many people will be interested in this technique to see how they can bind data to a browser in a more efficient way.

As far as my requirements, I wanted this to be like ASP.NET in that the data is bound from a data source, matching the fields in the data source objects to the data fields declared in the repeater, and, after all is said and done, the original declarative code should disappear. As far as the data source, I wanted it to be a series of the same type of object... kind of like a generic List.

So, to start off with, I simply declared the code that I wanted to write as the client. Some people call this the 'Sellsian' method (after Chris Sells at Microsoft), though I simply call it... common sense (as I honestly suspect Chris would too!). So, here is the declarative code I wanted to write for my data binding:

<div id="rptData">
    <h2>Title</h2>
    <div id="title" class="bind"></div>

    <h2>XAML Output </h2>
    <div id="output" class="bind"></div>
</div>

In this situation the data source fields are matched to elements with the bind class by element id. This is much like <%#Bind("") %> in ASP.NET.

On the other side, I would like my data source to be in JSON format and I would like to be able to bind in a way that 'feels' like a static class. Here is what I decided on:

var dataSource = [{
        title: 'Ellipse',
        output: Xaml.CreateXamlFromJSON(jsonElement1)
    }, {
        title: 'Rectangle',
        output: Xaml.CreateXamlFromJSON(jsonElement2)
    }, {
        title: 'Canvas',
        output: Xaml.CreateXamlFromJSON(jsonElement3)
    }
];

DataBinder.BindTextRepeater(D('rptData'), dataSource);

In the sample above you can see that I have a data source with 3 objects, each defined with the interface of a string field named 'title' and another string field named 'output'. Furthermore, I wanted to call the method what it is: a text repeater, not a fancy object repeater (though building that shouldn't be much more difficult), so my static method is called BindTextRepeater and accepts the declarative repeater object and the data source as parameters. In my examples I use the D('id') syntax where D is simply an alias for document.getElementById. I know some people use a dollar sign for that, but that just looks really weird to me.

Now onto the code. Here is the basic shell:

var DataBinder = {
    BindTextRepeater: function(obj, ds) {

    }
}

The first thing we do in this situation is look at the data source and see what the objects look like. For this we simply need to create an array to record what fields we are looking at and iterate through the object to record its fields. Put another way... we simply need to do simple JavaScript reflection and record the object interface, something that's incredibly simple in JavaScript.

var fields = new Array( );
for(var f in ds[0]) { fields.push(f); }

Now that we know what the object looks like, let's iterate through the data source and bind each field to its proper place in the repeater. This is what the rest of the method does and it should be fairly self-explanatory, except for the following things:

First, I said that data is bound to fields with the 'bind' class. What if you had your own class on it? That's not a problem. JavaScript classes are a bit like .NET interfaces (whereas JavaScript ids are a bit like .NET classes), so you can "apply" (or implement in .NET) a few of them. So, if you wanted to apply the class "message-log" to the bindable element, you would simply have the following:

<div class="bind message-log"></div>

In this case this is possible because I'm simply checking to see if the class STARTS with "bind", rather than simply checking to see if it IS "bind":

if(obj.childNodes[e].id && obj.childNodes[e].className && obj.childNodes[e].className.substring(0, 4) == 'bind') {
    /* ... */
}

Second, if the element is found to be bindable, the method looks through the fields array to see if the data source has data for that element's field. If so, it binds. If not... there's not much it can do (though ideally you would throw an exception). One thing to note about this is that it replicates the element and binds the text as a child. This is seen in the following line:

var bindableObj = obj.childNodes[e].cloneNode(false);

When you clone a node, you can say true, which means to clone its children, or you can say false, which means to clone only that particular element. In this case, we don't need the children as this is a text repeater and we are going to put our own text in as a child. If we were to say true, we would have to go out of our way to remove the children.

If the element is not found to be bindable, it copies the element and its children, as can be seen in the cloneNode(true) call.

Third, after the data object is ready you have a duplicate of the original repeater, but now filled with data from the data source. This data object is then bound to the browser's DOM as an element immediately before the repeater template. After all data objects have been bound, the original repeater is removed. Thus, you've replaced the repeater template with data bound controls and you're done.

Here is the final implementation of the BindTextRepeater method:

var DataBinder = {
    BindTextRepeater: function(obj, ds) {
        var fields = new Array( );
        for(var f in ds[0]) { fields.push(f); }
        
        for(var r in ds) {
            var outputObj = DOM.createElement('div');

            if(ds[r]) {
                var d = ds[r];
                for(var e in obj.childNodes) {
                    if(obj.childNodes[e].nodeType && obj.childNodes[e].nodeType == 1) {
                        if(obj.childNodes[e].id && obj.childNodes[e].className && obj.childNodes[e].className.substring(0, 4) == 'bind') {
                            for(var i in fields) {
                                if(obj.childNodes[e].id == fields[i]) {
                                    var bindableObj = obj.childNodes[e].cloneNode(false);
                                    bindableObj.appendChild(DOM.createTextNode(d[fields[i]]));
                                    outputObj.appendChild(bindableObj);
                                }
                            }
                        }
                        else {
                            outputObj.appendChild(obj.childNodes[e].cloneNode(true));
                        }
                    }
                }
            }
            obj.parentNode.insertBefore(outputObj, obj);
        }

        obj.parentNode.removeChild(obj);
    }
};

Using this same approach of templating XHTML elements and reflecting JavaScript JSON data sources, you could actually create a full scale data binding solution for all your client-side data binding needs. Furthermore, since we used a JSON data source we can now bind data directly from JSON services accessed via Ajax techniques. Lastly, as I hope you can see, there wasn't really much magic in this example and absolutely no proprietary technology. It's simply a usage of what we already have and have had for many, many years.

Links

Object Range Selection and Emulating Word 2007's Formatting Box

Say you're working on a text driven web application and you want to get the selected text in order to see what was selected. With JavaScript that's easy enough, but what if you wanted to know what strongly-typed text was selected in order to recreate the selection later? This would come in tremendously handy if you want to have text annotation capabilities in your application. But, why stop there? If you are going to do that, why not add a nice pop-up window over your selection to give easy access to appropriate toolbox options? That's exactly what Word 2007 gives you and you can do the same in your JavaScript applications.

Implementing these two things involves the following tasks: assign each word (or letter if you want really fine grained control) a container span with an id and then watch for hovering over an object. When an object is hovered over, start a timer counter for the time of the hover. If hovering ends, reset the timer. When a specified amount of hover time has elapsed, see if there is a selection and if the object currently hovered over is in the selection. If so, show the toolbox. The first word and the last word selected are saved in the browser's Selection object as the 'anchorNode' and 'focusNode' objects of the selection, respectively.

Here's the meat:

// Get all spans and iterate through them making sure each REALLY exists and
// making sure each as an id.
var b = document.getElementsByTagName('span');
for(var a in b) {
    if(b[a] && b[a].id) {
    
        // Only use those with an id starting with 'wid'.
        if(b[a].id.substring(0, 3) == 'wid') {
        
            // Set the event that is called at each interval.
            b[a].timercall = function(evt){
            
                // If there is a saved object (see onmouseover event below),
                // then continue...
                if(gHoverObject) {
                
                    // Increment counter.  When there are 4 timer intervals, 
                    // then get the object information and show the hover box.
                    hoverCounter++;
                    if(hoverCounter > 3) {
                    
                        // Get the text selection
                        var selection = window.getSelection( );
                        
                        // Does the selection contain the object the cursor is currently over?
                        // false means that the object the cursor is over must be fully selected.
                        // That is, half the word being selected won't cut it.
                        if(selection && selection.containsNode(gHoverObject, false)) {
                        
                            // Save the first object id selected and the last object id selected
                            toolboxObj.start = selection.anchorNode.parentNode.id;
                            toolboxObj.end = selection.focusNode.parentNode.id;
                            toolboxObj.style.display = 'block';
                            toolboxObj.style.left = parseInt(gHoverObject.x) + 'px';
                            toolboxObj.style.top = parseInt(gHoverObject.y) + 'px';
                        }
                    }
                }
            };

            b[a].onmouseover = function(evt) {
                // When the object is hovered over, save the object.
                gHoverObject = this;
                gHoverObject.x = evt.pageX;
                gHoverObject.y = evt.pageY;
                
                this.timer = setInterval(this.timercall, 150);
                hoverCounter = 0;
            };
            
            b[a].onmouseout = function(evt) {
                // Destroy the object so the algorithm doesn't run.
                gHoverObject = null;
                clearInterval(this.timer);
                hoverCounter = 0;
            };
        }
    }
}

The provided proof-of-concept demonstration also demonstrates how to setup regular text to be strongly typed. This is simply done by splitting the text by a space and putting each word into a span, then putting each span into a parent object and finally putting that parent object before the original text and deleting the original text. You can view all this happening and see the resulting structure using Firebug for Mozilla Firefox.

The proof-of-concept demonstration provided is for Mozilla Firefox only. Internet Explorer does NOT have the ability to do this.

Important Disclaimer: it's not my intention to teach anyone JavaScript. This information is not for someone to simply copy/paste into their own applications. A solid understanding of the DOM, JavaScript syntax and dynamics, and XHTML is required before any true application can be built. So, please do not ask me any "how do I..." questions. When I get those I merely explain the basics and tell the person to figure it out on their own. In other words, I won't do your homework for you.

Links

CSS Architecture Overview

As a technology architect who has an emphasis in web architecture, one thing I find that most ASP.NET developers completely misunderstand is the concept of CSS architecture.  It's incredibly important to learn that CSS is not simply "styling". This is one of the most common misconceptions about CSS in existence. CSS is more than the silly “style” attribute on XHTML elements. If that were all it was, none of us should ever rely on CSS, as the “style” attribute does little more than dramatically increase the coupling between page structure and style.

Fortunately, however, CSS is more than simply "styling". In fact, CSS is an acronym that stands for Cascading Style Sheets. It's not just some nice marketing acronym used to make the technology sound cool, but rather a very well thought out name that explains CSS very well. When most people come to CSS, they look only at the middle letter ("style") and ignore the other two letters completely. You need to understand each of the letters in order to fully grasp the architectural power of CSS.

The third letter of CSS stands for "sheet". This means that CSS is a technology designed to be placed by itself in its own file.  By keeping your CSS rules away from your XHTML structure you are maximizing the potential of your system by removing all coupling and maximizing cohesiveness. That is, your XHTML does what it does best by representing the page structure, leaving CSS to do its job by focusing on the visual elements. There is no sharing of responsibilities and the two aren't directly tied to each other, thereby allowing reuse of your CSS rules.

There are secondary reasons for doing this as well. One of them deals with manageability.  Instead of being forced to change each "style" attribute on each element across multiple files manually, you now have a centralized place for your style.  Saying "it's just one 'style' attribute, what's it going to hurt?" is a sign of laziness and unprofessionalism that leads only to more people saying the same thing, in turn leading to a complete nightmare of spaghetti code. Imagine if people did that with C# and ASP.NET: "what's the big deal? It's only one script block." You can destroy anything by cutting corners.

Another secondary advantage of keeping your CSS in "Style Sheets" (CSS sheets) is that it keeps your client download time to a minimum. If you keep your CSS away from your ASP.NET/XHTML structure, your client web browsers can cache the XHTML document and CSS sheet separately.  When the ASP.NET/XHTML page changes, the web browser doesn't need to get all the CSS information again. In the same way, if you need to change the "style" of something, you can do so in the CSS sheet without affecting anything else. If you kept your CSS either in the "style" attribute or as a sloppy blob of CSS rules in a <style/> element, then even the slightest color change is a modification of the ASP.NET/XHTML structure leading to a recompilation.  You may have killed your page cache as well.

The cascading nature of CSS, the “C” in CSS, is also at the heart of CSS architecture. If you manage your CSS sheets correctly, then you should have a system where you can literally change one file and have it reflected across potentially thousands of pages. Your changes will naturally flow from XHTML page to XHTML page, from CSS to CSS, and from parent elements to child elements with well thought out, virtually object-oriented element style overriding. Try that with HTML or with CSS coupled to your XHTML elements or pages!

By working with CSS as true CSS, not simply as a nice technology to change font sizes and colors, we are able to take the potential of CSS to its logical conclusion: CSS themes. If you create a set of CSS sheets that specify the images, layout, colors, and various other style-related aspects of a specific look and feel, then you are in the perfect place to allow theming of your web site.  If the term "theme" doesn't appeal to you, what about the term "branding"?  A few years ago I was involved in a project for a major fast food chain that had about 10 different brands.  They basically wanted 10 different store fronts, which made the resident e-commerce developer just about flip out.  However, using a properly designed CSS architecture, I was able to provide a simple, yet powerful branding model.  First, I defined the XHTML structure based on their semantic representation.  Second, I applied a colorless and image-less CSS page which gave each website its element positioning.  Third, I gave each brand its own folder with its own root CSS page and its own images and other media files.  All the developer had to do was look at the URL path to obtain the brand name and then change a single CSS page between brands.  The developer was actually able to sleep that night.

As you can see from this story, CSS themes can be very powerful and save a lot of time.  You could even go more complex in your CSS themes.  For example, you could use a CSS coordinator.  That is, you attach a single CSS sheet to your XHTML page and have that single CSS sheet contain a series of “@import” statements.  You could then actually change the look and feel of your entire website without ever touching the ASP.NET/XHTML structure and thereby never force a recompile anywhere.  I often use this technique to coordinate a CSS page for screen size (800.css, 1024.css, 1280.css) with a theme (plain.css, green.css, blue.css), with a core feature set (report.css, membership.css, landing.css-- which would be tiny, so your landing page is very quick to load).  This technique should look familiar to anyone deep into object-oriented design as it's similar to some of the GoF structural design patterns. Having said that, this is not always a required technique and sometimes it can lead to caching problems itself. For example, how will the page know to get the new CSS sheet? Sometimes it's OK to invalidate cache or to force a recompile.

Another reason for proper CSS design that is continually growing in importance is media-specific CSS. Say you created a screen for management that allows simple report creation. You put a set of ASP.NET controls next to the report to allow management to refine the report data. How can you allow management to print the report without printing the ASP.NET controls? How can you turn off the colors of the report to maximize print quality on a black laser printer? How can you remove the application header from the page so that the report prints professionally, or make sure the document prints on one sheet? All these things are possible with media-specific CSS. By having one structure stating the content of the document you have a base level entity that can then be transformed to various output media. Media-specific CSS works for more than just printers. If you thought ahead in your web architecture and therefore created a proper semantic representation of your XHTML elements and separated your CSS from your XHTML, you can easily add light-weight mobile support with very little effort. When a web browser views your page it will display in all its glory, when a printer accesses the page the buttons and header will be gone, and when a mobile device views the page the sidebar is now hidden, the fonts are now smaller, text shows in place of images, and the menu is now abridged.

You can easily add media specific CSS to your documents by placing more <link /> elements in your XHTML head element.

For example, the following will set a general CSS page, a print theme, and a mobile theme.

<link href="Style/ClassicTheme/default.css" rel="stylesheet" type="text/css" />
<link href="Style/ProfessionalPrintTheme/default.css" media="print" rel="stylesheet" type="text/css" />
<link href="Style/SimpleMobileTheme/default.css" media="handheld" rel="stylesheet" type="text/css" />

One thing you may want to keep in mind about this is that the iPhone is not considered a “handheld” device. To add a media specific CSS page to the iPhone, please see this article.

It's hard to overestimate the importance of proper CSS architecture, yet I find that most web developers have never even heard of it.  I suspect that it's because it's a fusion of diverse topics with which most web developers aren't familiar.  For example, many ASP.NET developers I work with know software architectural principles, but can't even spell CSS.  Other times, I'll see awesome web designers who have no idea what software architecture is.

Reflecting Graph Silverlight Demo

In one of my Ajax applications I wanted a REALLY cool interface for some of my data modeling. I was a bit stuck until I heard that Microsoft published a beta of Silverlight 1.0 (an actually working one, too!) This of course opened up an entirely new planet of design possibilities.

The design that I came up with was a tree model where each element was a node in a mesh-like structure, where the user can move the elements as well as the entire structure in one smooth motion, and where the user can zoom in and out with their mouse scroller... something that was incredibly simple to build in Silverlight.

To demonstrate what I built, I would like to use a toned-down version of the same thing to show a simple type reflector with a Silverlight interface. In this demo, simply select an assembly and select a type. The type will show up as a graph with branches for reflected members, organized by methods, properties, and events.

The mechanics of this are actually really simple. First, define a few canvases so that an inner canvas can scale and translate to give the effect of a zoom and canvas movement. Second, have a method that accepts a JSON structure containing the model for your tree. In the iteration, plot the ball element and the text, as well as a line that goes from the element to its parent. The trick in this step is to get the math right to know where exactly to plot the element. I'll discuss this in a bit. Lastly, watch for mousedown, mouseup, and mousemove events on the elements and on the canvas, as well as mouse scrolling events on the canvas.

Placing the elements on the screen is really just basic geometry. We know that the first element can go anywhere and that its sole purpose is to set a positioning basis for all other elements. Plotting the children is just about as easy. For perfectly symmetrical elements, you just have to plot them at equal angles from each other around a full 360 degrees. For mathematical details on this, see the essential mathematics page below.
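The demo itself is JavaScript, but the symmetric placement math is language-neutral, so here is a minimal C# sketch of it (the type and member names are mine, for illustration only):

using System;

public struct Point
{
    public double X;
    public double Y;
}

public static class GraphLayout
{
    // Plot n children at equal angles around a full circle (2 * pi radians),
    // each at the given radius from its parent.
    public static Point[] PlaceChildren(Point parent, double radius, int childCount)
    {
        Point[] children = new Point[childCount];
        for (int i = 0; i < childCount; i++)
        {
            double angle = (2 * Math.PI / childCount) * i;
            children[i].X = parent.X + radius * Math.Cos(angle);
            children[i].Y = parent.Y + radius * Math.Sin(angle);
        }
        return children;
    }
}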

Having symmetrical elements is all well and good, but it's ugly and nasty. [See image 1] So, to take it a bit further, I played with the angle a bit. If the element was the first child, I just plotted it normally via symmetry. If not, I made the angle an increment of 45 degrees. This just keeps the children the same distance from each other regardless of how many there are. [See image 2] After this, I scaled down the separation of the elements by how many elements there were. [See image 3]

Next, I wanted to make sure the children were pointing in the same general direction their parents pointed in. So I subtracted the angle of the parent (minus 90) to make each group go in the same general area. Remember that all the elements are grouped together, so now we are moving groups of elements, not just a single element. [See image 4]

At this point it doesn't look too bad, but the child groups don't really line up with their parents. For them to line up I needed to add 45 degrees plus half the angular spread of the entire child set (another 45 degrees scaled by how many children there are). [See image 5]

Now we have something that looks like it could grow nicely... however, it would be nice if it didn't look like a robot built it. To give it a more organic feel I added a random angle to every element. No, it's not the most sophisticated method for creating a more natural look, but it works rather nicely. [See image 6]

As far as the events are concerned: if you get a mousedown on an element, you are in dragging mode, and when you get a mousemove, adjust the element, text, and line accordingly. Silverlight will automatically redraw the line based on the new X, Y endpoint of the line. If you get a mouseup, then you are no longer in dragging mode and mousemove doesn't do anything. If you get a mousedown on the canvas, then you simply need to adjust the translation of the canvas. If you get a mouse scroll, simply adjust the scaling of the canvas. It's extremely important to take the scaling into account when you work with the moveable elements, as the coordinate system completely shifts. In the provided demo, I added a few adjustments for this, but it could be much better in a live version.

As a closing note, this demo can also act as an example of an asymmetrical-schema Ajax service. The service I wrote here accepts an XML message, but responds with JSON data. It's important to remember that the focus of communication is on receiving, not on transmission. If a person from France and a person from Germany want to have a conversation and each CAN speak the other's language, but understands their own flawlessly, the one from France may speak German so as to maximize communication, and the person from Germany may speak French to the same end. So in this example, JavaScript speaks XML to .NET, which can easily deserialize it to a usable object, and .NET speaks JSON back to JavaScript for easy direct use.
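Here's a minimal sketch of what such an asymmetric handler can look like in ASP.NET (the TypeRequest shape and the JSON output are my own illustration, not the demo's actual service):

using System;
using System.Web;
using System.Xml.Serialization;

// Illustrative request shape; the real demo's schema differs.
public class TypeRequest
{
    public String AssemblyName;
    public String TypeName;
}

public class AsymmetricHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Receive: deserialize the incoming XML into a strongly typed object.
        XmlSerializer serializer = new XmlSerializer(typeof(TypeRequest));
        TypeRequest request = (TypeRequest)serializer.Deserialize(context.Request.InputStream);

        // Respond: hand-build a small JSON payload for direct use in JavaScript
        // (no escaping here; a real service would encode the values).
        context.Response.ContentType = "application/json";
        context.Response.Write("{ \"typeName\": \"" + request.TypeName + "\" }");
    }
}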

Related Materials

Reflecting Graph Video Demo

For those of you who have not installed Silverlight, below is a link to a video of the demo. As you will see, it works perfectly in Firefox and IE.

Links

XmlHttp Service Interop - Part 3 (XML Serialization)

In this third part of the series, I discuss how to manually create an XML service and a SOAP service using XML serialization to maximize the flexibility in your service interop work. The technique allows you to very quickly accept just about any XML format from the wire and work with it in a strongly typed manner in your .NET applications.

Moonlight: Mono's Silverlight

Here's something rather interesting... the developers on the Mono team "hacked" Silverlight and it only took them 21 days to do it. They are calling their own product "Moonlight", which I have to admit is a much cooler product name than Silverlight.

Project details as well as some internal information, including a bit about Silverlight internals, are at the link below. Even if you care nothing about Mono or Moonlight, if you have any interest in Silverlight at all you should check it out simply for that.

This is actually really cool as it should seriously help promote Silverlight as a de facto standard worldwide, but I doubt anyone will be making a documentary called "21 Days" any time soon.

Silverlight's Adoption as Public De-Facto Standard

Recently there have been comments floating around the internet and around conferences that Microsoft's Silverlight needlessly uses XAML as its markup language where it should have used SVG (Scalable Vector Graphics). The argument is based on the idea that since SVG is a vector technology accepted in all web browsers except IE, Microsoft should have used it instead of XAML and then simply added support for SVG to IE. While this seems to some in the web standards world to be a valid criticism and a good point, it is absolutely groundless and carries no weight.

Silverlight can be viewed as a web extension of the Windows Presentation Foundation (WPF), a .NET 3.0 technology, and not simply as a new web technology. As such, it makes sense that Silverlight uses XAML, not SVG. If Silverlight were based on SVG, then there would be a chasm between Silverlight and the .NET Framework, but as it stands Silverlight's use of XAML makes it part of the .NET family. In fact, it’s important to note that elements in XAML usually represent objects in the .NET Framework; this would simply not be possible in SVG. Therefore, by choosing XAML over SVG, Microsoft kept SVG pure by not adding proprietary technology to it.

Furthermore, SVG is "Scalable Vector Graphics" and, as the name suggests, it is for vector graphics. If Microsoft were to start out with SVG as their base technology they would be quick to add UI controls to it, thereby altering it into a Microsoft dialect of SVG, which would not be SVG at all. This would take the entire web world back to the same disasters that were seen in proprietary HTML elements, proprietary CSS selectors and rules, and proprietary JavaScript dynamics (i.e. "DHTML behaviors"). Aside from the architects and developers on the Internet Explorer team, the world, including Microsoft as a whole, thankfully has a lot more respect for specifications and standards than this.

Standards advocates should be very satisfied with Microsoft's decision to start from scratch on this one. Standards, as they are normally understood, deal with standardization and thereby allow everyone to talk about the same technology without proprietary terminology or technology. If Microsoft sets an explicit syntax on a well-defined language, publicly states the specification for the language, and makes it a multi-platform language, then the world has basically the same result that a standard would have (this point will come up again in a moment). While it's true that "popular" doesn't make things right (something most people learn early in high school) and that statements about "most people" do not mean a single thing, having a well-defined technology set with a set specification does make it a first-class technology.

We can see this in the ECMA-334 (C#) specification. Microsoft could have taken JavaScript, forced it to obey the Common Language Specification (CLS), and pawned it off as the primary .NET language, but they instead chose to create something entirely new and then publicly released the specification for it. So when .NET guides are written, the samples usually have code for C# and VB instead of JScript and VB. Those who criticize Microsoft for not using SVG as their client-side technology probably applaud Microsoft for creating a new programming language.

Lastly, it should be noted that one of the primary purposes of using the web as a platform is to make something more accessible. People should be able to use any sufficiently advanced web browser to access a web page or web application anywhere on the entire Internet. Truly, even if an application is created in a corporate environment, if the technologies used are proprietary to any one specific web browser, the primary purpose of using a web browser is defeated and a smart client would probably give a much richer experience and result. So, while it is true that the use of proprietary technology, such as IE-specific content, defeats the entire purpose of using the web as a platform, if a technology is spread widely enough, is on enough platforms, and allows integration into the current environment (i.e. Firefox, Safari, Opera), it doesn't need to be a standard because it achieves the same result as a standard. Flash is probably the best example of this.

In conclusion, there should not be much fuss about Microsoft's new "Flash-killer" using what some would view as private corporate technology. As long as there is support for Silverlight in Firefox, Opera, and Safari on Windows, Mac OS X, and Linux, and a seamless installer experience, there shouldn't be any problem with a rapid worldwide adoption of this new technology. Hopefully Silverlight's adoption on the web as a de facto standard will silence many forms of criticism about it and help prompt developers to do a better job of creating intranet and Internet solutions for multiple platforms and multiple web browsers.

The ECMA-334 Standard

Let me state publicly for the record: I do not approve of books for learning C#. The only guide you will ever need is the ECMA-334 standard. It's a readable guide to the entire C# language that covers every facet of the language. This document is the definitive guide for C#. In fact, there is no document on MSDN that comes even close to covering C# as comprehensively as the ECMA-334 standard does, as the ECMA-334 standard *is* C#.

Why do I mention this at all? Because, from time to time, I get people asking me what book he or she should *buy* to learn C#, and I always tell them what I've just stated above. Unlike Visual Basic, C# is NOT Microsoft's baby. They may have been the ones who gave birth to C# and continue to feed it and nurture it, but it's a standard -- not Microsoft's baby.

Lastly, while you are getting the ECMA-334 standard, check out the ECMA-335 standard. This one is the standard for the CLI (Common Language Infrastructure -- which Microsoft implements as the CLR, or Common Language Runtime). It goes into detail about the IL (Intermediate Language), the Common Language Specification (CLS), and the Common Type System, and talks in some detail about various .NET concepts. I would consider this standard to be more advanced than what most people need at first, so I don't recommend it for initial learning (for that, get Duffy's Professional .NET Framework 2.0 [0764571354] and then Richter's CLR via C# [0735621632]).

As a footnote, you could also check out the ECMA-262 standard (ECMAScript, that is, JavaScript), but it's not nearly as well written as the ECMA-334 and ECMA-335 standards, and because of that I've never recommended reading it to anyone.

Extremely Misunderstood Software

Today I was thinking about some of the misunderstandings going around about software. Actually, my life revolves around misunderstandings in every area of my life (not just my life in technology!) and I'm usually thinking about it in some respect. I can recall back in early 2004 when I was the black sheep in both the Microsoft AND Mozilla communities for promoting .NET *and* web standards. The Mozilla-ish (open-source) guys would mock the severe inefficiencies and painful security flaws of .NET (though they never ACTUALLY used .NET!) and the Microsoft-ish (.NET) guys would call me personally unrepeatable names for even suggesting that web developers should learn the foundational principles of JavaScript, CSS, and XHTML before they start asking a load of "how do I..." questions. Now, while the open-source community has wised up a bit, I still get extremely vile insults from the git-r-done areas of the .NET community where even mentioning proper development or training will get you killed. In any case, this is my life and, like I say... not just in technology. Basically, everyone hates me :)

So, here's my concise (yeah, like I'm capable of that) list of software that is EXTREMELY misunderstood. I'm not going to go into any detail except to put a single line by each item which should give you a hint as to what the software really is. This is important: if you understand one of the confusions, that confusion is analogous to each of the others. For example, if you know how insane it is to compare SQL Server 2005 to MySQL then, guess what... you now understand why it's insane to compare Firefox to IE or LLBLGen Pro to .netTiers or NHibernate. You just can't do it! Yet people all day long give me unjustifiable grief about this stuff, and I am forced to spend literally 70% of my time in marketing instead of programming!

I'll shut up now... here's the list:

List of Extremely Misunderstood Software

Firefox 2.0 Bookmark Auto-Backup

The other day someone called me up telling me their world had come to an end because they came home to find that someone had accidentally deleted all their Firefox bookmarks. The situation was rather serious to them, which is understandable as no one likes to lose anything, so I went over to their place to see what I could do for them. Sure enough the bookmarks were gone, but...

...much to my delight (and theirs), I found that Firefox 2.0 keeps automatic backups of your bookmarks for the past 5 days. I found all their bookmarks in the following folder:

%appdata%\Mozilla\Firefox\Profiles\<random stuff>.default\bookmarkbackups

So, I was able to import their bookmarks from a backup and BAM... happiness. That's just a down-right awesome feature that MANY more applications should have. Firefox 3.0 has a new bookmark system called Places which uses SQLite (an open source relational database system) to manage its bookmarks, and it should be interesting to see what techniques they use in their final release.
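If you'd rather script the recovery than hunt through the import dialog, something like this quick C# sketch could locate the newest backup (assuming, as on the machine I fixed, the backups are date-stamped .html files; the profile folder name varies per install, so the path below is a placeholder):

using System;
using System.IO;

public class BookmarkRescue
{
    public static void Main()
    {
        // The "<random stuff>.default" profile folder name differs on every install.
        String backupDir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            @"Mozilla\Firefox\Profiles\<random stuff>.default\bookmarkbackups");

        // The date-stamped file names sort chronologically, so after sorting,
        // the last entry is the newest backup.
        String[] backups = Directory.GetFiles(backupDir, "bookmarks-*.html");
        Array.Sort(backups);
        Console.WriteLine("Newest backup: " + backups[backups.Length - 1]);
    }
}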

HnD Customer Support and Forum System

If you are curious about LLBLGen Pro, or have been using it for a while and want to further your skills with it, then you simply must check out Frans Bouma's HnD (Help and Discuss), an ASP.NET-based customer support and forum system.  It was actually released back in December 2006, but I only just now got around to checking it out and I have to say it's really, really nice.  But what else would you expect from the person who created the world's most powerful database abstraction system (LLBLGen Pro)?

You can actually see an example of HnD by going to LLBLGen Pro's support forum.  I've been using their support forum for a while now and I can seriously tell a difference in usability now that it's running on their own system.  One feature of HnD I definitely want to mention is something that many forums don't have, but all need: when replying, it's critical to see the message you are replying to as you are typing (this is one of the reasons I switched to Gmail from Yahoo!)  Frans thought of this and HnD has that ability.  HnD even allows for attachments and has an attachment approval system for moderators, which is really nice.  The feature list goes on and on.

Not only is the end product nice and easy to use, it's released with full source code (just as LLBLGen Pro is when you buy it).  However, unlike the commercial product LLBLGen Pro, HnD is released under the GPLv2 license, so... we can have all kinds of fun messing with it.  From my perspective, this is one of the greatest things about it and is exactly why I released Minima (a minimalistic ASP.NET 2.0 blog engine built using LLBLGen Pro).  Simple, to the point, source is provided, and the source is actually easy to navigate.

The solution is split into an SD.HnD.Utility project which contains very base-level functionality (much like Minima's General project), an SD.HnD.DAL project which contains the LLBLGen Pro DAL (much like Minima's MinimaDAL project), an SD.HnD.BL project which contains "business logic" for HnD (much like Minima's MinimaLibrary project), and finally the web site itself.

This is an incredible project for anyone who wants to strengthen their LLBLGen Pro skills.  I can tell you that it has personally already helped me with my own LLBLGen Pro skills.  So, whether you want a really nice ASP.NET-based forum system, want to learn more about ASP.NET data binding, want to learn LLBLGen Pro for the first time, or just want to enhance your LLBLGen Pro skills, you should seriously consider nabbing the source code for HnD.

As a postscript, if you are unfamiliar with Frans Bouma's work, then you should check out his blog at the link below.  His work is always great and he definitely deserves his MVP many times over.

Related Links

Minima and Data Feed Framework Renamed and Explained

As of today I'm renaming any of my CTP releases to simply... "releases". That is, my Minima February 2007 CTP is now "Minima - February 2007 Release" and my Data Feed Framework February 2007 CTP is now "Data Feed Framework - February 2007 Release".

The motivation behind these is different for each. With regard to Minima, I knew it wouldn't be a long term or real production project, so announcing it as a CTP was a mistake on my part. Not a big deal. Lesson learned. Furthermore, I knew from the start that it would be more of a training tool than anything else. With regard to my Data Feed Framework (DFF), after using it in various areas I realized that my initial release was sufficient for most scenarios.

As a reminder... what is Minima? Minima is an ASP.NET 2.0 blog engine built using a SQL Server 2005 database and an LLBLGen Pro 2.0 DAL that provides the base functionality that most technical bloggers would need. Since its initial release I've added some functionality to my own instance of Minima and have used the February 2007 release as a training tool numerous times. Moving forward I want to make it very clear that Minima is primarily a training tool and, as such, it's a blog template that people learning ASP.NET can enhance and upgrade to aid in their own personal learning. Having said that, Minima is a full-fledged blog engine and it does have features such as labels and the ability to have more than one URL point to the same entry. In any case, if you want something to help you learn the various components of ASP.NET, please feel free to take Minima and use it as you please (see attribution/licensing note below).

By using Minima as a training tool you can learn much about base ASP.NET technology as well as manual Ajax principles, CSS theming, HttpWebRequest, proper use of global.asax, framework guidelines, and type organization. Furthermore you can use it to learn WCF, the power of HttpHandlers, and how to effectively utilize LLBLGen Pro. I will try to release versions of Minima to demonstrate the new technologies of the day. For example, when ASP.NET Ajax matures a bit (I find it slower than a dead turtle right now), I'll be adding portions to demonstrate ASP.NET Ajax. However, I will not be adding new functionality for the sake of functionality. If the functionality can be used as a training tool, then I will add it. Also, Minima is a great way of learning WPF. How so? I deliberately did NOT include a client! Why? Because I would rather you use whatever you want to use to create a simple form to access the API via WCF. The client I use is a very basic WPF client that calls the Minima WCF service (a minimal sketch of such a client follows below). So far, Minima has been a very effective learning tool and I hope you will find it useful as well.
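For example, a bare-bones client could be as small as the following sketch (IBlogService and AddEntry are hypothetical stand-ins here, not Minima's actual service contract; the endpoint address is a placeholder):

using System;
using System.ServiceModel;

// Hypothetical contract for illustration only.
[ServiceContract]
public interface IBlogService
{
    [OperationContract]
    void AddEntry(String title, String body);
}

public class BlogClient
{
    public static void Main()
    {
        // Point a channel at the service endpoint and call it like a local object.
        ChannelFactory<IBlogService> factory = new ChannelFactory<IBlogService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://www.example.com/BlogService.svc"));
        IBlogService service = factory.CreateChannel();
        service.AddEntry("Hello", "My first entry posted through the service.");
    }
}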

As for my Data Feed Framework (DFF): what is it? It's a self-contained framework that converts SQL statements into RSS feeds. I've used this in a number of places where creating a manual RSS feed and MANAGING the RSS feeds would just be too time consuming. For example, say you have an ASP.NET 2.0 e-commerce website and you have new products released at various intervals. Well, it would be AWESOME if you had an RSS feed to announce new products and sales without having to send out an ethically questionable e-mail blast. With DFF, you simply write something like "select Title=ProductName, Description=ProductDescription from Product where ProductDate > '7/11/07' order by ProductDate desc" and BAM, you have an RSS feed. Since an RSS feed is simply a select statement in a column in a row in a SQL Server table, you could also use it to dynamically create a custom feed for each person who wants to monitor the price of a certain product. It's very flexible. RSS feeds are accessible via their name or their ID, or you can use a "secret feed" to force a feed to be accessible via GUID only. DFF also includes some templating abilities to help customize the output of the RSS feed. In addition to the DFF SQL-to-RSS engine, DFF also includes an ASP.NET 2.0 control called an InfoBlock that allows you to consume any RSS feed and display it as an XHTML list. You can see an example of how to use an InfoBlock by looking at my blog. The boxes on the right are InfoBlocks which allow me to manage my lists using a SQL Server table (the DFF database contains a Snippet and a SnippetGroup table to store autonomous information like the information in these lists--please see the documentation for more information). DFF creates secret RSS feeds that my own personal version of Minima then consumes. With this as an example, it should be easy to see how DFF can be used in portals. My DFF demonstration video shows a bit more of that.
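To make the SQL-to-RSS idea concrete, here's a minimal sketch of the concept in C# (this is my illustration, not DFF's actual code; the connection string and the XmlWriter are supplied by the caller, and the select statement is expected to alias its columns to Title and Description as shown above):

using System;
using System.Data.SqlClient;
using System.Xml;

public static class FeedSketch
{
    public static void WriteFeed(String connectionString, String feedSql, XmlWriter writer)
    {
        writer.WriteStartElement("rss");
        writer.WriteAttributeString("version", "2.0");
        writer.WriteStartElement("channel");

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(feedSql, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Each row the select statement returns becomes one <item>.
                while (reader.Read())
                {
                    writer.WriteStartElement("item");
                    writer.WriteElementString("title", reader["Title"].ToString());
                    writer.WriteElementString("description", reader["Description"].ToString());
                    writer.WriteEndElement();
                }
            }
        }

        writer.WriteEndElement(); // channel
        writer.WriteEndElement(); // rss
    }
}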

For more information regarding my Data Feed Framework (DFF), please skim the concise documentation for Data Feed Framework linked below. It would also probably be a good idea for you to watch my short video documentation for DFF as well. Please note that even though DFF is designed to be a production framework, it too can be used as a training tool. The most obvious thing you can learn is how to create data-bound server controls for ASP.NET 2.0, as this is exactly what an InfoBlock is (see the sketch below).
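If you've never built a data-bound control, the core pattern is smaller than you might think. Here's a minimal sketch (again, my illustration of the pattern, not the actual InfoBlock source):

using System;
using System.Collections;
using System.Web.UI;
using System.Web.UI.WebControls;

// Renders whatever is in its data source as an XHTML unordered list.
public class SimpleList : WebControl
{
    private IEnumerable dataSource;

    public IEnumerable DataSource
    {
        get { return dataSource; }
        set { dataSource = value; }
    }

    protected override void RenderContents(HtmlTextWriter writer)
    {
        if (dataSource == null) return;
        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (Object item in dataSource)
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Li);
            writer.WriteEncodedText(item.ToString());
            writer.RenderEndTag();
        }
        writer.RenderEndTag();
    }
}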

You may use either the SQL->RSS engine or the InfoBlock portion or both. It's up to you. Also, as with all the .NET technologies that I create, the source and database files are included for extensibility and so you may use these as training tools (for yourself or for others). Lastly, for both Minima and the Data Feed Framework, please remember to keep the license information intact and make it very clear that your work either uses or is based on whichever product you are using.

Minima - Links

Data Feed Framework - Links

Minima, DFF, and SolutionTemplate/E-Book now in Subversion

I've recently realized how lame it is to have to download a new ZIP file each time a new version of a framework is released or when a project has some changes. So, I'm moving my projects to Subversion instead of making everyone download my ZIP files. This should help me update the projects more often and allow everyone else to get my projects more easily. Please note that this replaces all RAR/ZIP files I've previously released.

Currently, I have the following projects in Subversion:

  • Minima
  • Data Feed Framework
  • NetFXHarmonics .NET SolutionTemplate With E-Book

You can access these projects with any Subversion client, though you would probably want to use TortoiseSVN for Windows development. You can access the projects at the following SVN HTTP addresses. You are free to also use these SVN HTTP locations to browse through the code in your web browser. My primary emphasis is in .NET training and education, so I do hope this helps. Also, given that my SolutionTemplate is also my e-book, you can easily look at the files there and read them online without having to download the project.

Note the /trunk/ path at the end. There are currently no projects in the tags section of the Subversion repository and, honestly, I'm still planning what to do with that section. The branches section is currently set to not allow anonymous access.

By the way, if you're unfamiliar with it, Subversion is an incredibly powerful and remarkably easy-to-use revision control system that allows code repositories (that is, code stores) to be kept in a centralized location to allow access from diverse locations. Subversion also does automatic versioning of your commits (that is, saves) to the code repository. Not just versioning as in a magic number change, but also as in it saves all versions of your files so you can go back and see your changes over time.

Subversion is used for many different reasons and some of them have nothing to do with code. For example, I've been using something similar to it (CVS) for a rather long time now to store all my documents so I can keep everything in a centralized location and so I can see the progress of my work. One use that I found rather interesting was a company using Subversion to store and provision new instances of their application. So, you can use it as a place to store your code, as a global file system, or as an application repository. Subversion stores whatever you want to store, including binary files. For more information on Subversion, see the online O'Reilly e-book "Version Control with Subversion" below.

Related Links

NetFXHarmonics SolutionTemplate/E-Book

Recently I started putting together my standard .NET solution template for public release. This template contains the base architecture and functionality that all my projects need. This template is also organized in a way that clearly separates each type of element into its own section to maximize expansion and ease of development.

In order to make sure that the solution is understandable by many different types of developers, there are commentaries in each file in the solution. In addition to this, many of the files have full chapter-length lessons on a development topic contained in that file. For example, in the Code/SampleDomManipulator.js file, I wrote a rather extensive introduction to JavaScript browser dynamics through DOM manipulation. Because of these lessons, this solution template is also a living .NET e-book.

Here is a list of some of the topics I've written about in this first release of my solution template; some of them are simple explanations and others are lengthy, detailed lessons.

  • HttpHandler Creation
  • HttpModule Creation
  • HttpHandlerFactory Creation

  • Custom Config Section Creation
  • .NET Tracing

  • MasterPages Concepts
  • Global.asax Usage

  • CSS Theming, Management, and Media-Specific Sheets

  • JavaScript Namespaces
  • JavaScript File Consolidation
  • Firefox Console Usage
  • JavaScript Anonymous Functions
  • JavaScript Multicast Event Handling
  • DOM Element Creation
  • DOM Element Manipulation
  • DOM Element Deletion
  • JavaScript Event Handling with Low Coupling

  • JavaScript GET/POST XmlHttp Service Interop
  • Manual XmlHttp Service Creation

  • JavaScript/CSS/ASP.NET/C# Code Separation
  • Highly Cohesive Type Organization

This solution template can be used as the basis for production projects, or as a training utility for people new to ASP.NET, JavaScript, DOM manipulation, or AJAX, or for people who just want to learn how to organize their projects more effectively.

As with all my projects, but much more so with this one, I will be updating the solution template over time to account for more AJAX techniques and .NET technologies. I will also continue to expand the commentaries and lessons to the point where this solution itself becomes a case study, a sample application, and a book all wrapped up in one.

Links

NetFXHarmonics Subversion Repository Update

Today I did two things that compel me to give a very short update.

First, I created a permanent and unchanging version of the Data Feed Framework in the tags/ section under February2007/. This release is the original February 2007 release and it's what you could call the "stable" release. All versions of the DFF will be stable in terms of technology, but this one is also stable in terms of documentation. You can be assured that when you download the Data Feed Framework from the trunk/ you will be getting a solid version, but perhaps the API changed and you want a specific version of that API. Well, when the API interface changes, that's a new version and I will tag that one as well. So, for now there is a tagged release called February2007/ which you may use in production at any time (as others are doing already). You may get this from the following location:

Second, I added an index to my SolutionTemplate/E-Book. This file is the LessonIndex.txt file in the project repository. This file lists only the full-length lessons, and as I write more full-length lessons I will be adding to this file (today, I added a lesson on HttpHandlers and another on HttpHandlerFactories and added them to the index).

Also, for your convenience, here is that index:

Working with Global.asax

SolutionWebsite: App_Code/SampleHttpApplication.cs

Creating ASP.NET HTTP Handlers

SolutionWebsite: App_Code/ServiceHttpHandler.cs

Creating ASP.NET HTTP Handler Factories

SolutionWebsite: App_Code/SampleHttpHandlerFactory.cs

DOM Manipulation

SolutionWebsite: Code/SampleDomManipulator.js

JavaScript Events and Anonymous Functions

SolutionWebsite: Code/Initialization.js

JavaScript Namespaces

SolutionWebsite: Code/SampleStructure.js

JavaScript Closures

SolutionWebsite: Code/SampleStructure.js

Firefox Console

SolutionWebsite: Lib/Debug.js

JavaScript Loosely-Coupled Multicast Events

SolutionWebsite: Lib/Events.js

.NET Tracing

SolutionWebsite: Services/SampleAsyncSchemaService.aspx.cs

Custom Config Sections

SolutionWebsite: /Includes.js.aspx

ASP.NET MasterPages

SolutionWebsite: /MasterPages.master.cs

Using XAG

SampleLibrary: Config/JavaScriptImport.cs

Client-Side Type Organization

Introduction.txt

CSS Architecture

Introduction.txt

Framework Design

Introduction.txt

You can access my SolutionTemplate/E-Book at the following Subversion repository path:

Reflections on Windows Mobile 6 and my T-Mobile Wing

The other day I got my new T-Mobile Wing. This phone is a Windows Mobile 6 device with a pullout keyboard and a whole world of cool features. Last year I had the T-Mobile MDA, which didn't last me a week. This phone fixes a ton of the common problems with Windows Mobile devices, but not only the ones you normally think of. For example, this phone has an almost rubbery texture to it so it doesn't fly across the car and get stuck under my passenger's seat every time I dodge someone trying to cut me off. I absolutely love this phone and it's the only phone in the past 4 years that has met my EXTREMELY high standards for technology. Before this I was on the Nokia 3650 for many, many, many years, and before that I would pre-order and overnight a new $500 cell phone each year directly from Taiwan. So, it takes a lot for me to switch to and keep a phone.

With regard to my service... in my world, a 1500-minute cell plan costs $40.00 with no concept of "US long distance" (i.e. everything in the US is local) and no concept of roaming (i.e. I can call from anywhere). Period. I have a million features and upgrades on my account, but even with my T-Mobile Hotspot wireless access to every Borders and Starbucks in the world, which gives me high speed wireless Internet for BOTH my cell phone AND my laptop, my bill is still under $80 *including taxes*. Of course I also have the ability to switch phones on a whim by flipping out my SIM card and putting it into my old phone. This comes in very handy. For example, I'm not bringing my new phone to the beach; I'll put my SIM card in one of my Symbian phones for the day. If you want to put up with calling someone to switch YOUR service to another one of YOUR phones, then be my guest. Furthermore, EDGE networking is faster than GPRS, and even though it's slower than 3G, I'm not an impatient child, and since I get WiFi access at every Borders and Starbucks (and apartment complex, via an unsecured Internet router set up by someone who can't read a step-by-step, large-print wireless router setup poster) in the world, I have no problems with speed.

Now... for those of you with Windows Mobile 6 devices, may I recommend that if you want to be as cool as the people with the iPhone, you get a few things. First, you seriously need to check out the Picsel web browser. IEMobile is actually an OK browser, Opera is also good, and Minimo (Mobile Firefox)... doesn't even pretend to work. Picsel and IEMobile together is the way to go for people like me who love FREE (as in root beer) much more than FREE (as in speech). Using Picsel you can do some of the cool stuff the iPhone people are doing, like showing the FULL web page on your screen and zooming in and out of it with your finger. I find it absolutely amazing for doing my daily bank account monitoring via my bank's website. It also helps tremendously with my NewEgg.com shopping. iPhone's Safari is a very nice mobile browser, so you won't have all of its features... but you have the essentials. Now, for things that have a really nice mobile interface... use IEMobile. There's no point in trying to view the entire Amazon.com on a screen of only 2-3 inches. That's rather intolerable. Viewing a full website newspaper on my phone is NOT something that appeals to me. The point is to get the result you want, not to have a fancy "full" webpage.

Google, Gmail, Google Calendar, Amazon.com, and my ESV Bible have amazing mobile interfaces. The Amazon.com one is particularly nice as you can even read book reviews. This came in VERY handy when I was at a Borders last night and wanted to compare local prices with Amazon's and also check out the reviews. So, I whipped out my mobile and looked at the ratings and reviews for each of the books I wanted. Gmail is also incredible as a mobile website. I should note that Windows Mobile 6 supports Gmail as a mail service and that is my primary way of getting mail. I get Gmail notifications on my phone just like any other mail system, but I still like Gmail mobile for browsing the mail already in my box. Google Calendar and my ESV Bible also fit their purposes very well. They are both very minimalistic, but that's how I like it. For websites like these, IEMobile is actually much nicer than Picsel. Needless features, fanciness, and ads don't really do it for me on my mobile device.

Also, how about that cool Google Maps feature for the iPhone? Guess what... it's a free download from Google for your Windows Mobile device. You can zoom in and out and navigate as needed just as you would do from your desktop web browser. Windows Mobile 6 also comes with Windows Live, which does something similar, though I find their interface less intuitive.

With those two utilities I felt my need to have an iPhone subside (and guess what... I have MMS and can install my own apps, including ones I've created using the .NET Framework!) However, there are two major things that make an iPhone an iPhone, and I'm NOT going to list "it has a real operating system on it" as one. As cool as that is, it's pointless (where's the shell access?) First, the multi-touch surface. That's just insanely awesome. I have so many designs in my head for things I can do with Microsoft Surface it's not even funny anymore. I *NEED* Microsoft Surface now so I can start building my apps! I can't wait until I can throw away this stupid keyboard and mouse paradigm and start using an interaction system that's actually intuitive (ohhhh how I hate the keyboard-- and ohhh how I wish they would STOP putting a CAPS lock key on there-- it's the most worthless and pointless key ever thought up). Alas, I don't have any multi-touch with my Windows Mobile 6 device. Second, this phone uses a MicroSD card and the largest card that's affordable by mere mortals is a 2GB card. What? 2GB? That's NOTHING. The iPhone has 8GB of space and that will probably be doubled very soon. On the bright side though... I can install applications, documents, and data sources (i.e. XML or SQL Server 2005 Compact Edition databases) into my 2GB, whereas the iPhone people can only download battery-draining music and videos into their 8GB.

As a footnote, let me mention some non-free applications that I'm using. The first is one of the most amazing applications I've ever seen: NewsGator Go. I don't really like their desktop application, and as a software designer of the Google school of design I much prefer the minimalistic Google Reader over the Yahoo-ish NewsGator online. However, their mobile application is just great. To set it up I exported my OPML from my Google Reader, imported it into the online NewsGator, and simply logged into NewsGator Go. It automatically downloaded my settings and updated my RSS feeds. Now THAT'S good software design! I should NEVER be forced to do ANY setup of ANYTHING on my mobile device... Gosh, I would LOVE to have a web interface on my desktop to access the Windows Mobile 6 settings.

The other application I use is one that I'm seriously growing addicted to: OneNote 2007. At this point, I'm talking about the desktop version. It's basically like a personal wiki that just stores ANYTHING without you EVER needing to save. AGAIN, this is called good software design (are you taking notes?) Granted, this thing is in absolute desperate need of an online version with magical auto-sync, and without that feature I'm going to continue to use my own personal wiki to store all my important notes and drafts (BTW, in case you didn't know: MediaWiki is FREE and runs on Linux AND Windows; Apache AND IIS). Still, it does auto-sync with OneNote Mobile on my Windows Mobile 6 device. However (and this is a HUGE however), OneNote Mobile is completely worthless to me because there is absolutely NO option for "send this note". I'm not sure what the people were thinking when they designed that application, but that's a fatal design flaw in the same category as Apple's forgetting to add MMS to the iPhone and the IE team forgetting that the W3C runs the web standards, not the IE team.

Lastly, I feel I should at least mention that Money 2006 for Windows Mobile is a free download from Microsoft's website. I'm a local, small-town banking person, as long as that local, small-town bank has a website where I can track my money. I don't really need a WS-Security service to access my information, though that would be nice, but I do need a way to track my daily expenses without using Excel. Money 2006 for Windows Mobile fills this need for me very well.

Silverlight 1.0 Release Candidate Finally Out

For those of you who don't know, Silverlight 1.0 Release Candidate has been released.  If you haven't tried Silverlight out yet, this would be a good time to do so.  Be warned though: you need to know the fundamentals of Modern JavaScript to work with it.

In an effort to help educate the community on these topics, if you're not familiar with Modern JavaScript (i.e. multicast events, closures, anonymous functions, Ajax, prototype orientation, namespaces, etc...) please send me an e-mail and I will advise you.  Having said that... in almost every case I'll probably tell you to go buy Pro JavaScript Techniques by John Resig.  Oh, and btw... being able to do validation or form processing in no way means you know JavaScript :)

Windows Live Writer RULES

Microsoft Windows Live Writer (Beta 2) is by far and away one of the coolest tools I've used in a long time.  Since I created Minima, I had been using my own extremely lame WPF app to do all my posting and it made posting a bore.  I've been meaning to put some time into making a more interesting WPF app, but instead Windows Live Writer saved the day.  With this thing I can post new entries, save drafts, set labels, as well as view and edit previous entries.

Having said all that, setting it up wasn't that easy.  Well, the setup was simple, but figuring out what to set up wasn't.  I kept thinking that there was some .NET interface you had to implement, because the documentation kept talking about its API and gave COM and .NET examples.  Well, as it turns out, all you have to do is implement a well-known blogging API and point WLW to it!  In my case, I chose the Metaweblog API.

Setting up this API was actually rather simple, though it took some experimentation as I'd never worked with the API before.  Also, this API uses XML-RPC calls and, at first, I figured I would have to write the XML listener and all XML messages manually.  It turns out that there's a nice API called XML-RPC.NET.  You set this up similar to how you set up a WCF service: via interfaces.

Here's the basic idea behind the XML-RPC.NET API:

[XmlRpcService(Name = "Minima API", AutoDocumentation = true)]
[XmlRpcUrl("http://www.netfxharmonics.com/xml-rpc/")]
public class XmlRpcApi : XmlRpcService
{
    [XmlRpcMethod("blogger.getUsersBlogs")]
    public BlogInfo[] GetUsersBlogs(String key, String username, String password) {
        // Stuff goes here: authenticate and return the list of blogs.
        return new BlogInfo[0];
    }
}

You just set two class-level attributes and then a method-level attribute on each method.  Then you expose this class as an HttpHandler; the XmlRpcService class this class inherits from actually implements the IHttpHandler interface, which is rather convenient.

How did I know what methods I had to implement?  Well, the Metaweblog API "specification" is NOT a real specification; it's just an article that only mentions parts of it.  Also, XML-RPC.NET doesn't seem to have any useful tracing abilities, so that was out.  After a while though, I just found someone else's web site that implements the Metaweblog API and looked at their API documentation (you can just look at the sample API below).  It turns out that using the Metaweblog API means you will be using parts of the Blogger API as well.  Interesting...

Being a minimalist though, I wasn't about to implement ALL functionality.  So I set up an ASPX page that took the Request.InputStream, pointed WLW at the page, and when WLW did a request I got an e-mail from my ASPX page.  When I saw that WLW was calling a specific function, I implemented that specific one.  Of course I also had to implement specific data structures as well.  Really though, all you have to do is use XML-RPC.NET to implement the functions it wants and give it the structures in the Metaweblog API (as you can see in the sample API below) and you're done.
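To make that concrete, here is roughly what one fleshed-out Metaweblog method can look like inside the XmlRpcApi class shown earlier (title, description, and dateCreated are the common Metaweblog post members; BlogStore is a hypothetical stand-in for your own persistence code):

public struct Post
{
    public String title;
    public String description;
    public DateTime dateCreated;
}

[XmlRpcMethod("metaWeblog.newPost")]
public String NewPost(String blogId, String username, String password, Post post, Boolean publish)
{
    // Validate the credentials, store the entry, and hand back its new id as a string.
    String entryId = BlogStore.Save(post.title, post.description, post.dateCreated, publish);
    return entryId;
}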

[As a side note, if you aren't familiar with what I mean by accessing the Request.InputStream stream, this stream contains the information that comes to the ASPX page in the POST portion of the HTTP request.  You will often access this when you are creating manual XML services (see my XmlHttp Interop article below for an example).  Here is an example of getting the input stream:

// Read the raw POST body into a buffer and decode it as UTF-8 text.
Byte[] buffer = new Byte[context.Request.InputStream.Length];
context.Request.InputStream.Read(buffer, 0, (Int32)context.Request.InputStream.Length);
String postData = Encoding.UTF8.GetString(buffer);

You could use something like this to view what information is being sent from WLW.]

In my debugging I found that WLW has a tremendous number of extremely weird bugs.  For example, one of the structures I needed to implement was a structure called "Post" (I'm using the term structure, but it's just XML over the wire and it's a class in my API-- not a struct).  However, WLW would give me errors if some of the fields were null and would give me a different error if they weren't null, and even then, it was only on some functions.  So I had to create two versions of "Post": one called "Post", which only had a few members, and the other called "FullPost", which had everything.  Strange.  Oh well... I've seen worse (ever use Internet Explorer?)

In the end though, WLW was talking seamlessly with my API.  I was really, really dreading making a better blog client as that felt like such a waste of time (and there was NO way I was going to use a web client-- WPF RULES!). Windows Live Writer (Beta 2) has already been a great help for me in the past week. Not just WLW itself though, but also some of the great plugins you can use with it. For example, in this write-up, I used a Visual Studio pasting plugin to allow me to copy from VS2005 and paste here to get fancy color syntax. Cool!

Related Links

Disabling IIS6 Socket Pooling

Yesterday I ran into one of the worst features of any program ever. In an attempt to set up Subversion on Apache on Windows, I found that Apache couldn't start on the specified IP address using port 80 even though NONE of my IIS6 websites were using this particular IP address. Well, after banging my head against the wall for about an hour, I realized that even though IIS was told to use SPECIFIC addresses for each website, it felt the need to take over ALL the IP addresses on my ENTIRE system!

After a bit of research I found that this "feature" is called Socket Pooling and has been driving people nuts for a while. So, in my attempt to research how to get around this "feature", I found numerous articles online including one on TechNet describing in PARTIAL detail how to fix the problem. Not a single one of the 10+ articles I read explained how to fix this problem correctly.

Here is how you fix this "feature":

Basically you need to tell IIS6 exactly what IP addresses it is allowed to use. By default it thinks it has the right to take over ALL your IP addresses. Not sure why someone thought that was a good idea, but OK...

To tell IIS6 exactly what IP addresses you want it to use, thereby freeing up all other IP addresses, you need the httpcfg.exe utility that comes in the Support Tools on your Windows Server 2003 disc (for R2, it's disc 1) at the following path:

\SUPPORT\TOOLS\SUPTOOLS.MSI

In my situation, my dedicated hosting service didn't feel that I would EVER need to use ANY of the utilities installed by SUPTOOLS.MSI, so I was stuck there. Furthermore, after wasting about a half hour searching my closets for a Windows Server 2003 trial disc, I realized that I needed a new plan. If you find yourself in a similar situation, you can install the Windows XP SP2 Support Tools on your local machine and upload httpcfg.exe to the server. No other files are needed. The Windows XP SP2 Support Tools are a free download from the Microsoft website.

After you have httpcfg.exe on the server, get to a command line and get httpcfg.exe in your path (I put the file in the Windows folder).

Now you need to decide what IP addresses you want IIS6 to use. This is where I kept getting confused. Because the default model of IIS6 is to greedily take over ALL IP addresses, I kept thinking that I wanted to give IIS6 an EXCLUSION list, and all the articles I read made it sound like that was the case. So I kept using httpcfg.exe to include an address in an exclusion list. Well, that's not how it works and the articles should have explicitly stated that. In reality, you tell IIS6 what specific IP addresses you want IIS6 to use and by doing so you naturally disable the "feature" of socket pooling. Moving on...

Use the following command to give IIS6 an IP address you would like for it to use (clearly 10.1.1.1 is just an example):

httpcfg set iplisten -i 10.1.1.1

Repeat that for EACH IP address you want IIS6 to use.

If you accidentally set an IP address that you didn't want to add, then you may use the following sample command:

httpcfg delete iplisten -i 10.1.1.1

If you want to see what addresses you have set, use this command:

httpcfg query iplisten

Then STOP the HTTP service:

net stop http /y

Then START the WWW service (w3svc):

net start w3svc

Now your IIS should be running and you should be able to start other web servers on different IP addresses as well. If you find that one of your websites will NOT start and shows up as "stopped" while the other websites started fine, then the IP address for that specific site was not added via httpcfg and you need to add it for that website to start. If that website is using an IP address that Apache or something else is using, you need to choose a different IP address. It's not that big of a deal; that's what DNS is for, and if you are using DNS instead of just handing out IP addresses, then you won't have a problem.

As a final note, if you are on IIS5 for some reason, then, well, you should upgrade, but say you can't for some reason: you have this same socket pooling "feature" too, but it's disabled differently. I haven't tried it myself on IIS5, but according to the documentation, you can disable all IIS5 socket pooling by opening a command prompt window, moving to the \Inetpub\AdminScripts folder, and running the following command:

cscript adsutil.vbs set w3svc/disablesocketpooling true

If you see the following response, then you are good. Just stop IIS, then restart IIS and the WWW service to bring everything back online.

disablesocketpooling : (BOOLEAN) True

This method will give you the same nice and happy message on IIS6 as well, but it's just a show intended to give you a false sense of accomplishment; it doesn't actually disable socket pooling at all. This method only works for IIS5. You need to use the httpcfg.exe method described above for IIS6.

Coders and Professional Programmers

I have to say it: there is a severe lack of skill in the technology world today, even among senior-level developers.  Perhaps my standards are just too high, but when I see a person working in technology, I kind of expect them to have the requisite skill set for the job.  It always amazes me when I see a developer, even a senior-level developer, who doesn't know the basics of their own system (i.e. an ASP.NET developer who doesn't understand CSS positioning or a .NET developer who doesn't understand the framework design guidelines).  They may have degrees and years of experience, but they have no actual skill.  It has always concerned me a bit that people have so little skill, but I wasn't able to put the problem into words until recently.  Well, that's not completely true.  An old friend of mine put it into words years ago, but it wasn't until this week that I was reminded of my own story.

Last week I was in a bookstore looking at a few design patterns books when a "1960s NASA engineer" type of guy in a white shirt and tie came up next to me (I was dressed in extremely casual blue jeans and a black shirt), looking at similar books.  I noticed he picked up a rather lame design patterns book, so I took it upon myself to hand him the GoF book, saying "This one is the classic".  He thanked me and started to flip through it.  A minute later I realized that I couldn't resist asking: "So, what exactly are you looking for?"  He mentioned he was looking for a book demonstrating a state machine.  I grabbed another book, flipped the pages a bit and handed it to him saying "page 486".  His response was "Wow, thanks... so, where did you go to school?"  I have to admit that I've gotten many questions in bookstores, but I've never been asked that one FIRST.  The way he asked the question wasn't so much out of sincere curiosity, but more out of arrogance.  It felt like he wanted to compare his degrees to my own.  I simply replied "School?  What?  You're looking at it...", pointing to the wall of computer books.  He immediately shut up and stepped away.  I'm not sure if that qualified me as a leper or if I just castrated the man's sense of academic accomplishment, but, regardless, that conversation was over.  However, a few moments later I found a book I was looking for and said "AHA! This will make them think!"  The same man curiously asked "What do you mean?"  I explained that I was looking for a book to recommend to developers to help them get a clue about software architecture.  The man gave a look of familiarity with my situation and said something I haven't thought about in years: "Oh yeah... I know exactly what you mean.  Some people just don't get the difference between a coder and a professional."  At that point I had a flashback to a time of my life almost 6 years earlier... back when I was a coder.  It was a lifetime ago and I had all but repressed it until recently.

The year was 2001 and I was a PHP programmer, Linux hacker and Math major at Kansas State University.  My idea of programming was looking at the PHP documentation, reading the PHP documentation comments and copy/pasting until my application was built.  I figured I knew what I was doing because I had already been in the field for 5 years; I was only now going back to school after a 3 year academic hiatus.  I would mix PHP, HTML, JavaScript, and tiny bits of CSS all in the same page in an absolutely unreadable pile of slop, "but it worked".  A friend of mine from high school, who by this time was a Microsoft employee, looked at what I was doing and mentioned to me in his usual blunt style: "Dude, that's not programming, that's slop coding.  You don't even have a data access layer, do you?"  Completely ignorant of the first clue of what he was talking about, I said "Well, sure I do, PHP has a connector to MySQL.  I'm not accessing MySQL directly; I'm using the PHP API."  Thinking back on it, I think he wanted to punch me in the face for that painfully ignorant comment.  Truth be told, I was not a professional programmer even though I had 5 years of crazy JavaScript development, 2 years of Ajax development, 2 years of ASP development, 2 years of SQL Server development, and 1 year of PHP development under my belt; I was only a coder.

After that, I spent a tremendous amount of time going back to the books and learning more about technology.  In high school I had some of the lowest grades in my classes because I spent my nights reading video game programming, graphics programming, C/C++ programming, computer history, networking theory, Novell NetWare, and web development books.  In college, the pattern repeated in the same familiar way.  I spent a ton of time studying all kinds of new technologies, one of which being XHTML/CSS design, all to the destruction of my meaningless academic career.  After finally breaking free from the tight bondage of academia I went to work for a few places as a PHP developer.  After one or two complete waste-of-time contractor positions I finally realized that it was time to cross over to .NET.  So, I spent an incredible amount of time studying .NET concepts, ASP.NET, the Framework, and C# technologies.  After a month I took and passed the 70-315 (ASP.NET) exam, and the 70-229 (SQL Design) exam a week later.  From there I went on to study the 70-320 (.NET components) and 70-228 (SQL Administration) material.  Around that same time I realized that Mozilla had finally released a product that didn't suck.  Not only did this product not suck, it was by far and away the most revolutionary piece of technology since the Internet itself: Mozilla Firefox 1.0.  With that I also started studying web standards much more deeply.  Being a former Linux hacker, I wasn't and still am not brainwashed by the Microsoft extremist cult, so I've always been open to having the best technology regardless of vendor.

At this same time I realized that I really, really wanted to be a professional programmer and not a coder in any way.  So, I said goodbye to Visual Studio and took back my EditPlus to do all my .NET work.  I was not about to rely on Intellisense to do my work for me, and I already knew the danger of learning a new technology by error messages.  For the next year I deliberately did all my development without color syntax highlighting and without Intellisense.  I've never had any respect for "drag-n-drop" property developers, so I never used the designers anyhow, but now I wasn't even going to let Intellisense help me either.  If I needed to do something, guess what, I had to actually read the manual.  There would be no copy/paste slop coding and no hitting the forums every single time I hit the slightest bump.  If I had a problem, I would think about it and solve it using that technology, not simply throw some hack together.  I had enough years of wasting my time doing that.  Yes, at first, this got me into trouble, but I was determined to be a professional programmer at any cost and not a coder at all.  I wanted to internalize the systems as I had internalized driving my car.  After the initial slowdown (and the initial "being let go" from a project), I became much faster than all the other developers from the forced memorization of much of the documentation.  Whereas others had to look up how to use the SqlConnection, SqlCommand, SqlDataAdapter, and DataSet combination, I was typing them out as if I were writing a sentence.  Around the same time I spent much time studying LLBLGen Pro, unit testing, object-oriented design (not to be confused with the entry-level concept of object-oriented programming I learned 10 years ago in C++), N-tier architecture, and Modern JavaScript, and a year after that I took a block of time to study web services, MSMQ, COM+, service interop, and other communication mechanisms (most of which were later rolled up into WCF), all to the point of either complete or partial internalization.  In 2005 I was finally able to say that I was a professional programmer, not simply a coder.

As you can see from my story, I know what it's like to be a coder and I know the temptation to remain a coder.  You think "your way" works and therefore you shouldn't change it.  In fact, you probably hate other technologies, because their way isn't "your way".  I can understand that too, but really it's just a misunderstanding of that other technology.  My first ASP.NET application (a photo gallery) had absolutely no declarative code and was written with 100% Response.Write statements (I often tell people "make sure your first project in any technology is NOT a production one"; this is why).  I didn't have the first clue what I was doing and, being a longtime classic web developer, I could not get that "magical" ASP.NET postback through my head.  Building that same application today would consist of an elegant ASP.NET custom control and two or three web forms.  I also understand what it's like to say "...but this is how I do it in this other technology", when in reality that's a completely meaningless statement.  Each technology has its own paradigms, naming schemes, guidelines, concepts, and languages.  Just because you are a Java rock star doesn't mean you will be able to do anything in .NET for at least 6 months.  The same goes for VB rock stars and PHP rock stars (as odd as it sounds, in my experience PHP developers have the hardest time learning .NET whereas VB developers excel rather quickly-- this goes to show how hard it can be to unlink an old technology from your mind).  I also know what it's like to ask yourself "how can I do this?"  That's what a coder asks, whereas a professional asks "How has this been built before in this technology; what is my precedent?"  In case you didn't know, software development has been around for decades.  If you are running around trying to come up with a way to do something, you may be wasting your time, as it's probably been done before, the debates have probably already happened, and people have probably already learned from their mistakes; you just have to accept what they've done for you, within limits, of course (i.e. C++ design patterns transfer to C#, but obviously not OOP style, as .NET doesn't allow multiple implementation inheritance).

So, if you find yourself in a place where most of your development is done doing things "your way" or the way "you have always done it", or if you ever ask "how can I do this", maybe it's time to break down and take some time out of your life to convert from being a coder to a professional.  It is definitely an investment, but with many great books out today (not all are so great), you could probably convert from being a coder to some level of professional programmer rather quickly (it took me a bit longer because I didn't have any role models to work under and I had no idea what the scope of the training was).  There is an old computer science saying that goes something like this: "Anyone can write code that a computer can read; it takes a professional to write code a human can use."  We should all think about that and possibly post it on our walls.

To aid in the process of converting from a coder to a professional programmer, a few months ago I wrote up my requirements for a senior-level developer or architect on my blog.  It should give you a shell outline of some of the technologies and topics you seriously need to have under your belt to be a professional.

Fast Visual Studio 2008 Beta 2 Downloads

WOW! I heard that these downloads were fast, but I had no idea they were this fast!  MSTorrent [theoretically] rules!

Hopefully this will actually work.  The first two times I tried MSCD with Standard I got a "verified" download, but it was completely corrupted.  Then I downloaded and installed Web Developer Express from the web site, and then downloaded and installed Standard from the web site, only to find out that they have completely incompatible versions of the .NET Framework 3.5.  No idea.

If you want to give it a try, head on over to the link below.

Related Links

Minima .NET 3.5 Blog Engine (a.k.a. Minima 2.0)

Since the original release of Minima, I've added a ton of new features to my own blog (which ran on the original Minima).  That, plus the release of the .NET Framework 3.5 Beta 2, which includes LINQ, has prompted me to work further on the public release of Minima.  Thus, the creation of the Minima .NET 3.5 Blog Engine.  This is a major upgrade to Minima.  Here are some of the new features...

LINQ

Though I love LLBLGen Pro (the world's only enterprise-class O/R mapper and database abstractor), I thought it would be tremendously beneficial to build the new version of Minima on something that many more people would be able to appreciate.  Thus, I rewrote all data access components to use LINQ instead of LLBLGen Pro.  In the Minima .NET 3.5 Blog Engine, there is no LLBLGen Pro left over and thus you do not need the LLBLGen Pro assemblies.

If you are new to LINQ, then the Minima .NET 3.5 Blog Engine is something you may want to check out.

Windows Live Writer Support (with RSD Support)

The previous version of Minima didn't have any supported blog client.  That was left as a training exercise for those using Minima as a training tool.  It did, however, have an extensive WCF interface that allowed simple web-service interaction with Minima.  However, since the release of Windows Live Writer (WLW) Beta 2, the need for a WCF interface and the need to write your own blog client is gone.  With this release of Minima, you simply set up your blog, point WLW at it, and it will detect everything for you via the provided rsd.xml and wlwmanifest.xml files.

Metaweblog API

As previously stated, the Minima .NET 2.0 Blog Engine had a WCF interface, but this new release doesn't.  In its place is the XML-RPC based Metaweblog API to allow for seamless interaction with WLW.

Simplified File Structure

The file structure of Minima has been greatly simplified and the number of files in the actual website is very minimal now.  If you recall, the entire idea behind Minima is that it is very minimalistic.  This doesn't mean it doesn't have many features; rather, it means that everything is organized in such a way that makes it look minimalistic.

SQL Server-based HttpHandler Management

A few weeks ago I published a simplified version of the universal HttpHandlerFactory technique, which relies on SQL Server instead of XML configuration files for HttpHandler management.  Under this model you simply go to the HttpHandler table and put in an HttpHandler, the text used to match against a URL, and a matching type ("Contains", "EndsWith", "StartsWith", or "Default").  So, if you want to move your API from /xml-rpc/ to /962c8b59-97dc-490b-a1d1-09b55e47455b/, just go into the table and paste that GUID over the text "xml-rpc" and you're done (you will probably have to kill the cache as well, since Minima caches HttpHandler mappings-- just open and save the web.config file).  Using this same HttpHandler table you can change the comment moderation endpoint (which uses the MinimaCommentHttpHandler).

Google Sitemap Creator

Minima now allows you to create a Google Sitemap simply by using the MinimaSiteMapHttpHandler registered in the HttpHandler SQL Server table.  By default it's registered to anything ending with "sitemap.xml", but by changing the path in the HttpHandler table you can instantaneously change the sitemap path.

File Mapping Support

One of the features I found that I really, really needed a few months ago was a way to symbolically link virtual files to physical files.  Thus I built a simple system that would allow me to store symbolic links in SQL Server.  Initially there is a default "catch all" folder to which all files are naturally mapped.  Internally, this is done by setting MinimaFileHttpHandler to a certain path in the HttpHandler table; /materials/ by default.  For specific files that aren't in that folder, you simply add the mappings to the FileMapping table.  Theoretically, you could have multiple links to the same file using this system.
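In rough C# terms, the resolution works something like this (a sketch only; the class and the dictionary below stand in for the real FileMapping table lookup):

using System;
using System.Collections.Generic;
using System.IO;

public class FileMappingResolver
{
    // Stand-in for the FileMapping table (virtual path -> physical file)
    private Dictionary<String, String> fileMappings = new Dictionary<String, String>( );

    public String Resolve(String virtualPath) {
        // A specific mapping wins; otherwise fall back to the catch-all folder
        if (fileMappings.ContainsKey(virtualPath)) {
            return fileMappings[virtualPath];
        }
        return "/materials/" + Path.GetFileName(virtualPath);
    }
}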

User Rights System

Since there is a public API, there needs to be some type of rights management system.  In the Minima .NET 3.5 Blog Engine, rights are done at the system and blog level and you assign rights to blog authors.  For example, an author needs the 'R' (retrieve) system right to get a list of blogs on the system (what WLW will try to do when you point it at this release of Minima) and would need the 'C' (create) blog right to post to a particular blog.  There are 'C', 'R', 'U', and 'D' rights that can be set in the UserRight table.  If it's a blog right, there is a blogId associated with the right; if it's a system right, the blogId is null.  It's actually a really simple, yet effective rights system.  That's the entire point of minimalism.
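In code form, the check amounts to something like this (a sketch; the UserRight class here is an illustrative stand-in for the real UserRight table access):

using System;
using System.Collections.Generic;

public class UserRight
{
    public Char RightCode;   // 'C', 'R', 'U', or 'D'
    public Int32? BlogId;    // null for a system-level right
}

public static class RightsChecker
{
    // A blog right must match the blogId; a system right matches a null blogId
    public static Boolean HasRight(IList<UserRight> rights, Char rightCode, Int32? blogId) {
        foreach (UserRight right in rights) {
            if (right.RightCode == rightCode && right.BlogId == blogId) {
                return true;
            }
        }
        return false;
    }
}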

Access Control System (via IP address, UserAgent, and HTTP Referrer)

In a recent blog entry I alluded to an access control system I added to my blog.  This access control system allows me to control access by IP address, UserAgent, or HTTP referrer, and based upon one of those three things I can either write a message to the browser or forward them.  So, for example, if I want to block someone trying to download my entire site I can put the following data in the SQL Server Access table: 1, 'I', '10.29.2.100', 'You have been banned', null, true.  That is, on BlogId 1, for an IP address of 10.29.2.100, show a message of 'You have been banned', don't forward them anywhere (that's the null), and enable the rule ('true').  If you want to have them forwarded to another address, just put the destination address in the AccessHttpForward column and the rule is activated immediately.

This is implemented by an HttpModule, so it's going to apply to absolutely every request coming across the system.  They won't be able to download anything off your website, not even images or CSS files.  Also, this system works on a "first hit" model.  That is, the first rule that is hit is the one that is applied.

Tracing via Reflection

This release of Minima also allows you to trace just about anything to SQL Server.  The tracing mechanism works by serializing what you give it into XML and then storing that XML in a table.  Doing something like this comes in tremendously handy when troubleshooting APIs.  Since Minima's TraceManager has a RecordMethodCall method that accepts a params array of Object types, you can send as many parameters as you want to the method and it will serialize them into XML for storage in SQL Server.  Some types are obviously not serializable, but you can always send each property of a type to the method instead of sending the entire object.
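To make that concrete, here's a minimal sketch of what such a method could look like (an illustration only, not Minima's actual TraceManager; the actual SQL Server insert is omitted):

using System;
using System.IO;
using System.Xml.Serialization;

public static class TraceManager
{
    // Serialize each parameter to XML so it can be stored in SQL Server.
    // Non-serializable types would throw here, which is why you may need
    // to pass individual properties instead of the whole object.
    public static void RecordMethodCall(params Object[] parameters) {
        foreach (Object parameter in parameters) {
            if (parameter == null) {
                continue;
            }
            XmlSerializer serializer = new XmlSerializer(parameter.GetType( ));
            using (StringWriter writer = new StringWriter( )) {
                serializer.Serialize(writer, parameter);
                String traceXml = writer.ToString( );
                // INSERT traceXml into the trace table here
            }
        }
    }
}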

Other Features

As with all my web projects, exceptions are e-mailed to the person monitoring the system.  With this release of Minima, I added a feature that I've been using for a while: support for Gmail subjects.  Usually in Gmail, messages that look alike will be grouped together in the same conversation.  This is a tremendous time-saving feature for most everything, but not when it comes to exception monitoring (or comment notification).  So, when enhanced Gmail subjects are enabled, e-mail subjects are suffixed with a long random number (actually, the time in ticks) so that no two messages are in the same conversation.  The same is done for comment notification.
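The suffix itself is trivial; something along these lines (the subject text here is made up):

// Suffix the subject with the time in ticks so that no two notification
// e-mails land in the same Gmail conversation
String subject = String.Format("Minima Exception ({0})", DateTime.Now.Ticks);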

Previous Features

Previous features were fairly basic and included labeling, commenting, and other blog stuff.  The biggest feature of the previous release of Minima was its ability to have more than one URL for a blog entry.  So, for example, if you accidentally blog as /Blog/2007/08/LINQ-ruels.aspx, you can add another URL mapping to that entry so that /Blog/2007/08/LINQ-rules.aspx, which you would set as the default mapping, goes to the same place.  Both are still accessible, but the new default will be the one that shows up as the blog entry's permanent home.  The Minima .NET 3.5 Blog Engine retains this feature.

As a Training Tool

As with all my projects, there is an "As a Training Tool" section in the release notes.  This release of Minima can be used to teach concepts such as ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper SQL Server table design and naming schemes, XML serialization, and XML-RPC.NET usage.

To download the Minima .NET 3.5 Blog Engine simply access the following Subversion repository and download the associated sample SQL Server 2005 database:

NetFXHarmonics Subversion Viewer

In an attempt to make my SolutionTemplate/EBook more accessible and readable, I went ahead and built a simple ASP.NET-based Subversion viewer to allow readers to read my SolutionTemplate/EBook online.  This should allow easier navigation of the project without forcing anyone to download it.  Furthermore, this really applies to all my projects in Subversion: Minima, Data Feed Framework, and SolutionTemplate/EBook.

You can access the NetFXHarmonics Subversion Viewer at one of the following locations:

Basically you just access http://viewer.netfxharmonics.com/PROJECT_NAME/trunk.  It's that simple.  Hopefully these will make referencing these resources and reading SolutionTemplate/EBook a lot easier.

By the way, in case you are wondering, this viewer was created with an HttpHandler that uses the URL being accessed to figure out what path or file to look at in the Apache front-end for Subversion.  After a quick HttpWebRequest, a few regular expressions are run and a header and footer are tacked on to give the final version of the page.
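In rough form, the core of that handler could look something like this (a sketch under assumed names; the real regular expressions and page chrome are more involved):

using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

// Fetch the Apache Subversion page, clean it up with a regular
// expression or two, and sandwich it between the site's header/footer
public String RenderRepositoryPage(String apacheUrl, String header, String footer) {
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(apacheUrl);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse( ))
    using (StreamReader reader = new StreamReader(response.GetResponseStream( ))) {
        String raw = reader.ReadToEnd( );
        // Example rewrite: drop Apache's <title> so the site's own is used
        String body = Regex.Replace(raw, "<title>.*?</title>", String.Empty, RegexOptions.Singleline);
        return header + body + footer;
    }
}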

Real World HttpModule Examples

Back when I first discovered HttpHandlers, I remember being ecstatic that the control I thought I had lost when moving from PHP to ASP.NET was finally returned; when I first discovered HttpModules, I remember almost passing out at the level of power and control you get over the ASP.NET request pipeline.  Since then, I've used HttpHandlers, HttpHandlerFactories, and HttpModules in many projects, but I'm noticing that while many people have heard of them, many have no idea what you would ever use them for.  So, I would like to give a few examples.

The first example is really simple.  On my blog, I didn't want anyone to alias my web site or access it in any other way than by going to www.netfxharmonics.com.  Since HttpModules allow you to plug into the ASP.NET request pipeline, I was able to quickly write an HttpModule to do exactly what I wanted:

public class FixDomainHttpModule : IHttpModule
{
    public void Dispose( ) {
    }

    public void Init(HttpApplication context) {
        context.BeginRequest += delegate(Object sender, EventArgs ea) {
            HttpApplication ha = sender as HttpApplication;
            if (ha != null) {
                // Only dereference the application after the null check
                String absoluteUrl = ha.Context.Request.Url.ToString( ).ToLower( );
                if (MinimaConfiguration.ForceSpecifiedDomain) {
                    if (!absoluteUrl.StartsWith(MinimaConfiguration.Domain.ToLower( ))) {
                        ha.Response.Redirect(MinimaConfiguration.Domain);
                    }
                }
            }
        };
    }
}

...with this web.config:

<httpModules>
  <add name="FixDomainHttpModule" type="Minima.Web.HttpExtensions.FixDomainHttpModule" />
</httpModules>

By doing this I don't need to put a check in each page or in my MasterPage.  Since HttpModules apply to the entire application, even URLs accessing my images are forced to go through www.netfxharmonics.com.

Another example is a simple authentication system for a system I was working on a while back.  The application allowed anyone logged into Active Directory to access its resources, but only certain people logged into Active Directory would be authorized to use the application (i.e. anyone could access images and CSS, but only a few people could use the system).  Knowing that the .NET Framework is the model for all .NET development, I looked at the machine's web.config to see how ASP.NET implemented its Windows and Forms authentication.  As it turns out, it does so with HttpModules.  So, I figured that the best way to solve this problem was by creating an HttpModule, not by throwing a hack into each of my WebForms or MasterPages.  Furthermore, since ASP.NET uses the web.config for its configuration, including authentication configuration, I wanted to allow configuration of my authentication module via the web.config.  The general way I wanted to configure my HttpModule was with a custom configuration section like this:

<Jampad>
    <Security RegistrationPage="~/Pages/Register.aspx" />
</Jampad>

The code for the HttpModule was extremely simple and required only a few minutes to throw together.  If the page being accessed is a WebForm and is not the RegistrationPage set in the web.config, then the system's Person table is checked to see if the user logged into the machine has an account in the application.  If not, then there is a redirect to the RegistrationPage.  Simple.  Imagine how insane that would have been if you had to test for security on each page.

public class JampadSecurityModule : IHttpModule
{
    public void Dispose( ) {
    }

    public void Init(HttpApplication context) {
        context.BeginRequest += delegate(Object sender, EventArgs ea) {
            HttpApplication ha = sender as HttpApplication;

            if (ha != null) {
                CheckSecurity(context);
            }
        };
    }

    private void CheckSecurity(HttpApplication context) {
        SecurityConfigSection cs = (SecurityConfigSection)ConfigurationManager.GetSection("Jampad/Security");

        // If the configuration section is missing, there is nothing to enforce
        if (cs == null) {
            return;
        }

        if (String.IsNullOrEmpty(cs.Security.RegistrationPage)) {
            throw new SecurityException("Security RegistrationPage is required.");
        }

        if (!context.Request.Url.AbsoluteUri.Contains(cs.Security.RegistrationPage) &&
            context.Request.Url.AbsoluteUri.EndsWith(".aspx")
            ) {
            PersonCollection pc = new PersonCollection( );
            pc.GetMulti(new PredicateExpression(PersonFields.NTLogin==ActiveDirectoryFacade.NTUserName));

            if(pc.Count < 1){
                context.Response.Redirect(cs.Security.RegistrationPage);
            }
        }
    }
}

Again, plugging this into the web.config makes everything automatically happen:

<httpModules>
    <add name="JampadSecurityModule" type="Jampad.Security.JampadSecurityModule" />
</httpModules>

Recently I had to reuse my implementation in an environment that would not allow me to use LLBLGen Pro, so I had to rewrite the above 2 lines of LLBLGen Pro code into a series of strongly-typed DataSet tricks.  That implementation also had a LoginPage and an AccessDeniedPage in the custom configuration section, but other than that it was the same idea.  You could actually take the idea further by checking whether the person is currently authenticated and, if they aren't, doing a check on the Person table.  If they have access to the application, then set the PersonLastLoginTime column to the current time.  You could do many things with this implementation that would be rather crazy to do in a WebForm or MasterPage.
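If you find yourself in the same no-O/R-mapper situation, the same check can be done with plain ADO.NET (shown here instead of typed DataSets for brevity; the connection string and the Person column name are assumptions):

using System.Data.SqlClient;

// Equivalent of the two LLBLGen Pro lines: does the current Windows
// user have a row in the Person table?
using (SqlConnection connection = new SqlConnection(connectionString)) {
    SqlCommand command = new SqlCommand(
        "select count(*) from Person where PersonNTLogin = @ntLogin", connection);
    command.Parameters.AddWithValue("@ntLogin", ActiveDirectoryFacade.NTUserName);
    connection.Open( );
    Int32 personCount = (Int32)command.ExecuteScalar( );

    if (personCount < 1) {
        context.Response.Redirect(cs.Security.RegistrationPage);
    }
}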

Another example of an HttpModule would be the custom access control system I built into my blog.  I'm not going to paste the full code here as it's just the same idea as the other examples, but I will explain the concept.  Basically I created a series of tables in my blog's SQL Server database that held information on access control.  In the Access table I put columns such as AccessDirection (allow, deny), AccessType (IP address, UserAgent, HTTP referral), AccessText, AccessMessage, and AccessRedirect.  My HttpModule would filter every ASP.NET request through this table to figure out what to do with it.  For example, I could block an IP address by creating a table record for 'deny', 'IP Address', '10.2.1.9', 'Your access has been denied.', NULL.  Immediately upon inserting the row, that IP address is blocked.  I could also block certain UserAgents (actually this was the original point of this HttpModule-- bots these days have little respect for the robots.txt file).  I could also block requests that came from a different web site.  This would allow me to stop people from leaching images off my web site for use on their own.  With a simple HttpModule I was able to do all this in about an hour.  By the way, one record I was tempted to create was the following: 'deny', 'UserAgent', 'MSIE', NULL, 'http://www.getfirefox.com/'.  But I didn't :)
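That said, the skeleton is small.  Here's a minimal sketch of the idea (with a hard-coded rule standing in for the Access table lookup, and only the IP address case shown):

using System;
using System.Web;

public class AccessControlHttpModule : IHttpModule
{
    public void Dispose( ) {
    }

    public void Init(HttpApplication context) {
        context.BeginRequest += delegate(Object sender, EventArgs ea) {
            HttpApplication ha = sender as HttpApplication;
            if (ha == null) {
                return;
            }

            // Stand-in for a 'deny' row in the Access table; the real module
            // loads its rules from SQL Server and applies the first match
            if (ha.Context.Request.UserHostAddress == "10.2.1.9") {
                ha.Response.Write("Your access has been denied.");
                ha.Response.End( );
            }
        };
    }
}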

Now when would you use an HttpModule versus an HttpHandler?  Well, just think about what the difference is.  An HttpHandler handles a specific address pattern for a specific set of HTTP verbs, while every request in an application goes through an HttpModule.  So, if you wanted to have an image creator at /ImageCreator.imgx, then you would register the .imgx extension with IIS and then register your image creation HttpHandler in your web.config to handle that address (in case you forgot, web browsers care about the Content-Type, not the file extension; in this example, your HttpHandler would set the Content-Type to 'image/png' or whatever your image type is.  That's how a web browser knows what to do with a file.  It has nothing to do with the file extension; that's just for IIS).  On the other hand, if you wanted to block all traffic from a specific web site, then you would create an HttpModule, because HttpModules handle all traffic in an application.  So, if you just remember this fundamental difference in purpose between the two, then you shouldn't have any problems in the future.
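For instance, the image creator mentioned above boils down to a handler shaped like this (a sketch; the actual image generation is omitted and the class name is made up):

using System;
using System.Web;

public class ImageCreatorHttpHandler : IHttpHandler
{
    public Boolean IsReusable {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context) {
        // The browser only cares about the Content-Type; the .imgx
        // extension exists solely so IIS routes the request to ASP.NET
        context.Response.ContentType = "image/png";
        // Write the generated PNG bytes to context.Response.OutputStream here
    }
}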

Seriously Awesome Blog

No... unfortunately not this one.  Today I came across Rick Strahl's blog.  I've been reading his stuff for a long time now and have been a big fan of his work for a while, but for some reason it was only today that I got around to thinking "wait... maybe he has a blog!"  In any case, if you read my blog, then you REALLY need to be reading his.  His work covers very similar topics: .NET, ASP.NET internals, JavaScript, WCF, COM, the Windows Live Writer API, except that his, of course, is much better.  Aside from his blog, his articles and papers are also incredible.  One in particular I would like to point out is a classic: A low-level Look at the ASP.NET Architecture, an incredibly in-depth look at the internals of ASP.NET (and required reading for all ASP.NET developers!).  You can also see him answering questions all over the forums (places where you won't see me!  Scary places!)

He's definitely on my list of "people smarter than me" (there's a success principle that states that the key to success is to surround yourself with people smarter than yourself).  Also on that list would be Scott Hanselman, Fritz Onion, Charles Petzold, Don Box, Frans Bouma, and Brad Abrams (there are more, but as a minimalist I feel weird when a list gets too long).  These are my .NET role models, and if you don't have their blogs in your feed reader, then you seriously need to get on that.  If you have too many already, delete mine.  BTW, they each have something in common: they are either MVPs or work at Microsoft (or are kind of both):

Related Links

Simplified Universal HttpHandlerFactory Technique

A few months ago I wrote about the Universal HttpHandlerFactory Technique, where you have one HttpHandlerFactory that all your ASP.NET processing goes through, and in that HttpHandlerFactory you choose what HttpHandler is returned based upon what directory or file is accessed.  I still like this approach for certain scenarios, but my blog got to the point where I was managing 11 different HttpHandlers for about 25 different path and file patterns.  So, it was time to simplify.

What I came up with was basically my go-to card for everything: put it in SQL Server.  From there I just check the URL being accessed against the patterns in a database table and look up what HttpHandler to use for that particular request.  Then, of course, I cache the HttpHandler for future access.

Here are my SQL Server tables (yes, I do all my design in T-SQL-- SQL GUI tricks are for kids):

create table dbo.HttpHandlerMatchType  (
HttpHandlerMatchTypeId int primary key identity not null,
HttpHandlerMatchTypeName varchar(200) not null
) 

insert HttpHandlerMatchType select 'Contains'
insert HttpHandlerMatchType select 'Starts With'
insert HttpHandlerMatchType select 'Ends With'
insert HttpHandlerMatchType select 'Default'

create table dbo.HttpHandler (
HttpHandlerId int primary key identity not null,
HttpHandlerMatchTypeId int foreign key references HttpHandlerMatchType(HttpHandlerMatchTypeId),
HttpHandlerName varchar(300),
HttpHandlerMatchText varchar(200)
) 

Here is the data in the HttpHandler table:

[Image: HttpHandler table data]

Looking at this image and the SQL Server code, you can see that I'm matching the URL in different ways.  Sometimes I want to use a certain HttpHandler if the URL simply contains the text in HttpHandlerMatchText, and other times I want to see if the URL ends with it.  I included an option for "starts with" as well, which I may use in the future.  This will allow me to have better control of how paths and files are processed.  Also, notice that one is "base".  This is a special one that basically means that the following HttpHandler will be used (keep in mind we are in a class that inherits from the PageHandlerFactory class-- please see my original blog entry):

base.GetHandler(context, requestType, url, pathTranslated);

Now in my HttpHandlerFactory's GetHandler method I'm doing something like this (also note how LLBLGen Pro helps me simplify my database access):

HttpHandlerCollection hc = new HttpHandlerCollection( );
hc.GetMulti(new PredicateExpression(HttpHandlerFields.HttpHandlerMatchTypeId != 4));

IHttpHandler hh = null;
foreach (HttpHandlerEntity h in hc) {
    hh = MatchHttpHandler(absoluteUrl, h.HttpHandlerName.ToLower( ), h.HttpHandlerMatchTypeId, h.HttpHandlerMatchText.ToLower( ));
    if (hh != null) {
        break;
    }
}

This basically just loops through all the HttpHandlers in the table which are not the "default" handler (which will be used when there is no match).  The MatchHttpHandler method basically just passes the buck to another method depending on whether I'm matching the URL based on Contains, StartsWith, or EndsWith.

private IHttpHandler MatchHttpHandler(String url, String name, Int32 typeId, String text) {
    IHttpHandler h = null;
    switch (typeId) {
        case 1:
            h = MatchContains(url, name, text);
            break;

        case 2:
            h = MatchStartsWith(url, name, text);
            break;

        case 3:
            h = MatchEndsWith(url, name, text);
            break;

        default:
            throw new ArgumentOutOfRangeException("Invalid HttpHandlerTypeId");
    }

    return h;
}

Here is an example of one of these methods; the others are similar:

private IHttpHandler MatchContains(String url, String name, String text) {
    if (url.Contains(text)) {
        return GetHttpHandler(name);
    }
    return null;
}

As you can see, it's nothing fancy.  The last method in the chain is the GetHttpHandler, which is basically a factory method that converts text into an HttpHandler object:

private IHttpHandler GetHttpHandler(String text) {
    switch (text) {
        case "base":
            return new MinimaBaseHttpHandler( );

        case "defaulthttphandler":
            return new DefaultHttpHandler( );

        case "minimaapihttphandler":
            return new MinimaApiHttpHandler( );

        case "minimafeedhttphandler":
            return new MinimaFeedHttpHandler( );

        case "minimafileprocessorhttphandler":
            return new MinimaFileProcessorHttpHandler( );

        case "minimapingbackhttphandler":
            return new MinimaPingbackHttpHandler( );

        case "minimasitemaphttphandler":
            return new MinimaSiteMapHttpHandler( );

        case "minimatrackbackhttphandler":
            return new MinimaTrackBackHttpHandler( );

        case "minimaurlprocessinghttphandler":
            return new MinimaUrlProcessingHttpHandler( );

        case "projectsyntaxhighlighterhttphandler":
            return new ProjectSyntaxHighlighterHttpHandler( );

        case "xmlrpcapi":
            return new XmlRpcApi( );

        default:
            throw new ArgumentOutOfRangeException("Unknown HttpHandler in HttpHandlerMatchText");
    }
}

There is one thing in this that stands out:

case "base":
    return new MinimaBaseHttpHandler( );

If the "base" is simply a call to base.GetHandler, then why am I doing this?  Honestly, I just didn't want to pass around all the required parameters for that method call.  So, to make things a bit more elegant, I created a blank HttpHandler called MinimaBaseHttpHandler that does absolutely nothing.  After the original iteration through the HttpHandlerCollection is finished, I then do the following (it's just a trick to make the logic more consistent):

if (hh is MinimaBaseHttpHandler) {
    return base.GetHandler(context, requestType, url, pathTranslated);
}
else if(hh != null){
    if (!handlerCache.ContainsKey(absoluteUrl)) {
        handlerCache.Add(absoluteUrl, hh);
    }
    return hh;
}

One thing I would like to mention is something that sample alludes to: I'm not constantly running everything through this process; I'm caching the URL-to-HttpHandler mappings.  To accomplish this, I simply set up a simple cached dictionary to map URLs to their appropriate HttpHandlers:

// Shared across all requests for the application's lifetime; in production
// you would want to synchronize access to this shared state
static Dictionary<String, IHttpHandler> handlerCache = new Dictionary<String, IHttpHandler>( );

Before ANY of the above happens, I check to see if the URL-to-HttpHandler mapping exists and, if it does, return it:

if (handlerCache.ContainsKey(absoluteUrl)) {
    return handlerCache[absoluteUrl];
}

This way, URLs can be processed without having to touch the database (of course ASP.NET caching helps with performance as well).

Related Links

All my work in this area has been rolled up into my Themelia ASP.NET Framework, which is freely available on CodePlex. See that for another example of this technique.

Brainbench AJAX Exam

Well, it's official: I took the role of principal author of the Brainbench AJAX exam.  Now I need to turn my usual AJAX curriculum into something worthy of an exam.  Basically I need to create a suitable outline with about 7-9 topics and 3-5 subtopics each, and put 4-6 questions into each subtopic to come up with a grand total of 160 questions.  Since I've done this already with the C# 2.0 exam, it should be fairly straightforward!  Err, maybe...

What will the exam cover?  Well, the fundamentals of AJAX.  I'm working on a video series right now that will cover what I refer to as the three pillars of AJAX: Modern JavaScript, browser dynamics (a.k.a. "DOM manipulation" or "DHTML"), and AJAX communication.  Modern JavaScript topics that will be covered include JavaScript namespaces, closures, and multi-cast events.  The browser dynamics pillar includes topics such as DOM tree navigation, node creation and removal, interpreting node manipulation (e.g. moving a box, changing a color), as well as architecture decisions (e.g. "should this be a div or a span?").  Finally, AJAX communication topics will include XMLHttpRequest usage, result interpretation, performance concerns, JSON translation, and callback creation.  These are of course not all the topics, but just a sampling.  The point is that the exam will basically be the exam for the video series.

To be clear, I will not have anything vendor-specific near the exam.  This is one of the reasons I took the position.  The last thing we need is an exam which tests you on two or three completely different frameworks.  Java developers won't have a clue about ASP.NET AJAX, and ASP.NET developers won't have a clue about the other 100 or so frameworks in existence.  I also have absolutely no intention of asking about obscure AJAX techniques that almost no one would ever know (e.g. request queuing, animation).  So, really, my video series will cover more than the exam, as I have every intention of relying fairly heavily on Firebug in the video series, but that can't be on the exam.

Comment Rules

Below are my filters for comments; I've made them as simple as possible.

  1. If your comment resembles the immature and nonsensical gibberish on YouTube, then it won't ever see my web site.
  2. If your comment is simply hate mail, then it would be unprofessional for me to post it.  No one needs to read about someone else's insecurities.
  3. If you ask me an in-depth question or bring up a conversation topic, then I will, of course, answer the question through the appropriate channel of e-mail.  This isn't a forum or a discussion board; it's a blog that allows for one-way intelligent statements.  My blog and comment system are designed after the idea of a scholarly lecture: there will be no questions in class except for clarification, with further conversation happening in private sessions.  Responses will be posted to the blog when appropriate; otherwise they will be sent via e-mail (see next rule).
  4. If you don't include your real e-mail and ask a question, then I obviously can't post it or answer your question.  Again, this isn't a forum.   I don't like forums and avoid them whenever I can.
  5. If I have to hire a professional linguist to parse your comment, then I'm not going to read it let alone post it.
  6. If you are going to object to a point, you must obviously cite your reference and/or give precedent.
  7. If you in any way suggest that a rule, standard, law, regulation, code, specification, or guideline isn't a "good one" (not sure what that means), then I cannot post your comment.  We cannot come to the law to judge the law; rather, we are judged by the law.

In the past two weeks I've read some really insane comments, ranging from people saying that memorization is bad (weird!) to people calling "home made" techniques something "hackers" do (this was very much a YouTube-style comment).  Honestly, I have little time for comments like these (or the hate mail ones), so please be aware that I have a very well defined process I follow when considering a comment (FYI, this process was adapted from my process for filtering recruiters-- getting 8 calls a day for jobs is a bit excessive when you already despise the industry).

Creating JavaScript objects from ASP.NET objects

If you have worked with ASP.NET for any length of time you probably know that the ID you set on a control on the server side changes by the time it gets to the client side.  For example, if you have a textbox with an ID of "txtUsername" in ASP.NET, you will probably end up with a textbox with an ID of something like "ctl100_txtUsername".  When working only with server-side code, this is fine.  However, I'm a JavaScript programmer as well as a .NET programmer.  Most of my applications are heavily Ajax based, and sometimes the entire application, through all of its screens and uses, will have ZERO postbacks.  So, it's important for me to have the correct ID on the client, and I need to be able to access controls on the client side-- not only so I can access the ID from JavaScript functions, but also so I can set loosely-coupled events on objects.

Typically the way people get around this is with simple, yet architecturally blasphemous techniques.  The first technique is to break a foundational rule of software architecture (i.e. low coupling) by putting an event right on the element itself.  That is, they hard-code the event they want to raise right on the control itself.  This is a very strange technique, as the .NET developers who use it are usually those who would never put a server-side event on a control using OnServerClick.  Somehow, they think that putting an event directly on a client-side control via OnClick is less wrong.  This is obviously a case of extremely tight object coupling, an extremely poor architectural practice.  In case you can't picture it, here's what I'm talking about:

<asp:TextBox id="txtUsername" runat="server" Text="Username" OnClick="ClearBox( );"></asp:TextBox>

A much, much better way of getting around this is to use the ClientID property of an ASP.NET control to assign a multi-cast JavaScript event to that control.  However, we must be careful with this technique as it too can lead to design problems.  The most obvious problem is that of spaghetti code: the mixing of two or more languages in the same file.  Professional ASP.NET developers know that to have a sound system, you must be using code-behinds.  The ASP.NET development model greatly improves the readability of code by making sure that the C# (or VB) code and the ASP.NET declarations are completely separate.  While reading one page, your brain doesn't need to be flipping all over the place trying to translate multiple languages at the same time.  To be sure, those of us from the PHP world know that with time you can become very proficient in developing in spaghetti code, but, on the other hand, those of us who have taken over a project from another person know the pains of trying to decode that slop.

The typical technique for applying loosely-coupled events (and for much other JavaScript functionality) is actually very strange.  Though ASP.NET developers will insist on separating their C# (or VB) from their ASP.NET pages, they have no problem throwing JavaScript into the midst of C# code.  This is almost as bad as putting ad-hoc SQL queries in your C# code (very bad) or coupling CSS rules to an element via the HTML "style" attribute, thereby making the solution absolutely impossible to theme and breaking any chance of debugging CSS problems (very, very bad).  JavaScript and CSS have had a code-behind model since long before ASP.NET was around.  So, we need to respect the practices of code separation as much as possible.  To this end, we need a better solution than throwing a large block of JavaScript into an ASP.NET page.

Here is an example of the old technique using legacy JavaScript (in contrast to Modern JavaScript shown in a bit):

<script type="text/javascript"> 
function ClearBox( ) {
    // Note the quotes: ClientID must be emitted as a JavaScript string
    document.getElementById('<%=txtUsername.ClientID%>').value = ''; 
} 

document.getElementById('<%=txtUsername.ClientID%>').onclick = ClearBox;
</script>

Typically, however, you will see a TON of JavaScript code simply thrown into the page with no respect for code separation and with no possibility for multi-cast events.  (Furthermore, not only is this code raw spaghetti code, that function isn't even in a JavaScript namespace.  Please see my link below for more information on JavaScript namespaces; if you are familiar with .NET namespaces, then you have a head start on learning JavaScript namespaces.  Would you ever throw a class into an assembly without putting it in a namespace?  Probably not... it's the same idea in JavaScript.)

Fortunately, there is a better model using a couple of JavaScript files.  The first JavaScript file (Event.js) is one of my standard files you will see in all of my JavaScript applications (update: I no longer use this-- now, I use prototype.js from the Prototype JavaScript Framework to replace a lot of my own code):

var Event = {
    Add: function (obj, evt, func, capture) {
        if(obj.addEventListener) {
            obj.addEventListener (evt, func, capture); 
        }
        else if(obj.attachEvent) {
            obj.attachEvent('on' + evt, func); 
        }
    },
        
    Remove: function (obj, evt, func, capture) {
        if(obj.removeEventListener) {
            obj.removeEventListener (evt, func, capture);
        }
        else if(obj.detachEvent) {
            obj.detachEvent('on' + evt, func);
        }
    }
};

This Modern JavaScript document simply allows you to add or remove events from an object.  It's fairly simple.  Here's a file (AspNet.js) you will find in some of my applications:

var AspNet = {
    Objects: new Object( ), 
    
    RegisterObject: function(clientId, aspNetId, encapsulated) {
        // Bracket notation avoids the eval( ) call; $( ) is Prototype's
        // document.getElementById( ) shortcut
        if(encapsulated) {
            AspNet.Objects[clientId] = $(aspNetId); 
        }
        else {
            window[clientId] = $(aspNetId); 
        }
    }
};

This one here is where the meat is.  When you call the RegisterObject function, you actually register an ASP.NET control with JavaScript so that you can use it without needing the fancy ASP.NET ClientID.  Furthermore, it also allows you to use the object directly in JavaScript without relying on document.getElementById( ).  This technique is actually a cleaner version of the one I previously mentioned.  It does require you to put a little JavaScript in your page, but that's OK, as it's ASP.NET interop code used to register ASP.NET controls with JavaScript; therefore, you aren't really breaking any rules.

In general, you should never, ever place JavaScript in your ASP.NET system.  There are of course some exceptions to this, but the exceptions are based on common sense and decades of interop research from the industry.  Two of the most common exceptions to never having JavaScript in your ASP.NET system are control generation and sewing code ("interop code").  Control generation is when a server-side control creates what the browser will use, in order to protect the users of the control (the developers) from the interop between ASP.NET and JavaScript.  That is, to hide the plumbing, thereby increasing the level of abstraction of the system.  The C++ guys deal with the pointers, protecting me from memory management, and the ASP.NET/AJAX control creators deal with the JavaScript plumbing so other developers don't have to.  It's the same idea.  Continuing with this analogy, while C# allows unsafe pointers, they should only be used in extremely rare circumstances.  JavaScript in ASP.NET should be about as rare.  One example of this rarity is in reference to the other exception: sewing code.

Sewing code ("interop code"), on the other hand, is exactly what you are seeing in this technique.  It simply connects one technology to another.  One major example of sewing code in the .NET Framework is where ADO.NET connects directly to SQL Server.  At some point there must be a connection to the external system and the calling system must speak its language (i.e. SQL).  In the technique here, the interop is between ASP.NET and JavaScript and, as with all interop, sewing is therefore required.  Mixing languages beyond that is a very strong sign of poor design skills and a lack of understanding of GRASP patterns.  Many excellent, genius programmers could take their systems to the next level by following this simple, yet profound, time-tested technique.  Martin Fowler, author of the classic computer science text "Refactoring: Improving the Design of Existing Code" (one of my core books, right next to the framework design guidelines!), is often quoted as saying "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."  That's, of course, contextual, as people who are complete fools in software design are often 100x better hardcore programmers than the best software designers.

Now, to use the AspNet JavaScript namespace, you simply put code similar to the following somewhere in your ASP.NET page (or use the Event.observe function from the Prototype Framework):

<script type="text/javascript">  
Event.Add(window, 'load', function(evt) { 
    // ASP.NET JavaScript Object Registration

    AspNet.RegisterObject('txtUsername', '<%=txtUsername.ClientID%>');
    AspNet.RegisterObject('txtPassword', '<%=txtPassword.ClientID%>');
    Initialization.Init( ); 
}, false);
</script>

Basically, when the page loads, your objects are registered.  What does this mean?  It means you can use the objects as they are used in this Initialization.js file (another file in all of my JavaScript projects):

var Initialization = {
    Init: function( ) {
        txtUsername.onclick = function(evt) {
            if(!txtUsername.alreadyClicked) {
                txtUsername.value = '';
                txtUsername.alreadyClicked = true; 
            }
        };
        
        txtPassword.onclick = function(evt) {
            if(!txtPassword.alreadyClicked) {
                txtPassword.value = '';
                txtPassword.alreadyClicked = true;
                txtPassword.type = 'password';
            }
        };
    }
};

As you can see, there is no document.getElementById( ) or $( ) here.  You are simply using the object naturally, as if it were strongly typed.  The best part is that to support another ASP.NET page, you simply have to put a similar JavaScript script block in that page.  That's it.  Furthermore, if you don't want to access the control directly, perhaps because you are worried about potential naming conflicts, you can send a boolean value of true as the third argument to the AspNet.RegisterObject function; this will put the objects under the AspNet.Objects namespace, thereby, for example, making txtUsername accessible as "AspNet.Objects.txtUsername" instead of simply "txtUsername".

There is one catch though: you have to attach to the window load event using multi-cast events.  In other words, if at any point you assign a handler directly to window.onload, then you will obviously overwrite all the other handlers.  For example, the following would destroy this entire technique:

window.onload = function(evt) {
    // Do something...
};

This should not be a shocker to C# developers.  In C#, when we attach an event handler we are very careful to use the "+=" syntax and not the "=" syntax.  This is the same idea.  It's a very, very poor practice to ever assign directly to window.onload because you have absolutely no idea when you will need more than one event to call more than one function.  If your MasterPage needs the window load event, your Page needs the window load event, and a Control needs the window load event, what are you going to do?  If you decide you will never need multi-cast load events and then get a 3rd party tool that relies on them, what will you do when it overwrites your load handler or when you overwrite its?  Have fun debugging that one.  Therefore, you should always use loosely-coupled JavaScript multi-cast events for window load.  Furthermore, it's very important to follow proper development practices at all times and never let deadlines stop you from professional quality development.
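
For reference, here is what a minimal multi-cast attachment looks like without any framework (a sketch using the standard DOM API plus the Internet Explorer fallback of the era):

// Attach a load handler without overwriting anyone else's.
function addLoadHandler(fn) {
    if (window.addEventListener) {
        window.addEventListener('load', fn, false); // W3C DOM
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', fn);           // Internet Explorer
    }
}

addLoadHandler(function(evt) {
    // Do something... every other registered load handler still runs.
});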

Related Links

The Wandering Developer

This has been an interesting week.  I did an experiment to help prove something that deep down we all know anyway: YOU DON'T NEED TO BE AT THE OFFICE TO WORK.  Last weekend I drove to Chicago (from Kansas City) to fix a few problems caused by overworking in the previous week and while on the trip, I started my 4-Hour Workweek ("4HWW") training.  The trip was only a Saturday, Sunday, and Monday trip and I had to be back Tuesday for work.  However, on the way back the 4HWW training made me realize the obvious: I work remotely and I am remote.  DUH!  When I realized that, I immediately turned NORTH (home is south) away from Kansas City heading towards Minneapolis.  I also called my client telling him that I'm going to remotely call in for the meeting as there was absolutely no reason for me to physically be there.  While in Minneapolis I stayed with a relative and worked from an office in their house.  Since there was no boss, no client, and no coworkers to bother me, I was able to have PURE productivity just as the 4HWW book said I would have.

It never really made ANY sense to me why, living in the 21st century, we developers need to physically go to an office to have a boss fight our productivity at every turn.  People just work better when people aren't watching.  DUH!  Therefore, as of right now... I'm done working on site and am extending my consulting business ("Jampad Technology, Inc.") from coast to coast (possibly global soon).  I am no longer going to work at any particular location, but will work from a different city in the United States at various intervals for the next few years (until I get sick of that and change careers completely).  Since I don't own a house, don't have kids, am not married, and since my car is completely paid off and my rent is the lowest in the world, I can do this without affecting anything.  Why didn't I do this sooner?  Well, I only did the 4HWW training last weekend.  Phenomenal training!  I'm sick and tired of living out Office Space every day of my life and, as it turns out, my Seminary work isn't going to do itself.  Last year I instituted my quarterly vacation policy (I take a 3-9 day vacation every 3 months) and the success of that naturally led to this next step.  It was either that or continue to be on the lame 100 Hour Work Week program that most people are on.  Forget that.  I'm sick of working in an office.  Period.

One thing that I realized recently was something that makes me feel stupid for not thinking of sooner.  As a right-brained (as opposed to left-brained) developer, architect, minimalist, and purist I always try to increase the level of abstraction for my life.  I'm always trying to make things more logically manageable instead of simply physically manageable.  The other day I handed my driver's license to a cashier at a grocery store and she responded "Wow, you're a long way from home".  I immediately got to thinking what a strange thing that is to say.  First of all, whatever happened to the saying "home is where the heart is"?  Is this something people hang on their kitchen wall, but don't ACTUALLY believe?  Is society so bad that people have bumper stickers and plaques of cute little sayings, but don't actually believe them? (obviously, yes)  Secondly, this person was making a statement about my physical, not logical representation.  When I realized this, it dawned on me that much of the technology world (including myself) is living in a painful contradiction.  We are trying to make everything logically manageable (e.g. Active Directory, the Internet, web server farms), but we just can't seem to have a logical representation of the most important thing of all: people.  There's no reason for me to be in an office every single day just like there's no reason my web server needs to be with me.  Furthermore, what's with those awesome science fiction scenes in movies where people are remotely (logically) present in meetings via 3D projection from all over the world?  We dream of this stuff, but I'm taking it now.

So, I'm now available to help on projects nationwide.  If you need .NET (C#), ASP.NET, JavaScript/AJAX, LLBLGen Pro/LINQ, Firefox, or XHTML/CSS development, porting, auditing, architecture, or training, all based on best practices, please drop me a line using my i-name account.  My rate varies from project to project and for certain organizations my rate is substantially discounted.  Also, please note that I will never, ever work with the counterproductive technologies VB or Visual SourceSafe (if you want me to set up web-based Subversion on your site, drop me a line!)

NetFXHarmonics .NET SolutionTemplate

I've had a number of requests for my SolutionTemplate without the e-book lessons, so below is its Subversion repository:

You may still access the e-book version as well in the related links section, but if all you need is a new project to get started then you can use the above; this repository shares the same code base with the e-book version.  I will be very careful to make sure that the two versions are kept in sync to minimize confusion.

I should also mention that since the initial release, SolutionTemplate has had various substantial updates based on my more recent research.  They will both continue to be updated as I think of more foundational portions that should be put in the template.  This SolutionTemplate has helped me time and time again when starting new projects and I hope it helps you too; you can use it for any of your production projects as you please.

Lastly, in case you're wondering why this isn't a Visual Studio template: Subversion is just a better way to work with code that updates regularly.  It's an extremely versatile, lightweight, transactional, versioned file system that allows for extremely efficient updates.  I would pay for each of those things, but Subversion is FREE.  Can't beat that!

Related Links

Silverlight 1.0 Released

In case you haven't found out yet, Silverlight 1.0 has officially been released.  Being a JavaScript developer and having been trained in Game Development, I find that to be really awesome because Silverlight 1.0 does an amazing job with graphics and video.  However, you probably shouldn't get too excited as this isn't actually the REAL Silverlight we are all waiting for.  The Silverlight that will make people jump for joy is Silverlight 1.1 (I highly expect this to be renamed to Silverlight 1.5 or Silverlight 2.0 before it's released), which should be released sometime in the next year.  Why Silverlight 1.0 was released in such a castrated form is beyond me.  I'm hearing all kinds of Flash experts slam it left and right, and every million or so insults they actually get one that's right.  I'm not sure why Microsoft's marketing did it this way, but I'm sure they have an awesome plan for it (the marketing team isn't stupid-- there is a reason it's a multi-billion dollar corporation).  I'm very anxious to see how this marketing tactic works.

As I just mentioned, I've been hearing Flash "experts" say all kinds of things about Silverlight and rarely, rarely, rarely are they ever true.  For instance, two days ago I actually heard someone say "...then you have Microsoft's Silverlight and what a piece of junk.  I mean... come on... it doesn't even do video!  How can you expect to compete with Flash when you don't even support video!"  Of course, when you say something with a smile and the right tone, you immediately get people to agree with you.  In reality, however, not only will Silverlight 1.1 support video, Silverlight 1.0 supports video (in all kinds of awesome ways).  Furthermore, video is one of the core features of Silverlight!  In fact, Scott Guthrie over at Microsoft published an awesome blog entry last night discussing some of the great media-intensive features Silverlight brings (he also shows clients actually using Silverlight!)  So, I'm not sure where people get their information (an Adobe forum??) but it's clearly a product of propaganda, not of truth.  Perhaps people should watch demos of a technology and ask a Silverlight expert about it before listening to the anti-Microsoft Flash advocate's explanation of the competition.  It's just a thought.  Actually, that idea could go into every area of life.

In one sense Silverlight isn't going to touch Flash.  That is, in the Silverlight 1.0 sense.  This version of Silverlight doesn't do anything but video and graphics.  If you want to do more, you're going to be building your own components out of graphics primitives.  However, in another sense there is no competition between Silverlight and Flash because of a simple DOA: Flash is Dead On the Arrival of the next Silverlight.  Period.  If you want to do comparisons, then it may be better to compare Silverlight with Adobe Flex.  You can't have a fair comparison between Silverlight and Flash.  That's like comparing Firefox to Internet Explorer.  Firefox is a development integration and web suite whereas Internet Explorer is a COM component thrown into a shell application.  Flash does all kinds of awesome animations and graphics, but is slower than even Java (and that's slow!) when it comes to applications, whereas Silverlight is WPF for the Web, backed by the power and depth of the .NET framework and by CLS languages.  Flash requires that you cough up hundreds of dollars to use an extremely non-intuitive timeline-based system, whereas Silverlight allows you to use anything from Notepad to Visual Studio and follows a much more intuitive event-based model (Flash can do events and Silverlight can do timelines, but I'm speaking in general).  Furthermore, Silverlight is designed after the XHTML/CSS and WPF model of design and development separation that allows developers to do what they do best and designers to do what they do best at the same time, in the same project, with no conflicts.  In the XHTML/CSS model, developers create raw XML and designers create CSS designs; in WPF/Silverlight, developers breathe the life of logic into a solution while the designers breathe the life of beauty into it.  One is using Visual Studio and the other is using Expression Blend.  Or one is using Visual Studio Express and the other is using... um... Visual Studio Express.  You don't need expensive tools.

Something I don't think people realize is that Silverlight isn't all that new.  It's basically "WPF for the Web".  Personally, I think the naming is kind of weird.  I mean, it seems completely backwards.  WPF/E was the project codename and Silverlight is the product name.  That sounds like the opposite of what Microsoft's naming people usually do.  You would think that Silverlight would be the project codename and that WPF/E or "WPF Web Edition" or something would be the final product name.  It's with the name of Silverlight that people get the idea that Silverlight is a completely new Microsoft technology, when in reality it's simply "WPF for the Web".  You need to learn WPF before you get near Silverlight (unless you just want to be a hacking coder, not a professional) and you really need to learn .NET development before you learn WPF.  It's an incredibly powerful technology that seamlessly fits in with the many other .NET technologies.  If you try to jump into Silverlight without learning WPF you won't have a clue what's going on.  You will be completely confused about why the technology even exists and will probably go off telling people that Microsoft made a new useless technology simply to take over a new market.  I expect that to be exactly what the Flash people will be doing.  Also, if you try to learn WPF (or ASP.NET or WCF or any other .NET technology) without understanding the fundamentals and paradigms of .NET, you will probably hate those technologies and complain about how hard .NET is to learn or use.  I've seen this before and it's always based on ignorance or confusion about .NET.  So, if you're going to hit up Silverlight, make sure you understand WPF and .NET first.  I don't mean you "did some projects" in WPF or .NET.  Your experience has nothing to do with your skill.  Did you study the paradigms and philosophies of .NET and WPF?  Do you understand dependency properties?  Routed events?  XAML?  No?  Then you need to fulfill the prerequisites first.

Related Links


Web Application Security Presentation

Today I found a really nice web application security presentation by Joe Walker.  Honestly, almost none of it is common sense and I would therefore encourage all web developers to check this out.  Also on the same page as the presentation are a number of very good AJAX security links like the XSS (Cross Site Scripting) cheat sheet.

BTW, this type of stuff is touched on in the Brainbench AJAX exam.

Links

Prototype and Scriptaculous

OK, it's time that I come out with it: I've switched to using the Prototype and script.aculo.us ("Scriptaculous") JavaScript/AJAX frameworks. For the longest time I've sworn my allegiance to manual AJAX/DOM manipulation as I've always found it to be the absolute most powerful way to get the job done correctly, but as it turns out Prototype/Scriptaculous provide an incredible level of simplification without taking any of your power from you.  It's the ONLY AJAX framework I've found that didn't suck.  Though I'm a .NET developer, I can't stand the Microsoft ASP.NET AJAX ("Atlas") extensions.  Except for its awesome web service handling, which I use all the time, it's a slap in the face of AJAX development. It's bulky, with hard-to-read source code and incredibly low usability.  It seems to be the opposite of the beauty of C# and .NET in general.  With those technologies, everything just falls together without ever needing to read a thing (assuming you are a qualified web professional who understands the foundational concepts of the system). Sure, you have to look things up in the docs, but you don't have to pore over books on the topic to be productive.  The same can be said for Prototype and Scriptaculous.

So, what is this thing? Actually, there are two frameworks: one, Prototype, is a single JavaScript file, and the other, Scriptaculous, is a series of JavaScript files. Prototype is a foundational JavaScript framework that simplifies the existing client-side JavaScript API into something much more intuitive that's also widely cross-browser compatible. Finally! Cross-browser compatibility without needing to support it!  That means we no longer have to do a zillion tests to see how we are supposed to get an element's height. I can just call $('box').getHeight( ) and be done with it! Prototype has classes (types) for Arrays (including a very powerful each( ) function, similar to .NET's ForEach method), Elements (which allows you to modify style, add classes, and get ALL descendants, not just the children), Events (instead of trying to test for addEventListener or attachEvent, just use Event.observe!), and classes for a ton of other things. To put it simply: Prototype gives you a new client-side API. The source code is also incredibly easy to read. It's just the same stuff most of us have already written, but now we don't have to support it.  If we build our applications on Prototype, someone else has pre-tested a BUNCH of our system for us!
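
Here are the kinds of one-liners that paragraph is talking about (with Prototype loaded; the element ID is, of course, hypothetical):

// Get an element's height without any browser sniffing.
var height = $('box').getHeight( );

// Iterate an array with each( ), similar to .NET's ForEach.
['Alpha', 'Beta', 'Gamma'].each(function(name, index) {
  alert(index + ': ' + name);
});

// Attach an event without testing for addEventListener/attachEvent.
Event.observe(window, 'load', function(evt) {
  alert('Page loaded.');
});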

Scriptaculous is a different beast entirely. While Prototype is a new general client-side API, Scriptaculous goes more specifically into dynamics. For example, it allows you to turn a normal list into a sortable list with ONE line of JavaScript.  ONE.  Uno.  Eins.  It also allows you to turn one set of divs into a series of draggable elements (products?) and another set of divs into containers those items can go to (shopping carts?). There's also a very nice set of pre-built animations as well as other common things like an autocompleting textbox and an in-place editable label. These are things I've normally built manually, but now I can use them without micro-managing code.  Code by subtraction RULES!  Scriptaculous is also VERY flexible. Everything you can do in Scriptaculous is extremely customizable thanks to JavaScript's flexible dynamic object syntax and native higher-order function capabilities. That means, for example, that when you create a sortable list you can control exactly how it scrolls and set callback functions all in the same simple line of code. Also, note that Scriptaculous uses Prototype's API for its lower-level functionality. This is why you will often see the two products named together, as in the various books written on "prototype and scriptaculous".

What about some samples? Well, Prototype and Scriptaculous are both SO simple to work with that I have absolutely no idea how someone can write a book on them. I go to various Borders bookstores about every day (it's my office), so I get to see many different books. When I flip through the Prototype/Scriptaculous books I get very confused. How can someone take hundreds of pages to write something that could be said in 20 or 30?  Verbosity sucks (yeah I know... look who's talking).  These frameworks are insultingly simple to work with.

Here are a few very quick samples.  For better samples, just download Scriptaculous and view the extremely well-documented Prototype API online.

Prototype

Want to make a simple AJAX request?

new Ajax.Request('/service/', { 
  method: 'get', 
  onSuccess: function(t) { 
    alert(t.responseText); 
  }
}); 

No XmlHttpRequest object, no COM objects, nothing!

How about updating the content of an element?

Using this element...

<div id="myText"></div> 

...with this JavaScript...

$('myText').update('this is the new text'); 

... you get an updated element!  As you can see, it even uses the typical $ syntax (in addition to $$, $A, $F, $H, $R, and $w syntax!). Just look at the examples in the Prototype API to see more.  You will be shocked to see how easy it is to walk the DOM tree now.  You will also be amazed at how much easier arrays are to manipulate.
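
As a quick taste of $$, it takes a CSS selector and hands back elements ready for further calls (with Prototype loaded; the selector here is just an example):

// Hide every <li> under the element with id 'greek'-- one line.
$$('#greek li').each(Element.hide);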

Script.aculo.us

Using this XHTML structure...

<ul id="greek">
<li>Alpha</li>
<li>Beta</li>
<li>Gamma</li>
<li>Delta</li>
</ul>

...with this SINGLE line of JavaScript...

Sortable.create('greek');

..., you have a sortable list (try it out-- you will notice some nice spring-back animations happening too!)

Need a callback when sorting is completed? (well, of course you do!)  Just give the <li> elements a patterned ID ('listid_count')...

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>

...and add a function callback and you're done.

Sortable.create('greek', {
  onUpdate: function( ){ 
    alert('something happened');
  } 
});

Ooooooooooooooo scary. IT'S THAT EASY! You don't need a book. Just use the docs and samples online.

Here's another one: want to move an item from one list to another?

Just use these elements...

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>
<ul id="hebrew">
<li id="hebrew_1">Aleph</li>
<li id="hebrew_2">Bet</li>
<li id="hebrew_3">Gimmel</li>
<li id="hebrew_4">Dalet</li>
</ul> 

... with this JavaScript.

Sortable.create('greek', { containment: ['greek', 'hebrew'] });
Sortable.create('hebrew', { containment: ['greek', 'hebrew'] });

Want to save the entire state of a list?

var state = Sortable.serialize('greek');

Couple that with the simple prototype Ajax.Request call and you can very quickly save the state of your dynamic application.
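
Something like the following would do it (a sketch; the URL is hypothetical, and Sortable.serialize produces a ready-to-post query string):

// POST the serialized list state to the server.
new Ajax.Request('/savestate/', {
  method: 'post',
  parameters: Sortable.serialize('greek'),
  onSuccess: function(t) {
    alert('State saved.');
  }
});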

Now close your jaw and stop drooling.  I haven't even shown the drag-n-drop, animations, or visual effects that Scriptaculous provides.  Also, get this: it's all FREE. Just go get it at the links below. Be sure to look over the docs a few times to get some more examples of the prototype functionality and scriptaculous usage. I've thrown out A LOT of my own code without looking back now that I have these amazing frameworks. This is good stuff.

AdvancED DOM Scripting Book

Oh, and as always... be very certain that you know your AJAX before you do this.  I know it goes without saying that you need to be a qualified professional to use powerful tools, but some amateurs and hobbyists (and men who get a hand crushed trying to fix the washing machine) think "Hey! This tool can do it for me! I don't need to know how it works!"  So, make sure you understand the three pillars of AJAX (AJAX Communication, Browser Dynamics, and Modern JavaScript) before you even bother with the powerful frameworks or else you will be flying blind.  Basically, if you couldn't recreate the Prototype framework yourself (it's very easy code to read!), you shouldn't be using any JavaScript/AJAX framework.  If you aren't familiar with AJAX communication, browser dynamics, or modern JavaScript, check out Jeffrey Sambells' amazing book AdvancED DOM Scripting.  It's a guide that covers all the prerequisites for AJAX development, from service communication to DOM manipulation to CSS alteration.  Even if you're an AJAX expert, buy this book!

Links

SQL Server Database Model Optimization for Developers

It's my assessment that most developers have no idea how much a poor database model implementation, or an implementation by a DBA unfamiliar with the data semantics, can affect a system. Furthermore, most developers with whom I have worked don't really understand the internals of SQL Server well enough to make informed decisions for their project. Suggestions concerning the internals of SQL Server are often met with extreme reluctance from developers. This is unfortunate, because it is only when we understand a system's mechanics that we can fully optimize our usage of it. Those familiar with the history of physics will recall the story of when Einstein "broke" space with his special theory of relativity. Before Einstein was able to "fix" space, he had to spend nearly a decade trying to decipher how space worked. Thus was born the general theory of relativity.

It's not a universal rule, but I would have to say that the database model is the heart of any technical solution. Yet, in reality, the database implementation often seems to be one of the biggest bottlenecks of a solution. Sometimes it's a matter of poorly maintained databases, but from my experience it seems to mostly be a matter of a poorly designed implementation. More times than not, the SQL Server database model implementation has been designed by someone with either only a cursory knowledge of database modeling or by someone who is an expert in MySQL or Oracle, not SQL Server.

Database modeling does not end with defining entities and their respective relations. Rather, it extends completely into the implementation. What good is an amazing plan if it is going to be implemented poorly? The implementation phase to which I am referring comes before the actual implementation, yet after what most people refer to as "modeling". It's actually not even a phase with a temporal separation, but is rather a phase that requires continual thought and input from the start about the semantic understanding of the real world solution. This phase includes things like data-type management, index design, and security. This phase is the job of the resident architect or senior-level developer, not the job of the DBA. It needs to be overseen by someone who deeply understands both SQL Server and the semantics of the solution. Most of the time the DBA needs to stay completely away from the original data model and focus more on server-specific tasks like monitoring backups and tweaking existing data models based on the specifications that an architect has previously documented. Having said this, I often find that not only is it not the architect or senior developer optimizing a project-- often nobody even cares!

Developers need to start understanding that designing a proper data model based on the real world representation includes minimizing data usage, optimizing performance, and increasing usability (for the solution's O/R mapping). These are not jobs for a DBA. Someone with close knowledge of the project needs to make these decisions. More times than not, a DBA simply does not have the understanding of the project required to make these important decisions. They should stay away from the requirements of the system, leaving this to the architect and senior-level developers. Despite what many well-intentioned DBAs think, they do not own the data. They are merely assistants to the development team leaders.

Let's start off by looking at storage optimization. Developers should be able to look at their data model and notice certain somewhat obvious flaws. For example, suppose you have a table with a few million rows, with each row containing multiple char(36) columns (a guid), two datetime columns (8 bytes each), and six int columns (4 bytes each), two of which are foreign keys to reference/look-up/enumeration tables, plus an int (4 bytes) column which is also the table's primary key and identity. To optimize this table, you absolutely must know the semantics of the solution. For example, if we don't care about recording the seconds of a time, then the two datetime columns should be set to be smalldatetime columns (4 bytes each). Also, how many possible values could there be in the non-foreign-key int columns? Under 32,767? If so, then these could easily be smallint columns (2 bytes each).

What about the primary key? The architect or senior-level developer should have a fairly good estimate of how large a table will ever become. If this table is simply a list of categories, then what should we do? Often the common response is to convert it to a tinyint (1 byte). In reality, however, we shouldn't even care about the size of this primary key. It's completely negligible; even if there were only 100 rows, switching it to a tinyint could cause all kinds of problems. The table would only be marginally smaller and your O/R mappers are now using a Byte instead of an Int32, which could potentially cause casting problems in your solution. However, if this table tracks transactions, then perhaps you need to make it a bigint (8 bytes). In this case, you need to put forth a strong effort to make sure that you have optimized this table down to its absolute raw core, as those bigint values can add up.

Now, what about the foreign keys? If they are simply for reference, then the range of values probably isn't that wide. Perhaps there are only 5 different values against which the data is to be constrained. In this case, the column could probably be a tinyint (1 byte). Since a primary key and foreign key must be the same data type, the primary key must also become a tinyint (1 byte). This small change alone could cut your database size by a few hundred MB. It isn't just the foreign key table that dropped in size; the references between the two tables are now smaller as well (I hope everyone now understands why you need a very good reason before you even think about using a bigint foreign key!). There's something else to notice here as well. Reference tables are very helpful for the developer looking at the raw data, but does there really need to be a constraint in the database? If the table simply contains an Id and Text column with only 8 possible values, then, while the table may be tremendously helpful for documentation purposes, you could potentially drop the foreign key constraint and put the constraint logic in your application. However, keep in mind that this is for millions or possibly billions of rows. If the referencing table contains only a few thousand rows, or if space doesn't have a high opportunity cost, which may be the case if the solution is important enough to actually have that many rows in the first place, then this optimization could cause more problems than it solves. First off, your O/R mapper wouldn't be able to detect the relation. Secondly, and obviously, you wouldn't have the database-level constraint for applications not using the solution's logic.
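
To put rough numbers on the example table above (assuming, purely for illustration, 10 million rows and ignoring row overhead and padding):

datetime -> smalldatetime:  2 columns x 4 bytes saved =  8 bytes/row
int      -> smallint:       4 columns x 2 bytes saved =  8 bytes/row
int FK   -> tinyint:        2 columns x 3 bytes saved =  6 bytes/row
                                               Total = 22 bytes/row

At 10 million rows, 22 bytes/row is roughly 210 MB saved in the table alone, before you count the indexes and references that shrink along with the columns.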

Another important optimization is performance optimization. Sometimes a table will be used in many joins and will be hit heavily by each of the CRUD (Create, Retrieve, Update, Delete) operations. Depending on how important the column is, you may be able to switch a varchar(10) to a char(10). The column will allocate more space, but your operations may be more efficient. Also, try to avoid using variable-length columns (varchar) as foreign keys. In fact, try to keep your keys as the smallest integer type you possibly can. This is both a space and a performance optimization. It's also important to think very carefully about how the database will be accessed. Perhaps certain columns need extra indexes and others need fewer. Yes, fewer. Indexes are great for speeding up read access, but they slow down insert operations. If you add too many indexes, your database inserts could bring your system to a crawl and any index defragmentation could leave you with a painfully enormous transaction log or a non-functioning SQL Server.

This is exactly what happened to a company I worked with in 2005. Every Saturday night for several weeks in a row, the IT team would get an automated page from their monitoring service telling them that all their e-commerce web sites were down. Having received the phone call about 2AM, I looked into a few things and noticed that the transaction log had reached over 80GB for the 60GB database. Being a scientist who refuses to fall into the post hoc ergo propter hoc fallacy, I needed measurements and evidence. The first thing I did was write a stored procedure that would do some basic analysis on the transaction log by pulling data from the fn_dblog( ) function, doing a simple cube, and saving the results into a table for later review. Then I told them that the next time the problem occurred they were to run the stored procedure and call me the next Monday (a polite way of telling them that I'm sleeping at 2AM on Saturdays). Exactly one week later the same thing happened and the IT department ran the stored procedure as instructed (and, yes, waited until Monday to call me, for which I am grateful). Looking over the stored analysis data, I noticed that there were a tremendous number of operations on various table indexes. That gave me the evidence I needed to look more closely at the indexes of each of the 5,000+ tables (yes, that's four digits-- now you know why I needed more information). After looking at the indexes, I realized that the database was implemented by someone who didn't really understand the purpose of indexing and who probably had an itchy trigger finger on the Index Tuning Wizard. There were anywhere from 6 to 24 indexes on each table. This explained everything. When the weekly (Saturday at 2AM) SQL Server maintenance plan would run, each of the indexes was defragmented to clean up the work done by the high volume of sales that occurred during the week. This, therefore, caused a very large number of index optimizations to occur. Each index defragmentation operation would be documented in the transaction log, filling the transaction log's 80GB hard drive, thereby functionally disabling SQL Server.

In your index design, be sure to also optimize your index fill factors (see the example below). Too full and you will cause page splits and bring your system to a crawl. Too empty and you're wasting space. Do not let a DBA do this. Every bit of optimization requires a person deeply knowledgeable about the system to implement a complete database design. After the specifications have been written, then the DBA can get involved so that he or she can run routine maintenance. It is for this reason that DBAs exist. For more information on the internals of SQL Server, see the book Inside SQL Server 2005: The Storage Engine by Kalen Delaney (see also Inside SQL Server 2000). This is a book which should be close to everyone who works with SQL Server at all times. Buy it. Read it. Internalize it.
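
For what it's worth, the fill factor advice above boils down to statements like this one (SQL Server 2005 syntax; the index and table names are hypothetical):

-- Rebuild an index, leaving 20% free space per page to absorb
-- inserts without causing page splits.
ALTER INDEX IX_Order_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);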

There's still more to database modeling. You want to also be sure to optimize for usability. Different O/R mapping solutions will have different specific guidelines, but some of the guidelines are rather general. One such guideline is fairly well known: use singular table names. It's so incredibly annoying to see code like "Entries entry = new Entries( );" The grammar just doesn't agree. Furthermore, LINQ automatically inflects certain table names. For example, a table called "BlogEntry" will be related to the LINQ entity "BlogEntry" as well as "BlogEntries" in the LINQ data context. Also, be sure to keep in mind that your O/R mapper may have special properties that you'll want to work around. For example, if your O/R mapper creates an entity for every table and in each created entity there is a special "Code" property for internal O/R mapper use, then you want to make sure to avoid having any columns named "Code". O/R mappers will often work around this for you, but "p.Code_" can get really confusing. You should also consider using Latin-style database naming (where you prefix each column with its table name-- so named because Latin words are typically inflected with their sentence meaning, thereby allowing isolated subject/object identification), which is not only a world of help in straight SQL joins, but just think about how Intellisense works: alphabetically. If you prefix all your columns with the table name, then when you hit Ctrl-J you'll have all your mapped properties grouped together. Otherwise, you'll see the "Id" property and it could be 10-15 internal O/R mapper properties before you find the next actual entity column. Doing this prefixing also usually alleviates conflicts with existing O/R mapper internal properties. This isn't quite as important for LINQ, but not having table-name prefixed columns in LLBLGen Pro can lead to some headaches.

Clearly, there's more to think about for database design than the entity relation diagramming we learn in Databases 101. It should also be clear that your design will become more optimized the more specific it becomes. A data model designed for any database server probably won't be as efficient as a well-thought-out design for your specific database server and for your specific solution requirements. Just keep in mind that your database model is your application's view of reality and is often the heart of the system, and it therefore should be optimized for space, speed, and usability. If you don't do this, you could be wasting many gigabytes of space (and therefore also hundreds of dollars of needless backups), have a painfully inefficient system, and have a hard time figuring out how to access the data in your application.

Accelerated Language Learning (Timothy Ferris)

Many years ago I wrote a paper on accelerated learning and experience induction.  This paper explains how I induce weeks of experience in days, months of experience in weeks, and years of experience in months, and how to dramatically learn new technologies with little to no investment.  I know people who have worked in a field for 4 years, but only have 6 months' worth of skill (usually VB developers-- seriously).  I also know people who have worked for 6 months, but have over 4 years of skill (usually Linux geeks; paradoxically, VB developers are usually quicker to learn .NET basics than PHP developers, though they usually switch places in more advanced studies).  How can anyone expect to gain skill by doing the exact same job for 4 years (e.g. building database-driven interfaces, cleaning data, writing reports)?  Obviously, calendar-years of experience are not directly related to skill-years of experience.  As it turns out, my learning techniques are not uncommon.

Today, author Timothy Ferris (Four Hour Work Week) posted a blog entry about how he learns languages in an incredibly short timeframe.  His post was fascinating to me for many reasons, one of them being that his first step is as follows: "Before you invest (or waste) hundreds and thousands of hours on a language, you should deconstruct it."  This is the same first step in my accelerated learning method.  Apparently I was on to something!  In his deconstruction method, he asks a few key questions and does some component and paradigm comparisons to give you some idea of the language scope and of its difficulty.  Based on what you learn from the deconstruction, you should have a good idea of what the language entails.

In my learning system, I refer to this deconstruction as "learning the shell", which is followed by "learning the foundations", then "learning the specifics" -- Shell, Foundations, Specifics -- my SFS (pronounced "sifs") method.  The method exploits Pareto's Law, allowing you to learn 20% of the technology at first to give you 80% of the return.  That's MUCH more than what most so-called "experts" have anyhow!  As it turns out, Timothy Ferris uses Pareto's Law in his language learning as well.  You can hear about this in his interview with my other role model, Scott Hanselman.

For more information on Timothy Ferris' other life-optimization work, check out his book The Four Hour Work Week and his blog.

Related Links

ESV Bible Web Service Client for .NET 3.5

A while back, the guys over at the ESV Bible web site announced their new REST-based interface to replace their old SOAP interface.  This new interface provides the same functionality as the old, but allows for 5,000 queries per day instead of 500 previously and is based on REST architectural principles.  Because the service is fundamentally chatty, it made sense to switch to REST.  In the context of a Bible web service, it's hard to justify a 200-byte XML message when your actual request is 6 bytes ("John 1").  Also, because the method call is in the URI, the entire call is simplified all the more.

For those of you who are completely unfamiliar with REST interfaces, all you really need to know is that it's a resource (or noun) based architecture.  That is to say instead of calling, for example, a "GetItem" method, you simply access an "item" entity.  You access what the thing is, not what the thing does; kind of a web-based reversal of encapsulation.  In other words, instead of giving a server a command (a verb), you are accessing the resource directly (a noun).  There's obviously more to REST than this and you can get more information from this nice article titled "Building Web Services the REST Way".

RESTful architecture really is a nice way of telling a system what you want, not how to get it.  This is really the point of framework design and abstraction in general.  In light of this it's obvious that, as awesome as REST is, it's not how .NET developers want to think when working on a project.  When I'm working with something I want to focus on the object at hand, not on the URLs and parameters.  For this reason, I built a .NET 3.5 framework that allows easy and efficient access to the new ESV Bible REST web service.  Here are some samples of how to use it:

Here's a simple passage query returning HTML data:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.PassageQuery("Galatians 3:11");

With the flip of a switch you can turn it into plain text:

ESVBibleServiceV2 service = new ESVBibleServiceV2(OutputFormat.PlainText);
String output = service.PassageQuery("Galatians 3:11");

For more flexibility, you may use the provided parameter objects.  Using these in C# 3.0 is seamless thanks to object initializers:

PassageQueryParameters pqp = new PassageQueryParameters( ) { Passage = "John 14:6" };
ESVBibleServiceV2 service = new ESVBibleServiceV2(new PlainTextSettings( )
{
    LineLength = 100,
    Timeout = 30
});
String output = service.PassageQuery(pqp);

Here is a simple sample of accessing the verse of the day (in HTML without the audio link -- optional settings):

ESVBibleServiceV2 service = new ESVBibleServiceV2(new HtmlOutputSettings( )
{
    IncludeAudioLink = false
});
String output = service.DailyVerse( );

You can also access various reading plans via the provided .NET enumeration:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.ReadingPlanQuery(new ReadingPlanQueryParameters( )
{
    ReadingPlan = ReadingPlan.EveryDayInTheWord
});

Searching is also streamlined:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
String output = service.Query("Justified");

Here is a lengthy example showing how you can use the QueryInfoAsObject method to get information about a query as a strongly-typed object:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
QueryInfoData result = service.QueryInfoAsObject("Samuel");

Console.WriteLine(result.QueryType);
Console.WriteLine("----------------------");
if (result.QueryType == QueryType.Passage) {
    Console.WriteLine("Passage: " + result.Readable);
    Console.WriteLine("Complete Chapter?: " + result.IsCompleteChapter);
    if (result.AlternateQueryType != QueryType.None) {
        Console.WriteLine(String.Format("Alternate: {0}, {1}", result.AlternateQueryType, result.AlternateResultCount));
    }
}

if (result.HasWarnings) {
    foreach (Warning w in result.Warnings) {
        Console.WriteLine(String.Format("{0}: {1}", w.Code, w.Readable));
    }
}

Here is the output:

QueryInfoAsObject Example Output

For more advanced users, the Crossway XML format is also available:

ESVBibleServiceV2 service = new ESVBibleServiceV2(new CrosswayXmlVersion10Settings( )
{
    IncludeWordIds = true,
    IncludeXmlDeclaration = true
});
String output = service.PassageQuery(new PassageQueryParameters( )
{
    Passage = "Galatians 3"
});
Console.WriteLine(output);

That same XML data is also retrievable as an XmlDocument for pure XML interaction:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );
XmlDocument output = service.PassageQueryAsXmlDocument("Galatians 3");

For more flexible XML interaction, you may use XPath:

ESVBibleServiceV2 service = new ESVBibleServiceV2( );

String output = service.PassageQueryValueViaXPath(new PassageQueryParameters( )
{
    Passage = "Gal 3:4-5",
    XPath = "//crossway-bible/passage/surrounding-chapters/current"
});

Sometimes, however, you will want more than one result from XPath:

String[] output = service.PassageQueryValueViaXPathMulti(new PassageQueryParameters( )
{
    Passage = "Gal 3:4-5",
    XPathSet = new[]
    {
        "//crossway-bible/passage/surrounding-chapters/previous",
        "//crossway-bible/passage/surrounding-chapters/next"                
    }
});

Here's what the result looks like in the debugger:

XPathSet Example Output

I've also copied the documentation for functions and parameters into the .NET XML comments, so you can quickly and easily see what a certain function or parameter does and its default:

ESVBibleServiceXmlComment

The new API uses your existing ESV Bible Web Service access key.  To use this key with this framework you simply add an element called ESVBibleServiceKey to the appSettings in your configuration file (a sample is provided with the framework).  You may also set the key in any one of the parameter objects (i.e. PassageQueryParameters, QueryParameters, etc...), which overrides the key in the configuration file.  Per the API, you can use TEST for testing and IP for general purpose queries.
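
Assuming the standard appSettings add/key/value form (the element name comes from the paragraph above; TEST is the testing key the API provides), the entry would look something like this:

<configuration>
  <appSettings>
    <add key="ESVBibleServiceKey" value="TEST" />
  </appSettings>
</configuration>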

Lastly, I would like to mention that this framework minimizes traffic by only sending options that deviate from defaults. So, for example, if you set IncludeWordIds to false and IncludeXmlDeclaration to true, only the IncludeXmlDeclaration value will be sent over the wire since IncludeWordIds is false by default.

You can access this ESV Bible Web Service 2.0 framework on CodePlex at the address in the links section.  Enjoy!

Links

.NET Framework 3.5 Released

If you don't already know, .NET 3.5 is finally out and with it came VS 2008.  I've been using it full time for many months now and there are some features I've come to love and others I find completely worthless.  Here is a quick breakdown of what I find cool (be sure to check out the links section to see more resources):

Notice I didn't mention anything about ASP.NET AJAX becoming native (or should I say naive?). This is an incredibly poorly designed technology bordering on the quality of Internet Explorer (ok ok, not even a leaky nuclear core is quite that bad).  The JavaScript Intellisense is a complete joke and only gets in the way, the Sys namespaces pollute the Firebug watch window so you can never see your objects, and the syntax is painfully non-intuitive.  The only nice feature it does have is the ability to access ASMX services from JavaScript.  Having said that, the year is almost 2008.  It's not 2002, and therefore we use WCF, not ASMX.  In WCF 3.5 we can very easily create flexible and powerful REST-based JSON services (adding straight XML support, if needed, with a single endpoint configuration element).  There's just no need to have SOAP turn your 6-byte request into a 300-byte message.  It adds up.  So, ASP.NET AJAX ("Atlas") is completely obsolete in my book. If you want to do real AJAX, then learn the fundamentals, whip out prototype/script.aculo.us, and use WCF 3.5 for your service interaction.
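
To give you an idea of what that looks like, here's a minimal sketch of a WCF 3.5 REST/JSON service contract (the service and operation names are made up, and the endpoint still needs webHttpBinding with the webHttp behavior in configuration):

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IBlogService
{
    // GET entries/{id} returns JSON; no SOAP envelope anywhere in sight.
    [OperationContract]
    [WebGet(UriTemplate = "entries/{id}", ResponseFormat = WebMessageFormat.Json)]
    String GetEntry(String id);
}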

Now, if you're looking for an awesome resource for learning/mastering .NET 3.5 and C# 3.0, I highly recommend the book Accelerated C# 2008 by Trey Nash. It gets right to the point and doesn't mess around with entry-level nonsense. You get the knowledge you need right away and from it I estimate an experience induction of at least 7 months.

For full .NET Framework 3.5 examples, check out my Minima .NET 3.5 Blog Engine (on which this site runs) and my ESV Bible Web Service 2.0 Framework for .NET 3.5.

Links

Firefox 3.0 Beta 1 Features

A few days ago Firefox 3.0 Beta 1 was released.  This is a major revision packed with some seriously awesome features.  Here's a rundown of some of the major features for normal users, power users, and developers (this is not an exhaustive list, but it covers a lot of ground-- also note that I've only tested Firefox 3.0 Beta 1 on Windows):

SQLite - SQLite databases are now used to store cookies, download history, a new concept called "Places", and other things.  Because this information is stored in a series of databases (*.sqlite files), we can use SQLite front-ends to do SQL CRUD queries against it.  Even if we don't use the SQLite databases directly, developers from all over the world will be able to create very powerful extensions to these features using the SQLite databases.  There's already at least one SQLite manager built as a Firefox extension.  Firefox has actually been using SQLite for a while, but it's only really been used for XUL and extension development.  If you are unfamiliar with SQLite, you should seriously check into it-- it's really awesome.  It's also the storage system for Google Gears.
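
For example, pointing any SQLite front-end at your Firefox profile lets you run queries like this one (the table and column names here come from cookies.sqlite as of this beta, so treat them as an assumption and verify against your own profile):

-- List every stored cookie by host.
SELECT host, name, value FROM moz_cookies ORDER BY host;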

Places - As I just mentioned, the new concept of "Places" is also stored in the database.  This feature tracks web surfing trends similarly to how various media players track music listening trends.  So, after a bit of surfing you'll be able to see what pages you visit most often.  Places also shows what bookmarks you have recently tagged, your most recently used tags, and a few other things.  Even if we don't use this feature in Firefox as is I'm sure more extensions will be built to help make Places more useful.  I can already visualize an extension to mash Places metadata with your Windows Media most-popular metadata to give you a view of all your favorite things in one place.

Tags and Easier Bookmarking - Firefox 3.0 also introduces del.icio.us-like tags to bookmarks.  This isn't that big of a deal to me, because with Firefox 2.0 you could install the del.icio.us bookmark extension to replace your static Firefox bookmarks and let del.icio.us manage all your bookmarks.  It was so integrated that CTRL-D even sent your bookmark to del.icio.us.  The exciting part of Firefox 3.0 tagging is that the next del.icio.us extension will probably be faster and even easier to use, since Firefox now has built-in mechanisms for this.  Using the Firefox 3.0 tags feature by itself is nice too, though.

Coupled with this feature is the ability to simply click on a star to have your page sent to "Places" (it's actually very similar to the star in Gmail).  Another click of the star gives you a nice out-of-the-way box to set tags on the link.  It's actually very similar to what the del.icio.us extension did in Firefox 2.0, thus making me think even more that there will soon be an awesome del.icio.us extension for Firefox 3.0.

ACID2 Test Passed - It's official: Internet Explorer is the only major web browser that doesn't pass the ACID2 test (and it doesn't get near it).  Firefox has always been close (yes, since v1.0 its rendering has had the shape of a face), but it has finally crossed the finish line.  Internet Explorer's rendering, on the other hand, still looks like someone slaughtered a pig.  If you don't know what the ACID2 test is, it's THE test of a web browser's CSS support.  The closer the rendering is to the reference image, the more standards-compliant the browser.  As you will see in a moment, Internet Explorer is SO far off that it's not even CLOSE to being a 7th generation web browser (...and I do not apologize for bashing IE-- there's always time for that.)

Here are the renderings of Firefox 3.0b1, Opera 9.24, Safari 3.0.4, and Internet Explorer 7 (and 6) for the ACID2 test:

Firefox 3.0 Beta 1

Opera 9.24

Safari 3.0.4 (Windows)

Internet Explorer 7 (this is a scaled version-- click for full)

Internet Explorer 6 (also scaled-- click for full).

Sheesh... notice any similarities? If you think IE7 is a major improvement over IE6, think again. It's just the 6th generation IE6.5 in a 7th generation skin (i.e. it has tabs and integrated search).  Adding XMLHttpRequest doesn't make it a 7th generation browser (XMLHttpRequest was NOT in IE before IE7-- before IE7, the IE world had only ActiveX controls and Java proxies for remote scripting.  These are the opposite of standardized components.)  Try adding window.addEventListener, removing that horrendous ClearType, and getting somewhere near the shadow of the ballpark of the ACID2 test and we'll talk.

JavaScript 1.8 - Some people know it and take it for granted, yet others don't realize it and are offended by it: Firefox has the most powerful JavaScript of any web browser, period.  Most of us know that Internet Explorer's CSS support is just about nonexistent, but most people don't know that Opera is analogously weak in the area of JavaScript.  Safari is a close second to Firefox.  Firefox is the only web browser that continually and consciously ships a constant flow of documented JavaScript features.  Internet Explorer is actually pretty good in this area (I know-- it's shocking) and Opera is continually getting better and better, but Firefox is head and shoulders above everyone else (and none of this even mentions how advanced Firefox's DOM implementation is-- Firefox even has native base64 conversion functions!).

Firefox 1.5 had JavaScript 1.6, which included iterative methods (e.g. forEach), like in C# 2.0, and E4X.  Firefox 2.0 had JavaScript 1.7, which provided a functional programming feel similar to LINQ's functional nature.  Firefox 3.0 now has JavaScript 1.8, which takes JavaScript functional programming to the next level by including lambda expressions.  If you love C# 3.0, you will love JavaScript 1.8.  Firefox 3.0 may or may not also get client-side JSON serialization.  If it does, it should fit nicely with the WCF 3.5 JSON feature.  By now, anyone who still sees Firefox as anti-Microsoft technology needs to repent.
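
To make the C# 3.0 comparison concrete, JavaScript 1.8's expression closures look a lot like lambdas (keep in mind this is Mozilla-specific syntax, so it only runs in Firefox 3):

// JavaScript 1.8 expression closure: no braces, no 'return'.
var square = function(x) x * x;

// Works nicely with the iterative methods from JavaScript 1.6.
var squares = [1, 2, 3].map(function(x) x * x); // [1, 4, 9]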

There are also new DOM features, like two new drag events and support for Internet Explorer's clientTop and clientLeft attributes.  Firefox 3.0 also has a scriptable idle service allowing you to check how long a user has been idle.  I wish I had that 8 years ago when I created a web-based screen saver for a kiosk.  Another thing I wish I had years ago is Firefox 3's new getElementsByClassName function.  Since it's native (C++), it's MUCH faster than any artificial JavaScript implementation (see John Resig's benchmarks.)

For more information on Firefox's powerful development capabilities, check out the MDC (Mozilla Development Center-- the Firefox equivalent of MSDN).  There you will find detailed references for the DOM, JavaScript, AJAX, XSLT, CSS, SOAP, XML-RPC, SVG, Canvas (which was Silverlight before Silverlight and is native to Firefox, Safari, and Opera-- notice which browser is missing?), XUL, and a whole host of other technologies you probably never knew existed or never knew were native to Firefox.  If you do ANY client-side web development, you need to check out these references and keep them close by.  The samples alone will save you hours of wasted debugging.

Lower Memory Utilization - Now, to be clear, I'm not one of those far too uptight people who cry every time SQL Server uses multiple GBs of memory.  On the contrary, I'm usually ecstatic to see that I'm actually using the memory I paid so much money for.  I'm not too uptight about Firefox using a lot of memory either, as I know it's caching everything it sees.  Since I use Firefox more than anything else, I have no problem with it using more memory than anything else-- and that includes Photoshop.  However, Firefox 3.0 uses a lot less memory.  You could do simple configuration tweaks in Firefox 2.0 to make it use a lot less memory and even release memory when you minimize, all without any extensions, but Firefox 3.0 cleans up memory as you go.  As I was watching the memory charts of Firefox, I was shocked to see it return 30MB of memory upon closing a tab.  Now it's going to be Safari that's the target of memory usage paranoia.

Webmail Handlers - This isn't a feature I've seen in action yet, but I'm really hoping it comes to Gmail soon.  I'll just quote the release notes: "...web applications, such as your favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox)."  If Gmail does that registration, I'll finally be able to replace Google Chat as my mailto handler.

Offline Applications - This needs to be explicitly utilized by the developers of each particular web application, but now Firefox theoretically doesn't need Google Gears in order to use online applications locally.  Firefox 2.0 already had one interesting offline feature in the form of HTML 5's sessionStorage attribute.  That feature is conceptually similar to ASP.NET's ViewState in that it persists across page refreshes, but not across pages.  Firefox 3 includes two new events for offline functionality: "online" and "offline".  When an application goes offline, the offline event is raised, and similarly with the online event.  I've checked out these events and they are rock-solid.  There are also other offline application features in Firefox 3.0, but they aren't that well documented yet.  You can see an example of the concept of offline applications by using Google Reader and Google Gears.  I expect this feature to be available in Gmail soon, hopefully without ever needing a plugin.
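
Hooking those events looks roughly like this (a sketch; in this beta the events fire as the browser's connectivity state changes):

// React to Firefox 3's connectivity events.
window.addEventListener('offline', function(evt) {
    alert('Connection lost; queuing changes locally.');
}, false);

window.addEventListener('online', function(evt) {
    alert('Back online; synchronizing.');
}, false);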

One Click Website Info - When you click on the website icon in the address bar, you get a box of information telling you a little about the website.  Really, what we're talking about here are SSL websites.  You can click the icon to get a quick view of the SSL information.  I personally just like the idea of not having to double-click.  I know, I'm picky.  It's the little things in life that make the difference, right?

Native viewing of ZIP files - This feature is not that well documented from what I've seen, but it's really awesome!  It allows you to view ZIP and JAR files directly in Firefox 3.0 by using the following pattern: jar:http://www.mywebsite.com/MyZipFile.zip!/.   Thus jar:http://www.davidbetz.net/dotnetcourse/CSharpLanguage.zip!/ (copy, don't click) views the contents of one of my course samples.  You know you're intrigued now.

There are also many new security features, like the forged website blocker, which stops you (or your relatives) from going to verified phishing web sites, and malware protection, which does the same for malware web sites.  There are also user experience enhancements.  Now when you type in the address bar you are filtering by page title and URL, compared to just filtering by URL previously.  Also, zooming now attempts to zoom images and text, not just text, though I'm not finding that to be all that successful; Safari on the iPhone/iPod touch still owns that one.  Other development features include support for animated PNGs (APNG), the ability to use a web service as a content handler, support for rgba and hsla colors, and... ready for this?  Cross-site XMLHttpRequest!  That's right, we will finally be able to do cross-domain AJAX without script block hacks!  Other normal user/power user features include a permanent restart button (in Tools->Add-ons), a much better application content-type screen, and a really, really nice page info window which includes a cookie viewer and the supposed ability to enable and disable images, popup windows, cookies, and extension and theme installations per web site.
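
Assuming the remote server opts in via the new access-control response headers, the call itself looks like any other XMLHttpRequest (the URL here is made up):

var xhr = new XMLHttpRequest( );
xhr.open('GET', 'http://api.example.com/data', true);  // a different domain-- no script block hack required
xhr.onreadystatechange = function( ) {
    if(xhr.readyState == 4 && xhr.status == 200) {
        // use xhr.responseText
    }
};
xhr.send(null);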

On the negative side, the new download window is absolutely horrible.  Firefox's download manager and download options actually get worse with each major Firefox release.  The download setup is finally as bad as Safari's.  Firefox 1.0 had absolutely the best download setup I've ever seen.  You could go to the options screen and, with the click of a button, a My Downloads folder was created and downloads would start going there.  That actually made sense!  In Firefox 1.5, they got rid of that awesome selling point, forcing you to make the folder yourself or suffer having all your downloads thrown all over your desktop.  Lame.  At least in Firefox 1.5 you could click the button next to "All files downloaded to:" and have access to your downloads in a folder view of your desktop.  In Firefox 3.0 you can't even do that!  I'm never getting to my downloads again!  Well, not never, because the Firefox developers are surely smart enough to fix that, and even if they aren't, Firefox has an awesome extension system that allows anyone to make a quick fix using XML, JavaScript, and CSS.  Furthermore, the download manager API has been updated so extension developers can do much more.  It's also been moved from RDF to SQLite, thus allowing even more extensibility.

With all these additions, it's not hard to see that Firefox 3.0 is a major upgrade over previous versions, pushing the Firefox dynasty even further in the face of its competition (that is, Opera and Safari-- IE isn't in the ballpark).  Some would criticize this statement, though, and possibly even say that I have double standards.  They would say that when Firefox gets a feature I proclaim it as awesome and slam other browsers for not having it, but when those other browsers get a feature that Firefox lacks, I ignore it.  To be sure, when other browsers get a feature that it lacks, I very much criticize Firefox for it.  Their lack of perfection on the Acid2 test in Firefox 2.0 was a good example and their lousy download manager in Firefox 3.0 beta 1 is another.  I slammed them rather hard for that and submitted/voted for all kinds of other bugs in Firefox.  Furthermore, I love other browsers as well.  For example, because of its beautiful anti-aliasing and support for the CTRL-L and CTRL-K shortcuts, I use Safari about as much these days.  Even still, Firefox is leaps and bounds ahead of the rest.  The ZIP viewer means nothing, SQLite is only "cool", and Places is something I'm not too excited about, because it's the JavaScript, CSS, DOM, and extension support that actually matters.  Web browsers need to be standards compliant and have a strong development feature set to be acceptable on today's web.  Opera will probably always be flashier, but Firefox will probably always be smarter.

As I stated initially, there's more to Firefox 3.0 than what I've mentioned here.  If you want to know more about any point of Firefox 3.0, just check out the many links above or the developer notes below.  For more developer information, I highly suggest going to the Mozilla Developer Center.  For other information, just check out the release notes and its links.

Related Links

New XAG Feature: Support for C# 3.0 Automatic Properties

One of the nicest features of C# 3.0 is one of the most subtle: automatic properties.  It's really nothing more than syntactic sugar that saves us a little bit of typing, but it's been a big help in making my code more self-documenting.  If you're unfamiliar with automatic properties, here is what one looks like:

public Int32 Id { get; set; }

When that single line is compiled and viewed in Reflector, you get the following:

[CompilerGenerated]
private int <Id>k__BackingField;

public int Id
{
    [CompilerGenerated]
    get
    {
        return this.<Id>k__BackingField;
    }
    [CompilerGenerated]
    set
    {
        this.<Id>k__BackingField = value;
    }
}

The new syntax is equivalent to a classic C# property.  Note that this property has a get accessor and a set accessor.  This is the only type of automatic property you will be able to create.  You need the full { get; set; } for the automatic property to compile; { get; } or { set; } won't cut it.  If you need a property with only a get or set accessor, then you need to use a classic C# property.  However, you can use { get; private set; } for a publicly read-only property.  It will create both accessors, but only the get accessor will be public.  Also keep in mind that the Visual Studio 2008 code-snippet shortcut "prop" now creates an automatic property and "propg" creates an automatic property with a private set accessor.

Since this feature helps so greatly in the readability of the code, I have added a new feature to XAG: minimized properties.  Here is what the classic C# 2.0 syntax would look like for a simple DTO (data transfer object) using XAG:

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
    <SimpleType x:Key="ClassKey" Type="Class" AutoGenerateConstructorsByProperties="True" Namespace="ClassNamespace"  AccessModifier="Public">
        <Properties AccessModifier="Public">
            <Id Type="Int32" />
            <Name Type="String" />
            <Title Type="String" />
        </Properties>
    </SimpleType>
</Assembly>

Using XAG's express type creation, the XML compiles to the following C# code:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        private Int32 id;
        private String name;
        private String title;
        public Int32 Id {
            get { return id; }
            set { id = value; }
        }

        public String Name {
            get { return name; }
            set { name = value; }
        }

        public String Title {
            get { return title; }
            set { title = value; }
        }

        public SimpleType(Int32 id, String name, String title) {
            this.Id = id;
            this.Name = name;
            this.Title = title;
        }

        public SimpleType( ) {
        }
    }
}

That's painfully verbose when compared with automatic properties.  The new feature in XAG allows you to choose between a classic property and a minimized property (an automatic property in C# 3.0).  Below is the same XAG DTO done with minimized properties.  In this example, notice that AutoGenerateConstructorsByProperties is set to false (the default).  This is because C# 3.0 has a feature called object initializers, which allows you to set properties when you instantiate an object without needing any special constructor.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
  <SimpleType x:Key="ClassKey" Type="Class" Namespace="ClassNamespace" AccessModifier="Public">
    <Properties AccessModifier="Public" Minimized="True">
      <Id Type="Int32" />
      <Name Type="String" />
      <Title Type="String" />
    </Properties>
  </SimpleType>
</Assembly>

By simply setting Minimized to true (and optionally, AutoGenerateConstructorsByProperties to false), you get the following C# 3.0 code:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        public Int32 Id { get; set; }
        public String Name { get; set; }
        public String Title { get; set; }

        public SimpleType( ) {
        }
    }
}

You can also use this new Minimized option with the existing options Static (a Boolean) and Mode (blank, "GetOnly", or "SetOnly"), but you obviously can't use it with the Backing option.  The Backing option has a default value of true, which means that the property is backed by a private field.  There is no such thing as an automatic property with an explicit backing field; that's the entire point of an automatic property.  The following example demonstrates a few legal combinations for properties in XAG.  Notice that you can tell XAG that you want all but a few specified properties to be minimized.

<Assembly xmlns:x="http://www.jampadtechnology.com/xag/2006/11/">
    <SimpleType x:Key="ClassKey" Type="Class" Namespace="ClassNamespace"  AccessModifier="Public">
        <Properties AccessModifier="Public" Minimized="True">
            <Id Type="Int32" />
            <Name Type="String" Static="true" Mode="GetOnly" />
            <Title Type="String" Minimized="False" Backing="False" Mode="GetOnly" />
        </Properties>
    </SimpleType>
</Assembly>

This XML code compiles to the following C# 3.0 class:

using System;

namespace ClassNamespace
{
    public class SimpleType
    {
        public Int32 Id { get; set; }

        public static String Name { get; private set; }

        public String Title {
            get { throw new Exception("The method or operation is not implemented."); }
        }

        public SimpleType( ) {
        }
    }
}

In C# 3.0, you could use that code with an object initializer like this:

SimpleType st = new SimpleType( )
{
    Id = 8
};

Int32 id = st.Id; // id == 8

You can find more information about my XML Assembly Compiler at http://www.jampadtechnology.com/xag/.

Related Links

Prototype and Scriptaculous Book

Today I noticed the book "Prototype and script.aculo.us: You never knew JavaScript could do this!" and while you do not need a book to learn P&S, this book will definitely induce a good 6 months to a year of experience into your skill set.  The book is available on Amazon in print or on the book's website in PDF format.

If you only want to know the basics of P&S, then you'll be fine with looking over the Prototype documentation and script.aculo.us samples.  However, regardless of how deep you want to go, you should definitely check out the freely available source code for the book available on the book's website.

As always, let the tools do the work, but don't rely on them for everything.  It's critically important that you understand AJAX development at a deep mechanical level before you start using JavaScript or AJAX frameworks.  If you aren't well-versed in JavaScript and AJAX development, then I highly recommend AdvancED DOM Scripting: Dynamic Web Design Techniques by Jeffrey Sambells.

Related Links

10 Things Most Developers Didn't Know in 2007

To end 2007, I thought I would make a list of things I found that most developers didn't know.  To make things more interesting, this list is actually a series of 10 mini-articles that I wrote today.  Since this article has several sub-articles, here's a table of contents to help you out (these aren't really in any order of importance):

#1  SQL Server supports powerful subqueries as anonymous sets.

Many developers don't take the time to seriously look at T-SQL or SQL Server internals.  As such, they miss many of SQL Server's more powerful features.  In January 2007, when a co-worker saw me write the following query, he about fell out of his seat:

select MemberName, m.MemberId, count(*) from (select 
    distinct MemberId, 
    VisitUserAgent 
    from VisitSession 
    where MemberId is not null) a 
inner join Member m on a.MemberId = m.MemberId 
group by m.MemberId, MemberName, VisitUserAgent 
having count(*) > 1 
order by count(*) desc 

For starters, the guy didn't know you could apply a filter after a group by, but that's not my point.  He had no idea that SQL Server (2000) allows you to use subqueries as anonymous sets.  As you can see, you can select from the anonymous set as well as use it in a join.  This tidbit alone should toss many painfully slow cursor-based stored procedures into the trash.  It's a simple SQL feature, but it's a powerful one.

#2  Firefox has an operating-system style console for web application debugging.

It's incredibly hard to find an ASP.NET web developer who knows this one.  It's a feature that knocks people right off their seats.  Instead of throwing alerts all over your AJAX applications, you can use the Firefox console and the dump( ) function.  Did I mention this has been a native feature since Firefox 1.0?

Step 1 (start Firefox with the -console switch)

Step 2 (add the boolean key 'browser.dom.window.dump' to the Firefox configuration and set it to true)

Then simply call dump( ) instead of alert( ) and you're done.  Your output will go to the Firefox console window (which looks almost exactly like a cmd window).

With this technique you can entirely avoid any possibility of an infinite loop of alerts.  Personally, I like to track all the output of my web applications.  This comes in very handy when I'm using event capturing or need to watch the progressive state of my application.  When I do this, I also like to write an output identifier to each data dump.  Here's a sample of what I usually use for debugging:

var Configuration = { 
    Debug: false
}; 

var Debug = { 
    counter: 0, 
    write: function(text) { 
        if(Configuration && Configuration.Debug) { 
            dump(text); 
        } 
    }, 
    writeLine: function(text) { 
        if(Configuration && Configuration.Debug) { 
            Debug.counter++;        
            dump(Debug.counter + ':'+ text + '\n'); 
        } 
    } 
};

Here's some sample output using the Debug.writeLine( ) abstraction:
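
(Illustrative output only; with Configuration.Debug set to true, two hypothetical calls such as Debug.writeLine('request sent') and Debug.writeLine('response received') print the following to the console:)

1:request sent
2:response received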

Leaves alert( ) in the dust, doesn't it? You can actually learn more about this technique and others from my Firefox for ASP.NET Web Developer video series found on my blog.  These topics are crucial to your understanding of modern web development.

#3  JavaScript has natively handled loosely-coupled multi-cast events for years.

This isn't something just for the Firefox, Opera, and Safari world.  Even IE6 has native support for this feature.  I'm not sure why this is, but in September 2007, when I was designing the AJAX exam for Brainbench, not a single one of the reviewers knew that JavaScript natively supported loosely-coupled multi-cast events.  I actually received comments from almost all of the reviewers telling me that I should "leave server-side questions out of the exam".

JavaScript loosely-coupled multi-cast events are one of the most important core features of AJAX applications.  They allow you to quickly and efficiently attach multiple event handlers to the same XHTML element.  This becomes critically important when you are working with multiple AJAX components, each of which wants an event handler attached to the load event of the window object.

I wrote an article about this in September 2007, so I'm not going to go into any kind of detail here.  You may also opt to view this file from my SolutionTemplate, which supplements that blog entry.
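
Just for a taste, here's a sketch of two components each attaching their own handler to the same load event (the init functions are hypothetical; IE6 would use window.attachEvent('onload', ...) instead):

function initMenu( ) { /* wire up the menu component */ }
function initMediaList( ) { /* wire up the media list component */ }

// both handlers fire when the window loads; neither overwrites the other
window.addEventListener('load', initMenu, false);
window.addEventListener('load', initMediaList, false);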

#4  Not all image formats are created equal.

A few months ago, I came in as lead architect about halfway through a project.  After having a few people fired for absolute incompetence, I did find a few people (PHP guys) who were ready, willing, and actually able to learn ASP.NET.  Everything was going well until the designer came back with his new theme and my associate whom I was training implemented it.  Everyone thought the project was going fine until I stepped in the room.  It didn't take but 10 seconds for a red flag to go up.  Just looking at the web site I could tell that this theme implementation was a disaster.  I noticed that there were signs of JPEG compression all over every single one of the images.  However, being a scientist and part-engineer, I knew that measurement was a major key to success.  So, I whipped out Firebug, hit refresh, and felt my jaw drop.  The landing page was 1.5MB.  Ouch.

You absolutely cannot use one single image format for every image on your web site, especially not the deadly JPEG format, which does little more than destroy your images.  There are rules which web developers must follow or else a project is doomed to failure.  First off, you need to be using PNG24s for the most important images, while comparing their file sizes and quality against PNG8 compression.  Using Adobe Photoshop's Save For Web feature is very helpful for this.  If the image is a photo or something with many "real life" colors and shades, perhaps you want to do a size and quality comparison against a JPEG version as well.  If you absolutely need transparent images for IE6, then you need to take extreme care and either make special PNG versions for each background or, if you don't care too much about quality and the image is small with very few colors, use a GIF with transparencies.  The same goes for Firefox and printing.  Firefox (as of 2.0) does not print transparent PNG images.  So, if you want to support printing in Firefox, then you need to either make special PNG images for each background or make low-quality GIF images.

Needless to say, the designer's theme had to go under severe reconstruction.  Not just because of the image sizes, but because he felt the need to design special input box, textarea, and button controls.  His design would have worked well for a WPF application, but this is the web (...and don't even get me started on the fact that he designed for a wide-screen monitor at over 1300x800.  The design was useless anyhow!)  The next project I ran as lead architect went much smoother.  Because it was extremely AJAX intensive, everything was minimized to the absolute core.  Each page had the minimal default.css plus its own CSS sheet and only included the JavaScript it needed.  The web site landing page included barely anything and even had its own extremely stripped-down version of the JavaScript files.  For this project, I went from 350K in development to 80K in production.

#5  Custom server controls are not esoteric, complicated, or take too long to create.

This seems to be a very common misconception amongst ASP.NET developers.  The reality, however, is that creating server controls is often a very trivial task.  Yet, many developers will use a GridView or other canned control for everything.  The GridView is awesome for basic tabular data in very simple, data-driven applications, but I can rarely use it.  On the other hand, I love the Repeater and rely on it for almost everything.  Actually, it and the Literal are my two favorite controls.  I have to rely on these two controls to ensure that my AJAX applications are extremely optimized.  One of the beautiful things about .NET is that every ASP.NET control is simply a .NET class, which means that you can programmatically reuse them, inherit from them, and override their internals, allowing us to create some powerful and elegant custom server controls.

On the same project with the overly sized image files, we had an interesting meeting about how to show a media play list on a web page.  There was all kinds of talk about using Flash to create a media play list.  The conversation was quickly giving me an allergic reaction.  So, after hearing all kinds of absolutely insane quotes of time for creating a Flash play list, I decided to take matters into my own hands.  Two hours later I handed the client a complete play list from A to Z.  To be clear, I had built this on something I already had, but the grand total of time was still only about 3 hours.  It's amazing what you can do when you understand the .NET framework design guidelines and aren't afraid to follow best practices.

Here is how you would use a similar control:

<%@ Register Assembly="Jampad.Web" Namespace="Jampad.Web.Controls" TagPrefix="j" %>

<j:Media id="media01" runat="server" />

In your code behind, you would have something that looked like this:

media01.DataSource = MediaAdapter.GetContent(this.MemberGuid);

Upon loading the page, the data was bound and the output was a perfect XHTML structure that could then be customized in any number of ways using the power of CSS.  How do you make something like this happen?  It's simple; here is a similar control (Media.cs) placed in a class library (WebControls.csproj):

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

namespace Jampad.Web.Controls
{
    [ToolboxData("<{0}:Media runat=\"server\"></{0}:Media>")]
    public class Media : CompositeControl
    {
        private Repeater repeater;

        public Media( ) {
        }

        private Object dataSource;

        public Object DataSource {
            get { return dataSource; }
            set { dataSource = value; }
        }

        protected override void CreateChildControls( ) {
            HtmlGenericControl div = new HtmlGenericControl("div");
            div.Attributes.Add("class", "media-list");

            try {
                repeater = new Repeater( );
                repeater.DataSource = this.DataSource;
                repeater.ItemTemplate = new MediaTemplate(ListItemType.Item);
                repeater.HeaderTemplate = new MediaTemplate(ListItemType.Header);
                repeater.FooterTemplate = new MediaTemplate(ListItemType.Footer);
                div.Controls.Add(repeater);
                repeater.DataBind( );
            }
            catch (Exception ex) {
                Literal error = new Literal( );
                error.Text = "<span class=\"error-message\">" + ex.Message + "</span>";
                div.Controls.Add(error);
            }

            this.Controls.Add(div);
            base.CreateChildControls( );
        }
    }
}

Notice the use of the repeater control.  This is the same control we use in ASP.NET as <asp:Repeater />.  Since this is .NET, we can use it programmatically to create our own powerful controls.  Also notice the various templates that are being set on the Repeater.  These are the same templates you would set declaratively in an ASPX page.  In this case, I'm programmatically assigning to these templates an instance of MediaTemplate (in MediaTemplate.cs).  This MediaTemplate.cs is just another file thrown in a class library, in our case the same WebControls.csproj, though since it's just a class, it could be in a different assembly and namespace altogether. Here's what the MediaTemplate.cs looks like:

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.UI.WebControls;
using System.Web.UI;

namespace Jampad.Web.Controls
{
    internal class MediaTemplate : ITemplate
    {
        private ListItemType type;

        public MediaTemplate(ListItemType type) {
            this.type = type;
        }

        public void InstantiateIn(Control container) {
            Literal lit = new Literal( );
            switch(type) {
                case ListItemType.Header:
                    break;

                case ListItemType.Item:
                    lit.DataBinding += new EventHandler(delegate(Object sender, System.EventArgs ea) {
                        Literal literal = (Literal)sender;
                        RepeaterItem item = (RepeaterItem)literal.NamingContainer;
                        literal.Text += String.Format("<div class=\"media-item\">\n");
                        literal.Text += String.Format("  <div class=\"media-item-inner\">\n");
                        literal.Text += String.Format("    <a href=\"\"><img src=\"\" alt=\"Media\" class=\"media-thumb\" /></a>\n", (String)DataBinder.Eval(item.DataItem, "mediaPath"), (String)DataBinder.Eval(item.DataItem, "thumbPath"));
                        literal.Text += String.Format("  </div>\n");
                        literal.Text += String.Format("  <div class=\"media-item-bottom\"></div>\n");
                        literal.Text += String.Format("</div>\n\n");
                    });
                    break;

                case ListItemType.AlternatingItem:
                    break;

                case ListItemType.Footer:
                    break;
            }
            container.Controls.Add(lit);
        }
    }
}


Simply compile those together and you're set.  You can even embed (hopefully tiny) images in your project to make things even more seamless.  Using this simple pattern, I've created all kinds of things.  You can see a real example of this, including image embedding, in my SQL Feed Framework (formerly known as Data Feed Framework).  Its InfoBlock controls follow this same pattern.  For much better examples, whip out Reflector and start digging around the System.Web namespaces.

It's actually rather astonishing to learn of some of the attitudes some developers have about custom controls.  When I was one of the editors for an ASP.NET 2.0 exam last year, I noticed one of the questions asked which type of control was "harder" to create.  The answers were something like "User Control", "Custom Control", and a few others.  They were looking for the answer "Custom Control".  Since "harder" is not only a relative term, but also a subjective and abstract one, the question had no actual meaning.  Custom controls aren't "harder" than user controls.

#6  Most developers I worked with in 2007 had never heard of an O/R mapper.

Why do most developers still absolutely insist on wasting their time writing a chain of SqlConnection, SqlCommand, and SqlDataAdapter?  Perhaps it's just an addiction to being busy instead of actually being productive that causes this.  I don't know.  I would, however, expect these developers to have some curiosity as to whether there may be an easier way.  ADO.NET is awesome stuff and it is the foundation for all .NET O/R mappers, but if I'm not throwing around 1,000,000 records at a time with SqlBulkCopy, I'm not interested in working with ADO.NET directly.  We need to have a system that allows us to get what we want instead of forcing us to screw about with low-level mechanics.  It's no secret that I'm a huge supporter of Frans Bouma's work with LLBLGen Pro and I also use LINQ in most of my .NET 3.5 applications.  For a corporate .NET 2.0 project, there's absolutely no excuse not to pay the $300 for LLBLGen Pro.  Managers!  Open the wallets!  It will save you money.

However, it's not always about the money.  Even if the developers knew about O/R mapping, and the company isn't in a poverty-stricken 3rd-world country, sometimes extreme pride, lack of personal integrity, and political alignment can destroy any chance of being productive.  A long time ago I worked at a company where I thought I would be productive.  Five or so weeks into the design phase of the project, we received a politically-focused project manager as big brother.  He was absolutely against the use of any modern technology and despised the idea of an O/R mapper.  He instead told us that we were to write a stored procedure for every possible piece of interaction that would happen.  He also wanted us to use Microsoft's data application block to access the stored procedures.  At one point he said that this was their O/R mapper, showing that he had no idea what an O/R mapper was.

A few days after his reign had started, I took an hour or so to write up a 12-page review document covering various aspects of LLBLGen Pro and how they would work on the project.  I thought it was a very convincing document.  In fact, one guy looked at it and was convinced that I took it from the LLBLGen web site.  The project manager, however, was beginning to be annoyed (this is not uncommon with me and old-school project managers!)  The project manager decided to call together a panel of his "best" offshore developers and put me in what basically amounted to a doctoral defense.  Prior to the meeting I sent out my "dissertation" and asked everyone to read it before they arrived at the meeting so that they would be prepared for the discussion.  When it was time for the meeting, I was told to sit at one side of a large meeting table while the project manager and his team sat at the other.  Then the disaster began.  First off, not one single person on that team had read my document.  Secondly, for the next 45 minutes they asked me basic questions that the document would have answered.  Even after they admitted that I had answered all of their concerns to their satisfaction and that LLBLGen Pro was obviously a very productive tool, they reached the conclusion that they still weren't going to use it.  It was a waste of my time and I still want those 45 minutes of my life back.

What was really interesting about my defense was the developers' code.  In the meeting, the developers showed me their [virtually unreadable, anti-.NET-framework-design-guidelines, inefficient, insecure] .NET project code and I was shocked to see how much time they wasted on writing the same stuff over and over and over again.  When they showed me their stored procedures, I about passed out.  It's a wonder how any of their systems run.  They were overridden with crazy dynamic SQL and cursors.  They even had most of the business logic in the data access tier.  The concept of N-tier architecture was not something that they understood at all.  I think that's the point where I gave up on my defense.  If a developer doesn't even understand the critical need for N-layer and N-tier architecture, there's just no way they will be able to understand the need for an O/R mapper.  It's actually one of the fastest ways to find a coder hiding amongst professionals.  Their SQL/ADO.NET code was also obviously not strongly typed.  This is one of the core points of an O/R mapper and these developers could not understand it.  They could not see the benefit of having an entity called Person in place of the string "Persno" (deliberate misspelling).

This project didn't really take off at all, but for the parts I was involved in, I used the next best thing to an O/R mapper: a strongly-typed DataSet.  Read this carefully: there is no shame in using a strongly-typed DataSet if you don't have an O/R mapper.  They are nowhere near as powerful, but they are often good enough to efficiently build your prototypes so that the presentation layer can be built.  You can replace the data access components later.

The training of developers in the use of LLBLGen Pro and LINQ O/R mapping was one of the main reasons I publicly released both my Minima Blog Engine and my Minima 3.5 Blog Engine source code in 2007.  You are free to use these examples in your own training as you see fit.

For more information and for some examples of using an O/R mapper, please see some of my resources below:

#7  You don't need to use SOAP for everything.

This is one of the reasons I wrote my XmlHttp Service Interop series in March and May 2007.  Sometimes straight up HTTP calls are good enough.  They are quick, simple, and light-weight.  If you want more structure, you can simply use XML serialization to customize the smallest possible data format you can think of.  No SOAP envelope required.

Here are the parts to my series:

Also keep in mind that you don't need to keep JSON to JavaScript.  It's a beautiful format that could easily be an amazing structured replacement for flat CSV files.  RESTful interfaces using GET or POST with HTTP headers are also a great way to communicate using very little bandwidth.  My AJAX applications rely heavily on these techniques, but I've used them for some behind-the-scenes work as well.
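
For example, here's what a call against a hypothetical RESTful interface returning JSON might look like (the endpoint and the Name field are made up):

var xhr = new XMLHttpRequest( );
xhr.open('GET', '/services/member/8', true);  // a plain HTTP GET-- no SOAP envelope
xhr.onreadystatechange = function( ) {
    if(xhr.readyState == 4 && xhr.status == 200) {
        var member = eval('(' + xhr.responseText + ')');  // JSON; use a real parser where one is available
        // member.Name is now ready to use
    }
};
xhr.send(null);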

One great example of how you can use RESTful services is by looking at the interface of the ESV Bible Web Service V2. In November 2007, I wrote a .NET 3.5-based framework to abstract the REST calls from the developer. By looking at my freely available source code, you can see how I'm interacting with the very light-weight REST service.

#8  A poor implementation of even the most beautiful database model can lead to a disaster.

For more information on this topic, see my October 2007 post entitled "SQL Server Database Model Optimization for Developers". Here is an abstract:

It's my assessment that most developers have no idea how much a poor database model implementation, or an implementation by a DBA unfamiliar with the data semantics, can affect a system.  Furthermore, most developers with whom I have worked don't really understand the internals of SQL Server well enough to be able to make informed decisions for their project.  Suggestions concerning the internals of SQL Server are often met with extreme reluctance from developers.

#9  Most web developers have no idea how to build a proper XHTML structure.

XHTML is not HTML and shouldn't be treated like it is.  While HTML is a presentation format, XHTML is a structure format.  You can use HTML for visual formatting, but XHTML simply defines a structure.  In August 2007, I wrote an article entitled "Coders and Professional Programmers" in which I discussed some of the differences between coders, who have no clue what's going on but who overwhelm the technology world, and the rare programming professionals.  I didn't go into too many specifics in that article, but one of the things I had in mind was the severe lack of XHTML knowledge that coders have.  What's profound is that XHTML is probably the single most basic web development topic in existence, yet people just have no idea how to use it properly.

When you come to a web project and you have defined your user experience, you need to materialize that definition.  You do not go about this by dragging and dropping a bunch of visual elements on a screen and nesting 4 tables.  Well, if you do, then you're probably a coder, not a professional.  Building your interface structure is actually rather similar to building a database model in that you need to define your entities and semantic meaning.  So, when you look at the top of your landing page, you need to avoid thinking "this is a 4em piece of italic black text" and simply say that this is a heading.  What type of heading?  If it's the most important heading, then it would probably internally translate to an h1 element.  In the same way, if you have text on your screen, you should avoid doing this:

Lorem ipsum dolor sit amet.<br/>

Mauris nonummy, risus in fermentum.<br/>

By doing this, you are completely destroying any possibility of text formatting.  You have also fallen into the world of telling a system how to do its job.  Just think about how we work with XML.  Do you go in and tell the system how to parse the information or how to scan for a certain element?  No, the entire point of abstraction is that we can get closer and closer to telling the system what we want instead of telling it how to do its job.  In XML, we simply state an XPath and we're done.  With XHTML, we don't want to say "put text here, break, put more text here, and break again".  You could think of the above HTML code as a "procedural structure".  What if we used a more object-oriented model?  In an object-oriented model, we focus on the semantics of the entities, and this is exactly how we are to design in XHTML.  A proper way to declare our text would be like this:

<p>Lorem ipsum dolor sit amet.</p>

<p>Mauris nonummy, risus in fermentum.</p>

Now, instead of telling the system how to do its job, we state that we want two paragraphs.  Done.  By focusing on the semantic representation we are closer to focusing on the purpose of the application and letting the system do whatever it does best.  Furthermore, since XHTML is a structural format and not a presentation format, we have another technology for presentation, namely, CSS.  With our new XHTML structure we can simply attach a default.css document to every page on the web site and, in a centralized manner, state the following to format every single document on our web site in an instant.  You couldn't beat the power of that with quantum parallelism (well... maybe).

p {
    font-family: Georgia, 'Times New Roman', serif;
    font-size: 90%;
    line-height: 1.1em;
}

Every single item in XHTML has some purpose and a set of guidelines attached to it to allow you to choose the correct element to match your semantic meaning.  This is a long and fancy way of saying: don't use divs for everything!  A div is a containment unit, not something to hold every single thing in all the world just so that you can use more CSS in a misguided attempt to look modern.  Use divs for containment, not for giving IDs to text.  So, what should you use?  Whatever closely matches your needs.  For example, when you are adding a single horizontal menu or tab list to your user experience, you should avoid using a bloated HTML table, which will force longer load times and basically kill your chances for mobile support.  You should also avoid throwing together a list of divs in a parent div, which provides no semantic meaning at all.  This would be like declaring all your .NET objects as Object and using reflection every time you wanted to access anything.  Rather, you want to ask yourself "to what data structure does this object most closely map?"  In this case, it's basically a simple list or a collection.  What XHTML element most closely brings out this item's meaning?  Depending on whether the list is ordered or unordered, your XHTML element will be either an <ol/> or a <ul/>.

What if you wanted to show a column of images where the image metadata (i.e. title, date, description) was to the right of each image?  Do we use a bloated table?  No; your load time will go through the roof, your DOM interaction will become prohibitively complex, and your mobile support will be shot.  Do we use a series of divs with CSS floating?  No, again; this in no way reflects any semantic relation of the entity.  To what data structure does this most closely map?  Think about it.  It's a series of "things" where each thing has two sub-components (image and data).  This is a dictionary.  In XHTML, the closest element you have to a dictionary is a <dl/>.  The dl contains an alternating series of data terms (<dt/>) and data definitions (<dd/>) to allow you to present your data in a way that makes sense.  You don't have a full semantic representation as you would with an "imagelist" element, but you are accurately representing the fact that this is a dictionary.
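
For example, the image list might be structured something like this (the class name and file names are placeholders):

<dl class="media-list">
  <dt><img src="thumb01.png" alt="First thumbnail" /></dt>
  <dd>Title, date, and description of the first image</dd>
  <dt><img src="thumb02.png" alt="Second thumbnail" /></dt>
  <dd>Title, date, and description of the second image</dd>
</dl>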

After you have defined your complete structure, you may then start to modify the elements using CSS.  Your headings and titles (mapped to h1 through h6) will be formatted according to their requirements, as will all your paragraphs (mapped to p).  You will also modify your ol, ul, and dl lists to match your visual requirements.  Your <ul/> or <ol/> lists will probably have something like the following:

ul {
    list-style-type: none;
    padding: 0;
}

ul li {
    display: inline;
    /* or float: left; depending on how you want to format them. */
    /* Floating would keep the list items as block elements thereby */
    /* allowing more padding and margin tweaking. */
}

Your dl may have something similar to this:

dl {
    width: 300px;
}

dl dt,
dl dd {
    float: left;
}

dl dt {
    clear: both;
}

This technique of semantically mapping entities to visual elements is neither new nor isolated to web development.  The Windows Presentation Foundation (WPF) allows a similar technique.  You can define a simple ListBox which contains the raw data for your series of elements (e.g. icon list, list of names, menu).  Then you apply a Style to enhance the user experience.  Elements in XHTML and elements in XAML don't really have their own look and feel, but are, rather, structural entities which allow you to define the element's semantic representation of reality, to which the look and feel can later be applied.

#10  CSS is not simply a technology to allow simple font size, color, and style changes.

It's a very powerful technology which allows us to efficiently create powerful web solutions.  It can help us preserve ASP.NET caching and help us avoid page recompilation.  Furthermore, a proper CSS architecture can bring us media-specific CSS to enable us to efficiently customize our web pages for print and for mobile devices.  As if that weren't enough, CSS themes can allow us to quickly deploy branded web sites.
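
For example, a tiny print-specific rule can strip screen-only chrome without touching the markup (a small sketch; the #menu selector is hypothetical):

@media print {
    /* hide navigation and other screen-only chrome when printing */
    #menu {
        display: none;
    }
}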

Unfortunately, however, CSS architecture is not something known by too many web developers, especially ASP.NET developers.  Back in May 2007, I wrote an article on my blog entitled "CSS Architecture Overview", so I won't go into any more details here.

Those were the top 10 things of 2007 of which I found developers to be the most ignorant.  It's not really an exhaustive list and doesn't cover things like the lack of understanding of how MSIL maps to reality, how JavaScript deals with variable scope, or how you may not need to waste company resources on SQL Server 2005 Standard when Express may work just fine for your requirements.  Some of these topics are closer to my core specialty than others, but each of them represents an incredibly important segment of technology which web solution architects must take into account.  Looking back over the list of articles I wrote, the open source projects I released, and the various clients and developers I worked with in 2007, this has easily been my busiest year ever.  That won't be stopping in 2008.  Hopefully an increased knowledge base and a stronger adherence to best practices will turn more and more coders into professionals in 2008 and beyond.

To learn more about some of these topics or even related ones, be sure to walk around my blog a bit and to subscribe to my RSS feed.

IE8, CSS, and Other Critical Standards

Early this morning I read that there is an internal build of IE8 that supposedly renders the Acid2 test perfectly.  When hearing about this, I'm sure there were crowds of naive developers rejoicing and singing praise.  That's great, but I'm not one of them.  The reason is simple: CSS is only one standard.

Passing the Acid2 test lends a great deal of weight to the reliability of a web browser, but as an AJAX specialist, my core technologies are the DOM and JavaScript.  This is why I could use Firefox 1.5 and 2.0 even though they were only somewhat close to passing the Acid2 test (though this didn't stop me from writing Mozilla a letter or two!)

People seem to forget that JavaScript is also a standard (by Ecma International, which also standardizes C#-- ironic?)  Furthermore, the DOM is a standard.  I can easily deal with strange stuff going on in CSS by using more images in place of text or by using IE conditional CSS (a feature the other browsers need).  It's just one of the many standards required to be a proper web browser.  Honestly, I can even deal with IE's weak implementation of JavaScript, because it handles closures, namespaces, and higher-order functions fine.
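
For those who haven't seen it, IE conditional CSS looks like this (the stylesheet name is a placeholder); other browsers simply treat the entire block as a comment:

<!--[if lte IE 7]>
    <link rel="stylesheet" type="text/css" href="ie-fixes.css" />
<![endif]-->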

The problem I hit... every... single... day... however, is IE's lack of strong DOM support.  There are just SO few things you can do with the DOM in IE!  I don't even mean the awesome stuff like being able to access a mouse selection in a safe way, but something simple and common like being able to use addEventListener instead of attachEvent.  Even the Silverlight team thought it was important to add support for that (in their first release, too!)
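
In the meantime, we're all stuck carrying around a little shim like this (a minimal sketch; attachHandler is my own made-up name):

function attachHandler(element, name, handler) {
    if(element.addEventListener) {
        element.addEventListener(name, handler, false);  // the DOM standard way
    }
    else if(element.attachEvent) {
        element.attachEvent('on' + name, handler);  // the IE way
    }
}

attachHandler(window, 'load', function( ) { /* ... */ });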

In addition to the DOM, I should also mention that this is not the end of the standards list.  Firefox, Opera, and Safari all take HTML5, Canvas (part of HTML5), and SVG for granted.  IE has absolutely no support for these standards.  I'm sure more avid standards specialists could go on and on, listing even more standards that IE lacks and that others have had for a while.  We just can't forget about these other technologies.  We only complain about CSS because it's partially supported and therefore reminds people of IE's sloppiness, prompting us to talk about it.  Since we haven't seen the others in IE, most don't consider them important, but if we had them in IE, then we wouldn't have to complain so much about the CSS support.  We would have more standards-based visual technologies to help us get to the same end.

Lastly, I would like to mention again that "standard" not only means "common, same, or basis for comparison", but it also refers to a certain level of quality.  So, even if the IE team were to pass the Acid2 test, support JavaScript 1.5+, and add support for the addEventListener function, they would have to continually and consistently prove their integrity by releasing a major update either annually or bi-annually to keep up with the technologies.  It's very important to keep changing with technology and to keep going with the flow.  IE's lack of proper web technology support has held web technology back for way too long.

I don't think most people realize how significant a technological boost Firefox 1.0 was when it first came on the scene!  It wasn't just a new browser or some neat piece of technology.  It was like someone dropped a 2008 Lexus LS into a local car lot in 1991.  It shouldn't have been that way, though.  The IE team had the most power, and with that power came the responsibility to lead the charge.  They failed.  To this day, the Firefox and Opera guys are very hard working people who are constantly putting out new updates and therefore are constantly proving that even though they aren't perfect, they are willing to stay with the times and provide regular updates.  The IE team has to prove themselves in the same way, and I'm confident that my fellow web developers and I will completely accept IE when it becomes a proper web browser.

Therefore, I'm not too excited about IE8 passing the Acid2 test; I was much more excited when Opera did.  It's awesome that they finally got that far, but the IE team has a TON of things that must be done before IE can start playing with the big boys again.  Personally, I think they should just do what Apple did with OS9 and just rewrite the entire thing from scratch.  I also think they should recreate the IE team with some of the best of the best from other portions of Microsoft.  The web browser is arguably the most used application on a PC today and it is therefore worthy of our best resources.  Microsoft could even rewrite the entire thing in .NET to prove to the world the amazing power and efficiency of .NET and feed two birds with one scone!

Opera Sues Microsoft Over Web Standards

I'm not going to go into too many details here, but I just wanted to point out that Opera has filed a complaint with the European Union against Microsoft for "...tying its browser...to the Windows operating system" and for "...hindering interoperability by not following accepted Web standards."  The article goes into all kinds of the same old anti-trust stuff, but it also mentions that Microsoft's technology "creates a de facto standard that is more costly to support, harder to maintain, and technologically inferior and that can even expose users to security risks."

This is a tremendously huge win for anyone who 1) has respect for the web, 2) doesn't like segmenting QA plans to include an entire section for IE support testing, or 3) likes to have a little self-respect left over after a pure-AJAX project.  For years I've been saying that someone needs to take the IE team up on war crimes, but this filing by Opera is definitely a step in the right direction.  Perhaps someday we web developers will have the freedom to create rich client-side applications without having to add special support for the world's most "special" browser.

It's just absolutely unacceptable that someone can infuse such a product into the world's information infrastructure and think they can get away with it.  If the WCF team had the same quality-control standards as the IE team, then SOAP would never, ever communicate with anything.  If the networking stack guys had standards that low, can you even imagine trying to communicate between a "Microsoft TCP/IP" client and an Apache server?  Microsoft is an excellent company with great products and amazing standards, but the IE team seems to be absolutely against these things.  Sometimes people seem to forget that the word "standards" isn't just a word meaning "common, same, or basis for comparison", but that it also refers to a certain level of quality.  I've said it before and I'll say it again: the IE team has no standards (I feel a bumper sticker coming on!)

Do yourself and the world a favor by downloading and supporting Mozilla Firefox, Apple Safari, or Opera.  Each of these is a proper 7th-generation web browser, unlike Intranet Explorer, whose existence is analogous to those half-dead, temporary batteries that sometimes come with your kid's toys.  They are meant to be replaced.  So, if you are getting someone a computer for Christmas, give them the gift of one of these web browsers so they don't have to drag their muddy feet all over the Internet.

Links

  • Mozilla Firefox - Great CSS support, absolutely unsurpassable JavaScript and DOM support, and the ability to write browser extensions with just JavaScript and CSS.
  • Apple Safari - Great font anti-aliasing with support for many of the same shortcut keys as Firefox.  A little quirky on the JavaScript and DOM side, but it's constantly improving.  I love Safari.  I usually keep Safari up most of the day when I'm working on Firefox-specific projects that require me to do a lot of restarting.
  • Opera - Great support for CSS and continually improving JavaScript support with an amazing set of user features.  A little awkward for people not used to it though.