
Architectural Overview: Creating Streamlined, Simplified, yet Scalable WCF Connectivity

Contents

Introduction

One of the most awesome things about WCF is that the concepts scale extremely well.  If you understand the ABCs of WCF, then you can do anything from creating a simple Hello World to a complex sales processing service.  It's all based on having an address, a binding, and a contract.  All the other concepts like behaviors, validators, and service factories are simply supplemental to the core of the system.  When you understand the basics, you have the general essence of all of WCF.

Because of this fully-scalable ABC concept, I'm able to use the same pattern for WCF architecture and development for every solution.  This is really nice because it makes it so that I don't have to waste time designing a new setup every time a new problem comes along.  In this discussion, I would like to demonstrate how you can create your own extremely efficient WCF solution based on my template.  Along the way, you will also learn a few pieces of WCF internals to help you understand WCF better.

Before I begin the explanation though, keep in mind that most concepts mentioned here are demonstrated in my Minima Blog Engine 3.1.  This is my training software demonstrating an enormous world of modern technologies.  It's regularly refactored and often fully re-architected to be in line with newer technologies.  This blog engine relies heavily on WCF as it's a service-oriented blog engine.  Regardless of how many blogs you have (or any other series of "content entries"-- for example, the documentation section of the Themelia web site is Minima), you have a single set of services that your entire organization uses.  If you understand the WCF usage in Minima, you will understand WCF very well.

Now, onto the meat (or salad, for the vegetarians) of the discussion...

Service Structure

In any one of my solutions, you will find a X.Service project and a X.ServiceImpl project where X is the solution "code" (either the solution name or some other word that represents the essence of the solution).  The former is the public .NET project which contains service contracts, data contracts, service configuration, and service clients.  The latter is the private .NET project which contains the service implementations, behaviors, fault management, service hosts, validators, and other service-side-only, black-box portions of the service.  All projects have access to the former; only the service itself will ever even know about the latter.

This is a very simple setup based upon a public/private model, like in public key cryptography.  The idea is that everything private is protected with all your might and everything public is released for anyone, within the context of the solution, to see.

For example, below is the Minima.Service project for Minima Blog Engine 3.1.  Feel free to ignore the folder structure; no one cares about that.  Just because I make each folder a namespace, with prefixed folder names for exclusions, doesn't mean anyone else in the world does.  I find it to be the optimal way to manage namespaced and non-namespaced groups, but the point of this discussion is the separation of concerns between the projects.

[Screenshot: Minima.Service project structure]

For the time being, simply notice that data contracts and service contracts are considered public.  Everything else in the project is either meaningless to this discussion or will be discussed later.

Here is the Minima.ServiceImpl project for the same solution:

[Screenshot: Minima.ServiceImpl project structure]

Here you can see anything from the host factory to various behaviors to validators to fault management to LINQ-to-SQL capabilities.  Everything here is considered private.  The outside world doesn't need to know, therefore shouldn't know.

For the sake of a more simplified discussion, let's switch from the full-scale solution example of Minima to a smaller "Person" service example.  This "Person" service is part of the overall "Contact" solution.  Here's the Contact.Service for our Contact solution:

[Screenshot: Contact.Service project structure]

As you can see, you have a standard data contract, a service contract, and a few other things, which will be discussed in a bit.

For this example, we don't need validators, fault management, behaviors, or host factories; all we need is a simple service in our Contact.ServiceImpl project:

[Screenshot: Contact.ServiceImpl project structure]

By utilizing this model, separating the private from the public, you can easily send the Contact.Service assembly to anyone you want without requiring them to create their own client proxy or use the painfully horrible code generated by "Add Service Reference", which ranks in my list as one of the worst code generators, right next to FrontPage 95 and Word 2000.

As a side note, I should mention that I have colleagues who actually take this a step further and make a X.Service.Client project which houses the WCF client classes.  They are basically following a Service/Client/Implementation model whereas I'm following a Public/Private model.  Just use whichever model makes sense for you.

The last piece needed in this WCF setup is the host itself.  This is just a matter of creating a new folder for the web site root, adding a web.config, and adding a X.svc file.  Period.  This is the entire service web site.

In the Person service example, the service host has only two files: web.config and Person.svc.

Below is the web.config, which declares two endpoints for the same service.  One of the endpoints is a plain, old-fashioned ASMX-style "basic profile" endpoint and the other is to be used for JSON connectivity.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <behaviors>
      <endpointBehaviors>
    <behavior name="JsonEndpointBehavior">
      <enableWebScript />
    </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Contact.Service.PersonService">
    <endpoint address="json" binding="webHttpBinding" contract="Contact.Service.IPersonService" behaviorConfiguration="JsonEndpointBehavior" />
    <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

The Person.svc is even more basic:

<%@ ServiceHost Service="Contact.Service.PersonService" %>

This type of solution scales to any level.  If you want to add a service, just add a new X.svc file and register the new service in web.config.  If you want to add another endpoint, just add another endpoint line.  It's incredibly simple, scales to solutions of any size, and even works well with non-HTTP services like netTcpBinding services.
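For instance, a sketch of what adding a net.tcp endpoint might look like, assuming the host supports net.tcp activation (the port and address below are made up for illustration):

```xml
<service name="Contact.Service.PersonService">
  <endpoint address="json" binding="webHttpBinding" contract="Contact.Service.IPersonService" behaviorConfiguration="JsonEndpointBehavior" />
  <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
  <!-- hypothetical additional endpoint; one extra line, same ABC pattern -->
  <endpoint address="net.tcp://localhost:8081/PersonService" binding="netTcpBinding" contract="Contact.Service.IPersonService" />
</service>
```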

Service Metadata

Now let's look at each of these files to see how they are optimized.  First, let's look at the data contract, Person:

using System;
using System.Runtime.Serialization;
//+
namespace Contact.Service
{
    [DataContract(Namespace = Information.Namespace.Contact)]
    public class Person
    {
    //- @Guid -//
    [DataMember]
    public String Guid { get; set; }

    //- @FirstName -//
    [DataMember]
    public String FirstName { get; set; }

    //- @LastName -//
    [DataMember]
    public String LastName { get; set; }

    //- @City -//
    [DataMember]
    public String City { get; set; }

    //- @State -//
    [DataMember]
    public String State { get; set; }

    //- @PostalCode -//
    [DataMember]
    public String PostalCode { get; set; }
    }
}

Everything about this file should be self-explanatory.  There's a data contract attribute on the class and data member attributes on each member.  Simple.  But what's with the data contract namespace?

Well, earlier this year, a co-architect of mine mentioned to me that you can centralize your namespaces in static locations.  Genius.  No more typing the same namespace on each and every data and service contract.  Thus, the following file is included in each of my projects:

using System;
//+
namespace Contact.Service
{
    public class Information
    {
    //- @NamespaceRoot -//
    public const String NamespaceRoot = "http://www.netfxharmonics.com/service/";

    //+
    //- @Namespace -//
    public class Namespace
    {
        public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
    }
    }
}
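To make the pattern concrete, here's a sketch of how the Information class might grow when another service area joins the solution (the Billing constant is hypothetical, purely for illustration):

```csharp
using System;
//+
namespace Contact.Service
{
    public class Information
    {
        //- @NamespaceRoot -//
        public const String NamespaceRoot = "http://www.netfxharmonics.com/service/";

        //+
        //- @Namespace -//
        public class Namespace
        {
            public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
            //+ hypothetical second service area; same pattern, one new constant
            public const String Billing = Information.NamespaceRoot + "Billing/2008/11/";
        }
    }
}
```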

If there are multiple services in a project, and in 95%+ of situations there will be, then you can simply add more namespace constants to the Namespace class and reference them from your data and service contracts.  Thus, you never, EVER have to update your service namespaces in more than one location.  You can see this in the service contract as well:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
    //- GetPersonData -//
    [OperationContract]
    Person GetPersonData(String personGuid);
    }
}

This service contract doesn't get much simpler.  There's no reason to discuss it any longer.

Service Implementation

For the sake of our discussion, I'm not going to talk very much at all about the service implementation.  If you want to see a hardcore implementation, go look at my Minima Blog Engine 3.1.  You will see validators, fault management, operation behaviors, message headers, and on and on.  You will seriously learn a lot from the Minima project.

For our discussion, here's our Person service:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
    //- @GetPersonData -//
    public Person GetPersonData(String personGuid)
    {
        return new Person
        {
        FirstName = "John",
        LastName = "Doe",
        City = "Unknown",
        Guid = personGuid,
        PostalCode = "66062",
        State = "KS"
        };
    }
    }
}

Not too exciting, eh?  But as it is, this is all that's required to create a WCF service implementation.  Just create an everyday ol' class and implement a service contract.

As I've mentioned, though, you could do a ton more in your private service implementation.  To give you a little idea, say you wanted to make absolutely sure that no one turned off metadata exchange for your service.  This is something that I do for various projects and it's incredibly straightforward: just create a service host factory, which creates the service host and programmatically adds endpoints and modifies behaviors.  Given that WCF is an incredibly streamlined system, I'm able to add other endpoints or other behaviors in the exact same way.

Here's what I mean:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
//+
namespace Contact.Service.Activation
{
    public class PersonServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
    //- @CreateServiceHost -//
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        ServiceHost host = new ServiceHost(typeof(PersonService), baseAddresses);
        //+ add metadata exchange
        ServiceMetadataBehavior serviceMetadataBehavior = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
        if (serviceMetadataBehavior == null)
        {
        serviceMetadataBehavior = new ServiceMetadataBehavior();
        host.Description.Behaviors.Add(serviceMetadataBehavior);
        }
        serviceMetadataBehavior.HttpGetEnabled = true;
        ServiceEndpoint serviceEndpoint = host.Description.Endpoints.Find(typeof(IMetadataExchange));
        if (serviceEndpoint == null)
        {
        host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
        }
        //+
        return host;
    }
    }
}

Then, just modify your service host line:

<%@ ServiceHost Service="Contact.Service.PersonService" Factory="Contact.Service.Activation.PersonServiceHostFactory" %>

My point in mentioning this is to demonstrate how well your service will scale when it's based upon a properly set up infrastructure.  Each thing that you add requires only a linear amount of work.  This isn't like WSE, where you need three doctorates in order to modify the slightest thing.

Service Client without Magic

At this point, a few of my colleagues end their X.Service project and begin a X.Service.Client project.  That's fine.  Keeping the client away from the service metadata is nice, but it's not required.  So, for the sake of this discussion, let's continue with the public/private model instead of the service/client/implementation model.

Thankfully, I have absolutely no colleagues who would ever hit "Add Service Reference".  I've actually never worked with a person who does this either.  This is good news because, as previously mentioned, the code generated by "Add Service Reference" is a complete disaster.  You also can't modify it directly without risking your changes being overwritten.  Even then, the code is SO bad, you wouldn't want to edit it.  There's no reason to have hundreds or THOUSANDS of lines of code to create a client for your service.

A much simpler and much more manageable solution is to keep your code centralized and DRY (don't repeat yourself).  To do this, you just need to realize that the publicly accessible X.Service project already contains the vast majority of what's needed to create a WCF client.  Because of this, you may access the service using WCF's built-in mechanisms.  You don't need anything fancy; it's all built right in.

Remember, WCF simply requires the ABCs: an address, a binding, and a contract.  On the client side, WCF uses this information to create a channel.  This channel implements your service contract, thus allowing you to directly access your WCF service without any extra work.  You already have the contract, so all you have to do is declare the address and binding.  Here's what I mean:

//+ address
EndpointAddress endpointAddress = new EndpointAddress("http://localhost:1003/Person.svc");
//+ binding
BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
//+ contract
IPersonService personService = ChannelFactory<IPersonService>.CreateChannel(basicHttpBinding, endpointAddress);
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

No 3,000-line generated client code, no excess classes, no configuration.  Just direct access to your service.
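One side note that isn't in the snippet above, but is standard WCF: the channel object returned by ChannelFactory also implements IClientChannel, so you can close it explicitly when you're finished with it (ClientBase, discussed later, handles this for you):

```csharp
//+ same address, binding, and contract as before
IPersonService personService = ChannelFactory<IPersonService>.CreateChannel(basicHttpBinding, endpointAddress);
try
{
    Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
}
finally
{
    //+ the channel is also an IClientChannel; close it when done
    ((IClientChannel)personService).Close();
}
```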

Of course, if you want to use a configuration file, that's great too.  All you need to do is create a system.serviceModel section in your project or web site configuration and declare your service endpoint.  What's really nice about this in WCF is that, since the concepts of WCF are the ABCs on both the client and the server, most of the configuration is already done for you.  You can just copy and paste the endpoint from the service configuration to your client configuration and add a name attribute.

<system.serviceModel>
  <client>
    <endpoint name="PersonServiceBasicHttpBinding" address="http://localhost:1003/Person.svc" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
  </client>
</system.serviceModel>

For the most part, you will also need to carry over any binding configurations from the service side as well, though you wouldn't copy over validation information.
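For example, if the service side defined a named binding configuration, a sketch of the client config carrying it over might look like this (the binding name and settings are hypothetical):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- hypothetical binding configuration copied from the service side -->
      <binding name="LargeMessageBasicHttpBinding" maxReceivedMessageSize="1048576" />
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint name="PersonServiceBasicHttpBinding" address="http://localhost:1003/Person.svc" binding="basicHttpBinding" bindingConfiguration="LargeMessageBasicHttpBinding" contract="Contact.Service.IPersonService" />
  </client>
</system.serviceModel>
```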

At this point you could just change the previously used WCF channel code to reference an endpoint name, but hard-coding that name would be an architectural disaster.  You never want to compile configuration information into your system.  Instead of tying them together directly, I like to create a custom configuration section for my solution to allow the endpoint configuration to change.  For the sake of this discussion, though, let's just use appSettings (do NOT rely on appSettings for everything!  That's a cop-out.  Create a custom configuration section!).  Here's our appSetting:

<appSettings>
  <add key="PersonServiceActiveEndpoint" value="PersonServiceBasicHttpBinding" />
</appSettings>

At this point, I can explain the Configuration class found in the public Contact.Service project:

using System;
//+
namespace Contact.Service
{
    public static class Configuration
    {
    //- @ActivePersonServiceEndpoint -//
    public static String ActivePersonServiceEndpoint
    {
        get
        {
        return System.Configuration.ConfigurationManager.AppSettings["PersonServiceActiveEndpoint"] ?? String.Empty;
        }
    }
    }
}

As you can see, this class just gives me strongly-typed access to my endpoint name, thus allowing me to access my WCF endpoint via a loosely coupled configuration:

//+ configuration and contract
IPersonService personService = new ChannelFactory<IPersonService>(Configuration.ActivePersonServiceEndpoint).CreateChannel();
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

When I feel like using a different endpoint, I don't modify the client endpoint; I just add another one with a new name and update the appSettings pointer (if you're into fancy names, this is essentially the bridge pattern).

Now, while this is a great way to directly access data for people who understand WCF architecture, it's probably not a good idea to allow all your developers to have direct access to system internals.  Entry-level developers need to focus on the core competency of your company (i.e. sales, marketing, data management, etc.), not ponder the awesomeness of system internals.  Thus, to allow other developers to work on a solution without having to remember WCF internals, I normally create two layers of abstraction on top of what I've already shown.

For the first layer, I create a concrete ClientBase class to hide the WCF channel mechanics.  This is the type of class that "Add Service Reference" would have created if your newbie developers accidentally used it.  However, the one we will create won't have meaningless attributes and virtually unmanageable excess code.

Below is our entire client class:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
//+
namespace Contact.Service
{
    public class PersonClient : System.ServiceModel.ClientBase<IPersonService>, IPersonService
    {
    //- @Ctor-//
    public PersonClient(String endpointConfigurationName)
        : base(endpointConfigurationName) { }

    //+
    //- @GetPersonData -//
    public Person GetPersonData(String personGuid)
    {
        return Channel.GetPersonData(personGuid);
    }
    }
}

The pattern here is incredibly simple: create a class which inherits from System.ServiceModel.ClientBase<IServiceContractName> and implements your service contract.  When you implement the class, the only implementation you need to add is a call to the base channel.  In essence, all this class does is accept calls and pass them off to a pre-created channel.  When you add a new operation to your service contract, just implement the interface and add a single line of connecting code to wire up the client class.

The channel creation mechanics that I demonstrated earlier are now provided automatically by the ClientBase class.  You can also extend this class a little by bridging up to a total of 10 different constructors provided by ClientBase.  For example, the following constructor will allow developers to specify a specific binding and endpoint:

public PersonClient(Binding binding, EndpointAddress address)
    : base(binding, address) { }

At this point, we have something that protects developers from having to remember how to create a channel.  However, they still have to mess with configuration names and must remember to dispose of this client object (there's an open channel, remember).  Therefore, I normally add another layer of abstraction.  This one will be directly accessible for developer use.

This layer consists of a series of service agents.  Each service has its own agent, which is essentially a series of static methods providing the most efficient means of making a service call.  Here's what I mean:

using System;
//+
namespace Contact.Service
{
    public static class PersonAgent
    {
    //- @GetPersonData -//
    public static Person GetPersonData(String personGuid)
    {
        using (PersonClient client = new PersonClient(Configuration.ActivePersonServiceEndpoint))
        {
        return client.GetPersonData(personGuid);
        }
    }
    }
}

As you can see, the pre-configured service endpoint is automatically used and the PersonClient is automatically disposed at the end of the call (ClientBase<T> implements IDisposable).  If you want to use a different service endpoint, then just change it in your appSettings configuration.

Conclusion

At this point, I've explained every class in each project of my WCF service project model.  It's up to you to decide how to best create and manage your data and service contracts as well as clients.  But, if you want a streamlined, efficient model for all your service projects, you will want to create a publicly accessible project to house all your reusable elements.

Also, remember you don't need a full-on client class to access a service.  WCF communicates with channels and channel creation simply requires an address, binding, and contract.  If you have that information, just create your channel and make your call.  You can abstract the internals of this by using a ClientBase object, but this is entirely optional.  If the project you are working on requires hardcore WCF knowledge, there's no reason to pretty it up.  However, if non-WCF experts will be working with your system, abstractions are easy to create.


Creating JavaScript Components and ASP.NET Controls

Every now and again I'll actually meet someone who realizes that you don't need a JavaScript framework to make full-scale AJAX applications happen... but rarely in the Microsoft community.  Most people think you need Prototype, jQuery, or the ASP.NET AJAX framework in order to do anything from networking calls to DOM building to component creation.  Obviously this isn't true.  In fact, when I designed the Brainbench AJAX exam, I specifically designed it to test how effectively you can create your own full-scale JavaScript framework (how well the exam developers did at following my design, I have no idea).

So, today I would like to show you how you can create your own strongly-typed, ASP.NET-based JavaScript component without requiring a full framework.  Why would you not have Prototype or jQuery on your web site?  Well, you wouldn't.  Even Microsoft-oriented AJAX experts recognize that jQuery provides an absolutely incredible boost to their applications.  However, when it comes to my primary landing page, I need it to be extremely tiny.  Thus, I rarely include jQuery or Prototype on that page (remember, Google makes EVERY page a landing page, but I mean the PRIMARY landing page).

JavaScript Component

First, let's create the JavaScript component.  When dealing with JavaScript, if you can't do it without ASP.NET, don't try it in ASP.NET.  You only use ASP.NET to help package the component and make it strongly-typed.  If the implementation doesn't work, then you have more important things to focus on.

Generally speaking, here's the template I follow for any JavaScript component:

window.MyNamespace = window.MyNamespace || {};
//+
//- MyComponent -//
MyNamespace.MyComponent = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.myParameter) {
                this._myParameter = init.myParameter;
            }
            else {
                throw 'myParameter is required.';
            }
        }
    }
    ctor.prototype = {
        //- myfunction -//
        myfunction: function(t) {
        }
    };
    //+
    return ctor;
})( );

You may then create the component like the following anywhere in your page:

new MyNamespace.MyComponent({
    host: 'hostName',
    myParameter: 'stuff here'
 });

Now on to see a sample component, but, first, take note of the following shortcuts, which allow us to save a lot of typing:

var DOM = document;
var $ = function(id) { return document.getElementById(id); };

Here's a sample Label component:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- myfunction -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

With the above JavaScript code and "<div id="host"></div>" somewhere in the HTML, we can use the following to create an instance of a label:

window.lblText = new Controls.Label({
    host: 'host',
    initialText: 'Hello World'
});

Now, if we had a button on the screen, we could handle its click event and use that to set the text of the label, as follows:

<div>
    <div id="host"></div>
    <input id="btnChangeText" type="button" value="Change Value" />
</div>
<script type="text/javascript" src="Component.js"></script>
<script type="text/javascript">
    //+ in reality you would use the dom ready event, but this is quicker for now
    window.onload = function( ){
        window.lblText = new Controls.Label({
            host: 'host',
            initialText: 'Hello World'
        });
        window.btnChangeText = $('btnChangeText');
        //+ in reality you would use a multi-cast event
        btnChangeText.onclick = function( ) {
            lblText.setText('This is the new text');
        };
    };
</script>

Thus, components are simple to work with.  You can do this with anything from a simple label to a windowing system to a marquee to any full-scale custom solution.

ASP.NET Control

Once the component works, you may then package the HTML and strongly-type it for ASP.NET.  The steps to doing this are very simple, and once you've done it, you can just repeat the simple steps (sometimes with a simple copy/paste) to make more components.

First, we need to create a .NET class library and add a reference to the System.Web assembly.  Next, add the JavaScript component to the .NET class library.

Next, in order to make the JavaScript file usable by your class library, you need to make sure it's set as an Embedded Resource.  In Visual Studio 2008, you do this by going to the properties window of the JavaScript file and changing the Build Action to Embedded Resource.

Then, you need to bridge the gap between the ASP.NET and JavaScript world by registering the JavaScript file as a web resource.  To do this you register an assembly-level WebResource attribute with the location and content type of your resource.  This is typically done in AssemblyInfo.cs.  The attribute pattern looks like this:

[assembly: System.Web.UI.WebResource("AssemblyName.FolderPath.FileName", "ContentType")]

Thus, if I were registering a JavaScript file named Label.js in the JavaScript.Controls assembly, under the _Resource/Controls folder, I would register my file like this:

[assembly: System.Web.UI.WebResource("JavaScript.Controls._Resource.Label.js", "text/javascript")]

Now, it's time to create a strongly-typed ASP.NET control.  This is done by creating a class which inherits from the System.Web.UI.Control class.  Every control in ASP.NET, from the TextBox to the GridView, inherits from this base class.

When creating this control, we want to remember that our JavaScript control contains two required parameters: host and initialText.  Thus, we need to add these to our control as properties and validate these on the ASP.NET side of things.

Regardless of your control, though, you need to tell ASP.NET which files you would like to send to the client.  This is done with the Page.ClientScript.RegisterClientScriptResource method, which accepts a type and the name of the resource.  Most of the time, the type parameter will just be the type of your control.  The name of the resource must match the web resource name you registered in AssemblyInfo.cs.  This registration is typically done in the OnPreRender method of the control.

The last thing you need to do with the control is the most obvious: do something.  In our case, we need to write the client-side initialization code to the client.

Here's our complete control:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);

        //+
        //- @HostName -//
        public String HostName { get; set; }

        //- @InitialText -//
        public String InitialText { get; set; }

        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }

        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(HostName))
            {
                throw new InvalidOperationException("HostName must be set");
            }
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            host: '" + HostName + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

The code written to the client may look kind of crazy, but that's because it's written very carefully.  First, notice it's wrapped in a script tag.  This is required.  Next, notice all the code is wrapped in a (function( ) { })( ) block.  This is a JavaScript containment technique.  It basically means that anything defined in it exists only for the time of execution.  In this case, it means that the onLoad variable exists inside the function and only inside the function, and thus will never conflict with anything outside of it.  Next, notice I'm attaching the onLoad logic to the window load event.  This isn't technically the correct way to do it, but it's the way that requires the least code and is only there for the sake of the example.  Ideally, we would write (or use a prewritten) event utility which would allow us to bind handlers to events without having to check whether we are using the lameness known as Internet Explorer (it uses window.attachEvent, while real web browsers use addEventListener).
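As a sketch of what such a utility might look like (the name addEvent is my own invention here, not part of the control code above):

```javascript
//+ a minimal cross-browser event binding helper
function addEvent(element, name, handler) {
    if (element.addEventListener) {
        //+ standards-compliant browsers
        element.addEventListener(name, handler, false);
    }
    else if (element.attachEvent) {
        //+ legacy Internet Explorer
        element.attachEvent('on' + name, handler);
    }
    else {
        //+ last resort: direct property assignment
        element['on' + name] = handler;
    }
}
```

With a helper like this, the rendered script could simply call addEvent(window, 'load', onLoad) and skip the browser check entirely.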

Now, having this control, we can compile our assembly, add a reference to our web site, and register the control with our page or our web site.  Since this is a "Controls" namespace, it has the feel that it will contain multiple controls, so it's best to register it in web.config for the entire web site to use.  Here's how this is done:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="c" assembly="JavaScript.Controls" namespace="JavaScript.Controls" />
      </controls>
    </pages>
  </system.web>
</configuration>

Now we are able to use the control in any page on our web site:

<c:Label id="lblText" runat="server" HostName="host" InitialText="Hello World" />

As mentioned previously, this same technique for creating, packaging, and strongly-typing JavaScript components can be used for anything.  Having said that, the example I have just provided borders on the raw definition of useless.  No one cares about a stupid host-controlled label.

If you don't want a host-model, but prefer the in-place model, you need to change a few things.  After the changes, you'll have a template for creating any in-place control.

First, remove anything referencing a "host".  This includes client-side validation as well as server-side validation and the Control's HostName property.

Next, put an ID on the script tag.  This ID will be the ClientID suffixed with "ScriptHost" (or whatever you want).  Then, you need to inform the JavaScript control of the ClientID.

Your ASP.NET control should basically look something like this:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);

        //+
        //- @InitialText -//
        public String InitialText { get; set; }

        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }

        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"" id=""" + this.ClientID + @"ScriptHost"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            id: '" + this.ClientID + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

Now you just need to make sure the JavaScript control knows that it needs to place itself where it has been declared.  To do this, you just create a new element and insert it into the browser DOM immediately before the current script block.  Since we gave the script block an ID, this is simple.  Here's basically what your JavaScript should look like:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            if (init.id) {
                this._id = init.id;
                //+
                this.DOMElement = DOM.createElement('span');
                this.DOMElement.setAttribute('id', this._id);
            }
            else {
                throw 'id is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        //+ $ and DOM are assumed helpers from the hosting script
        //+ (a getElementById-style lookup and document wrapper)
        var scriptHost = $(this._id + 'ScriptHost');
        scriptHost.parentNode.insertBefore(this.DOMElement, scriptHost);
        this.setText(init.initialText);
    }
    ctor.prototype = {
        //- setText -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

Notice that the JavaScript control constructor creates a span with the specified ID, grabs a reference to the script host, inserts the element immediately before the script host, then sets the text.

Of course, now that we have made these changes, you can use your in-place JavaScript control without ASP.NET by throwing something like the following into your page:

<script type="text/javascript" id="lblTextScriptHost">
    window.lblText = new Controls.Label({
        id: 'lblText',
        initialText: 'Hello World'
    });
</script>

So, you can create your own JavaScript components without requiring jQuery or Prototype dependencies.  But if you are using jQuery or Prototype (and you should be, even if you are using ASP.NET AJAX; that's not a full JavaScript framework), then you can use this same ASP.NET control technique to package all your controls.


Cross-Browser JavaScript Tracing

No matter what system you are working with, you always need mechanisms for debugging.  One of the most important mechanisms a person can have is tracing.  Being able to see trace output from various places in your application is vital.  This is especially true with JavaScript.  I've been working with JavaScript since 1995, making stuff 12 years ago that would still be interesting today (in fact, I didn't know server-side development existed until 1998!), and I have noticed a clear correlation between the complexity of JavaScript applications and the absolute need for tracing.

Thus, a long, long time ago I built a tracing utility that would help me view all the information I need (and absolutely nothing more or less).  These days this means being able to trace information to a console, dump arrays and objects, and view line-numbered information for future reference.  The utility I've created has since been added to my Themelia suite (pronounced the-meh-LEEUH; as in thistle or the name Thelma), but today I would like to demonstrate it and deliver it separately.

The basis of my tracing utility is the Themelia.Trace namespace.  In this namespace is... wait... what?  You're sick of listening to me talk?  Fine.  Here's the sample code which demonstrates the primary uses of Themelia.Trace; treat this as your reference documentation:

//+ enables tracing
Themelia.Trace.enable( );
//+ writes text
Themelia.Trace.write('Hello World!');
//+ writes a blank line
Themelia.Trace.addNewLine( );
//+ writes a numbered line
Themelia.Trace.writeLine('...and Hello World again!');
Themelia.Trace.writeLine('Another line...');
Themelia.Trace.writeLine('Yet another...');
Themelia.Trace.writeLine('One more...');
//+
//++ label
//+ writes labeled data to output (e.g. 'variableName (2)')
Themelia.Trace.writeLabeledLine('variableName', 2);
//+
//++ buffer
//+ creates a buffer
var buffer = new Themelia.Trace.Buffer( );
//+ declares beginning of new segment
buffer.beginSegment('Sample');
//+ writes data under specific segment
buffer.write('data here');
//+ nested segment
buffer.beginSegment('Array Data');
//+ write array to buffer
var a = [1,2,3,4,5];
buffer.write(a);
//+ declares end of segment
buffer.endSegment('Array Data');
buffer.beginSegment('Object Data');
//+ write raw object/JSON data
buffer.write({
    color: '#0000ee',
    fontSize: '1.1em',
    fontWeight: 'bold'
});
buffer.endSegment('Object Data');
//+ same thing again
buffer.beginSegment('Another Object');
var o = {
    'personId': 2,
    name: 'david'
};
buffer.write(o);
buffer.endSegment('Another Object');
buffer.endSegment('Sample');
//+ writes all built-up data to output
buffer.flush( );

Notice a few things about this reference sample:

  • First, you must use Themelia.Trace.enable( ) to turn tracing on.  In a production application, you would just comment this line out.
  • Second, Themelia.Trace.writeLine prefixes each line with a line number.  This is especially helpful when dealing with all kinds of async stuff floating around or when dealing with crazy events.
  • Third, you may use Themelia.Trace.writeLabeledLine to output data while giving it a name like "variableName (2)".
  • Fourth, if you want to run a tracer through your application and only later have output, create an instance of Themelia.Trace.Buffer, write text, arrays, or objects to it, then call flush( ) to send the data to output.  You may also use beginSegment and endSegment to create nested, indented portions of the output.
  • Fifth, notice you can throw entire arrays and objects/JSON into buffer.write( ) to write them to the screen.  This is especially handy when you want to trace your WCF JSON messages.
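The segment/flush behavior described in that list can be sketched in a few lines.  To be clear, this is an illustrative reimplementation of the idea, not the actual Themelia.Trace source:

```javascript
// Illustrative sketch of a buffered tracer with nested segments.
// Lines are collected with an indent depth and only emitted on flush.
var TraceBuffer = (function ( ) {
    function ctor(write) {
        this._write = write || function (line) { console.log(line); };
        this._lines = [];
        this._depth = 0;
    }
    ctor.prototype = {
        //- beginSegment: opens a named, indented section -//
        beginSegment: function (name) {
            this._lines.push({ depth: this._depth, text: '[' + name + ']' });
            this._depth++;
        },
        //- endSegment: closes the current section -//
        endSegment: function (name) {
            this._depth--;
        },
        //- write: queues text, arrays, or objects at the current depth -//
        write: function (data) {
            var text = typeof data === 'object' ? JSON.stringify(data) : String(data);
            this._lines.push({ depth: this._depth, text: text });
        },
        //- flush: emits every queued line with its indentation -//
        flush: function ( ) {
            for (var i = 0; i < this._lines.length; i++) {
                var line = this._lines[i];
                this._write(new Array(line.depth + 1).join('  ') + line.text);
            }
            this._lines = [];
        }
    };
    return ctor;
})( );
```

The point of the buffer is exactly this deferral: nothing touches the console until flush( ), so you can trace freely inside async handlers and dump one coherent, indented report at the end.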

Trace to what?

Not everyone knows this, but Firefox, Google Chrome, Safari, and Opera each has its own console for output.  Themelia.Trace works with each console in its own way.  Here are some screen shots to show you what I mean:
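A hedged sketch of how such per-browser dispatch can work: feature-detect whatever console object the host exposes and fall back to collecting lines in memory.  The names here are illustrative, not the actual Themelia.Trace internals:

```javascript
// Illustrative console dispatcher: prefer a Firebug-style console.log,
// then Opera's postError, else fall back to an in-memory log.
var fallbackLog = [];
function traceWrite(text) {
    if (typeof console !== 'undefined' && typeof console.log === 'function') {
        console.log(text);            // Firefox/Firebug, Chrome, Safari
    }
    else if (typeof opera !== 'undefined' && opera.postError) {
        opera.postError(text);        // Opera's error console
    }
    else {
        fallbackLog.push(text);       // no console available
    }
}
```

The key design point is that callers never know or care which console exists; the feature test is isolated in one place.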

Firefox

Firefox has had the Firefox Console since version 1.0, which allows you to write just about anything to a separate window.  I did a video on this many years ago, and last year I posted a quick "did you know"-style blog post on it, so there's no reason for me to cover it again here.  Just watch my Introduction to the Firefox Console for a detailed explanation of using the Firefox Console (you may also opt to watch my Setting up your Firefox Development Environment; it should seriously help you out).

Firefox

Google Chrome

Chrome does things a little differently than any other browser.  Instead of having a "browser"-wide console, each tab has its own console.  Notice "browser" is in quotes: technically, each tab in Chrome is its own mini browser, so this console-per-tab model makes perfect sense.  To access this console, just hit Alt-` on a specific tab.

Chrome

Safari

In Safari, go to Preferences, then on the Advanced tab check "Show Develop menu in menu bar".  When you do this, you will see the Develop menu show up.  The output console is at Develop -> Show Web Inspector.

Safari

Opera

In Opera 9, go to Tools -> Advanced -> Developer Tools and a big box will show up at the bottom.  The console is the Error Console tab.

Opera9

Internet Explorer

To use Themelia.Trace with Internet Explorer, install Nikhil's Web Developer Helper.  This is different from the IE Developer Toolbar.

IEWebDevHelper

Firebug

It's important to note that, in many situations, it's actually more effective to rely on Firebug for Firefox, or Firebug Lite for Safari/Chrome, IE, and Opera, than to use a console directly.  Therefore, Themelia.Trace allows you to set Themelia.Trace.alwaysUseFirebug to true and have all output redirected to Firebug instead of the default console.  Just try it: use the above sample, but put "Themelia.Trace.alwaysUseFirebug = true;" above it.  All data will redirect to Firebug.  Here's a screen shot (this looks basically the same in all browsers):

FirebugLite

There you have it.  A cross-browser solution to JavaScript tracing.

Links

Love Sudoku? Love brain puzzles? Check out my new world-wide Sudoku competition web site, currently in beta, at Sudokian.com.


Minima 3.1 Released

I've always thought that one of the best ways to learn or teach a series of technologies is to create either a photo gallery, some forum software, or a blog engine.  Thus, to aid in teaching various areas of .NET (and to have full control over my own blog), I created Minima v1.  Minima v2 came on the scene adding many new features and showing how LINQ can help make your DAL shine.  Then Minima v3 showed up and demonstrated an enormous load of technologies as well as proper architectural principles.

Well, Minima 3.1 is an update to Minima 3.0, and it's still here to help people see various technologies in action.  However, while Minima 3.1 adds various features to the Minima 3.0 base, it's important to note that it's also the first major Themelia 2.0 application (Minima 3.0 was built on Themelia 1.x).  As such, not only is it a prime example of many technologies ranging from WCF to LINQ to ASP.NET controls to custom configuration, it's also a good way to see how Themelia provides a component model for the web.  In fact, Minima 3.1 is technically a Themelia 2.0 plug-in.

Here's a quick array of new blog features:

  • It's built on Themelia 2.0.  I've already said this, but it's worth mentioning again.  This isn't a classic ASP.NET application.  It's the first Themelia 2.0 application.
  • Minima automatically creates indexes (table of contents) to allow quick viewing of your site
  • Images are now stored in SQL Server as a varbinary(max) instead of as a file.
  • Themelia CodeParsers are used to automatically turn codes like {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6}}} into a link to a blog entry, {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6|Click here for stuff}}} into a renamed blog entry link, and {Minima{AmazonAffiliate}} into an Amazon.com product link with your configured (see below) affiliate ID.
  • Minima now has its own full custom configuration.  Here's an example:
<minima.blog entriesToShow="7" domain="http://www.tempuri.org/">
  <service>
    <authentication defaultUserName="jdoe@tempuri.org" defaultPassword="blogpassword"/>
    <endpoint author="AuthorServiceWs2007HttpBinding" blog="BlogServiceWs2007HttpBinding" comment="CommentServiceWs2007HttpBinding" image="ImageServiceWs2007HttpBinding" label="LabelServiceWs2007HttpBinding" />
  </service>
  <suffix index="Year Review" archive="Blog Posts" label="Label Contents" />
  <display linkAuthorsToEmail="false" blankMessage="There are no entries in this view." />
  <comment subject="New Comment on Blog" />
  <codeParsers>
    <add name="AmazonAffiliate" value="net05c-20" />
  </codeParsers>
</minima.blog>
  • In addition to the normal MinimaComponent (the Themelia component which renders the blog with full interactivity), you may also use the MinimaProxyComponent to view single entries.  For example, you just add a BlogEntryProxy component to a web form and set either the blog entry GUID or the blog GUID plus the link, and your blog entry will show.  I built this feature in to allow Minima to be used for more than just blogging; it's a content stream.  With this feature I can keep every single page of a web site inside of Minima and no one will ever know.  There's also the MinimaViewerComponent, which renders a read-only blog.  This means no rsd.xml, no site map, no commenting, no editing; just viewing.  In fact, the Themelia web site uses this component to render all its documentation.
  • There is also support for adding blog post footers.  Minima 3.1 ships with a FeedBurner footer to provide the standard FeedBurner footer.  See the "implementing" section below for more info.
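The CodeParser replacement described in that feature list boils down to a token match and substitution.  Here's a client-side sketch of the same idea; note that Minima's actual parsers run server-side in C#, and the /entry/ URL shape below is an assumption for illustration:

```javascript
// Illustrative sketch of expanding {Minima{BlogEntry{guid}}} and
// {Minima{BlogEntry{guid|label}}} codes into anchor tags.
// The real parsers are server-side; makeUrl abstracts the URL scheme.
function expandBlogEntryCodes(text, makeUrl) {
    return text.replace(
        /\{Minima\{BlogEntry\{([0-9a-fA-F-]+)(?:\|([^}]+))?\}\}\}/g,
        function (match, guid, label) {
            var url = makeUrl(guid);
            return '<a href="' + url + '">' + (label || url) + '</a>';
        }
    );
}
```

The optional `|label` capture is what lets the second form render a renamed link while the first form falls back to the URL itself.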

As a Training Tool

Minima is often used as a training tool for introductory, intermediate, and expert-level .NET.

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper-SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Minima 3.1 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

Architecture

Probably the most important thing to learn from Minima is architecture.  Minima is built to provide great flexibility.  However, that's not for the faint of heart.  I heard one non-architect and obvious newbie say that it was "over architected".  According to this person, apparently, adding security to your WCF services to protect your private information is "over architecting" (not to mention the fact that WCF enforces security for username authentication).

In any case, Minima is split into two parts: the service and the web site.  I use Minima in many places, but for all my blogs (or, more accurately, content streams) I have a single, centralized, well-protected service set.  All my internal web sites access this central location via the WCF NetNamedPipeBinding.

Implementing

Minima is NOT your everyday blog engine.  If your company needs a blog engine for various people on the team, get Community Server; Minima isn't for you.  Minima allows you to plop a blog into any existing web site.  For example, if you have an existing web site, just install Themelia (remember, Minima is a Themelia plug-in), create a new Themelia web domain, and register Minima into that web domain as follows:

<themelia.web>
  <webDomains>
    <add>
      <components>
        <add key="Minima" type="Minima.Web.Routing.MinimaComponent, Minima.Web">
          <parameters>
            <add name="page" value="~/Page_/Blog/Root.aspx" />
            <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
          </parameters>
        </add>
      </components>
    </add>
  </webDomains>
</themelia.web>

Now, on that Root.aspx page, just add a simple Minima.Web.Controls.MinimaBlog control.  Your blog immediately starts rendering.  Not only that, commenting is automatically supported.  Furthermore, you have a site map, a Windows Live Writer (MetaWeblog API) endpoint, an rsd.xml file, and a wlwmanifest.xml file.  All that just by dropping a control onto a web site without configuring anything in that page.  Of course, you can configure things if you want, and you can add more controls to the page as well.  Perhaps you want a label list, an archive list, or a recent entry list.  Just add the appropriate control to the web form.  In fact, the same Minima binaries that you compile with the source are used on each of my web sites with absolutely no changes; they are all just a single control, yet look nothing alike.

Personally, I don't like to add a lot of controls to my web forms.  Thus, I normally add a placeholder control and then add my controls to that placeholder.  Furthermore, here's a snippet from my blog's web form (my entire blog has only one page):

phLabelList.Controls.Add(new Minima.Web.Controls.LabelList { Heading = "Label Cloud", ShowHeading = true, TemplateType = typeof(Minima.Web.Controls.LabelListControlTemplateFactory.SizedTemplate) });
phArchivedEntryList.Controls.Add(new Minima.Web.Controls.ArchivedEntryList { ShowEntryCount = false });
phRecentEntryList.Controls.Add(new Minima.Web.Controls.RecentEntryList());
phMinimaBlog.Controls.Add(new Minima.Web.Controls.MinimaBlog
{
    ShowAuthorSeries = false,
    PostFooterTypeInfo = Themelia.Activation.TypeInfo.GetInfo(Minima.Web.Controls.FeedBurnerPostFooter.Type, "http://feeds.feedburner.com/~s/FXHarmonics"),
    ClosedCommentText = String.Empty,
    DisabledCommentText = String.Empty
});

There's nothing here that you can't do as well, and most everything here is self-explanatory too.  However, notice the post footer type.  By setting this type, Minima knows to render the FeedBurner post footer at the end of each entry.

Thus, with a simple configuration and a drop of a control, you can add a blog anywhere.  Or, in the case of the Themelia web site, you can add a content stream anywhere.

Here's a snippet from the configuration for the Themelia web site:

<add name="framework" path="framework" defaultPage="/Sequence_/Home.aspx" acceptMissingTrailingSlash="true">
  <components>
    <add key="Minima" type="Minima.Web.Routing.MinimaViewerComponent, Minima.Web">
      <parameters>
        <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
      </parameters>
    </add>
  </components>
</add>

By looking at the Themelia web site, you can see that Minima isn't being used there as a blog engine, but as a content stream.  Go walk around the documentation at http://themelia.netfxharmonics.com/framework/docs.  I didn't make a bunch of pages; all I did was drop in that component and throw a Minima.Web.Controls.BlogViewer control on the page and BAM, I have an entire documentation system already built based upon various entries from my blog.

As a side note, if you look on my blog, you will see that each of the Themelia blog entries has a list of links, but the same content in the Themelia documentation does not have the link list.  This is because I've set IgnoreBlogEntryFooter to true on the BlogViewer control, thus telling Minima to remove all text after the special code.  Thus I can post the same entry in two places.
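That footer-stripping behavior reduces to cutting the content at a marker.  A sketch of the idea follows; the marker string here is made up for illustration, since the post doesn't show Minima's actual special code:

```javascript
// Illustrative sketch: drop everything from a footer marker onward.
// '<!--EntryFooter-->' is a hypothetical marker, not Minima's actual code.
function stripEntryFooter(content, marker) {
    var index = content.indexOf(marker);
    return index === -1 ? content : content.substring(0, index);
}
```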

This isn't a marketing post on why you should use Minima.  If you want to use Minima, go ahead, you can contact me on my web site for help.  However, the point is to learn as much as you can about modern technology using Minima as an example.  It's not meant to be used in major web sites by just anyone at this point (though I use it in production in many places).  Having said that, the next version of Minima will be part of the Themelia suite and will have much more user support and formal documentation.

In conclusion, I say again (and again and again), you may use Minima for your personal training all you want.  That's why it's public. 

Microsoft MVP (ASP.NET) 2009

I'm rather pleased to announce that October 1, 2008 was my day: I was made a Microsoft MVP for ASP.NET.  Thus, as tradition seems to state, I'm posting a blog entry about it.

Thanks to God for not letting me have the MVP until now; the timing is flawless.  Thanks also to my MVP advisor David Silverlight, who got me more serious about the MVP program and admitted to having nominated me.  Next, thanks to Scott Hanselman, who fixed a clog in the MVP pipeline.  Apparently, I was in the system but completely lost, in the wrong category or something.  He took it upon himself to contact a few people to get the problem fixed.

Thanks also to Brad Abrams for his recommendation to the MVP committee and to Rick Strahl, a fellow MVP, Microsoft ASP.NET AJAX loather, and Subversion lover who showed me that open source developers have equal rights to the MVP title.

To bring in the new [MVP] year, today I took some time and did a massive redesign to my web site featuring the cool MVP logo, which just so happens to fit perfectly in my existing color scheme.  I'll probably be tweaking the site over the next few days as the waves of whimsical changes come my way.