
Tim Ferriss - Trial by Fire



This is beyond awesome.  Tim Ferriss, author of one of the greatest books ever written, Four Hour Work Week, has announced that he has a new show called Trial by Fire.  I'm incredibly excited to hear this.  Tim Ferriss is one of my core role models for just about every area of life.  I regularly reread and reference his Four Hour Work Week book and am constantly studying his blog.  In fact, when you read my NetFXHarmonics web site, you are reading Ferriss principles applied to the development world.

He calls himself a life hacker.  What it takes others years to master, he tries to learn in days.  This is the primary purpose of his Trial by Fire show.  It's also something I've been studying for years through my research in accelerated learning and experience induction.  Ferriss sometimes mentions that his technique is to deconstruct, streamline, and remap.  If you read my recent posts on streamlining WCF and WCF in Silverlight, you've seen a taste of how you can apply these principles to development.  It's how I personally think, act, and speak.

For more information on Tim Ferriss, his show, or his book, check out his blog at http://www.fourhourblog.com/.  His blog is essentially an extension to his book, Four Hour Work Week, a book every single person in the world needs to read and reread.  You absolutely must buy this book.  Get it in print or get it in audio, just get it.

Understanding WCF



If you like this document, please consider writing a recommendation for me on my LinkedIn account.


Introduction

One of the most beautiful things about the Windows Communication Foundation (WCF) is that it's a completely streamlined technology.  When you can provide solutions to a myriad of diverse problems using the same principles, you know you're dealing with a work of genius.  This is the case with WCF.  With a single service implementation, you can provide access to ASMX, PHP, Java, TCP, named pipe, and JSON-based services by adding a single XML element for each type of connection you want to support.  On the flip side, with a single WCF client you can connect to each of these types of services, again, by adding a single line of XML for each.  It's that simple and streamlined.  Not only that, this client scenario works the same for both .NET and Silverlight.

In this document, I'm going to talk about how to access WCF services using Silverlight 2 without magic.  There will be no proxies, no generated code, no 3rd party utilities, and no disgusting "Add Service Reference" usage.  Just raw WCF.  This document will cover WCF connectivity in quite some depth.  We will talk about service setup, various WCF, SOA, and Silverlight paradigms, client setup, some security issues, and a few supplemental features and techniques to help you optimize service access.  You will learn about various WCF attributes, some interfaces, and a bunch of internals.  Though this document will be in depth, nothing will ever surpass the depth of MSDN.  So, for a fuller discussion on any topic, see the WCF documentation on MSDN.

Even though we're focusing on Silverlight, most of what will be explained will be discussed in a .NET context and then applied to Silverlight 2.  That is, instead of learning .NET WCF and Silverlight WCF separately, you will learn .NET WCF and how to vary it for Silverlight.  This comparative learning method should help you both remember and understand the concepts better.  Before we begin, though, let's set up a WCF service.  After all, if you don't have a service, we can't talk about accessing it.

Service Setup In Depth

When working with WCF, you are working with a completely streamlined system.  The most fundamental concept in this system is the ABC.  This concept scales from Hello World to the most complex sales processing system.  That is, for all WCF communication, you need an address, a binding, and a contract.  Actually, this is for any communication anywhere, even when talking to another person.  You have to know to whom, how, and what.  If you don't have these three, then there can't be any communication.

With these three pieces of information, you either create a service-side endpoint which a client will access or a client-side channel which the client will use to communicate with the service.

WCF services are set up using a three-step method:

  • First, create a service contract with one or more operation contracts.
  • Second, create a service implementation for those contracts. 
  • Third, configure a service host to provide that implementation with an endpoint for that specific contract.

Let's begin by defining a service contract.  This is just a simple .NET interface with the System.ServiceModel.ServiceContractAttribute attribute applied to it.  This interface will contain various operation contracts, which are simply method signatures with the System.ServiceModel.OperationContractAttribute applied to each.  Both of these attributes are in the System.ServiceModel assembly.

Do not under any circumstances apply the ServiceContract attribute directly to the implementation (i.e. the class).  The ability to do this is probably the absolute worst feature in WCF.  It defeats the entire purpose of using WCF: your address, your binding, your contract, and your implementation are completely separate.  Because this essentially makes your implementation your contract, all your configuration files will be incredibly confusing to those of us who know WCF well.  When I look for a contract, I look for something that starts with an "I".  Don't confuse me with "PersonService" as my contract.  Person service means person… service.  Not only that, but later on you will see how to use a contract to access a service.  It makes no sense to have my service access my service; thus, with the implementation being the contract, your code will look painfully confusing to anyone who knows WCF.

Here's the sample contract that we will use for the duration of this document:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

Keep in mind that when you design for WCF, you need to keep your interfaces as simple as possible.  The general rule of thumb is that you should have somewhere between three and seven operations per service contract.  When you hit the 12-14 mark, it's seriously time to factor out your operations.  This is very important.  As I'll mention again later, in any one of my WCF projects I'll have upwards of dozens of service contracts per service.  You need to continuously keep in mind what your purpose is for creating this service, filtering those purposes through the SOA filter.  Don't design WCF services like you would a framework (and even a framework shouldn't have many access points!).

The Namespace property set on the attribute specifies the namespace used to logically organize services.  Much like how .NET uses namespaces to separate various classes, structs, and interfaces, SOAP services use namespaces to separate various actions.  The namespace may be arbitrarily chosen; the client and service simply must agree on it.  In this case, the namespace is the URI http://www.netfxharmonics.com/service/Contact/2008/11/.  This namespace will also be on the client.  This isn't a physical URL (uniform resource locator), but a logical URI (uniform resource identifier).  Despite what some may say, both terms are in active use in daily life.  Neither is more important than the other and neither is "deprecated".  All URLs are URIs, but not all URIs are URLs, as you can see here.
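In case you're wondering about the Information.Namespace.Contact constant in the attribute above: it's just a centralized string constant, so the namespace is typed in exactly one place.  Below is a minimal sketch; the full version of this class appears in the Creating Streamlined, Simplified, yet Scalable WCF Connectivity document later in this text:

using System;
//+
namespace Contact.Service
{
    public class Information
    {
        //- @Namespace -//
        public class Namespace
        {
            public const String Contact = "http://www.netfxharmonics.com/service/Contact/2008/11/";
        }
    }
}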

Notice that in this interface there is a method signature that returns Person.  Person is a data contract.  Data contracts are classes which have the System.Runtime.Serialization.DataContractAttribute attribute applied to them.  These have one or more data members, which are public or private properties or fields that have the System.Runtime.Serialization.DataMemberAttribute attribute applied to them.  Both of these attributes are in the System.Runtime.Serialization assembly.  This is important to remember; if you forget, you will probably assume them to be in the System.ServiceModel assembly and your contract will never compile.

Notice I said that data members are private or public properties or fields.  That was not a typo.  Unlike the serializer for the System.SerializableAttribute attribute, the serializer for the DataContract attribute allows you to have private data members.  This allows you to hide information from developers, but still allow services to see it.  Related to this is how classes with the DataContract attribute differ from classes with the Serializable attribute.  When you use the Serializable attribute, you are using an opt-out model.  This means that when the attribute is applied to the class, each member is serializable.  You then opt-out particular fields (not properties; thus one major inflexibility) using the System.NonSerializedAttribute attribute.  On the other hand, when you apply the DataContract attribute, you are using an opt-in model.  Thus, when you apply this attribute, you must opt-in each field or property you wish to be serialized by applying the DataMember attribute.
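To make the two models concrete, here's a minimal sketch; the Employee types and members are hypothetical:

using System;
using System.Runtime.Serialization;
//+
//+ opt-out model: everything is serialized unless explicitly excluded
[Serializable]
public class EmployeeOptOut
{
    public String Name;             //+ serialized automatically

    [NonSerialized]
    public String InternalNotes;    //+ opted out (works on fields only, not properties)
}


//+ opt-in model: nothing is serialized unless explicitly included
[DataContract]
public class EmployeeOptIn
{
    //- @Name -//
    [DataMember]
    public String Name { get; set; }            //+ explicitly opted in

    //+ no DataMember attribute; invisible to the serializer
    public String InternalNotes { get; set; }
}

Now to finally look at the Person data contract: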

[DataContract(Namespace = Information.Namespace.Contact)]
public class Person
{
    //- @Guid -//
    [DataMember]
    public String Guid { get; set; }


    //- @FirstName -//
    [DataMember]
    public String FirstName { get; set; }


    //- @LastName -//
    [DataMember]
    public String LastName { get; set; }


    //- @City -//
    [DataMember]
    public String City { get; set; }


    //- @State -//
    [DataMember]
    public String State { get; set; }


    //- @PostalCode -//
    [DataMember]
    public String PostalCode { get; set; }
}

Note how simple this class is.  This is incredibly important.  You need to remember what this class represents: data moving over the wire.  Because of this, you need to make absolutely sure that you are sending only what you need.  Just because your internal "business object" has 10,000 properties doesn't mean that your service client will ever be able to handle it.  You can't get blood from a turnip.  Your business desires will never change the physics of the universe.  You need to design with this specific scenario of service-orientation in mind.  In the case of Silverlight, this is even more important since you are dealing with information that needs to get delegated through a web browser before the plug-in ever sees it.  Not only that, but every time you send an extra property over the wire, you are making your Silverlight application that much less responsive.

When I coach architects on database design, I always remind them to design for the specific system which they'll be using (e.g. SQL Server) and always keep performance, space, and API usability in mind (this is why it's the job of the architect, not the DBA, to design databases!)  In the same way, if you are designing a system that you know will be used over the wire, account for that scenario ahead of time.  Much like security, performance and proper API design aren't "features"; they're core parts of the system.  Do not design 10 different classes, each representing a property which will be used in another class which, in turn, will be serialized and sent over the wire.  This will be so absolutely massive that no one will ever be able to handle it.  If you have more than around 15 properties in your entire object graph, it's seriously time to rethink what you want to send.  And never, ever, ever send an instance of System.Data.DataSet over the wire.  There has never been, is not now, and never will be any reason to ever send any instance of this type anywhere.  It's beyond massive and makes the 10,000-property data transfer object seem lightweight.  The fact that something is serializable doesn't mean that it should be.

This is the main reason you should not apply the Serializable attribute to all classes.  Remember, this attribute follows an opt-out model (and a weak one at that).  If you want your "business objects" to work in your framework as well as over the wire, you need to remove this attribute and apply the DataContract attribute.  This will allow you to specify via the DataMember attribute which properties will be used over the wire, while leaving your existing framework completely untouched.  This is the reason the DataContract attribute exists!  Microsoft realized that the Serializable attribute is not fine-grained enough for SOA purposes.  They also realized that there's no reason to force everyone in the world to write special data transfer objects for every operation.  Even then, use DataContract sparingly.  Just as you should keep as much private as possible and as much internal as possible, you want to keep as much un-serializable as possible.  Less is more.

In my Creating Streamlined, Simplified, yet Scalable WCF Connectivity document, I explain that these contracts are considered public.  That is, both the client and the service need the information.  It's the actual implementation that's private.  The client needs only the above information, whereas the service needs the above information as well as the service implementation.  Therefore, as my document explains, everything mentioned above should be in a publicly accessible assembly separate from the service implementation to maximize flexibility.  This will also allow you to rely on the original contracts instead of relying on a situation where the contracts are converted to metadata over the wire and then converted to sloppily generated contracts.  That's slower, adds latency, adds another point of failure, and completely destroys your hand-crafted, highly-optimized contracts.  Simply add a reference to the same assembly on both the client and server-side and you're done.  If multiple people are using the service, just hand out the public assembly.

At this point, many will try to do what I've just mentioned in a Silverlight environment to find that it doesn't seem to work.  That is, when you try to add a reference to a .NET assembly in a Silverlight project, you will get the following error message:

[Image: DotNetSilverlightReferenceMessageBox - the Visual Studio error dialog shown when adding a .NET assembly reference to a Silverlight project]

Fortunately, this isn't the end of the world.  In my document entitled Reusing .NET Assemblies in Silverlight, I explain that this is only a Visual Studio 2008 constraint.  There's absolutely no technical reason why Silverlight can't use .NET assemblies.  Both the assembly and module formats are the same for Silverlight and .NET.  When you try to reference an assembly in a Silverlight project, Visual Studio 2008 does a check to see what version of mscorlib the assembly references.  If it's not 2.X.5.X, then it says it's not a Silverlight assembly.  So, all you need to do is modify your assembly to have it use the appropriate mscorlib file.  Of course, then it's still referencing the .NET System.ServiceModel and System.Runtime.Serialization assemblies.  Not a big deal, just copy/paste the Silverlight references in.  My aforementioned document explains everything you need to automate this procedure.

Therefore, there's no real problem here at all.  You can reuse all your contracts on both the service-side and on the client-side in both .NET and Silverlight environments.  As you will see a bit later, Silverlight follows an async communication model and, therefore, must use async-compatible service contracts.  At that time you may begin to think that you can't simply have a one-stop shop for all your contract needs.  However, this isn't the case.  As it turns out, .NET can do asynchronous communication too, so when you create that new contract, you can keep it right next to your original service contract.  Thus, once again, you have a single point where you keep all your contracts.
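As a preview, here's a minimal sketch of what such an async-compatible contract looks like; it follows the standard WCF Begin/End asynchronous pattern.  The IPersonServiceAsync name is just an illustrative choice, and the Name property keeps the wire-level contract name the same as the original:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Name = "IPersonService", Namespace = Information.Namespace.Contact)]
    public interface IPersonServiceAsync
    {
        //- BeginGetPersonData -//
        [OperationContract(AsyncPattern = true)]
        IAsyncResult BeginGetPersonData(String personGuid, AsyncCallback callback, Object state);


        //- EndGetPersonData -//
        Person EndGetPersonData(IAsyncResult result);
    }
}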

Moving on to step 2, we need to use these contracts to create an implementation.  The service implementation is just a class which implements a service contract.  The service implementation for our document here is actually incredibly simple:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

That's it.  So, if you already have some logic you know is architecturally sound and you would like to turn it into a service, create an interface for your class and add some attributes to the interface.  That's your entire service implementation.

Step 3 is to configure a service host with the appropriate endpoints.  In our document, we are going to be using an HTTP-based service.  Thus, after we set up a new web site, we create a Person.svc file in the root and add to it a service directive specifying our service implementation.  Here's the entire Person.svc file:

<%@ ServiceHost Service="Contact.Service.PersonService" %>

No, I'm not joking.  If you keep your implementation in this file as well, then you are not using WCF properly.  In WCF, you keep your address, your binding, your contract, and your implementation completely separate.  By putting your implementation in this file, you are essentially tying the address to the implementation.  This defeats the entire purpose of WCF.  So, again, the above code is all that should ever be in any svc file anywhere.  Sometimes you may have another attribute set on your service directive, but this is basically it.

This is an unconfigured service host.  Thus, we must configure it.  We will do this in the service web site's web.config file.  There's really only one step to this, but that one step has a prerequisite.  The step is this: set up a service endpoint; this, however, requires a declared service.  Thus, we will declare a service and add an endpoint to it.  An endpoint specifies the WCF ABC: an address (where), a binding (how), and a contract (what).  Below is the entire web.config file up to this point:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

This states that there can be "basicHttpBinding" communication through Contact.Service.IPersonService at address Person.svc to Contact.Service.PersonService.  Let's quickly cover each concept here.

The specified address is a relative address.  This means that the value of this attribute is appended onto the base address.  In this case the base address is the address specified by our web server.  In the case of a service outside of a web server, you can specify an absolute address here.  But, remember, when using a web server, the web server is going to control the IP address and port bindings.  Our service is at Person.svc; thus, the base URL is already provided for us.  In this case the address is blank, but you will use this address attribute if you add more endpoints, as you will see later.

The binding specifies how the information is to be formatted for transfer.  There's actually nothing too magical about a binding, though.  It's really just a collection of binding elements and pre-configured parameter defaults, which are easily changed in configuration.  Each binding will have, at a minimum, two binding elements.  One of these is a message encoding binding element, which will specify how the message is formatted.  For example, the message could be text (via the TextMessageEncodingBindingElement class; note: binding elements are in the System.ServiceModel.Channels namespace), binary (via the BinaryMessageEncodingBindingElement class), or some other encoding.  The other required binding element is the transport binding element, which specifies how the message is to go over the wire.  For example, the message could go over HTTP (via the HttpTransportBindingElement), HTTPS (via the HttpsTransportBindingElement), TCP (via the TcpTransportBindingElement), or even a bunch of others.  A binding may also have other binding elements to add more features.  I'll mention this again later, when we actually use a binding.
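To see that a binding really is just a stack of binding elements, here's a sketch that hand-assembles a rough equivalent of basicHttpBinding from the two required elements (BasicHttpBinding itself configures a few more defaults than this):

using System.ServiceModel.Channels;
//+
//+ message encoding binding element: text, pinned to SOAP 1.1 like basicHttpBinding
TextMessageEncodingBindingElement encoding = new TextMessageEncodingBindingElement();
encoding.MessageVersion = MessageVersion.Soap11;
//+ transport binding element: plain HTTP
HttpTransportBindingElement transport = new HttpTransportBindingElement();
//+ a binding is essentially just this ordered collection of binding elements
CustomBinding binding = new CustomBinding(encoding, transport);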

The last part of an endpoint, the contract, has already been discussed earlier.  One thing that you really need to remember about this though is that you are communicating through a contract to the hosted service.  If you are familiar with interface-based development in .NET or COM, then you already have a strong understanding of what this means.  However, let's review.

If a class implements an interface, you can access the instantiated object through the interface.  For example, in the following code, you are able to access the Dude object through the ISpeak interface:

interface ISpeak
{
    void Speak(String text);
}


class Dude : ISpeak
{
    public void Speak(String text)
    {
        //+ speak text
    }
}


public class Program
{
    public void Run()
    {
        ISpeak dude = new Dude();
        dude.Speak("Hello");
    }
}

You can think of accessing a WCF service as being exactly like that.  You can push the comparison even further.  Say the Dude class implemented IEat as well.  Then we can access the instantiated Dude object through the IEat interface.  Here's what I mean:

interface ISpeak
{
    void Speak(String text);
}


interface IEat
{
    void Eat(String nameOfFood);
}


class Dude : ISpeak, IEat
{
    public void Speak(String text)
    {
        //+ speak text
    }


    public void Eat(String nameOfFood)
    {
        //+ eat food
    }
}


public class Program
{
    public void Run()
    {
        IEat dude = new Dude();
        dude.Eat("Pizza");
    }
}

In the same way, when configuring a WCF service, you will add an endpoint for each contract through which you would like your service to be accessed.

Though it's beyond the scope of this document, WCF also allows you to version contracts.  Perhaps you added or removed a parameter from your contract.  Unless you want to break all the clients accessing the service, you must keep the old contract applied to your service (read: keep the old interface on the service class) and keep the old endpoint running by setting up a parallel endpoint.
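Here's a minimal sketch of what that looks like; the IPersonService2 contract and its extra parameter are hypothetical:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    //+ hypothetical v2 contract; note the versioned namespace
    [ServiceContract(Namespace = Information.Namespace.Contact + "v2/")]
    public interface IPersonService2
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid, Boolean includeAddress);
    }


    //+ the implementation keeps both interfaces so both endpoints keep working
    public class PersonService : IPersonService, IPersonService2
    {
        //- @GetPersonData (v1) -//
        public Person GetPersonData(String personGuid)
        {
            return GetPersonData(personGuid, false);
        }


        //- @GetPersonData (v2) -//
        public Person GetPersonData(String personGuid, Boolean includeAddress)
        {
            //+ real lookup logic here
            return new Person { Guid = personGuid };
        }
    }
}

Each contract then gets its own endpoint in configuration.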

You will add a new service endpoint every time you change your version, change your contract, or change your binding.  On a given service, you can have dozens of endpoints.  This is a good thing.  Perhaps you provide for four different bindings with two of them having two separate configurations each, three different contracts, and two different versions of one of the contracts.  In this document, we are going to start out with one endpoint and add more later.

Now we have set up a complete service.  However, it's an incredibly simple service setup, thus not requiring too much architectural attention.  When you work with WCF in a real project, you will want to organize your WCF infrastructure to be a bit more architecturally friendly.  In my document entitled Creating Streamlined, Simplified, yet Scalable WCF Connectivity, I explain streamlining and simplifying WCF connectivity and how you can use a private/public project model to simplify your WCF infrastructure.

Architectural Overview: Creating Streamlined, Simplified, yet Scalable WCF Connectivity


Introduction

One of the most awesome things about WCF is that the concepts scale extremely well.  If you understand the ABCs of WCF, then you can do anything from creating a simple Hello World to a complex sales processing service.  It's all based on having an address, a binding, and a contract.  All the other concepts like behaviors, validators, and service factories are simply supplemental to the core of the system.  When you understand the basics, you have the general essence of all of WCF.

Because of this fully-scalable ABC concept, I'm able to use the same pattern for WCF architecture and development for every solution.  This is really nice because it means I don't have to waste time designing a new setup every time a new problem comes along.  In this discussion, I would like to demonstrate how you can create your own extremely efficient WCF solution based on my template.  Along the way, you will also learn a few pieces of WCF internals to help you understand WCF better.

Before I begin the explanation though, keep in mind that most concepts mentioned here are demonstrated in my Minima Blog Engine 3.1.  This is my training software demonstrating an enormous world of modern technologies.  It's regularly refactored and often fully re-architected to be in line with newer technologies.  This blog engine relies heavily on WCF as it's a service-oriented blog engine.  Regardless of how many blogs you have (or any other series of "content entries"-- for example, the documentation section of the Themelia web site is Minima), you have a single set of services that your entire organization uses.  If you understand the WCF usage in Minima, you will understand WCF very well.

Now, onto the meat (or salad, for the vegetarians) of the discussion...

Service Structure

On any one of my solutions, you will find an X.Service project and an X.ServiceImpl project where X is the solution "code" (either the solution name or some other word that represents the essence of the solution).  The former is the public .NET project which contains service contracts, data contracts, service configuration, and service clients.  The latter is the private .NET project which contains the service implementations, behaviors, fault management, service hosts, validators, and other service-side-only, black-box portions of the service.  All projects have access to the former; only the service itself will ever even know about the latter.

This is a very simple setup based upon a public/private model, like in public key cryptography.  The idea is that everything private is protected with all your might and everything public is released for anyone, within the context of the solution, to see.

For example, below is the Minima.Service project for Minima Blog Engine 3.1.  Feel free to ignore the folder structure; no one cares about that.  Just because I make each folder a namespace with prefixed folder names for exclusions doesn't mean anyone else in the world does.  I find it to be the most optimal way to manage namespaced and non-namespaced groups, but the point of this discussion is the separation of concerns in the projects.

[Image: MinimaService - the Minima.Service project structure]

For the time being, simply notice that data contracts and service contracts are considered public.  Everything else in the project is either meaningless to this discussion or will be discussed later.

Here is the Minima.ServiceImpl project for the same solution:

[Image: MinimaServiceImpl - the Minima.ServiceImpl project structure]

Here you can see anything from the host factory to various behaviors to validators to fault management to LINQ-to-SQL capabilities.  Everything here is considered private.  The outside world doesn't need to know, therefore shouldn't know.

For the sake of a more simplified discussion, let's switch from the full-scale solution example of Minima to a smaller "Person" service example.  This "Person" service is part of the overall "Contact" solution.  Here's the Contact.Service for our Contact solution:

[Image: PersonService - the Contact.Service project structure]

As you can see, you have a standard data contract, a service contract, and a few other things, which will be discussed in a bit.

For this example, we don't need validators, fault management, behaviors, or host factories; all we need is a simple service in our Contact.ServiceImpl project:

[Image: PersonServiceImpl - the Contact.ServiceImpl project structure]

By utilizing this model, separating the private from the public, you can easily send the Contact.Service assembly to anyone you want without requiring them to create their own client proxy or use the painfully horrible code generated by "Add Service Reference", which ranks in my list as one of the worst code generators right next to FrontPage 95 and Word 2000.

As a side note, I should mention that I have colleagues who actually take this a step further and make an X.Service.Client project which houses the WCF client classes.  They are basically following a Service/Client/Implementation model whereas I'm following a Public/Private model.  Just use whichever model makes sense for you.

The last piece needed in this WCF setup is the host itself.  This is just a matter of creating a new folder for the web site root, adding a web.config, and adding an X.svc file.  Period.  This is the entire service web site.

In the Person service example, the service host has only two files: web.config and Person.svc.

Below is the web.config, which declares two endpoints for the same service.  One of the endpoints is a plain, old-fashioned ASMX-style "basic profile" endpoint and the other is one to be used for JSON connectivity.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.serviceModel>
    <behaviors>
      <endpointBehaviors>
        <behavior name="JsonEndpointBehavior">
          <enableWebScript />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="Contact.Service.PersonService">
        <endpoint address="json" binding="webHttpBinding" contract="Contact.Service.IPersonService" behaviorConfiguration="JsonEndpointBehavior" />
        <endpoint address="" binding="basicHttpBinding" contract="Contact.Service.IPersonService" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

The Person.svc is even more basic:

<%@ ServiceHost Service="Person.Service.PersonService" %>

This type of solution scales to any level.  If you want to add a service, just add a new X.svc file and register it as a new service.  If you want to add another endpoint, just add that line.  It's incredibly simple and scales to solutions of any size and even works well with non-HTTP services like netTcpBinding services.

Service Metadata

Now let's look at each of these files to see how they are optimized.  First, let's look at the data contract, Person:

using System;
using System.Runtime.Serialization;
//+
namespace Contact.Service
{
    [DataContract(Namespace = Information.Namespace.Contact)]
    public class Person
    {
        //- @Guid -//
        [DataMember]
        public String Guid { get; set; }


        //- @FirstName -//
        [DataMember]
        public String FirstName { get; set; }


        //- @LastName -//
        [DataMember]
        public String LastName { get; set; }


        //- @City -//
        [DataMember]
        public String City { get; set; }


        //- @State -//
        [DataMember]
        public String State { get; set; }


        //- @PostalCode -//
        [DataMember]
        public String PostalCode { get; set; }
    }
}

Everything about this file should be self-explanatory.  There's a data contract attribute on the class and data member attributes on each member.  Simple.  But what's with the data contract namespace?

Well, earlier this year, a co-architect of mine mentioned to me that you can centralize your namespaces in static locations.  Genius.  No more typing the same namespace on each and every data and service contract.  Thus, the following file is included in each of my projects:

using System;
//+
namespace Contact.Service
{
    public class Information
    {
        //- @NamespaceRoot -//
        public const String NamespaceRoot = "http://www.netfxharmonics.com/service/";


        //+
        //- @Namespace -//
        public class Namespace
        {
            public const String Contact = Information.NamespaceRoot + "Contact/2008/11/";
        }
    }
}

If there are multiple services in a project, and in 95%+ of the situations there will be, then you can simply add more namespace constants to the Namespace class and reference them from your data and service contracts.  Thus, you never, EVER have to update your service namespaces in more than one location.  You can see this in the service contract as well:

using System;
using System.ServiceModel;
//+
namespace Contact.Service
{
    [ServiceContract(Namespace = Information.Namespace.Contact)]
    public interface IPersonService
    {
        //- GetPersonData -//
        [OperationContract]
        Person GetPersonData(String personGuid);
    }
}

This service contract doesn't get much simpler.  There's no reason to discuss it any longer.

Service Implementation

For the sake of our discussion, I'm not going to talk very much at all about the service implementation.  If you want to see a hardcore implementation, go look at my Minima Blog Engine 3.1.  You will see validators, fault management, operation behaviors, message headers, and on and on.  You will seriously learn a lot from the Minima project.

For our discussion, here's our Person service:

using System;
//+
namespace Contact.Service
{
    public class PersonService : Contact.Service.IPersonService
    {
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return new Person
            {
                FirstName = "John",
                LastName = "Doe",
                City = "Unknown",
                Guid = personGuid,
                PostalCode = "66062",
                State = "KS"
            };
        }
    }
}

Not too exciting, eh?  But as it is, this is all that's required to create a WCF service implementation.  Just create an everyday ol' class and implement a service contract.

As I've mentioned, though, you could do a ton more in your private service implementation.  To give you a little idea, say you wanted to make absolutely sure that no one turned off metadata exchange for your service.  This is something that I do for various projects and it's incredibly straightforward: just create a service host factory, which creates the service host and programmatically adds endpoints and modifies behaviors.  Given that WCF is an incredibly streamlined system, I'm able to add other endpoints or other behaviors in the exact same way.

Here's what I mean:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
//+
namespace Contact.Service.Activation
{
    public class PersonServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        //- @CreateServiceHost -//
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            ServiceHost host = new ServiceHost(typeof(PersonService), baseAddresses);
            //+ add metadata exchange
            ServiceMetadataBehavior serviceMetadataBehavior = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
            if (serviceMetadataBehavior == null)
            {
                serviceMetadataBehavior = new ServiceMetadataBehavior();
                host.Description.Behaviors.Add(serviceMetadataBehavior);
            }
            serviceMetadataBehavior.HttpGetEnabled = true;
            ServiceEndpoint serviceEndpoint = host.Description.Endpoints.Find(typeof(IMetadataExchange));
            if (serviceEndpoint == null)
            {
                host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex");
            }
            //+
            return host;
        }
    }
}

Then, just modify your service host line:

<%@ ServiceHost Service="Contact.Service.PersonService" Factory="Contact.Service.Activation.PersonServiceHostFactory" %>

My point in mentioning that is to demonstrate how well your service will scale based upon a properly set up base infrastructure.  Each thing that you add will require a linear amount of work.  This isn't like WSE where you need three doctorates in order to modify the slightest thing.

Service Client without Magic

At this point, a few of my colleagues end their X.Service project and begin an X.Service.Client project.  That's fine.  Keeping the client away from the service metadata is nice, but it's not required.  So, for the sake of this discussion, let's continue with the public/private model instead of the service/client/implementation model.

Thankfully, I have absolutely no colleagues who would ever hit "Add Service Reference".  I've actually never worked with a person who does this either.  This is good news because, as previously mentioned, the code generated by the add service reference is a complete disaster.  You also can't modify it directly without risking your changes being overwritten.  Even then, the code is SO bad, you wouldn't want to edit it.  There's no reason to have hundreds or THOUSANDS of lines of code to create a client to your service.

A much simpler and much more manageable solution is to keep your code centralized and DRY (don't repeat yourself).  To do this, you just need to realize that the publicly accessible X.Service project already contains the vast majority of everything needed to create a WCF client.  Because of this, you may access the service using WCF's built-in mechanisms.  You don't need anything fancy; it's all built right in.

Remember, WCF simply requires the ABC: an address, a binding, and a contract.  On the client-side, WCF uses this information to create a channel.  This channel implements your service contract, thus allowing you to directly access your WCF service without any extra work required.  You already have the contract, so all you have to do is declare the address and binding.  Here's what I mean:

//+ address
EndpointAddress endpointAddress = new EndpointAddress("http://localhost:1003/Person.svc");
//+ binding
BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
//+ contract
IPersonService personService = ChannelFactory<IPersonService>.CreateChannel(basicHttpBinding, endpointAddress);
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

No 3,000-line generated client code, no excess classes, no configuration.  Just direct access to your service.

Of course, if you want to use a configuration file, that's great too.  All you need to do is create a system.serviceModel section in your project or web site and declare your service endpoint.  What's really nice about this in WCF is that, since the concepts of WCF are the same ABCs on both the client and the server, most of the configuration is already done for you.  You can just copy and paste the endpoint from the service configuration to your client configuration and add a name attribute.

<system.serviceModel>
  <client>
    <endpoint name="PersonServiceBasicHttpBinding" address="http://localhost:1003/Person.svc" binding="basicHttpBinding" contract="Simple.Service.IPersonService" />
  </client>
</system.serviceModel>

For the most part, you will also need to bring over any binding configurations from the service side as well, though you wouldn't copy over validation information.

At this point you could just change the previously used WCF channel code to use a named endpoint, but hard-coding that name is an architectural disaster.  You never want to compile configuration information into your system.  Instead of tying them directly, I like to create a custom configuration for my solution to allow the endpoint configuration to change.  For the sake of this discussion though, let's just use appSettings (do NOT rely on appSettings for everything!  That's a cop-out.  Create a custom configuration section!).  Here's our appSetting:

<appSettings>
  <add key="PersonServiceActiveEndpoint" value="PersonServiceBasicHttpBinding" />
</appSettings>
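As an aside, a custom configuration section doesn't take much work.  Here's a minimal sketch with hypothetical names; you'd register it under configSections and read it with ConfigurationManager.GetSection:

using System;
using System.Configuration;
//+
namespace Contact.Service
{
    public class ContactServiceSection : ConfigurationSection
    {
        //- @ActivePersonServiceEndpoint -//
        [ConfigurationProperty("activePersonServiceEndpoint", IsRequired = true)]
        public String ActivePersonServiceEndpoint
        {
            get { return (String)this["activePersonServiceEndpoint"]; }
            set { this["activePersonServiceEndpoint"] = value; }
        }
    }
}

For this discussion, though, we'll stick with the appSettings entry above.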

At this point, I can explain that ServiceConfiguration class found in the public Contact.Service project:

using System;
//+
namespace Contact.Service
{
    public static class ServiceConfiguration
    {
        //- @ActivePersonServiceEndpoint -//
        public static String ActivePersonServiceEndpoint
        {
            get
            {
                return System.Configuration.ConfigurationManager.AppSettings["PersonServiceActiveEndpoint"] ?? String.Empty;
            }
        }
    }
}

As you can see, this class just gives me strongly-typed access to my endpoint name, thus allowing me to access my WCF endpoint via a loosely coupled configuration:

//+ configuration and contract
IPersonService personService = new ChannelFactory<IPersonService>(ServiceConfiguration.ActivePersonServiceEndpoint).CreateChannel();
//+ just use it!
Person person = personService.GetPersonData("F488D20B-FC27-4631-9FB9-83AF616AB5A6");
String firstName = person.FirstName;

When I feel like using a different endpoint, I don't modify the client endpoint, but just add another one with a new name and update the appSettings pointer (if you're into fancy names, this is essentially the bridge pattern).

Now, while this is a great way to directly access data by people who understand WCF architecture, it's probably not a good idea to allow your developers to have direct access to system internals.  Entry-level developers need to focus on the core competency of your company (e.g. sales, marketing, data management), not ponder the awesomeness of system internals.  Thus, to allow other developers to work on a solution without having to remember WCF internals, I normally create two layers of abstraction on top of what I've already shown.

For the first layer, I create a concrete ClientBase class to hide the WCF channel mechanics.  This is the type of class that "Add Service Reference" would have created if your newbie developers accidentally used it.  However, the one we will create won't have meaningless attributes or virtually unmanageable excess code.

Below is our entire client class:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
//+
namespace Contact.Service
{
    public class PersonClient : System.ServiceModel.ClientBase<IPersonService>, IPersonService
    {
        //- @Ctor -//
        public PersonClient(String endpointConfigurationName)
            : base(endpointConfigurationName) { }


        //+
        //- @GetPersonData -//
        public Person GetPersonData(String personGuid)
        {
            return Channel.GetPersonData(personGuid);
        }
    }
}

The pattern here is incredibly simple: create a class which inherits from System.ServiceModel.ClientBase<IServiceContractName> and implements your service contract.  When you implement the class, the only implementation you need to add is a call to the base channel.  In essence, all this class does is accept calls and pass them off to a pre-created channel.  When you add a new operation to your service contract, just implement the interface and add a single line of connecting code to wire up the client class.

The channel creation mechanics that I demonstrated earlier are now provided automatically by the ClientBase class.  You can also modify this class a little by bridging up to a total of 10 different constructors provided by ClientBase.  For example, the following constructor call will allow developers to specify a specific binding and endpoint:

public PersonClient(Binding binding, EndpointAddress address)
    : base(binding, address) { }

At this point, we have something that protects developers from having to remember how to create a channel.  However, they still have to mess with configuration names and must remember to dispose of this client object (there's an open channel, remember).  Therefore, I normally add another layer of abstraction.  This one will be directly accessible for developer use.

This layer consists of a series of service agents.  Each service has its own agent, which is essentially a series of static methods providing the most efficient means of making a service call.  Here's what I mean:

using System;
//+
namespace Contact.Service
{
    public static class PersonAgent
    {
        //- @GetPersonData -//
        public static Person GetPersonData(String personGuid)
        {
            using (PersonClient client = new PersonClient(ServiceConfiguration.ActivePersonServiceEndpoint))
            {
                return client.GetPersonData(personGuid);
            }
        }
    }
}

As you can see, the pre-configured service endpoint is automatically used and the PersonClient is automatically disposed at the end of the call (ClientBase<T> implements IDisposable).  If you want to use a different service endpoint, then just change it in your appSettings configuration.
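One caveat worth noting: Dispose on a WCF client calls Close, and Close throws if the channel has faulted, which can mask the original exception.  If you care about that edge case, a slightly more defensive agent method looks something like this sketch (PersonAgentDefensive is a hypothetical variant of the agent above):

using System;
//+
namespace Contact.Service
{
    public static class PersonAgentDefensive
    {
        //- @GetPersonData -//
        public static Person GetPersonData(String personGuid)
        {
            PersonClient client = new PersonClient(ServiceConfiguration.ActivePersonServiceEndpoint);
            try
            {
                Person person = client.GetPersonData(personGuid);
                client.Close();    //+ graceful close of the underlying channel
                return person;
            }
            catch
            {
                client.Abort();    //+ a faulted channel can't be closed; tear it down
                throw;
            }
        }
    }
}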

Conclusion

At this point, I've explained every class in each project of my WCF service project model.  It's up to you to decide how to best create and manage your data and service contracts as well as clients.  But, if you want a streamlined, efficient model for all your service projects, you will want to create a publicly accessible project to house all your reusable elements.

Also, remember you don't need a full-on client class to access a service.  WCF communicates with channels and channel creation simply requires an address, binding, and contract.  If you have that information, just create your channel and make your call.  You can abstract the internals of this by using a ClientBase object, but this is entirely optional.  If the project you are working on requires hardcore WCF knowledge, there's no reason to pretty it up.  However, if non-WCF experts will be working with your system, abstractions are easy to create.


Creating JavaScript Components and ASP.NET Controls



Every now and again I'll actually meet someone who realizes that you don't need a JavaScript framework to make full-scale AJAX applications happen… but rarely in the Microsoft community.  Most people think you need Prototype, jQuery, or the ASP.NET AJAX framework in order to do anything from networking calls, DOM building, or component creation.  Obviously this isn't true.  In fact, when I designed the Brainbench AJAX exam, I specifically designed it to test how effectively you can create your own full-scale JavaScript framework (now how well the AJAX developer did on following my design, I have no idea).

So, today I would like to show you how you can create your own strongly-typed ASP.NET-based JavaScript component without requiring a full framework.  Why would you not have Prototype or jQuery on your web site?  Well, you wouldn't.  Even Microsoft-oriented AJAX experts recognize that jQuery provides an absolutely incredible boost to their applications.  However, when it comes to my primary landing page, I need that to be extremely tiny.  Thus, I rarely include jQuery or Prototype on that page (remember, Google makes EVERY page a landing page, but I mean the PRIMARY landing page.)

JavaScript Component

First, let's create the JavaScript component.  When dealing with JavaScript, if you can't do it without ASP.NET, don't try it in ASP.NET.  You only use ASP.NET to help package the component and make it strongly-typed.  If the implementation doesn't work, then you have more important things to focus on.

Generally speaking, here's the template I follow for any JavaScript component:

window.MyNamespace = window.MyNamespace || {};
//+
//- MyComponent -//
MyNamespace.MyComponent = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this.host = init.host;
                //+
                this.DOMElement = $(this.host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this.host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.myParameter) {
                this.myParameter = init.myParameter;
            }
            else {
                throw 'myParameter is required.';
            }
        }
    }
    ctor.prototype = {
        //- myfunction -//
        myfunction: function(t) {
        }
    };
    //+
    return ctor;
})( );

You may then create the component like the following anywhere in your page:

new MyNamespace.MyComponent({
    host: 'hostName',
    myParameter: 'stuff here'
});

Now on to see a sample component, but, first, take note of the following shortcuts, which allow us to save a lot of typing:

var DOM = document;
var $ = function(id) { return document.getElementById(id); };

Here's a sample Label component:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            //+ validate and save DOM host
            if (init.host) {
                this._host = init.host;
                //+
                this.DOMElement = $(this._host);
                if(!this.DOMElement) {
                    throw 'Element with id of ' + this._host + ' is required.';
                }
            }
            else {
                throw 'host is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        this.setText(this._initialText);
    }
    ctor.prototype = {
        //- myfunction -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

With the above JavaScript code and "<div id="host"></div>" somewhere in the HTML, we can use the following to create an instance of a label:

window.lblText = new Controls.Label({
    host: 'host',
    initialText: 'Hello World'
});

Now, if we had a button on the screen, we could handle its click event and use that to set the text of the label, as follows:

<div>
    <div id="host"></div>
    <input id="btnChangeText" type="button" value="Change Value" />
</div>
<script type="text/javascript" src="Component.js"></script>
<script type="text/javascript">
    //+ in reality you would use the dom ready event, but this is quicker for now
    window.onload = function( ){
        window.lblText = new Controls.Label({
            host: 'host',
            initialText: 'Hello World'
        });
         window.btnChangeText = $('btnChangeText');
         //+ in reality you would use a multi-cast event
         btnChangeText.onclick = function( ) {
            lblText.setText('This is the new text');
         };
    };
</script>

Thus, components are simple to work with.  You can do this with anything from a simple label to a windowing system to a marquee to any full-scale custom solution.

ASP.NET Control

Once the component works, you may then package the HTML and strongly-type it for ASP.NET.  The steps to doing this are very simple and once you do it, you can just repeat the simple steps (sometimes with a simple copy/paste) to make more components.

First, we need to create a .NET class library and add the System.Web assembly.   Next, add the JavaScript component to the .NET class library.

Next, in order to make the JavaScript file usable by your class library, you need to make sure it's set as an Embedded Resource.  In Visual Studio 2008, you do this by going to the properties window of the JavaScript file and changing the Build Action to Embedded Resource.

Then, you need to bridge the gap between the ASP.NET and JavaScript world by registering the JavaScript file as a web resource.  To do this you register an assembly-level WebResource attribute with the location and content type of your resource.  This is typically done in AssemblyInfo.cs.  The attribute pattern looks like this:

[assembly: System.Web.UI.WebResource("AssemblyName.FolderPath.FileName", "ContentType")]

Thus, if I were registering a JavaScript file named Label.js in the JavaScript.Controls assembly, under the _Resource/Controls folder, I would register my file like this:

[assembly: System.Web.UI.WebResource("JavaScript.Controls._Resource.Label.js", "text/javascript")]

Now, it's time to create a strongly-typed ASP.NET control.  This is done by creating a class which inherits from the System.Web.UI.Control class.  Every control in ASP.NET, from the TextBox to the GridView, inherits from this base class.

When creating this control, we want to remember that our JavaScript control contains two required parameters: host and initialText.  Thus, we need to add these to our control as properties and validate these on the ASP.NET side of things.

Regardless of your control though, you need to tell ASP.NET what files you would like to send to the client.  This is done with the Page.ClientScript.RegisterClientScriptResource method, which accepts a type and the name of the resource.  Most of the time, the type parameter will just be the type of your control.  The name of the resource must match the web resource name you registered in AssemblyInfo.  This registration is typically done in the OnPreRender method of the control.

The last thing you need to do with the control is the most obvious: do something.  In our case, we need to write the client-side initialization code to the client.

Here's our complete control:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @HostName -//
        public String HostName { get; set; }


        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(HostName))
            {
                throw new InvalidOperationException("HostName must be set");
            }
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            host: '" + HostName + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

The code written to the client may look kind of crazy, but that's because it's written very carefully.  First, notice it's wrapped in a script tag.  This is required.  Next, notice all the code is wrapped in a (function( ) { }) ( ) block.  This is a JavaScript containment technique.  It basically means that anything defined in it exists only for the time of execution.  In this case it means that the onLoad variable exists inside the function and only inside the function, and thus will never conflict outside of it.  Next, notice I'm attaching the onLoad logic to the window.load event.  This isn't technically the correct way to do it, but it's the way that requires the least code and is only there for the sake of the example.  Ideally, we would write some sort of event helper (or use a prewritten one) which would allow us to bind handlers to events without having to check if we are using the lameness known as Internet Explorer (it uses window.attachEvent while real web browsers use addEventListener).

Now, having this control, we can then compile our assembly, add a reference to our web site, and register the control with our page or our web site.  Since this is a "Controls" namespace, it has the feel that it will contain multiple controls, thus it's best to register it in web.config for the entire web site to use.  Here's how this is done:

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="c" assembly="JavaScript.Controls" namespace="JavaScript.Controls" />
      </controls>
    </pages>
  </system.web>
</configuration>

Now we are able to use the control in any page on our web site:

<c:Label id="lblText" runat="server" HostName="host" InitialText="Hello World" />

As mentioned previously, this same technique for creating, packaging, and strongly-typing JavaScript components can be used for anything.  Having said that, the example I have just provided borders on the raw definition of useless.  No one cares about a stupid host-controlled label.

If you don't want a host-model, but prefer the in-place model, you need to change a few things.  After the changes, you'll have a template for creating any in-place control.

First, remove anything referencing a "host".  This includes client-side validation as well as server-side validation and the Control's HostName property.

Next, put an ID on the script tag.  This ID will be the ClientID suffixed with "ScriptHost" (or whatever you want).  Then, you need to inform the JavaScript control of the ClientID.

Your ASP.NET control should basically look something like this:

using System;
//+
namespace JavaScript.Controls
{
    public class Label : System.Web.UI.Control
    {
        internal static Type _Type = typeof(Label);


        //+
        //- @InitialText -//
        public String InitialText { get; set; }


        //+
        //- @OnPreRender -//
        protected override void OnPreRender(EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptResource(_Type, "JavaScript.Controls._Resource.Label.js");
            //+
            base.OnPreRender(e);
        }


        //- @Render -//
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            if (String.IsNullOrEmpty(InitialText))
            {
                throw new InvalidOperationException("InitialText must be set");
            }
            writer.Write(@"
<script type=""text/javascript"" id=""" + this.ClientID + @"ScriptHost"">
(function( ) {
    var onLoad = function( ) {
        window." + ID + @" = new Controls.Label({
            id: '" + this.ClientID + @"',
            initialText: '" + InitialText + @"'
        });
    };
    if (window.addEventListener) {
        window.addEventListener('load', onLoad, false);
    }
    else if (window.attachEvent) {
        window.attachEvent('onload', onLoad);
    }
})( );
</script>
");
            //+
            base.Render(writer);
        }
    }
}

Now you just need to make sure the JavaScript control knows that it needs to place itself where it has been declared.  To do this, you just create a new element and insert it into the browser DOM immediately before the current script block.  Since we gave the script block an ID, this is simple.  Here's basically what your JavaScript should look like:

window.Controls = window.Controls || {};
//+
//- Controls -//
Controls.Label = (function( ) {
    //- ctor -//
    function ctor(init) {
        if (init) {
            if (init.id) {
                this._id = init.id;
                //+
                this.DOMElement = DOM.createElement('span');
                this.DOMElement.setAttribute('id', this._id);
            }
            else {
                throw 'id is required.';
            }
            //+ validate and save parameters
            if (init.initialText) {
                this._initialText = init.initialText;
            }
            else {
                throw 'initialText is required.';
            }
        }
        //+
        var scriptHost = $(this._id + 'ScriptHost');
        scriptHost.parentNode.insertBefore(this.DOMElement, scriptHost);
        this.setText(init.initialText);
    }
    ctor.prototype = {
        //- setText -//
        setText: function(text) {
            if(this.DOMElement.firstChild) {
                this.DOMElement.removeChild(this.DOMElement.firstChild);
            }
            this.DOMElement.appendChild(DOM.createTextNode(text));
        }
    };
    //+
    return ctor;
})( );

Notice that the JavaScript control constructor creates a span with the specified ID, grabs a reference to the script host, inserts the element immediately before the script host, then sets the text.

Of course, now that we have made these changes, you can just throw something like the following into your page to use your in-place JavaScript control without ASP.NET.  It would look something like this:

<script type="text/javascript" id="lblTextScriptHost">
    window.lblText = new Controls.Label({
        id: 'lblText',
        initialText: 'Hello World'
    });
</script>

So, you can create your own JavaScript components without requiring jQuery or Prototype dependencies.  But if you are using jQuery or Prototype (and you should be!-- even if you are using ASP.NET AJAX, that's not a full JavaScript framework), then you can use this same ASP.NET control technique to package all your controls.


Cross-Browser JavaScript Tracing



No matter what system you are working with, you always need mechanisms for debugging.  One of the most important mechanisms a person can have is tracing.  Being able to see trace output from various places in your application is vital.  This is especially true with JavaScript.  I've been working with JavaScript since 1995, making stuff 12 years ago that would still be interesting today (in fact, I didn't know server-side development existed until 1998!), and I have noticed a clear correlation between the complexity of JavaScript applications and the absolute need for tracing.

Thus, a long, long time ago I built a tracing utility that would help me view all the information I need (and absolutely no more or less).  These days this means being able to trace information to a console, dump arrays and objects, and be able to view line-numbered information for future reference.  The utility I've created has since been added to my Themelia suite (pronounced the-meh-LEEUH; as in thistle or the name Thelma), but today I would like to demonstrate it and deliver it separately.

The basis of my tracing utility is the Themelia.Trace namespace.  In this namespace is… wait… what?  You're sick of listening to me talk?  Fine.  Here's the sample code which demonstrates the primary uses of Themelia.Trace; treat this as your reference documentation:

//+ enables tracing
Themelia.Trace.enable( );
//+ writes text
Themelia.Trace.write('Hello World!');
//+ writes a blank line
Themelia.Trace.addNewLine( );
//+ writes a numbered line
Themelia.Trace.writeLine('…and Hello World again!');
Themelia.Trace.writeLine('Another line…');
Themelia.Trace.writeLine('Yet another…');
Themelia.Trace.writeLine('One more…');
//+
//++ label
//+ writes labeled data to output (e.g. 'variableName (2)')
Themelia.Trace.writeLabeledLine('variableName', 2);
//+
//++ buffer
//+ creates a buffer
var buffer = new Themelia.Trace.Buffer( );
//+ declares beginning of new segment
buffer.beginSegment('Sample');
//+ writes data under specific segment
buffer.write('data here');
//+ nested segment
buffer.beginSegment('Array Data');
//+ write array to buffer
var a = [1,2,3,4,5];
buffer.write(a);
//+ declares end of segment
buffer.endSegment('Array Data');
buffer.beginSegment('Object Data');
//+ write raw object/JSON data
buffer.write({
    color: '#0000ee',
    fontSize: '1.1em',
    fontWeight: 'bold'
});
buffer.endSegment('Object Data');
//+ same thing again
buffer.beginSegment('Another Object');
var o = {
    'personId': 2,
    name: 'david'
};
buffer.write(o);
buffer.endSegment('Another Object');
buffer.endSegment('Sample');
//+ writes all built-up data to output
buffer.flush( );

Notice a few things about this reference sample:

  • First, you must use Themelia.Trace.enable( ) to turn tracing on.  In a production application, you would just comment this line out.
  • Second, Themelia.Trace.writeLine prefixes each line with a line number.  This is especially helpful when dealing with all kinds of async stuff floating around or when dealing with crazy events.
  • Third, you may use Themelia.Trace.writeLabeledLine to output data while giving it a name like "variableName (2)".
  • Fourth, if you want to run a tracer through your application and only later on have output, create an instance of Themelia.Trace.Buffer, write text to it, write an array to it, or write an object to it, then call flush( ) to send the data to output.  You may also use beginSegment and endSegment to create nested, indented portions of the output.
  • Fifth, notice you can throw entire arrays of objects/JSON into buffer.write( ) to write it to the screen.  This is especially handy when you want to trace your WCF JSON messages.

Trace to what?

Not everyone knows this, but Firefox, Google Chrome, Safari, and Opera each have their own console for output.  Themelia.Trace works with each console in its own way.  Here are some screen shots to show you what I mean:

Firefox

Firefox has had the Firefox Console since version 1.0, which allows you to write just about anything to a separate window.  I did a video on this many years ago, and last year I posted a quick "did you know"-style blog post on it, so there's no reason for me to cover it again here.  Just watch my Introduction to the Firefox Console for a detailed explanation of using the Firefox Console (you may also opt to watch my Setting up your Firefox Development Environment-- it should seriously help you out).

Firefox

Google Chrome

Chrome does things a little differently than any other browser.  Instead of having a "browser"-wide console, each tab has its own console.  Notice "browser" is in quotes.  Technically, each tab in Chrome is its own mini browser, so this console-per-tab model makes perfect sense.  To access this console, just hit Alt-` on a specific tab.

Chrome

Safari

In Safari, go to Preferences, then to the Advanced tab, and check "Show Develop menu in menu bar".  When you do this, you will see the Develop menu show up.  The output console is at Develop -> Show Web Inspector.

Safari

Opera

In Opera 9, you go to Tools -> Advanced -> Developer Tools and you will see a big box show up at the bottom.  The console is the Error Console tab.

Opera9

Internet Explorer

To use Themelia.Trace with Internet Explorer, install Nikhil's Web Developer Helper.  This is different from the IE Developer Toolbar.

IEWebDevHelper

Firebug

It's important to note that, in many situations, it's actually more effective to rely on Firebug for Firefox, or Firebug Lite for Safari/Chrome, IE, and Opera, than to use a console directly.  Therefore, Themelia.Trace allows you to set Themelia.Trace.alwaysUseFirebug to true and have all output redirected to Firebug instead of the default console.  Just try it: use the above sample, but put "Themelia.Trace.alwaysUseFirebug = true;" above it.  All data will redirect to Firebug.  Here's a screen shot (this looks basically the same in all browsers):

FirebugLite

There you have it.  A cross-browser solution to JavaScript tracing.

Links

Love Sudoku? Love brain puzzles? Check out my new world-wide Sudoku competition web site, currently in beta, at Sudokian.com.


Minima 3.1 Released



I've always thought that one of the best ways to learn or teach a series of technologies is to create either a photo gallery, some forum software, or a blog engine.  Thus, to aid in teaching various areas of .NET (and to have full control over my own blog), I created Minima v1.  Minima v2 came on the scene adding many new features and showing how LINQ can help make your DAL shine.  Then, Minima v3 showed up and demonstrated an enormous load of technologies as well as demonstrating proper architectural principles.

Well, Minima 3.1 is an update to Minima 3.0 and it's still here to help people see various technologies in action.  However, while Minima 3.1 adds various features to the Minima 3.0 base, it's primarily important to note that it's also the first major Themelia 2.0 application (Minima 3.0 was built on Themelia 1.x).  As such, not only is it a prime example of many technologies ranging from WCF to LINQ to ASP.NET controls to custom configuration, it's also a good way to see how Themelia provides a component model to the web.  In fact, Minima 3.1 is technically a Themelia 2.0 plug-in.

Here's a quick array of new blog features:

  • It's built on Themelia 2.0.  I've already said this, but it's worth mentioning again.  This isn't a classic ASP.NET application.  It's the first Themelia 2.0 application.
  • Minima automatically creates indexes (table of contents) to allow quick viewing of your site
  • Images are now stored in SQL Server as a varbinary(max) instead of as a file.
  • Themelia CodeParsers are used to automatically turn codes like {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6}}} into a link to a blog entry, {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6|Click here for stuff}}} into a renamed blog entry link, and {Minima{AmazonAffiliate{0875526438}}} into an Amazon.com product link with your configured (see below) affiliate ID.
  • Minima now has its own full custom configuration.  Here's an example:
<minima.blog entriesToShow="7" domain="http://www.tempuri.org/">
  <service>
    <authentication defaultUserName="jdoe@tempuri.org" defaultPassword="blogpassword"/>
    <endpoint author="AuthorServiceWs2007HttpBinding" blog="BlogServiceWs2007HttpBinding" comment="CommentServiceWs2007HttpBinding" image="ImageServiceWs2007HttpBinding" label="LabelServiceWs2007HttpBinding" />
  </service>
  <suffix index="Year Review" archive="Blog Posts" label="Label Contents" />
  <display linkAuthorsToEmail="false" blankMessage="There are no entries in this view." />
  <comment subject="New Comment on Blog" />
  <codeParsers>
    <add name="AmazonAffiliate" value="net05c-20" />
  </codeParsers>
</minima.blog>

  • In addition to the normal MinimaComponent (the Themelia component which renders the blog with full interactivity), you may also use the MinimaProxyComponent to view single entries.  For example, you just add a BlogEntryProxy component to a web form and set either the blog entry guid or the blog guid plus the link, and your blog entry will show.  I built this feature in to allow Minima to be used for more than just blogging; it's a content stream.  With this feature I can keep every single page of a web site inside of Minima and no one will ever know.  There's also the MinimaViewerComponent which renders a read-only blog.  This means no rsd.xml, no site map, no commenting, no editing, just viewing.  In fact, the Themelia web site uses this component to render all its documentation.
  • There is also support for adding blog post footers.  Minima 3.1 ships with a FeedBurner footer to provide the standard FeedBurner footer.  See the "implementing" section below for more info.

As a Training Tool

Minima is often used as a training tool for introductory, intermediate, and expert-level .NET.

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Minima 3.1 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

Architecture

Probably the most important thing to learn from Minima is architecture.  Minima is built to provide great flexibility.  However, that's not for the faint of heart.  I heard one non-architect and obvious newbie say that it was "over architected".  According to this person, apparently, adding security to your WCF services to protect your private information is "over architecting" something (not to mention the fact that WCF enforces security for username authentication).

In any case, Minima is split into two parts: the service and the web site.  I use Minima in many places, but for all my blogs (or, more accurately, content streams) I have a single centralized, well-protected service set.  All my internal web sites access this central location via the WCF NetNamedPipeBinding.
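For example, the client side of that named pipe connection is just one more endpoint in the client's config.  Here's a minimal sketch (the address and contract name here are hypothetical, not Minima's actual values):

<system.serviceModel>
  <client>
    <endpoint address="net.pipe://localhost/Minima/BlogService"
              binding="netNamedPipeBinding"
              contract="Minima.Service.IBlogService" />
  </client>
</system.serviceModel>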

Implementing

Minima is NOT your everyday blog engine.  If your company needs a blog engine for various people on the team, get Community Server.  Minima isn't for you.  Minima allows you to plop a blog into any existing web site.  For example, if you have an existing web site, just install Themelia (remember, Minima is a Themelia plug-in), create a new Themelia web domain, and register Minima into that web domain as follows:

<themelia.web>
  <webDomains>
    <add>
      <components>
        <add key="Minima" type="Minima.Web.Routing.MinimaComponent, Minima.Web">
          <parameters>
            <add name="page" value="~/Page_/Blog/Root.aspx" />
            <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
          </parameters>
        </add>
      </components>
    </add>
  </webDomains>
</themelia.web>

Now, on that Root.aspx page, just add a simple Minima.Web.Controls.MinimaBlog control.  Your blog immediately starts rendering.  Not only that, commenting is automatically supported.  Furthermore, you have a site map, a Windows Live Writer (MetaWeblog API) endpoint, an rsd.xml file, and a wlwmanifest.xml file.  All that from just dropping a control onto a web site without configuring anything in that page.  Of course, you can configure things if you want, and you can add more controls to the page as well.  Perhaps you want a label list, an archive list, or a recent entry list.  Just add the appropriate control to the web form.  In fact, the same Minima binaries that you will compile with the source are used on each of my web sites with absolutely no changes; they are all just a single control, yet look nothing alike.

Personally, I don't like to add a lot of controls to my web forms.  Thus, I normally add a placeholder control and then add my controls to that placeholder.  Furthermore, here's a snippet from my blog's web form (my entire blog has only one page):

phLabelList.Controls.Add(new Minima.Web.Controls.LabelList { Heading = "Label Cloud", ShowHeading = true, TemplateType = typeof(Minima.Web.Controls.LabelListControlTemplateFactory.SizedTemplate) });
phArchivedEntryList.Controls.Add(new Minima.Web.Controls.ArchivedEntryList { ShowEntryCount = false });
phRecentEntryList.Controls.Add(new Minima.Web.Controls.RecentEntryList());
phMinimaBlog.Controls.Add(new Minima.Web.Controls.MinimaBlog
{
    ShowAuthorSeries = false,
    PostFooterTypeInfo = Themelia.Activation.TypeInfo.GetInfo(Minima.Web.Controls.FeedBurnerPostFooter.Type, "http://feeds.feedburner.com/~s/FXHarmonics"),
    ClosedCommentText = String.Empty,
    DisabledCommentText = String.Empty
});

There's nothing here that you can't do as well.  Most everything there is self-explanatory too.  However, notice the post footer type.  By setting this type, Minima knows to render the FeedBurner post footer at the end of each entry.

Thus, with a simple configuration and a drop of a control, you can add a blog anywhere.  Or, in the case of the Themelia web site, you can add a content stream anywhere.

Here's a snippet from the configuration for the Themelia web site:

<add name="framework" path="framework" defaultPage="/Sequence_/Home.aspx" acceptMissingTrailingSlash="true">
  <components>
    <add key="Minima" type="Minima.Web.Routing.MinimaViewerComponent, Minima.Web">
      <parameters>
        <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
      </parameters>
    </add>
  </components>
</add>

By looking at the Themelia web site, you can see that Minima isn't being used there as a blog engine, but as a content stream.  Go walk around the documentation at http://themelia.netfxharmonics.com/framework/docs.  I didn't make a bunch of pages; all I did was drop in that component and throw a Minima.Web.Controls.BlogViewer control on the page, and BAM, I have an entire documentation system already built based upon various entries from my blog.

As a side note, if you look on my blog, you will see that each of the Themelia blog entries has a list of links, but the same thing in the Themelia documentation does not have the link list.  This is because I've set IgnoreBlogEntryFooter to true on the BlogViewer control, thus telling Minima to remove all text after the special code.  This way I can post the same entry in two places.
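In code, that's just one more property initializer.  Here's a minimal sketch following the same pattern as the snippets above (phDocViewer is a hypothetical placeholder name):

phDocViewer.Controls.Add(new Minima.Web.Controls.BlogViewer
{
    IgnoreBlogEntryFooter = true
});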

This isn't a marketing post on why you should use Minima.  If you want to use Minima, go ahead, you can contact me on my web site for help.  However, the point is to learn as much as you can about modern technology using Minima as an example.  It's not meant to be used in major web sites by just anyone at this point (though I use it in production in many places).  Having said that, the next version of Minima will be part of the Themelia suite and will have much more user support and formal documentation.

In conclusion, I say again (and again and again), you may use Minima for your personal training all you want.  That's why it's public. 


Microsoft MVP (ASP.NET) 2009



I'm rather pleased to announce that October 1, 2008 was my day: I was made a Microsoft MVP for ASP.NET.  Thus, as tradition seems to state, I'm posting a blog entry about it.

Thanks to God for not letting me have the MVP until now; the timing is flawless.  Thanks also to my MVP advisor David Silverlight, who got me more serious about the MVP program and admitted to having nominated me.  Next, thanks to Scott Hanselman, who fixed a clog in the MVP pipeline.  Apparently, I was in the system, but completely lost; in the wrong category or something.  He took it upon himself to contact a few people to get the problem fixed.

Thanks also to Brad Abrams for his recommendation to the MVP committee and to Rick Strahl, a fellow MVP, Microsoft ASP.NET AJAX loather, and Subversion lover who showed me that open source developers have equal rights to the MVP title.

To bring in the new [MVP] year, today I took some time and did a massive redesign to my web site featuring the cool MVP logo, which just so happens to fit perfectly in my existing color scheme.  I'll probably be tweaking the site over the next few days as the waves of whimsical changes come my way.

Minima 3.0 Released



Every few months I like to release a new open-source project or at least a new major revision of an existing project.  Today I would like to introduce Minima 3.0.  This is a completely new Minima Blog Engine that is built on WCF, is factored into various controls, and introduces a completely new model for ASP.NET development.

As a Training Tool

Normally I leave this for last, but this time I would like to immediately start off by mentioning how Minima 3.0 may act as a training tool.  This will give you a good idea of Minima 3.0's architecture.  Here was the common "As a Training Tool" description for Minima 2.0 (a.k.a. Minima .NET 3.5):

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Here's the new "As a Training Tool" description for Minima 3.0:

Minima 3.0 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

As you can see, it's an entirely new beast. As you should also be able to guess, I'm not going to use Minima for simply entry level .NET training anymore. With this new feature set, it's going to be my primary tool for intermediate and expert-level .NET training.  In the future, I'll post various blog entries giving lessons on various parts of Minima.

New Features

Since it's nowhere near the purpose of Minima, in no version have I ever claimed to have an extraordinary feature set.  In fact, the actual end-user feature set of Minima 3.0 is fundamentally the same as Minima 2.0 except where features are naturally added because of the new architecture.  For example, it's now naturally a multi-blog environment with each blog allowed to have its own blog discovery data, Google sitemap, and other things.

Architecture

There are really three major "pillars" to the architecture of Minima 3.0: WCF, ASP.NET, and my Themelia Foundation (pronounced TH[as in "Thistle"]-MEH-LEE-UH; Koine Greek for "foundations"). It will take more than one blog entry to cover every aspect of Minima's architecture (see my lessons on Themelia), but for now I'll give a very brief overview.  I will explain the ASP.NET and Themelia pillars together.

WCF Architecture

The backend of Minima is WCF and is split up into various services to factor out some of the operations that occur within Minima. Of course, not every single possible operation is included as that would violate the "specificness" of SOA, but the core operations are intact.

The entire Minima security structure is now in WCF using a custom declarative operation-level security implementation.  To set security in Minima, all you have to do on the service side is apply the MinimaBlogSecurityBehavior attribute to an operation and you're all set.  Here's an example:

[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]

Architectural Overview: Using LINQ in WCF



Today I would like to give an architectural overview of my usage of LINQ.  This may actually become the first in a series of architectural discussions on various .NET and AJAX technologies.  In this discussion, I'm going to be talking about the architecture of the next revision of my training blog engine, Minima.  Since the core point of any system is that which goes into and comes out of the system, the goal of this commentary will be to get to the point where LINQ projects data into WCF DTOs.  Let me start by explaining how I organize my types.  Some of you will find this boring, but it's amazing how many times I get questions on this topic.  For good reason too!  These questions show that a person's priorities are in the right place as your type, namespace, and file organization is critical to the manageability and architectural clarity of your system.

However, before we get started, let me state briefly that, as I've stated in my post entitled SQL Server Database Model Optimization for Developers, when you design your database structure you should design it with your O/R mapper in mind.  If you don't, then you will probably fall into all kinds of problems as my post describes.  This is incredibly important; however, if you keep to normal everyday normalization procedures, you are probably doing OK for the most part anyway.  Since I've written about that before, there's no reason for me to go into detail here.  Just know that, if your database design sucks, your application will probably suck too.  Don't build your house on the sand.

In terms of LINQ, I actually use the VS2008 "LINQ to SQL Classes" template to create the LINQ information.  In almost every other area of technology, it's a good practice to avoid wizards and templates like the plague, but when it comes to O/R mapping, you need to be using an automated tool.  If your O/R mapper requires you to do any work (…NHibernate… *cough cough*), then you can't afford to work with it.  You need to be focusing on the business logic of your system, not playing around with mechanical nonsense.  As I've said in other contexts, stored procedures and ad hoc SQL are forms of unmanaged code.  When you are managing the mechanics of a system yourself, it's, by definition, unmanaged.  Stored procedures and ad hoc SQL are to LLBLGen/LINQ as ASP/PHP is to ASP.NET as C++ is to .NET languages.  If you are managing the mechanical stuff yourself, you are working with unmanaged code.  In the context of database access, managed code is the point of an O/R mapper.  Furthermore, if the O/R mapping software you are using requires you to write up templates or do manual mapping, that's obviously not completely managed code.

Now, when I create LINQ classes, I will create one for each "architectural domain" of the system that I deem necessary.  For example, in a future release of Minima, there will be a LINQ class to handle my HttpHandler and UrlRewriting subsystem and another LINQ class to handle blog interaction.  There needs to be this level of flexibility, or my WCF services will know too much about my web environment and my web site (a WCF client) will then have direct access to the data which the WCF service is intended to abstract.  Therefore, there will be a LINQ class for web site specific mechanics and another LINQ class for service specific mechanics.  Also, when I create the class for a particular domain I will give it a simple name with the suffix of LINQ.  So, my Minima core LINQ class is CoreLINQ.cs and my Minima service LINQ class is ServiceLINQ.cs.  Simple.

Upon loading the LINQ designer, either after or before I drop in the specific tables required in that particular architectural domain, I'll set my context namespace to <SimpleName>.Data.Context and my entity namespace to <SimpleName>.Data.Entity.  For example, in the Minima example, I'll then have Core.Data.Context and Core.Data.Entity.  One may argue that there's nothing really going on in Core.Data.Context, to which I must respond: yeah, well there's already a lot going on in Core.Data (other data-related non-LINQ logic I would create) and Core.Data.Entity.   The reason I say "after or before I drop in the specific tables" is to emphasize the fact that you can change this at a later point.  It's important to keep in mind that LINQ doesn't automatically update its schema with the schema from your database.  LLBLGen Pro does have this feature built in and it does the refreshing in a masterful way, but currently LINQ doesn't have this ability.  Therefore, to do a refresh, you need to do a "CTRL-A, Delete" to delete all the tables, do a refresh in Server Explorer, and then just re-add them.  It's not much work.

Now, moving on to using LINQ.  When I'm working with both LINQ entities (or LLBLGen entities or whatever) and WCF DTOs in my WCF service, I do not bring in the LINQ entity namespace.  The ability to import types from another namespace is one of the most powerful yet underappreciated features in all of .NET (um… JavaScript needs it!); however, when you have a Person entity in LINQ and a Person DTO, things can get confusing fast.  Therefore, to avoid all potential conflicts, that import is left out and, instead, I keep a series of type aliases at the top of my service classes just under the namespace imports.  Notice also the visual signal in the BlogEntryXAuthor name.  This tells the developer that this is a many-to-many linking table.  In this case it's in the database schema, but if it weren't in there, I could easily alias it as BlogEntryXAuthorLINQ without affecting anyone else.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//+
using DataContext = Minima.Service.Data.Context.MinimaServiceLINQDataContext;
using AuthorLINQ = Minima.Service.Data.Entity.Author;
using CommentLINQ = Minima.Service.Data.Entity.Comment;
using BlogLINQ = Minima.Service.Data.Entity.Blog;
using BlogEntryLINQ = Minima.Service.Data.Entity.BlogEntry;
using BlogEntryUrlMappingLINQ = Minima.Service.Data.Entity.BlogEntryUrlMapping;
using BlogEntryXAuthorLINQ = Minima.Service.Data.Entity.BlogEntryAuthor;
using LabelLINQ = Minima.Service.Data.Entity.Label;
using LabelXBlogEntryLINQ = Minima.Service.Data.Entity.LabelBlogEntry;
using UserRightLINQ = Minima.Service.Data.Entity.UserRight;
//+

Next, since we are in the context of WCF, we need to discuss validation of incoming information.  The following method is an implementation of a WCF service operation.  As you can see, when a user sends in an e-mail address, there is an immediate validation on the e-mail address that retrieves the author's LINQ entity.  This is why the validation isn't being done in a WCF behavior (even though there are tricks to get data from a behavior too!).  You may also note my camelCasing of instances of LINQ entities.  The purpose of this is to provide an incredibly obvious signal to the brain that this is an object, not simply a type (…as is the point of almost all the Framework Design Guidelines-- buy the book!; 2nd edition due Sept 29 '08).

//- @GetBlogMetaData -//
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogMetaData GetBlogMetaData(String blogGuid)
{
    using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
    {
        //+ ensure blog exists
        BlogLINQ blogLinq;
        Validator.EnsureBlogExists(blogGuid, out blogLinq, db);
        //+
        return new BlogMetaData
        {
            Description = blogLinq.BlogDescription,
            FeedTitle = blogLinq.BlogFeedTitle,
            FeedUri = new Uri(blogLinq.BlogFeedUrl),
            Guid = blogLinq.BlogGuid,
            Title = blogLinq.BlogTitle,
            Uri = new Uri(blogLinq.BlogPrimaryUrl),
            CreateDateTime = blogLinq.BlogCreateDate,
            LabelList = new List<Label>(
                blogLinq.Labels.Select(p => new Label
                {
                    Guid = p.LabelGuid,
                    FriendlyTitle = p.LabelFriendlyTitle,
                    Title = p.LabelTitle
                })
            )
        };
    }
}

It would probably be a good idea at this point to step into the Validator class to see what's really going on here.  As you can see in the following class, I have two methods (in reality there are dozens!) and most of it should be obvious.  The validation is obviously in the second method; however, it's the first one that's being directly called.  Notice two things about this: First, notice that I'm passing in my DataContext.  This is to completely obliterate any possibility of overlapping DataContexts and, therefore, any strange locking issues.  Second, notice that I'm pre-registering my messages in a strongly typed Message class (notice also that the members of Message are not static-- the magic of const).  This last piece could easily be done in a way that provides for nice localization.

Now, moving on to the actual validation.  Unless I'm desperately trying to inline some code, I normally declare the LINQ criteria prior to the actual LINQ statement.  Of course, this is exactly what the Func<T, TResult> delegate is doing.  Notice also that I try to bring the semantics of the criteria into the name of the object.  This really helps in making many of your LINQ statements read more naturally: "db.Person.Where(hasEmployees)".

namespace Minima.Service.Validation
{
    internal static class Validator
    {
        //- ~Message -//
        internal class Message
        {
            public const String InvalidEmail = "Invalid author Email";
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, DataContext db)
        {
            EnsureAuthorExists(authorEmail, out authorLinq, Message.InvalidEmail, db);
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, String message, DataContext db)
        {
            Func<AuthorLINQ, Boolean> authorExists = x => x.AuthorEmail == authorEmail;
            authorLinq = db.Authors.SingleOrDefault(authorExists);
            if (authorLinq == null)
            {
                FaultThrower.Throw<ArgumentException>(new ArgumentException(message));
            }
        }
    }
}

In the actual query itself, you can see that the semantics of the method are that a maximum of one author should be returned.  Therefore, I'm able to use the Single or SingleOrDefault methods.  Note that if you use these and you return more than one entity, an exception will be thrown, as Single and SingleOrDefault only allow what their names imply.  In this case, AuthorEmail is the primary key in the database and, by definition, there can be only one (at this point I'm sure about 30% of you are doing Sean Connery impressions).  The difference between Single and SingleOrDefault is simple: when nothing matches the criteria, Single throws an exception and SingleOrDefault returns the type's default value.  The default of a type is that which the C# "default" keyword will return.  In other words, a reference type will be null and a struct will be something else (e.g. 0 for Int32).  In this case, I'm dealing with my AuthorLINQ class, which is obviously a reference type, and therefore I need to check for null on it.  If it's null, then that author doesn't exist and I need to throw a fault (which is what my custom FaultThrower class does).  What's a fault?  That's a topic for a different post.
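Here's a quick sketch of that behavior outside of Minima (the data here is obviously hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;
//+
List<Int32> numberList = new List<Int32> { 1, 2, 3 };
//+ exactly one match: both return 2
Int32 a = numberList.Single(x => x == 2);
Int32 b = numberList.SingleOrDefault(x => x == 2);
//+ no match: SingleOrDefault returns default(Int32), which is 0
Int32 c = numberList.SingleOrDefault(x => x == 9);
//+ no match with Single throws an InvalidOperationException
//Int32 d = numberList.Single(x => x == 9);
//+ more than one match: both throw an InvalidOperationException
//Int32 e = numberList.Single(x => x > 1);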

As you can see from the method signatures, not only is the author e-mail address being validated, the LINQ entity is being returned to the caller via an out parameter.  Once I have this authorLinq entity, I can proceed to use its primary key (AuthorId) in various other LINQ queries.  It's critical to remember that you always want to make sure that you are only using validated information.  If you aren't, then you have no idea what will happen to your system.  Therefore, you should ignore all IDs that are sent into a WCF service operation and use only the validated ones.  A thorough discussion of this topic is left for a future discussion.

Now we are finally at the place where LINQ to WCF projection happens.  For clarity, here it is again (no one likes to scroll back and forth):

return new BlogMetaData
{
    Description = blogLinq.BlogDescription,
    FeedTitle = blogLinq.BlogFeedTitle,
    FeedUri = new Uri(blogLinq.BlogFeedUrl),
    Guid = blogLinq.BlogGuid,
    Title = blogLinq.BlogTitle,
    Uri = new Uri(blogLinq.BlogPrimaryUrl),
    CreateDateTime = blogLinq.BlogCreateDate,
    LabelList = new List<Label>(
        blogLinq.Labels.Select(p => new Label
        {
            Guid = p.LabelGuid,
            FriendlyTitle = p.LabelFriendlyTitle,
            Title = p.LabelTitle
        })
    )
};

The basic flow is as follows: in DataContext db, in the Blogs table, pull the subset where PersonId == AuthorId, then use Select to transform that data into a new type.  The DTO projection is obviously happening in the Select method.  This method is akin to SELECT in SQL.  My point in saying that is to make sure that you are aware that SELECT is not a filter; that's what Where does.  After execution of the Where method, as well as after execution of the Select method, you have an IQueryable<Blog> object, which contains information about the query, but no actual data yet.  LINQ defers execution of SQL statements until the data is actually used.  In this case, the data is actually being used when the ToList method is called.  This, of course, returns a List<Blog>, which is exactly what this service operation should do.  What's really nice about this is that WCF loves List<T>.  It's not a big fan of Collection<T>, but List<T> is its friend.  Over the wire it's an Array, and when it's being used by a WCF client, it's also a List<T> object.
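To see the deferred execution itself, here's a minimal sketch (the Where criteria is illustrative, not Minima's actual query):

using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
{
    //+ no SQL has been sent to the server yet; 'query' only describes the work
    IQueryable<BlogLINQ> query = db.Blogs.Where(b => b.BlogTitle.Contains("Minima"));
    //+ the SQL executes here, when the data is actually used
    List<BlogLINQ> blogList = query.ToList();
}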

In closing, I should mention something that I know people are going to ask me about: to project from WCF DTO to LINQ, you do the exact same thing.  LINQ isn't a database-specific technology.  You can LINQ between all kinds of things.  Though I use LINQ for my data access in many projects, most of my LINQ usage is actually for searching lists, combining two lists together, or modifying the data that gets bound to the interface.  It's incredibly powerful.

Moving into a non-Minima example: if you needed to have a person's full name in a WPF ListBox and the name-specific LINQ properties you have are FirstName and LastName, instead of doing tricks in your ItemTemplate, you can just have your ItemsSource use LINQ to sew the FirstName and LastName together.

lstPerson.ItemsSource = personList.Select(p => new
{
    FullName = p.FirstName + " " + p.LastName,
    p.PostalCode,
    Country = p.Country ?? String.Empty
});

The really sweet part about this is the fact that LINQ entities implement the INotifyPropertyChanged interface, so when doing WPF data binding, WPF will automatically update the ListBox when the data changes!  Of course, this doesn't help you if you are doing a serious SOA system.  Therefore, my DTOs normally implement INotifyPropertyChanged as well.  This is not a WPF-specific interface (it lives in System.ComponentModel) and therefore does not tie the business object to any presentation.
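For example, here's a minimal sketch of what such a DTO could look like (Person here is illustrative, not one of Minima's actual DTOs):

using System;
using System.ComponentModel;
//+
public class Person : INotifyPropertyChanged
{
    private String firstName;
    //+
    //- @PropertyChanged -//
    public event PropertyChangedEventHandler PropertyChanged;
    //+
    //- @FirstName -//
    public String FirstName
    {
        get { return firstName; }
        set
        {
            firstName = value;
            OnPropertyChanged("FirstName");
        }
    }
    //+
    //- OnPropertyChanged -//
    private void OnPropertyChanged(String propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}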

That should show you a bit more of how LINQ can work with all kinds of stuff.  Therefore, it shouldn't be hard to figure out how to project from a WCF DTO to LINQ. You could literally copy/paste the LINQ -> DTO code and just switch around a few names.
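For example, reversing the earlier Label projection is just the mirror image (this reuses the LabelLINQ alias declared earlier and assumes blogMetaData is a BlogMetaData instance):

List<LabelLINQ> labelLinqList = blogMetaData.LabelList.Select(p => new LabelLINQ
{
    LabelGuid = p.Guid,
    LabelFriendlyTitle = p.FriendlyTitle,
    LabelTitle = p.Title
}).ToList();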

If you are new to LINQ, then I recommend the book Pro LINQ by Joseph C. Rattz Jr. However, if you are already using LINQ or want a view into its internal mechanics, then I must recommend LINQ in Action by Fabrice Marguerie, Steve Eichert, and Jim Wooley.

Spring 2008 Sabbatical



Starting May 23rd, I'm beginning another sabbatical to work on my company projects, to continue my seminary work, and to work on my book (to be clear: sabbatical != vacation).  During this time I will be accepting part-time AJAX, WCF, ASP.NET (no graphics work!-- hire a professional graphic designer, they are worth the money!), or general C# 3.0 and .NET 3.5 telecommuting consulting.  I'll assist in projects, but I'm not going to be able to work as senior architect on any projects.  Also remember, as a web developer, it's my duty to make sure my projects work in Mozilla, Opera, Safari, and IE, and are in no way IE-specific.  IE-only environments are the absolute most difficult to work with.

Also keep in mind that this is 2008, not 1988: the primary purpose of modern technology is to allow us to have simpler lives, and just about every single aspect of our technology has its root in the Internet, allowing us to communicate from anywhere.  What's the point in having webcasting and online meeting abilities, online whiteboarding, web-based project management software, or even Google Office if you aren't going to use them in a meaningful way?  Why have e-mail at all if you are going to absolutely rely on the ability to go to the person's office?  The addiction to physical contact is something that needs to be broken in the 21st century.  Stop managing with your physical "field of view" and start managing by results.

I'm a web developer/architect, not a piano mover; I don't need to be in a physical office.  If you are into technology at all, you are into moving your physical resources into a logical cloud.  If I've said it once, I've said it a million times: your associates are your greatest resource and should, therefore, be even more in a logical cloud (as they are humans and would appreciate it more!)  It is inconsistent to pursue logical management of resources and require physical management of personnel.  Not only that, but it costs a lot less (no office space required!)  If your employees don't have enough discipline to work from home, what makes you think they are working in their cubes?  Unless you are working off the failed notion of "hourly management" instead of being a results-oriented manager, you won't have a problem with 100% telecommuting.  Results matter, not "time".  Also, if you don't trust your employees, well… maybe you hired the wrong people (or maybe you have trust issues in general?)  Trust is the foundation of all life.  I could speak volumes on this topic, but I'll leave that to the expert: Timothy Ferris.  See his blog or get his book for more information.  I'm only an anonymous disciple of his; he is the master and authority on this topic.  Therefore, send your flames (read: insecurities) his way (after you read his book!-- audio also available; they are both worth 100x their weight in gold!)  See also Scott Hanselman's interview with Timothy Ferris.  His YouTube page is also available.

With regards to the book, let me simply say that it's generically about AJAX communication and I'm not going to give out too many specific details on the project at this point, but I will say this: AJAX + SOA - CSS + Prototype + (ASP.NET Controls) - (ASP.NET AJAX) + WCF + (.NET 2.0 Service Interop) + Silverlight + Development Tools.  Also, I reserve the right to turn it into a video series (likely), make it a complete learning set of reading + video series (even more likely!), or to completely chuck the project.  I don't like to do things the classical way, so whatever I do, you can bet on the fact that I won't do the traditional "book".  As I've always said, the blog is the new book, but for this I think I may use a different paradigm.  I've turned down two book offers so far because I absolutely refuse to throw more paper on a bookshelf or do something that's been done a million times before.

If you are moving from ASP to ASP.NET, from PHP to ASP.NET, or from ASMX to WCF 3.5, or if you want to add AJAX to your solutions, drop me an e-mail and let's talk.