
Architectural Overview: Using LINQ in WCF

Today I would like to give an architectural overview of my usage of LINQ.  This may actually become the first in a series of architectural discussions on various .NET and AJAX technologies.  In this discussion, I'm going to be talking about the architecture of the next revision of my training blog engine, Minima.  Since the core point of any system is what goes into and comes out of it, the goal of this commentary will be to get to the point where LINQ projects data into WCF DTOs.  Let me start by explaining how I organize my types.  Some of you will find this boring, but it's amazing how many times I get questions on this topic.  For good reason too!  These questions show that a person's priorities are in the right place, since your type, namespace, and file organization is critical to the manageability and architectural clarity of your system.

However, before we get started, let me state briefly that, as I've stated in my post entitled SQL Server Database Model Optimization for Developers, when you design your database structure you should design it with your O/R mapper in mind.  If you don't, you will probably fall into all kinds of problems, as that post describes.  This is incredibly important; however, if you keep to normal everyday normalization procedures, you are probably doing OK for the most part anyway.  Since I've written about that before, there's no reason for me to go into detail here.  Just know that if your database design sucks, your application will probably suck too.  Don't build your house on the sand.

In terms of LINQ, I actually use the VS2008 "LINQ to SQL Classes" template to create the LINQ information.  In almost every other area of technology, it's a good practice to avoid wizards and templates like the plague, but when it comes to O/R mapping, you need to be using an automated tool.  If your O/R mapper requires you to do any work (...NHibernate...*cough*cough*), then you can't afford to work with it.  You need to be focusing on the business logic of your system, not playing around with mechanical nonsense.  As I've said in other contexts, stored procedures and ad hoc SQL are forms of unmanaged code.  When you are managing the mechanics of a system yourself, it's, by definition, unmanaged.  Stored procedures and ad hoc SQL are to LLBLGen/LINQ as ASP/PHP is to ASP.NET and as C++ is to .NET languages.  If you are managing the mechanical stuff yourself, you are working with unmanaged code.  In the context of database access, managed code is the whole point of an O/R mapper.  Furthermore, if the O/R mapping software you are using requires you to write up templates or do manual mapping, that's obviously not completely managed code.

Now, when I create LINQ classes, I create one for each "architectural domain" of the system that I deem necessary.  For example, in a future release of Minima, there will be a LINQ class to handle my HttpHandler and UrlRewriting subsystem and another LINQ class to handle blog interaction.  There needs to be this level of flexibility, or my WCF services will know too much about my web environment and my web site (a WCF client) will then have direct access to the data which the WCF service is intended to abstract.  Therefore, there will be a LINQ class for web site specific mechanics and another LINQ class for service specific mechanics.  Also, when I create the class for a particular domain, I give it a simple name with the suffix LINQ.  So, my Minima core LINQ class is CoreLINQ.cs and my Minima service LINQ class is ServiceLINQ.cs.  Simple.

Upon loading the LINQ designer, either after or before I drop in the specific tables required in that particular architectural domain, I'll set my context namespace to <SimpleName>.Data.Context and my entity namespace to <SimpleName>.Data.Entity.  For example, in the Minima example, I'll then have Core.Data.Context and Core.Data.Entity.  One may argue that there's nothing really going on in Core.Data.Context, to which I must respond: yeah, well there's already a lot going on in Core.Data (other data related non-LINQ logic I would create) and Core.Data.Entity.  The reason I say "after or before I drop in the specific tables" is to emphasize the fact that you can change this at a later point.  It's important to keep in mind at this point that LINQ doesn't automatically update its schema with the schema from your database.  LLBLGen Pro does have this feature built in and it does the refreshing in a masterful way, but currently LINQ doesn't have this ability.  Therefore, to do a refresh, you need to do a "CTRL-A, Delete" to delete all the tables, do a refresh in Server Explorer, and then just re-add them.  It's not much work.

Now, moving on to using LINQ.  When I'm working with both LINQ entities (or LLBLGen entities or whatever) and WCF DTOs in my WCF service, I do not bring in the LINQ entity namespace.  The ability to import types in from another namespace is one of the most powerful yet under-appreciated features in all of .NET (um.. JavaScript needs them!); however, when you have a Person entity in LINQ and a Person DTO, things can get confusing fast.  Therefore, to avoid all potential conflicts, I leave the import out and, instead, keep a series of type aliases at the top of my service classes just under the namespace imports.  Notice also the visual signal in the BlogEntryXAuthor table name.  This tells the developer that this is a many-to-many linking table.  In this case it's in the database schema, but if it weren't in there, I could easily alias it as BlogEntryXAuthorLINQ without affecting anyone else.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//+
using DataContext = Minima.Service.Data.Context.MinimaServiceLINQDataContext;
using AuthorLINQ = Minima.Service.Data.Entity.Author;
using CommentLINQ = Minima.Service.Data.Entity.Comment;
using BlogLINQ = Minima.Service.Data.Entity.Blog;
using BlogEntryLINQ = Minima.Service.Data.Entity.BlogEntry;
using BlogEntryUrlMappingLINQ = Minima.Service.Data.Entity.BlogEntryUrlMapping;
using BlogEntryXAuthorLINQ = Minima.Service.Data.Entity.BlogEntryAuthor;
using LabelLINQ = Minima.Service.Data.Entity.Label;
using LabelXBlogEntryLINQ = Minima.Service.Data.Entity.LabelBlogEntry;
using UserRightLINQ = Minima.Service.Data.Entity.UserRight;
//+

Next, since we are in the context of WCF, we need to discuss validation of incoming information.  The following method is an implementation of a WCF service operation.  As you can see, when a user sends in an e-mail address, there is an immediate validation on the e-mail address that retrieves the author's LINQ entity.  This is why the validation isn't being done in a WCF behavior (even though there are tricks to get data from a behavior too!)  You may also note my camelCasing of instances of LINQ entities.  The purpose of this is to provide an incredibly obvious signal to the brain that this is an object, not simply a type (...as is the point of almost all the Framework Design Guidelines-- buy the book!; 2nd edition due Sept 29 '08)

//- @GetBlogMetaData -//
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogMetaData GetBlogMetaData(String blogGuid)
{
    using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
    {
        //+ ensure blog exists
        BlogLINQ blogLinq;
        Validator.EnsureBlogExists(blogGuid, out blogLinq, db);
        //+
        return new BlogMetaData
        {
            Description = blogLinq.BlogDescription,
            FeedTitle = blogLinq.BlogFeedTitle,
            FeedUri = new Uri(blogLinq.BlogFeedUrl),
            Guid = blogLinq.BlogGuid,
            Title = blogLinq.BlogTitle,
            Uri = new Uri(blogLinq.BlogPrimaryUrl),
            CreateDateTime = blogLinq.BlogCreateDate,
            LabelList = new List<Label>(
                blogLinq.Labels.Select(p => new Label
                {
                    Guid = p.LabelGuid,
                    FriendlyTitle = p.LabelFriendlyTitle,
                    Title = p.LabelTitle
                })
            )
        };
    }
}

It would probably be a good idea at this point to step into the Validator class to see what's really going on here.  As you can see in the following class, I have two methods (in reality there are dozens!) and most of it should be obvious.  The validation is obviously in the second method; however, it's the first one that's being directly called.  Notice two things about this: First, notice that I'm passing in my DataContext.  This is to completely obliterate any possibility of overlapping DataContexts and, therefore, any strange locking issues.  Second, notice that I'm pre-registering my messages in a strongly typed Message class (notice also that the members of Message are not marked static-- the magic of const.)  This last piece could easily be done in a way that provides for nice localization.

Now moving on to the actual validation.  Unless I'm desperately trying to inline some code, I normally declare the LINQ criteria prior to the actual LINQ statement.  Of course, this is exactly what the Func<T, Boolean> delegate is doing.  Notice also that I try to bring the semantics of the criteria into the name of the object.  This really helps in making many of your LINQ statements read more naturally: "db.Person.Where(hasEmployees)".

namespace Minima.Service.Validation
{
    internal static class Validator
    {
        //- ~Message -//
        internal class Message
        {
            public const String InvalidEmail = "Invalid author Email";
        }

        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, DataContext db)
        {
            EnsureAuthorExists(authorEmail, out authorLinq, Message.InvalidEmail, db);
        }

        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, String message, DataContext db)
        {
            Func<AuthorLINQ, Boolean> authorExists = x => x.AuthorEmail == authorEmail;
            authorLinq = db.Authors.SingleOrDefault(authorExists);
            if (authorLinq == null)
            {
                FaultThrower.Throw<ArgumentException>(new ArgumentException(message));
            }
        }
    }
}

In the actual query itself, you can see that the semantics of the method are that a maximum of one author should be returned.  Therefore, I'm able to use the Single or SingleOrDefault methods.  Note that if you use these and the query matches more than one entity, an exception will be thrown, as Single and SingleOrDefault only allow what their name implies.  In this case, AuthorEmail is the primary key in the database and, by definition, there can be only one (at this point I'm sure about 30% of you are doing Sean Connery impressions).  The difference between Single and SingleOrDefault is simple: when the criteria is not met, Single throws an exception and SingleOrDefault returns the type's default value.  The default of a type is that which the C# "default" keyword returns.  In other words, a reference type will be null and a struct will be something else (e.g. 0 for Int32).  In this case, I'm dealing with my AuthorLINQ class, which is obviously a reference type, and therefore I need to check it for null.  If it's null, then that author doesn't exist and I need to throw a fault (which is what my custom FaultThrower class does).  What's a fault?  That's a topic for a different post.
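The same semantics apply in plain LINQ to Objects, so the difference is easy to see outside of a database context (an illustrative sketch with made-up data, not Minima code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SingleDemo
{
    static void Main()
    {
        List<String> emails = new List<String> { "a@example.com", "b@example.com" };

        //+ no match: SingleOrDefault returns the reference type default (null)
        String missing = emails.SingleOrDefault(x => x == "z@example.com");
        Console.WriteLine(missing == null); // True

        //+ no match: Single throws
        try
        {
            emails.Single(x => x == "z@example.com");
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Single threw on no match");
        }

        //+ more than one match: both throw
        try
        {
            emails.SingleOrDefault(x => x.EndsWith("@example.com"));
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("SingleOrDefault threw on multiple matches");
        }
    }
}
```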

As you can see from the method signatures, not only is the author e-mail address being validated, the LINQ entity is being returned to the caller via an out parameter.  Once I have this authorLinq entity, then I can proceed to use its primary key (AuthorId) in various other LINQ queries.  It's critical to remember that you always want to make sure that you are only using validated information.  If you aren't, then you have no idea what will happen to your system.  Therefore, you should ignore all IDs that are sent into a WCF service operation and use only the validated ones.  A thorough discussion of this topic is left for a future discussion.

Now we are finally at the place where LINQ to WCF projection happens.  For clarity, here it is again (no one likes to scroll back and forth):

return new BlogMetaData
{
    Description = blogLinq.BlogDescription,
    FeedTitle = blogLinq.BlogFeedTitle,
    FeedUri = new Uri(blogLinq.BlogFeedUrl),
    Guid = blogLinq.BlogGuid,
    Title = blogLinq.BlogTitle,
    Uri = new Uri(blogLinq.BlogPrimaryUrl),
    CreateDateTime = blogLinq.BlogCreateDate,
    LabelList = new List<Label>(
        blogLinq.Labels.Select(p => new Label
        {
            Guid = p.LabelGuid,
            FriendlyTitle = p.LabelFriendlyTitle,
            Title = p.LabelTitle
        })
    )
};

The basic flow is as follows: in DataContext db, in the Blogs table, pull the sub-set where PersonId == AuthorId, then use Select to transform that data into a new type.  The DTO projection is obviously happening in the Select method.  This method is akin to a SELECT in SQL.  My point in saying that is to make sure that you are aware that SELECT is not a filter; that's what Where does.  After execution of the Where method, as well as after execution of the Select method, you have an IQueryable<Blog> object, which contains information about the query, but no actual data yet.  LINQ defers execution of SQL statements until the data is actually used.  In this case, the data is actually being used when the ToList method is called.  This of course returns a List<Blog>, which is exactly what this service operation should do.  What's really nice about this is that WCF loves List<T>.  It's not a big fan of Collection<T>, but List<T> is its friend.  Over the wire it's an Array and when it's being used by a WCF client, it's also a List<T> object.
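Deferred execution is easy to demonstrate with LINQ to Objects as well (an illustrative sketch; with LINQ to SQL the principle is the same, except the query is an IQueryable<T> and enumeration sends SQL to the server):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        List<Int32> numbers = new List<Int32> { 1, 2, 3 };

        //+ nothing executes here; the query is only a description
        IEnumerable<Int32> query = numbers.Where(x => x > 1).Select(x => x * 10);

        //+ modifying the source before enumeration affects the result
        numbers.Add(4);

        //+ execution happens here, at ToList
        List<Int32> results = query.ToList();
        Console.WriteLine(String.Join(",", results.Select(x => x.ToString()).ToArray()));
        // 20,30,40
    }
}
```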

In closing, I should mention something that I know people are going to ask me about: to project from a WCF DTO to LINQ you do the exact same thing.  LINQ isn't a database-specific technology.  You can LINQ between all kinds of things.  Though I use LINQ for my data access in many projects, most of my LINQ usage is actually for searching lists, combining two lists together, or modifying the data that gets bound to the interface.  It's incredibly powerful.
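Here's the kind of everyday list work I'm talking about (hypothetical data, LINQ to Objects):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ListWork
{
    static void Main()
    {
        List<String> teamA = new List<String> { "John", "Sarah" };
        List<String> teamB = new List<String> { "Sarah", "David" };

        //+ combine two lists (Union removes duplicates; Concat keeps them)
        List<String> everyone = teamA.Union(teamB).ToList();
        Console.WriteLine(everyone.Count); // 3

        //+ search a list
        List<String> sNames = everyone.Where(x => x.StartsWith("S")).ToList();
        Console.WriteLine(sNames[0]); // Sarah
    }
}
```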

Moving into a non-Minima example: if you needed to have a person's full name in a WPF ListBox and the name-specific LINQ properties you have are FirstName and LastName, instead of doing tricks in your ItemTemplate, you can just have your ItemsSource use LINQ to sew the FirstName and LastName together.

lstPerson.ItemsSource = personList.Select(p => new
{
    FullName = p.FirstName + " " + p.LastName,
    p.PostalCode,
    Country = p.Country ?? String.Empty
});

The really sweet part about this is the fact that LINQ to SQL entities implement the INotifyPropertyChanged interface, so when doing WPF data binding, WPF will automatically update the ListBox when the data changes!  Of course, this doesn't help you if you are doing a seriously SOA system.  Therefore, my DTOs normally implement INotifyPropertyChanged as well.  This is not a WPF-specific interface (it lives in System.ComponentModel) and therefore does not tie the business object to any presentation.
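A DTO that does this might look something like the following (a minimal sketch; the Label name and its Title property come from the projection earlier, but the implementation details here are just one way to do it):

```csharp
using System;
using System.ComponentModel;

//+ no WPF assemblies required; INotifyPropertyChanged lives in System.ComponentModel
public class Label : INotifyPropertyChanged
{
    private String title;

    //- @PropertyChanged -//
    public event PropertyChangedEventHandler PropertyChanged;

    //- @Title -//
    public String Title
    {
        get { return title; }
        set
        {
            title = value;
            OnPropertyChanged("Title");
        }
    }

    //- $OnPropertyChanged -//
    private void OnPropertyChanged(String propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```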

That should show you a bit more of how LINQ can work with all kinds of stuff.  Therefore, it shouldn't be hard to figure out how to project from a WCF DTO to LINQ. You could literally copy/paste the LINQ -> DTO code and just switch around a few names.
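As a sketch, reversing the earlier Label projection would look something like this (blogMetaData is assumed to be a BlogMetaData DTO returned from the service; the property names come from the earlier snippets):

```csharp
//+ DTO -> LINQ entity projection; same Select technique, names swapped around
List<LabelLINQ> labelLinqList = blogMetaData.LabelList.Select(p => new LabelLINQ
{
    LabelGuid = p.Guid,
    LabelFriendlyTitle = p.FriendlyTitle,
    LabelTitle = p.Title
}).ToList();
```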

If you are new to LINQ, then I recommend the book Pro LINQ by Joseph C. Rattz Jr. However, if you are already using LINQ or want a view into its internal mechanics, then I must recommend LINQ in Action by Fabrice Marguerie, Steve Eichert, and Jim Wooley.

Spring 2008 Sabbatical

On May 23rd, I'm starting another sabbatical to work on my company projects, to continue my seminary work, and to work on my book (to be clear: sabbatical != vacation).  During this time I will be accepting part-time AJAX, WCF, ASP.NET (no graphics work!-- hire a professional graphic designer, they are worth the money!), or general C# 3.0 and .NET 3.5 telecommuting consulting.  I'll assist in projects, but I'm not going to be able to work as senior architect on any projects.  Also remember, as a web developer, it's my duty to make sure my projects work in Mozilla, Opera, Safari, and IE, and are in no way IE-specific.  IE-only environments are the absolute most difficult to work with.

Also keep in mind that this is 2008, not 1988, and the primary purpose of modern technology is to allow us to have simpler lives; just about every single aspect of our technology has its roots in the Internet allowing us to communicate from anywhere.  What's the point in having web casting and online meeting abilities, or online white boarding, or web-based project management software, or even Google Office if you aren't going to use them in a meaningful way?  Why have e-mail at all if you are going to absolutely rely on the ability to go to the person's office?  The addiction to physical contact is something that needs to be broken in the 21st century.  Stop managing with your physical "field of view" and start managing by results.

I'm a web developer/architect, not a piano mover; I don't need to be in a physical office.  If you are into technology at all, you are into moving your physical resources into a logical cloud.  If I've said it once, I've said it a million times: your associates are your greatest resource and should, therefore, be even more in a logical cloud (as they are humans and would appreciate it more!)  It is inconsistent to pursue logical management of resources and require physical management of personnel.  Not only that, but it costs a lot less (no office space required!)  If your employees don't have enough discipline to work from home, what makes you think they are working in their cube?  Unless you are working off the failed notion of "hourly management" instead of being a results-oriented manager, you won't have a problem with 100% telecommuting.  Results matter, not "time".  Also, if you don't trust your employees, well... maybe you hired the wrong people (or maybe have trust issues in general?)  Trust is the foundation of all life.  I could speak volumes on this topic, but I'll leave that to the expert: Timothy Ferris.  See his blog or get his book for more information.  I'm only an anonymous disciple of his; he is the master and authority on this topic.  Therefore, send your flames (read: insecurities) his way (after you read his book!-- audio also available; they are both worth 100x their weight in gold!)  See also Scott Hanselman's interview with Timothy Ferris.  His YouTube page is also available.

With regards to the book, let me simply say that it's generically about AJAX communication and I'm not going to give out too many specific details on the project at this point, but I will say this: AJAX + SOA - CSS + Prototype + (ASP.NET Controls) - (ASP.NET AJAX) + WCF + (.NET 2.0 Service Interop) + Silverlight + Development Tools.  Also, I reserve the right to turn it into a video series (likely), make it a complete learning set of reading + video series (even more likely!), or to completely chuck the project.  I don't like to do things the classical way, so whatever I do, you can bet on the fact that I won't do the traditional "book".  As I've always said, the blog is the new book, but for this I think I may use a different paradigm.  I've turned down two book offers so far because I absolutely refuse to throw more paper on a bookshelf or do something that's been done a million times before.

If you are moving from ASP to ASP.NET, from PHP to ASP.NET, or from ASMX to WCF 3.5, or want to add AJAX to your solutions, drop me an e-mail and let's talk.

NetFXHarmonics on CodePlex

Well, I finally broke down.  My public projects are now freely available for download on CodePlex.


As far as creating "releases" goes, these are shared-source/open-source projects and, in the model I'm following, "releases" are always going to be obsolete.  Therefore, I will provide ZIP versions of the archived major revisions of a project and the current revision will always be available as source code.  The only exception to this may be DevServer, for which I may do monthly releases or releases based upon major upgrades.  I'm currently working on new major revisions for a few other projects and when they are completed, I will post them on CodePlex as well.

As a reminder, my projects are always architected to follow the current best practices and idioms for a particular technology and are therefore often fully re-architected based on the current technology.  The reason I do this is simple: my core specialty is training (technology or not) and that's the driving principle in my projects.  Therefore, on each of my projects there is an "As a Training Tool" section that will explain that project's technology and architecture as well as what else you might be able to learn from it.

As a final note, SVNBridge is working OK for me and has really helped me get over the CodePlex hurdle.  Scott Hanselman was kind enough to encourage me to try SVNBridge again.  I'm honestly glad I did.  The Team System 2008 Team Explorer client, which integrates into Visual Studio, works every now and again, but I got absolutely sick of everything locking up every time I would save a file.  Not even a check in!  A simple local save!  How people put up with "connected" version control systems is beyond me.  Do people not realize that Subversion does locking too?  Anyways, SVNBridge works great for both check outs and commits (we don't "check in" in the Subversion world-- we use transactional terminology).  If you want Visual Studio 2008 integration AND speed and power and flexibility with CodePlex, get VisualSVN.  It's an add-on for VS2008 that uses Tortoise behind the scenes.  With that, depending on my mood I can commit in both VS2008 (what I would do when working on refactoring or something) and in the Windows shell (what I would do when working with JavaScript files in the world's best JavaScript IDE: Notepad2).

NetFXHarmonics DevServer Released

Two months ago I started work on a project to help me in my AJAX and SOA development.  What I basically needed was a development web server that allowed me to start up multiple web servers at once, monitor server traffic, and bind to specific IP interfaces.  Thus, the creation of NetFXHarmonics DevServer.  I built it completely for myself, but others started to ask for it as well.  When the demand for it became stronger, I realized that I needed to release the project on the web.  Normally I would host it myself, but given the interest from the .NET community, I thought I would put it on CodePlex.  I've only cried twice since I put it on CodePlex, but I'll survive.

NetFXHarmonics DevServer is a web server hosting environment built on WPF and WCF technologies that allows multiple instances of Cassini-like web servers to run in parallel. DevServer also includes tracing capabilities for monitoring requests and responses, request filtering, automatic ViewState and ControlState parsing, visually enhanced HTTP status codes, IP binding modes for both local-only as well as remote access, and easy to use XML configuration.

Using this development server, I am able to simultaneously start multiple web sites to very quickly view everything that happens over the wire and therefore easily debug JSON and SOAP messages flying back and forth between client and server and between services.  This tool has been a tremendous help for me in the past few months in discovering exactly why my services are tripping out without having to enable WCF tracing.  It's also been a tremendous help in managing my own web development server instances for all my projects, each having 3-5 web sites (or segregated service endpoints).

Let me give you a quick run down of the various features in NetFXHarmonics DevServer with a little discussion of each feature's usage:

XML Configuration

NetFXHarmonics DevServer has various projects (and therefore assemblies) with the primary being DevServer.Client, the client application which houses the application's configuration.

In the app.config of DevServer.Client, you have a structure that looks something like the following:

<jampad.devServer>
</jampad.devServer>

This is where all your configuration lives and the various parts of this will be explained in their appropriate contexts in the discussions that follow.

Multiple Web Site Hosting

Inside the jampad.devServer configuration section in the app.config file, there is a branch called <servers /> which allows you to declare the various web servers you would like to load.  Each server requires a friendly name, a port, a virtual path, and a physical path; this is all that's required to configure servers.  Given this information, DevServer will know how to load your particular servers.

<servers>
  <server key="SampleWS1" name="Sample Website 1" port="2001"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  </server>
  <server key="SampleWS2" name="Sample Website 2" disabled="true" port="2003"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  </server>
</servers>

If you want to disable a specific server from loading, use the "disabled" attribute.  All disabled servers will be completely skipped in the loading process.  On the other hand, if you would like to load a single server, you can actually do this from the command line by setting a server key on the <server /> element and by accessing it via a command line argument:

DevServer.Client.exe -serverKey:SampleWS1

In most scenarios you will probably want to load various sets of servers at once.  This is especially true in properly architected service-oriented solutions.  Thus, DevServer includes a concept of startup profiles.  Each profile will include links to a number of keyed servers.  You configure these startup profiles in the <startupProfiles /> section.

<startupProfiles activeProfile="Sample">
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

This configuration block lives parallel to the <servers /> block and the inclusion of servers should be fairly self-explanatory.  When DevServer starts, it will load the profile named in the "activeProfile" attribute.  If the activeProfile attribute is missing, it will be ignored.  If the activeProfile states a profile that does not exist, DevServer will not load.  When using a startup profile, the "disabled" attribute on each server instance is ignored.  That attribute is only for non-startup profile usage.  An activeProfile may also be set via command line:

DevServer.Client.exe -activeProfile:Sample

This will override any setting in the activeProfile attribute of <startupProfiles/>.  In fact, the "serverKey" command line argument overrides the activeProfile <startupProfiles /> attribute as well.  Therefore, the order of priority is as follows: command line arguments override profile configuration, and profile configuration overrides the "disabled" attribute.

Most developers don't work on one project with only one client.  Or, even if they do, they surely have their own projects as well.  Therefore, you may have even more servers in your configuration:

<server key="ABCCorpMainWS" name="Main Website" port="7001"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\Website">
</server>
<server key="ABCCorpKBService" name="KB Service" port="7003"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\KnowledgeBaseService">
</server>
<server key="ABCCorpProductService" name="Product Service" port="7005"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\ProductService">
</server>

These would be grouped together in their own profile with the activeProfile set to that profile.

<startupProfiles activeProfile="ABCCorp">
  <profile name="ABCCorp">
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

What about loading servers from different profiles?  Well, think about it... that's a different profile:

<startupProfiles activeProfile="ABCCorpWithSampleWS1">
  <profile name="ABCCorpWithSampleWS1">
    <server key="SampleWS1" />
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
</startupProfiles>

One of the original purposes of DevServer was to allow remote non-IIS access to development web sites.  Therefore, in DevServer you can use the <binding /> configuration element to set either "loopback" (or "localhost") to only allow access to your machine, "any" to allow web access from all addresses, or you can specify a specific IP address to bind the web server to a single IP address so that only systems with access to that IP on that interface can access the web site.

In the following example, the first web site is only accessible by the local machine and the second is accessible by others.  This comes in handy both for testing in a virtual machine and for quickly doing demos.  If your evil project manager (forgive the redundancy) wants to see something, bring the web site up on all interfaces and he can poke around from his desk and then have all his complaints and irrational demands ready when he comes to your desk (maybe you want to keep this feature secret).

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
</server>
<server key="SampleWS2" name="Sample Website 2" port="2003"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  <binding address="any" />
</server>

Web Site Settings

In addition to server configuration, there is also a bit of general configuration that applies to all instances.  As you can see from the following example, you can add default documents to the existing defaults and you can also set up content type mappings.  A few content types already exist, but you can override them, as the example shows.  In this example, where ".js" is normally sent as text/javascript, you can override it to go to "application/x-javascript" or to something else.

<webServer>
  <defaultDocuments>
    <add name="index.jsx" />
  </defaultDocuments>
  <contentTypeMappings>
    <add extension=".jsx" type="application/x-javascript" />
    <add extension=".js" type="application/x-javascript" override="true" />
  </contentTypeMappings>
</webServer>

Request/Response Tracing

One of the core features of DevServer is the ability to do tracing on the traffic in each server.  Tracing is enabled by adding a <requestTracing /> configuration element to a server and setting the "enabled" attribute to true.

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
  <requestTracing enabled="true" enableVerboseTypeTracing="false" enableFaviconTracing="true" />
</server>

This will have request/response messages show up in DevServer, allowing you to view the status code, date/time, URL, POST data (if any), response data, request headers, and response headers, as well as parsed ViewState and ControlState for both the request and response.  In addition, each entry is color coded based on its status code.  Different colors will show for 301/302, 500+, and 404.

[image: the request/response trace list in DevServer]

When working with the web, you don't always want to see every little thing that happens.  Therefore, by default, only common text-based types and their content are traced: HTML, CSS, JavaScript, JSON, XAML, text, and SOAP.  If you want to trace images and other binary content as well, set "enableVerboseTypeTracing" to true.  However, since there is no need to see a big blob of image data, the data of binary types is not sent to the trace viewer even with enableVerboseTypeTracing.  You can also toggle both tracing and verbose type tracing on each server while it is running.

There's also the ability to view custom content types without seeing all the images and extra types.  This is the purpose of the <allowedContentTypes /> configuration block under <requestTracing />, which is parallel to <servers />.

<requestTracing>
  <allowedContentTypes>
    <add value="application/x-custom-type" />
  </allowedContentTypes>
</requestTracing>

In this case, responses of content-type "application/x-custom-type" are also traced without needing to turn on verbose type tracing.

However, there is another way to control this information.  If you want to see all requests, but want the runtime ability to see various content types, then you can use a client-side filter in the request/response list.  In the box immediately above the request/response list, you can type something like the following:

verb:POST;statuscode:200;file:css;contentType:text/css

Filtering occurs as you type, allowing you to find the particular request you are looking for.  The filter is NOT case sensitive.  You can also clear the request/response list with the clear button, and you can copy the particular headers you want from the headers list using the typical SHIFT-click (for a range) and CTRL-click (for individual items).

Request/response monitoring actually goes a bit further by automatically parsing both ViewState and ControlState for both request (POST) and response data.  Thanks go to Fritz Onion for granting me permission to use his ViewState parser class in DevServer.

As a Training Tool

When I announce any major project, I always provide an "as a training tool" section to explain how the project can be used for personal training.  NetFXHarmonics DevServer is built using .NET 3.5 and relies heavily on LINQ and WCF with a WPF interface.  It also uses extensive .NET custom configuration for all server configuration.  In terms of LINQ, you can find many examples of how to use both query expression syntax and extension method syntax.  When people first learn LINQ, they think that LINQ is an O/R mapper.  Well, it's not (and probably shouldn't be used for that in enterprise applications! There is only one enterprise-class O/R mapper: LLBLGen Pro).  LINQ allows Language INtegrated Query in both C# and VB.  So, in DevServer, you will see heavy reliance on LINQ to search List<T> objects and also to transform LINQ database entities into WCF DTOs.
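To illustrate the two LINQ syntaxes side by side, here's a minimal sketch; the list and its contents are hypothetical examples, not DevServer's actual data:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main()
    {
        // Hypothetical data; DevServer searches List<T> collections the same way.
        List<String> verbs = new List<String> { "GET", "POST", "PUT", "DELETE" };

        // Query expression syntax
        var queryStyle = from v in verbs
                         where v.StartsWith("P")
                         orderby v
                         select v;

        // Extension method syntax (compiles to the same calls)
        var methodStyle = verbs.Where(v => v.StartsWith("P")).OrderBy(v => v);

        Console.WriteLine(String.Join(",", queryStyle.ToArray()));   // POST,PUT
        Console.WriteLine(String.Join(",", methodStyle.ToArray()));  // POST,PUT
    }
}
```

Both forms produce the same sequence; which one to use is largely a matter of readability for the query at hand.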

DevServer also relies heavily on WCF for all inter-process communication via named pipes.  The web servers are actually hosted inside of a WCF service, thus segregating the web server loader from the client application in a very SOA-friendly manner.  The client application loads the service and then acts as a client to the service, calling on it to start, stop, and kill server instances.  WCF is also used to communicate the HTTP requests inside the web server back to the client, which is itself a WCF service to which the web server acts as a client.  Therefore, DevServer is an example of how you can use WCF to communicate between AppDomains.
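A minimal sketch of hosting a service over a named pipe; the contract, address, and operation here are hypothetical and not DevServer's actual interfaces:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IServerController
{
    [OperationContract]
    void StartServer(String key);
}

public class ServerController : IServerController
{
    public void StartServer(String key)
    {
        // ...load and start the web server instance...
    }
}

public class Program
{
    public static void Main()
    {
        // Host the controller over a named pipe; the address is hypothetical.
        using (ServiceHost host = new ServiceHost(typeof(ServerController)))
        {
            host.AddServiceEndpoint(typeof(IServerController),
                new NetNamedPipeBinding(),
                "net.pipe://localhost/devserver/controller");
            host.Open();
            // A client in another AppDomain (or process) would create a
            // ChannelFactory<IServerController> against the same address
            // and call StartServer("SampleWS1").
            Console.ReadLine();
        }
    }
}
```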

The entire interface in DevServer is a WPF application that relies heavily on WPF binding for all visual information.  All status information is in a collection to which WPF binds.  Not only that, but all request/response information is also in a collection.  WPF simply binds to the data.  Using WPF, no event handling was required to say "on a click event, obtain the SelectedIndex, pull the data, then set the text of these TextBox instances".  In WPF, you simply have normal everyday data and WPF controls bind directly to that data, being automatically updated via special interfaces (i.e. INotifyPropertyChanged and INotifyCollectionChanged) or the special generic ObservableCollection<T>.
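A minimal sketch of that pattern; the class and property names here are hypothetical, not DevServer's actual types:

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;

// Hypothetical status object; any WPF binding on State updates
// automatically whenever PropertyChanged is raised.
public class ServerStatus : INotifyPropertyChanged
{
    private string state;

    public event PropertyChangedEventHandler PropertyChanged;

    public string State
    {
        get { return state; }
        set
        {
            state = value;
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs("State"));
            }
        }
    }
}

// An ItemsControl bound to this collection sees inserts and removes
// immediately, with no manual event handling:
//   ObservableCollection<ServerStatus> statuses =
//       new ObservableCollection<ServerStatus>();
```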

Since the bindings are completely automated, there also need to be ways to "transform" data.  For example, in the TabItem header I have a little green or red icon showing the status of that particular web server instance.  There was no need to handle this manually.  There is already a property on my web server instance that holds its status.  All I needed to do was bind the image to my status enumeration and set a TypeConverter that transforms the enumeration value into a specific icon.  When the enumeration is set to Started, the icon is green; when it says "Stopped", the icon is red.  No events are required, and the only code required for this scenario is the quick creation of a TypeConverter.
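In WPF bindings the same enumeration-to-icon transformation is also commonly done with an IValueConverter; here's a sketch, where the enumeration values and icon paths are hypothetical:

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

public enum ServerState { Started, Stopped }

// Converts a ServerState into an icon path for an Image binding.
// The icon file names are hypothetical.
public class StateToIconConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
                          object parameter, CultureInfo culture)
    {
        return ((ServerState)value) == ServerState.Started
            ? "Icons/green.png"
            : "Icons/red.png";
    }

    public object ConvertBack(object value, Type targetType,
                              object parameter, CultureInfo culture)
    {
        // One-way binding only; there is no icon-to-state mapping.
        throw new NotSupportedException();
    }
}
```

The converter is referenced in XAML via the binding's Converter property, so the icon updates whenever the bound status property changes.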

Therefore, DevServer is an example of WPF data binding.  I've heard people say that they are more into architecture and WCF and therefore have no interest in learning WPF.  This statement makes no sense.  If you don't want to mess with UI stuff, you need to learn WPF.  Instead of handling events all over the place and manually setting data, you can do whatever it is you do and have WPF just bind to your data.  When it comes to creating quick client applications, WPF is a much more productive platform than Windows Forms... or even the web!

Links

Squid Micro-Blogging Library for .NET 3.5

A few years ago I designed a system that would greatly ease data syndication, data aggregation, and reporting.  The first two components of the system were repackaged and released early last year under the incredibly horrible name "Data Feed Framework".  The idea behind the system was twofold.  The first concept was that you write a SQL statement and you immediately get a fully functional RSS feed with absolutely no more work required.  Here's an example of a DFF SQL statement that creates an RSS feed of SQL Server jobs:

select Id=0,
Title=name,
Description=description
from msdb.dbo.sysjobs
where enabled = 1

The second part of DFF was its ASP.NET control, named InfoBlock, which would accept an RSS or ATOM feed and display it in a mini-reader window.  The two parts of DFF combine to create the following:

Given the following SQL statement (or more likely a stored procedure)...

select top 10
Id=pc.ContactID, 
Title=pc.FirstName + ' ' + pc.LastName + ': $' + convert(varchar(20), convert(numeric(10,2), sum(LineTotal))), 
Description='', 
LinkTemplate = '/ShowContactInformation/'
from Sales.SalesOrderDetail sod
inner join Sales.SalesOrderHeader soh on soh.SalesOrderID = sod.SalesOrderID
inner join Person.Contact pc on pc.ContactID = soh.SalesPersonID
group by pc.FirstName, pc.LastName, pc.ContactID
order by sum(LineTotal) desc

...we have an automatically updating RSS feed and when that RSS feed is given to an InfoBlock, you get the following:

[image: an InfoBlock displaying the generated RSS feed]

InfoBlocks could be placed all over a web site or intranet to give quick and easy access to continually updating information.  The InfoBlock control would also register the feed with modern web browsers that had integrated RSS support.  Furthermore, since it was styled properly in CSS, there was no reason for it to be a block at all.  It could be a horizontal list, a DOM-based window, or even a ticker, as CSS and modern AJAX techniques allow.

DFF relied on RSS.NET for syndication feed creation and on both RSS.NET and Atom.NET for aggregation.  It also used LLBLGen Pro a bit to access the data in SQL Server.  As I've promised with all my projects, they will be updated as new technologies are publicly released.  Therefore, DFF has been completely updated for .NET 3.5 technologies, including LINQ and WCF.

I've also decided to continue down my slippery slope of a change in product naming philosophy.  Whereas before I followed the Microsoft marketing philosophy of "add more words to the title until it's so long to say that you require an acronym", I've moved to the more Linux and O'Reilly approaches of "choose a random weird-sounding word and leave it be" and "pick a weird animal", respectively.  I've also been moving more towards the idea of picking a cool name and leaving it as is.  This is in contrast to Microsoft's idea of picking an awesome name and then changing it to an impossibly long name right before release (i.e. Sparkle, Acrylic, and Atlas).  Therefore, I decided to rename DFF to Squid.  I think this rivals my Dojr.NET and Prominax (to be released-- someday) projects as having the weirdest and most random name I've ever come up with.  I think it may have something to do with SQL and uhhhh.. something about a GUID.  Donno.

Squid follows the same everything as DFF; however, the dependencies on RSS.NET and ATOM.NET were completely removed.  This was possible due to the awesome syndication support in WCF 3.5.  Also, all reliance on LLBLGen Pro was removed.  LLBLGen Pro (see my training video here) is an awesome system and is the only enterprise-class O/R mapping solution in existence.  NHibernate should not be considered enterprise-class and its usability is almost through the floor.  Free in terms of up-front costs does not mean free in terms of usability (something Linux geeks don't seem to get).  However, given that LINQ is built into .NET 3.5, I decided that all my shared and open-source projects should be using LINQ, not LLBLGen Pro.  The new LLBLGen Pro uses LINQ and, when it's released, should absolutely be used as the primary solution for enterprise-class O/R mapping.

Let me explain a bit about the new syndication feature in WCF 3.5 and how it's used in Squid.  Creating a syndication feed in WCF requires a WCF endpoint just like everything else in WCF.  This endpoint will be part of a service and will have an address, binding, and contract.  Nothing fancy yet, as the sweetness is in the details.  Here's part of the contract Squid uses for its feed service (don't be jealous of the VS2008 theme -- see Scott Hanselman's post on VS2008 themes):

namespace Squid.Service
{
    [ServiceContract(Namespace = "http://www.netfxharmonics.com/services/squid/2008/03/")]
    public interface ISquidService
    {
        [OperationContract]
        [WebGet(UriTemplate = "GetFeedByTitle/{title}")]
        Rss20FeedFormatter GetFeedByTitle(String title);

        //+ More code here
    }
}

Notice the WebGet attribute.  This is applied to signify that this operation will be part of an HTTP GET request.  This relates to the fact that we are using a new WCF 3.5 binding called WebHttpBinding.  This is the same binding used by JSON and POX services.  There are actually a few new attributes, each of which provides its own treasure chest (see later in this post where I mention a free chapter on the topic).  The WebGet attribute has an awesome property on it called UriTemplate that allows you to match parameters in the request URI to parameters in the WCF operation contract.  That's beyond cool.

The service implementation is extremely straightforward.  All you have to do is create a SyndicationFeed object, populate it with SyndicationItem objects, and return it in the constructor of an Rss20FeedFormatter.  Here's a non-Squid example:

SyndicationFeed feed = new SyndicationFeed();
feed.Title = new TextSyndicationContent("My Title");
feed.Description = new TextSyndicationContent("My Desc");
List<SyndicationItem> items = new List<SyndicationItem>();
items.Add(new SyndicationItem()
{
    Title = new TextSyndicationContent("My Entry"),
    Summary = new TextSyndicationContent("My Summary"),
    PublishDate = new DateTimeOffset(DateTime.Now)
});
feed.Items = items;
//+
return new Rss20FeedFormatter(feed);

You may want to make note that you can also write an RSS or ATOM feed directly from a SyndicationFeed instance using the SaveAsRss20 and SaveAsAtom10 methods.
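For example, here's a sketch of saving a feed in both formats; the file names are hypothetical:

```csharp
using System.ServiceModel.Syndication;
using System.Xml;

public class FeedWriter
{
    public static void Save(SyndicationFeed feed)
    {
        // SaveAsRss20 and SaveAsAtom10 write to any XmlWriter;
        // the file names here are hypothetical.
        using (XmlWriter writer = XmlWriter.Create("feed.rss"))
        {
            feed.SaveAsRss20(writer);
        }
        using (XmlWriter writer = XmlWriter.Create("feed.atom"))
        {
            feed.SaveAsAtom10(writer);
        }
    }
}
```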

As with any WCF service, you need a place to host it and you need to configure it.  To create a service, I simply throw down a FeedService.svc file with the following page directive (I'm really not trying to have the ugliest color scheme in the world-- it's just an added bonus):

<%@ ServiceHost Service="Squid.Service.SquidService" %>

The configuration is also fairly straightforward: all we have is our previously mentioned endpoint with an address (blank, to use FeedService.svc directly), binding (WebHttpBinding), and contract (Squid.Service.ISquidService).  However, you also need to remember to add the webHttp behavior or else nothing will work for you.

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="FeedEndpointBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Squid.Service.SquidService">
      <endpoint address=""
                binding="webHttpBinding"
                contract="Squid.Service.ISquidService"
                behaviorConfiguration="FeedEndpointBehavior"/>
    </service>
  </services>
</system.serviceModel>

That's seriously all there is to it: write your contract, write your implementation, create a host, and set configuration.  In other words, creating a syndication feed in WCF is no different than creating a WSHttpBinding or NetTcpBinding service.  However, what about reading an RSS or ATOM feed?  That's even simpler.

To read a feed, all you have to do is create an XML reader with the data source of the feed and pass that off to the static Load method of the SyndicationFeed class.  This returns an instance of SyndicationFeed which you may iterate or, as I'm doing in Squid, transform with LINQ.  I actually liked how my server control used an internal repeater instance and therefore wanted to continue to use it.  So, I kept my ITemplate object (RssListTemplate) the same and used the following LINQ to transform a SyndicationFeed into what my ITemplate was already using:

Object bindingSource = from entry in feed.Items
                       select new SimpleFeedEntry
                       {
                           DateTime = entry.PublishDate.DateTime,
                           Link = entry.Links.First().Uri.AbsoluteUri,
                           Text = entry.Content != null ? entry.Content.ToString() : entry.Summary.Text,
                           Title = entry.Title.Text
                       };
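The loading step described above looks something like the following sketch; the feed URL is hypothetical:

```csharp
using System.ServiceModel.Syndication;
using System.Xml;

public class FeedReader
{
    public static SyndicationFeed Read()
    {
        // The URL here is hypothetical; XmlReader.Create accepts
        // a URL or file path directly.
        using (XmlReader reader = XmlReader.Create("http://www.example.com/feed"))
        {
            // Load parses either RSS 2.0 or ATOM 1.0 into one object model.
            return SyndicationFeed.Load(reader);
        }
    }
}
```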

Thus, with .NET 3.5 I was able to remove RSS.NET and ATOM.NET completely from the project.  LINQ also, of course, helped me with my database access, allowing me to remove my dependency on my LLBLGen Pro-generated DAL:

using (DataContext db = new DataContext(Configuration.DatabaseConnectionString))
{
    var collection = from p in db.FeedCreations
                     where p.FeedCreationTitle == title
                     select p;
    //+ More code here
}

Thus, you can use Squid in your existing .NET 3.5 system with little impact to anything.  Squid is what I use in my Minima blog engine to provide the boxes of information in the sidebar.  I'm able to modify the data in the Snippet table in the Squid database to modify the content and order in my sidebar.  Of course I can also easily bring in RSS/ATOM content from the web with this as well.

You can get more information on the new web support in WCF 3.5 by reading the chapter "Programmable Web" (free chapter) in the book Essential WCF for .NET 3.5 (click to buy).  This is an amazing book that I highly recommend to all WCF users.

Links
