
Minima 3.1 Released



I've always thought that one of the best ways to learn or teach a series of technologies is to create a photo gallery, some forum software, or a blog engine.  Thus, to aid in teaching various areas of .NET (and to have full control over my own blog), I created Minima v1.  Minima v2 came on the scene adding many new features and showing how LINQ can help make your DAL shine.  Then, Minima v3 showed up, demonstrating an enormous load of technologies as well as proper architectural principles.

Well, Minima 3.1 is an update to Minima 3.0 and it's still here to help people see various technologies in action.  However, while Minima 3.1 adds various features to the Minima 3.0 base, it's important to note that it's also the first major Themelia 2.0 application (Minima 3.0 was built on Themelia 1.x).  As such, not only is it a prime example of many technologies ranging from WCF to LINQ to ASP.NET controls to custom configuration, it's also a good way to see how Themelia provides a component model for the web.  In fact, Minima 3.1 is technically a Themelia 2.0 plug-in.

Here's a quick array of new blog features:

  • It's built on Themelia 2.0.  I've already said this, but it's worth mentioning again.  This isn't a classic ASP.NET application.  It's the first Themelia 2.0 application.
  • Minima automatically creates indexes (table of contents) to allow quick viewing of your site
  • Images are now stored in SQL Server as a varbinary(max) instead of as a file.
  • Themelia CodeParsers are used to automatically turn codes like {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6}}} into a link to a blog entry, {Minima{BlogEntry{3324c8df-4d49-4d4a-9878-1e88350943b6|Click here for stuff}}} into a renamed blog entry link, and {Minima{AmazonAffiliate{0875526438}}} into an Amazon.com product link with your configured (see below) affiliate ID; see the parser sketch after this list.
  • Minima now has its own full custom configuration.  Here's an example:
<minima.blog entriesToShow="7" domain="http://www.tempuri.org/">
  <service>
    <authentication defaultUserName="jdoe@tempuri.org" defaultPassword="blogpassword"/>
    <endpoint author="AuthorServiceWs2007HttpBinding" blog="BlogServiceWs2007HttpBinding" comment="CommentServiceWs2007HttpBinding" image="ImageServiceWs2007HttpBinding" label="LabelServiceWs2007HttpBinding" />
  </service>
  <suffix index="Year Review" archive="Blog Posts" label="Label Contents" />
  <display linkAuthorsToEmail="false" blankMessage="There are no entries in this view." />
  <comment subject="New Comment on Blog" />
  <codeParsers>
    <add name="AmazonAffiliate" value="net05c-20" />
  </codeParsers>
</minima.blog>

  • In addition to the normal MinimaComponent (the Themelia component which renders the blog with full interactivity), you may also use the MinimaProxyComponent to view single entries.  For example, you just add a BlogEntryProxy component to a web form and set either the blog entry guid or the blog guid plus the link, and your blog entry will show.  I built this feature in to allow Minima to be used for more than just blogging; it's a content stream.  With this feature I can keep every single page of a web site inside of Minima and no one will ever know.  There's also the MinimaViewerComponent which renders a read-only blog.  This means no rsd.xml, no site map, no commenting, no editing, just viewing.  In fact, the Themelia web site uses this component to render all its documentation.
  • There is also support for adding blog post footers.  Minima 3.1 ships with a FeedBurner footer to provide the standard FeedBurner footer.  See the "implementing" section below for more info.
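To give an idea of what a CodeParser does under the hood, here's a rough sketch of the kind of text-to-link transformation the AmazonAffiliate parser performs.  This is not Themelia's actual API; the class and method names are hypothetical, and the real parser plugs into Themelia's CodeParser infrastructure instead:

using System;
using System.Text.RegularExpressions;

public static class AmazonAffiliateParser
{
    //+ in Minima this value comes from the <codeParsers> configuration block
    private const String AffiliateId = "net05c-20";

    public static String Parse(String text)
    {
        //+ turn {Minima{AmazonAffiliate{ASIN}}} into an Amazon product link
        return Regex.Replace(
            text,
            @"\{Minima\{AmazonAffiliate\{(?<asin>[0-9A-Za-z]+)\}\}\}",
            m => String.Format(
                "<a href=\"http://www.amazon.com/dp/{0}/?tag={1}\">{0}</a>",
                m.Groups["asin"].Value,
                AffiliateId));
    }
}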

As a Training Tool

Minima is often used as a training tool for introductory, intermediate, and expert-level .NET.

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper-SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Minima 3.1 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

Architecture

Probably the most important thing to learn from Minima is architecture.  Minima is built to provide great flexibility.  However, that's not for the faint of heart.  I heard one non-architect and obvious newbie say that it was "over architected".  According to this person, apparently, adding security to your WCF services to protect your private information is "over architecting" something (not to mention the fact that WCF enforces security for username authentication).

In any case, Minima is split into two parts: the service and the web site.  I use Minima many places, but for all my blogs (or, more accurately, content streams) I have a single centralized, well-protected service set.  All my internal web sites access this central location via the WCF NetNamedPipeBinding.
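For the curious, here's a minimal sketch of what a named-pipe WCF client looks like.  The contract and address here are hypothetical stand-ins, not Minima's actual service contract:

using System;
using System.ServiceModel;

//+ hypothetical stand-in for Minima's actual service contract
[ServiceContract]
public interface IBlogService
{
    [OperationContract]
    String GetBlogTitle(String blogGuid);
}

public class BlogServiceCaller
{
    public static void Call()
    {
        //+ NetNamedPipeBinding is machine-local, which suits internal sites
        //+ talking to a well-protected service set on the same box
        ChannelFactory<IBlogService> factory = new ChannelFactory<IBlogService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/Minima/BlogService"));
        IBlogService client = factory.CreateChannel();
        String title = client.GetBlogTitle("19277C41-7E4D-4AE0-A196-25F45AC48762");
        factory.Close();
    }
}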

Implementing

Minima is NOT your everyday blog engine.  If your company needs a blog engine for various people on the team, get Community Server; Minima isn't for you.  Minima allows you to plop a blog into any existing web site.  For example, if you have an existing web site, just install Themelia (remember, Minima is a Themelia plug-in), create a new Themelia web domain, and register Minima into that web domain as follows:

<themelia.web>
  <webDomains>
    <add>
      <components>
        <add key="Minima" type="Minima.Web.Routing.MinimaComponent, Minima.Web">
          <parameters>
            <add name="page" value="~/Page_/Blog/Root.aspx" />
            <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
          </parameters>
        </add>
      </components>
    </add>
  </webDomains>
</themelia.web>

Now, on that Root.aspx page, just add a simple Minima.Web.Controls.MinimaBlog control.  Your blog immediately starts rendering.  Not only that, commenting is automatically supported.  Furthermore, you have a site map, a Windows Live Writer (MetaWeblog API) endpoint, an rsd.xml file, and a wlwmanifest.xml file.  All that just by dropping a control onto a web site without configuring anything in that page.  Of course, you can configure things if you want and you can add more controls to the page as well.  Perhaps you want a label list, an archive list, or a recent entry list.  Just add the appropriate control to the web form.  In fact, the same Minima binaries that you compile with the source are used on each of my web sites with absolutely no changes; they are all just a single control, yet look nothing alike.

Personally, I don't like to add a lot of controls to my web forms.  Thus, I normally add a placeholder control and then add my controls to that placeholder.  Furthermore, here's a snippet from my blog's web form (my entire blog has only one page):

phLabelList.Controls.Add(new Minima.Web.Controls.LabelList { Heading = "Label Cloud", ShowHeading = true, TemplateType = typeof(Minima.Web.Controls.LabelListControlTemplateFactory.SizedTemplate) });
phArchivedEntryList.Controls.Add(new Minima.Web.Controls.ArchivedEntryList { ShowEntryCount = false });
phRecentEntryList.Controls.Add(new Minima.Web.Controls.RecentEntryList());
phMinimaBlog.Controls.Add(new Minima.Web.Controls.MinimaBlog
{
    ShowAuthorSeries = false,
    PostFooterTypeInfo = Themelia.Activation.TypeInfo.GetInfo(Minima.Web.Controls.FeedBurnerPostFooter.Type, "http://feeds.feedburner.com/~s/FXHarmonics"),
    ClosedCommentText = String.Empty,
    DisabledCommentText = String.Empty
});

There's nothing here that you can't do as well.  Most everything there is self-explanatory too.  However, notice the post footer type.  By setting this type, Minima knows to render the FeedBurner post footer at the end of each entry.

Thus, with a simple configuration and a drop of a control, you can add a blog anywhere.  Or, in the case of the Themelia web site, you can add a content stream anywhere.

Here's a snippet from the configuration for the Themelia web site:

<add name="framework" path="framework" defaultPage="/Sequence_/Home.aspx" acceptMissingTrailingSlash="true">
  <components>
    <add key="Minima" type="Minima.Web.Routing.MinimaViewerComponent, Minima.Web">
      <parameters>
        <add name="blogGuid" value="19277C41-7E4D-4AE0-A196-25F45AC48762" />
      </parameters>
    </add>
  </components>
</add>

By looking at the Themelia web site, you can see that Minima isn't being used there as a blog engine, but as a content stream.  Go walk around the documentation at http://themelia.netfxharmonics.com/framework/docs.  I didn't make a bunch of pages; all I did was drop in that component and throw a Minima.Web.Controls.BlogViewer control on the page and BAM, I have an entire documentation system already built based upon various entries from my blog.

As a side note, if you look on my blog, you will see each of the Themelia blog entries has a list of links, but the same thing in the Themelia documentation does not have the link list.  This is because I've set IgnoreBlogEntryFooter to true on the BlogViewer control, thus telling Minima to remove all text after the special code.  Thus I can post the same entry in two places.

This isn't a marketing post on why you should use Minima.  If you want to use Minima, go ahead, you can contact me on my web site for help.  However, the point is to learn as much as you can about modern technology using Minima as an example.  It's not meant to be used in major web sites by just anyone at this point (though I use it in production in many places).  Having said that, the next version of Minima will be part of the Themelia suite and will have much more user support and formal documentation.

In conclusion, I say again (and again and again), you may use Minima for your personal training all you want.  That's why it's public. 


Microsoft MVP (ASP.NET) 2009



I'm rather pleased to announce that October 1, 2008 was my day: I was made a Microsoft MVP for ASP.NET.  Thus, as tradition seems to state, I'm posting a blog entry about it.

Thanks to God for not letting me have the MVP until now; the timing is flawless.  Thanks also to my MVP advisor David Silverlight who got me more serious about the MVP program and admitted to having nominated me.  Next, thanks to Scott Hanselman who fixed a clog in the MVP pipeline.  Apparently, I was in the system, but completely lost; in the wrong category or something.  He took it upon himself to contact a few people to get the problem fixed.

Thanks also to Brad Abrams for his recommendation to the MVP committee and to Rick Strahl, a fellow MVP, Microsoft ASP.NET AJAX loather, and Subversion lover who showed me that open source developers have equal rights to the MVP title.

To bring in the new [MVP] year, today I took some time and did a massive redesign to my web site featuring the cool MVP logo, which just so happens to fit perfectly in my existing color scheme.  I'll probably be tweaking the site over the next few days as the waves of whimsical changes come my way.

Minima 3.0 Released



Every few months I like to release a new open-source project or at least a new major revision of an existing project. Today I would like to introduce Minima 3.0.  This is a completely new Minima Blog Engine that is built on WCF, factored into various controls, and introduces a completely new model for ASP.NET development.

As a Training Tool

Normally I leave this for last, but this time I would like to immediately start off by mentioning how Minima 3.0 may act as a training tool. This will give you a good idea of Minima 3.0's architecture.  Here was the common "As a Training Tool" description for Minima 2.0 (a.k.a. Minima .NET 3.5):

Minima 2.0 could be used as a training tool for ASP.NET, CSS theming, proper use of global.asax, integrating with Windows Live Writer, framework design guidelines, HttpModules, HttpHandlers, HttpHandlerFactories, LINQ, type organization, proper-SQL Server table design and naming scheme, XML serialization, and XML-RPC.NET usage.

Here's the new "As a Training Tool" description for Minima 3.0:

Minima 3.0 can be used as a training tool for the same concepts and technologies as Minima 2.0 as well as SOA principles, custom WCF service host factories, custom WCF behaviors, WCF username authentication, custom WCF declarative operation-level security, WCF exception shielding and fault management, custom WCF message header usage, WCF type organization, WCF-LINQ DTO transformation, enhanced WCF clients, using WCF sessions for Captcha verification, SQL Server 2005 schema security, XmlWriter usage, ASP.NET programmatic user control usage, custom configuration sections, WCF JavaScript clients, ASP.NET control JavaScript registration, JavaScript namespaces, WCF JSON services, WCF RSS services, ASP.NET templated databinding, and ASP.NET control componentization.

As you can see, it's an entirely new beast. As you should also be able to guess, I'm not going to use Minima simply for entry-level .NET training anymore. With this new feature set, it's going to be my primary tool for intermediate and expert-level .NET training.  In the future, I'll post various blog entries giving lessons on various parts of Minima.

New Features

Since it's nowhere near the purpose of Minima, in no version have I ever claimed to have an extraordinary feature set. In fact, the actual end-user feature set of Minima 3.0 is fundamentally the same as Minima 2.0 except where features are naturally added because of the new architecture.  For example, it's now naturally a multi-blog environment with each blog allowed to have its own blog discovery data, Google sitemap, and other things.

Architecture

There are really three major "pillars" to the architecture of Minima 3.0: WCF, ASP.NET, and my Themelia Foundation (pronounced TH[as in "Thistle"]-MEH-LEE-UH; Koine Greek for "foundations"). It will take more than one blog entry to cover every aspect of Minima's architecture (see my lessons on Themelia), but for now I'll give a very brief overview.  I will explain the ASP.NET and Themelia pillars together.

WCF Architecture

The backend of Minima is WCF and is split up into various services to factor out some of the operations that occur within Minima. Of course, not every single possible operation is included as that would violate the "specificness" of SOA, but the core operations are intact.

The entire Minima security structure is now in WCF using a custom declarative operation-level security implementation.  To set security in Minima, all you have to do on the service side is apply the MinimaBlogSecurityBehavior attribute to an operation and you're all set.  Here's an example:

[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
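While this post doesn't walk through the attribute's internals, here's a simplified sketch of how a declarative operation-level security attribute can be built in WCF by implementing IOperationBehavior and installing a parameter inspector.  This is not Minima's actual implementation; the names and the permission check are hypothetical:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public enum BlogPermission { Retrieve, Create, Update, Delete }

[AttributeUsage(AttributeTargets.Method)]
public class BlogSecurityBehaviorAttribute : Attribute, IOperationBehavior
{
    public BlogPermission PermissionRequired { get; set; }

    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        //+ hook a security check into the dispatch pipeline for this operation only
        dispatchOperation.ParameterInspectors.Add(new SecurityParameterInspector(PermissionRequired));
    }

    //+ the remaining IOperationBehavior members are not needed for this scenario
    public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { }
    public void Validate(OperationDescription operationDescription) { }
}

internal class SecurityParameterInspector : IParameterInspector
{
    private readonly BlogPermission required;

    public SecurityParameterInspector(BlogPermission required)
    {
        this.required = required;
    }

    public Object BeforeCall(String operationName, Object[] inputs)
    {
        if (!CurrentUserHasPermission(required))
        {
            throw new FaultException("Access denied.");
        }
        return null;
    }

    public void AfterCall(String operationName, Object[] outputs, Object returnValue, Object correlationState) { }

    private static Boolean CurrentUserHasPermission(BlogPermission permission)
    {
        //+ placeholder: the real version would consult the user store
        return true;
    }
}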

Architectural Overview: Using LINQ in WCF



Today I would like to give an architectural overview of my usage of LINQ.  This may actually become the first in a series of architectural discussions on various .NET and AJAX technologies.  In this discussion, I'm going to be talking about the architecture of the next revision of my training blog engine, Minima.  Since the core point of any system is that which goes into and comes out of the system, the goal of this commentary will be to get to the point where LINQ projects data into WCF DTOs.  Let me start by explaining how I organize my types.  Some of you will find this boring, but it's amazing how many times I get questions on this topic.  For good reason too!  These questions show that a person's priorities are in the right place as your type, namespace, and file organization is critical to the manageability and architectural clarity of your system.

However, before we get started, let me state briefly that, as I've stated in my post entitled SQL Server Database Model Optimization for Developers, when you design your database structure you should design it with your O/R mapper in mind.  If you don't, then you will probably fall into all kinds of problems as my post describes.  This is incredibly important; however, if you keep to normal everyday normalization procedures, you are probably doing OK for the most part anyway.  Since I've written about that before, there's no reason for me to go into detail here.  Just know that if your database design sucks, your application will probably suck too.  Don't build your house on the sand.

In terms of LINQ, I actually use the VS2008 "LINQ to SQL Classes" template to create the LINQ information.  In most every other area of technology, it's a good practice to avoid wizards and templates like the plague, but when it comes to O/R mapping, you need to be using an automated tool.  If your O/R mapper requires you to do any work (…NHibernate…coughcough*), then you can't afford to work with it.  You need to be focusing on the business logic of your system, not playing around with mechanical nonsense.  As I've said in other contexts, stored procedures and ad hoc SQL are forms of unmanaged code.  When you are managing the mechanics of a system yourself, it's, by definition, unmanaged.  Stored procedures and ad hoc SQL are to LLBLGen/LINQ as ASP/PHP is to ASP.NET as C++ is to .NET languages.  If you are managing the mechanical stuff yourself, you are working with unmanaged code.  When it comes to using managed code, in the context of database access, this is the point of an O/R mapper.  Furthermore, if the O/R mapping software you are using requires you to write up templates or do manual mapping, that's obviously not completely managed code.

Now when I create LINQ classes, I will create one for each "architectural domain" of the system that I deem necessary.  For example, in a future release of Minima, there will be a LINQ class to handle my HttpHandler and UrlRewriting subsystem and another LINQ class to handle blog interaction.  There needs to be this level of flexibility or my WCF services will know too much about my web environment and my web site (a WCF client) will then have direct access to the data which the WCF service is intended to abstract.  Therefore, there will be a LINQ class for web site specific mechanics and another LINQ class for service specific mechanics.  Also, when I create the class for a particular domain, I will give it a simple name with the suffix of LINQ.  So, my Minima core LINQ class is CoreLINQ.cs and my Minima service LINQ class is ServiceLINQ.cs.  Simple.

Upon load of the LINQ designer, either after or before I drop in the specific tables required in that particular architectural domain, I'll set my context namespace to <SimpleName>.Data.Context and my entity namespace to <SimpleName>.Data.Entity.  For example, in the Minima example, I'll then have Core.Data.Context and Core.Data.Entity.  One may argue that there's nothing really going on in Core.Data.Context, to which I must respond: yeah, well there's already a lot going on in Core.Data (other data related non-LINQ logic I would create) and Core.Data.Entity.   The reason I say "after or before I drop in the specific tables" is to emphasize the fact that you can change this at a later point.  It's important to keep in mind at this point that LINQ doesn't automatically update its schema with the schema from your database.  LLBLGen Pro does have this feature built in and it does the refreshing in a masterful way, but currently LINQ doesn't have this ability.  Therefore, to do a refresh, you need to do a "CTRL-A, Delete" to delete all the tables, do a refresh in Server Explorer, and then just re-add them.  It's not much work.

Now, moving on to using LINQ.  When I'm working with both LINQ entities (or LLBLGen entities or whatever) and WCF DTOs in my WCF service, I do not bring in the LINQ entity namespace.  The ability to import types from another namespace is one of the most powerful yet underappreciated features in all of .NET (um.. JavaScript needs them!); however, when you have a Person entity in LINQ and a Person DTO, things can get confusing fast.  Therefore, to avoid all potential conflicts, the import is left out and I, instead, keep a series of type aliases at the top of my service classes just under the namespace imports.  Notice also the visual signal in the BlogEntryXAuthor table name.  This tells the developer that this is a many-to-many linking table.  In this case it's in the database schema, but if it weren't in there, I could easily alias it as BlogEntryXAuthorLINQ without affecting anyone else.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
//+
using DataContext = Minima.Service.Data.Context.MinimaServiceLINQDataContext;
using AuthorLINQ = Minima.Service.Data.Entity.Author;
using CommentLINQ = Minima.Service.Data.Entity.Comment;
using BlogLINQ = Minima.Service.Data.Entity.Blog;
using BlogEntryLINQ = Minima.Service.Data.Entity.BlogEntry;
using BlogEntryUrlMappingLINQ = Minima.Service.Data.Entity.BlogEntryUrlMapping;
using BlogEntryXAuthorLINQ = Minima.Service.Data.Entity.BlogEntryAuthor;
using LabelLINQ = Minima.Service.Data.Entity.Label;
using LabelXBlogEntryLINQ = Minima.Service.Data.Entity.LabelBlogEntry;
using UserRightLINQ = Minima.Service.Data.Entity.UserRight;
//+

Next, since we are in the context of WCF, we need to discuss validation of incoming information.  The following method is an implementation of a WCF service operation.  As you can see, when a user sends in an e-mail address, there is an immediate validation on the e-mail address that retrieves the author's LINQ entity.  This is why the validation isn't being done in a WCF behavior (even though there are tricks to get data from a behavior too!)  You may also note my camelCasing of instances of LINQ entities.  The purpose of this is to provide an incredibly obvious signal to the brain that this is an object, not simply a type (...as is the point of almost all the Framework Design Guidelines-- buy the book!; 2nd edition due Sept 29 '08).

//- @GetBlogMetaData -//
[MinimaBlogSecurityBehavior(PermissionRequired = BlogPermission.Retrieve)]
public BlogMetaData GetBlogMetaData(String blogGuid)
{
    using (DataContext db = new DataContext(ServiceConfiguration.ConnectionString))
    {
        //+ ensure blog exists
        BlogLINQ blogLinq;
        Validator.EnsureBlogExists(blogGuid, out blogLinq, db);
        //+
        return new BlogMetaData
        {
            Description = blogLinq.BlogDescription,
            FeedTitle = blogLinq.BlogFeedTitle,
            FeedUri = new Uri(blogLinq.BlogFeedUrl),
            Guid = blogLinq.BlogGuid,
            Title = blogLinq.BlogTitle,
            Uri = new Uri(blogLinq.BlogPrimaryUrl),
            CreateDateTime = blogLinq.BlogCreateDate,
            LabelList = new List<Label>(
                blogLinq.Labels.Select(p => new Label
                {
                    Guid = p.LabelGuid,
                    FriendlyTitle = p.LabelFriendlyTitle,
                    Title = p.LabelTitle
                })
            )
        };
    }
}

It would probably be a good idea at this point to step into the Validator class to see what's really going on here.  As you can see in the following class, I have two methods (in reality there are dozens!) and most of it should be obvious.  The validation is obviously in the second method; however, it's the first one that's being directly called.  Notice two things about this: First, notice that I'm passing in my DataContext.  This is to completely obliterate any possibilities of overlapping DataContexts and, therefore, any strange locking issues.  Second, notice that I'm pre-registering my messages in a strongly typed Message class (notice also that the members of Message are not marked static-- the magic of const.)  This last piece could easily be done in a way that provides for nice localization.

Now moving on to the actual validation.  Unless I'm desperately trying to inline some code, I normally declare the LINQ criteria prior to the actual LINQ statement.  Of course, this is exactly what the Func<T1, T2> delegate is doing.  Notice also that I try to bring the semantics of the criteria into the name of the object.  This really helps in making many of your LINQ statements read more naturally: "db.Person.Where(hasEmployees)".

namespace Minima.Service.Validation
{
    internal static class Validator
    {
        //- ~Message -//
        internal class Message
        {
            public const String InvalidEmail = "Invalid author Email";
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, DataContext db)
        {
            EnsureAuthorExists(authorEmail, out authorLinq, Message.InvalidEmail, db);
        }


        //- ~EnsureAuthorExists -//
        internal static void EnsureAuthorExists(String authorEmail, out AuthorLINQ authorLinq, String message, DataContext db)
        {
            Func<AuthorLINQ, Boolean> authorExists = x => x.AuthorEmail == authorEmail;
            authorLinq = db.Authors.SingleOrDefault(authorExists);
            if (authorLinq == null)
            {
                FaultThrower.Throw<ArgumentException>(new ArgumentException(message));
            }
        }
    }
}

In the actual query itself, you can see that the semantics of the method are that a maximum of one author should be returned.  Therefore, I'm able to use the Single or SingleOrDefault methods.  Note that if you use these and the query matches more than one entity, an exception will be thrown, as Single and SingleOrDefault only allow what their name implies.  In this case here, AuthorEmail is the primary key in the database and, by definition, there can be only one (at this point I'm sure about 30% of you are doing Sean Connery impressions).  The difference between Single and SingleOrDefault is simple: when the criteria is not met, Single throws an exception and SingleOrDefault returns the type's default value.  The default of a type is that which the C# "default" keyword will return.  In other words, a reference type will be null and a struct will be something else (i.e. 0 for Int32).  In this case, I'm dealing with my AuthorLINQ class, which is obviously a reference type, and therefore I need to check null on it.  If it's null, then that author doesn't exist and I need to throw a fault (which is what my custom FaultThrower class does).  What's a fault?  That's a topic for a different post.
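For anyone new to these methods, here's a quick illustration of those semantics using a plain in-memory list (assuming the usual System, System.Collections.Generic, and System.Linq imports):

List<Int32> numbers = new List<Int32> { 1, 2, 3 };
//+
Int32 two = numbers.Single(n => n == 2);           //+ exactly one match: returns 2
Int32 none = numbers.SingleOrDefault(n => n == 9); //+ no match: returns default(Int32), i.e. 0
//numbers.Single(n => n == 9);                     //+ no match: throws InvalidOperationException
//numbers.Single(n => n > 1);                      //+ two matches: throws InvalidOperationException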

As you can see from the method signatures, not only is the author e-mail address being validated, the LINQ entity is being returned to the caller via an out parameter.  Once I have this authorLinq entity, I can proceed to use its primary key (AuthorId) in various other LINQ queries.  It's critical to remember that you always want to make sure that you are only using validated information.  If you aren't, then you have no idea what will happen to your system.  Therefore, you should ignore all IDs that are sent into a WCF service operation and use only the validated ones.  A thorough discussion of this topic is left for a future discussion.

Now we are finally at the place where LINQ to WCF projection happens.  For clarity, here it is again (no one likes to scroll back and forth):

return new BlogMetaData
{
    Description = blogLinq.BlogDescription,
    FeedTitle = blogLinq.BlogFeedTitle,
    FeedUri = new Uri(blogLinq.BlogFeedUrl),
    Guid = blogLinq.BlogGuid,
    Title = blogLinq.BlogTitle,
    Uri = new Uri(blogLinq.BlogPrimaryUrl),
    CreateDateTime = blogLinq.BlogCreateDate,
    LabelList = new List<Label>(
        blogLinq.Labels.Select(p => new Label
        {
            Guid = p.LabelGuid,
            FriendlyTitle = p.LabelFriendlyTitle,
            Title = p.LabelTitle
        })
    )
};

The basic flow of this is as follows: in DataContext db, in the Blogs table, pull the sub-set where PersonId == AuthorId, then transform that data into a new type via Select.  The DTO projection is obviously happening in the Select method.  This method is akin to a SELECT in SQL.  My point in saying that is to make sure that you are aware that SELECT is not a filter; that's what Where does.  After execution of the Where method as well as after execution of the Select method, you have an IQueryable<Blog> object, which contains information about the query, but no actual data yet.  LINQ defers execution of SQL statements until they are actually used.  In this case, the data is actually being used when the ToList method is called.  This of course returns a List<Blog>, which is exactly what this service operation should do.  What's really nice about this is that WCF loves List<T>.  It's not a big fan of Collection<T>, but List<T> is its friend.  Over the wire it's an Array and when it's being used by a WCF client, it's also a List<T> object.
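To make the deferred execution point concrete, here's a sketch of that flow (db, the Blogs table, and the column names are hypothetical stand-ins following the description above):

//+ nothing has touched the database yet; these calls only build up the query
var query = db.Blogs
    .Where(b => b.PersonId == authorId)
    .Select(b => new Blog { Guid = b.BlogGuid, Title = b.BlogTitle });
//+
//+ the SQL executes here, when the data is actually used
List<Blog> blogList = query.ToList();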

In closing I should mention something that I know people are going to ask me about: to project from WCF DTO to LINQ you do the exact same thing.  LINQ isn't a database-specific technology.  You can LINQ between all kinds of things.  Though I use LINQ for my data access in many projects, most of my LINQ usage is actually for searching lists, combining two lists together, or modifying the data that gets bound to the interface.  It's incredibly powerful.

Moving into a non-Minima example: if you needed to have a person's full name in a WPF ListBox and the name-specific LINQ properties you have are FirstName and LastName, instead of doing tricks in your ItemTemplate, you can just have your ItemsSource use LINQ to sew the FirstName and LastName together.

lstPerson.ItemsSource = personList.Select(p => new
{
    FullName = p.FirstName + " " + p.LastName,
    p.PostalCode,
    Country = p.Country ?? String.Empty
});

The really sweet part about this is the fact that LINQ entities implement the INotifyPropertyChanged interface, so when doing WPF data binding, WPF will automatically update the ListBox when the data changes!  Of course, this doesn't help you if you are doing a serious SOA system.  Therefore, my DTOs normally implement INotifyPropertyChanged as well.  This is not a WPF-specific interface (it lives in System.ComponentModel) and therefore does not tie the business object to any presentation.
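Here's a minimal sketch of what such a DTO can look like (the property names are illustrative):

using System;
using System.ComponentModel;

public class PersonDto : INotifyPropertyChanged
{
    private String firstName;

    public String FirstName
    {
        get { return firstName; }
        set
        {
            firstName = value;
            OnPropertyChanged("FirstName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(String propertyName)
    {
        //+ WPF (or any other binder) picks this up and refreshes the UI
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}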

That should show you a bit more of how LINQ can work with all kinds of stuff.  Therefore, it shouldn't be hard to figure out how to project from a WCF DTO to LINQ. You could literally copy/paste the LINQ -> DTO code and just switch around a few names.

If you are new to LINQ, then I recommend the book Pro LINQ by Joseph C. Rattz Jr. However, if you are already using LINQ or want a view into its internal mechanics, then I must recommend LINQ in Action by Fabrice Marguerie, Steve Eichert, and Jim Wooley.

Spring 2008 Sabbatical



Starting May 23rd, I'm taking another sabbatical to work on my company projects, to continue my seminary work, and to work on my book (to be clear: sabbatical != vacation).  During this time I will be accepting part-time AJAX, WCF, ASP.NET (no graphics work!-- hire a professional graphic designer, they are worth the money!), or general C# 3.0 and .NET 3.5 telecommuting consulting.  I'll assist in projects, but I'm not going to be able to work as senior architect on any projects.  Also remember, as a web developer, it's my duty to make sure my projects work in Mozilla, Opera, Safari, and IE, and are in no way IE-specific.  IE-only environments are the absolute most difficult to work with.

Also keep in mind that this is 2008, not 1988; the primary purpose of modern technology is to allow us to have simpler lives, and just about every single aspect of our technology has its root in the Internet allowing us to communicate from anywhere.  What's the point in having web casting and online meeting abilities, or in having online white boarding or web-based project management software, or even Google Office, if you aren't going to use them in a meaningful way?  Why have e-mail at all if you are going to absolutely rely on the ability to go to the person's office?  The addiction to physical contact is something that needs to be broken in the 21st century.  Stop managing with your physical "field of view" and start managing by results.

I'm a web developer/architect, not a piano mover; I don't need to be in a physical office.  If you are into technology at all, you are into moving your physical resources into a logical cloud.  If I've said it once, I've said it a million times: your associates are your greatest resource and should, therefore, be even more in a logical cloud (as they are humans and would appreciate it more!)  It is inconsistent to pursue logical management of resources and require physical management of personnel.  Not only that, but it costs a lot less (no office space required!)  If your employees don't have enough discipline to work from home, what makes you think they are working in their cubes?  Unless you are working off the failed notion of "hourly management" instead of being a results-oriented manager, you won't have a problem with 100% telecommuting.  Results matter, not "time".  Also, if you don't trust your employees, well… maybe you hired the wrong people (or maybe have trust issues in general?)  Trust is the foundation of all life.  I could speak volumes on this topic, but I'll leave that to the expert: Timothy Ferriss.  See his blog or get his book for more information.  I'm only an anonymous disciple of his; he is the master and authority on this topic.  Therefore, send your flames (read: insecurities) his way (after you read his book!-- audio also available; they are both worth 100x their weight in gold!)  See also Scott Hanselman's interview with Timothy Ferriss.  His YouTube page is also available.

With regards to the book, let me simply say that it's generically about AJAX communication and I'm not going to give out too many specific details on the project at this point, but I will say this: AJAX + SOA - CSS + Prototype + (ASP.NET Controls) - (ASP.NET AJAX) + WCF + (.NET 2.0 Service Interop) + Silverlight + Development Tools.  Also, I reserve the right to turn it into a video series (likely), make it a complete learning set of reading + video series (even more likely!), or to completely chuck the project.  I don't like to do things the classical way, so whatever I do, you can bet on the fact that I won't do the traditional "book".  As I've always said, the blog is the new book, but for this I think I may use a different paradigm.  I've turned down two book offers so far because I absolutely refuse to throw more paper on a bookshelf or do something that's been done a million times before.

If you are moving from ASP to ASP.NET, from PHP to ASP.NET, or from ASMX to WCF 3.5, or want to add AJAX to your solutions, drop me an e-mail and let's talk.

NetFXHarmonics on CodePlex



Well, I finally broke down.  My public projects are now freely available for download on CodePlex.


As far as creating "releases" goes, these are shared-source/open-source projects and in the model I'm following "releases" are always going to be obsolete.  Therefore, I will provide ZIP versions of the archived major revisions of a project and the current revision will always be available as source code.  The only exception to this may be DevServer, for which I may do monthly releases or releases based upon major upgrades.  I'm currently working on new major revisions for a few other projects and when they are completed, I will post them on CodePlex as well.

As a reminder, my projects are always architected to follow the current best practices and idioms for a particular technology and are therefore often fully re-architected based on the current technology.  The reason I do this is the simple reason that my core specialty is training (technology or not) and that's the driving principle in my projects.  Therefore, on each of my projects there is an "As a Training Tool" section that will explain that project's technology and architecture as well as what else you might be able to learn from it.

As a final note, SVNBridge is working OK for me and has really helped me get over the CodePlex hurdle.  Scott Hanselman was kind enough to encourage me to try SVNBridge again.  I'm honestly glad I did.  The Team System 2008 Team Explorer client which integrates into Visual Studio works every now and again, but I got absolutely sick of everything locking up every time I would save a file.  Not even a check in!  A simple local save!  How people put up with "connected" version control systems is beyond me.  Do people not realize that Subversion does locking too?  Anyways, SVNBridge works great for both check outs and commits (we don't "check in" in the Subversion world-- we use transactional terminology).  If you want Visual Studio 2008 integration AND speed and power and flexibility with CodePlex, get VisualSVN.  It's an add-on for VS2008 that uses TortoiseSVN behind the scenes.  With that, depending on my mood I can commit in both VS2008 (what I would do when working on refactoring or something) and in the Windows shell (what I would do when working with JavaScript files in the world's best JavaScript IDE: Notepad2).

NetFXHarmonics DevServer Released



Two months ago I started work on a project to help me in my AJAX and SOA development.  What I basically needed was a development web server that allowed me to start up multiple web servers at once, monitor server traffic, and bind to specific IP interfaces.  Thus, the creation of NetFXHarmonics DevServer.  I built it completely for myself, but others started to ask for it as well.  When the demand for it became stronger, I realized that I needed to release the project on the web.  Normally I would host it myself, but given the interest from the .NET community, I thought I would put it on CodePlex.  I've only cried twice since I put it on CodePlex, but I'll survive.

NetFXHarmonics DevServer is a web server hosting environment built on WPF and WCF technologies that allows multiple instances of Cassini-like web servers to run in parallel. DevServer also includes tracing capabilities for monitoring requests and responses, request filtering, automatic ViewState and ControlState parsing, visually enhanced HTTP status codes, IP binding modes for both local-only as well as remote access, and easy to use XML configuration.

Using this development server, I am able to simultaneously start multiple web sites to very quickly view everything that happens over the wire and therefore easily debug JSON and SOAP messages flying back and forth between client and server and between services.  This tool has been a tremendous help for me in the past few months to discover exactly why my services are tripping out without having to enable WCF tracing.  It's also been a tremendous help in managing my own web development server instances for all my projects, each having 3-5 web sites (or segregated service endpoints).

Let me give you a quick run down of the various features in NetFXHarmonics DevServer with a little discussion of each feature's usage:

XML Configuration

NetFXHarmonics DevServer has various projects (and therefore assemblies) with the primary being DevServer.Client, the client application which houses the application's configuration.

In the app.config of DevServer.Client, you have a structure that looks something like the following:

<jampad.devServer>
</jampad.devServer>

This is where all your configuration lives and the various parts of this will be explained in their appropriate contexts in the discussions that follow.

Multiple Web Site Hosting

Inside of the jampad.devServer configuration section in the app.config file, there is a branch called <servers /> which allows you to declare the various web servers you would like to load.  This is all that's required to configure servers.  Each server requires a friendly name, a port, a virtual path, and a physical path.  Given this information, DevServer will know how to load your particular servers.

<servers>
  <server key="SampleWS1" name="Sample Website 1" port="2001"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  </server>
  <server key="SampleWS2" name="Sample Website 2" disabled="true" port="2003"
          virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  </server>
</servers>

If you want to disable a specific server from loading, use the "disabled" attribute.  All disabled servers will be completely skipped in the loading process.  On the other hand, if you would like to load a single server, you can actually do this from the command line by setting a server key on the <server /> element and by accessing it via a command line argument:

DevServer.Client.exe -serverKey:SampleWS1

In most scenarios you will probably want to load various sets of servers at once.  This is especially true in properly architected service-oriented solutions.  Thus, DevServer includes a concept of startup profiles.  Each profile will include links to a number of keyed servers.  You configure these startup profiles in the <startupProfiles /> section.

<startupProfiles activeProfile="Sample">
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

This configuration block lives parallel to the <servers /> block and the inclusion of servers should be fairly self-explanatory.  When DevServer starts it will load the profile in the "activeProfile" attribute.  If the activeProfile block is missing, it will be ignored.  If the activeProfile states a profile that does not exist, DevServer will not load.  When using a startup profile, the "disabled" attribute on each server instance is ignored.  That attribute is only for non-startup profile usage.  An activeProfile may also be set via command line:

DevServer.Client.exe -activeProfile:Sample

This will override any setting in the activeProfile attribute of <startupProfiles />.  In fact, the "serverKey" command line argument overrides the activeProfile <startupProfiles /> attribute as well.  Therefore, the order of priority is as follows: the command line argument overrides profile configuration, and profile configuration overrides the "disabled" attribute.

Most developers don't work on one project and with only one client.  Or, even if they do, they surely have their own projects as well.  Therefore, you may have even more servers in your configuration:

<server key="ABCCorpMainWS" name="Main Website" port="7001"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\Website">
</server>
<server key="ABCCorpKBService" name="KB Service" port="7003"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\KnowledgeBaseService">
</server>
<server key="ABCCorpProductService" name="Product Service" port="7005"
        virtualPath="/" physicalPath="C:\Project\ABCCorp\ProductService">
</server>

These would be grouped together in their own profile with the activeProfile set to that profile.

<startupProfiles activeProfile="ABCCorp">
  <profile name="ABCCorp">
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
  <profile name="Sample">
    <server key="SampleWS1" />
    <server key="SampleWS2" />
  </profile>
</startupProfiles>

What about loading servers from different profiles?  Well, think about it... that's a different profile:

<startupProfiles activeProfile="ABCCorpWithSampleWS1">
  <profile name="ABCCorpWithSampleWS1">
    <server key="SampleWS1" />
    <server key="ABCCorpMainWS" />
    <server key="ABCCorpKBService" />
    <server key="ABCCorpProductService" />
  </profile>
</startupProfiles>

One of the original purposes of DevServer was to allow remote non-IIS access to development web sites.  Therefore, in DevServer you can use the <binding /> configuration element to set either "loopback" (or "localhost") to only allow access to your machine, "any" to allow web access from all addresses, or you can specific a specific IP address to bind the web server to a single IP address so that only systems with access to that IP on that interface can access the web site.

In the following example the first web site is only accessible by the local machine and the second is accessible by others.  This comes in handy for both testing in a virtual machine as well as quickly doing demos.  If your evil project manager (forgive the redundancy) wants to see something, bring the web site up on all interfaces and he can poke around from his desk and then have all his complaints and irrational demands ready when he comes to your desk (maybe you want to keep this feature secret).

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
</server>
<server key="SampleWS2" name="Sample Website 2" port="2003"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite2">
  <binding address="any" />
</server>

Web Site Settings

In addition to server configuration, there is also a bit of general configuration that applies to all instances.  As you can see from the following example, you can add default documents to the existing defaults and you can also set up content type mappings.  A few content types already exist, but you can override them as the example shows.  In this example, where ".js" is normally sent as text/javascript, you can override it to go as "application/x-javascript" or something else.

<webServer>
  <defaultDocuments>
    <add name="index.jsx" />
  </defaultDocuments>
  <contentTypeMappings>
    <add extension=".jsx" type="application/x-javascript" />
    <add extension=".js" type="application/x-javascript" override="true" />
  </contentTypeMappings>
</webServer>

Request/Response Tracing

One of the core features of DevServer is the ability to do tracing on the traffic in each server.  Tracing is enabled by adding a <requestTracing /> configuration element to a server and setting the "enabled" attribute to true.

<server key="SampleWS1" name="Sample Website 1" port="2001"
        virtualPath="/" physicalPath="C:\Project\DevServer\SampleWebsite1">
  <binding address="loopback" />
  <requestTracing enabled="true" enableVerboseTypeTracing="false" enableFaviconTracing="true" />
</server>

This will have request/response messages show up in DevServer, which will allow you to view the status code, date/time, URL, POST data (if any), response data, request headers, and response headers, as well as parsed ViewState and ControlState for both the request and response.  In addition, each entry is color coded based on its status code.  Different colors will show for 301/302, 500+, and 404.


When working with the web, you don't always want to see every little thing that happens all the time.  Therefore, by default, you only trace common text-specific files like HTML, CSS, JavaScript, JSON, XAML, Text, and SOAP and their content.  If you want to trace images and other things going across, then set "enableVerboseTypeTracing" to true.  However, since there is no need to see big blobs of image data, the data of binary types is not sent to the trace viewer even with enableVerboseTypeTracing.  You can also toggle both tracing and verbose type tracing on each server while it is running.

There's also the ability to view custom content types without seeing all the images and extra types.  This is the purpose of the <allowedContentTypes /> configuration block under <requestTracing />, which is parallel to <servers />.

<requestTracing>
  <allowedContentTypes>
    <add value="application/x-custom-type" />
  </allowedContentTypes>
</requestTracing>

In this case, responses of content-type "application/x-custom-type" are also traced without needing to turn on verbose type tracing.

However, there is another way to control this information.  If you want to see all requests, but want the runtime ability to see various content types, then you can use a client-side filter in the request/response list.  In the box immediately above the request/response list, you can type something like the following:

verb:POST;statuscode:200;file:css;contentType:text/css

Filtering will occur as you type, allowing you to find the particular request you are looking for.  The filter is NOT case sensitive.  You can also clear the request/response list with the clear button.  There is also the ability to copy/paste the particular headers that you want from the headers list by using typical SHIFT (for range) and CTRL-clicking (for single choosing).

Request/Response monitoring actually goes a bit further by automatically parsing both ViewState and ControlState for both request (POST) and response data.  Thanks goes to Fritz Onion for granting me permission to use his ViewState parser class in DevServer.

As a Training Tool

Whenever I announce any major project, I always provide an "as a training tool" section to explain how the project can be used for personal training.  NetFXHarmonics DevServer is built using .NET 3.5 and relies heavily on LINQ and WCF with a WPF interface.  It also uses extensive .NET custom configuration for all server configuration.  In terms of LINQ, you can find many examples of how to use both query expression syntax and extension method syntax.  When people first learn LINQ, they think that LINQ is an O/R mapper.  Well, it's not (and probably shouldn't be used for that in enterprise applications! there is only one enterprise class O/R mapper: LLBLGen Pro).  LINQ allows Language INtegrated Query in both C# and VB.  So, in DevServer, you will see heavy reliance on LINQ to search List<T> objects and also to transform LINQ database entities to WCF DTOs.

DevServer also relies heavily on WCF for all inter-process communication via named pipes.  The web servers are actually hosted inside of a WCF service, thus segregating the web server loader from the client application in a very SOA friendly manner.  The client application loads the service and then acts as a client to the service, calling on it to start, stop, and kill server instances.  WCF is also used to communicate the HTTP requests inside the web server back to the client, which is itself a WCF service to which the HTTP request is a client.  Therefore, DevServer is an example of how you can use WCF to communicate between AppDomains.

The entire interface in DevServer is a WPF application that relies heavily on WPF binding for all visual information.  All status information is in a collection to which WPF binds.  Not only that, but all request/response information is also in a collection.  WPF simply binds to the data.  Using WPF, no event handling was required to say "on a click event, obtain SelectedIndex, pull data, then set the text of these TextBox instances".  In WPF, you simply have normal everyday data and WPF controls bind directly to that data, being automatically updated via special interfaces (i.e. INotifyPropertyChanged and INotifyCollectionChanged) or the special generic ObservableCollection<T>.
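As a rough sketch of that pattern (the names here are hypothetical, not DevServer's actual types), binding an ItemsControl to an ObservableCollection<T> means new trace entries appear in the UI with no event handling at all:

using System;
using System.Collections.ObjectModel;

public class TraceViewModel
{
    //+ any WPF ItemsControl bound to this collection re-renders automatically
    //+ when entries are added or removed
    public ObservableCollection<String> Requests { get; private set; }

    public TraceViewModel()
    {
        Requests = new ObservableCollection<String>();
    }
}

//+ in the window: lstRequests.ItemsSource = viewModel.Requests;
//+ later, from the tracing code: viewModel.Requests.Add("GET /index.html 200");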

Since the bindings are completely automated, there also needs to be ways to "transform" data.  For example, in the TabItem header I have a little green or red icon showing the status of that particular web server instance.  There was no need to handle this manually.  There is already a property on my web server instance that has a status.  All I need to do is bind the image to my status enumeration and set a TypeConverter which transforms the enumeration value to a specific icon.  When the enumeration is set to Started, the icon is green, when it says "Stopped", the icon is red.  No events are required and the only code required for this scenario is the quick creation of a TypeConverter.
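In WPF bindings, this kind of enumeration-to-icon transformation is commonly written as an IValueConverter; here's a sketch of that idea (the enumeration and icon paths are hypothetical):

using System;
using System.Globalization;
using System.Windows.Data;

public enum ServerStatus { Started, Stopped }

public class StatusToIconConverter : IValueConverter
{
    public Object Convert(Object value, Type targetType, Object parameter, CultureInfo culture)
    {
        //+ map the status enumeration to an icon; no click/event handling required
        return (ServerStatus)value == ServerStatus.Started
            ? "Icons/GreenLight.png"
            : "Icons/RedLight.png";
    }

    public Object ConvertBack(Object value, Type targetType, Object parameter, CultureInfo culture)
    {
        //+ one-way binding only
        throw new NotSupportedException();
    }
}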

Therefore, DevServer is an example of WPF databinding.  I've heard people say that they are more into architecture and WCF and therefore have no interest in learning WPF.  This statement makes no sense.  If you don't want to mess with UI stuff, you need to learn WPF.  Instead of handling events all over the place and manually setting data, you can do whatever it is you do and have WPF just bind to your data.  When it comes to creating quick client applications, WPF is a much more productive platform than Windows Forms... or even the web!


Squid Micro-Blogging Library for .NET 3.5



A few years ago I designed a system that would greatly ease data syndication, data aggregation, and reporting.  The first two components of the system were repackaged and released early last year under the incredibly horrible name "Data Feed Framework".  The idea behind the system was twofold.  The first concept was that you write a SQL statement and you immediately get a fully functional RSS feed with absolutely no more work required.  Here's an example of a DFF SQL statement that creates an RSS feed of SQL Server jobs:

select Id=0,
Title=name,
Description=description
from msdb.dbo.sysjobs
where enabled = 1

The second part of DFF was its ASP.NET control named InfoBlock that would accept an RSS or ATOM feed and display it in a mini-reader window.  The two parts of DFF combine to create the following:

Given the following SQL statement (or more likely a stored procedure)...

select top 10
Id=pc.ContactID, 
Title=pc.FirstName + ' ' + pc.LastName + ': $' + convert(varchar(20), convert(numeric(10,2), sum(LineTotal))), 
Description='', 
LinkTemplate = '/ShowContactInformation/{id}'
from Sales.SalesOrderDetail sod
inner join Sales.SalesOrderHeader soh on soh.SalesOrderID = sod.SalesOrderID
inner join Person.Contact pc on pc.ContactID = soh.SalesPersonID
group by pc.FirstName, pc.LastName, pc.ContactID
order by sum(LineTotal) desc

...we have an automatically updating RSS feed and when that RSS feed is given to an InfoBlock, you get the following:


InfoBlocks could be placed all over a web site or intranet to give quick and easy access to continually updating information.  The InfoBlock control would also register the feed with modern web browsers that had integrated RSS support.  Furthermore, since it was styled properly in CSS, there's no reason for it to be a block at all.  It could be a horizontal list, a DOM-based window, or even a ticker as CSS and modern AJAX techniques allow.

DFF relied on RSS.NET for syndication feed creation and both RSS.NET and Atom.NET for aggregation.  It also used LLBLGen Pro a bit to access the data from SQL Server.  As I've promised with all my projects, they will update as new technologies are publicly released.  Therefore, DFF has been completely updated for .NET 3.5 technologies including LINQ and WCF.

I've also decided to continue down my slippery slope of a change in product naming philosophy, moving from the Microsoft marketing philosophy of "add more words to the title until it's so long to say that you require an acronym" to the more Linux and O'Reilly approaches of "choose a random weird sounding word and leave it be" and "pick a weird animal", respectively.  I've also been moving more towards the idea of picking a cool name and leaving it as is.  This is in contrast to Microsoft's idea of picking an awesome name and then changing it to an impossibly long name right before release (i.e. Sparkle, Acrylic, and Atlas).  Therefore, I decided to rename DFF to Squid.  I think this rivals my Dojr.NET and Prominax (to be released-- someday) projects as having the weirdest and most random name I've ever come up with.  I think it may have something to do with SQL and uhhhh.. something about a GUID.  Donno.

Squid follows the same everything as DFF; however, the dependencies on RSS.NET and ATOM.NET were completely removed.  This was possible due to the awesome syndication support in WCF 3.5.  Also, all reliance on LLBLGen Pro was removed.  LLBLGen Pro (see my training video here) is an awesome system and is the only enterprise-class O/R mapping solution in existence.  NHibernate should not be considered enterprise-class and its usability is almost through the floor.  Free in terms of up-front costs does not mean free in terms of usability (something Linux geeks don't seem to get).  However, given that LINQ is built into .NET 3.5, I decided that all my shared and open-source projects should be using LINQ, not LLBLGen Pro.  The new LLBLGen Pro uses LINQ and, when it's released, should absolutely be used as the primary solution for enterprise-class O/R mapping.

Let me explain a bit about the new syndication feature in WCF 3.5 and how it's used in Squid.  Creating a syndication feed in WCF requires a WCF endpoint, just like everything else in WCF.  This endpoint will be part of a service and will have an address, binding, and contract.  Nothing fancy yet, as the sweetness is in the details.  Here's part of the contract Squid uses for its feed service (don't be jealous of the VS2008 theme -- see Scott Hanselman's post on VS2008 themes):

namespace Squid.Service
{
    [ServiceContract(Namespace = "http://www.netfxharmonics.com/services/squid/2008/03/")]
    public interface ISquidService
    {
        [OperationContract]
        [WebGet(UriTemplate = "GetFeedByTitle/{title}")]
        Rss20FeedFormatter GetFeedByTitle(String title);

        //+ More code here
    }
}

Notice the WebGet attribute.  This is applied to signify that the operation will be part of an HTTP GET request.  This relates to the fact that we are using a new WCF 3.5 binding called the WebHttpBinding, the same binding used by JSON and POX services.  There are actually a few new attributes, each of which provides its own treasure chest (see later in this post where I mention a free chapter on the topic).  The WebGet attribute has an awesome property on it called UriTemplate that allows you to match parameters in the request URI to parameters in the WCF operation contract.  That's beyond cool.
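
To see how that matching works, here's a hypothetical operation (not part of the actual Squid contract) in which multiple UriTemplate variables map by name onto the operation's parameters:

//+ hypothetical operation for illustration only;
//+ in .NET 3.5, UriTemplate path variables arrive as strings
[OperationContract]
[WebGet(UriTemplate = "GetFeedByMonth/{year}/{month}")]
Rss20FeedFormatter GetFeedByMonth(String year, String month);

A request of the form FeedService.svc/GetFeedByMonth/2008/03 would bind "2008" to year and "03" to month.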

The service implementation is extremely straightforward.  All you have to do is create a SyndicationFeed object, populate it with SyndicationItem objects, and return it wrapped in an Rss20FeedFormatter.  Here's a non-Squid example:

//+ SyndicationFeed and friends live in System.ServiceModel.Syndication
SyndicationFeed feed = new SyndicationFeed();
feed.Title = new TextSyndicationContent("My Title");
feed.Description = new TextSyndicationContent("My Desc");
List<SyndicationItem> items = new List<SyndicationItem>();
items.Add(new SyndicationItem()
{
    Title = new TextSyndicationContent("My Entry"),
    Summary = new TextSyndicationContent("My Summary"),
    PublishDate = new DateTimeOffset(DateTime.Now)
});
feed.Items = items;
//+
return new Rss20FeedFormatter(feed);

You may want to make note that you can also write an RSS or ATOM feed directly from a SyndicationFeed instance using the SaveAsRss20 and SaveAsAtom10 methods.
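
For instance, here's a minimal sketch (the file paths are hypothetical) that writes the feed built in the previous snippet to disk in both formats via System.Xml's XmlWriter:

using (XmlWriter writer = XmlWriter.Create(@"C:\feed.rss"))
{
    //+ serialize the same SyndicationFeed as RSS 2.0
    feed.SaveAsRss20(writer);
}
using (XmlWriter writer = XmlWriter.Create(@"C:\feed.atom"))
{
    //+ ...or as ATOM 1.0
    feed.SaveAsAtom10(writer);
}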

As with any WCF service, you need a place to host it and you need to configure it.  To create a service, I simply throw down a FeedService.svc file with the following page directive (I'm really not trying to have the ugliest color scheme in the world-- it's just an added bonus):

<%@ ServiceHost Service="Squid.Service.SquidService" %>

The configuration is also fairly straightforward: all we have is our previously mentioned endpoint with an address (blank, to use FeedService.svc directly), binding (webHttpBinding), and contract (Squid.Service.ISquidService).  However, you also need to remember to add the webHttp behavior or else nothing will work for you.

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="FeedEndpointBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Squid.Service.SquidService">
      <endpoint address=""
                binding="webHttpBinding"
                contract="Squid.Service.ISquidService"
                behaviorConfiguration="FeedEndpointBehavior"/>
    </service>
  </services>
</system.serviceModel>

That's seriously all there is to it: write your contract, write your implementation, create a host, and set configuration.  In other words, creating a syndication feed in WCF is no different than creating a WsHttpBinding or NetTcpBinding service.  With the blank endpoint address above, the feed is served relative to the .svc file, so a request of the form FeedService.svc/GetFeedByTitle/{title} maps onto the GetFeedByTitle operation.  However, what about reading an RSS or ATOM feed?  That's even simpler.

To read a feed, all you have to do is create an XML reader over the feed's location and pass it to the static Load method of the SyndicationFeed class.  This returns an instance of SyndicationFeed, which you may iterate or, as I'm doing in Squid, transform with LINQ.  I actually liked how my server control used an internal repeater instance and therefore wanted to continue to use it.  So, I kept my ITemplate object (RssListTemplate) the same and used the following LINQ to transform a SyndicationFeed into what my ITemplate was already using:

Object bindingSource = from entry in feed.Items
                       select new SimpleFeedEntry
                       {
                           DateTime = entry.PublishDate.DateTime,
                           Link = entry.Links.First().Uri.AbsoluteUri,
                           //+ Content is a SyndicationContent; cast to read its text
                           Text = entry.Content != null
                               ? ((TextSyndicationContent)entry.Content).Text
                               : entry.Summary.Text,
                           Title = entry.Title.Text
                       };
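
For completeness, here's a minimal sketch of the loading step that precedes that transform (the feed address is just a placeholder):

using (XmlReader reader = XmlReader.Create("http://www.tempuri.org/feed"))
{
    //+ Load understands both RSS 2.0 and ATOM 1.0
    SyndicationFeed feed = SyndicationFeed.Load(reader);
    //+ feed.Items is now ready for the LINQ projection above
}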

Thus, with .NET 3.5 I was able to remove RSS.NET and ATOM.NET completely from the project.  LINQ also, of course, helped me with my database access, allowing me to remove my dependency on my LLBLGen Pro-generated DAL:

//+ DataContext here is the LINQ to SQL context generated for the Squid database
using (DataContext db = new DataContext(Configuration.DatabaseConnectionString))
{
    var collection = from p in db.FeedCreations
                     where p.FeedCreationTitle == title
                     select p;
    //+ More code here
}

Thus, you can use Squid in your existing .NET 3.5 system with little impact on anything.  Squid is what I use in my Minima blog engine to provide the boxes of information in the sidebar.  I'm able to modify the data in the Snippet table in the Squid database to change the content and order of my sidebar.  Of course, I can also easily bring in RSS/ATOM content from the web with this.

You can get more information on the new web support in WCF 3.5 by reading the chapter "Programmable Web" (free chapter) in the book Essential WCF for .NET 3.5 (click to buy).  This is an amazing book that I highly recommend to all WCF users.

Links

March 2008 Web Technology Update



Recently a bunch of technologies have been released and/or updated and I would like to mention a few of them briefly.

First and foremost, Silverlight 2 Beta 1 has finally been released and you may download it immediately.  There is also an accompanying SDK.  You can find a nice development tutorial series on Scott Guthrie's blog.  If you are already familiar with WPF, you can skim the entire series in less than 5 minutes.  Given that this technology isn't the same as the full WPF and given that it's designed for the web, there will obviously be differences.  It's important to remember that Silverlight 2 isn't simply WPF for the web; I would call WPF 3.5's XBAP support for IE/Firefox "WPF for the web".  No, this is possibly the biggest web technology improvement since the release of Firefox 1.0, which in turn was the biggest technology release since the printing press.  Alright, alright… since .NET 1.1.  Its support for the dynamic language runtime is going to completely revolutionize our web development.

When reading through Scott's tutorial series (seriously, at least skim it), it's interesting to note that Silverlight 2 allows cross-domain communication.  It does this by reusing the Flash communication policy files.  This is really awesome, as it means that you can start accessing resources that Flash has been using for a while.  Being able to dynamically access resources from different domains is critical to the success of web architecture in the future.

Speaking of cross-domain communication, John Resig and I received a very depressing e-mail the other day telling us horrible news: cross-domain communication will probably be removed from Firefox 3 before its official release.  Apparently a bunch of paranoid anti-architects were complaining about the dreaded evils of being able to access resources from different domains.  Um, ok.  Fortunately, however, Firefox 3 has a feature called postMessage that allows you to get around this.  Malte Ubl has produced a library called xssinterface to demonstrate just this concept.  You could, of course, get around this completely with some iframe hacks or other scripting magic.

Speaking of web browsers, I would like to bring people's attention to a technology that I've been following for some time now: Apple WebKit.  This is basically the brains inside Safari.  I absolutely love the Safari web browser.  It's far and away the easiest web browser to use.  It also has the same keyboard shortcuts as Firefox, which is how I'm able to use it.  It's also incredibly fast, though I should mention that it uses even more memory than Firefox; my last instance passed 500MB.  Given its lack of an extension or configuration (i.e. about:config) system, it's obviously nowhere near the same caliber as Firefox.  It is, however, my primary web browser and has been since October '07.

The reason I mention WebKit is that, as very few people know, it is an open-source project with nightly binaries released on the webkit.org web site.  One of the most interesting things about nightlies is that you can actually watch the progress of development as time goes on.  About every month or so I like to get the latest Firefox nightly.  It's always interesting to see the major experiments that the developers try about 2 months after a major release of Firefox.  There's always some really awesome "teaser" feature in there that later grows into a fully grown technology.  The same can be said for WebKit.

None of that is, however, my primary reason for mentioning WebKit.  As most web developers know, the Acid2 test has been the standard for checking a web browser's compatibility with the CSS standard.  I've been pushing this test for a long time, but I've never pushed it as the only test.  There are many things that a web browser must do and many features a web browser must have before it can be considered appropriate for use.  Merely focusing on CSS while completely ignoring DOM support, JavaScript, and general usability can leave a browser as impossible to use as Opera 9.

As I've said time and time again, I'm not a CSS specialist.  Part of the definition of being a professional web developer is having a solid understanding of the inner workings of CSS, including specificity, the various selectors, and how to merge absolute, floating, and relative positioning on the same elements, tasks "coders" see as nearly impossible to learn.  However, my focus is on AJAX interaction as seen from the JavaScript and DOM worlds.  Therefore, we need a test for browsers that goes beyond the simple Acid2 test for CSS.  I'm not the only one thinking this way, because recently the Acid3 test was published, and it tests CSS, JavaScript, and DOM support.  This is the new standard for web browsers.

So far no web browser has even gotten close: among released browsers, scores range from 39% in Safari to 50% in Firefox 2.0.0.12.  In terms of unreleased software, Firefox 3.0b3 has a score between 59% and 61%, depending on its mood (update: b4 is steady at 67%), and the latest WebKit nightly has a score of 90% (watch WebKit progress on Acid3 at http://bugs.webkit.org/show_bug.cgi?id=17064).  That's phenomenal!  The newly released Internet Explorer 8 beta 1 has a score of 17%.  Those of you who have been naively praising the IE team for being YEARS late on getting near the Acid2 test need to wake up and realize this is 2008.  Time moves-- keep up.  Firefox has been close for the longest time and has always had the next-gen's next-gen JavaScript and DOM support, but has only recently completely passed the finish line of the Acid2 test.  So, they are finally off my watch list there, but I will not stop bugging them until they pass the Acid3 test.

For more information on the Acid3 test, see John Resig's post entitled "Acid3 tackles ECMAScript".  He's about as passionate as I am about web standards and Firefox, and his blog is an invaluable resource for all things JavaScript.  His work is so good that I would like to take the time to plug the book he is currently writing: Secrets of the JavaScript Ninja.  I absolutely guarantee you that this book will redefine the entire world of JavaScript and will raise the bar incredibly out of the reach of "coders".  To all of you coders who think you know JavaScript, do a view-source on the Acid3 source code (you may want to bring a change of underwear with you).

Lastly, it's not necessarily a "new" technology, but it's so incredibly phenomenal that I need to mention it: Prototype 1.6.  It's amazing to me that people actually go out of their way to use ASP.NET AJAX 3.5 (I still find the ICallbackEventHandler interface more productive).  ASP.NET AJAX 3.5 is not nearly as bad as extremists think, but the design is still flawed.  Prototype, on the other hand, is absolutely incredible.  I've written about Prototype before, but version 1.6 is even more powerful.  There are A LOT of changes from Prototype 1.5.  It's so good that I no longer call it "prototype/script.aculo.us".  Script.aculo.us is a great animation system, but, honestly, the main reason I used it was for the DOM abstraction in the Builder object.  Prototype now has an Element object to help create DOM objects, thus allowing me to remove Script.aculo.us from most of my projects (it's not as complete as the Builder object, but it allows object chaining-- which greatly increases code readability, conciseness, and understanding!).  The Template object is also amazing, as it gives you the ability to go far beyond simple String.Format formatting.  The new Class object for OOP is also great; it's so much easier to use than Prototype 1.5's.  Also, being able to hide all elements matching a particular CSS pattern in one shot is very useful (for example, $$('div span .cell-block').invoke('hide')).  It even allows you to use CSS 3 selectors in the most dead of web browsers.  It really makes developing for Internet Explorer 6 and 7 bearable!  Even if I have to use ASP.NET AJAX 3.5, I'll still include prototype.js.  If you do anything with JavaScript, you need Prototype!


Links

Comic Strip #2: .NET and PHP Source Code



Here's another conversation that I've had with various PHP programmers over the years.  Actually, this strip is a combination of three separate conversations wrapped up into one.  I think these conversations also give an accurate picture of how the ignorant anti-Microsoft cult thinks.  Most of the time these people don't even know their own systems and often assume that, since I'm a .NET programmer, I'm a complete fool.

[Image: .NET and PHP Source Code comic strip]