
Accelerated Language Learning (Timothy Ferriss)

Many years ago I wrote a paper on accelerated learning and experience induction.  This paper explains how I induce weeks of experience in days, months of experience in weeks, and years of experience in months, and how to dramatically learn new technologies with little to no investment.  I know people who have worked in a field for 4 years, but only have 6 months' worth of skill (usually VB developers -- seriously).  I also know people who have worked for 6 months, but have over 4 years of skill (usually Linux geeks; paradoxically, VB developers are usually quicker to learn .NET basics than PHP developers, though they usually switch places in more advanced studies.)  How can anyone expect to gain skill by doing the exact same job for 4 years (e.g. building database driven interfaces, cleaning data, writing reports)?  Obviously, calendar-years of experience are not directly related to skill-years of experience.  As it turns out, my learning techniques are not uncommon.

Today, author Timothy Ferriss (The 4-Hour Workweek) posted a blog entry about how he learns languages in an incredibly short timeframe.  His post was fascinating to me for many reasons, one of them being that his first step is as follows: "Before you invest (or waste) hundreds and thousands of hours on a language, you should deconstruct it."  This is the same first step in my accelerated learning method.  Apparently I was on to something!  In his deconstruction method, he asks a few key questions and does some component and paradigm comparisons to give you some idea of the language's scope and difficulty.  Based on what you learn from the deconstruction, you should have a good idea of what the language entails.

In my learning system, I refer to this deconstruction as "learning the shell", which is followed by "learning the foundations", then "learning the specifics" -- Shell, Foundations, Specifics -- my SFS (pronounced "sifs") method.  The method exploits Pareto's Law, allowing you to learn the 20% of the technology that gives you 80% of the return.  That's MUCH more than what most so-called "experts" have anyhow!  As it turns out, Timothy Ferriss uses Pareto's Law in his language learning as well.  You can hear about this in his interview with my other role model, Scott Hanselman.

For more information on Timothy Ferriss' other life-optimization work, check out his book The 4-Hour Workweek and his blog.

Web Application Security Presentation

Today I found a really nice web application security presentation by Joe Walker.  Honestly, almost none of it is common sense and I would therefore encourage all web developers to check this out.  Also on the same page as the presentation are a number of very good AJAX security links like the XSS (Cross Site Scripting) cheat sheet.

BTW, this type of stuff is touched on in the Brainbench AJAX exam.

Prototype and Scriptaculous

OK, it's time that I come out with it: I've switched to using the prototype and script.aculo.us ("scriptaculous") JavaScript/AJAX frameworks. For the longest time I've sworn my allegiance to manual AJAX/DOM manipulation as I've always found it to be the absolute most powerful way to get the job done correctly, but as it turns out prototype/scriptaculous provide an incredible level of simplification without taking any of your power from you.  It's the ONLY AJAX framework I found that didn't suck.  Though I'm a .NET developer, I can't stand the Microsoft ASP.NET AJAX ("Atlas") extensions.  Except for its awesome web service handling, which I use all the time, it's a slap in the face of AJAX development. It's bulky, with hard-to-read source code and incredibly low usability.  It seems to be the opposite of the beauty of C# and .NET in general.  With those technologies, everything just falls together without ever needing to read a thing (assuming you are a qualified web professional who understands the foundational concepts of the system). Sure, you have to look up stuff in the docs, but you don't have to pore over books on the topic to be productive.  The same can be said for prototype and scriptaculous.

So, what is this thing? Actually, it's two frameworks: one, prototype, is a single JavaScript file and the other, scriptaculous, is a series of JavaScript files. Prototype is a foundational JavaScript framework that simplifies the existing client-side JavaScript API into something much more intuitive and that's also widely cross-browser compatible. Finally! Cross-browser compatibility without needing to support it!  That means we no longer have to do a zillion tests to see how we are supposed to get an element's height. I can just simply call $('box').getHeight( ) and be done with it! Prototype has classes (types) for Arrays (including a very powerful each( ) function, similar to .NET's ForEach method), Elements (which allow you to modify style, add classes, and get ALL descendants -- not just the children), Events (instead of trying to test for addEventListener or attachEvent, just use Event.observe!), and classes for a ton of other things. To put it simply: prototype gives you a new client-side API. The source code is also incredibly easy to read. It's just the same stuff most of us have already written, but now we don't have to support it.  If we build our applications on prototype, someone else has pre-tested a BUNCH of our system for us!
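To give you a taste of the each( ) style mentioned above, here is a hypothetical plain-JavaScript sketch of the idea (the real prototype version is far richer and works on any enumerable; this stand-in just shows the shape):

```javascript
// Hypothetical sketch of a Prototype-style each() in plain JavaScript.
// The real framework does considerably more (indexes, break/continue, etc.).
function each(items, iterator) {
  for (var i = 0; i < items.length; i++) {
    iterator(items[i], i); // call back with the value and its index
  }
}

var total = 0;
each([10, 20, 30], function (value) {
  total += value;
});
// total is now 60
```

The win isn't the loop itself; it's that the callback style lets you hand behavior to pre-tested framework code instead of re-writing the iteration plumbing everywhere.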

Scriptaculous is a different beast entirely. While prototype is a new general client-side API, scriptaculous goes more specifically into dynamics. For example, it allows you to turn a normal list into a sortable list with ONE line of JavaScript.  ONE.  Uno.  Eins.  It also allows you to turn one set of divs into a series of draggable elements (products?) and another set of divs into containers that the items can go to (shopping carts?) There is also a very nice set of pre-built animations as well as other common things like an autocompleting textbox and an in-place editable label. These are things I've normally built manually, but now I can use them without micro-managing code.  Code by subtraction RULES!  Scriptaculous is also VERY flexible. Everything you can do in scriptaculous is extremely customizable thanks to JavaScript's flexible dynamic object syntax and native higher-order function capabilities. That means, for example, that when you create a sortable list you can control exactly how it can scroll and set callback functions all in the same simple line of code. Also, note that scriptaculous uses prototype's API for its lower-level functionality. This is why you will often see the two products named together, like the various books written on "prototype and scriptaculous".

What about some samples? Well, Prototype and Scriptaculous are both SO simple to work with that I have absolutely no idea how someone can write a book on them. I go to various Borders bookstores about every day (it's my office), so I get to see many different books. When I flip through the prototype/scriptaculous books I get very confused. How can someone take hundreds of pages to write something that could be said in 20 or 30?  Verbosity sucks (yeah I know... look who's talking).  These frameworks are insultingly simple to work with.

Here are a few very quick samples.  For better samples, just download scriptaculous and view the extremely well-documented Prototype API online.

Prototype

Want to make a simple AJAX request?

new Ajax.Request('/service/', { 
  method: 'get', 
  onSuccess: function(t) { 
    alert(t.responseText); 
  }
}); 

No XmlHttpRequest object, no COM objects, nothing!

How about updating the content of an element?

Using this element...

<div id="myText"></div> 

...with this JavaScript...

$('myText').update('this is the new text'); 

... you get an updated element!  As you can see, it even uses the typical $ syntax (in addition to the $$, $A, $F, $H, $R, and $w syntax!) Just look at the examples in the Prototype API to see more.  You will be shocked to see how easy it is to walk the DOM tree now.  You will also be amazed at how much easier arrays are to manipulate.
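As a taste of that array side, here's a hypothetical plain-JavaScript sketch of two helpers in the spirit of prototype's Enumerable methods (the names findAll and pluck mirror the real API; the bodies are my simplified stand-ins, not the framework's code):

```javascript
// Simplified stand-ins for two Prototype-style Enumerable helpers.
// findAll: keep only the items that pass a test function.
function findAll(items, test) {
  var results = [];
  for (var i = 0; i < items.length; i++) {
    if (test(items[i])) results.push(items[i]);
  }
  return results;
}

// pluck: pull one property out of every item in a list.
function pluck(items, property) {
  var results = [];
  for (var i = 0; i < items.length; i++) {
    results.push(items[i][property]);
  }
  return results;
}

var people = [{ name: 'Ann', age: 34 }, { name: 'Bob', age: 19 }];
var names = pluck(people, 'name');                           // ['Ann', 'Bob']
var adults = findAll(people, function (p) { return p.age >= 21; });
```

In real prototype these hang right off the array itself, so the calling code reads almost like English.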

Script.aculo.us

Using this XHTML structure...

<ul id="greek">
<li>Alpha</li>
<li>Beta</li>
<li>Gamma</li>
<li>Delta</li>
</ul>

...with this SINGLE line of JavaScript...

Sortable.create('greek');

..., you have a sortable list (try it out -- you will also notice some nice spring-back animations happening too!)

Need a callback when the sort is completed? (well of course you do!)  Just give the <li> elements a patterned ID ('listid_count')...
 

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>

...and add a function callback and you're done.

Sortable.create('greek', {
  onUpdate: function( ){ 
    alert('something happened');
  } 
});

Ooooooooooooooo scary. IT'S THAT EASY! You don't need a book. Just use the docs and samples online.

Here's another one: want to move an item from one list to another?

Just use these elements...

<ul id="greek">
<li id="greek_1">Alpha</li>
<li id="greek_2">Beta</li>
<li id="greek_3">Gamma</li>
<li id="greek_4">Delta</li>
</ul>
<ul id="hebrew">
<li id="hebrew_1">Aleph</li>
<li id="hebrew_2">Bet</li>
<li id="hebrew_3">Gimmel</li>
<li id="hebrew_4">Dalet</li>
</ul> 

... with this JavaScript.

Sortable.create('greek', { containment: ['greek', 'hebrew'] });
Sortable.create('hebrew', { containment: ['greek', 'hebrew'] });

Want to save the entire state of a list?

var state = Sortable.serialize('greek');

Couple that with the simple prototype Ajax.Request call and you can very quickly save the state of your dynamic application.
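That serialized state is just a query string (something along the lines of greek[]=3&greek[]=1...; check the Scriptaculous docs for the exact format). Here's a rough, hypothetical sketch of how such a string gets built, just so you can see what actually lands in your Ajax.Request body (serializeList is my made-up name, not the framework's):

```javascript
// Hypothetical illustration of the kind of query string
// Sortable.serialize produces -- NOT the real implementation.
function serializeList(listId, itemIds) {
  var parts = [];
  for (var i = 0; i < itemIds.length; i++) {
    // each entry becomes listId[]=itemId, in current display order
    parts.push(listId + '[]=' + encodeURIComponent(itemIds[i]));
  }
  return parts.join('&');
}

var state = serializeList('greek', [3, 1, 2, 4]);
// state: "greek[]=3&greek[]=1&greek[]=2&greek[]=4"
```

Because it's already in form-encoded shape, the server side can read it like any other posted form data.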

Now close your jaw and stop drooling.  I haven't even shown the drag-n-drop, animations, or visual effects that Scriptaculous provides.  Also, get this: it's all FREE. Just go get it at the links below. Be sure to look over the docs a few times to get some more examples of the prototype functionality and scriptaculous usage. I've thrown out A LOT of my own code without looking back now that I have these amazing frameworks. This is good stuff.

AdvancED DOM Scripting Book

Oh, and as always... be very certain that you know your AJAX before you do this.  I know it goes without saying that you need to be a qualified professional to use powerful tools, but some amateurs and hobbyists (and men who get a hand crushed trying to fix the washing machine) think "Hey! This tool can do it for me! I don't need to know how it works!"  So, make sure you understand the three pillars of AJAX (AJAX Communication, Browser Dynamics, and Modern JavaScript) before you even bother with the powerful frameworks or else you will be flying blind.  Basically, if you can't recreate the Prototype framework (very easy-to-read code!), you shouldn't be using any JavaScript/AJAX framework.  If you aren't familiar with AJAX Communication, Browser Dynamics, or Modern JavaScript, check out Jeffrey Sambells' amazing book AdvancED DOM Scripting.  It's a guide that covers all the prerequisites for AJAX development, from service communication to DOM manipulation to CSS alteration.  Even if you're an AJAX expert, buy this book!
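To make that point concrete, here's roughly the sort of thing Event.observe saves you from writing -- a minimal, hypothetical sketch of cross-browser event wiring (demonstrated against a fake element object, since the branching logic is what matters here, not the DOM):

```javascript
// Minimal sketch of the cross-browser event wiring that Prototype's
// Event.observe hides from you.  Real browser elements expose
// addEventListener (W3C browsers) or attachEvent (older IE).
function observe(element, eventName, handler) {
  if (element.addEventListener) {
    element.addEventListener(eventName, handler, false); // W3C model
  } else if (element.attachEvent) {
    element.attachEvent('on' + eventName, handler);      // IE model
  }
}

// Demonstration against a fake W3C-style element:
var wired = [];
var fakeElement = {
  addEventListener: function (name, fn) { wired.push(name); }
};
observe(fakeElement, 'click', function () { /* handle click */ });
// wired is now ['click']
```

If writing something like this yourself feels mysterious, that's the sign you should be studying the fundamentals before leaning on a framework.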

SQL Server Database Model Optimization for Developers

It's my assessment that most developers have no idea how much a poor database model implementation, or an implementation by a DBA unfamiliar with the data semantics, can affect a system. Furthermore, most developers with whom I have worked don't really understand the internals of SQL Server enough to be able to make informed decisions for their project. Suggestions concerning the internals of SQL Server are often met with extreme reluctance from developers. This is unfortunate, because it is only when we understand a system's mechanics that we can fully optimize our usage of it. Those familiar with the history of physics will recall the story of when Einstein "broke" space with his special theory of relativity. Before Einstein was able to "fix" space, he had to spend nearly a decade trying to decipher how space worked. Thus was born the general theory of relativity.

It's not a universal rule, but I would have to say that the database model is the heart of any technical solution. Yet, in reality, the database implementation often seems to be one of the biggest bottlenecks of a solution. Sometimes it's a matter of poorly maintained databases, but from my experience it seems to mostly be a matter of a poorly designed implementation. More times than not, the SQL Server database model implementation has been designed by someone with either only a cursory knowledge of database modeling or by someone who is an expert in MySQL or Oracle, not SQL Server.

Database modeling does not end with defining entities and their respective relations. Rather, it extends completely into the implementation. What good is an amazing plan if it is going to be implemented poorly? The implementation phase to which I am referring comes before the actual implementation, yet after what most people refer to as "modeling". It's actually not even a phase with a temporal separation, but is rather a phase that requires continual thought and input from the start about the semantic understanding of the real world solution. This phase includes things like data-type management, index design, and security. This phase is the job of the resident architect or senior-level developer, not the job of the DBA. It needs to be overseen by someone who deeply understands both SQL Server and the semantics of the solution. Most of the time the DBA needs to completely stay away from the original data model and focus more on server-specific tasks like monitoring backups and tweaking existing data models based on the specifications that an architect has previously documented. Having said this, I often find that it's not only not the architect or senior developer optimizing a project; often nobody even cares!

Developers need to start understanding that designing a proper data model based on the real world representation includes minimizing data usage, optimizing performance, and increasing usability (for the solution’s O/R mapping). These are not jobs for a DBA. Someone with close knowledge to the project needs to make these decisions. More times than not, a DBA simply does not have the understanding of the project required to make these important decisions. They should stay away from the requirements of the system, leaving this to the architect and senior-level developers. Despite what many well intentioned DBAs think, they do not own the data. They are merely assistants to the development team leaders.

Let's start off by looking at storage optimization. Developers should be able to look at their data model and notice certain somewhat obvious flaws. For example, suppose you have a table with a few million rows, with each row containing multiple char(36) columns (a guid), two datetime columns (8-bytes each), six int columns (4-bytes each)-- two of which are foreign keys to reference/look-up/enumeration tables, and an int (4-bytes) column which is also the table's primary key and identity. To optimize this table, you absolutely must know the semantics of the solution. For example, if we don't care about recording the seconds of a time, then the two datetime columns should be set to be smalldatetime columns (4-bytes each). Also, how many possible values could there be in the non-foreign key int columns? Under 32,767? If so, then these could easily be smallint columns (2-bytes each).
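To see why those data-type choices matter at scale, here's a quick back-of-the-envelope calculation (plain JavaScript, with an assumed row count of three million -- the "few million rows" above) of the per-row savings from the datetime and int changes just described:

```javascript
// Back-of-the-envelope storage math for the table described above.
// Assumed: 3,000,000 rows (a "few million"), two datetime columns
// narrowed to smalldatetime, four non-foreign-key int columns
// narrowed to smallint.
var rows = 3000000;
var datetimeSavings = 2 * (8 - 4); // datetime (8B) -> smalldatetime (4B)
var intSavings = 4 * (4 - 2);      // int (4B) -> smallint (2B)
var perRow = datetimeSavings + intSavings;     // 16 bytes saved per row
var totalMB = (rows * perRow) / (1024 * 1024); // roughly 46 MB
```

That's tens of megabytes from one table before even touching the char(36) guids -- and every index and backup of that table shrinks along with it.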

What about the primary key? The architect or senior-level developer should have a fairly good estimate of how large a table will ever become. If this table is simply a list of categories, then what should we do? Often the common response is to convert it to a tinyint (1-byte). In reality, however, we shouldn't even care about the size of the primary key. It's completely negligible; even if there were only 100 rows, switching it to a tinyint could cause all kinds of problems. The table would only be marginally smaller and all your O/R mappers would now be using a Byte instead of an Int32, which could potentially cause casting problems in your solution. However, if this table tracks transactions, then perhaps you need to make it a bigint (8-bytes). In this case, you need to put forth a strong effort to make sure that you have optimized this table down to its absolute raw core, as those bigint values can add up.

Now, what about the foreign keys? If they are simply for reference, then the range of values probably isn't really that wide. Perhaps there are only 5 different values against which the data is to be constrained. In this case, the column could probably be a tinyint (1-byte). Since a primary key and foreign key must be the same data type, the primary key must also become a tinyint (1-byte). This small change alone could cut your database size by a few hundred MB. It wasn't just the foreign key table that dropped in size; the references between the two tables are now smaller as well (I hope everyone now understands why you need to have a very good reason before you even think about using a bigint foreign key!) There's something else to notice here as well. Reference tables are very helpful for the developer to look at the raw data, but does there really need to be a constraint in the database? If the table simply contains an Id and Text column with only 8 possible values, then, while the table may be tremendously helpful for documentation purposes, you could potentially drop the foreign key constraint and put the constraint logic in your application. However, keep in mind that this is for millions or possibly billions of rows. If the referencing table contains only a few thousand rows, or if space doesn't have a high opportunity cost, which may be the case if the solution is important enough to actually have that many rows in the first place, then this optimization could cause more problems than it solves. First off, your O/R mapper wouldn't be able to detect the relation. Secondly, obviously, you wouldn't have the database-level constraint for applications not using the solution's logic.

Another important optimization is performance optimization. Sometimes a table will be used in many joins and will be used heavily by each of the CRUD (Create, Retrieve, Update, Delete) operations. Depending on how important the column is, you may be able to switch a varchar(10) to a char(10). The column will allocate more space, but your operations may be more efficient. Also, try to avoid using variable-length columns (varchar) as foreign keys. In fact, try to keep your keys as the smallest integer type you possibly can. This is both a space and performance optimization. It's also important to think very carefully about how the database will be accessed. Perhaps certain columns need extra indexes and others need fewer. Yes, fewer. Indexes are great for speeding up read access, but they slow down insert operations. If you add too many indexes, your database inserts could grind your system to a crawl and any index defragmentation could leave you with a painfully enormous transaction log or a non-functioning SQL Server.

This is exactly what happened to a company I worked with in 2005. Every Saturday night for several weeks in a row, the IT team would get an automated page from their monitoring service telling them that all their e-commerce web sites were down. Having received the phone call about 2AM, I looked into a few things and noticed that the transaction log had reached over 80GB for the 60GB database. Being a scientist who refuses to fall into the post hoc ergo propter hoc fallacy, I needed measurements and evidence. The first thing I did was write a stored procedure that would do some basic analysis on the transaction log by pulling data from the fn_dblog( ) function, doing a simple cube, and saving the results into a table for later review. Then I told them that the next time the problem occurred they were to run the stored procedure and call me the next Monday (a polite way of telling them that I'm sleeping at 2AM on Saturdays). Exactly one week later the same thing happened and the IT department ran the stored procedure as instructed (and, yes, waited until Monday to call me, for which I am grateful). Looking over the stored analysis data, I noticed that there were a tremendous number of operations on various table indexes. That gave me the evidence I needed to look more closely at the indexes of each of the 5,000+ tables (yes, that's four digits -- now you know why I needed more information). After looking at the indexes, I realized that the database was implemented by someone who didn't really understand the purpose of indexing and who probably had an itchy trigger finger on the Index Tuning Wizard. There were anywhere from 6 to 24 indexes on each table. This explained everything. When the weekly (Saturday at 2AM) SQL Server maintenance plan would run, each of the indexes was defragmented to clean up the work done by the high volume of sales that occurred during the week. This caused a very large number of index optimizations to occur, and each index defragmentation operation would be documented in the transaction log, filling the transaction log's 80GB hard drive, thereby functionally disabling SQL Server.

In your index design, be sure to also optimize your index fill factors. Too full and you will cause page splits and bring your system to a crawl. Too empty and you're wasting space. Do not let a DBA do this. Every bit of optimization requires a person deeply knowledgeable about the system to implement a complete database design. After the specifications have been written, the DBA can get involved so that he or she can run routine maintenance. It is for this reason that DBAs exist. For more information on the internals of SQL Server, see the book Inside SQL Server 2005: The Storage Engine by Kalen Delaney (see also Inside SQL Server 2000). This is a book which should be close to everyone who works with SQL Server at all times. Buy it. Read it. Internalize it.

There's still more to database modeling. You also want to be sure to optimize for usability. Different O/R mapping solutions will have different specific guidelines, but some of the guidelines are rather general. One such guideline is fairly well known: use singular table names. It's so incredibly annoying to see code like "Entries entry = new Entries( );" The grammar just doesn't agree. Furthermore, LINQ automatically inflects certain tables. For example, a table called "BlogEntry" will be related to the LINQ entity "BlogEntry" as well as "BlogEntries" in the LINQ data context. Also, be sure to keep in mind that your O/R mapper may have special properties that you'll want to work around. For example, if your O/R mapper creates an entity for every table and in each created entity there is a special "Code" property for internal O/R mapper use, then you want to make sure to avoid having any columns named "Code". O/R mappers will often work around this for you, but "p.Code_" can get really confusing. You should also consider using Latin-style database naming (where you prefix each column with its table name -- so named because Latin words are typically inflected with their sentence meaning, thereby allowing isolated subject/object identification). This is not only a world of help in straight SQL joins, but just think about how Intellisense works: alphabetically. If you prefix all your columns with the table name, then when you hit Ctrl-J you'll have all your mapped properties grouped together. Otherwise, you'll see the "Id" property and it could be 10-15 internal O/R mapper properties before you find the next actual entity column. Doing this prefixing also usually alleviates conflicts with existing O/R mapper internal properties. This isn't quite as important for LINQ, but not having table-name prefixed columns in LLBLGen Pro can lead to some headaches.

Clearly, there's more to think about in database design than the entity relation diagramming we learn in Databases 101. It should also be clear that your design will become more optimized the more specific it becomes. A data model designed for any database server probably won't be as efficient as a well thought out design for your specific database server and your specific solution requirements. Just keep in mind that your database model is your application's view of reality and is often the heart of the system; it therefore should be optimized for space, speed, and usability. If you don't do this, you could be wasting many gigabytes of space (and therefore also hundreds of dollars of needless backups), have a painfully inefficient system, and have a hard time figuring out how to access the data in your application.

The Wandering Developer

This has been an interesting week.  I did an experiment to help prove something that deep down we all know anyway: YOU DON'T NEED TO BE AT THE OFFICE TO WORK.  Last weekend I drove to Chicago (from Kansas City) to fix a few problems caused by overworking in the previous week and while on the trip, I started my 4-Hour Workweek ("4HWW") training.  The trip was only a Saturday, Sunday, and Monday trip and I had to be back Tuesday for work.  However, on the way back the 4HWW training made me realize the obvious: I work remotely and I am remote.  DUH!  When I realized that, I immediately turned NORTH (home is south) away from Kansas City heading towards Minneapolis.  I also called my client telling him that I'm going to remotely call in for the meeting as there was absolutely no reason for me to physically be there.  While in Minneapolis I stayed with a relative and worked from an office in their house.  Since there was no boss, no client, and no coworkers to bother me, I was able to have PURE productivity just as the 4HWW book said I would have.

It never really made ANY sense to me why, living in the 21st century, we developers need to physically go to an office to have a boss fight our productivity at every turn.  People just work better when people aren't watching.  DUH!  Therefore, as of right now... I'm done working on site and am extending my consulting business ("Jampad Technology, Inc.") from coast to coast (possibly global soon).  I am no longer going to work at any particular location, but will work from a different city in the United States at various intervals for the next few years (until I get sick of that and change careers completely).  Since I don't own a house, don't have kids, am not married, and since my car is completely paid off and I have the lowest rent in the world, I can do this without affecting anything.  Why didn't I do this sooner?  Well, I only did the 4HWW training last weekend.  Phenomenal training!  I'm sick and tired of living out Office Space every day of my life and, as it turns out, my Seminary work isn't going to do itself.  Last year I instituted my quarterly vacation policy (I take a 3-9 day vacation every 3 months) and the success of that naturally led to this next step.  It was either that or continue to be on the lame 100 Hour Work Week program that most people are on.  Forget that.  I'm sick of working in an office.  Period.

One thing that I realized recently was something that makes me feel stupid for not thinking of it sooner.  As a right-brained (as opposed to left-brained) developer, architect, minimalist, and purist I always try to increase the level of abstraction for my life.  I'm always trying to make things more logically manageable instead of simply physically manageable.  The other day I handed my driver's license to a cashier at a grocery store and she responded "Wow, you're a long way from home".  I immediately got to thinking what a strange thing that is to say.  First of all, whatever happened to the saying "home is where the heart is"?  Is this something people hang on their kitchen wall, but don't ACTUALLY believe?  Is society so bad that people have bumper stickers and plaques of cute little sayings, but don't actually believe them? (obviously, yes)  Secondly, this person was making a statement about my physical, not logical, representation.  When I realized this, it dawned on me that much of the technology world (including myself) is living in a painful contradiction.  We are trying to make everything logically manageable (i.e. Active Directory, the Internet, web server farms), but we just can't seem to have a logical representation of the most important thing of all: people.  There's no reason for me to be in an office every single day just like there's no reason my web server needs to be with me.  Furthermore, what's with those awesome science fiction scenes in movies where people are remotely (logically) present in meetings via 3D projection from all over the world?  We dream of this stuff, but I'm taking it now.

So, I'm now available to help on projects nationwide.  If you need .NET (C#), ASP.NET, JavaScript/AJAX, LLBLGen Pro/LINQ, Firefox, or XHTML/CSS development, porting, auditing, architecture, or training, all based on best practices, please drop me a line using my i-name account.  My rate varies from project to project and for certain organizations my rate is substantially discounted.  Also, please note that I will never, ever work with the counterproductive technologies VB or Visual SourceSafe (if you want me to set up web-based Subversion on your site, drop me a line!)

Comment Rules

Below are my filters for comments; I've made them as simple as possible.

  1. If your comment resembles the immature and nonsensical gibberish on YouTube, then it won't ever see my web site.
  2. If your comment is simply hate mail, then it would be unprofessional for me to post it.  No one needs to read about someone else's insecurities.
  3. If you ask me an in depth question or bring up a conversation topic, then I will, of course, answer the question through the appropriate channel of e-mail.  This isn't a forum or a discussion board, it's a blog that allows for one-way intelligent statements.  My blog and comment system are designed after the idea of a scholarly lecture: there will be no questions in class except for clarification with further conversation being in private sessions.  Responses will be posted to the blog when appropriate, otherwise they will be sent via e-mail (see next rule).
  4. If you don't include your real e-mail and ask a question, then I obviously can't post it or answer your question.  Again, this isn't a forum.   I don't like forums and avoid them whenever I can.
  5. If I have to hire a professional linguist to parse your comment, then I'm not going to read it let alone post it.
  6. If you are going to object to a point, you must obviously cite your reference and/or give precedence.
  7. If you in any way suggest that a rule, standard, law, regulation, code, specification, or guideline isn't a "good one" (not sure what that means), then I cannot post your comment.  We cannot come to the law to judge the law; rather, we are judged by the law.

In the past two weeks I've read some really insane comments ranging from people saying that memorization is bad (weird!) to calling "home made" techniques something "hackers" do (this was very much a YouTube-style comment).  Honestly, I have little time for comments like these (or the hate mail ones), so please be aware that I have a very well defined process I follow when considering a comment (FYI, this process was adapted from my process for filtering recruiters-- getting 8 calls a day for jobs is a bit excessive when you already despise the industry).

NetFXHarmonics .NET SolutionTemplate

I've had a number of requests for my SolutionTemplate without the e-book lessons, so below is its Subversion repository:

You may still access the e-book version as well in the related links section, but if all you need is a new project to get started then you can use the above; this repository shares the same code base with the e-book version.  I will be very careful to make sure that the two versions are kept in sync to minimize confusion.

I should also mention that since the initial release, SolutionTemplate has had various substantial updates based on my more recent research.  They will both continue to be updated as I think of more foundational portions that should be put in the template.  This SolutionTemplate has helped me time and time again when starting new projects and I hope it helps you too; you can use it for any of your production projects as you please.

Lastly, in case you're wondering why this isn't a Visual Studio template: Subversion is just a better way to work with code that updates regularly.  It's an extremely versatile, lightweight, transactional, distributed file system that allows for extremely efficient updates.  I would pay for each of those features, but Subversion is FREE.  Can't beat that!

Related Links

Brainbench AJAX Exam

Well, it's official: I took the role of principal author of the Brainbench AJAX Exam.  Now I need to turn my usual AJAX curriculum into something worthy of an exam.  Basically, I need to create a suitable outline with about 7-9 topics, each with 3-5 subtopics, and put 4-6 questions into each subtopic to come up with a grand total of 160 questions.  Since I've done this already with the C# 2.0 exam, it should be fairly straightforward!  Err, maybe...

What will the exam cover?  Well, the fundamentals of AJAX.  I'm working on a video series right now that will cover what I refer to as the three pillars of AJAX: Modern JavaScript, Browser Dynamics (a.k.a. "DOM Manipulation" or "DHTML"), and AJAX communication.  Modern JavaScript topics that will be covered include JavaScript namespaces, closures, and multi-cast events.  Browser dynamics includes topics such as DOM tree navigation, node creation and removal, interpreting node manipulation (e.g. moving a box, changing a color), as well as architecture decisions (e.g. "should this be a div or a span?").  Finally, AJAX communication topics will include XMLHttpRequest usage, result interpretation, performance concerns, JSON translation, and callback creation.  These are of course not all the topics, but just a sampling.  The point is that the exam will basically be the exam for the video series.
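To give a taste of that first pillar, here is a minimal closure example of the kind the Modern JavaScript material covers (makeCounter is a generic illustration of mine, not code from the exam or series):

```javascript
// A closure: the returned function keeps a private reference to
// 'count' long after makeCounter itself has returned.
function makeCounter() {
    var count = 0; // private state, invisible to callers
    return function () {
        count += 1;
        return count;
    };
}

var next = makeCounter();
next(); // 1
next(); // 2 -- the closure remembered the count between calls
```

Each call to makeCounter creates a fresh, independent count, which is exactly the property that makes closures useful for encapsulating state without global variables.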

To be clear, I will not have anything vendor-specific near the exam.  This is one of the reasons I took the position.  The last thing we need is an exam which tests you on two or three completely different frameworks.  Java developers won't have a clue about ASP.NET AJAX, and ASP.NET developers won't have a clue about the other 100 or so frameworks in existence.  I also have absolutely no intention of asking about obscure AJAX techniques that almost no one would ever know (e.g. request queuing, animation).  So, really, my video series will cover more than the exam, as I have every intention of relying fairly heavily on Firebug in the video series, but that can't be on the exam.

Creating JavaScript objects from ASP.NET objects

If you have worked with ASP.NET for any length of time, you probably know that the ID you set on a server-side control changes by the time it reaches the client.  For example, if you have a textbox with an ID of "txtUsername" in ASP.NET, you will probably end up with a textbox with an ID of something like "ctl100_txtUsername" on the client.  When working only with server-side code, this is fine.  However, I'm a JavaScript programmer as well as a .NET programmer.  Most of my applications are heavily Ajax based, and sometimes an entire application, through all of its screens and uses, will have ZERO postbacks.  So it's important for me to have the correct ID on the client, and to be able to access controls on the client side: not only so I can access the ID from JavaScript functions, but also so I can set loosely-coupled events on objects.

Typically the way people get around this is with simple, yet architecturally blasphemous, techniques.  The first technique is to break a foundational rule of software architecture (i.e. low coupling) by putting an event right on the element itself.  That is, they hard code the event they want to raise right on the control.  This is a very strange technique, as the .NET developers who do this are usually those who would never put a server-side event on a control using OnServerClick.  Somehow, they think that putting an event directly on a client-side control via OnClick is less wrong.  This is obviously a case of extremely tight object coupling, an extremely poor architectural practice.  In case you can't picture it, here's what I'm talking about:

<asp:TextBox id="txtUsername" runat="server" Text="Username" OnClick="ClearBox( );"></asp:TextBox>

A much, much better way of getting around this is to use the ClientID property of an ASP.NET control to assign a multi-cast JavaScript event to that control.  However, we must be careful with this technique, as it too can lead to design problems.  The most obvious problem is that of spaghetti code: the mixing of two or more languages in the same file.  Professional ASP.NET developers know that to have a sound system, you must use code-behinds.  The ASP.NET development model greatly improves the readability of code by keeping the C# (or VB) code and the ASP.NET declarations completely separate.  While reading one page, your brain doesn't need to flip all over the place trying to translate multiple languages at the same time.  To be sure, those of us from the PHP world know that with time you can become very proficient at developing in spaghetti code, but, on the other hand, those of us who have taken over a project from another person know the pains of trying to decode that slop.

The typical technique for applying loosely-coupled events (and much other JavaScript functionality) is actually very strange.  Though ASP.NET developers will insist on separating their C# (or VB) from their ASP.NET pages, they have no problem throwing JavaScript into the midst of C# code.  This is almost as bad as putting ad-hoc SQL queries in your C# code (very bad) or coupling CSS rules to an element via the HTML "style" attribute, thereby making the solution impossible to theme and breaking any chance of debugging CSS problems (very, very bad).  JavaScript and CSS have had a code-behind model since long before ASP.NET was around.  So, we need to respect the practice of code separation as much as possible.  To this end, we need a better solution than throwing a large block of JavaScript into an ASP.NET page.

Here is an example of the old technique using legacy JavaScript (in contrast to Modern JavaScript shown in a bit):

<script type="text/javascript"> 
function ClearBox( ) {
    document.getElementById('<%=txtUsername.ClientID%>').value = '';
} 

document.getElementById('<%=txtUsername.ClientID%>').onclick = ClearBox;
</script>

Typically, however, you will see a TON of JavaScript code simply thrown into the page with no respect for code separation and with no possibility for multi-cast events.  (Furthermore, not only is that raw spaghetti code, the function isn't even in a JavaScript namespace.  Please see my link below for more information on JavaScript namespaces; if you are familiar with .NET namespaces, then you have a head start on learning JavaScript namespaces.  Would you ever throw a class into an assembly without putting it in a namespace?  Probably not... it's the same idea in JavaScript.)
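For reference, a JavaScript namespace is nothing more than an object literal used as a container for related functions, keeping them out of the global scope.  A minimal sketch (MyApp and its members are made-up names for illustration):

```javascript
// Declare the namespace once, guarding against double-declaration
// if this file happens to be included twice.
var MyApp = MyApp || {};

// Hang related functions off the namespace instead of the global scope.
MyApp.Text = {
    // Trims whitespace from both ends of a string.
    trim: function (s) {
        return s.replace(/^\s+|\s+$/g, '');
    }
};

MyApp.Text.trim('  hello  '); // 'hello'
```

Only one global name (MyApp) is introduced, no matter how many functions live underneath it, which is the same benefit a .NET namespace gives an assembly.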

Fortunately, there is a better model using a couple of JavaScript files.  The first JavaScript file (Event.js) is one of my standard files you will see in all of my JavaScript applications (update: I no longer use this-- now, I use prototype.js from the Prototype JavaScript Framework to replace a lot of my own code):

var Event = {
    Add: function (obj, evt, func, capture) {
        if(obj.addEventListener) {
            obj.addEventListener (evt, func, capture); 
        }
        else if(obj.attachEvent) {
            obj.attachEvent('on' + evt, func); 
        }
    },
        
    Remove: function (obj, evt, func, capture) {
        if(obj.removeEventListener) {
            obj.removeEventListener (evt, func, capture);
        }
        else if(obj.detachEvent) {
            obj.detachEvent('on' + evt, func);
        }
    }
};
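Before moving on, here is how the helper behaves when pointed at an object that exposes addEventListener.  Everything below except Event itself (repeated so the sketch is self-contained) is a hypothetical stub standing in for a DOM element, so you can see the mechanics outside a browser:

```javascript
// Copy of the Event helper from above.
var Event = {
    Add: function (obj, evt, func, capture) {
        if (obj.addEventListener) {
            obj.addEventListener(evt, func, capture);
        }
        else if (obj.attachEvent) {
            obj.attachEvent('on' + evt, func);
        }
    }
};

// Stub element: records handlers the way a DOM node would.
var stubElement = {
    handlers: [],
    addEventListener: function (evt, func, capture) {
        this.handlers.push({ evt: evt, func: func });
    }
};

Event.Add(stubElement, 'click', function () { /* handle click */ }, false);
// stubElement.handlers now holds one entry for the 'click' event
```

In the browser you would of course pass a real element; the point is that Event.Add picks the standards path (addEventListener) when available and falls back to Internet Explorer's attachEvent otherwise.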

This Modern JavaScript file simply allows you to add or remove events from an object.  It's fairly simple.  Here's a file (AspNet.js) you will find in some of my applications:

var AspNet = {
    Objects: new Object( ), 
    
    RegisterObject: function(clientId, aspNetId, encapsulated) {
        if(encapsulated) {
            eval('AspNet.Objects.' + clientId + ' = $(aspNetId)'); 
        }
        else {
            eval('window.' + clientId + ' = $(aspNetId)'); 
        }
    }
};

This one is where the meat is.  When you call the RegisterObject function, you actually register an ASP.NET control with JavaScript so that you can use it without needing the mangled ASP.NET ClientID.  Furthermore, it allows you to use the object directly in JavaScript without relying on document.getElementById( ).  This technique is actually a cleaner version of the one I previously mentioned.  It does require you to put a little JavaScript in your page, but that's OK: it's ASP.NET interop code used to register controls with JavaScript; therefore, you aren't really breaking any rules.
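As an aside, the two eval calls can be avoided entirely with JavaScript's bracket notation, which accepts a runtime string as a property name.  A sketch of that variant, under the assumption that $ is Prototype's element lookup ($ is stubbed here so the example runs standalone):

```javascript
// Stub for Prototype's $() so this sketch runs outside a browser;
// in the page, Prototype supplies the real implementation.
var $ = function (id) { return { id: id }; };

var AspNet = {
    Objects: {},

    RegisterObject: function (clientId, aspNetId, encapsulated) {
        // Bracket notation builds the property name dynamically --
        // no eval, no code strings, no risk of executing odd input.
        var scope = encapsulated ? AspNet.Objects : window;
        scope[clientId] = $(aspNetId);
    }
};

AspNet.RegisterObject('txtUsername', 'ctl100_txtUsername', true);
// AspNet.Objects.txtUsername now refers to the looked-up element
```

The behavior is the same as the eval version; bracket notation is simply the idiomatic way to assign to a dynamically named property.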

In general, you should never, ever place JavaScript in your ASP.NET system.  There are of course some exceptions to this, but the exceptions are based on common sense and decades of interop research from the industry.  Two of the most common exceptions to never having JavaScript in your ASP.NET system are control generation and sewing code ("interop code").  Control generation is when a server-side control emits the client-side markup and script a browser will use, protecting users (the developers using the control) from the interop between ASP.NET and JavaScript.  That is, it hides the plumbing, thereby increasing the level of abstraction of the system.  The C++ guys deal with the pointers, protecting me from memory management, and the ASP.NET/AJAX control creators deal with the JavaScript plumbing so other developers don't have to.  It's the same idea.  Continuing with this analogy, while C# allows unsafe pointers, they should only be used in extremely rare circumstances.  JavaScript in ASP.NET should be about as rare.  One example of this rarity is in reference to the other exception: sewing code.

Sewing code ("interop code"), on the other hand, is exactly what you are seeing in this technique.  It simply connects one technology to another.  One major example of sewing code in the .NET Framework is where ADO.NET connects directly to SQL Server.  At some point there must be a connection to the external system, and the calling system must speak its language (i.e. SQL).  In the technique here, the interop is between ASP.NET and JavaScript and, as with all interop, sewing is therefore required.  Mixing languages beyond that, however, is a very strong sign of poor design skills and a lack of understanding of GRASP patterns.  Many excellent, genius programmers could take their systems to the next level by following this simple, yet profound, time-tested technique.  Martin Fowler, author of the classic computer science text "Refactoring: Improving the Design of Existing Code" (one of my core books, right next to the framework design guidelines!), is often quoted as saying "Any fool can write code that a computer can understand. Good programmers write code that humans can understand."  That's, of course, contextual, as people who are complete fools in software design are often 100x better hardcore programmers than the best software designers.

Now, to use the AspNet JavaScript namespace, you simply put code similar to the following somewhere in your ASP.NET page (or the Event.observe function in the Prototype Framework):

<script type="text/javascript">  
Event.Add(window, 'load', function(evt) { 
    // ASP.NET JavaScript Object Registration

    AspNet.RegisterObject('txtUsername', '<%=txtUsername.ClientID%>');
    AspNet.RegisterObject('txtPassword', '<%=txtPassword.ClientID%>');
    Initialization.Init( ); 
}, false);
</script>

Basically, when the page loads, your objects are registered.  What does this mean?  It means you can use the objects just as they are used in this Initialization.js file (another file in all of my JavaScript projects):

<script type="text/javascript">  
var Initialization = {
    Init: function( ) {
        txtUsername.onclick = function(evt) {
            if(!txtUsername.alreadyClicked) {
                txtUsername.value = '';
                txtUsername.alreadyClicked = true; 
            }
        };
        
        txtPassword.onclick = function(evt) {
            if(!txtPassword.alreadyClicked) {
                txtPassword.value = '';
                txtPassword.alreadyClicked = true;
                txtPassword.type = 'password';
            }
        };
    }
};
</script>

As you can see, there is no document.getElementById( ) or $( ) here.  You simply use the object naturally, as if it were strongly typed.  The best part is that to support another ASP.NET page, you simply put a similar JavaScript script block in that page.  That's it.  Furthermore, if you don't want to access the control directly, perhaps because you are worried about potential naming conflicts, you can send a boolean value of true as the third argument to the AspNet.RegisterObject function; this will put the objects under the AspNet.Objects namespace, making txtUsername accessible, for example, as "AspNet.Objects.txtUsername" instead of simply "txtUsername".

There is one catch, though: you have to attach your handlers to the window load event using multi-cast events.  In other words, if at any point you assign a handler directly to window.onload, then you will obviously overwrite all previously attached handlers.  For example, the following would destroy this entire technique:

window.onload = function(evt) {
    // Do something...
};

This should not be a shocker to C# developers.  In C#, when we attach an event handler, we are very careful to use the "+=" syntax and not the "=" syntax.  This is the same idea.  It's a very, very poor practice to ever assign a handler directly to window.onload, because you have absolutely no idea when you will need more than one handler calling more than one function.  If your MasterPage needs the load event, your Page needs the load event, and a Control needs the load event, what are you going to do?  If you decide you will never need multi-cast load events and then get a 3rd-party tool that relies on them, what will you do when it overrides your load handler, or when you override its?  Have fun debugging that one.  Therefore, you should always use loosely-coupled JavaScript multi-cast events for the window load event.  Furthermore, it's very important to follow proper development practices at all times and never let deadlines stop you from professional-quality development.
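The clobbering is easy to demonstrate with a stub (fakeWindow below is a hypothetical stand-in that fires its load handlers the way a browser would):

```javascript
var calls = [];

// Stub mimicking the browser: one 'onload' slot plus a multi-cast list.
var fakeWindow = {
    listeners: [],
    addEventListener: function (evt, func) { this.listeners.push(func); },
    fireLoad: function () {
        if (this.onload) { this.onload(); }
        for (var i = 0; i < this.listeners.length; i++) { this.listeners[i](); }
    }
};

fakeWindow.onload = function () { calls.push('masterpage'); };
fakeWindow.onload = function () { calls.push('page'); };  // silently clobbers the first!
fakeWindow.addEventListener('load', function () { calls.push('control-a'); });
fakeWindow.addEventListener('load', function () { calls.push('control-b'); });

fakeWindow.fireLoad();
// calls is ['page', 'control-a', 'control-b'] -- the MasterPage handler is gone
```

The two direct assignments behave like "=" on a C# event, while the two addEventListener calls behave like "+=": both of those handlers survive and run.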

Related Links
