Monday, December 31, 2007

Using the INSTEAD OF Trigger in SQL Server 2005

I, like probably a lot of people, am going to have to do a bit of DB refactoring to take advantage of LINQ to SQL and/or the ADO.NET Entity Framework.  While the frameworks are certainly flexible, there are plenty of databases out there that were designed without an O/R layer in mind, and there are definitely spots where you can pigeonhole yourself if you didn't take O/R mapping into account during the initial design.

So, one great way to simplify your model is to create views that abstract the complicated underlying schema.  Views work great for just about any read-only operation; it's when you get into inserting and deleting that you hit some heavy limitations.  From the SQL Server 2005 BOL:

You can modify the data of an underlying base table through a view, as long as the following conditions are true:

  • Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from only one base table.
  • The columns being modified in the view must directly reference the underlying data in the table columns. The columns cannot be derived in any other way, such as through the following:
    • An aggregate function: AVG, COUNT, SUM, MIN, MAX, GROUPING, STDEV, STDEVP, VAR, and VARP.
    • A computation. The column cannot be computed from an expression that uses other columns. Columns that are formed by using the set operators UNION, UNION ALL, CROSSJOIN, EXCEPT, and INTERSECT amount to a computation and are also not updatable.
  • The columns being modified are not affected by GROUP BY, HAVING, or DISTINCT clauses.
  • TOP is not used anywhere in the select_statement of the view together with the WITH CHECK OPTION clause.

The first one is obviously the biggest issue (at least for me).  It's quite likely that you've got more than one table aggregating into a single entity.

So, one option you have is the INSTEAD OF trigger.  Apparently this feature has been around since SQL Server 2000, but in my ignorance, I hadn't heard of it until recently.  An INSTEAD OF trigger can be placed on any view or table to replace the standard action of an INSERT, UPDATE, or DELETE statement.

As with all triggers, this is one of those "Use with great caution" features, as it definitely convolutes the normal execution path.  That said, it's pretty powerful, and gets you around most of the updatable view limitations.

Now, I'm going to show a simple example.  I'm perfectly aware that this example is easy to do in the ADO.NET Entity Framework, and it doesn't represent the actual database issues that have me investigating this route; I'm just trying to keep things simple.  If you'd like to hear about my database issues, go ahead and comment, and I'd love to discuss and see if there are better options out there.

So, here's the schema:

db schema

Now, in my application, I want to represent a single entity called Employee, that is an aggregation of the Person attributes and the Employee attributes.  So, what I can do then is create a view.

CREATE VIEW [dbo].[Employees]
AS
SELECT dbo.Employee.EmployeeID, dbo.Employee.Salary,
       dbo.Person.FirstName, dbo.Person.LastName, dbo.Person.PersonID
FROM dbo.Employee
     INNER JOIN dbo.Person ON dbo.Employee.PersonID = dbo.Person.PersonID

Now, this view is going to work great, except when I want to update the data in the underlying tables.  Say I write some insert code using the new LINQ to SQL functionality:

MyDatabaseClassesDataContext ctx = new MyDatabaseClassesDataContext();
 
Employee emp = new Employee();
emp.FirstName = "Jim";
emp.LastName = "Fiorato";
emp.Salary = 400000;
 
ctx.Employees.InsertOnSubmit(emp);
 
ctx.SubmitChanges();

However, that will throw an exception like the following: 

System.Data.SqlClient.SqlException: View or function 'dbo.Employees' is not updatable because the modification affects multiple base tables.

So, what I can do now is create an INSTEAD OF trigger on the view.  I'll just show the insert trigger.  It looks like this (note that this version doesn't handle multiple rows in the inserted rowset at all):

CREATE TRIGGER [dbo].[TR_INSERT_Employees]
   ON [dbo].[Employees]
   INSTEAD OF INSERT
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
 
    DECLARE @PersonID AS UNIQUEIDENTIFIER
    DECLARE @EmployeeID AS UNIQUEIDENTIFIER
 
    SET @PersonID = NEWID();
    SET @EmployeeID = NEWID();
 
    -- Person Table
    INSERT INTO Person (PersonID, FirstName, LastName)
    SELECT @PersonID, FirstName, LastName FROM inserted;
 
    -- Employee Table
    INSERT INTO Employee (EmployeeID, PersonID, Salary)
    SELECT @EmployeeID, @PersonID, Salary FROM inserted;
 
END

This all makes me wonder if we're just pushing the impedance-mismatch work-around out of the code and further into the database.  Is that a good or bad thing?  It's likely bad, but just how bad?  I can't tell yet.

I think people usually feel better about what's in their database than what's in their code, because relational databases are something you can wrap your head around: the boundaries are clear, and people can make quick and mostly accurate assumptions about what is going on.  Whereas in your C# code, you'll often find it hard to grasp it all, because it could be doing so many things, in so many different ways.

This makes me feel better about doing it this way, but still, I wish my data model were simpler and cleaner.  It's a huge undertaking to change a model, though: migration scripts, backward-compatibility issues, etc.  I think I need to try this direction right now.  I'll let you know if it doesn't work for me.

Sunday, December 30, 2007

Closures in C# 3.0

I've been hearing a lot about closures and continuations recently, possibly because people are starting to explore C# 3.0 a little more.  I wasn't all that hip to the terms, but they sure sounded smart, so I thought that I would explore a little bit.

Martin Fowler provides a very clear definition of a closure here.  Closures are essentially a block of code that can be passed as an argument to a function.  In addition, closures have access to any bound variables from the scope in which they were defined.  Here's an example of a very simple closure:

static void Main(string[] args)
{
    int x = 1;
    MyClosure(x, s => x++); // the lambda captures x from the enclosing scope
    Console.WriteLine(x);   // prints 2
}

public static void MyClosure(int number, Action<int> code)
{
    code(number);
}

You can see that the variable x is declared and assigned a value, and then used within the block of code (in this case, it is incremented).
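Another way to see the capture at work: a closure can keep a captured variable alive even after the method that declared it has returned.  Here's a minimal sketch (the MakeCounter name is mine, just for illustration):

```csharp
using System;

class Program
{
    // Returns a closure that captures its own local 'count' variable.
    static Func<int> MakeCounter()
    {
        int count = 0;
        return () => ++count; // 'count' lives on after MakeCounter returns
    }

    static void Main()
    {
        Func<int> counter = MakeCounter();
        Console.WriteLine(counter()); // 1
        Console.WriteLine(counter()); // 2
    }
}
```

Each call to MakeCounter produces an independent captured variable, which is exactly the binding behavior Fowler is describing.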

Obviously, you're likely not doing a whole lot of single increments of a number in your application, so we should try something a little more interesting.  So, let's create a generic filter extension method for a collection of objects.  I'll use my order item example from my Lambda Expressions post.  Here are the guts of my Filter extension method:

public static IEnumerable<T> Filter<T>(this IEnumerable<T> coll, Func<T, bool> predicate)
{
    foreach (T item in coll)
    {
        if (predicate(item))
            yield return item;
    }
}

If I need functionality that allows a user to see a list of products that is less than a certain threshold dollar amount, I can use a combination of this extension method and closures to do it.  Here's the code:

public static IEnumerable<OrderItem> ShowItemsThatCostLessThan(IEnumerable<OrderItem> items, double highPrice)
{
    return items.Filter(p => p.Price < highPrice);
}

In that example, you can see that the variable highPrice is passed into the method, and then used within the code block that does the comparison.
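To see the whole thing run end to end, here's a self-contained sketch; the Filter method is the same one from above, and the OrderItem shape (with an assumed Price property) mirrors the order item example:

```csharp
using System;
using System.Collections.Generic;

static class Extensions
{
    public static IEnumerable<T> Filter<T>(this IEnumerable<T> coll, Func<T, bool> predicate)
    {
        foreach (T item in coll)
        {
            if (predicate(item))
                yield return item;
        }
    }
}

class OrderItem
{
    public int ProductID { get; set; }
    public double Price { get; set; }
}

class Program
{
    static IEnumerable<OrderItem> ShowItemsThatCostLessThan(IEnumerable<OrderItem> items, double highPrice)
    {
        // highPrice is captured by the lambda -- that's the closure
        return items.Filter(p => p.Price < highPrice);
    }

    static void Main()
    {
        var items = new List<OrderItem>
        {
            new OrderItem { ProductID = 1, Price = 5.00 },
            new OrderItem { ProductID = 2, Price = 25.00 }
        };

        foreach (OrderItem item in ShowItemsThatCostLessThan(items, 10.00))
            Console.WriteLine(item.ProductID); // prints 1
    }
}
```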

One thing that Fowler mentions in his post is that along with the formal requirement that a closure have access to the bindings of the environment they came from, there is an informal requirement as well:

The second difference is less of a defined formal difference, but is just as important, if not more so in practice. Languages that support closures allow you to define them with very little syntax. While this might not seem an important point, I believe it's crucial - it's the key to make it natural to use them frequently. Look at Lisp, Smalltalk, or Ruby code and you'll see closures all over the place - much more frequently used than the similar structures in other languages. The ability to bind to local variables is part of that, but I think the biggest reason is that the notation to use them is simple and clear.

The code definitely seems clear to me; it's the "very little syntax" part that I don't feel as great about.  Languages with rich collection objects like Ruby and Smalltalk make it much easier to give a confident yes on that point.  See Fowler's other post about collections and closures.

I might just be splitting hairs there, and either way you can see how closures are formed, what they do, and can see all the potential they have for decreasing the amount of work you have to do.

References

Closure (computer science) - Wikipedia
Closure - Martin Fowler's Bliki

Wednesday, December 26, 2007

Google Reader Shared Items _is_ Opt In

There's a bit of a meme over the last week about Google's new "Sharing With Friends" feature.  This feature allows people that you have messaged before to get a feed of your shared items in your Google Reader:

picture of friends feature

Everyone is apparently all up in arms about how Google really messed up by turning this feature on without giving users the option, automatically opting them in to the service.  Dare just Tweeted about how he thinks that Google compromised their users' security in a frantic attempt to catch up with Facebook.  Scoble has already posted about how he would fix it.

I just don't get what the concern is.  I'm actually a little annoyed that it isn't easier to get friends feeds.  Here's the criteria:

  • The person must have a PUBLIC shared items feed from Google Reader.
  • That person must be a contact in your Google Contacts.
  • That person must have accepted you to chat at some point through Google Talk.

That last point is the opt-in part.  You've already added that person as a contact, you've already probably chatted with them in Google Talk, and finally, you've got a completely public feed of your Google shared items.

How is this compromising the privacy of Google Reader users?  If you're using shared items for private items, then that feature of Google Reader isn't for you.

Maybe I'm missing something.  If so, let me know.

Sunday, December 23, 2007

Will Apple Repeat History?

Dave Winer is not someone you want dissing your product.  Dave has 9,000 people subscribed to his blog on Google Reader, and who knows how many other readers are out there.  Dave Winer is extremely influential, not only on his own, but he's great friends with Robert Scoble, Mike Arrington, and countless other "A" listers that will certainly vouch for him and spread his word.

Right now, he's really pissed at Apple, and rightfully so. 

Dave brought his Mac into the Apple store because the hard drive fried on him.  After the service was completed, when he asked for the old hard drive back, the Apple Store associate told him that he could not have it back, and that part of what he signed on the repair agreement was that Apple gets to keep his old hard drive.

On top of that, when he started reading the fine print, he noticed that the hard drive they put in his machine may or may not be refurbished.  It cost him $160.  $160 for a possibly used 80GB 5400 RPM 2.5" hard drive.  Rip-off.

Also, what if Apple isn't careful with his old hard drive?  What if someone gets a hold of the drive and grabs his online banking password, or some of his source code?

What I don't get is why Apple hasn't learned from their mistakes.  Why did PCs become popular over Macs in the first place?  Because on the Mac, the OS and the hardware are tied together.  This just exacerbates that issue.  Apple should be doing whatever it takes to make sure their customers don't walk away wishing they'd bought a PC.

Most Apple customers go out of their way to move to the Mac OS to begin with.  They deal with incompatible file formats from the majority PC users, a browser that isn't supported as well by developers, and often an overly complicated process when switching from the PC.  Apple is lucky to have them in the first place.

I would not buy Apple stock right now.  I really think we're going to see a huge shift in the way people behave towards Apple.  And I think this snafu with Dave Winer will accelerate that if they leave him dissatisfied.  Dave's got a lot of reach, and could swing a huge number of people's perceptions of Apple in a much different direction (after all, I'm blogging about it right now, which means that a whopping 20 more people are going to hear about it).

And it all boils down to customers feeling like they did in the 90's, locked into expensive hardware with the OS that they liked.  No matter what, people will always abandon a product that doesn't give them choices, despite how beautiful it looks.

Lambda Expressions

Definitely one of the hot new features of C# 3 is Lambda Expressions.  I agree that the name is definitely hot, and that it sounds so incredibly math cool, that I just have to use it.  But, I have no idea what it is.  Not having much of a formal math background, the only exposure I've had to Lambda anything was Revenge of the Nerds.

This is the result of the research I've done to figure out exactly what they are and what they're intended to be used for.

Lambda Calculus

If you have messed around with Lambda Expressions in C# already, the following mathematical definition around Lambda Calculus provides quite a bit of clarity around the expression structure and notation.

From the mathematics standpoint, Lambda Expressions are based upon Lambda Calculus, a formal system that puts emphasis on function definition.  In math, if you had the function f(x) = x + 3, it would be represented in Lambda Calculus as λx. x + 3.  If I were then given f(2), in Lambda Calculus it would be written as (λx. x + 3) 2.
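Translated into C# terms, λx. x + 3 is just a Func<int, int>, and applying it to 2 performs the same substitution.  A quick sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        // λx. x + 3 as a C# lambda expression
        Func<int, int> f = x => x + 3;

        // (λx. x + 3) 2 -- application substitutes 2 for x
        Console.WriteLine(f(2)); // 5
    }
}
```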

Evolution of Delegates to Lambda Expressions

So how does this concept benefit us in C#?  In C# 1.0, we were given delegates, which allowed us to essentially pass functions as arguments to other functions.  Below is a really simple example of C# 1.0 delegation:

public delegate string MessageDelegate(int i);
 
public static void PrintMessage(MessageDelegate func, int i)
{
    Console.WriteLine(func(i));
}
 
public static string MessageCreator(int i)
{
    return string.Format("Printing Number: {0}.", i);
}
 
static void Main(string[] args)
{
    PrintMessage(new MessageDelegate(MessageCreator), 10);
}

With C# 2.0, a feature was introduced called anonymous methods (often referred to as anonymous delegates).  With anonymous methods, you can do the same as above, but by creating the delegate on the fly:

public delegate string MessageDelegate(int i);
 
public static void PrintMessage(MessageDelegate func, int i)
{
    Console.WriteLine(func(i));
}
 
static void Main(string[] args)
{
    PrintMessage(delegate(int i) { return string.Format("Printing Number: {0}.", i); }, 10);
}

Now, in C# 3.0, with Lambda Expressions, I can do essentially the same thing, but I get to do it a little more succinctly:

public delegate string MessageDelegate(int i);
 
public static void PrintMessage(MessageDelegate func, int i)
{
    Console.WriteLine(func(i));
}
 
static void Main(string[] args)
{
    PrintMessage(i => string.Format("Printing Number: {0}.", i), 10);
}

Take it even one step further, and I can get rid of the delegate declaration altogether with the new generic Func delegate.  Func takes two generic arguments in its declaration: an input type and an output type:

public static void PrintMessage(Func<int, string> func, int i)
{
    Console.WriteLine(func(i));
}
 
static void Main(string[] args)
{
    PrintMessage(i => string.Format("Printing Number: {0}.", i), 10);
}

A Much Better Usage

You can see that as the previous example progressed, it became a worse and worse example.  Why not just call Console.WriteLine("Printing Number: 10."); and be done with it?

Here's a much better example using Extension Methods combined with Lambda Expressions.  In this case, what I want to do is perform an operation on a collection of items.  Let's use the example of an order of items, and I want to total the number of items.  Here's a great way of doing that:

class Program
{
    static void Main(string[] args)
    {
        List<OrderItem> items = new List<OrderItem>
        {
            new OrderItem {ProductID=1, Quantity=1},
            new OrderItem {ProductID=4, Quantity=3},
            new OrderItem {ProductID=18, Quantity=2}
        };
 
        int totalQuantity = items.Total(p => p.Quantity);
 
        Console.WriteLine(totalQuantity);
        Console.ReadLine();
    }
}
 
public class OrderItem
{
    public int ProductID { get; set; }
    public int Quantity { get; set; }
}
 
public static class OrderCollectionHelper
{
    public static int Total<T>(this IEnumerable<T> coll, Func<T, int> selector)
    {
        int total = 0;

        foreach (T item in coll)
        {
            total += selector(item);
        }

        return total;
    }
}
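As it happens, .NET 3.5 ships an equivalent extension method, Enumerable.Sum, so the same total can be computed without the helper class.  A sketch using the same OrderItem type:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    class OrderItem
    {
        public int ProductID { get; set; }
        public int Quantity { get; set; }
    }

    static void Main()
    {
        var items = new List<OrderItem>
        {
            new OrderItem { ProductID = 1, Quantity = 1 },
            new OrderItem { ProductID = 4, Quantity = 3 },
            new OrderItem { ProductID = 18, Quantity = 2 }
        };

        // Built-in LINQ equivalent of the custom Total helper
        int totalQuantity = items.Sum(p => p.Quantity);
        Console.WriteLine(totalQuantity); // 6
    }
}
```

Writing your own Total is still a nice exercise, since it shows exactly what the built-in extension methods are doing under the hood.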

You can see that there's quite a bit of potential in working with Lambda Expressions.  From here you can go even further into expression trees, creating a very rich language with very little code, which is a post (or ten) of its own.

All of this has become really evident with LINQ, which makes heavy use of Lambda Expressions.

References

New "Orcas" Language Feature: Lambda Expressions - Scott Guthrie's Blog
Lambda calculus - Wikipedia

Saturday, December 22, 2007

My Exploration of the Relationship Between C# Properties and Abstract Data Types

One of my goals for my time off from work is to read the 2nd Edition Code Complete by Steve McConnell.  I thought I would jot down some thoughts I'm having and realizations that I'm coming to as I read further.

I'm currently reading about Abstract Data Types and thinking through how the C# property language feature fits into the picture.  The purpose of an Abstract Data Type is to hide the details of how data is managed within the class.  I'll use the example from the book.

If I have a class that represents a font, the user will likely need to be able to make the font bold.  A data type would represent this activity like so:

public class Font
{
    public bool Bold;
}

The Abstract Data Type would do it like this:

public class Font
{
    private bool _bold = false;
 
    public void SetBoldOn()
    {
        _bold = true;
    }
 
    public void SetBoldOff()
    {
        _bold = false;
    }
}

In C#, field assignment and property assignment are semantically similar from a client perspective; however, a C# property certainly has the ability to hide the details of the data within the class.  As a C# property, the font would be represented like this:

public class Font
{
    public bool Bold { get; set; }
}

The property language feature gives me the ability to hide what's going on under the covers just as much as the methods do.  Additionally, it seems a bit more efficient to have a single C# property that can turn bold on and off, as opposed to two separate methods, SetBoldOn and SetBoldOff, plus likely another method that returns the current state of bold.  I could create a ToggleBold method, but then I'd still need a separate method to determine the current state of the font's boldness.

What about situations in which you need to provide more details about the data you are specifying?  For example, in the same font example, setting the size of a font: there may be different units I'm specifying the size in, pixels or points.  In this case, the Abstract Data Type would have a couple of methods:

public class Font
{
    //size is internally stored as pixels
    private int _size = 10;
 
    public void SetSizeInPixels(int size)
    {
        _size = size;
    }
 
    public void SetSizeInPoints(int size)
    {
        _size = SizeConverter.ConvertPointsToPixels(size);
    }
}

If I choose to abstract the data with C# properties, I would equivalently have:

public class Font
{
    //size is internally stored as pixels
    private int _size = 10;
 
    public int SizeInPixels
    {
        get { return _size; }
        set { _size = value; }
    }
 
    public int SizeInPoints
    {
        get
        {
            return SizeConverter.ConvertPixelsToPoints(_size);
        }
        set
        {
            _size = SizeConverter.ConvertPointsToPixels(value);
        }
    }
}
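A quick usage sketch makes the symmetry visible.  SizeConverter is the assumed helper from the examples above; here I stub it with the usual 96 DPI rule of 1 point = 4/3 pixels, purely for illustration:

```csharp
using System;

static class SizeConverter
{
    // Hypothetical stub: at 96 DPI, 1 point = 4/3 pixels
    public static int ConvertPointsToPixels(int points) { return points * 4 / 3; }
    public static int ConvertPixelsToPoints(int pixels) { return pixels * 3 / 4; }
}

class Font
{
    // size is internally stored as pixels
    private int _size = 10;

    public int SizeInPixels
    {
        get { return _size; }
        set { _size = value; }
    }

    public int SizeInPoints
    {
        get { return SizeConverter.ConvertPixelsToPoints(_size); }
        set { _size = SizeConverter.ConvertPointsToPixels(value); }
    }
}

class Program
{
    static void Main()
    {
        Font f = new Font();
        f.SizeInPoints = 12;               // stored internally as 16 pixels
        Console.WriteLine(f.SizeInPixels); // 16
        Console.WriteLine(f.SizeInPoints); // 12
    }
}
```

Either property can be set, and either can be read back, with the conversion hidden entirely inside the class.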

I feel like a C# property is certainly a reasonable construct for a class to use to be considered an Abstract Data Type in these simple cases.  Ultimately it all boils down to what you require and what your preference is.  You can probably just ask yourself these questions:

  • Do you need equivalent data retrieval functionality out of your class?  If so, you might want to take advantage of the C# property's get accessor.
  • If you need to provide additional information, do you also want your class to return that additional information with the data?  If so, maybe you want to use a C# property here too.  It seems reasonable in some cases that you would want your font size formatted back into your preferred unit (the get accessor on SizeInPoints).
  • How much logic do you need to perform on your data based upon the additional information provided?  Is it a complicated conversion between points and pixels?  I think that most programmers make assumptions about the complexity of properties vs. methods, and thus there's a threshold of logic in which a method is more appropriate than a property.
  • What is the data?  For instance, what if the font had a relative size property, and you had a fixed set of relative sizes, smaller, small, large, larger.  In this case, it might make more sense to hide this enumeration from the user, and provide the interface through methods, so that the data itself is hidden from the user.  What if you change the larger to translate into 16 pixels one day?  Much easier to change the method than a big switch statement in the property.

Now, as I type this all out, it seems like such a rudimentary distinction, and I'm feeling a bit self-conscious about what I've written, like I'm way behind and have been missing the point.  But it's something I'd never thought much about from this angle, and maybe there are some folks out there who haven't either (or who are just as ignorant as I am).

Friday, December 21, 2007

Code Quality - My Thoughts on the "Nobody Cares What Your Code Looks Like" Argument

Recently Jeff Atwood had a post arguing that the important part of software is not what your code looks like; it's that your application works.  It makes no difference to the customer whether you have a 1000-line method, alphabet-soup variable names, or high cyclomatic complexity.  In his words:

Sure, we programmers are paid to care what the code looks like. We worry about the guts of our applications. It's our job. We want to write code in friendly, modern languages that make our work easier and less error-prone. We'd love any opportunity to geek out and rewrite everything in the newest, sexiest possible language. It's all perfectly natural.

The next time you're knee deep in arcane language geekery, remember this: nobody cares what your code looks like. Except for us programmers. Yes, well-factored code written in a modern language is a laudable goal. But perhaps we should also focus a bit more on things the customer will see and care about, and less on the things they never will.

Ultimately this is true, until the result of that code either:

  1. Causes the software not to work.
  2. Increases the cost of the software.

At that point the customer really starts to care.  Now, Jeff's post specifically is talking about open source projects, in which case the possible increased cost of the software takes on a slightly different meaning, but it's still an issue.

I think this risk is what programmers understand.  They understand that crappy code, and decisions (at the bits-and-bytes level) that ultimately don't benefit the customer in the long run, are a bad idea.

A good example of this is the recent news that IE passed the Acid2 test.  Does the customer care about the Acid2 test?  Not most of them.  Most sites will work in both IE and Firefox, largely at the expense of developers and the crappy code they had to write to produce that compatibility.  Most people browsing couldn't care less about the proper way to render a block element, or how to apply padding or margins to a span tag through CSS.

The developers understand that bad code costs the customer more in the long run, and so not writing ugly, nasty code is something the customer cares about (they just don't know it).

Now, I'm sure that Jeff, if anybody, knows this, and he might be talking from the perspective of "our code sucks, let's rewrite the entire thing," which I have some other thoughts about (another post, another time).  I can say, though, without any doubt, that just because your customers don't care what your code looks like doesn't mean you should write poor code, or brush crappy code under the rug when you're in it.

Wednesday, December 19, 2007

If You're Not a Good Programmer, What Are You?

I don't know what the deal is, but over the last few days, I've seen a barrage of posts from some of my favorite tech bloggers saying they suck at programming.

Here's Jeff Atwood:

One of the most striking and memorable things about Code Complete, even to this day, is that Coding Horror illustration in the sidebar. Every time I saw it on the page, I would chuckle. Not because of other people's code, mind you. Because of my own code. That was the revelation. You're an amateur developer until you realize that everything you write sucks.

YOU are the Coding Horror.

Steve Yegge:

Please – don't think I'm implying that I'm a great programmer. Far from it. I'm a programmer who's committed decades of terrible coding atrocities, and in the process I've learned some lessons that I'm passing along to you in the hopes that it'll help you in your quest to become a great programmer.

Ayende:

I'll start by saying that I often write bad code. What is bad code? In DevTeach, I showed James Kovacks a 150 lines methods that involved threading, anonymous delegates and exception handling in a ingeniously unmaintainable form. Bad as in you will get the axe and come after me with froth coming out of your mouth if you ever see it.

So, if all of these people I read, and trust, and look up to, all write bad code, then what does that say about my code?  I think I've definitely come to grips with the fact that I've seen far more elegant code than what I have produced in the past.

Maybe I'm my own worst critic, and maybe that's what makes me ultimately a better programmer.  And maybe that's exactly what they are all saying.  They've made mistakes, they've recognized them, the mistakes piss them off, and they want to do it a million times better next time around.

** If you are a recruiter, or someone who is interested in hiring me, please disregard all previous comments.  All code I write is extremely pleasing to the eye, rarely, if ever, contains any bugs, and will likely make your product and/or service and/or clients an overnight success.

Monday, December 17, 2007

XML in Visual Basic.NET 9.0 and Visual Studio 2008

One of the new features available in the VB.NET 9.0 compiler is its deep integration with XML.  The code below is perfectly legal syntax to send to the Visual Basic compiler.

Dim customers = <customers>
                    <customer id="1">
                        <firstname>Jim</firstname>
                        <lastname>Fiorato</lastname>
                    </customer>
                    <customer id="2">
                        <firstname>Mister</firstname>
                        <lastname>Pickles</lastname>
                    </customer>
                </customers>

For Each customer In customers.<customer>
    Console.WriteLine(customer.@id & " - " & customer.<firstname>.Value & " " & customer.<lastname>.Value)
Next
Console.ReadLine()

Yep, that's right.  No quotes around the XML.  And there's Intellisense too:

image

Here's what it looks like when decomposed by Reflector:

XElement VB$t_ref$S0 = new XElement(XName.Get("customers", ""));
XElement VB$t_ref$S1 = new XElement(XName.Get("customer", ""));
VB$t_ref$S1.Add(new XAttribute(XName.Get("id", ""), "1"));
XElement VB$t_ref$S2 = new XElement(XName.Get("firstname", ""));
VB$t_ref$S2.Add("Jim");
VB$t_ref$S1.Add(VB$t_ref$S2);
VB$t_ref$S2 = new XElement(XName.Get("lastname", ""));
VB$t_ref$S2.Add("Fiorato");
VB$t_ref$S1.Add(VB$t_ref$S2);
VB$t_ref$S0.Add(VB$t_ref$S1);
VB$t_ref$S1 = new XElement(XName.Get("customer", ""));
VB$t_ref$S1.Add(new XAttribute(XName.Get("id", ""), "2"));
VB$t_ref$S2 = new XElement(XName.Get("firstname", ""));
VB$t_ref$S2.Add("Mister");
VB$t_ref$S1.Add(VB$t_ref$S2);
VB$t_ref$S2 = new XElement(XName.Get("lastname", ""));
VB$t_ref$S2.Add("Pickles");
VB$t_ref$S1.Add(VB$t_ref$S2);
VB$t_ref$S0.Add(VB$t_ref$S1);
XElement customers = VB$t_ref$S0;
foreach (XElement customer in customers.Elements(XName.Get("customer", "")))
{
    Console.WriteLine(InternalXmlHelper.get_AttributeValue(customer, XName.Get("id", "")) + " - " + InternalXmlHelper.get_Value(customer.Elements(XName.Get("firstname", ""))) + "  " + InternalXmlHelper.get_Value(customer.Elements(XName.Get("lastname", ""))));
}
Console.ReadLine();
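For comparison, here's a sketch of the idiomatic hand-written C# equivalent, using LINQ to XML's functional construction; it's the same shape as the compiler output above, minus the generated temporaries:

```csharp
using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Functional construction: nested constructors instead of XML literals
        XElement customers =
            new XElement("customers",
                new XElement("customer", new XAttribute("id", 1),
                    new XElement("firstname", "Jim"),
                    new XElement("lastname", "Fiorato")),
                new XElement("customer", new XAttribute("id", 2),
                    new XElement("firstname", "Mister"),
                    new XElement("lastname", "Pickles")));

        foreach (XElement customer in customers.Elements("customer"))
        {
            Console.WriteLine(customer.Attribute("id").Value + " - " +
                customer.Element("firstname").Value + " " +
                customer.Element("lastname").Value);
        }
    }
}
```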

This is such a great paradigm shift.  As developers, we're so used to pushing strings that represent rich languages like SQL and XML into our code, which totally removes the compile checks from these languages.  LINQ and now VB.NET XML are closing this gap, and making the development experience far more productive.

Friday, December 14, 2007

My Top 5 Favorite New C# 3.0/.NET 3.5 Framework Features

Partial Methods

This feature really puts the finishing touches on partial classes.  It allows you to create an abstract-like method declaration in one part of a partial class and call it from a concrete member of that class, with the expectation that the method may be implemented in another part of the class.  In the code below, we've got a partial class defined that has a constructor calling a partial method whose implementation is delegated to the other part of the class.

partial class PartialMethod
{
    public PartialMethod()
    {
        GetTheStringImplementation();
    }
 
    public string Message { get; set;}
 
    partial void GetTheStringImplementation();
}
 
public partial class PartialMethod
{
    partial void GetTheStringImplementation()
    {
        this.Message = "This is the string from the concrete implementation.";
    }
}

You can see that this has pretty good potential for code-generation scenarios, where you would need to provide hooks into your generated code.  It could also be applicable to a framework API where a specific implementation is required.  There are a couple of rules around it: the method must return void, and it cannot have any out arguments.
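Those rules exist because of one more wrinkle: if no part of the class ever implements the partial method, the compiler removes the declaration and every call to it, so the call site has to be safe to erase.  A small sketch (the Logger/OnWorkStarted names are mine, just for illustration):

```csharp
using System;

partial class Logger
{
    public void DoWork()
    {
        OnWorkStarted(); // no implementation exists, so this call compiles away entirely
        Console.WriteLine("working");
    }

    // Declaration only -- no implementing part anywhere, so the compiler strips it
    partial void OnWorkStarted();
}

class Program
{
    static void Main()
    {
        new Logger().DoWork(); // prints "working"
    }
}
```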

Automatic Properties

This one is just a flat-out great LOC saver.  No more declaring member variables for your simple properties.  The code below is a perfectly legal concrete property in C# 3.0:

public string Message { get; set;}

Extension Methods

I'm sure you've heard a ton about this one.  It's such a smart feature: it allows you to attach your own method to any class, just by including your namespace.  Below is an example of an extension method.  You can see the argument signature is a bit different.

public static class ExtensionMethods
{
    public static string CleanBRTags(this String theString)
    {
        return theString.Replace("<br />", "");
    }
}

And here is the usage:

string myString = "This is a test<br />Line 2";
Console.WriteLine(myString.CleanBRTags());

Note that you must add the extension method's namespace to the calling class with a "using" statement, and both the method and the class that holds it must be static.
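Extension methods also attach to interfaces, which is exactly what makes LINQ's query operators possible: every IEnumerable&lt;T&gt; picks the method up.  A small sketch (JoinWith is a made-up name, not a framework method):

```csharp
using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Extending an interface: any IEnumerable<string> gains this method,
    // whether it's a List<string>, an array, or a LINQ query result.
    public static string JoinWith(this IEnumerable<string> items, string separator)
    {
        List<string> list = new List<string>(items);
        return string.Join(separator, list.ToArray());
    }
}
```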

Object Initializers

Another great LOC optimization.  What object initializers allow you to do is instantiate an object and set its properties in a single expression.  Take the below class:

public class MyObject
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

In C# 2.0, you would likely instantiate and hydrate this class like so:

MyObject myObject = new MyObject();
myObject.ID = 1;
myObject.FirstName = "Jim";
myObject.LastName = "Fiorato";

In C# 3.0, you can now do the following:

MyObject myObject = new MyObject() { ID = 1, FirstName = "Jim", LastName = "Fiorato" };
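Object initializers also compose with the new collection initializers, so you can build a whole list in one expression.  A sketch reusing the same MyObject shape (the second person in the list is made up):

```csharp
using System.Collections.Generic;

// Same shape as the MyObject class above.
public class MyObject
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Demo
{
    public static List<MyObject> Build()
    {
        // A collection initializer wrapping object initializers.
        return new List<MyObject>
        {
            new MyObject { ID = 1, FirstName = "Jim",  LastName = "Fiorato" },
            new MyObject { ID = 2, FirstName = "Jane", LastName = "Doe" }
        };
    }
}
```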

System.TimeZoneInfo Class

This one is the lone framework feature in this list.  Worthy of changing the entire title of this blog post (if I had just left it "My Top 5 Favorite New C# 3.0 Features", flaming comments, ridicule, and a sure end to my career would have followed).

Why do I love this one so much?  Because it's going to save me about 1,000 lines of code and an XML manifest with all the time zones in it, and I'll never have to patch every single released version of my application again.

First, you can get all the time zones known to the system:

foreach(TimeZoneInfo info in TimeZoneInfo.GetSystemTimeZones())
{
    Console.WriteLine(info.DisplayName);
}

Second, along with those time zones, you can get the offset from UTC and present a perfectly adjusted time to all your users across the globe.

DateTime currentTimeUtc = DateTime.UtcNow;
 
TimeZoneInfo myCurrentTimeZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
 
DateTime myCurrentTime =
    TimeZoneInfo.ConvertTimeFromUtc(currentTimeUtc, myCurrentTimeZone);
 
Console.WriteLine("My Current Time is: {0}", myCurrentTime.ToString());
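The class also handles conversions between arbitrary zones, daylight saving rules included.  A quick sanity check, assuming the standard Windows zone IDs are installed on the machine:

```csharp
using System;

DateTime utcNow = DateTime.UtcNow;

// Windows system zone IDs; assumed present on the machine.
TimeZoneInfo central = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
TimeZoneInfo eastern = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");

DateTime centralTime = TimeZoneInfo.ConvertTimeFromUtc(utcNow, central);
DateTime easternTime = TimeZoneInfo.ConvertTimeFromUtc(utcNow, eastern);

// Eastern is always exactly one hour ahead of Central,
// because the two zones switch to and from DST on the same dates.
Console.WriteLine(easternTime - centralTime);
```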

 

References

C# 3.0: Partial Methods - Wriju's Blog
New "Orcas" Language Feature: Extension Methods - Scott Guthrie's Blog

Thursday, December 13, 2007

Does ASP.NET Dynamic Data == ASP.NET on Rails?

I found this to be a very odd name for this feature of the ASP.NET 3.5 Extensions.  I don't know why, but it sounds so old school.  Didn't we start getting dynamic data with ASP and VBScript?  CGI/Perl?

Anywho, since the name was so misleading, I thought I'd try and clarify it for myself.  Here is how the documentation from www.asp.net describes the feature:

ASP.NET Dynamic Data provides the Web application scaffolding that enables you to build rich data-driven Web applications. This scaffolding is a mechanism that enhances the functionality of the existing ASP.NET framework by adding the ability to dynamically display pages based on the data model of the underlying database, without having to create pages manually. Dynamic Data infers the table to display and the view to create from the URL of the request.

Sounds similar to the plumbing that Rails provides.  Build your model.  Execute a tool that builds the scaffolding, and voilà: you've got a model, a view, a controller, and some unit tests.

To create an ASP.NET Dynamic Data project, you need Visual Studio 2008 (any of the flavors with Visual Web Developer), and you must have installed the ASP.NET 3.5 Extensions Preview.  Open Visual Studio, and create a new web site using the new Dynamic Data Website template:

[Screenshot: add project]

Sticking to the "off by default" philosophy, the templates that are generated for each table are not enabled in the web.config after creating the project.  To fix this, you need to locate the DynamicData configuration section and change the enableTemplates attribute to "true".

The next thing you need to do is create your LINQ to SQL classes in the project.  For this example, I'm going to use Northwind again, and I'm going to put all the tables into my entity model.  See my previous post about creating simple LINQ to SQL classes.

[Screenshot: LINQ to SQL entity model]

Next thing you have to do is create a connection string for your database in the web.config.  I'm not sure why the designer didn't do this for us.  A bug maybe?  So, go ahead and add your connection string to the web.config:

<connectionStrings>
    <add name="default" connectionString="Data Source=(local);Initial Catalog=NorthwindEF;Integrated Security=True"/>
</connectionStrings>

Again, the designer should have created a parameterless constructor for us that would use the connection string from the configuration, but it did not for me (maybe it will for you?).  Open up the designer file for your dbml (in my case Northwind.designer.cs) and add a parameterless constructor:

// Uses the "default" connection string we just added to web.config.
public NorthwindDataContext() :
    base(System.Configuration.ConfigurationManager.ConnectionStrings["default"].ConnectionString, mappingSource)
{
    OnCreated();
}

After that's done, you can go ahead and run the project.  You'll see that you're given a list of the tables:

[Screenshot: table list]

Clicking on each gives you the standard ASP.NET editable grid for that entity.

[Screenshot: edit data]

This also handles relationships automatically, by providing you a link that will filter the rows of another grid.

Now, this is a simple example.  Rails' approach is that the scaffolding is a great starting point.  From there you can do just about anything you want.

Does ASP.NET Dynamic Data have the same philosophy?  Is this just a starting point?  If so, where's my model?  My controllers?  My tests?  If this isn't the intent, I'm really struggling to find what problem this truly solves.  Is there anyone out there who can set me straight?  In the meantime, I'm going to poke around and see what else you can do.

Wednesday, December 12, 2007

Consuming My Simple REST Service with ADO.NET Data Services ("Project Astoria")

In my previous post, I showed a simpler way of creating a RESTful web service using ADO.NET Data Services.  In this post, I wanted to explore how difficult it is to consume those services.

I'm going to create a console application that will be the client of the service that returns customers from the Northwind database.  After I create the console application, I'll need to add a reference to the Microsoft.Data.WebClient assembly, which can be found in your Program Files\Reference Assemblies\Microsoft\Framework\ASP.NET 3.5 Extensions directory.

After making that reference, I need to create a class that is identical in form to the object that is being serialized by the service.  So, in my case, the Customer class that I am using for my service needs to look like this:

public class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string ContactName { get; set; }
    public string ContactTitle { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Region { get; set; }
    public string PostalCode { get; set; }
    public string Country { get; set; }
    public string Phone { get; set; }
    public string Fax { get; set; }
}

From there, in my entry method, I can create a "WebDataContext" object that points to the service:

WebDataContext context = new WebDataContext("http://localhost:50508/Astoria/MySimpleService.svc");

After I have this context, I can do all sorts of interesting things.  One thing I can do is query the data by creating a WebDataQuery object like this:

WebDataQuery<Customer> customers =
    context.CreateQuery<Customer>("/Customers?$orderby=CustomerID");
 
foreach (Customer customer in customers)
{
    Console.WriteLine(customer.ContactName);
}

Even better, I can throw in some Lambda Expressions with LINQ, and use LINQ to create the queries for me.

var customers = from p in context.CreateQuery<Customer>("Customers")
                where p.CompanyName == "Alfreds Futterkiste"
                select p;

This is a great way to learn more about the URI format of the service's queries, because you can set a breakpoint after you've created the query and inspect the actual URL that is sent when the query is executed.

[Screenshot: mouseover of the query showing the generated URL]

Again, really awesome stuff.  Although, I still need to do some due diligence on PUT, POST and DELETE activities of the REST service.

References

Man vs. Code - Linq to REST - Andy Conrad
Consuming ADO.NET Data Services - .NET Client Library

Tuesday, December 11, 2007

A Much Simpler Way of Building REST Services With ADO.NET Data Services ("Project Astoria")

In my previous post, Simple GET REST Service with WCF, I detailed how to build a simple web service with WCF that routed a simple request and constructed a REST/POX response.  As part of the release of the ASP.NET 3.5 Extensions yesterday, the previously named "Project Astoria" was released as ADO.NET Data Services.  There are some details here about everything ADO.NET Data Services includes.  The feature I want to outline in this post is the one that serializes these entities into the industry-standard AtomPub format.

You'll need Visual Studio 2008 and the ASP.NET 3.5 Extensions bits released yesterday; install them first.  You can get them here, and read Scott Guthrie's post about the release.

After installing the Extensions CTP, you'll see some new options when you are creating a new web project.

[Screenshot: create new project dialog]

Choose ASP.NET 3.5 Extensions Web Site.  Once the site is created, add a new item and choose "ADO.NET Data Service" from the Item Template window.

After that, you'll want to create a simple LINQ to SQL class.  You can see my post here about how to do that.  For this example, I'm going to connect to the NorthwindEF sample database and just drag the Customers table onto the designer surface.

Once I've created the ADO.NET Data Service and the LINQ to SQL classes, I'll need to wire my simple data source up to the service.  This is really simple.  The first thing to do is put the DataContext class created in the LINQ to SQL step into the service class declaration.

What looked like this:

public class MySimpleService : WebDataService< /* TODO: put your data source class name here */ >

Should now look like this:

public class MySimpleService : WebDataService<NorthwindDataContext>

After that, you'll want to make sure that you provide access to your data source, as it is off by default.  In this example, I'll give access to every data entity in the data context object (all classes that implement IQueryable&lt;T&gt;) by using the wildcard.  After making these changes, your service class should look like this:

using Microsoft.Data.Web;
 
public class MySimpleService : WebDataService<NorthwindDataContext>
{
    public static void InitializeService(IWebDataServiceConfiguration config)
    {
        config.SetResourceContainerAccessRule("*", ResourceContainerRights.AllRead);
    }
}

That's it.  If you fire up your service, you should see something like this:

[Screenshot: atom feed collection]

From there, you can do all sorts of fun stuff with the URL:

http://localhost:50508/Astoria/MySimpleService.svc/Customers yields all customers:

[Screenshot: all customers]

http://localhost:50508/Astoria/MySimpleService.svc/Customers('ALFKI') returns just the customer with that primary key.

http://localhost:50508/Astoria/MySimpleService.svc/Customers?$filter=CompanyName%20eq%20'Alfreds%20Futterkiste' returns all customers with that specific company name.

There's a bunch more stuff you can do with the URI addressing scheme, such as ordering and paging.  Fantastic!
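For instance, the ordering and paging options compose directly in the query string (a sketch against the same service; the $top/$skip values here are arbitrary):

```
http://localhost:50508/Astoria/MySimpleService.svc/Customers?$orderby=CompanyName
http://localhost:50508/Astoria/MySimpleService.svc/Customers?$top=10&$skip=20
```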

You can't beat that: a fully functional REST web service in 10 minutes.  This has such great potential.

Sunday, December 09, 2007

To Me, Software is Like Free Hardware

Yesterday I installed the release candidate for Vista Service Pack 1.  There's no specific issue I've been waiting to get fixed.  I've been perfectly content with the current performance of my system.  So why would I install it?  I'm sure that on Monday, at least one of my colleagues will ask me the same thing.

The reason is, to me, new software is like getting a new computer, or a new smart phone or a new video game console.  I love new software.  I can't wait to get my hands on it.  I don't care if it's a new version, a major release, a minor release or a hotfix.  To me, it's new, and there's bound to be something there to check out.  It's something I can say I have.

Now, fortunately enough for me, in most cases, software is far cheaper than hardware.  There's always something new coming out.  Software satiates my need to buy new hardware (despite the fact my wife thinks I buy too much hardware already).

So, I know that Rocky Lhotka says it's too much.  I say keep it coming.

.NET Framework Source Symbol Integration with Visual Studio 2008

Last night I followed Scott Guthrie's instructions as to how to get early access to the .NET Framework Source Symbol Integration with VS.NET 2008.  Kudos to Scott, Shawn Burke and team for going through the effort to make this feature available to us all.

screen shot of .NET Framework integration

This is such a great feature to have for two reasons.  First, it's going to make debugging so much easier: stepping into the framework source and seeing exactly what is going on is huge.  Second, there's a ton of great programmers working on the framework.  Being able to read their code exactly as it was written (Reflector reverse-engineers it out of MSIL after compiler optimization) is a great way to become a better developer.

Wednesday, December 05, 2007

SocialAds _Are_ Revolutionary

Dare had a post on his blog yesterday about how Facebook's SocialAds aren't revolutionary, but Evolutionary.  I disagree (very respectfully of course, I think Dare is one insightful cookie).

I think that people fit into two camps when it comes to ads.

There's the folks who like to see ads as they do their daily tasks.  I'm one of these people.  I like to see ads because I like to shop.  If I'm searching for a product, I want to see the most popular, and highly rated product displayed to me on my results screen.

[Screenshot: search results with ads]

The second is the group of people that view ads as a bother.  Any real estate those ads take up is a waste of space.  And if this kind of person wants something, they'll go to Amazon.com or Buy.com and get it.  The only way to advertise to these folks is to market without them knowing.

I think the advertising that will reach both of these personas is the social ad.  They are not invasive at all, but they are viral.  All consumers want to hear how other users feel about the products they're interested in.  Here's a screen shot of Flixster showing the fact that I want to see a movie.

[Screenshot: Flixster]

Ultimately, I provided information that tells my friends on Facebook that I want to see this movie.  Now, the studio that produces this movie is banking on the fact that I want to do this.  This is the ultimate way to do viral marketing.  To the best of my knowledge, the Flixster folks don't earn a penny from the movie makers.  I'll quote Dave Winer from his post about OpenSocial:

...advertising is on its way to being obsolete. Facebook is just another step along the path. Advertising will get more and more targeted until it disappears, because perfectly targeted advertising is just information. And that's good!

To me, there's not much revolutionary about search ads; they've provided a marketing function that was pretty predictable.  Social networking, and delivering marketing virally within social networks, is revolutionary to me.  The ability to market products in a way that is ultimately non-invasive is an extremely difficult thing to do.  And as more and more people move toward using the Internet for networking, this flavor of marketing will become more and more pervasive.