Thursday, December 25, 2008

Giving Open Source for Christmas - Growl4Rails Plugin

Growl4Rails is a JavaScript component that provides the UI for Growl-like notifications in your Rails application.

This plugin requires Prototype 1.6 or higher and Scriptaculous 1.7 or higher.



Installation


If you are on Rails 2.1 or higher:
$ script/plugin install git://github.com/jfiorato/growl4rails.git

For older versions of Rails, cd into your application's vendor/plugins, and execute the following:
$ git clone --depth 1 git://github.com/jfiorato/growl4rails.git
$ mkdir ../../public/javascripts/growl4rails ../../public/stylesheets/growl4rails ../../public/images/growl4rails
$ cp growl4rails/public/javascripts/* ../../public/javascripts/growl4rails/
$ cp growl4rails/public/stylesheets/* ../../public/stylesheets/growl4rails/
$ cp growl4rails/public/images/* ../../public/images/growl4rails/

Usage


In your view put the following:
<%= growl4rails_includes %>

Then when you'd like the Growl window to appear:
//javascript
Growl.show("/images/download.png", "Foo Bar.pdf", "File is ready for download.", 5000);

The Growl.show method takes 4 arguments:
image - 32x32 icon
title - title of the growl
message - the growl message
duration - the length of time (in milliseconds) that the growl window shows


It's definitely got some kinks, but I'm working those out. Obviously, if you'd like to contribute, feel free!

Sunday, December 21, 2008

Building Something Other Than Software

Ok, so this doesn't have much to do with bits, but it does have to do with building things.

My wife got the idea (for me) to build Harvey (2 1/2) a kitchen for X-Mas this year. She got the idea from Apartment Therapy.

[Photos of the finished kitchen: DSC_3759.JPG, DSC_3760.JPG]



I mostly followed what was described there, but embellished a little bit on the door and shelves and added a bit of structural support. All in all, with pots and pans and utensils, it cost about $100.

Fun project.

Thursday, December 04, 2008

More Cats

Fletcher thinks I need more cats on my blog. Here you go!

[Photo: spaghetti-cat.jpg]

Friday, November 28, 2008

Flow - Building Lean Teams

There are few things more detrimental to the productivity of your thought workers than the interruption of flow, yet I think that so few teams recognize its importance. DeMarco and Lister describe flow as

"a condition of deep, nearly meditative involvement"

Flow is a requirement in what we do. It's extremely productive time that increases quality, creativity and an overall feeling of accomplishment at the end of the day. For most, this state of immersion takes a bit of time to get into, but takes a split second to get out of.

In my experience, email and IM are disruptions that you can more easily tune out, or just flat out remove by turning them off. But the biggest disruption of flow that I've come across is when someone stops by a co-worker's desk with a "Hey, do you have a second?" Instantly the flow is lost. Your second is given to that person, at the expense of 15 minutes more of your time to get immersed again.

Say you have 5 of those interruptions a day (which is probably pretty conservative for most people): you've got nearly an hour and a half of interruptions. Tack on a few meetings, and you've got half a day of productive time on what was scheduled as 8 hours.

If you have taken the time to recognize this issue, often the response is to start padding the schedule. You start to say, "Jim is only productive 50% of the day". This results in timelines being pushed out or features being cut, and an overall sense that it takes forever to get stuff out the door. Not only that, but Jim doesn't want to be 50% productive. He wants to be 100% productive. He wants to feel like he knows where his day went.

It's easy to measure your flow. You can count the number of uninterrupted work hours per day. If you make it from 9-11 AM without an interruption, you've got 2 hours of uninterrupted flow. I'd challenge you to try measuring that with your team. I'd bet you'd be surprised at the ratio of uninterrupted hours to hours worked.

What can you do to curb interruptions? Designate certain hours in the day when people are "publicly available". Organize your space so that it's conducive to flow and impedes interruption.

If you need to make a case to higher-ups on the impact of flow, you can try measuring a developer who usually has low uninterrupted work hours, and put them up in an office for a week. Measure their uninterrupted work hours then. Say that they usually have 3 uninterrupted work hours for every 8 they work when they aren't in an office, but they've got 6 uninterrupted for every 8 when they are. By rearranging, you've doubled their uninterrupted time.

Protect the flow.

Sunday, November 23, 2008

Traditional Quality Assurance is Wasteful - Building Lean Teams

The traditional view of a software development team almost always includes some form of a quality assurance team. The classic QA team does some sort of formulated inspection of the product, balancing risk, time and resources with an end goal of eliminating as many defect leaks as possible.

The huge myth with this traditional view is that the quality assurance team is your primary defense against delivering defects to your customer. In actuality it's far from the most effective prevention. Building quality into the product is the single best defense against introducing defects into a live system. Teams with agile approaches, including unit tests and continuous integration, are by nature building quality in.

There's a huge amount of waste in the traditional quality assurance team. So much time is spent making sure they are covering all the bases. You need to make sure that everyone on the team knows the product well enough to write detailed test cases. Even once a decent functional understanding is developed, there's still so much technical knowledge missing. What code is shared? Where could concurrency issues occur? Those kinds of questions are rarely answered by this kind of team.

It's also extremely difficult to staff a traditional quality assurance team. There's extremely high turnover in the position, and the candidate pool is generally poor. To combat this, the function is usually shipped offshore, exacerbating the inefficiencies.

So many other inefficiencies stem from the formation of a QA team. For instance, how can a QA team write test cases if they don't know what the software is supposed to do? This leads to big up front designs (BUFD) and puts even more pressure on the design/development gap. Another inefficiency is the amount of effort that goes into maintaining the relationship between developers and QA. There's always a tension between the two teams that usually stems from under-communicating and a lack of mutual understanding.

If you've got a development team that's good about writing unit tests, uses continuous integration, and has a good grasp on unit test coverage, then you'll almost always find that a QA team offers little or no help in the quality of your application, and is far more of a distraction from getting things out the door. You may feel like arguing against the quality assurance team is an argument against quality, but it isn't. It's an argument for a lean team that builds quality into software that's getting out the door.

Saturday, November 22, 2008

The Design/Development Gap - Building Lean Teams

As teams grow larger, they tend to want to start the design for V2 release during the development of the V1 release. The thought process here is that you can get more features out the door. Why waste months doing waterfall linearly, when you can do waterfall in an offset parallel fashion? It's more productive, right?

This results in the introduction of roles like Business Analyst or Program Manager. Often these roles are filled by people with primarily a project-management background and less of a technical one (although most I've worked with have had a small amount of technical experience). Usually this position is tasked with taking a rough list of high-level features and defining all the details in a functional spec.

The problem with this is that when developers have little or no input into the details, they tend to lose any sort of emotional connection with the feature. By handing a developer a blueprint, you give a person who's wired to think creatively no opportunity to be creative. The more functionality that's dropped on their desk in the form of a functional spec, the more distant they become from what they're building.

The productivity loss that results from this gap between design and development almost always rears its ugly head first during the estimation process. A feature that the developer has no connection with will always result in a bloated estimate.

If you're thinking that you're going to get more productive by further parallelizing your waterfall process right now, you're dead wrong. Get the features into the developers' hands to do the design. Work with them and be open to new ideas, coach them on what's important to the customer and the business, and make sure your vision is clear to them.

By doing this, you'll see that you've got developers who are designing great things, and are pumped to be coding them. You'll also see that your "wow" features will increase, as your developers will start thinking about how they can make this one little thing cooler by using Google Charts, or smart Ajax, or rich DHTML.

This is the kind of productivity you need. You don't need lots of features, you need great features, and closing the design/development gap will get you there.

Upcoming Posts on Building Lean Teams

The current economic environment has me thinking a bunch about the ingredients that make a lean software development team. God knows that now is the wrong time to be on a team that's struggling to get things out the door and into customers' hands. Those that provide real value, faster and at lower cost, are the ones that are going to rise to the top in this climate.

Over the next few posts, I'll share a few reflections on what I think drives productivity, creativity and valuable output on software development teams.

Saturday, October 11, 2008

Great Thomas Friedman Quote

I heard this quote from a presentation given at this year's National Academy of Engineering induction ceremony. It really drove home a great point, and gave me a lot of direction into what I'm really looking for in an economic recovery as well as a presidential candidate I can support.

From NY Times columnist Thomas Friedman

"We need to get back to making stuff, based on real engineering not just financial engineering."
http://www.nytimes.com/2008/09/28/opinion/28friedman.html

Friday, October 03, 2008

How I Interview Software Engineer Candidates

Generally my interviews go like this. They may fluctuate depending on the skill of the candidate, but in almost all cases this line of questioning is sufficient to determine the candidate’s skill without ambiguity.

I introduce myself to the candidate and I let them know what I do, what I’m responsible for and also my level of technical detail. I’ll also at this point let them know that the interview is going to be a technical one.

Then I’ll set up the next line of questioning. I try to get a feel for how well they remember their data structures. I then let them know I’ll be talking to them about the Binary Search Tree. Then, I let them know that I’m not necessarily worried about whether or not they remember all the details, but am focusing on their problem solving skills.

Sometimes the candidate tries to make this problem more complicated than it is by thinking about balancing the binary search tree. So I usually let them know at this point if they have some recollection of binary search trees that they shouldn’t worry about balancing the tree and that unbalanced trees are perfectly fine for this example. If the candidate does seem to remember these concepts fairly well, then maybe I’ll ask later on how they would go about balancing a BST.

The first question I ask is if they remember the properties of the binary search tree. If they don’t, then I ask them if they could describe a general tree structure. Anyone with any sort of programming background should be able to describe a tree. If they recall the tree structure, but don’t recall the BST, I let them know the properties of a BST. What I’m getting at here is that I want it to be clear that a BST is a tree structure in which the value of the left node is less than the parent node’s value, and the value of the right node is greater than the parent node’s value.

Here’s where we get into the question. Usually I give them this simple array (now, you might be thinking this is really simple, but you’d be so surprised at how few people actually get this stuff.)

[4, 2, 6, 1, 7, 3, 5]

Then I’ll ask them to walk me through how they would take that array of values and arrange them in a BST structure, if they were serially iterating over that array, and not using any temporary storage structures (they almost always want to go to 1 first, and I almost always tell them that they can’t do that, because they’ll lose their place in the array).

I’m expecting something like the following:

Step 1: 4 becomes the root.

Step 2: 2 is less than 4, so it becomes the left child of 4.

Step 3: 6 is greater than 4, so it becomes the right child of 4.

Step 4: 1 is less than 4 and less than 2, so it becomes the left child of 2.

Step 5: 7 is greater than 4 and greater than 6, so it becomes the right child of 6.

Step 6: 3 is less than 4 but greater than 2, so it becomes the right child of 2.

Step 7: 5 is greater than 4 but less than 6, so it becomes the left child of 6.

Simple right?  You should see some of the answers I get.

Once they’ve successfully done that, I ask them what the advantages of this structure are over an array. I’m expecting to hear that search is faster than in an array, because each comparison discards half of the remaining tree. If they’re totally up to speed on this, I’ll ask them the order of magnitude of this performance gain. I’m expecting to hear O(log n).

Once we’ve chatted about this, I’ll ask the candidate to model a BST in the language they are most comfortable in.  In C#, I'm looking for an answer like so:

public class BST
{
    public int Value;
    public BST LeftNode;
    public BST RightNode;
    public BST(int value, BST leftNode, BST rightNode)
    {
        Value = value;
        LeftNode = leftNode;
        RightNode = rightNode;
    }
}


Sometimes they don't get to exactly this (again, you'd be surprised at some of the answers that you get here.)  So what I'll do is discuss why they did it the way they did, and then we eventually end up here.

Next I'll ask them to construct a simple tree that looks like step 3 above using the constructor that they've defined above.  What I'm expecting is something like this:

BST myBst = new BST(4, new BST(2, null, null), new BST(6, null, null));


Usually they'll have three or four lines of code for the constructor.  If they do, I'll immediately ask them to get it down to a single line of code.
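
To tie the walkthrough and the model together, here's a minimal sketch of how the whole tree could be built by serial insertion, reusing the BST class above. The Insert helper is my own illustration, not something I ask for verbatim:

public static class BstDemo
{
    // Walk down from the root: smaller values go left, larger values go
    // right, exactly as in the step-by-step example above.
    public static BST Insert(BST node, int value)
    {
        if (node == null) return new BST(value, null, null);
        if (value < node.Value)
            node.LeftNode = Insert(node.LeftNode, value);
        else
            node.RightNode = Insert(node.RightNode, value);
        return node;
    }

    public static void Main()
    {
        BST root = null;
        foreach (int value in new[] { 4, 2, 6, 1, 7, 3, 5 })
            root = Insert(root, value); // serial iteration, no temporary storage
    }
}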

After this I'll do some small, short questions off of their resume, and dig in a bit about what they're doing.

After that I'll ask them "What questions do you have for me?"  It's important to phrase it this way because it insinuates an expectation that they have questions for me.

All of this usually fits into a 45 minute time slot, which is usually what I've been given as an interviewer historically.

Again, this simple flow really gives me a great idea of the candidate's technical skill.  It works really well for entry level to mid-level candidates, it can be language agnostic, and it's really unambiguous.

Why am I writing about this?  Well, I'm going to switch it up.  I'm going to stick to this type of questioning, but I think I'm going to try a new computer science concept.  Could be another structure, could be a sorting algorithm, could be a balancing problem, who knows.

ADO.NET Data Services (Project "Astoria") Query Interceptors

One of the interesting/handy features in ADO.NET Data Services is called query interceptors.  Query interceptors give you the ability to take action on the items returned from the service.

Each query interceptor is a method defined in your service, decorated with a QueryInterceptor attribute. The attribute takes an argument naming the type of entity the method is responsible for acting upon. The results of all requests for that type of entity will then pass through the interceptor.

The interceptor is implemented as a lambda expression, using the Expression and Func types in C#. An example of a query interceptor is below. You can see that it returns a generic Expression of a Func that takes an Employee and returns a boolean.

[QueryInterceptor("Employee")]
public Expression<Func<Employee, bool>> OnQueryEmployee()
{
 
    return pc => pc.Manager.LoginID.Equals("adventure-works\\"
                        + HttpContext.Current.User.Identity.Name) ||
        pc.LoginID.Equals("adventure-works\\"
            + HttpContext.Current.User.Identity.Name);
}
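
For context, the interceptor is just a method on your data service class. A minimal sketch of the surrounding service is below; the AdventureWorksEntities context and service name are my own assumptions for illustration:

// Hypothetical hosting service for the interceptor above.
public class AdventureWorksService : DataService<AdventureWorksEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Open the entity set for reads; the interceptor above still
        // filters every request against it.
        config.SetEntitySetAccessRule("Employee", EntitySetRights.AllRead);
    }

    // The OnQueryEmployee method above lives here.
}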


This is a pretty cool way of filtering/altering your results,  but what you have to keep in mind is that this expression is going to execute on each of the entities returned from the request.  I'll follow up with a post on another feature of ADO.NET Data Services called Service Operations that allows you to act upon a set of records.

Thursday, October 02, 2008

W3C Selectors API

CSS selectors have been a slick way of finding elements in the DOM. Something as simple as the following would select all the td tags inside tbody rows of the table with the id "score" and set their background color to gray.


<style>
#score>tbody>tr>td { background: gray; }
</style>

The selectors syntax for CSS 2.1 is simply described below:

[Figure: CSS 2.1 selector syntax summary]
Until I became familiar with jQuery, I had been ignorant of the fact that the syntax I'd been using for CSS styles would be a fantastic way to get things out of the DOM from JavaScript.

Now the W3C is introducing a Selectors API that will be available in both IE 8 and Firefox 3.1. So the following syntax will do the same selection as above, except from JavaScript:

var cells = document.querySelectorAll("#score>tbody>tr>td");

The W3C draft says that all objects implementing the Document, DocumentFragment and Element interfaces must now also implement the NodeSelector interface.

The NodeSelector interface is defined as follows:

interface NodeSelector {
Element querySelector(DOMString selectors);
NodeList querySelectorAll(DOMString selectors);
};

Pretty cool stuff.

Of course, with every step towards a common implementation across browsers, there's always a catch. The selectors syntax is different between CSS 2.1 and CSS 3. And of course IE has decided to use the CSS 2.1 selector syntax, while Firefox is using CSS 3. So you'll probably want to make sure you write for CSS 2.1. Below is a summary of the differences:

  • the list of basic definitions (selector, group of selectors, simple selector, etc.) has been changed; in particular, what was referred to in CSS2 as a simple selector is now called a sequence of simple selectors, and the term "simple selector" is now used for the components of this sequence

  • an optional namespace component is now allowed in type element selectors, the universal selector and attribute selectors

  • a new combinator has been introduced

  • new simple selectors including substring matching attribute selectors, and new pseudo-classes

  • new pseudo-elements, and introduction of the "::" convention for pseudo-elements

  • the grammar has been rewritten

  • profiles to be added to specifications integrating Selectors and defining the set of selectors which is actually supported by each specification

  • Selectors are now a CSS3 Module and an independent specification; other specifications can now refer to this document independently of CSS

  • the specification now has its own test suite

Thursday, September 25, 2008

Debugging SQL Server Service Broker

Thought I would post a few of my tricks for debugging SQL Server Service Broker issues.  Most of these tricks are really targeted towards the cache invalidation patterns that I've been writing and presenting about.

Non-Functional Service Broker or your Error Logs are Growing Out of Control

  1. Check if service broker is enabled:
    SELECT is_broker_enabled FROM sys.databases WHERE name = 'MYDB'
    If it isn't enabled then you'll have to run:
    ALTER DATABASE MYDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    ALTER DATABASE MYDB SET NEW_BROKER
    ALTER DATABASE MYDB SET MULTI_USER
  2. Look in the SQL Server Error Logs
    1. If you see "The master key has to exist and the service master key encryption is required.", you'll need to run:
      CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password'
    2. If you see "Cannot execute as the database principal because the principal "[PRINCIPAL]" does not exist...", you'll need to run:
      sp_changedbowner 'sa'

DB Server Has Huge TempDB Growth and CPU Saturation

  1. Execute a query to determine if you are leaking Service Broker conversations.  This will show all open conversations.  If you've got a lot of conversations in here, your db is leaking conversations and you're definitely going to have performance problems:
    USE MYDB
    SELECT * FROM sys.conversation_endpoints
  2. Add the Dialog Timer Event Count performance counter:
    [Figure: Dialog Timer Event Count performance counter]

Some other useful queries are:

  • Transmission Queue View - This view will tell you of all of the messages that are attempting to be sent to a target service.  If the target service is having issues, it's possible for this view to have many rows, causing database growth:
    USE MYDB
    SELECT * FROM sys.transmission_queue
  • Sysprocesses view - This view can provide some information on the Service Broker processes that are running.  The following are service broker processes that you may see. 
    USE MYDB
    SELECT * FROM sys.sysprocesses
    Take a look at the CPU time and IO info that the system views can give you to see if any of these are the culprits of your performance issues.
    • BRKR ASYCN CONN
    • BRKR CMPTLN HDLR
    • BRKR MSG XMITTER
    • BRKR MSG DSPTCHR
    • BRKR EVENT HNDLR
    • BRKR INITIALIZER
    • BRKR TASK

Monday, September 15, 2008

Giving Something New a Try

I've decided to leave my job of almost 9 years, the .NET platform, and Windows all in one fell swoop, and give something new a try. At the beginning of October, I'll be back working for a small company called NextPoint as a Software Engineer doing Ruby development on OS X. Big change indeed.

I'm extremely excited to be working on something so different, and ecstatic about working with another fantastic small team of 4 smart folks. I'll be working for a boss that I've worked for in the past, and have a huge amount of respect for and trust in.

What hasn't changed is that I'll still be doing software development for the legal vertical, so I'll get to stay in a domain that I've gained a lot of knowledge in over the years. Hopefully I'll be able to provide some of that knowledge to this new company.

I want it to be clear that I'm not running away from the .NET community, or being driven away. I'm so grateful to this community and the support it's given me. I still plan on trying to stay involved, and hopefully I'll be able to provide a different perspective on .NET, and C# in specific, to the folks in the .NET community.

Anyway, thanks to the folks that got me started here, and those that provided support over the years (indirectly and directly). It's much appreciated.

Saturday, September 06, 2008

Codeapalooza Slides and Code Samples

Hey All

Yes, we had some pretty nasty hardware issues at today's presentation.  If you were there, I really apologize for that.  Hopefully once we worked out the kinks, you were able to glean some of the great stuff here.

I've posted the slides and code samples.

Thursday, September 04, 2008

Codeapalooza this Saturday!!

Don't forget that Codeapalooza is this Saturday.  Lots of awesome presentations are lined up, including my presentation on ADO.NET Data Services.

Don't miss it!

Saturday, August 30, 2008

ADO.NET Data Services (Project "Astoria") Query String Options and Expression Syntax

I'm stealing this directly from the ADO.NET Data Services documentation here:

http://msdn.microsoft.com/en-us/data/bb931106.aspx

However, it's all in a Word document (??) right now, so I thought I'd repost to something that might be a bit easier to find.

Structure of Web Data Services URLs

The basic format for URLs is:

http://host/vdir/<service>/<EntitySet>[(<Key>)[/<NavigationProperty>[(<Key>)/...]]]

Note: in the syntax above [ ] imply optional components

The 4 main elements of the URL syntax are:

1. The data service URL. The data service URL is the first part of the URL that points to the data service .svc/.rse file. For example, http://host/myapp/northwind.svc. The examples below assume that the URLs start with that prefix for brevity.

2. The entity-set name (optional). If you include an entity-set name, then all the entities in that entity-set are returned. For example, /Customers would return all of the customers in the Northwind data service. The system allows for an optional filter predicate contained in parentheses to subset the response to a single entity. For single-key entities, you can simply indicate the key value, and the resulting URL will point to that entity specifically. For example, if there is a customer entity with a key ‘ALFKI’, its URL would be /Customers(‘ALFKI’). Additional expression-based filtering on a set is enabled by using query string parameters, which are described later in this document.

3. A navigation property (optional). A navigation property can be placed after the entity-set name (separated by a “/”), indicating that you want to traverse the relationship being pointed to. For example, /Customers(‘ALFKI’)/Orders would return the sales orders of the customer with the primary key ‘ALFKI’. As noted above, a filter can also be applied to the navigation property using query string operators (described later in this document) to return only a subset of the related entities. For example, /Customers(‘ALFKI’)/Orders?$filter=OrderDate gt '1998-1-1' returns all the orders posted after Jan 1st, 1998, for the customer with a key ‘ALFKI’. Since the result of traversing a relationship through a navigation property is another set of entities, you can continue to add navigation properties to the URL to navigate through the relationship graph specified in the data service schema. For example, /Customers(‘ALFKI’)/Orders(1)/Employees returns the employees that created sales order 1 for the customer with a key of ‘ALFKI’.

Query string options

While the URL format allows for filtering and traversing through the graph of entities in the store, it does not have constructs to control the output. For that, a number of optional query string parameters are supported by the Astoria data services. Table 1 below lists all of the query options along with their descriptions and some usage examples.

expand

The ‘expand’ option allows you to embed one or more sets of related entities in the results. For example, if you want to display a customer and its sales orders, you could execute two requests, one for /Customers(‘ALFKI’) and one for /Customers(‘ALFKI’)/Orders. The ‘expand’ option returns the related entities in-line with the response of the parent URL request.

You may specify multiple navigation properties to expand by separating them with commas, and you may traverse more than one relationship by using a dot to jump to the next navigation property.

--a customer with related sales orders

/Customers[ALFKI]?$expand=Orders

--a customer with related sales orders and employee information related to those orders

/Customers[ALFKI]?$expand=Orders.Employees

--Orders with related employees information and related shipper information

/Orders[10248]?$expand=Employees,Shippers

orderby

Sort the results by the criteria given in this value. Multiple properties can be indicated by separating them with a comma. The sort order can be controlled by using the “asc” (default) and “desc” modifiers.

/Customers?$orderby=City

/Customers?$orderby=City desc

/Customers?$orderby=City desc,CompanyName

skip

Skip the number of rows given in this parameter when returning results. This is useful in combination with “top” to implement paging (e.g. if using 10-entity pages, saying $skip=30&$top=10 would return the fourth page). Note that skip only makes sense on sorted sets; if an orderby option is included, ‘skip’ will skip entities in the order given by that option. If no orderby option is given, ‘skip’ will sort the entities by primary key and then perform the skip operation.

--return all customers except the first 10

/Customers?$skip=10

--return the 4th page, in 10-row pages

/Customers?$skip=30&$top=10

top

Restrict the maximum number of entities to be returned. This option is useful both by itself and in combination with skip, where it can be used to implement paging as discussed in the description of ‘skip’.

/Customers?$top=5

--top 5 sales orders with the highest TotalDue

/Orders?$orderby=TotalDue desc&$top=5

filter

Restrict the entities returned from a query by applying the expression specified in this operator to the entity set identified by the last segment of the URI path

/Customers?$filter=City eq ‘London’

-- all customers in London

/Customers?$filter='Wayne, John' eq insert(ContactName, length(lastname), ',')

-- Match all Customers with the value of the property ‘fullname’ equal to ‘Wayne, John’

Table 1. Query string options
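
Since these options are plain query string parameters, you can exercise them with any HTTP client. A quick C# sketch (the host and service path are the same placeholders used above):

// Fetch the first five customers ordered by City, as an Atom feed.
using (WebClient client = new WebClient())
{
    string atom = client.DownloadString(
        "http://host/myapp/northwind.svc/Customers?$orderby=City&$top=5");
    Console.WriteLine(atom);
}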

Expression Syntax

The simple expression language that is used in filter operators supports references to columns and literals. The literal values can be strings enclosed in single quotes, numbers and boolean values (true or false). If a date/time value needs to be specified, it can be written as a string (in quotes) and the system will attempt to convert it to a date/time if the other operand is a date/time type.

The operators in the expression language use abbreviations of the names instead of symbols to reduce the amount of escaping necessary in the URL. The abbreviations are listed in the table below.

Logical Operators

eq (Equal): /Customers?$filter=City eq 'London'

ne (Not equal): /Customers?$filter=City ne 'London'

gt (Greater than): /Product?$filter=UnitPrice gt 20

gteq (Greater than or equal): /Orders?$filter=Freight gteq 800

lt (Less than): /Orders?$filter=Freight lt 1

lteq (Less than or equal): /Product?$filter=UnitPrice lteq 20

and (Logical and): /Product?$filter=UnitPrice lteq 20 and UnitPrice gt 10

or (Logical or): /Product?$filter=UnitPrice lteq 20 or UnitPrice gt 10

not (Logical negation): /Orders?$filter=not endswith(ShipPostalCode,'100')

Arithmetic Operators

add (Addition): /Product?$filter=UnitPrice add 5 gt 10

sub (Subtraction): /Product?$filter=UnitPrice sub 5 gt 10

mul (Multiplication): /Orders?$filter=Freight mul 800 gt 2000

div (Division): /Orders?$filter=Freight div 10 eq 4

mod (Modulo): /Orders?$filter=Freight mod 10 eq 0

Grouping Operators

( ) (Precedence grouping): /Product?$filter=(UnitPrice sub 5) gt 10

Table 2. Operators for filter expressions

In addition to the operators described above, a set of functions are also defined for use with the filter query string operator. The following tables list the available functions. This CTP does not support Aggregate functions (sum, min, max, avg, etc) as they would change the meaning of the ‘/’ operator to allow traversal through sets. For example, /Customers?$filter=average(Orders/Amount) gt 50.00 is not supported. Additionally, ISNULL or COALESCE operators are not defined. There is a null literal which can be used for comparison following CLR semantics.

String Functions

bool contains(string p0, string p1)

bool endswith(string p0, string p1)

bool startswith(string p0, string p1)

int length(string p0)

int indexof(string arg)

string insert(string p0, int pos, string p1)

string remove(string p0, int pos)

string remove(string p0, int pos, int length)

string replace(string p0, string find, string replace)

string substring(string p0, int pos)

string substring(string p0, int pos, int length)

string tolower(string p0)

string toupper(string p0)

string trim(string p0)

string concat(string p0, string p1)

Date Functions

int day(DateTime p0)

int hour(DateTime p0)

int minute(DateTime p0)

int month(DateTime p0)

int second(DateTime p0)

int year(DateTime p0)

Math Functions

double round(double p0)

decimal round(decimal p0)

double floor(double p0)

decimal floor(decimal p0)

double ceiling(double p0)

decimal ceiling(decimal p0)

Type Functions

bool IsOf(type p0)

bool IsOf(expression p0, type p1)

<p0> Cast(type p0)

<p1> Cast(expression p0, type p1)

Examples

  • /Orders?$filter=ID eq 201
    • From all the Orders in the data store, return only the Orders with the ‘ID’ property equal to 201
  • /Customers?$filter='Wayne, John' eq insert(fullname, length(lastname), ',')
    • Match all Customers with the value of the property ‘fullname’ equal to ‘Wayne, John’
  • /Customers?$filter=isof(‘SpecialCustomer’)
    • Match all customers that are of type SpecialCustomer. Entity sets support inheritance and thus customer entities in the entity set may be of different types within a type hierarchy

Wednesday, August 20, 2008

Chicago .NET Users Group Presentation on SQL Cache Dependency Patterns

Thanks to all that came out.  I appreciate you taking a listen.

I've uploaded the Sql Cache Dependency Patterns presentation notes and demos for you all to enjoy.  Also, below is the copy from the handout that was available at the meeting (also in the zip file for download).

Summary of SQL Server 2005 Cache Dependency Patterns

Declarative SqlDependency

Overview

Declaratively add database dependencies into your ASP.NET web applications using an attribute-based approach. Dependencies can be added at the page level for all data sources, or at the individual SqlDataSource level.

Appropriate Uses

If there is heavy use of ASP.NET declarative databinding using simple T-SQL queries.

Pros

Extremely simple to use; piggybacks on the flexible OutputCache scheme.

Cons

Only available to web applications. Severely restricts query composition and database options.

System.Web.Caching.SqlCacheDependency

Overview

Implementation of CacheDependency that integrates database dependency into the existing System.Web.Caching architecture.

Appropriate Uses

If there is heavy use of the System.Web.Caching architecture, and T-SQL queries are simple.

Pros

Very simple to use, and leverages the System.Web.Caching CacheDependency architecture.

Cons

Only available to web applications. Severely restricts query composition and database options.
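
As a rough sketch of what this pattern looks like in code (the connection string, query and cache key are my own placeholders, and the command-based constructor assumes SQL Server 2005 query notifications):

SqlDependency.Start(connectionString); // once per application
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
    "SELECT ProductID, Name FROM dbo.Product", connection))
{
    // The dependency must wrap the command before the command executes.
    SqlCacheDependency dependency = new SqlCacheDependency(command);
    connection.Open();
    DataTable products = new DataTable();
    products.Load(command.ExecuteReader());
    HttpRuntime.Cache.Insert("products", products, dependency);
}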

System.Data.SqlClient.SqlDependency

Overview

Provides an event-based approach to the problem by notifying the application of when the set of data has changed.

Appropriate Uses

When cache storage is home grown, or complicated to invalidate.

Pros

Available to both WinForms and ASP.NET. Allows flexibility in the clearing of cache, which provides a wider range of cache storage options.

Cons

Severely restricts query composition and database options.

System.Data.SqlClient.SqlNotificationRequest

Overview

This is the lowest level of SQL notifications provided by the .NET Framework. Monitors query changes and executes the delivery of messages to the Service Broker queues.

Appropriate Uses

More control is required around how often the application is notified of changes.

Pros

Provides ultimate control over how the notification process is handled.

Cons

Queries and database options are still highly restrictive. No control of the message sent to the notification listener. Lots of plumbing to write.

Roll Your Own with SQL Server Service Broker

Overview

A custom approach, leveraging the SQL Server Service Broker to handle data changes and notification delivery.

Appropriate Uses

When a highly granular, row based cache dependency scheme is in place.

Pros

Allows you to receive detailed information about the exact record that caused the notification. Ultimate flexibility around the entire cache invalidation strategy. Does not restrict the types of commands that you need to execute.

Cons

Can be complicated to implement depending on how complex the cache invalidation rules are. Lots of plumbing to write.

Gotchas

· Not all versions of SQL Server have the SQL Server Service Broker. See http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx.

· You must enable the Service Broker per database. See the \SQL\CreateNewBroker.sql from the sample application

· You must create an encryption key per database. See the \SQL\CreateNewBroker.sql from the sample application

· The DB principal of the database is incorrect because it has been restored from another server. See http://support.microsoft.com/kb/913423

· You must call System.Data.SqlClient.SqlDependency.Start() if using SqlDependency or SqlCacheDependency.

· Queries must abide by the following rules if you aren’t rolling your own: http://msdn.microsoft.com/en-us/library/ms181122.aspx

· LINQ to SQL must generate valid queries.

· Wiring the SqlDependency, SqlCacheDependency and SqlNotificationRequest must happen prior to executing the command, as in the sketch below.
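
To make those last two gotchas concrete, here's a minimal SqlDependency sketch (the connection string and query are placeholders):

SqlDependency.Start(connectionString); // once per application, before any dependencies
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
    "SELECT ProductID, Name FROM dbo.Product", connection))
{
    SqlDependency dependency = new SqlDependency(command); // wire before executing
    dependency.OnChange += (sender, e) =>
    {
        // Invalidate your cache here; e.Info describes what changed.
    };
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Populate the cache from the reader.
        }
    }
}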

Tuesday, August 19, 2008

Using Twitter to Thrill Customers

So check this out. 

I'm prepping for this presentation and I'm planning on boring my audience with yet another AdventureWorks sample DB demo.  In case you haven't noticed, they've recently moved all Microsoft SQL Server sample DBs to CodePlex.

Anywho, I was trying to download the sample DB from CodePlex and it was taking forever.  So I used this spare time to write on Twitter about it.  Nothing horrible, just sort of a prayer to the bandwidth gods, or latency angels, or server uptime wizards, or whatever.

Then this morning I see in my inbox, this message from a fella at Microsoft:

"Hello Jim,

I noticed in your Twitter post that you were having trouble downloading AdventureWorks (http://twitter.com/jfiorato/statuses/891763277).  Were you able to complete your download?  If not, would you like any assistance from us?

Thanks,

Jonathan"

I was totally blown away.  It's seriously fantastic service.  What do they usually say? 

Only about 1 in 100 or 1000 customers actually complains to the company that's selling the product?  Can you imagine the service you can provide to people who are posting complaints to the ultimate anonymous gripe platform, Twitter?

This one dude, using a Summize search alert for CodePlex, is kicking ass.

Thanks again Jonathan.

Sunday, August 17, 2008

The Physical and Logical Topology of Microsoft "Velocity"

I thought it would be good to put up the physical and logical models of Velocity so you can all get a grasp of what it looks like.

Physical Topology

Below is the physical topology of a Velocity caching tier.  As you can see, each cache cluster contains one or more cache servers (hosts) with an instance of Velocity running on each as a Windows service.  The individual hosts are each cognizant of a folder share that contains information linking them all together in a cluster (Microsoft understands this is a single point of failure, and intends to solve it by RTM).  This is one level at which the synchronization of caches occurs.

Physical Topology

Obviously, you can just keep adding more servers to the cluster in order to scale.  Each server in the cluster is configured with a service port (for cache communication), a cluster port (for administration), and a memory governor that controls how much memory is allocated on the server for the cluster.  This configuration happens at install, as seen below:

Installation Configuration

The Cache Administration Tool noted in the diagram above is a command line tool that provides simple control and diagnostic functions, allowing administrators to start and stop clusters, hosts, and get real time statistics about cache inventory.

Each application has a reference to one or more hosts, through the web.config or app.config.  Velocity will then determine the host to retrieve the cache from.  Below is a diagram of the configuration topology:

Configuration Topology

Logical Topology

Below is the logical model of Velocity.  You can see that there are three different ways in which you can store caches across clusters.  You can use the default cache, a named cache, or a region. 

Logical Model

If you're using a single cluster for a single application, using the default cache will suffice.  However, if you're using the same cache cluster for multiple applications, then a named cache should be used in order to logically partition your applications (think one named cache per database).  At the named or default cache level, you can configure cache policies individually such as fail-over, expiration, and eviction.

Regions are caches that can be created as a subset of a named cache.  Region caches are guaranteed to exist on a single host server.  Though not entirely clear to me, I've gathered that the Velocity internals use this as a unit for replication within named caches (behind the scenes), and they've made it available in the case that you'd like to skip the overhead of routing, and go directly to a cache location for your data.

That's it for  the topology.  Hope it gave you a good understanding of how Velocity is organized and how it's solving the problem of distributed caching.

Saturday, August 16, 2008

Distributed Caching with Microsoft "Velocity" - An Introduction

Something that I think a lot of software developers have dealt with when building high availability web applications is how to manage distributed cache.  It's one thing to be able to get your data/objects into cache, but it's another thing to come up with a great cache synchronization architecture that's reliable and scalable.  To break down the common synchronization problem simply: In a load balanced environment, how do changes made on one server, force cache purging on other servers in the farm? 

This problem has resulted in the creation of distributed caching frameworks, and the emergence of a caching tier.  A couple of frameworks have been available for some time now.  Some are free (memcached) and some aren't (NCache).  Now Microsoft is entering the game with a distributed caching framework code-named "Velocity".

Velocity solves this problem by providing the infrastructure required to keep caches synchronized across application boundaries.  It essentially fuses memory across application and network boundaries, providing a unified view of a cache from a distributed application.

The key features of Velocity are:

  • It can cache any CLR object that has been declared as Serializable (either through the SerializableAttribute, or by implementing the ISerializable interface).
  • Through configuration, caches can be embedded in your application, or accessed over the network.
  • Provides another option for session storage by allowing you to configure session state to be persisted to the Velocity cache.
  • Highly flexible, allowing you to have "regions" of data within the same application to use different caching strategies.
  • Highly modular in structure, allowing you to hook in transactions, or even replace the network layer.
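
To make the programming model concrete, here's roughly what client code looks like in the current CTP bits, going from the samples I've seen. The CacheFactory/Cache type names and the "default" cache name may well change before RTM, so treat this as illustrative only:

// Illustrative CTP-style usage; the factory reads the cache host list
// from app.config/web.config.
CacheFactory factory = new CacheFactory();
Cache cache = factory.GetCache("default");

cache.Put("customer:42", customer); // the object must be Serializable
Customer cached = (Customer)cache.Get("customer:42");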

The Velocity team is part of the Data Platform group at Microsoft, and in fact Velocity shares its clustering technology with the team building SQL Server Data Services (SSDS), a cloud computing initiative from the SQL Server team.  The result of this collaboration and sharing is going to be the ability to scale this technology to thousands of nodes within a cache.

Additionally, they are working with the MSN.com and Live.com teams to see if those sites could benefit from Velocity.  To think, you have the same scaling facility that two of the highest trafficked sites have.  For free!

This is an extremely useful technology coming out of Microsoft, and long overdue, and I'm a fan of the way they are putting it together.  I'll be posting more about the different configuration options and appropriate uses of Velocity.

Friday, August 15, 2008

Speaking at Chicago .NET Users Group - August 20th

Update: Subject changed.  I'll be speaking on SQL Server Cache Dependency Patterns

I'll be speaking on Distributed Caching with Microsoft Velocity at the August 20th CNUG meeting. Meeting is at the Downers Grove Microsoft office.

Tip For Those Who Sell Housewares

A little tip for those of you that sell home goods and furnishings.

Find a nice area that has lovely weekly rentals (if you're in the Chicago area, think Saugatuck, MI, Door County, WI, Galena, IL).

Fill these weekly rentals with your stuff for free. Make sure you've got your name and how to buy all over. All these places have concierge books. Get a "Furnished By" or "Pots and Pans provided by", if your name isn't all over your product.

Proof? Got back from Saugatuck, Michigan a month ago, and came home today to a brand new set of Rachael Ray pans. Same ones that were in the rental.

Tuesday, August 05, 2008

Speaking at Codeapalooza - September 6th

Beware the shameless plug to follow:

I'm going to be speaking at Codeapalooza on September 6th on ADO.NET Data Services (Project Astoria). There's lots of other great sessions lined up that you might be interested in as well.

http://www.codeapalooza.com

Monday, July 28, 2008

Twitter Hacked

So, it looks like Twitter has been gamed. I'd venture to guess that, with the speed at which this site gained popularity and all the focus on performance and availability, they don't have a whole lot of logging going on in the app. I wouldn't be surprised if it takes them some time to figure out how the follow functionality was attacked. Who knows, maybe I'm wrong.

Wonder how quickly these accounts can proliferate in the meantime. Maybe they'll just shut down the entire API until they figure it out.

David Heinemeier Hansson Startup School Presentation

I had been hearing quite a bit about the David Heinemeier Hansson speech at Startup School, and finally got around to watching it. It's a pretty smart presentation. Have a look.

Tuesday, July 08, 2008

Lingual Dexterity - The OCaml Development Environment

All the tools and setups that I mention in this post are on the Mac OS, so it'll be a pretty straightforward port if you're using a Linux platform. Alternatively, if you are on Windows, you're in good shape if you're running Cygwin.

I've found a pretty good post here on setting up an OCaml development environment on the Mac OS. One thing that this post leaves out is that you'll need to make sure you install the Makefile TextMate bundle.

Using the Interpreter


Since it's mostly a functional language, you find yourself using the interpreter quite a bit in order to test out small chunks of code or even start getting warmed up to the language. You'll immediately notice that your up/down/back/forward keys are useless. In order to get those buttons to do what you're used to, you'll have to install ledit. ledit is a line editor that gives you Emacs-like control characters. Once you get it installed, you can open terminal and type
ledit ocaml

and you'll be running the OCaml interpreter through ledit. The instructions I refer to above have you install this, so if you follow those closely, you'll be in good shape already.

One option for your edit/compile/run cycle is to write your code in a file, then use that file in the interpreter to exercise your functions. To do this, you'll need to use the "#use" directive within the interpreter. In terminal, start up OCaml with
ledit ocaml

then this to import your file (path is relative)
#use "myOcaml.ml

Aside from unit testing (which will be its own post), this is the fastest way I've found to develop in the OCaml environment.

The IDE


OCaml is really one of those "notepad" languages (are there any other "non-notepad" languages other than VB, C# and Java?). As with lots of other "notepad" languages, there are Eclipse plugins, Emacs clones, VI(M) files and TextMate bundles. I'm a big fan of TextMate and have done most of my work in OCaml using it. There's a bundle for OCaml that's been working fairly well for me for some debugging, but it doesn't work all that well for unit testing (I guess there's an idea: a bundle with unit testing support).

What I find myself doing most of the time is editing the file in TextMate, then navigating to the makefile tab in TextMate (option+command+arrow), building (command+r), then switching to my terminal window to execute the unit tests.

In the next post, I'll go over how to setup and execute unit tests.

Again, if there's anyone else out there who has feedback on this setup, I'd love to hear it. As I said, I'm by no means an expert, and any productivity enhancing tips you all have, would be great to hear.

Thursday, June 26, 2008

Handy GMail Trick - Plus Addressing

I saw this pop up a while back. Basically you can add "+anything" to your GMail address and it will go to your GMail account. They call it GMail Plus Addressing.

So you can do "steve+Test1@gmail.com" and that email will go to "steve@gmail.com".

Pretty cool eh?

For those of you that need to test emails all the time, and are always in need of unique emails, this comes in super handy.
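
If you generate test accounts in code, something like this (C#, purely illustrative) gives you a unique address per test while everything still lands in one inbox:

// Every generated address routes to the same underlying GMail account.
string testAddress = string.Format("steve+{0}@gmail.com", Guid.NewGuid().ToString("N"));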

Wednesday, June 25, 2008

Lingual Dexterity - Objective Caml Overview

Yep, that's right. Objective Caml. Objective Caml is an implementation of the Caml language that adds some object oriented concepts to make it a multi-paradigm language. I chose Objective Caml because it's the language of choice in the course I'm currently taking (Prolog to follow, tee hee). In addition, it's a pretty good language for an introduction to functional programming, despite its object oriented features.

If you're looking for a good tutorial on the language, you can visit the aptly named http://www.ocaml-tutorial.org. A few highlights of what I find interesting are below. Mind you, an Ocaml expert could probably name off a hundred better things; these are just the ones that stand out in my tiny little brain.

Inferred Types


There's no need to declare types in the Ocaml language, but it's very much a strongly typed language. C# 3.0 is getting closer to this with the var keyword and its inferred typing, but Ocaml is far more advanced in this area. An example in Ocaml:
let myInteger = 8;;

Which would have a C# equivalent of:
var myInteger = 8;

Obviously the benefit to this is that your code is far less verbose. On the downside, as a noob Ocaml developer, you end up making educated guesses at the return type of your complex functions. This issue is probably more a combination of the functional style and the inferred typing, however, and you can quickly get used to it. I can imagine it's a breeze for anyone with a functional background or experience with dynamically typed languages.

Recursion Over Lists


Something that probably all good functional languages need to do well is recursion. Ocaml has deep support for recursion over lists. Below is an example of a function that sums integers in a list as it loops through it.

let myList = [1;2;3];;
let rec recurseList theList = match theList with
| [] -> 0 (*base case, have reached the end of the list*)
| head::tail -> head + (recurseList tail);; (*head is the current item in the recursion, tail is the rest of the list*)

Pattern Matching


This is probably the flagship feature of the language. Ocaml provides a rich pattern matching syntax that allows you to use recursion in place of loops easily, or to work with the custom data types you can declare. It's probably best to describe with an example though.

In this example, the first thing I'm going to do is declare a type that describes a binary tree. The data structure has nodes which can have a left node and a right node, and nodes can be empty. The "*" syntax below is how you define a tuple, which would be the equivalent of a C# Pair, but can be any number of items.

type bst = Node of int * bst * bst | Empty

This doesn't translate well to C#, but essentially what we've done is create a data type called "bst" that can exist in two states: either as a Node, a tuple with an integer in the first position and a bst in the second and third positions, or as Empty, which you can think of as null.

Now, where pattern matching comes into play is when I want to recurse over this tree to see its nodes.

let printNode theBst = match theBst with
| Empty -> print_string "this node is empty"
| Node(value, Empty, Empty) -> print_string "this node has no children"
| Node(value, left, Empty) -> print_string "this node has a left child only"
| Node(value, Empty, right) -> print_string "this node has a right child only"
| Node (value, left, right) -> print_string "this node has two child nodes"
| Node (_,_,_) -> print_string "this node can be anything but Empty"

As you can see, it's a really fancy case statement. The final pattern match is redundant, but I thought I'd put it there so you could see an example of how a kind of "wildcard" pattern matching exists.

High Order Functions


Higher-order functions are another feature commonly available in most functional programming languages: the ability of a function to take another function as an argument. Here's an example in Ocaml:

let doFunction theFunction theValue = 
theFunction theValue;; (*Execute the given function on the given value*)

let myFunction x = x + 1;; (*This is the function to pass to the high-order function*)

doFunction myFunction 3;;(*Output should be 4*)

This is pretty powerful stuff, as most .NET developers have already seen in C# with delegates and now anonymous functions (which you can do in Ocaml as well.)
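
For comparison, a C# 3.0 equivalent of the doFunction example might look like this quick sketch:

// C# equivalent of the Ocaml doFunction/myFunction example above.
static int DoFunction(Func<int, int> theFunction, int theValue)
{
    return theFunction(theValue); // execute the given function on the given value
}

static void Demo()
{
    Func<int, int> myFunction = x => x + 1; // the function to pass in
    Console.WriteLine(DoFunction(myFunction, 3)); // prints 4
}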

Summary


Now, the language is cool and all, but based on my background, I struggle to find good uses of it. I think some of these features I identified point out strengths of the language and could highlight potential uses for it (parsing, lexing, basically pattern matching). The Caml website has a bunch of "successful" applications that show various uses. Some of them make sense to me, others make me think, "you probably could have done that with .NET in like an hour".

This is just the tip of the iceberg of the language, and I'm not sure how useful this is to you all, or how comprehensive a description it is, but it's helped me sum up my thoughts. I'd love to hear your feedback. I'll follow up with a post on the development environment, project structure and unit testing.

Tuesday, June 03, 2008

Lingual Dexterity - Getting Over the Hump

Whenever I set out to start on learning a new programming language (I've started often, can't say I've ever finished), I always feel like the biggest hump to get over is the practice aspect of the new technology, and not necessarily the language itself. I can sit down and go through the tutorials, but what I always want to see, right from the get go, is the development environment, what the package structure is like, how to write unit tests, etc.

So, over the next few months I'll be putting up posts about getting over the hump with various popular modern languages. The subjects that I'd like to cover (if applicable) for each language are:


  • The development environment/debugging

  • Proper package structure/file and class dependencies

  • Deployment/Build

  • Data Access

  • Unit Testing Framework(s)


Any others that you'd like to see (sans specific language features/constructs)?

As I write these, I'm learning myself, so I'd really appreciate any feedback you have on what I've written, as I'm sure there's lots of more experienced amateurs out there than me. Also, I'll attempt to stay true to the languages and the technologies themselves, and do my best not to write .NET in Python or Ruby.

Again, in the name of collective learning, I'd love to hear your feedback.

Saturday, May 24, 2008

The SqlConnectionStringBuilder Class

One of the more handy BCL (Base Class Library) features that I ran across years ago, but that I don't think has gotten much attention, is the SqlConnectionStringBuilder class in the System.Data.SqlClient namespace.  This is such a handy class, as it allows you to forget about the proper syntax of the connection string and just deal with the parts that are necessary.  No asking yourself "is it Trusted Security or Integrated Security???".  Well, maybe you have an easier time with it than I do, but, still, a handy class.

It's also super handy if you've got your connection string as a string, and you need parts of it.  No need to write your own parsing.  Just load it up into the SqlConnectionStringBuilder class and get the parts you need from the properties of the class.

Below is a simple sample; there's obviously much more to it as far as connection string options go, but I'll let you explore those on your own.

SqlConnectionStringBuilder builtFromScratch = new SqlConnectionStringBuilder();
builtFromScratch.DataSource = "(local)";
builtFromScratch.InitialCatalog = "MyDatabase";
builtFromScratch.IntegratedSecurity = true;
 
Console.WriteLine(builtFromScratch.ToString());
 
SqlConnectionStringBuilder builtFromString
    = new SqlConnectionStringBuilder(builtFromScratch.ToString());
Console.WriteLine("Server: {0}", builtFromString.DataSource);
Console.WriteLine("Database: {0}", builtFromString.InitialCatalog);
Console.WriteLine("Windows Authentication: {0}", builtFromString.IntegratedSecurity);

Tuesday, May 20, 2008

Three Things You Should Do When Using SQL Service Broker

There are three things that I've learned (the hard way) recently while working with SQL Server Service Broker.  Not doing the first two of these things will really bite you.  Really bite you.  I mean, like, SQL Server will punch you in the neck, steal your wife and eat your babies, bite you...

The symptoms of not doing these things are:

  • Saturated CPU on the SQL Server
  • Out of control tempdb growth
  • Calls from angry customers

Back in January, I put together a post that used a "roll your own" approach to cache invalidation, and in that example I used the SQL Server Service Broker in order to notify the application that there was data that changed.  The sample issues and solutions  that I'm using below are from that post, so refer there to get the whole sample (I've updated that post to make sure I don't get anyone punched in the neck).

SQL Broker is designed to be the new "middleware", and is primarily set up to send messages between SQL Server Brokers.  All messages are sent in the context of a conversation.  The conversation ensures that messages are sent in order, received only once, and delivered reliably.

All of these symptoms have to do with not using conversations properly (read: me not knowing what the hell I was doing), which leads me to believe there's quite a bit of overhead involved in the management of conversations.

So, have a look below.  Definitely do things #1 and #2, and see if you can pull off #3.

1.  Always End the Conversation

This one seems to be pretty often overlooked.  In my example from back in January, I was using a one-sided service, meaning I was receiving messages on the same service that was sending them.  When using SQL Broker as middleware, in most cases you will be sending messages on one service and receiving them on another (two-sided).

In both cases, you absolutely must end the conversation.  Failing to do so leaves the conversation out there, holding on to space in tempdb and a portion of your CPU.  The more conversations accumulate, the worse your problems get.

If you're experiencing this behavior, execute the following statements to reset the broker and clear all conversations (note that this drops any in-flight messages as well):

ALTER DATABASE myDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE myDB SET NEW_BROKER
ALTER DATABASE myDB SET MULTI_USER

After you get your SQL Server back, you'll want to update the code that receives messages off the queue to end the conversation after the message has been received.  From the sample in January, I'd change it from:

command.CommandText = "WAITFOR (RECEIVE CAST(message_body AS XML) FROM PersonChangeMessages);";

To:

command.CommandText = @"DECLARE @conversationHandle uniqueidentifier;
                        DECLARE @message AS XML;
                        WAITFOR (RECEIVE @conversationHandle = [conversation_handle],
                            @message = CAST(message_body AS XML) FROM PersonChangeMessages);
                            END CONVERSATION @conversationHandle;
                            SELECT @message;";

2.  Always Set the Lifetime on the Conversation

The next thing you'll always want to do is provide a duration for which your conversation is valid.  Conversations without a specified lifetime last int.MaxValue seconds, which works out to roughly 68 years.  So, quite some time.

Before we had the following:

BEGIN DIALOG @DialogHandle
FROM SERVICE [PersonChangeNotifications]
TO SERVICE 'PersonChangeNotifications'
ON CONTRACT [http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @DialogHandle
MESSAGE TYPE [http://schemas.microsoft.com/SQL/Notifications/QueryNotification] (@Message);

And we'll want to change that to:

BEGIN DIALOG @DialogHandle
FROM SERVICE [PersonChangeNotifications]
TO SERVICE 'PersonChangeNotifications'
ON CONTRACT [http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
WITH ENCRYPTION = OFF, LIFETIME = 30;
SEND ON CONVERSATION @DialogHandle
MESSAGE TYPE [http://schemas.microsoft.com/SQL/Notifications/QueryNotification] (@Message);

3.  Reuse Conversations and Conversation Groups

This last item is a bit more difficult to accomplish, but it will insulate you from conversation issues having a huge impact on your application's performance.  If there aren't that many conversations out there, you limit the overhead.  All conversations have a handle, or id, as we saw above.  If you persist this id, in the database or in memory, you'll be able to reuse the conversation over and over, and amortize all the overhead of creating it.  I won't walk through a full sample in this post, but a rough sketch of the in-memory flavor follows.
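Here's what I mean, as a minimal sketch only (the method and field names are mine, not from the January post), assuming the same PersonChangeNotifications service and System.Data.SqlClient:

private static Guid? cachedDialogHandle;

// Begin a dialog once and cache its handle; every later SEND reuses it
// instead of paying the cost of a brand new conversation per message.
// Real code would also handle the cached conversation ending or expiring.
private static Guid GetOrBeginDialog(SqlConnection connection)
{
    if (cachedDialogHandle.HasValue)
        return cachedDialogHandle.Value;

    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = @"DECLARE @DialogHandle uniqueidentifier;
            BEGIN DIALOG @DialogHandle
            FROM SERVICE [PersonChangeNotifications]
            TO SERVICE 'PersonChangeNotifications'
            ON CONTRACT [http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
            WITH ENCRYPTION = OFF, LIFETIME = 3600;
            SELECT @DialogHandle;";
        cachedDialogHandle = (Guid)command.ExecuteScalar();
    }

    return cachedDialogHandle.Value;
}

The same idea works with a table instead of a static field, if you need the handle to survive restarts or be shared across processes.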

Sunday, May 18, 2008

Are Experience and Education Mutually Exclusive - The Result

I got a great response from my post the other day on whether or not formal education is necessary if you have experience. I appreciate all of you who left comments or contacted me directly.

Almost all of you said that experience was more valuable than education, once someone has already gotten their foot in the door.

But something funny happened a couple days after that post. I got schooled by a 20 year old teaching assistant. Needless to say, it was a bit humbling, and it drove one thing home for me:

I need to stay in school and finish my education.

Thursday, May 08, 2008

Are Experience and Education Mutually Exclusive?

I'm back.

Thanks to those who gave me a nudge. Promise I'll put some more technical posts up (MVC seems to be highly requested.)

So, most of you know, I got one of those non-traditional starts in the job market, like quite a few developers did in the 90's. I came from a totally different industry, without a 4 year college degree. But I had a knack for computers, a great friend and a small company took a chance, and here I am.

About 3 years ago, the small company was purchased by a much larger company, and introduced a tuition reimbursement program. So, I decided to enroll in a Bachelor's of Science program.

The reason I did so was that I thought that, because of my lack of formal education, I was missing a lot of lower level, "to the metal" concepts. I'd never written a binary tree, or a quicksort, or a compiler. I'd never worked out the computational order of an operation over a set. It was never about improving my resume; it was always about wanting to learn all the things that most of the people around me had learned.

So, for the last 3 years I've been chipping away at my degree. It's been an okay experience so far. I've found that a lot of what I thought I'd been missing, I'd already had. I've also found that what I thought would be mostly new, turned out to be mostly inefficient tedium.

Recently, as I was complaining (again), about how I felt like I was wasting my time, a noob young buck at work (who happens to be a recent grad), suggested that it would be a better use of my time to spend blogging and learning new things on my own than working on my undergraduate degree.

Ironically, my response to him was that it would be best if I finished it out, because it would look good on my resume. I also started thinking about how graduating would give me a good reason to buy a keg and have a party. Neither of these reasons was true to my original intent.

So, my (really long winded) question to you is: Do you feel like an experienced developer needs a degree to be marketable and to get a foot in the door, or is experience (including an active blog, twittering, speaking, networking) more valuable?

Tuesday, March 18, 2008

Putting It All Together - LINQ to SQL, ADO.NET Entity Framework, REST and Silverlight 2.0

A buddy of mine keeps bugging me about putting together something about Silverlight.  So, I thought I'd indulge him and put together an end-to-end example: ADO.NET Data Services as our data access layer, which publishes a RESTful web service, which in turn is queried with LINQ and displayed in a Silverlight application.

Hopefully, this will be the first of many Silverlight posts, as the more I use the technology, the more its potential strikes me.

To start, you're going to need a few things to execute this example:

  1. The usual suspects, Visual Studio 2008, SQL Server
  2. The AdventureWorks sample database
  3. Silverlight 2.0
  4. Silverlight Tools CTP from MIX 08
  5. ASP.NET Extensions

First thing we'll do is create a Silverlight application in Visual Studio.  Go to File -> New Project, and select Silverlight Application and click OK.  When Visual Studio prompts you about creating a test site or html page, select Web Site.

[Screenshot: creating the Silverlight project in Visual Studio]

After creating our project, the next thing we'll do is create our LINQ to SQL entities.  To do so, right-click on the web project (not the Silverlight application project), select Add New Item, then select LINQ to SQL Classes.  Name the class AdventureWorks.dbml and click OK.

Create a connection to your AdventureWorks database in server explorer, and drag the Product table on to the design surface.  You should have a simple surface that looks like this:

[Screenshot: LINQ to SQL design surface with the Product table]

Next we'll create our ADO.NET Data Service.  Right-click on your web project, choose Add New Item, select ADO.NET Data Service, and name it ProductDataService.svc.  Open the service code-behind and edit the class definition to include your LINQ to SQL DataContext, then uncomment and edit the line that grants read access to your entities.  You should have a service code-behind that looks like the following:

public class ProductDataService : WebDataService<AdventureWorksDataContext>
{
    public static void InitializeService(IWebDataServiceConfiguration config)
    {
        config.SetResourceContainerAccessRule("*", ResourceContainerRights.AllRead);
    }
}

When you run the service, you should see a page that looks like this:

[Screenshot: the ProductDataService.svc service page in the browser]

Now that we have our REST service, we can move on to our Silverlight control.  I'd recommend going through Scott Guthrie's series of tutorials on Silverlight 2.0 if you need an introduction to using Silverlight.  The following assumes you've gone through that series.

Our Silverlight application is going to be a simple control that lists out the top 20 products in the products table.  I'm going to use a grid layout with one column, two rows.  The top row will have a header, and the bottom row will have our list of products.  First we'll lay out our grid, which will result in the following XAML markup in our Page.xaml file in our Silverlight project:

<UserControl x:Class="WillsProductCatalog.Page"
   xmlns="http://schemas.microsoft.com/client/2007"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid x:Name="LayoutRoot" Background="White" ShowGridLines="True">
        <Grid.RowDefinitions>
            <RowDefinition Height="40" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
    </Grid>
</UserControl>

Next thing we'll do is add our title to the control, which will make the markup look like so:

<UserControl x:Class="WillsProductCatalog.Page"
   xmlns="http://schemas.microsoft.com/client/2007"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   xmlns:Data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" >
    <Grid x:Name="LayoutRoot" Background="White" ShowGridLines="True">
        <Grid.RowDefinitions>
            <RowDefinition Height="40" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <TextBlock Grid.Row="0" FontSize="20" Text="Will's Product Catalog" HorizontalAlignment="Center" />
    </Grid>
</UserControl>

Now, if we run the Silverlight project, we should get a test page that looks like this:

[Screenshot: the test page showing the "Will's Product Catalog" header]

Next thing we'll do is add our ListBox.  Your final XAML markup for the page should look like this:

<UserControl x:Class="WillsProductCatalog.Page"
   xmlns="http://schemas.microsoft.com/client/2007"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" >
    <Grid x:Name="LayoutRoot" Background="White" ShowGridLines="True">
        <Grid.RowDefinitions>
            <RowDefinition Height="40" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <TextBlock Grid.Row="0" FontSize="20" Text="Will's Product Catalog" HorizontalAlignment="Center" />
        <ListBox Grid.Row="1" x:Name="productListBox" />
    </Grid>
</UserControl>

Now we can write the code that calls the RESTful service and binds the data to the ListBox.  We'll open the code-behind for Page.xaml, create a simple structure representing our product, and in the constructor use Silverlight's built-in networking to call the service asynchronously; when the response comes back, we'll hydrate our structure and bind it to the ListBox.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Net;
using System.Xml.Linq;
 
namespace WillsProductCatalog
{
    public partial class Page : UserControl
    {
        public Page()
        {
            InitializeComponent();
            WebClient webClient = new WebClient();
            webClient.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClient_DownloadStringCompleted);
            webClient.DownloadStringAsync(
                new Uri("http://localhost:50492/WillsProductCatalog_Web/ProductDataService.svc/Products?$top=20&$filter=ListPrice%20gt%200"));
 
 
        }
 
        void webClient_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            if (e.Error == null)
            {
                XDocument xLinqDoc = XDocument.Parse(e.Result);
 
                var products = from product in xLinqDoc.Descendants("{http://www.w3.org/2005/Atom}content")
                               select new Product
                               {
                                   ID = Convert.ToInt32(product.Element("{http://schemas.microsoft.com/ado/2007/08/dataweb}ProductID").Value),
                                   Name = product.Element("{http://schemas.microsoft.com/ado/2007/08/dataweb}Name").Value,
                                   ProductNumber = product.Element("{http://schemas.microsoft.com/ado/2007/08/dataweb}ProductNumber").Value,
                                   ListPrice = Convert.ToDecimal(product.Element("{http://schemas.microsoft.com/ado/2007/08/dataweb}ListPrice").Value)
                               };
 
                productListBox.ItemsSource = products;
            }
        }
 
    }
 
    public class Product
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public string ProductNumber { get; set; }
        public decimal ListPrice { get; set; }
    }
}

It's unfortunate that we can't use the ADO.NET Data Services client classes (the 3.6 version of System.Web.Extensions) from Silverlight at this point, but I can understand that limiting the libraries in the Silverlight download keeps it nice and small.  Obviously, if they were there, we could really cut down on the amount of code we need to write here.

So, now that we've got our ListBox bound, you can see how this renders out in the browser:

[Screenshot: the ListBox rendering the Product class name for each item]

Now, the last thing we need to do is use the Silverlight data binding syntax to put something other than the class name in the ListBox.  You can do that by changing your XAML to the following:

<ListBox Grid.Row="1" x:Name="productListBox">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding ProductNumber}" Margin="5"/>
                <TextBlock Text="{Binding Name}" Margin="5"/>
                <TextBlock Text="{Binding ListPrice}" Margin="5"/>
            </StackPanel>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

And this will produce the following result:

[Screenshot: the ListBox showing product number, name, and list price]

Obviously, you can go much further with the formatting, and I'm hoping to put together some smaller posts that highlight some of those formatting abilities.  As a quick teaser, here's the kind of thing I mean:
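This is a sketch of my own (the converter name is hypothetical, and it assumes IValueConverter support in your Silverlight build): a value converter that renders ListPrice as currency instead of a raw decimal.

using System;
using System.Globalization;
using System.Windows.Data;

// Hypothetical converter, not from the walkthrough above: formats the bound
// value as currency, e.g. 1431.5 becomes $1,431.50.
public class PriceConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return string.Format(culture, "{0:C}", value);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // One-way binding only; we never convert back.
        throw new NotImplementedException();
    }
}

You'd register it as a resource on the UserControl and bind the price TextBlock with Text="{Binding ListPrice, Converter={StaticResource priceConverter}}".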