
Speaking at "Moving to Better Secure the Cloud"

I’ll be speaking at a Slashdot/Geeknet "virtual trade show" today.

Moving to Better Secure the Cloud: Governance, Risk, and Compliance Management

My presentation will be on the potential business impact on the web if an efficient, fully homomorphic encryption system is invented. I’ll be speaking sometime between 3:15 and 4:00 EST, for about 20 minutes. The target audience is CIOs.

Sorry for the short notice, but this came together at the last minute!


Ad-hoc SQL/POCO Queries in Entity Framework 4.0

Since version 4.0, the Entity Framework has had the ability to query un-mapped data and project it onto POCOs using ad-hoc SQL.

Here, for example, is how we check the current SQL Server version:

        internal class SqlVersionInfo
        {
            public string Edition { get; set; }
            public string ProductLevel { get; set; }
            public string ProductVersion { get; set; }
        }

        private static SqlVersionInfo GetSqlServerVersion(ObjectContext context)
        {
            const string sql =
             @"SELECT SERVERPROPERTY('productversion') as [ProductVersion],
                      SERVERPROPERTY('productlevel') as [ProductLevel],
                      SERVERPROPERTY('edition') as [Edition];";
            var result = context.ExecuteStoreQuery<SqlVersionInfo>(sql);
            return result.Single();
        }

Note that no mapping step is required — the code will just run with any ObjectContext at all.
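ExecuteStoreQuery can also take positional parameters, which the Entity Framework turns into DbParameters rather than splicing values into the SQL text. Here’s a hedged sketch along the same lines; TableInfo and FindTables are illustrative names of my own, not from the example above:

```csharp
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

internal class TableInfo
{
    public string Name { get; set; }
    public int ObjectId { get; set; }
}

internal static class AdHocQueries
{
    // The {0} placeholder becomes a DbParameter; the pattern is not
    // concatenated into the SQL string, so injection is not a concern here.
    public static List<TableInfo> FindTables(ObjectContext context, string pattern)
    {
        const string sql =
            @"SELECT name AS [Name], object_id AS [ObjectId]
              FROM sys.tables
              WHERE name LIKE {0};";
        return context.ExecuteStoreQuery<TableInfo>(sql, pattern).ToList();
    }
}
```

As before, result columns are matched to POCO properties by name, which is why the aliases matter.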


Sometimes, SELECT Really Is Broken

We got a bug report from a customer the other day; a certain query in one of our web services was giving the following error:

A column has been specified more than once in the order by list. Columns in the order by list must be unique.

Seems clear enough, except that

  1. There was no duplication in the ORDER BY. Our DBA discovered that the "problem" reference was [Project2].[C5] ASC, which is an alias, not a real DB column name. It certainly appeared only once, but removing it made the error go away. There is documentation which implies that this was an intentional change in SQL Server 2005, but:
  2. The same query worked fine on other SQL Server 2005 servers. It ran correctly in a local QA environment, but failed on the customer’s database.
  3. We didn’t write the query; it was produced by the Entity Framework, and my experience tells me that the SQL Server provider for EF doesn’t tend to produce un-compilable SQL.

In order to isolate the problem, I ran the LINQ query in LINQPad. This allowed me to reproduce the problem outside the context of the application (useful, since the test case took a couple of minutes, whereas executing the query in LINQPad directly is almost instant), and also allowed me to switch DB servers easily.

I found a workaround — I could change the LINQ and produce SQL which would work on both servers. But I didn’t like this "fix" because I still didn’t understand the problem. I don’t want to "fix" one thing but cause another problem for some other customer. So I dug a little deeper.

Googling turned up information about the actual problem this error message is supposed to describe, but that wasn’t the issue with our query. I did learn that we are not the first group to hit this problem, and that some people "fix" it by setting the database compatibility level to SQL Server 2000. But I don’t like this "fix" at all. Changing the database compatibility level would mean that I would also have to change the ProviderManifestToken in the application’s EDM, causing the Entity Framework to emit SQL Server 2000-compatible SQL. That would make the entire application less efficient. So I kept looking.

This Stack Overflow answer didn’t explain everything, but provided an important clue. We checked the server versions on both our local QA environment and the customer system. The customer server was running SQL Server 2005, SP 3. Our local QA environment was running SQL Server 2005, SP 4. Further experimentation showed that this was, in fact, the problem.

It’s generally true that "SELECT isn’t broken." But when you’ve ruled out every other possibility, it’s sometimes worth checking!


Testing Cross Cutting Concerns

So imagine you’ve been asked to implement the following requirement:

When a to-do list item is marked as complete, the CompletedOn date time property shall be set to the current time.

That doesn’t sound so hard to implement, but how can I test it? I don’t know precisely what the value of the CompletedOn property "should be" since I don’t know precisely when the Complete setter will run.

Method #1: Fuzzy Logic

One option would be to see if the value is "close enough":

var todo = new TodoItem("Test item");

var testStart = DateTime.Now;
todo.Complete = true;
var testStop = DateTime.Now;

Assert.IsTrue(todo.CompletedOn >= testStart && todo.CompletedOn <= testStop,
    "CompletedOn not set to correct timestamp.");

There’s a lot to dislike about this technique. The test could fail if the system clock moves backwards while it runs, e.g., when daylight saving time ends. It’s imprecise, and clearly won’t work in cases where we need a total order across multiple timestamps.

On the other hand, it’s good enough to start writing an implementation, and with that in hand we can consider a better test.

Implementing TodoItem

Here’s a first pass at an implementation:

public class TodoItem
{
    public DateTime? CompletedOn { get; private set; }

    private bool _complete;
    public bool Complete
    {
        get { return _complete; }
        set
        {
             _complete = value;
             CompletedOn = value ? DateTime.Now : (DateTime?)null;
        }
    }
    // ... rest of class

Method #2: Ambient Context

In his excellent book, Dependency Injection in .NET (I’ll write a full review someday, but the book was just updated and I’m still reading the new material), Mark Seemann suggests replacing the call to DateTime.Now in the implementation with a separate singleton which I can control. The implementation would change to:

public class TodoItem
{
    public DateTime? CompletedOn { get; private set; }

    private bool _complete;
    public bool Complete
    {
        get { return _complete; }
        set
        {
             _complete = value;
             CompletedOn = value ? TimeProvider.Current.Now : (DateTime?)null;
        }
    }
    // ... rest of class

…and the test becomes:

var todo = new TodoItem("Test item");
var expectedTime = new DateTime(2011, 01, 01, 1, 2, 3);
TimeProvider.Current = new OneTimeOnlyProvider { Now = expectedTime };

todo.Complete = true;

Assert.AreEqual(expectedTime, todo.CompletedOn, "CompletedOn not set to correct timestamp.");

The test is quite a bit better than the first attempt. I’m now testing for a specific value.
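The TimeProvider and OneTimeOnlyProvider types aren’t shown above. Here is a minimal sketch of what they might look like, inferred from the test code; the setter on the base Now property exists only to allow the object-initializer syntax in the test, and all of this is an assumption rather than Seemann’s actual implementation:

```csharp
using System;

// Ambient context: a static access point with a swappable implementation.
public class TimeProvider
{
    private static TimeProvider _current = new TimeProvider();

    public static TimeProvider Current
    {
        get { return _current; }
        set { _current = value ?? new TimeProvider(); }
    }

    // The default provider just delegates to the system clock.
    public virtual DateTime Now
    {
        get { return DateTime.Now; }
        set { /* ignored by the default provider */ }
    }
}

// Test double: always returns the fixed time assigned to it.
public class OneTimeOnlyProvider : TimeProvider
{
    private DateTime _now;

    public override DateTime Now
    {
        get { return _now; }
        set { _now = value; }
    }
}
```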

On the other hand, the readability of the code suffers somewhat. Most .NET programmers immediately know what DateTime.Now means, whereas with TimeProvider.Current.Now they have to guess. Also, developers on the team must learn to use a "non-standard" method of obtaining the current time.

Should legacy code, which might not even require a specific value of DateTime.Now for its tests, be updated to use this "new" method, just for the sake of consistency? What if that changes the performance characteristics of the code? It’s hard to dismiss this concern as "premature optimization" when we don’t want to change the functionality of the legacy code. This leaves me with a choice between a possibly (slightly) risky change to existing code and inconsistency in the code base.

At any rate, this seems like a good solution, but it’s not the only one.

Method #3: Moles

It’s pretty common to mock or stub methods not directly related to the system under test. Most mocking libraries don’t support this for static methods, but Moles, from Microsoft research, does.

Moles is a kind of monkeypatching for .NET; you can replace just about any method at runtime.

With Moles, we stick with the original implementation of the system under test and instead change the test:

var todo = new TodoItem("Test item");
var expectedTime = new DateTime(2011, 01, 01, 1, 2, 3);
MDateTime.NowGet = () => { return expectedTime; }; // this replaces all calls to DateTime.Now

todo.Complete = true;

Assert.AreEqual(expectedTime, todo.CompletedOn, "CompletedOn not set to correct timestamp.");

Again, it’s a better test than method #1.

Is it better than #2? Well, it’s certainly different. Moles replaces all calls to System.DateTime.Now in the code under test, whether or not I’ve intended this. In my sample code, there’s only one. But in the real world the code under test is generally far more complicated.

Interestingly, the technique in use here — replacing a standard method with one we supply — is the same for Ambient Context and Moles. However, the place where we make this redirection is quite different. Although the new method is defined in the test project in both cases, the actual rewiring is done in the test with Moles, whereas with Ambient Context it’s a source code change.

Moles is clearly useful in cases where you need to change a dependency of code which is not your own. Imagine trying to test non-MVC ASP.NET code which depends on ASP.NET methods which internally refer to HttpContext.Current. Indeed, this was painful even in early versions of MVC, although not so much in MVC 3.

But that’s not the case with this code: I have a direct dependency on DateTime.Now, not an indirect dependency. So in this case I’m not forced to use something like Moles.

Moles is pre-release (though it is currently being "productized"), un-supported, and closed source. That, by itself, will disqualify its use from many projects. On the other hand, it’s required for Pex, which is useful in its own right.

Method #4: Constructor Injection

We could of course use normal DI for the time feature

public class TodoItem
{
    public DateTime? CompletedOn { get; private set; }

    private ITimeProvider _timeProvider;

    private bool _complete;
    public bool Complete
    {
        get { return _complete; }
        set
        {
             _complete = value;
             CompletedOn = value ? _timeProvider.Now : (DateTime?)null;
        }
    }
    // ... rest of class

…and the test becomes:

var expectedTime = new DateTime(2011, 01, 01, 1, 2, 3);
var todo = new TodoItem("Test item", new OneTimeOnlyProvider { Now = expectedTime });
todo.Complete = true;

Assert.AreEqual(expectedTime, todo.CompletedOn, "CompletedOn not set to correct timestamp.");

The test is simple and the code is clean. For this specific case it’s arguably the best option.

However, if a significant fraction of all types start requiring a time provider constructor argument, especially if it’s mostly a transient reference to pass along to another type’s constructor, then it’s not a good choice.
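Pieced together, a compilable version of the constructor-injection variant might look like this; the ITimeProvider interface, the constructor, and the Title property are assumptions on my part, since the snippets above elide them:

```csharp
using System;

public interface ITimeProvider
{
    DateTime Now { get; }
}

// Test double with a settable fixed time.
public class OneTimeOnlyProvider : ITimeProvider
{
    public DateTime Now { get; set; }
}

public class TodoItem
{
    private readonly ITimeProvider _timeProvider;
    private bool _complete;

    public TodoItem(string title, ITimeProvider timeProvider)
    {
        Title = title;
        _timeProvider = timeProvider;
    }

    public string Title { get; private set; }
    public DateTime? CompletedOn { get; private set; }

    public bool Complete
    {
        get { return _complete; }
        set
        {
            _complete = value;
            CompletedOn = value ? _timeProvider.Now : (DateTime?)null;
        }
    }
}
```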

Conclusions?

The "fuzzy logic" method is better than no test at all, but that’s about the best I can say for it. Moles can do things which no other method can, but that’s not required for this case. The Ambient Context method works fine here, but raises questions about how to apply it consistently. I can’t tell you which method is best to use; I’m still trying to work that out for myself! Your comments would be welcome.

Thanks to Mark Seemann for answering my questions on Twitter.


Would You Buy a Used Framework from This Tool?

I think the Web Platform Installer is a great tool, but I have to question the wisdom of its home page:

If you click on these, you see… nothing. A description would be nice. ("Application Request Routing? What’s that? EC-CUBE?")

But that’s not really the problem. The bigger problem is this: A "spotlighted installers" feature probably sounded great on the drawing board, but this tool is intended for public-facing web servers. It isn’t the App Store; public-facing web frameworks should not be impulse buys.

What should go here? I’m not certain. My first thought is security updates, but those should already come through Microsoft Update. Perhaps a link there, or a display of what is already installed, and whether it’s up to date?


Great CS Textbooks, Cheap

I’m probably late to this party, but I’ve discovered that you can find incredible deals on used CS textbooks at Amazon, especially for older editions.

For example, I recently ordered a copy of Programming Language Pragmatics, by Michael L. Scott. It’s $63 new for the hardcover or $43 on a Kindle. I got a used copy of the (somewhat older) second edition for $3 + postage, for a total of $7. True, I don’t get the new chapter on VMs, but I can live with that. The third edition totally dried up the market for the second, so there are really good deals available!

The book itself is good so far, though I haven’t read enough to give a full review. It covers territory substantially similar to the dragon book’s, with more breadth on programming languages and computer architecture, and less depth on compiler internals. They’re both worth reading.


An Excuse Not to Roll Your Own Authentication Scheme

The Rails 3.1 Release Candidate announcement contained news of many new and useful features, plus these regrettable words:

has_secure_password: Dead-simple BCrypt-based passwords. Now there’s no excuse not to roll your own authentication scheme.

I will briefly provide an excuse.

"Simple BCrypt-based passwords" is a reasonable feature, but shouldn’t be mistaken for end-to-end authentication, or even a substantial subset of that problem. Web site authentication in the real world is a far harder problem than salting and hashing a password — which BCrypt does OK, as far as I know. You can prove this to yourself merely by observing that many frameworks which have correctly implemented salting and hashing have nonetheless had their authentication systems compromised by other means.

Having (correctly) validated a username and password, most web authentication frameworks use an encrypted authentication token stored in a cookie (or some other place) for future requests. This way the client (the browser) doesn’t need to remember the password or repeatedly prompt the user for it. However, once the token has been issued, having a copy of it is as good as having the password (for a short period of time, anyway — they typically expire). That’s how tools like Firesheep work on un-secured networks. If you produce or handle this token incorrectly then you may as well not bother doing username and password authentication in the first place!

Remember last year when the ASP.NET membership provider used in large numbers of ASP.NET sites was discovered to be vulnerable to a padding oracle attack? ASP.NET, as far as anyone knows, is salting and hashing its passwords correctly. But that’s not enough to stop people from breaking into the system. The other links in the "security chain" have to be secure as well. In the case of ASP.NET, the server leaked information which was useful when attempting to break the encryption of the authentication token. This is sometimes called a side channel. Having succeeded in breaking the encryption, a client could then create a "fake" authentication token which the server would mistake for one it had issued itself. Hence, it was possible to break into a site without ever knowing the password. The authors of this exploit had formerly found similar vulnerabilities in JSF and other frameworks.

All of this adds up to old news: Security is hard. Even experts in the field make mistakes that other experts in the field overlook for years. Anyone can build a system which they themselves can’t break into. The best solution for developers of sites not specifically intended to prototype new and potentially vulnerable security systems is to use proven, off-the-shelf authentication systems, and to instead put their efforts into delivering value to their end users!


Why Won’t Visual Studio Step Into This Code?

I helped another developer debug an interesting problem this morning. Let’s see if you can spot the problem. The code in question looked something like this simplified version containing only enough code to show the problem:

public void Execute()
{
    DoStuff();        // breakpoint 1
}

public IEnumerable<Coordinate> DoStuff()
{
    LaunchMissiles(); // breakpoint 2
    // rest of method here
}

Note that the result of the function DoStuff is not used by Execute. That result actually exists only for testing purposes; it’s essentially a log we use to monitor changes the method makes to external state. The unit tests in question passed, so it was clear that DoStuff worked correctly, at least in a test context. The problem was that when the code ran outside of a test context (i.e., in the real application), the DoStuff method would never run. The debugger would stop at breakpoint 1, but not at breakpoint 2. Similarly, attempting to step into DoStuff would not actually go into the method body. If we debugged the unit tests, the debugger would stop at both breakpoints, and the method worked.

Can you spot the bug?

Perhaps it would help if I showed more of the method:

public IEnumerable<Coordinate> DoStuff()
{
    LaunchMissiles(); // breakpoint 2
    yield return CurrentCoordinates();
}

Now do you see the bug? Remember, the unit tests pass. There is no special knowledge about our application needed to see the problem here; all of the information required to spot the bug is in the code snippets above. The problem is a code bug, not a setup or configuration issue.

Perhaps it would help if I showed you a version of DoStuff which "works."

public IEnumerable<Coordinate> DoStuff()
{
    LaunchMissiles(); // breakpoint 2
    return new List<Coordinate> { CurrentCoordinates() };
}

With this version, both the unit tests and the "real" application work correctly.

The Solution

At first glance, this might seem puzzling. I’ve changed only the last line, and both of those versions appear to do almost exactly the same thing. Why is the behavior of the breakpoint at the previous line different?

The answer is that using yield return causes the C# compiler to transform the entire method, not just that single line: it rewrites the method body into a compiler-generated state machine class. Importantly, the iterator returned from the "yield return" method is entirely lazy; it will not run the method body at all until you attempt to iterate the result of the method. But Execute ignores this result, so the method body never runs at all.
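You can demonstrate this laziness with a trivial standalone iterator (a sketch of my own, not the application code above):

```csharp
using System;
using System.Collections.Generic;

static class LazyDemo
{
    public static bool SideEffectRan;

    // Stands in for DoStuff/LaunchMissiles: a side effect inside an iterator.
    public static IEnumerable<int> DoStuff()
    {
        SideEffectRan = true;
        yield return 42;
    }
}

class Program
{
    static void Main()
    {
        var result = LazyDemo.DoStuff();           // the body has NOT run yet
        Console.WriteLine(LazyDemo.SideEffectRan); // False

        foreach (var x in result) { }              // iterating runs the body
        Console.WriteLine(LazyDemo.SideEffectRan); // True
    }
}
```

Calling DoStuff only constructs the state machine; the side effect fires on iteration, which is exactly why the breakpoint was never hit when Execute discarded the result.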

Discussion

Some languages, like Haskell, go to great lengths to segregate expressions and side effects. C# isn’t one of them, but even so it’s common to try to improve quality by doing so. Eric Lippert, a member of the C# compiler team, once wrote:

I am philosophically opposed to providing [an IEnumerable<T>.ForEach() method], for two reasons.

The first reason is that doing so violates the functional programming principles that all the other sequence operators are based upon. Clearly the sole purpose of a call to this method is to cause side effects.

The purpose of an expression is to compute a value, not to cause a side effect. The purpose of a statement is to cause a side effect.

Clearly, side effects can change the value of an expression in mid-computation. This is problematic for debugging and quality, especially if some of the evaluations are lazy. But as this example demonstrates, the opposite is also true: adding expressions to a computation can change its side effects, too.


A Better View API for Grids in ASP.NET MVC

I’m writing a grid-independent interface for displaying data in ASP.NET MVC applications, and I would like your feedback on the API design.

In my last post, I discussed some of the problems with existing grid components for ASP.NET MVC. Actually, there are a couple more design issues which I forgot to mention in that post. I’ll discuss them briefly before talking about View design.

  • Many grids require two requests in order to display the first page of data: One for the page itself, then a second, AJAX request for the data. I can understand why the AJAX request is necessary for the second page of grid data, but shouldn’t the first page just come already populated? Making too many requests is one of the most common causes of performance problems in web applications.
  • Many grids require executing a JavaScript method in order to set up the component. Unfortunately, they often wait until the jQuery.ready event fires, resulting in unnecessary visual disruption and poor perceived performance as the page loads. It’s often unnecessary to wait for the ready event, so you can visibly improve the perceived performance of the page by simply calling that method sooner!
  • The two problems above compound each other. If you wait until the ready event fires to even issue a request for data, and then wait some more for that data to return, that can be a lot of waiting!

Displaying Models in MVC 2

In ASP.NET MVC 1, you had to write specific markup in each view, in order to display each individual property of the view model correctly.

In ASP.NET MVC 2, there is a standard way to render the entire model for your page, no matter what kind of data you plan to show:

<%: Html.DisplayForModel() %>

When you do this, MVC will look up the correct template for your model, or its sub-properties, based on its type, and render it appropriately. You can easily customize this with user controls, at any level of your application. You can read about the details on Brad Wilson’s blog.
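For example, given a view model like the following (a hypothetical model of my own, using the standard System.ComponentModel.DataAnnotations attributes), Html.DisplayForModel() would pick a date template for OrderedOn and apply the display names and format strings without any per-view markup:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Hypothetical view model; the metadata drives template selection.
public class OrderViewModel
{
    [Display(Name = "Order date")]
    [DataType(DataType.Date)]
    public DateTime OrderedOn { get; set; }

    [Display(Name = "Total")]
    [DisplayFormat(DataFormatString = "{0:C}")]
    public decimal Total { get; set; }
}
```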

(Unfortunately, MVC 2’s VS tooling does not use this pattern when it generates new views. I believe this is fixed in MVC 3.)

If you want to render only a certain property of the view model, rather than the entire model (perhaps you have other properties on the view model which control page layout, and are not intended for the final, rendered page itself) then you use:

<%: Html.DisplayFor(m => m.MyCoolGrid) %>

Only if you want to render a control with customizations which cannot be inferred from the metadata on your view model should it be necessary to specify a specific control:

<%: Html.Grid(Model.MyCoolGrid, new { @class = "importantGrid" } ) %>

A Better Grid View API

Oddly, the demo code for most grids supporting MVC seems to favor the third form over the first two. To me, the third form is the exception, and the first two should be the standard methods for most pages. So the "ideal grid API" for views is really no API at all; you simply use the built-in methods which are already favored by MVC for every other kind of model.

Namespace Issues

In a non-trivial MVC application, it is very easy to go overboard with extensions to HtmlHelper. Some components, like Telerik’s suite for ASP.NET MVC, fight this by adding a single extension method, and then putting their components into this "namespace":

<%: Html.Telerik().Grid(Model) %>

I think that this attempt to fight clutter is admirably intentioned, but probably unnecessary. If you’re only going to use Telerik’s controls on a single page, then you can easily Import that namespace on just that one page, instead of specifying it in web.config. If, on the other hand, you intend to use them in most pages in your application, then the extra .Telerik() call in the code above is just noise. Yes, existing code might have a Grid extension method for HtmlHelper, but it is unlikely to specify Telerik’s model as its first argument, so it should not actually conflict.

So, in my initial design, I’ve opted to not do this. However, I remain open to arguments as to why I should. There doesn’t seem to be any clear right or wrong.

Custom Options

Of course, sometimes you do need to use the third form, because the model supplied by the controller is not enough to totally specify the custom needs of a particular page. MVC’s convention for this appears to be to supply large numbers of different overloads for each control, representing every possible combination of options imaginable. (There are, for example, 10 different overloads of Html.ActionLink, causing no small amount of confusion to MVC developers.)

Instead of overloads, I decided to use a strongly-typed options object. This requires a few more keystrokes, but the resulting code is much more readable, and, as I noted above, even calling the Grid method directly at all is the exception rather than the rule.

Here, for example, is how you might choose to override default options for one specific grid:

<%: Html.Grid(Model, new GridViewOptions
                         {
                             Multiselect = true,
                             Pager = false
                         } ) %>

This is much easier to understand than:

<%: Html.Grid(Model, 18, false, "Foo!" ) %>

I can also change the rendered grid to something other than the default grid for the project:

<%: Html.Grid(Model, new GridViewOptions
                         {
                             Renderer = new JqGridRenderer()
                         } ) %>

In many cases, this will be the only code change necessary in order to display a different grid, despite the fact that, for example, plain HTML tables and JqGrids work very differently!

What’s Not There

Please note that there is no need to define grid columns, headers, alignment, etc., in the view. This can all be inferred from model metadata!

A Fluent Interface?

Another possible approach, which some people like, is to use a so-called "fluent" interface. Telerik does this for their grid. With this approach, my example above would look like:

<%= Html.Grid(Model).EnableMultiselect().NoPager() %>

Note carefully, however, that I had to change the "safe" <%: syntax introduced in .NET 4 to the old, "unsafe" <%= syntax from .NET 2. The code above only works because the .ToString() method on the returned type is overridden to return markup — as a string, not as an MvcHtmlString! The <%: syntax handles encoding strings automatically, and, in my opinion, the <%= syntax should not be used at all in .NET 4. A "beautiful" API which requires using features which make your site vulnerable to XSS attacks is not a step forward, in my opinion!

I can’t think of a way to make the "fluent" approach work with the .NET 4 syntax. The Grid method is going to return HTML markup, so it should not be encoded. However, the "fluent" interface requires returning a type containing the fluent methods, which will not be MvcHtmlString. So a call like this:

<%: Html.Grid(Model).EnableMultiselect().NoPager() %>

…would return encoded output (e.g., "&lt;table&gt;"); not what we want! And this:

<%: Html.Grid(Model).EnableMultiselect().NoPager().ToHtmlString() %>

…is just ugly! If the user forgot the .ToHtmlString() bit, they would get no error, but get encoded output at runtime. Yuck!

I don’t want to close the door to a "fluent" interface permanently, since some people like them, and those who do not like them would not need to use it, but I’m not going to write it unless I can find a way to do it well, and I haven’t been impressed by what I’ve seen in other projects. If you’re reading this and you’re aware of a technique for overcoming the issues I’ve described here, please enlighten me!

The Part with the JavaScript

Many grids require calling a JavaScript method, often with a big JavaScript object specifying the configuration of the grid. Now, one of the original tenets of ASP.NET MVC, from day one, was "control over every angle bracket in your page."

Unfortunately, this dictum has not always been followed when it comes to the generated JavaScript. MVC 2’s validation support, for example, injects its script into the middle of your page, and you cannot change this. (MVC 3, however, fixes this.)

I have seen commercial grid components for MVC which inject a giant $.ready event handler right into the middle of the rendered markup! I’m not going to name names, but before you pay for a commercial grid, look at the markup on their demo site! In fairness, there are also commercial grid components which handle this better.

Anyway, in addition to an HtmlHelper extension for the HTML markup, we also need an extension for including the necessary JavaScript (if any; a "plain HTML table" grid renderer might not require any JavaScript at all) references in the rendered page:

<%: Html.GridScriptTag() %>

What this method really means is, "If any of the grids I have included in the page thus far require JavaScript code, please put that script right here." Because this method has a fairly low overhead and emits no markup if you haven’t actually used any grids on the page, you can put it in your Site.Master, if you prefer. Importantly, however, you can also put it somewhere else. This allows you to optimize the point at which the JavaScript method is invoked, should you need to do that.

This method has an overload which takes a strongly-typed options object. You won’t typically need to use this, but you might have JavaScript which you would like to include if and only if there is at least one grid present on the page:

<%: Html.GridScriptTag(new ViewOptions
                           {
                               AfterGridInitializationJavaScript = SomeScript
                           }) %>

"Unobtrusive" Scripting?

MVC 3 has an "unobtrusive" client validation option. Would this even make sense for a grid? I haven’t figured this out yet, but I’m thinking about it.

How Am I Doing?

I’m afraid that’s quite a lot of discussion for what is, in the end, design concerns over the signatures of two methods. If you’re still reading this, thanks!

At the same time, however, I really want to get this right. The controller and view APIs of this library are the parts which I imagine developers will have to grapple with day in and day out. If the internals of the library aren’t pretty in every corner, well, nothing is perfect. But the controller and view APIs are extremely visible, so they deserve a lot of scrutiny!

What’s Next?

In the future, I’d like to similarly examine controller design. But first I have some questions about testing this stuff; it’s tricky! Oh, yes, and I’ll be publishing the code!


One (MVC) Grid to Rule Them All

Imagine you’re starting a new project using ASP.NET MVC. Let’s say it’s a project which frequently requires displaying a list of records, like Google or Stack Overflow or an enterprise database application. Which grid should you use?

The obvious answer is, "I don’t know. I’m just getting started. Does it really matter, right now?" Don’t you wish!

There are many grids available for ASP.NET MVC. If you’re prepared to dedicate your project to a single grid at the outset of your project, and never change it, nor support alternate platforms, like mobile, then you can (almost) be a happy developer. But if you think you might want to support mobile devices, tablets, and desktop browsers with the same application, if you acknowledge the possibility that you might want to change your mind about which grid you will use in the future, or if you care about separation of concerns, then you may have a problem.

Most grids don’t support ASP.NET MVC very well. In particular, they often:

  • Push presentation concerns into the controller. In my opinion, the specific grid you choose and the manner in which it is rendered (column headers, search features, paging, etc.) is a presentation concern, and belongs in the view portion of an application with MVC architecture. Controllers should be grid-agnostic, both in the datatypes they use and in the way you structure your actions.
  • Render JavaScript inline. For best page rendering performance, JavaScript should be included at the very end of the body tag.
  • Don’t support DataAnnotations and other features related to MVC 2’s templates. If I have a view model which is marked up with DataAnnotations attributes like [DataType(DataType.Date)], then I should not have to do anything further in order to get a grid to display correctly.
  • Require too much code in too many places to get a decent grid on the page.
  • Don’t support ASP.NET MVC at all. SlickGrid, for example, is a fairly popular JavaScript-based grid which suffers from being minimally documented and does not ship with an MVC interface. This is understandable, because writing a rich integration with ASP.NET MVC is a fair amount of work! What if there were an easier way…?

A while back, I published some simple examples of how to integrate jqGrid with ASP.NET MVC. I’ve used this general technique in real-world projects, but the lack of support for DataAnnotations/templated views in my code was becoming a maintenance issue; the technique I demonstrated there required writing too much JavaScript. I decided to go back and add support for MVC 2 templated views and generate the JavaScript. As I did so, however, I quickly realized that there was very little jqGrid-specific code in my project. I partitioned this off into its own namespace and added a feature allowing the user to specify a grid renderer at runtime. Mixing in fixes for all of the issues above, I now have the skeleton of a generic grid interface in place.

To be clear, I am not writing a new grid. Instead, I’ve written an interface which supports multiple grids, including jqGrid and even plain HTML tables, without requiring special, grid-specific code strewn throughout your application.

I’m releasing this as open source. Actually, it’s already out, but it’s not quite public yet. I have a few more pieces I need to put in place first, like a demo application.

In the meantime, I’d like to get some feedback on the API. My opinion is that most grids which have an MVC integration at all require the programmer to do far too much work in order to get a decently-formatted grid on the page. I admire the emphasis on API beauty which I see in the Rails community.

So I’ll be "releasing" this a little early. It’s not really ready for production use yet. Although the code is solid, I am still making fairly significant changes to the API. I want this to look right, and I’d appreciate the help of anyone reading this blog.

Or, continue on to the next post in this series, examining the API for views.

