Wednesday, October 21, 2009

Visual Studio 2010 Beta 2

If you haven't yet heard, Microsoft has released beta 2 of Visual Studio 2010. It’s currently available to MSDN subscribers and will be available to the general public sometime today. The final release of VS2010 is currently scheduled for March 22nd, 2010. You can get more information and watch for the public download at MSDN:

Microsoft has also released a Visual Studio 2010 training kit which contains presentations, hands-on labs and demos to get you up to speed with this new release.

Sunday, July 12, 2009

VS 2010 Generate From Usage

Microsoft has been adding more and more features to Visual Studio to encourage Test Driven Development. One of the new features in VS2010 for TDD is “Generate From Usage.”
One of the tenets of true test driven development is that you should write your unit tests before you write your code. One tricky part of this in Visual Studio is that you lose the help of IntelliSense, and sometimes it will even get in your way. To get around this problem, developers will often stub out their classes and methods first and then write their tests. Generate From Usage in VS2010 provides another way to do this.
To try this out we will create a very simple math class called MyMath that has one method called Add(n1, n2) that adds two numbers together and returns the result. First create a solution with a new C# Test Project and a C# Class Library project. In the TestMethod1 method of the UnitTest1 class, type the following code (type it instead of cutting and pasting to see the full effect of this feature):
MyMath math = new MyMath();
Assert.AreEqual(3,math.Add(1, 2));
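For context, the surrounding test class would look something like this. This is a sketch based on the defaults a C# Test Project generates (the namespace TestProject1 is an assumption):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace TestProject1
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            // MyMath doesn't exist yet; we'll generate it from this usage
            MyMath math = new MyMath();
            Assert.AreEqual(3, math.Add(1, 2));
        }
    }
}
```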
The first thing you will notice is that when you type “new” and press the space bar, IntelliSense will pop up and you will see the MyMath class on the list even though we haven’t defined it yet. The editor figures out that you intend to create a class called MyMath, so it puts it on the list.
As expected, you will get red squiggles under MyMath since it doesn’t exist yet. At this point you would normally go and manually stub out the class, but Generate From Usage makes this easier. If you right-click on MyMath and select Generate/Class, Visual Studio will automatically create a stub for the MyMath class. By default it will put the new class in the test project, which is probably not where you want it. To get around this, instead of clicking Generate/Class, use Generate/Other…, which will pop up this dialog box:

This gives you more control over the generation of the code. In the Location section you can change the project so that the new file goes into the Class Library project. Now if you build the solution, the squiggles under MyMath will go away and the Add function will be flagged instead.
Now you can right-click on Add and select Generate/Method Stub, which will generate a method stub in the MyMath class. In this case there is no Other option since there really aren’t any other choices to make: to be usable the method must be in the MyMath class, must be public and must be called Add.
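The result should look roughly like this. Generate From Usage fills method bodies with a NotImplementedException placeholder; the exact namespace and parameter names it infers may differ:

```csharp
using System;

namespace ClassLibrary1
{
    public class MyMath
    {
        // The return type and parameter types are inferred from the test code
        public int Add(int n1, int n2)
        {
            throw new NotImplementedException();
        }
    }
}
```

From here you replace the placeholder with a real implementation and run the test again.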
There is one tricky thing with this feature that I will demonstrate with another example. Let’s say we want to create a new class called Host. In TestMethod1, type Host and press the space bar. When you do this, IntelliSense will try to be helpful. Since there isn’t a class called Host in the default namespaces, it will put in something that is there: HostTypeAttribute. This is not what we wanted. To get around this there is a new feature in IntelliSense called Consume-First Mode. When the IntelliSense box pops up, press Ctrl+Alt+Space to toggle to consume-first mode. In this mode, instead of forcing you onto a class that already exists, IntelliSense will just accept what you typed when you press the space bar. Once this mode is turned on it will stay on until you press Ctrl+Alt+Space again while an IntelliSense window is open.

Sunday, June 28, 2009

IntelliSense Transparency

While I was playing with Visual Studio 2010 I ran across a cool little feature that I assumed was new to the VS2010 editor but turns out to also be available in VS2008. When an IntelliSense window opens there are times when you need to quickly see something that is underneath it. To do this all you have to do is press and hold the CTRL key. This will cause the window to become transparent until you release the key again:


Monday, June 1, 2009

Visual Studio 2010 Editor

One of the biggest changes in Visual Studio 2010 is the text editor. When you first fire it up you will definitely notice a lot of small visual changes to it but the biggest changes are under the hood. The editor has been re-written in C# using WPF for the presentation layer, and Microsoft’s new Managed Extensibility Framework (MEF) to make the editor much more extensible.

Here are a couple of the nice new features found in the editor. Visual Studio has always had the little plus and minus boxes on the left side of the editor which allow you to expand and collapse blocks of code. In VS 2010 there is an addition to this: if you hover over one of the minus boxes, VS will highlight all the code in that block.


Another similar feature works with identifiers like variable names. If you hover your mouse over a variable in a class or function, VS will automatically highlight all the other instances of that identifier. This is a handy way to quickly see where your variable is used.


Finally, VS 2010 has a built-in code zooming feature. If you hold the CTRL key and roll the mouse wheel you can increase and decrease the font size in the code window. This is not a feature you are likely to use in your day-to-day work, but it is very handy when you are doing technical presentations and the audience is having a hard time seeing the code.

None of these features is earth-shattering, but I think the real promise of the new editor will come from its extensibility model. Once third-party developers start taking advantage of this I think we will see a lot of very cool add-ons to the editor.

If you want to learn more about the editor check out the Hanselminutes podcast #147 where Scott interviews Noah Richards who is one of the developers of the new editor.

Saturday, May 23, 2009

Visual Studio 2010 Beta

Microsoft has just released the beta of Visual Studio 2010 to the general public. You can download it and get more information here. Note that the first install link on this page is for the web installer, but you can click on “See More Download Options” to download the ISO version.

I’ve installed it without any problems on both Windows 7 RC1 and Windows XP. If you install on Windows 7 there is a compatibility problem with the version of SQL Server that installs with Visual Studio, but Microsoft provides a workaround.

So far I have been pretty happy with the beta. The changes from 2008 are not dramatic, but there are definitely some nice new features. The biggest changes you will notice are in the code editor, which has been completely re-written. There have also been a few small language changes to C# and VB.NET, but nowhere near as many as were introduced in 2008.

2010 provides out-of-the-box support for things like Silverlight and the F# language. These are available in 2008 but have to be installed separately. You may notice that the ASP.NET MVC Framework is NOT included in this beta. According to Phil Haack, the release of MVC came too late to be included in Beta 1, but it will be included in the next beta.

As I work with 2010 some more I will be posting about some of the new features.

Friday, May 22, 2009


For those of you in the South Jersey area, I will be doing a presentation at the PhillyNJ.NET meeting on Thursday, May 28th at 6:00PM. The presentation will be an introduction to the Microsoft ASP.NET MVC Framework. PhillyNJ.NET is a sub-chapter of the Philly.NET user group and usually meets at Greenwich Township Public Library in Gibbstown, NJ. You can get more information on the PhillyNJ.NET web site.

Saturday, April 11, 2009

ASP.NET MVC Framework

Microsoft has finally made available the 1.0 release of the ASP.NET MVC Framework. The framework provides a new way of developing ASP.NET web applications that is based around the classic Model View Controller design pattern. In an MVC app instead of each page being a separate file there are three components that work together to deliver a web page to the user:

Model – The model is the class or classes that represent the data that will be displayed on the page. This may be a group of in memory objects, a class that accesses an XML file, a database access layer, etc. The framework doesn’t put any restrictions on what the model can be.

View – This is the file that generates the output that is sent to the browser. Views are HTML files that can also contain scripts to embed dynamic data in the output. These are very similar to .ASPX files used in ASP.NET web forms. The view only works with data that is passed to it by the controller.

Controller – The controller class is responsible for handling input from the user, updating the model, extracting data from the model needed to build a view and finally deciding what view should be displayed. The controller doesn’t have any direct control over how that data will be presented, that will be handled by the view.
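As a concrete sketch, a minimal ASP.NET MVC 1.0 controller might look like this. The ProductController name and the hard-coded model are made up for illustration; a real app would pull the model from a repository or database layer:

```csharp
using System.Web.Mvc;

public class ProductController : Controller
{
    // Handles a request like /Product/Details/5
    public ActionResult Details(int id)
    {
        // Build (or fetch) the model data the view needs...
        var product = new { Id = id, Name = "Sample product" };

        // ...then hand it off; the view, not the controller, renders the HTML
        return View(product);
    }
}
```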

It’s important to note that the MVC Framework is not layered on top of the existing Web Forms model, but instead is a totally different way of developing ASP.NET apps. You do lose a lot of things that you may have gotten used to in web forms. In MVC there is no viewstate, no event handlers, and you can only make limited use of server controls. Giving up these things has its upside; it makes the page processing cycle much simpler and thus much faster. It also makes it much easier to get full control of the HTML output.

The biggest advantage to using MVC is the separation of concerns that it promotes. By having a clear separation between display, data and control, it becomes easier to maintain large applications and also much easier to perform automated unit testing.

I have been doing some work with the MVC Framework and although there is a learning curve it wasn’t too hard to get up to speed with it. I don’t think MVC will totally replace Web Forms, but I think it is very well suited to certain types of applications. As you start developing new ASP.NET apps you will want to look at the pros and cons of MVC and Web Forms and decide which would be more appropriate for a given application.

If you want to give it a try, you can download the MVC Framework from Microsoft. You will need to be running Visual Studio 2008 or Visual Web Developer 2008 SP1 to use it.

Friday, April 3, 2009

Verbatim Literal Strings

I am filing this one under “you learn something new every day”. This is a little feature of the C# language that I only recently learned about.

As you probably know when you put literal strings in a C# program certain character sequences are interpreted as escape characters.  For example if you do this:

it will output


because \n is interpreted as a newline. What if we actually wanted the characters \n to be displayed? You could do this:

This works fine for simple short strings, but starts to get a little ugly in more complex strings especially when you are dealing with regular expressions or URLs.  Here is an alternative syntax that does the same thing:

The ‘@’ character tells the compiler to take the string verbatim and not to interpret any escape sequences. This syntax also allows you to do things like this:

If you didn’t use a verbatim string definition here, this would cause a syntax error. In this case it will actually put each piece of text on a different line.
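Putting the variations described above together in one runnable sketch (the example strings are my own):

```csharp
using System;

class Program
{
    static void Main()
    {
        // \n is interpreted as a newline, so this prints on two lines
        Console.WriteLine("Hello\nWorld");

        // Escaping the backslash displays the literal characters \n
        Console.WriteLine("Hello\\nWorld");

        // A verbatim string does the same thing with less noise
        Console.WriteLine(@"Hello\nWorld");

        // Verbatim strings can also span multiple lines in the source,
        // and each piece of text ends up on its own line in the output
        Console.WriteLine(@"Line one
Line two
Line three");
    }
}
```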

Wednesday, March 18, 2009

Patterns & Practices Application Architecture Guide 2.0

Microsoft has just released the most recent update to their Application Architecture Guide. I got a copy of the first edition of this book quite a few years ago and it has remained close at hand on my bookshelf since then. This version is a much needed update to the guide.

The purpose of this book is to provide guidance on how to do good application design. Although its primary focus is on designing Microsoft .NET apps, it contains a lot of material that is applicable to any development platform.

The book is divided up into four parts. The first part is called Fundamentals. It starts with a discussion of the basic concepts of application architecture, provides a high level overview of the .NET platform, and finally provides a fairly large collection of design guidelines.

The second part, called Design, walks you through the process of how to design an application. It goes through the things you will have to decide on during the design process and provides guidance on how to make these decisions.

Part three, Layers, discusses the traditional layers that are found in an application. It provides information on how to design presentation, business logic, data access and services layers and  discusses how to map various Microsoft technologies to these layers.

Finally part four, Archetypes, goes over each major type of application you may have to design and talks about the design considerations as they relate to each type of app. In this part you will find information on designing web apps, rich Internet apps, mobile apps, etc.

The appendix of the book contains some very useful technology “cheat sheets” for Data, Integration, Presentation, and Workflow. In each section you will find information on the related .NET technologies along with guidance to help in deciding which technologies are best suited to your specific design scenario.

You can download the guide for free from Microsoft.

You can also hear a talk with Rob Boucher, one of the authors of the guide, on .NET Rocks podcast episode 426.

Sunday, March 15, 2009

Changes coming to SDS

There was an announcement this week from the SDS development team at Microsoft that there will be a change in direction for SDS. Instead of the schema-less database I have been talking about over the last few weeks, it will be moving towards full-blown SQL Server hosted in the cloud. The big advantage is that it will make it much easier to migrate existing database apps to the Azure platform.

This decision also makes sense when you consider that a lot of the features of the existing SDS are already available as part of other Microsoft data access technologies. For example, Azure already has a schema-less database technology in the form of Azure Table Storage. You can also get a REST-based database interface by using ADO.NET Data Services.

If you are getting confused by all the different data access technologies Microsoft is providing, you are not alone. I really should do a post that describes the different technologies that are out there.

Saturday, February 28, 2009

Querying SDS - Sorting

This is the fifth part of a series of posts about SQL Data Services (SDS). Last time I showed the basics of how to query SDS; now I will show some more query options. Let’s start with the basic query from last time that returns all the entities in a container:

from e in entities select e

By default the entities will be sorted by the Id property. We can sort by other properties like this

from e in entities orderby e["DateDue"] select e

Note that, just like in the where clause, flexible properties in the orderby clause use the e[“DateDue”] syntax but metadata properties use the e.Version syntax. By default the sort is done in ascending order. You can sort in descending order like this:

from e in entities orderby e["DateDue"] descending select e

You can also sort by multiple properties

from e in entities orderby e["Completed"],e["DateDue"] select e

As I have mentioned before, SDS entities are schema-less, so there could be entities in a container that don’t have one of the properties you are sorting on. In these cases the entities will still appear in the output. I have not seen any official documentation that explains how these are handled, but it appears they are treated as having a null value and are sorted to the top of the list.

Sunday, February 8, 2009

Querying SDS

In my last post I showed how to put data into an SDS database; now we will look at how to get it out. As I mentioned at the end of the last post, every entity in SDS has a unique address that can be used to retrieve it directly. For example, if I put in the address of the entity we created last time and click Get, it will retrieve that entity:

This is good for retrieving a single entity, but if we want to find a group of entities based on some parameters we need to use queries. First let’s create two more entities so we have something to query. Change the address so it points back at the Tasks container, then enter each of these entities and click Post.

After creating the entities change the address back to the container once again.

Let’s start with the simplest query, enter this in the query box and click Query:

from e in entities select e

This will return an EntitySet with all three entities in our Tasks container. This query has no conditions so it will return all the entities in a container.

Earlier I showed you how to retrieve a single entity by doing a Get on its address, but you can also retrieve an entity using a query like this:

from e in entities where e.Id=="T1001" select e

Here we have added one condition, e.Id==”T1001” which simply means to retrieve all entities where the Id property is T1001. Since each Id is unique this will return a single entity.

What if we wanted to query for all tasks that have not been completed:

from e in entities where e["Completed"]==false select e

You will notice a difference in the syntax for this condition. Instead of e.Completed, we used e[“Completed”]. The e.Id syntax we used last time is only used for metadata properties. When you query flexible properties you have to use the e[“Completed”] syntax.

What if we made a mistake in the query and did this instead:

from e in entities where e["Complete"]==false select e

In a traditional database this would throw an exception since the field Complete doesn’t exist in the database, but since SDS is schema-less this will not throw an error; it just won’t return any entities.

We are not limited to just specifying one condition, here we query for all incomplete tasks that are due after 2/12/2009:

from e in entities where e["Completed"]==false && e["DateDue"] >DateTime("2009-02-12") select e

We use the logical operator “&&” to require that both conditions are met. Also notice that when we compare to the literal date value we must use the DateTime(“”) function; if you just compared to “2009-02-12” it would not work.

That’s the basics of querying SDS. I will talk about some other query topics in my next post.

Tuesday, February 3, 2009

Working with SDS

In my last two posts I showed you how to get set up to use SQL Data Services, and I described the SDS data model. Now it’s time to start actually working with SDS by using the SSDS Explorer tool.

SDS can be accessed in two ways: using a REST-based protocol, or using SOAP. The SDS Explorer tool uses the REST method. At the top of the screen you will see the address (URI) that requests will be sent to, and the buttons at the bottom represent the various actions that can be executed.

The first step will be to create an authority to work in. If you click the authority button a template for a new authority will be inserted into the text editor. It will look like this:

To create the authority you will first need to put an ID for the authority in the <s:Id> tag. A couple of notes on creating the ID. First, it must be globally unique across the entire SDS system, meaning that no two people can create the same authority ID. You may want to prefix your authority ID with the application name you created when you signed up for SDS. The ID also must contain only lower-case letters, numbers and dashes. Finally, once you create an authority you currently cannot delete it; I assume this will change before the final release of SDS. Once you have entered an ID, click the Post button. If everything is working OK you should get a green check mark next to the action buttons and you should receive no errors.

Once you have created the authority, the address will automatically change to contain your authority ID. For example, if your authority is called ‘testsds’ you will see this:

If you now click Get you will see some information about the authority. Towards the top you will see the ID you just created in the <s:Id> tag. Below this you will see various statistics about the authority, which we won’t get into here.

Now that we have set up an authority, we can create a container inside of it. Click on the Container button to get a template for adding a container. Just like with the authority, you need to give the container an ID. Container IDs can contain both upper- and lower-case letters and they ARE case sensitive, so ‘Tasks’ would be a different container from ‘tasks’. Enter the ID within the <s:Id> tags and press the Post button to create the container. Here is the code for creating a container called ‘Tasks’:

Once again the Address will change to include the container ID, it will look something like this:

Let’s take the “Tasks” off the end of the address so we go back to the authority level, and then click Query; this will query for the contents of the authority. The result will look something like this:

Here you can see all the containers inside the authority. In this case there is only the Tasks container we just created. Change the address back so we are working with the Tasks container once again.

We have created an authority and a container; now we can add an entity to the container. Let’s create a simple entity to hold a task like you would have in a to-do list application. Here is the code for creating the entity:

The first thing we have to fill out in the entity template is the ID; remember that the ID must be unique within the container. We then have four flexible properties: one string property called Message, two dateTime properties called DateAdded and DateDue, and finally a boolean property called Completed. You can see that for each property we specify the name of the property using the tag name, and the data type in the xsi:type attribute. Within the tags we put the actual data. Once you have entered this, click Post to add it to the database.
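Based on the description above, the entity XML would look roughly like this. The Id (T1000) comes from the post itself, but the property values are made up, and the namespace declarations are approximations; use whatever the Entity template button actually inserts:

```xml
<Task xmlns:s="http://schemas.microsoft.com/sitka/2008/03/"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:x="http://www.w3.org/2001/XMLSchema">
  <s:Id>T1000</s:Id>
  <Message xsi:type="x:string">Write the next blog post</Message>
  <DateAdded xsi:type="x:dateTime">2009-02-01T00:00:00</DateAdded>
  <DateDue xsi:type="x:dateTime">2009-02-10T00:00:00</DateDue>
  <Completed xsi:type="x:boolean">false</Completed>
</Task>
```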

As always, the address will change again, this time to include the ID of the entity. You will notice that everything in the database (authorities, containers and entities) has its own unique address. So if we use that address and then click Get, we will retrieve entity T1000 from the container Tasks in authority testsds.

That covers the basics of how to get data into SDS, next time I will talk about how to query the data.

Saturday, January 24, 2009

SDS Data Model

SQL Data Services is often referred to as “SQL Server in the cloud”. Despite this designation, it actually operates quite a bit differently than a traditional relational database. In this post I will describe the SDS data model. There are three parts to the data model: Authorities, Containers and Entities.

Authorities are the highest level of organization in SDS. An authority corresponds to a specific SQL Server instance in one of Microsoft’s data centers. The authority name will be the first part of the DNS address used to access SDS.

The next lower level of the hierarchy is a container. Depending on how it’s used a container is akin to either an entire database, or a single table in a database. We will talk more about that when we get to Entities. You can have multiple containers in a single authority, and containers can contain zero or more entities, but cannot contain other containers. At present queries are restricted to a single container, you cannot query across multiple containers.

The lowest level in the hierarchy is the entity where your actual data is stored. Entities are akin to records in a traditional database, but unlike traditional databases SDS doesn’t use schemas. Each entity you create in a container can potentially have a different set of fields (called properties in SDS). If you choose to have every entity in a container have the same properties, then the container behaves like a table, but if you mix different entities in a container then it’s behaving more like a complete database. This is one of the areas where SDS diverges quite a bit from the operation of a traditional relational database.

There are two types of entities in SDS, non-blob which you will use most often, and blob entities used to store binary objects. We will just talk about non-blob entities here. Each entity contains a series of properties of which there are two types, metadata properties and flexible properties.

There are three predefined metadata properties. The first is Id, which must contain a unique value for each entity in a container. This is a string value that can have up to 64 characters. The second property is a numeric value called Version. Version is automatically assigned by the server when the entity is created, and a new version number is assigned each time the entity is updated. Version can be used to handle optimistic concurrency. The final metadata property is Kind. Kind is an optional string value that can be used to identify the type of each entity. For example, if you were building an order entry system you could have kinds like “Invoice”, “Sales Order”, etc.

Finally, an entity can have zero or more flexible properties. These properties contain the actual data that you want to store in the entity. Each property has one of the following data types: string, binary, boolean, decimal or dateTime.

In my next post we will start working with the SSDS Explorer tool.

Monday, January 19, 2009

SQL Data Services Getting Started

In my last post I talked about Windows Azure. One of the components of Azure is SQL Data Services (SDS), formerly known as SQL Server Data Services (SSDS), which is Microsoft’s “database in the cloud”. If you want to start learning about Azure, SDS is a good place to start since you can use it without having to set up a full Azure development environment. In this posting I will discuss how to get set up to work with SDS.

The first step is to sign up for Azure Services Platform invitation codes, which you can do on the Azure sign-up page. Microsoft is trying to regulate how many developers get on the service, so you may not be able to access all the parts of Azure immediately. Once you have applied you will receive a series of e-mails with the codes. The e-mails come in pairs: the first one gives you the code and the second lets you know that it has been activated. There are three different codes and you probably won’t get them all at the same time. The one needed to access SDS is the “Microsoft .NET Services and Microsoft SQL Services” code. I received my code for this service within 24 hours of signing up, but it may take longer. At the time of this writing I haven’t received codes for any of the other services.

Once you receive the code you will then have to sign up for the actual service. In the activation confirmation e-mail there will be a link to the page where you can enter the invitation code. At this point you will be asked to create a new solution. You just have to provide a name for the solution which will also become the username for logging into the service. You can only have one solution per invitation code. Once you have created the solution you will be provided a password for that solution.

The final step in setting up for SDS is to download and install the SDS SDK. Unlike the Azure SDK which requires Vista or Server 2008, the SDS SDK works under XP and even Windows 2000.

Once you have the SDK installed you can test things out. In the SDK folder on the Start Menu you will find a tool called SSDS Explorer. When you start the tool you should see the SDS service address in the address bar and “from e in entities select e” in the query box. Click the Query button and a box will pop up allowing you to enter your username and password. The username is the name of the solution you created and the password is the one you received when you created the solution. The query should run without returning any errors.

Now we are finally ready to start working with SDS. I will start getting into the details of how SDS works in my next posting.

Sunday, January 18, 2009

Windows Azure

I have recently started working with the Microsoft Windows Azure service. Announced at last year’s PDC, Azure is Microsoft’s “cloud computing” platform, akin to Amazon’s Elastic Compute Cloud. Windows Azure will run in Microsoft data centers and provide a hosting platform to run applications either completely in the cloud, or as services that interface with on-premise applications.

The big advantage this will provide over traditional web hosting is the ability to scale on demand. At a moment’s notice you will easily be able to scale a service from running on one server to running on ten (at a price, of course) and then scale back to one when you no longer need the extra capacity. This will be very useful for businesses that have cyclic capacity needs, like a flower shop that needs considerably more capacity around Valentine’s Day than at other times of the year. It’s also great for independent developers, allowing them to quickly and cheaply set up a new application and then easily scale it as new customers come along and demand increases.

For .NET developers Azure supports the familiar .NET languages like C# and VB.NET (although Microsoft has indicated that support for languages like PHP may come in the future) and provides tools that integrate into Visual Studio. All of this makes for an easier learning curve for existing .NET developers.

I will be posting more information on Azure as I get more familiar with the platform.

You can get more information on Azure on Microsoft’s Azure Services Platform page.

Saturday, January 17, 2009

C# Type Inference

C# 3.0 introduced a new language concept called Type Inference. Here is an example:

var n = 3;

Instead of specifying a type in this variable declaration the keyword var is used. When this line is compiled the compiler will determine the appropriate data type for the variable based on the initial value assigned to it. In this case ‘n’ will be an Int32. I think ‘var’ was an unfortunate choice for this keyword since it brings to mind Variants from VB6. Variants changed type based on what was assigned to them, but this is not the case with var. Variables declared with var are strongly typed but the compiler determines the type, not the programmer. The following code will not compile:

var n = 3;

n = “test”;

This will produce the error “Cannot implicitly convert type 'string' to 'int'”.

You can also use var to declare arrays like this:

var nums = new[] {0, 1, 2};

This will result in an array of Int32. Note that when arrays are declared every element has to be of the same type. For example this line will produce a compiler error:

var nums = new[] { 0, "test", 2 };

There are a couple limitations to using var.

- var can only be used for local variables and cannot be used at the class level.

- The variable has to be initialized in the same line as it is declared. You cannot do:

var n;

n = 1;

-You cannot define multiple variables at one time using var. This is not legal:

var n = 3, x = 2;

When you look at this feature and some of the other new features introduced in C# 3.0, they may appear to be of limited use and pretty random. Although these features do have their uses, their main purpose is to support a major new feature in 3.0: Language Integrated Query (LINQ). I will talk about this in a later blog post.
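As a quick taste of why var matters there, a LINQ query can project into an anonymous type, which has no name you could write in a declaration, so var is the only way to hold the result. A small sketch:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] nums = { 1, 2, 3, 4, 5 };

        // The select clause creates an anonymous type, so 'results'
        // can only be declared with var
        var results = from n in nums
                      where n % 2 == 0
                      select new { Value = n, Square = n * n };

        foreach (var r in results)
            Console.WriteLine("{0} squared is {1}", r.Value, r.Square);
    }
}
```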

Friday, January 16, 2009

Programming Visual Basic applications?

Typemock has released a new version of their unit testing tool, Typemock Isolator 5.2. This version includes a new friendly VB.NET API which makes Isolator the best isolation tool for unit testing a Visual Basic (VB) .NET application.
Isolator now allows unit testing in VB or C# for many ‘hard to test’ technologies such as SharePoint, ASP.NET MVC, WPF, LINQ, WF, Entity Framework and WCF, with partial support for Silverlight, and more.

Note that the first 25 bloggers who post this text on their blog and tell us about it will get a free full Isolator license (C#, VB, and SharePoint included, worth $139!). If you post this on a VB.NET-dedicated blog, you'll get a license automatically (even if more than 25 submit) during the first week of this announcement.

Go ahead, click the following link for more information on how to get your free license.

Sunday, January 11, 2009

Tuesday, January 6, 2009

SetValue and GetValue

Occasionally you will run into a situation where you want to reference a property of an object by its name instead of accessing the property directly. Using the SetValue and GetValue functions you can access a property whose name is stored in a string. Let’s start with a simple object to demonstrate this:

public class Test
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}
Next you will need to import the Reflection namespace:

using System.Reflection;

Finally, here is the code that will write and then read back the Name property:

Test testObj = new Test();
PropertyInfo pi = testObj.GetType().GetProperty("Name");
pi.SetValue(testObj, "Dan", null);
Console.WriteLine(pi.GetValue(testObj, null));

We start out by creating an instance of the Test object and then use the GetProperty function to get the PropertyInfo for the property “Name”. PropertyInfo gives you access to various information about a property and also provides the GetValue and SetValue functions. If the property can’t be found, GetProperty returns null, so if there is any chance the property name you are trying to access does not exist, you should check whether pi is null and handle that error appropriately.
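A defensive version of that lookup might look like the sketch below. The misspelled property name is contrived purely to force the null case:

```csharp
using System;
using System.Reflection;

public class Test
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

class Program
{
    static void Main()
    {
        Test testObj = new Test();
        string propName = "Naem"; // misspelled on purpose

        // GetProperty returns null rather than throwing when the
        // property does not exist, so check before using the result.
        PropertyInfo pi = testObj.GetType().GetProperty(propName);
        if (pi == null)
        {
            Console.WriteLine("Property '{0}' not found on {1}",
                propName, testObj.GetType().Name);
            return;
        }

        pi.SetValue(testObj, "Dan", null);
    }
}
```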

Now that we have the PropertyInfo for the Name property we can write to it with SetValue. SetValue takes three parameters. The first is the object whose property we want to set, in this case the test object. The second parameter is the value we want to set the property to. The third is used for indexed properties, which we will talk about next; for this example we set it to null.
Reading the value back is just as easy; we just call the GetValue function. Like SetValue, GetValue takes the object to get the property from as the first parameter and the index as the second, again null in this example.

As I mentioned you can also use SetValue/GetValue to access an indexed property. Here is another sample object to work with:

public class Test
{
    private int[] num = new int[5];

    public int this[int index]
    {
        get { return num[index]; }
        set { num[index] = value; }
    }
}
The code to read and write this property would look like this:

pi = testObj.GetType().GetProperty("Item");
pi.SetValue(testObj, 1, new object[] { (int)1 });
Console.WriteLine (pi.GetValue(testObj, new object[] { (int)1 }));

There are a couple of differences to note here. First, in C# indexed properties are declared with the this keyword rather than a name, which is why the test object declares the property as ‘this’. By default the compiler exposes an indexer under the name ‘Item’, so that is what we pass to the GetProperty function. If you want, you can change the name of this property by putting the following attribute before the indexer declaration:

[IndexerName("IndexedInstanceProperty")]

This will change the name of the property from Item to IndexedInstanceProperty. Note that you will have to import the System.Runtime.CompilerServices namespace to use this attribute.
The second difference is the index parameter passed to the SetValue and GetValue functions. This parameter is an array of objects. In this example we are using a constant index of 1, but it takes a little extra code to turn that single value into an array.
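Putting the pieces together, reading and writing several slots of the indexer in a loop might look like this minimal sketch, reusing the indexed Test class from above:

```csharp
using System;
using System.Reflection;

public class Test
{
    private int[] num = new int[5];

    public int this[int index]
    {
        get { return num[index]; }
        set { num[index] = value; }
    }
}

class Program
{
    static void Main()
    {
        Test testObj = new Test();
        PropertyInfo pi = testObj.GetType().GetProperty("Item");

        // Store the square of each index through the indexer...
        for (int i = 0; i < 5; i++)
            pi.SetValue(testObj, i * i, new object[] { i });

        // ...and read the values back the same way.
        for (int i = 0; i < 5; i++)
            Console.WriteLine(pi.GetValue(testObj, new object[] { i }));
        // prints 0, 1, 4, 9, 16 on separate lines
    }
}
```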

You are probably not going to use SetValue and GetValue a lot but there are definitely situations where it really comes in handy.

Friday, January 2, 2009

Unused Local Variable

I recently ran into an interesting quirk (not sure if I even want to call it that) in Visual Studio. Here is a piece of code that demonstrates the quirk:

Sub Main()
    Dim a As Integer
    a = 1

    Exit Sub

    Dim b As Integer
    b = 1
End Sub

If you paste this into Visual Studio you will notice that you get a squiggle under the ‘b’ variable with a warning that says “Unused Local Variable.” I first encountered this in a much larger function I was developing, and it drove me crazy for a while until I realized that it was the Exit Sub earlier in the code that was causing it. Obviously you would never do this in production code, but I had inserted the Exit Sub for debugging purposes.
This quirk was interesting to me because VS was obviously smart enough to know that b = 1 would never be executed, but didn’t take into account the fact that b would never even be declared. You can get some more insight into this by looking at the IL disassembly for the code:
.method public static void  Main() cil managed
{
  .custom instance void [mscorlib]System.STAThreadAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       3 (0x3)
  .maxstack  1
  .locals init ([0] int32 a,
           [1] int32 b)
  IL_0000:  ldc.i4.1
  IL_0001:  stloc.0
  IL_0002:  ret
} // end of method Module1::Main

You can see in the IL that both local variables a and b are declared in the .locals init directive before the start of the code. You will also notice that nothing after the Exit Sub gets compiled; the assignment of b does not show up in the IL. So the behavior we see in VS is consistent with what we see in the IL: b is declared but never used.
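For comparison, the C# compiler behaves in a similar spirit. The following sketch is my own analogue of the VB code above, not something from the original post: the compiler flags the trailing statements with the “Unreachable code detected” warning (CS0162) but the program still compiles and runs. Whether b still gets a slot in the .locals directive depends on the compiler version and debug settings, so I won't claim the IL comes out identical to the VB case:

```csharp
using System;

class Program
{
    static void Main()
    {
        int a = 1;
        Console.WriteLine(a);
        return;

        // Everything below triggers warning CS0162 (unreachable code)
        // and is never executed; whether 'b' is still reserved in
        // .locals depends on the compiler and build configuration.
        int b = 1;
        Console.WriteLine(b);
    }
}
```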