ReBuildAll
LenardG's thoughts (mostly) on .NET development

Missing features from LINQ to SQL and Entity Framework: create tables, table name changes/prefixes   (Framework)   
I find it interesting that some common usage scenarios are missing from both LINQ to SQL and the Entity Framework. I did some research on the internet, and it seems I am not the only one having these problems. There are solutions to some of them, but I still feel the need to rant about these in this blog :-)

Creating the database from the model using code

The first problem is creating the database from the model in the ORM, from code. By the term "from code" I mean creating the model when the application is running instead of using design time tools in Visual Studio.

LINQ to SQL supports this scenario with the CreateDatabase() and DeleteDatabase() methods, which create and destroy the database, respectively. They work as expected, but that is not enough. There are some limitations as well, although I could live with those. For example, the LINQ model does not describe every piece of metadata that the original table had (from which the model was generated). So when the model is used to generate the database, some information will be missing: users, usage rights, stored procedures, etc. As I said, this would not be a problem if all I needed were the tables themselves.

But there is no fine-grained control over the creation process! The CreateDatabase() method will either

a) create the entire database from scratch
b) or FAIL

Now what I would want is to be able to create my tables independently of the database. If the database exists, just add my tables to it. LINQ to SQL does not seem to support this scenario. There is an awful lot of support built into the LINQ to SQL classes for generating the schema (Reflector shows it … it is of course internal to LINQ to SQL). But you cannot create just the tables. Or just the missing tables.

Now why would I want something like that? Let me put up a few scenarios:

a) In a shared hosting environment you usually have one database, shared between different applications. If I have two applications that each want to create their tables dynamically, the second one to run is just out of luck if it wants to use this method.
b) If I have an application that is partitioned into components, and components can be plugged in dynamically, then I would like each component to have its own LINQ mappings. The situation is similar to the one described above: two or more sets of tables need to be created. I would even support the opposite: when a component is uninstalled, its tables get removed.

The Entity Framework has a similar CreateDatabase() method, with the same faulty assumption the LINQ version makes: if the database already exists, it will just throw an exception. There is an interesting method in EF though: CreateDatabaseScript(). This generates the script that - according to MSDN - the CreateDatabase() method will execute. HOWEVER, the script this method returns omits the database creation statements. So it could be used to create just the tables. :)

Otherwise for now the best solution seems to be to generate the database script files by hand, include them in the project and run them using SqlConnection/SqlCommand. This way I can be sure of what happens, and no existing tables are damaged.
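As a minimal sketch of that idea: take the DDL that EF generates and execute it yourself with SqlConnection/SqlCommand. This assumes a hypothetical EF 4 ObjectContext named MyEntities; the connection string handling is also made up for illustration.

```csharp
// Sketch: create just the tables in an existing database by taking
// the DDL that EF would run and executing it ourselves.
using System.Data.SqlClient;

class TableInstaller
{
    public static void CreateTables ( string connectionString )
    {
        string script;
        using ( var context = new MyEntities () )
        {
            // CreateDatabaseScript() omits the CREATE DATABASE statement,
            // so the script only contains the table (and constraint) DDL.
            script = context.CreateDatabaseScript ();
        }

        using ( var connection = new SqlConnection ( connectionString ) )
        {
            connection.Open ();
            // Note: if the generated script contains batch separators,
            // it may need to be split before execution.
            using ( var command = new SqlCommand ( script, connection ) )
            {
                command.ExecuteNonQuery ();
            }
        }
    }
}
```

The same SqlConnection/SqlCommand plumbing of course works for hand-written script files as well.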


Table names (prefixes, postfixes, entire names)

Sometimes the names of the tables will change at runtime. It can be a complete change, or something like a prefix applied to the table name. There can be a million reasons for this; here are two of them:

a) In a shared hosting environment you will usually get only one database. If you wish to use different applications you could run into naming collisions without prefixes.
b) You are writing a component that will be used by other programs. To play nicely along, your component should not “lock” into using a given table name. Prefixes or changing names should be supported.

It is dreadful how difficult it is to change table names at runtime for both LINQ to SQL and the Entity Framework. I found posts that describe ways to do it, but the entire process is complicated and difficult to automate. Especially the EF way seems very tricky.

Nevertheless, solutions do exist.

The following article describes a way to do it for both technologies. For LINQ to SQL, you need to generate a mapping file and use it at runtime. This means that you cannot automate the process, and that you need to regenerate and modify the file should your database or mapping change.

The EF scenario also requires file changes, which is not too good. Anyway, here it is:

http://theruntime.com/blogs/jacob/archive/2008/08/27/changing-table-names-in-an-orm.aspx

For EF there is a better way: a small framework found on CodePlex. It lets you manipulate metadata at runtime, for example adding a table prefix. This solves the problem, but it also requires some code modifications that will disappear (and need to be redone) should you use the designer.

http://efmodeladapter.codeplex.com/

I cannot understand how these features can be missing, especially from EF 4, which should be a mature product (I know LINQ to SQL is not being actively developed further). Even Microsoft's own ASP.NET tables use the prefix notation to make their tables stand out. Again, in shared hosting environments it might be good to have prefixes. Versioning scenarios could involve prefixes or postfixes (== table name changes). Use cases are virtually endless, and it is a pretty straightforward thing to implement.

Summary

Creating tables from models and changing names after compile time seem like simple problems to me - I would even call them everyday problems. And still they are not solved in Microsoft's own ORM tools.

I would love to hear feedback on this post, feel free to suggest other solutions, to point out errors or to comment on these things. Also, experience with other OR mappers is welcome. :-)


How to run a .NET application as a 32bit process in a 64bit OS   (Framework)   
When you compile a .NET program into an assembly and do not specify a CPU architecture, it will of course run on every architecture. This also means that it will run as a 32bit process on a 32bit OS (x86), and as a 64bit process on a 64bit OS (x64).

Sometimes you will want to force a .NET program to run as a 32bit application, even on a 64bit system. You might wonder what on earth could make you do that. Well, here are two scenarios.

#1: The .NET program works on 32bit OS and crashes on 64bit

I actually faced this scenario a while ago. As it turned out, the program did a P/Invoke that crashed on 64bit OSes. This was of course a bug in the program, but I could not wait for the author to fix it. I needed to run the program as a 32bit application (on my 64bit OS), which would work around the issue.

#2: WinDbg will not handle stack frames in the call stack the same on 64bit and 32bit

This might or might not be a problem for you, and it might be solved in a later version of the Debugging Tools for Windows.

Solution: corflags.exe

To solve these problems you can use corflags.exe, which is included with the .NET Framework SDK. It runs without the rest of the SDK, so you can copy it onto the target machine if you have to. With this tool you can change a flag in the CLR header that instructs the CLR to ALWAYS run the given application in 32bit mode. Clearing this flag makes the application run in the native mode of the OS again.

To force 32bit mode on an application, use:

corflags assembly /32bit+

To not force 32bit mode and run in the native mode of the OS, use:

corflags assembly /32bit-

Running .NET threads on selected processor cores   (Framework)   
Multi-threading is very popular today. Just look at the upcoming .NET 4, with parallel programming built right into the framework. It is very easy to run a for or foreach loop that utilizes all cores in the machine - 2, 4, 8 - and thus increase performance.

But what if you want to restrict a thread to a given core yourself?

Process affinity

For the entire process, you can adjust the affinity - that is, the processors the process is allowed to run on - by using the following .NET code (C#):

            Process.GetCurrentProcess ().ProcessorAffinity = new IntPtr ( 2 );

This will restrict the current process to running on processor #2 (if we begin numbering at 1). The passed IntPtr is a bit mask, where the first bit means the first processor core, the second bit the second core, and so on. To run the process on all cores of a dual core system, you would use 3. On a quad core machine you would use 15.

The same scheme is in use if you have multiple processors. In that case starting from the first bit comes the cores for the first processor, then for the second processor, and so on. For a dual cpu system with dual cores, you would use 3 for the first cpu (both cores) and 12 for the second cpu (both cores).
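To make the bit arithmetic concrete, here is a small sketch; the mask values match the examples above, and the class name is made up for illustration.

```csharp
using System;
using System.Diagnostics;

class AffinityMasks
{
    static void Main ()
    {
        // One bit per logical core: bit 0 = core #1, bit 1 = core #2, ...
        int secondCoreOnly = 1 << 1;                  // 2: core #2 only
        int firstTwoCores  = ( 1 << 0 ) | ( 1 << 1 ); // 3: cores #1 and #2

        // All cores, whatever the machine has:
        int allCores = ( 1 << Environment.ProcessorCount ) - 1;

        Console.WriteLine ( secondCoreOnly ); // 2
        Console.WriteLine ( firstTwoCores );  // 3

        // Applying a mask to the current process:
        Process.GetCurrentProcess ().ProcessorAffinity = new IntPtr ( allCores );
    }
}
```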

Thread affinity

For threads, this is not so easy to accomplish. First of all, a .NET thread does not correspond to an operating system thread, and you can set thread affinity for OS threads only. Not only is there no correspondence, the .NET Framework is also allowed to run your .NET thread on multiple operating system threads. Not at the same time, but should your thread run long enough (waiting in between, etc.), there is no guarantee that it will always run on the same OS thread.

To solve the problem of the CLR running .NET threads on multiple OS threads, you can use a method from the Thread class:

Thread.BeginThreadAffinity ();
...
Thread.EndThreadAffinity ();

This will guarantee that any code between these calls will always run on the same OS thread. Essentially this disables parts of the CLR thread management.

After we have this problem solved, we can get on with the processor affinity issue. You can get the OS threads in your .NET application by using Process.GetCurrentProcess ().Threads. This is a collection of ProcessThread objects. However, these use OS thread IDs and not managed thread IDs. To get the currently executing OS thread, we can use P/Invoke to call the necessary Win32 API:

        [DllImport ( "kernel32.dll" )]
        public static extern int GetCurrentThreadId ();

With the returned ID we can find our thread, and the ProcessThread object has a property called ProcessorAffinity. This property only has a setter, so you cannot read its value. It works similarly to the process affinity I described above.

Putting it all together

Now that we have the pieces of the puzzle, it is time to put them together. Below you will find a complete class, which I called DistributedThread, that allows you to run threads on the processor cores you choose.

Before you start hard coding processor and core numbers, make sure you retrieve the number of available cores in the system (across all CPUs) with Environment.ProcessorCount.

The code encapsulates the normal Thread object. It handles restricting the thread to run on the current OS thread and then setting the thread affinity to the desired value.

(if you wonder how you can get the code without the nice syntax highlighting, just request the page source, you will find it there ... a little challenge for you :P)

using System;
using System.Diagnostics;
using System.Linq;
using System.Runtime.InteropServices;
using System.Threading;

namespace DistributedWorkManager
{
    public class DistributedThread
    {
        [DllImport ( "kernel32.dll" )]
        public static extern int GetCurrentThreadId ();

        [DllImport ( "kernel32.dll" )]
        public static extern int GetCurrentProcessorNumber ();

        private ThreadStart threadStart;

        private ParameterizedThreadStart parameterizedThreadStart;

        private Thread thread;

        public int ProcessorAffinity { get; set; }

        public Thread ManagedThread
        {
            get
            {
                return thread;
            }
        }

        private DistributedThread ()
        {
            thread = new Thread ( DistributedThreadStart );
        }

        public DistributedThread ( ThreadStart threadStart )
            : this ()
        {
            this.threadStart = threadStart;
        }

        public DistributedThread ( ParameterizedThreadStart threadStart )
            : this ()
        {
            this.parameterizedThreadStart = threadStart;
        }

        public void Start ()
        {
            if ( this.threadStart == null ) throw new InvalidOperationException ();

            thread.Start ( null );
        }

        public void Start ( object parameter )
        {
            if ( this.parameterizedThreadStart == null ) throw new InvalidOperationException ();

            thread.Start ( parameter );
        }

        private void DistributedThreadStart ( object parameter )
        {
            try
            {
                // fix to OS thread
                Thread.BeginThreadAffinity ();

                // set affinity
                if ( ProcessorAffinity != 0 )
                {
                    CurrentThread.ProcessorAffinity = new IntPtr ( ProcessorAffinity );
                }

                // call real thread
                if ( this.threadStart != null )
                {
                    this.threadStart ();
                }
                else if ( this.parameterizedThreadStart != null )
                {
                    this.parameterizedThreadStart ( parameter );
                }
                else
                {
                    throw new InvalidOperationException ();
                }
            }
            finally
            {
                // reset affinity
                CurrentThread.ProcessorAffinity = new IntPtr ( 0xFFFF );
                Thread.EndThreadAffinity ();
            }
        }

        private ProcessThread CurrentThread
        {
            get
            {
                int id = GetCurrentThreadId ();
                return
                    (from ProcessThread th in Process.GetCurrentProcess ().Threads
                     where th.Id == id
                     select th).Single ();
            }
        }
    }
}


How can you use this code?

   DistributedThread thread = new DistributedThread( ThreadProc );
   thread.ProcessorAffinity = 2;
   thread.ManagedThread.Name = "ThreadOnCPU2";
   thread.Start ();

As you can see, the syntax is fairly similar to using Thread directly. The ManagedThread property gives access to the underlying Thread object, should you need it. The affinity here is a single int value - the class handles converting it to IntPtr.

Global Assembly Cache in .NET 4.0   (Framework)   
I was looking for assemblies in the GAC for .NET 4.0, and I just could not find them. As it turns out, I was looking in the wrong place. The usual C:\WINDOWS\assembly folder does not help any longer: it only lists DLLs from prior versions of the .NET Framework.

That is just sad. It was very convenient to install and uninstall assemblies from the GAC by dragging them to this folder in Explorer, or right clicking and choosing uninstall. You did not need gacutil.exe or MSI files to perform such an operation. This was all made possible by a shell extension, shfusion.dll, that shipped with .NET 2.0 (and so worked for 2.0, 3.0 and 3.5).

Now, shfusion.dll is discontinued - no longer shipped with .NET 4.0. So how can you manipulate the GAC?

You can still use gacutil.exe (for example gacutil -l to list the GAC contents), but that is only included with the SDK. And you need to run it from the Visual Studio Command Prompt, or make sure you know its path.

Or you can build MSI files.

For .NET 4.0, the actual folder is located at: C:\WINDOWS\Microsoft.NET\assembly

But there is no magic any longer: it is just a regular folder. In fact, if you have ever visited the old assembly folder from the command prompt, you will feel right at home - you don't see the assemblies in a unified view, but rather the raw folders they are stored in: separate folders for the x86 and x64 architectures (native images) and for MSIL.

I think this was a bad idea. The old way of viewing assemblies worked great. In fact, it made troubleshooting issues with the IT department very easy: they could grasp the idea of drag-and-dropping folders in Windows Explorer. No need to know the internals of .NET. Easy. Now they have to run command line tools, or I have to create installers. Hard. Bad, bad decision :(

PS/Update: If anyone knows easy ways to achieve this in 4.0, or I just overlooked something, please feel free to comment. I am almost certain there are PowerShell cmdlets to help us out, but I haven't had time to investigate.

Easy debugging of Windows Services   (Framework)   
Ever wanted to create a Windows Service, but found it is very hard to debug? Here are two tips to make debugging easy.

Use the Debugger class to break into the service when starting

Have you read my previous post about the Debugger.Break() statement? You can read it here: Debugger breakpoints from code. The same method can be used with Windows Services to break into the code as it is being executed. If you want to break into the code just as your service is starting up, add the following code to the class constructor:

    public partial class MyService : ServiceBase
    {
        public MyService ()
        {
            this.ServiceName = "MyService";
            
            Debugger.Break();

            InitializeComponent();
        }
    ...


This will pop up an application error message just as execution reaches the line - just as your service is starting up. You just need to say you want to debug it, and attach a debugger. If you have your service code open in Visual Studio, you can use your existing instance of Visual Studio to debug the service. You can then monitor the startup, or add additional breakpoints.

You are not limited to breaking in the constructor - just add the statement wherever you need it.

Once the debugger is attached, Visual Studio will tell you there is no source code to display. But don't panic: just hit the step over (F10) debug command and you will end up at the statement after the Break() call.

Make your service a normal application for debugging

Another way to solve this problem is to turn your service into a normal Windows or Console application, that you can debug the traditional way. I like this approach, because I can run my application normally, without the hassle of the Service interface, until I am ready to run it as a service. This method makes it possible to seamlessly transition from "normal" run into "service" run, back and forth, as needed.

To make this happen you will have to alter the startup and stop code of the service. You need to do this because the startup and stop code must be accessible from the outside, and ServiceBase defines these methods as protected.

If you are worried about this solution, you can always put #if DEBUG preprocessor directives around the code you add. This way it will only be available in the debug build, and you do not need to worry about it in the release build (if this is an issue at all).
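As a sketch of that idea (the method names are made up for illustration), the public helpers could be compiled into debug builds only:

```csharp
public partial class MyService : ServiceBase
{
#if DEBUG
    // Compiled into debug builds only; release builds expose
    // nothing beyond the protected service interface.
    public void DebugStart ( string[] args )
    {
        OnStart ( args );
    }

    public void DebugStop ()
    {
        OnStop ();
    }
#endif
}
```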

An easy way to get started is to modify the service like the following example. We add two new methods to the class:

    public partial class MyService : ServiceBase
    {
        public MyService ()
        {
            this.ServiceName = "MyService";
            InitializeComponent();
        }

        protected override void OnStart ( string[] args )
        {
            HandleStart();
        }

        protected override void OnStop ()
        {
            HandleStop();
        }

        public void HandleStart ()
        {
            // real startup logic
        }

        public void HandleStop ()
        {
            // real stop logic
        }
    }


When the application is invoked as a service, the HandleStart() and HandleStop() methods are called to handle the actual startup and teardown logic. But now you can also call these methods from your alternate entry point.

If you want even more abstraction, you can create two separate classes. One contains the actual logic and is a regular class derived from object; it would contain the HandleStart() and HandleStop() methods described above. The ServiceBase derived class is then just a thin wrapper (think Adapter pattern) around this regular class.
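A minimal sketch of that adapter arrangement (all names are illustrative):

```csharp
// The worker is a plain class with the real logic; the service
// class is only a thin wrapper that forwards the lifecycle calls.
public class ServiceWorker
{
    public void HandleStart () { /* real startup logic */ }
    public void HandleStop ()  { /* real stop logic */ }
}

public partial class MyService : ServiceBase
{
    private readonly ServiceWorker worker = new ServiceWorker ();

    protected override void OnStart ( string[] args )
    {
        worker.HandleStart ();
    }

    protected override void OnStop ()
    {
        worker.HandleStop ();
    }
}
```

A console or WinForms harness can then use ServiceWorker directly, without ever touching the service class.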

Now after this little modification is complete you still need to alter the startup code.

The default Visual Studio service project template generates a Windows application, so you can't really use the console. But no worries: add a reference to System.Windows.Forms, and then change the startup code as in the following example. (This reference might be a bad idea in some cases, but I will not go into the cons of this solution, at least not in this post.)

        static void Main ( string[] args )
        {
            if ( args.Length > 0 &&
                 args[0] == "console" )
            {
                var service = new MyService();
                service.HandleStart();

                MessageBox.Show( "Press OK to quit" );

                service.HandleStop();
            }
            else
            {
                ServiceBase[] ServicesToRun;
                ServicesToRun = new ServiceBase[] 
			    { 
				    new MyService() 
			    };
                ServiceBase.Run(ServicesToRun);
            }
        }


When this code detects the command line argument "console", it does not invoke the service startup code; instead it instantiates the service class and calls the methods we added above. If you went with the abstraction approach, you would forget about the ServiceBase derived class here and simply use your custom class directly.

The code displays a simple MessageBox, and the service will close when the user presses the "OK" button.

Whichever way you go, debugging services can be made really simple, which is a big plus when you need to debug them often.

Overriding virtual methods at runtime   (Framework)   
I was faced with a problem where I had a big bunch of Web service proxy classes generated from their WSDL descriptions. I needed to override one virtual method of the base class to provide some custom code in the proxy classes. This however presented two problems:

- I have many proxy classes in which I want to include this new functionality. These are in many different projects.
- The proxy classes are generated, so when they would be regenerated the code change would need to be reapplied.

The second point could be solved by deriving yet another class from the proxy class (or by using partial classes). This way the changes would survive regeneration cycles, but it would give me twice as many classes, and managing changes in these new classes would still be a problem. Not to mention it would require recompiling, redeploying and reconfiguring all the applications.

I ended up using a much different solution using reflection and code emit. This allows for ease of maintainability and allows me to apply the new features without code changes or recompilation in the existing projects.

The .NET Framework provides very good code emitting capabilities: you can create .NET code on the fly, dynamically, and then call it right away. While this was probably aimed at scripting engines, it gives me a solution to the problem. When I need to call a method on a proxy class, I generate a new derived class dynamically and override the method in question. The proxy classes were being used through reflection to begin with, so not much change is required: I can just pass in the Type instance of the generated class instead of the original, and everything should work.

There are some minor problems with this solution. First, the method body has to be provided in IL - Intermediate Language. That is like writing code in assembly language for the x86 platform: not very friendly. I was able to partly navigate around this problem by creating a new static class that provides the functionality I want, and calling this class from the new overridden method. Another solution would have been to write the code in C#, compile it, and then use either ILDASM or Reflector to copy the generated IL into my project. Anyhow, I ended up studying a little bit of IL, which was actually quite fun :-)

So how do you create code on the fly? Let's suppose I have a class called MyOriginalClass. It is a generated proxy class, derived from SoapHttpClientProtocol. I want to provide an implementation for the GetWebRequest() method, which is a protected virtual method.

So I started by getting the type of the original class. Now I can create a new Assembly and Module. Think of the Assembly as containing the metadata and the Module as containing the actual code - you can find more about this in MSDN.

Type typeofOriginal = typeof ( MyOriginalClass );

AssemblyName asmName = new AssemblyName ( "Test" );
AssemblyBuilder asmBuilder =
    AppDomain.CurrentDomain.DefineDynamicAssembly (
        asmName,
        AssemblyBuilderAccess.RunAndSave );

ModuleBuilder modBuilder =
    asmBuilder.DefineDynamicModule (
        asmName.Name,
        asmName.Name + ".dll" );


The code above creates a new assembly named Test. The module is named using the same name, with the .dll extension added. This is all done in memory, so nothing is written to disk. The next step is creating the new class, which will derive from my original class.

TypeBuilder typeBuilder = modBuilder.DefineType (
    typeofOriginal.Name + "WithoutKeepAlive",
    TypeAttributes.Public | TypeAttributes.Sealed | TypeAttributes.Class,
    typeofOriginal );


The new class has the name of the original one with WithoutKeepAlive appended. And it of course has the original class as its base class. So now it is time to generate our new overridden method. It will simply call the MyNewClass.GetWebRequest() method, which is a public static method I define elsewhere. By calling this method I get away with writing as little IL as possible.

// create a new method
MethodBuilder methodBuilder = typeBuilder.DefineMethod (
    "GetWebRequest",
    MethodAttributes.Public | MethodAttributes.ReuseSlot |
      MethodAttributes.HideBySig | MethodAttributes.Virtual,
    typeof ( System.Net.WebRequest ),
    new Type[] { typeof ( Uri ) } );

// get IL generator instance
ILGenerator ilgen = methodBuilder.GetILGenerator ();

// declare local variable
LocalBuilder localBuilder =
ilgen.DeclareLocal ( typeof ( System.Net.WebRequest ) );

// create method body in IL
ilgen.Emit ( OpCodes.Ldarg_1 );
ilgen.Emit (
    OpCodes.Call,
    typeof ( MyNewClass ).GetMethod (
        "GetWebRequest",
        BindingFlags.Static | BindingFlags.Public ) );
ilgen.Emit ( OpCodes.Stloc_0 );
ilgen.Emit ( OpCodes.Ldloc_0 );
ilgen.Emit ( OpCodes.Ret );


Well, that is about it. At this point I can either save the new assembly to disk or start using it right away. The proxy class has a HelloWebService() method that makes a Web service call. If I invoke it on the original class, nothing new happens. However, if I invoke it on the new class, my static method will be called (through the new override).

// finish code generation and create type
Type newType = typeBuilder.CreateType ();

// create new instance of the class
object client = Activator.CreateInstance ( newType );

// call web service
MethodInfo minf =
    newType.GetMethod (
        "HelloWebService",
        new Type[] { typeof ( string ) } );

minf.Invoke ( client, new object[] { "World" } );


By writing more IL you could create a solution that does not require the extra helper class, but this one works perfectly fine for me.

Of course, it would be wise to cache the types created this way, so they can be reused when another call is made. Types created this way remain loaded in the application domain until the entire domain is unloaded - emitted code is not garbage collected - so creating them over and over is a bad idea.
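Such a cache could be sketched like this; the class name, method names, and the generate delegate are all made up for illustration.

```csharp
// Sketch: cache emitted types per original type, so each derived
// class is generated at most once per application domain.
using System;
using System.Collections.Generic;

static class DynamicTypeCache
{
    private static readonly Dictionary<Type, Type> cache =
        new Dictionary<Type, Type> ();
    private static readonly object sync = new object ();

    public static Type GetOrCreate ( Type original, Func<Type, Type> generate )
    {
        lock ( sync )
        {
            Type result;
            if ( !cache.TryGetValue ( original, out result ) )
            {
                // First request for this type: emit the derived class
                // and remember it for subsequent calls.
                result = generate ( original );
                cache[original] = result;
            }
            return result;
        }
    }
}
```

Usage would then look something like: Type proxyType = DynamicTypeCache.GetOrCreate ( typeof ( MyOriginalClass ), CreateDerivedType ); where CreateDerivedType wraps the emit code shown above.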