Category Archives: .NET

Replication Across the Country

The MongoDB .NET driver recently had an issue reported that turned out to be a bug on our part. It is a subtle bug that wouldn’t have shown up except in a specific replica set configuration. I’ll first discuss the new behavior in 1.6 regarding read preferences for replica set members and then discuss the configuration and what actually happened.

Replica set is MongoDB’s name for a group of servers that has one primary and N secondaries. A standard setup includes 3 members: 1 primary and 2 secondaries. All write operations go to the primary, and all reads are governed by the stated read preference and tagging. Together, read preferences and tagging form a way of targeting a specific server or group of servers in the cluster for reads. In a heavy read/write environment where slightly stale reads are acceptable (most scenarios actually fall into this camp), a good way to load balance your cluster is to let the secondaries serve reads while the primary takes care of the writes.

There are a number of read preferences: Primary, PrimaryPreferred, Secondary, SecondaryPreferred, and Nearest. SecondaryPreferred means to read from a secondary if one is available and otherwise fall back to the primary. In addition, when choosing a secondary, we only consider secondaries within 15 milliseconds (by default) of the lowest secondary ping time. We do this to ensure that your reads are generally as fast as possible.
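
As a quick illustration, opting into SecondaryPreferred reads with the 1.6-era driver looks roughly like this. The exact API surface shifted between releases and the host names here are made up, so treat it as a sketch rather than gospel:

var server = MongoServer.Create("mongodb://host1,host2,host3/?replicaSet=rs0");
var collection = server.GetDatabase("test").GetCollection("people");

// Ask for SecondaryPreferred on a per-query basis via the cursor.
var cursor = collection.FindAll().SetReadPreference(ReadPreference.SecondaryPreferred);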

For example, in the setup below with four secondaries and one primary, we’d randomly choose from servers B, C, and E when using the SecondaryPreferred read preference. D is excluded because its ping time is 17ms higher than the lowest secondary’s ping time.

Server   Type        Ping Time
A        Primary     3ms
B        Secondary   7ms
C        Secondary   2ms
D        Secondary   19ms
E        Secondary   11ms
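
To make the selection rule concrete, here is a toy sketch of the filtering described above (not the driver’s actual code, just the arithmetic, assuming the usual System.Linq usings):

var secondaries = new Dictionary<string, double> { { "B", 7 }, { "C", 2 }, { "D", 19 }, { "E", 11 } };
var window = 15.0;                               // default acceptable latency window in ms
var fastest = secondaries.Values.Min();          // 2ms (server C)
var eligible = secondaries                       // B, C, and E qualify; D (19ms) does not
    .Where(s => s.Value <= fastest + window)
    .Select(s => s.Key)
    .ToList();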

Cloud providers like EC2 and Azure offer the possibility to stand up replica set members in different regions of the country. This is great because when an entire region goes down, your app can still function by reading off the servers in the other regions. In the case of the bug mentioned at the top of this post, a 2-member replica set existed where the primary was in Region 1 and a secondary was in Region 2. In addition, the web application was located in Region 1. Using the read preference SecondaryPreferred, every single read had to leave Region 1 and go all the way to Region 2 to get data. This distance imposed a ~100ms penalty.

Our bug manifested itself because of this ping time gap between the regions. Even though we were supposed to choose the secondary, we didn’t, because its ping time was so much slower than the primary’s. The fix on our end is easy, but the customer has to wait for us to ship it, so we suggested a better cluster setup to remedy the problem in the meantime: simply adding a new secondary in Region 1 sends all reads to that secondary, all writes to the primary, and leaves the secondary in Region 2 for failover and backup. I’d actually suggest this setup regardless of the bug.

We’ll be fixing this bug in version 1.6.1, but be aware of your lag times when using disparate data centers.

Disconnecting with the MongoDB .NET Driver

The MongoDB .NET Driver has a public method called Disconnect on the MongoServer class.  This method is somewhat useful in certain contexts such as when the server is shutting down or the application is exiting.  However, it is extremely important to know what this method does before using it because it could kill your application.

The documentation simply states that this causes the client to disconnect from the server.  In other words, this method terminates all connections to all the servers and shuts down any in-flight operations.  This isn’t your standard ADO.NET connection at all.  In fact, MongoServer isn’t a connection at all, but rather a proxy to one or more mongod or mongos processes.

In addition, the documented way to get access to a MongoServer is to use a static Create method.  MongoServer.Create() and all its overloads actually return the same instance when the specified connection settings match a previously created MongoServer.  Factor that in and the documented behavior of Disconnect becomes even more surprising.
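
Here is a small sketch of why that matters. Assuming the 1.x behavior where Create hands back a cached instance for matching settings, the second variable below is the same object as the first:

// Both calls return the SAME MongoServer instance because the settings match.
var server1 = MongoServer.Create("mongodb://localhost");
var server2 = MongoServer.Create("mongodb://localhost");

// Disconnecting through server1 therefore tears down the connections that
// server2 (and anything else in the app holding this instance) is using.
server1.Disconnect();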

There is a good reason for this method.  It cleanly disposes of all the resources associated with the many connections and sockets it manages.  So it’s useful when an application is exiting or the OS is shutting down.  However, most people call Disconnect because it’s there and it seems like the right thing to do.

We’ve started working on the next version (2.0) of the driver and are working through issues such as this one, cleaning up and correcting the API so that it matches expectations and makes it much harder to do the wrong thing.

MongoSV Conference

I just returned from the MongoDB conference in San Jose on Saturday.  Because I’m a MongoDB Master, I was able to attend the Masters Summit the day before the conference.  We ran it unconference style and let each topic self-select based on what we wanted to talk about.  I discussed a lot of Windows-related things like performance counters and SCOM integration, as well as how to evangelize to the Microsoft community as a whole.  10gen is really looking to expand into this area more than they have in the past.

One of these efforts is that MongoDB now runs on Azure.  This is cool because it gives another possibility for scaling in the cloud.  Azure already offers 3 forms of data storage: SQL, Table, and Blob.  Blob is just a filesystem and suitable for binary items like images.  Table storage is a way to store large quantities of non-relational data; it is relatively cheap, as is Blob.  The last is SQL Azure, which stores relational data but is extremely expensive compared to Blob and Table storage.

MongoDB fits in between Table storage and SQL storage.  Underneath, it uses Blob storage to keep the data, making it much cheaper than SQL Azure.  MongoDB does not represent its data in relational form, but rather in document form.  However, unlike Table storage, MongoDB is fully queryable, fully indexable, and super fast.  It is a great alternative for bridging the gap between dynamic queries and fully relational data.
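
For a sense of what “fully queryable, fully indexable” buys you over Table storage, here is a small 1.x-driver sketch (the collection and field names are made up):

var people = MongoServer.Create("mongodb://localhost").GetDatabase("test").GetCollection("people");
people.EnsureIndex(IndexKeys.Ascending("Age"));               // secondary index on any field
var adults = people.Find(Query.GTE("Age", 18)).SetLimit(10);  // ad-hoc query, no predefined schema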

All in all, I thoroughly enjoyed my time and hope to continue it through contact with the other Masters and feedback to 10gen.

Attending MongoSV

I’ll be attending MongoSV in California over the next two days.  Day 1 will be a summit for the MongoDB Masters group (of which I am a member).  We’ll be discussing anything and everything about MongoDB with hopes of influencing its future direction.

Day 2 will be more interesting.  As a .NET developer, I’m thoroughly interested in all things related to Microsoft.  A few days ago, 10gen announced that MongoDB has support for running on Azure.  In fact, Microsoft will be speaking on the topic at the conference.  This is totally interesting because it lets us marry a scalable infrastructure with a scalable database and not have to sacrifice either one for the other.  I have nothing against SQL Server and use it for all my transactional business needs.  However, when building systems to scale, transactional business models are not the correct choice.  I’ll talk more on this topic in my next post on CQRS.

Until then, I’ll take notes and blog my thoughts about the direction 10gen is going with MongoDB in the future.

MongoDB Open Source Efforts

I actively (when I have time) work on a couple of open-source projects.  Both of them are related to the MongoDB C# driver (to which I contributed a lot of code as well).

The first is FluentMongo (https://github.com/craiggwilson/fluent-mongo), which is a LINQ provider on top of the driver.  It was sucked out of an older C# driver (now defunct) on which I was a core committer along with Steve Wagner (http://www.lanwin.de/) and Sam Corder.  Writing LINQ providers is incredibly difficult, and I was proud enough of my effort in the defunct project that I didn’t want it to go to waste, so I ported it over since the official driver did not have one (and still doesn’t).

The second project is Simple.Data.MongoDB (https://github.com/craiggwilson/Simple.Data.MongoDB).  If you haven’t yet played with Simple.Data (https://github.com/markrendle/Simple.Data), then you are missing out.  It puts C# 4’s dynamic keyword to work (some would say abuses it) to build an Active Record style data layer in .NET.  It is a great fit for MongoDB because neither requires a schema.  Simple.Data was built for relational databases, but working with Mark Rendle has been a pleasure and he has changed some of the core to accommodate a different style of database.
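
If you haven’t seen the style before, it looks roughly like this. The connection call differs per adapter and the table, column, and method names below are made up, so treat it purely as an illustration of the dynamic, Active Record feel:

dynamic db = Database.Open();                    // no schema, no mapping classes
db.Users.Insert(Name: "Ada", Role: "Admin");     // column names supplied as named arguments
var admins = db.Users.FindAllByRole("Admin");    // "FindAllByRole" is resolved dynamically at runtime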

Anyways, just wanted to get this stuff out there and I’ll keep these updated as I add features to either one.

Build Your Own IoC Container User Group Recording

So, apparently the talk I gave on the Build Your Own IoC Container series was recorded and posted online. If I’d known how they were recording, I would have done a few things differently, like repeating the questions that were asked, but for one of my first talks I think it went pretty well.

There is no sound for about 3 minutes, and then I get interrupted by the guys running the group to announce some things, but after we get through that, it is pretty smooth.

http://usergroup.tv/videos/build-you-own-ioc-container

Hope you enjoy…

Building an IoC Container – Cyclic Dependencies

The code for this step is located here.

We just finished doing a small refactoring to introduce a ResolutionContext class. This refactoring was necessary to allow us to handle cyclic dependencies. Below is a test that will fail right now with a StackOverflowException because the resolver is going in circles.

public class when_resolving_a_type_with_cyclic_dependencies : ContainerSpecBase
{
    static Exception _ex;

    Because of = () =>
        _ex = Catch.Exception(() => _container.Resolve(typeof(DummyService)));

    It should_throw_an_exception = () =>
        _ex.ShouldNotBeNull();

    private class DummyService
    {
        public DummyService(DepA a)
        { }
    }

    private class DepA
    {
        public DepA(DummyService s)
        { }
    }
}

Technically, a StackOverflowException would get thrown, but it can’t really be caught: this type of exception takes out the whole process, and the test runner won’t be able to complete. Regardless, it shouldn’t take a minute of blowing the stack to find out; this should fail almost instantaneously with an exception we control.

With a slight modification to our ResolutionContext class, we can track whether a cycle exists in the resolution chain and abort early. There are two methods that need to be modified.

public object ResolveDependency(Type type)
{
    var registration = _registrationFinder(type);
    var context = new ResolutionContext(registration, _registrationFinder);
    context.SetParent(this);
    return context.GetInstance();
}

private void SetParent(ResolutionContext parent)
{
    _parent = parent;
    while (parent != null)
    {
        if (ReferenceEquals(Registration, parent.Registration))
            throw new Exception("Cycles found");

        parent = parent._parent;
    }
}

We begin by allowing the ResolutionContext to track its parent resolution context. As you can see in the SetParent method, this lets us walk up the chain of parents and check whether we have already tried to resolve the registration we are about to resolve. Other than that, nothing special is going on and everything else still works correctly.
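
To make the walk concrete, this is roughly the chain of contexts the failing test produces before the exception is thrown:

// Resolve(typeof(DummyService))             -> context #1 (Registration: DummyService)
//   ResolveDependency(typeof(DepA))         -> context #2, parent #1 (Registration: DepA)
//     ResolveDependency(typeof(DummyService))
//                                           -> context #3, parent #2
//        SetParent walks #2 -> #1, finds the DummyService registration again,
//        and throws "Cycles found" instead of recursing forever.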

At this point, we are at the end of the Building an IoC Container series. I hope you have learned a little more about how the internals of your favorite containers work and, even more so, that there isn’t a lot of magic going on. This is something you can explain to your peers or mentees and hopefully allow IoC to gain acceptance in areas that were once off-limits because it was a “black box”. Be sure to leave me a comment if you have any questions or anything else you’d like to see done to our little IoC container.

Building an IoC Container – Refactoring

The code for this step is located here.

In the last post, we added support for singleton and transient lifetimes. But the last couple of posts have made our syntax look a bit unwieldy, and it is somewhat limiting when looking toward the future, primarily when we need to detect cycles in the resolution chain. So today we are going to refactor our code by introducing a new class, ResolutionContext, which will get created every time a Resolve call is made. There isn’t a lot to say without looking at the code, so below is the ResolutionContext class.

public class ResolutionContext
{
    private readonly Func<Type, object> _resolver;

    public Registration Registration { get; private set; }

    public ResolutionContext(Registration registration, Func<Type, object> resolver)
    {
        Registration = registration;
        _resolver = resolver;
    }

    public object Activate()
    {
        return Registration.Activator.Activate(this);
    }

    public object GetInstance()
    {
        return Registration.Lifetime.GetInstance(this);
    }

    public object ResolveDependency(Type type)
    {
        return _resolver(type);
    }

    public T ResolveDependency<T>()
    {
        return (T)ResolveDependency(typeof(T));
    }
}

Nothing in this is really that special. The Activate method and the GetInstance method are simply here to hide away the details so the caller doesn’t need to dot through the Registration so much (Law of Demeter). The Func<Type, object> is still here, but this is where it stops. As shown below, our IActivator and ILifetime interfaces now take a ResolutionContext instead of the delegates.

public interface IActivator
{
    object Activate(ResolutionContext context);
}

public interface ILifetime
{
    object GetInstance(ResolutionContext context);
}

Now, they look almost exactly the same, so, as we discussed in the last post, the difference is purely semantic. Activators construct things and Lifetimes manage them.
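
To make the shape of the change concrete, here is roughly what the Container’s Resolve method and the ReflectionActivator might look like once they work against a ResolutionContext. This is a sketch from memory rather than a verbatim copy of the linked source:

public object Resolve(Type type)
{
    var registration = FindRegistration(type);
    var context = new ResolutionContext(registration, Resolve);
    return context.GetInstance();
}

public class ReflectionActivator : IActivator
{
    public object Activate(ResolutionContext context)
    {
        // Pick a constructor and resolve each parameter through the context.
        var ctor = context.Registration.ConcreteType.GetConstructors().First();
        var args = ctor.GetParameters()
            .Select(p => context.ResolveDependency(p.ParameterType))
            .ToArray();
        return ctor.Invoke(args);
    }
}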

Finally, no new tests have been added, but a number have changed due to this refactoring. I’d advise you to check out the full source and look it over yourself. In our next post, we’ll be handling cyclic dependencies now that we have an encapsulated ResolutionContext to track calls.

Building an IoC Container – Adding Lifetimes

The code for this step is located here.

We left off the previous post with the need to support different lifetime models such as Singleton, Transient, or per HTTP request. After our last refactoring, this is actually a fairly simple step if we follow the single responsibility principle and define the roles each of our abstractions plays.

Currently, we only have one abstraction, which is around activation. It would be fairly simple to use activators to handle lifetime as well, but things would quickly become complex if we went this route. Therefore, we will define an activator’s job as activating an object. In other words, its only job is to construct an object using whatever means it wants, but it should never store the instance of an object in order to satisfy a lifetime requirement.

In order to manage lifetimes, we will introduce a second abstraction called ILifetime. Its sole job is to manage the lifetime of an activated object. It can (and will) use an activator to get an instance of an object, but it will never construct one itself.

By keeping these two ideas separate, this becomes a fairly trivial addition. Below are the two tests for this functionality, one for transients and one for singletons.

public class when_resolving_a_transient_type_multiple_times : ContainerSpecBase
{
    static object _result1;
    static object _result2;

    Because of = () =>
    {
        _result1 = _container.Resolve(typeof(DummyService));
        _result2 = _container.Resolve(typeof(DummyService));
    };

    It should_not_return_the_same_instances = () =>
    {
        _result1.ShouldNotBeTheSameAs(_result2);
    };

    private class DummyService { }
}

public class when_resolving_a_singleton_type_multiple_times : ContainerSpecBase
{
    static object _result1;
    static object _result2;

    Establish context = () =>
        _container.Register<DummyService>().Singleton();

    Because of = () =>
    {
        _result1 = _container.Resolve(typeof(DummyService));
        _result2 = _container.Resolve(typeof(DummyService));
    };

    It should_return_the_same_instances = () =>
    {
        _result1.ShouldBeTheSameAs(_result2);
    };

    private class DummyService { }
}

You see above that we are making transient the default lifetime and adding a method to set a singleton lifetime.

public interface ILifetime
{
    object GetInstance(Type type, IActivator activator, Func<Type, object> resolver);
}

public class TransientLifetime : ILifetime
{
    public object GetInstance(Type type, IActivator activator, Func<Type, object> resolver)
    {
        return activator.Activate(type, resolver);
    }
}

public class SingletonLifetime : ILifetime
{
    private object _instance;

    public object GetInstance(Type type, IActivator activator, Func<Type, object> resolver)
    {
        if (_instance == null)
            _instance = activator.Activate(type, resolver);
        return _instance;
    }
}

So, an ILifetime has a single method that takes the type, the activator, and the resolver. We are going to clean this up in the next installment, but regardless, these implementations are still relatively simple.

The TransientLifetime is basically a pass-through on the way to the activator. It doesn’t store anything, so it has no need to intercept anything. The SingletonLifetime, however, only activates an object once, stores the instance, and then returns that same instance every time. I haven’t included threading in here, but a simple lock would suffice for most cases.
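
If you do need the thread safety, a sketch of the lock-based variant looks like this (the series keeps the unsynchronized version above):

public class SingletonLifetime : ILifetime
{
    private readonly object _sync = new object();
    private object _instance;

    public object GetInstance(Type type, IActivator activator, Func<Type, object> resolver)
    {
        // Serialize activation so concurrent resolvers can't each create an instance.
        lock (_sync)
        {
            if (_instance == null)
                _instance = activator.Activate(type, resolver);
            return _instance;
        }
    }
}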

We need to add a Lifetime property and a Singleton method to our Registration class:

public class Registration
{
    //other properties

    public ILifetime Lifetime { get; private set; }

    public Registration(Type concreteType)
    {
        ConcreteType = concreteType;
        Activator = new ReflectionActivator();
        Lifetime = new TransientLifetime();

        Aliases = new HashSet<Type>();
        Aliases.Add(concreteType);
    }

    //other methods

    public Registration Singleton()
    {
        Lifetime = new SingletonLifetime();
        return this;
    }
}

Finally, we just need to change the Resolve method in our Container class to use the Lifetime as opposed to the Activator and we are all done.

public object Resolve(Type type)
{
    var registration = FindRegistration(type);
    return registration.Lifetime.GetInstance(registration.ConcreteType, registration.Activator, Resolve);
}

In our next post, we’ll do some refactoring, ultimately to support handling cyclic dependencies. A by-product of this is a better-coded container. See you then…

Building an IoC Container–Resolving Abstractions

The code for this step is located here.

In our previous post, we added the ability to register types whose dependencies couldn’t be resolved automatically by the container; these would be dependencies like primitives, or abstractions like interfaces.  In this post, we are going to solve our inability to resolve an abstraction by adding aliases to the Registration class.

Below is our test for this functionality:

public class when_resolving_a_type_by_its_alias : ContainerSpecBase
{
    static object _result;

    Establish context = () =>
        _container.Register<DummyService>().As<IDummyService>();

    Because of = () =>
        _result = _container.Resolve(typeof(IDummyService));

    It should_not_return_null = () =>
        _result.ShouldNotBeNull();

    It should_return_an_instance_of_the_requested_type = () =>
        _result.ShouldBeOfType<DummyService>();

    private interface IDummyService { }
    private class DummyService : IDummyService { }
}

Above, the only difference from an API standpoint is the addition of the “As” method. This is basically telling the container that DummyService should be returned when IDummyService is requested.

So, the first change we’ll make is on the Registration class. We’ll add a property called Aliases. Aliases will include all the types that should resolve to the same concrete class, including the concrete type itself. So, Register<SomeService>().As<ISomeServiceA>().As<ISomeServiceB>() will resolve when any of the types ISomeServiceA, ISomeServiceB, or SomeService is requested. Our new Registration class looks like this:

public class Registration
{
    public Type ConcreteType { get; private set; }

    public IActivator Activator { get; private set; }

    public ISet<Type> Aliases { get; private set; }

    public Registration(Type concreteType)
    {
        ConcreteType = concreteType;
        Activator = new ReflectionActivator();

        Aliases = new HashSet<Type>();
        Aliases.Add(concreteType);
    }

    public Registration ActivateWith(IActivator activator)
    {
        Activator = activator;
        return this;
    }

    public Registration ActivateWith(Func<Type, Func<Type, object>, object> activator)
    {
        Activator = new DelegateActivator(activator);
        return this;
    }

    public Registration As<T>()
    {
        Aliases.Add(typeof(T));
        return this;
    }
}

The only other change we need to make is in the container where we are trying to find a registration.

private Registration FindRegistration(Type type)
{
    var registration = _registrations.FirstOrDefault(r => r.Aliases.Contains(type));
    if (registration == null)
        registration = Register(type);

    return registration;
}

That’s it! All the tests should still pass and all is good with the world. In the next post, we are going to talk about lifetimes (Singleton, Transient, PerRequest, etc…) and how to add them into our container.

Stay Tuned.