5 Reasons to use F# Interactive in Visual Studio 2010

Note: While the following post is targeted at Visual Studio 2010 users, most of the points apply even if you aren't using Visual Studio 2010. F# Interactive (FSI) is easy (and free) to install for Visual Studio 2008 users and for command-line users on Windows or Mono. Details are available at the F# Developer Center.

1) You already have it

F# comes standard with Visual Studio 2010, and it includes F# Interactive. There's nothing to install, and no configuration is required. You don't even need to start an F# project in order to use FSI. From anywhere inside Visual Studio, select View | F# Interactive, or just press Ctrl + Alt + F to bring up an FSI instance.
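
Once the window is open, you submit code to FSI by terminating it with a double semicolon. For example:

> 1 + 1;;
val it : int = 2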

2) Performance Analysis

With the #time option enabled, F# Interactive is a surprisingly useful tool for performance analysis. A few days ago, coworkers Jay Wren and Ben Barefield asked me to help determine why a LINQ statement they wrote was running slowly. After spending time looking over the code and running it through a profiler, we wanted to see how LINQ's Distinct method behaved with different inputs. Within a couple of minutes, I was able to get that information using F# Interactive.

First, I wrote a simple setup script:

open System
open System.Linq

#time

let randomizer = Random()

let sequenceSize = 1000000

let sequential = Enumerable.ToList(Seq.init sequenceSize (fun x -> x.ToString()))
let random = Enumerable.ToList(Seq.init sequenceSize (fun _ -> randomizer.Next(1000)))

Then I ran a couple of statements to test the timing:

> sequential.Distinct().ToList();;
Real: 00:00:00.336, CPU: 00:00:00.296, GC gen0: 1, gen1: 1, gen2: 1
val it : Collections.Generic.List<string> = seq ["0"; "1"; "2"; "3"; ...]
> random.Distinct().ToList();;
Real: 00:00:00.119, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
val it : Collections.Generic.List<int> = seq [959; 18; 824; 585; ...]

F# Interactive is no substitute for more sophisticated performance analysis techniques, but when you need quick answers about potential bottlenecks, it's a great tool to have at your disposal.
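
A small side note: timing doesn't have to stay on for an entire session. The #time directive can be switched on and off around just the statements you want to measure:

#time "on"

// ...statements to measure...

#time "off"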

3) Verifying the Behavior of Base Class Library Functions

There are a lot of APIs in the Base Class Library, and it can be tough to remember exactly how everything works. Let’s say you can’t remember if the Insert function on List<T> inserts elements before or after the input index. You could write a small console application or a throwaway test to verify the behavior, but it’s a lot faster to use FSI:

> open System.Collections.Generic;;
> let list = new List<int>();;

val list : List<int>

> list.Add(0);;
val it : unit = ()
> list.Add(1);;
val it : unit = ()
> list.Add(2);;
val it : unit = ()
> list.Insert(1, 99);;
val it : unit = ()
> list;;
val it : List<int> = seq [0; 99; 1; 2]
>

4) Learning F# and Functional Programming

Learning any language teaches you new coding techniques. Learning a functional language teaches you new problem solving techniques. F# Interactive lets you do both without leaving Visual Studio or closing your open project. Whenever you have a few minutes to kill during development, you can easily open an FSI window and play around with F#. You can also use it to take a deeper dive into syntax and techniques in a more extended session. Finally, using F# Interactive while programming in another .NET language is a great way to keep your F# skills sharp even if you aren't writing F# on a daily basis.

5) Spikes and Scripting

This is probably the first use case that people think of when they see FSI (or other REPLs). In practice, I find that I use F# Interactive more for performance analysis, learning F#, and verifying Base Class Library behavior than for spiking or scripting. However, it’s worth pointing out that F# Interactive is a powerful tool for quickly exploring problem domains. By creating script files, you can build up situations to evaluate different approaches without investing a lot of time setting up a dummy project or a clumsy test harness.
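
For example, a throwaway .fsx script for exploring a problem might look something like this (the Order type and the numbers here are made up purely for illustration):

// explore.fsx -- a disposable spike script
open System

// sketch a candidate data shape
type Order = { Id : int; Total : decimal }

// generate some fake data to poke at
let orders = List.init 100 (fun i -> { Id = i; Total = decimal (i % 7) * 9.99m })

// try an approach and eyeball the results
orders
|> List.filter (fun o -> o.Total > 30.0m)
|> List.length

You can highlight any part of a script like this and send it to FSI (Alt + Enter in Visual Studio) to evaluate it incrementally.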

Posted in F# | 6 Responses

F# Object Expressions vs Mocking Libraries: Am I Missing Something?

Object expressions are a cool feature of F#. For those unfamiliar, they allow you to easily instantiate anonymous classes:

let mutable disposed = false

let anonymousType = 
    { new IDisposable with
          member this.Dispose() =
              disposed <- true }

anonymousType.Dispose()

Assert.IsTrue(disposed)

There are times when it's useful to instantiate anonymous classes in this fashion (a topic for another post), but I'm not convinced that mocking is one of them. However, based on a couple of Twitter conversations that I've had with notable F# community members Richard Minerich and Steffen Forkmann, my opinion may be in the minority. I understand their position in theory, but in practice, I find that object expressions are inferior to mocking libraries for generating mock objects.

I will go over a few advantages of mocking libraries below, but before I do that, I would like to define what I mean by “mock object” in this post. The term “mock” is a loaded one in the testing world, and it seems like everyone has their own opinion about what the words mock, dummy, stub, spy, fake, and test double actually mean. I’m far less dogmatic about this. For the rest of the post, consider a mock to be any object that takes the place of another for the purpose of testing.

Abstraction vs Verbosity

In his tweet, Steffen says that he believes object expressions are much clearer than mocking libraries. On one hand, I understand his reasoning; mocking libraries use more abstraction than object expressions, and abstraction can make code more difficult to understand. Ironically, I find this argument very similar to one I often hear from people who favor for loops over LINQ or higher order functions like those in the F# Seq module. They prefer the verbosity of a for loop to the abstraction of a library function.

I often use a simple example to explain the advantages of the F# Seq module over for/foreach loops. The goal is to find the sum of the squares of the even numbers less than or equal to ten.

IEnumerable<int> values = new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

int sum = 0;

foreach (int value in values)
{
    if (IsEven(value))
    {
        sum += Square(value);
    }
}

Assert.AreEqual(220, sum);

And here is the F# version:

let numbers = [0..10]

let sum =
    numbers
    |> List.filter IsEven
    |> List.map Square
    |> List.sum

I don’t know any functional programmer who would argue that the for/foreach loop is clearer than the version using the List module. They understand and accept the abstraction of higher order functions, so the abstraction actually adds clarity to the code.

Now let's consider a typical testing scenario. Given a dummy interface, IFoo:

type IFoo =
    abstract foo: int -> int

The goal is to create a mock IFoo that validates that foo was called and also returns a dummy value for foo.

First is an F# solution using object expressions:

let mutable wasCalled = false
let mock =
    { new IFoo with
        member this.foo x =
            wasCalled <- true
            0 }

let result = aFunctionThatDependsOnIFoo(mock)

Assert.IsTrue(wasCalled)
Assert.AreEqual("the expected value", result)

Now here’s a C# implementation using MOQ:

var mock = new Mock<IFoo>();
mock.Setup(f => f.foo(It.IsAny<int>())).Returns(0);

var result = aFunctionThatDependsOnIFoo(mock.Object);

mock.Verify(f => f.foo(It.IsAny<int>()), Times.Once());

Assert.AreEqual("the expected value", result);

Like the List module in the first example, MOQ encapsulates the logic required to create mock objects into helper functions, and like the foreach loop, the object expression puts all of that wiring in the test. Code like the foreach loop and object expression is simple to read and write, but it’s code that a library can and should take care of. By factoring out the common bits into a helper library, your code becomes less error prone, less brittle, and easier to maintain. As with LINQ and the Seq/List module, mocking frameworks require that you accept and understand a small degree of abstraction, but it’s the abstraction that makes your code more clear.
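
To make that concrete, here's a rough sketch (a hypothetical helper, not part of any existing library) of how even a tiny F# function could pull the object-expression wiring out of individual tests:

// hypothetical helper: builds a canned IFoo mock and exposes a call flag
let makeFooMock returnValue =
    let wasCalled = ref false
    let mock =
        { new IFoo with
            member this.foo _ =
                wasCalled := true
                returnValue }
    mock, wasCalled

// a test then shrinks to something like this:
let mock, wasCalled = makeFooMock 0
let result = aFunctionThatDependsOnIFoo(mock)
Assert.IsTrue(!wasCalled)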

Scalability

Object expressions are manageable for small interfaces, but they quickly become unwieldy if you want to mock an interface that contains multiple methods. For example, let’s say you want to mock a method on an interface with 4 methods using object expressions:

type IFoo2 =
    abstract foo1: int -> int
    abstract foo2: int -> int
    abstract foo3: int -> int
    abstract foo4: int -> int

let mock =
    { new IFoo2 with
        member this.foo1 x =
            0 
        member this.foo2 x =
            0 
        member this.foo3 x =
            0 
        member this.foo4 x =
            0 }

That's a lot of work just to mock one method a single time. IFoo2 doesn't have any complicated method signatures, and you aren't even validating that methods are called. With a mocking library, all of that work is accomplished in two lines of code.

var mock = new Mock<IFoo2>();
mock.Setup(f => f.foo1(It.IsAny<int>())).Returns(0);

Maintainability

Let's say that you're using the IFoo2 interface in a production codebase with a few hundred tests. Now the requirements change, and you need to add a new method to IFoo2. With the object expression syntax, you have to go through every object expression and add a default implementation for that method. Even if you don't have a separate mock object for every test, it's still a lot of overhead for a relatively small change. With a mocking library, you don't have to change any of your code. All of your tests work the same way they did before, and you can focus on the task at hand instead of maintaining your tests.
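
For example, suppose a hypothetical foo5 method is added to IFoo2. Every object expression in the test suite now fails to compile until it grows another member, even though none of those tests care about foo5:

let mock =
    { new IFoo2 with
        member this.foo1 x = 0
        member this.foo2 x = 0
        member this.foo3 x = 0
        member this.foo4 x = 0
        // required by the compiler after the interface change,
        // even though this test never touches it
        member this.foo5 x = 0 }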

Closing Thoughts

Object expressions are a little better than creating a new type every time you want to mock, but I think the above arguments apply in both cases. I don't know many C# developers who prefer hand-rolling new mock classes over using mocking libraries, and I don't understand what makes object expressions different. That being said, I know there are some great F# coders who have a different opinion, so I would love to hear the other side of the story via comments, Twitter, a fierce blog-off, etc. : )

I should also point out that while I love MOQ for creating mock objects in C#, I wouldn't recommend it for use from F#. It relies heavily on LINQ expressions, which are difficult to create from F#, and even in the best case, it pollutes your code with a lot of quotations. I would gladly pay someone a shiny quarter if they would write an F# wrapper over MOQ or (even better) a mocking library specifically targeted at F#.

Posted in F#, testing, Unit Testing | 2 Responses

Building F# Projects in Expression Blend Without Installing Visual Studio

Recently, one of our projects hit a snag when I added an F# project to our WPF application. While our developers were able to build the project in Visual Studio 2010 and the Expression Blend 4 RC, our designer was unable to build from within Blend because he did not have Visual Studio 2010 installed.

Behind the scenes, Blend builds projects using MSBuild. Unfortunately, fsc.exe and all the other goodies required to build F# projects are not included with Expression Blend at the moment, but there are a couple of simple workarounds.

The easiest fix is to download the F# CTP as a .zip and extract the contents of the FSharp-2.0.0.0\bin folder to C:\Program Files (x86)\Microsoft F#\v4.0\. Restart Blend, and your project should build.

Another way to fix the problem is to extract the contents of FSharp-2.0.0.0\bin to another location on your system and edit your .fsproj file to fall back to that location when the default Visual Studio 2010 path doesn't exist. You should see two lines like this in your .fsproj file:

<Import Project="$(MSBuildExtensionsPath32)\FSharp\1.0\Microsoft.FSharp.Targets" Condition="!Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')" />
<Import Project="$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets" Condition="Exists('$(MSBuildBinPath)\Microsoft.Build.Tasks.v4.0.dll')"  />

Changing those lines to something like the following will also allow Blend to build the project. Note that the example below assumes you extracted the F# CTP to C:\Program Files (x86)\FSharp-2.0.0.0, which is the location that the .msi installer uses.

<Import Project="$(MSBuildExtensionsPath32)\..\FSharp-2.0.0.0\bin\Microsoft.FSharp.Targets" Condition="!Exists('$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets')" />
<Import Project="$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets" Condition="Exists('$(MSBuildExtensionsPath32)\..\Microsoft F#\v4.0\Microsoft.FSharp.Targets')" />

Posted in Expression Blend, F# | 3 Responses

Moving Hosting

Well, it’s been a while since my last update. Although I’ve been hard at work speaking, writing, and buying my first house, a large part of the reason for my hiatus has been switching hosting.

Most of you probably access my blog through my employer SRT Solutions' site. My blog will continue to be published there, but I made the decision to switch to my own hosting so that I have more freedom to experiment with and personalize my blog. Although it has always redirected to my blog, http://chrismarinos.com is now its official home.

This move shouldn’t affect most of you since I’ve already redirected my feedburner feed, but I wanted to let you know of the change nonetheless.

Hopefully I’ll be able to post more frequently in the coming weeks!

Posted in General | Leave a comment

Don’t Misuse Lambdas

Avoid Duplicating Code

It’s great that so many C# and VB.NET developers are taking advantage of LINQ. Unfortunately, using LINQ can encourage you to misuse lambdas. Consider the following simple example:

var results = from x in 0.Through(10)
              select x * x;

Square is a useful utility function, and it shouldn't be written inline as a lambda, because doing so guarantees code duplication instead of reuse. The problem is more obvious using extension method syntax:

var results =
    0.Through(10)
    .Select(x => x * x);

The problem is that in versions of C# prior to 4.0, it's painful to use a reusable Square method here because Select is a generic method with multiple type parameters. You're forced to either specify the type arguments explicitly or fall back to an unnecessary lambda:

//specifying the generic types
var results =
    0.Through(10)
    .Select<int, int>(Math.Square);
 
//using an unnecessary lambda
var results =
    0.Through(10)
    .Select(x => Math.Square(x));

Even in C# 4.0, the query syntax retains the unnecessary lambda problem:

var results = from x in 0.Through(10)
              select Math.Square(x);

Fortunately, the extension method syntax is fixed in 4.0:

var results =
    0.Through(10)
    .Select(Math.Square);

Avoid Multi-Line Lambdas

It’s easy to recognize that Square should not be a lambda, but some functions are less obvious:

IEnumerable<Ninja> MakeFearsomeFightingTeam(SecretOoze ooze,
                                           IEnumerable<Turtle> turtles,
                                           Pizza pizza)
{
    return 
        turtles
        .Select(turtle =>
        {
            var transformed = ooze.Transform(turtle);
            transformed.Say("Cowabunga!");
            transformed.Eat(pizza);
            return transformed;
        });
}

You should avoid writing multi-line lambdas like this where possible. In this case, it makes the code harder to read, especially if you add another operation after the Select. It's difficult to understand what the lambda is doing at first glance. Also, there's a good chance that you will want to reuse the lambda's behavior elsewhere in your program.

IEnumerable<Ninja> MakeFearsomeFightingTeam2(SecretOoze ooze,
                                            IEnumerable<Turtle> turtles,
                                            Pizza pizza)
{
    return
        turtles
        .Select(turtle => Ninjaify(ooze, pizza, turtle));
}
 
Ninja Ninjaify(SecretOoze ooze, Pizza pizza, Turtle turtle)
{
    var transformed = ooze.Transform(turtle);
    transformed.Say("Cowabunga!");
    transformed.Eat(pizza);
    return transformed;
}

The named instance method gives a description to the behavior that was in the multi-line lambda, and the Select statement is more readable. The sacrifice is that Ninjaify has to take extra arguments because it cannot rely on closure. There is also no guarantee that the definition of Ninjaify will remain close to where it is used as you add more code. Additionally, C#’s syntax requires that you use an unnecessary lambda to pass the turtle argument to Ninjaify. The answer is to use a locally defined method:

IEnumerable<Ninja> MakeFearsomeFightingTeam3(SecretOoze ooze, 
                                            IEnumerable<Turtle> turtles,
                                            Pizza pizza)
{
    Func<Turtle, Ninja> Ninjaify = 
        turtle =>        
        {
            var transformed = ooze.Transform(turtle);
            transformed.Say("Cowabunga!");
            transformed.Eat(pizza);
            return transformed;
        };
 
    return
        turtles
        .Select(Ninjaify);
}

Like a lambda, the locally defined method uses closure to avoid redefining variables, and it has the readability benefit of being defined near where it is used. Since it is a named method, it provides a useful description of its behavior, and it doesn't disrupt the flow of your Select call. It's the best of both worlds. The only downside is that it can't be called outside the scope of the MakeFearsomeFightingTeam3 method.

Unfortunately, C# requires you to write the full type signature for the function. The compiler fails with a “cannot assign lambda expression to implicitly-typed local variable” error if you try to use the var keyword. This is a real pain for more complicated signatures, and it keeps you from using locally defined methods to their full potential.

Going Further With F#

In F#, the type signature problem goes away:

let MakeFearsomeFightingTeam4 ooze turtles pizza = 
    let ninjaify turtle =
        let transformed = ooze.Transform(turtle)
        transformed.Say("Cowabunga!")
        transformed.Eat(pizza)
        transformed

    turtles 
    |> Seq.map ninjaify

Here, you can see the benefit of F#’s type inference system and functional programming roots. The behavior of the code is retained, but the extraneous type signature is not required.

F# also fixes the problem of requiring an unnecessary lambda when using named methods defined outside the scope of the MakeFearsomeFightingTeam4 method. Automatic currying makes it easy to pass the arguments that would otherwise be captured by a closure:

let ninjaify ooze pizza turtle =  
    let transformed = ooze.Transform(turtle)  
    transformed.Say("Cowabunga!")  
    transformed.Eat(pizza)  
    transformed

let MakeFearsomeFightingTeam5 ooze turtles pizza =  
    turtles
    |> Seq.map (ninjaify ooze pizza)

In Summary

  • Use lambdas for one-line, single-use functions
  • Prefer locally defined functions to multi-line lambdas, but be wary of complicated type signatures in C#
  • If a function is reusable, move it to the appropriate class or module, and use it instead of a lambda or locally defined function
  • Use F# to avoid messy type signatures and unnecessary lambdas
Posted in C#, F#, Functional | 17 Responses

It’s Beta For a Reason!

Today, I was in the process of creating a branch of Elevate to support .NET 4.0 when I came across a subtle but breaking change in the Enumerable.Count function. The following code works in .NET 3.5 but fails in .NET 4.0 Beta 2:

[TestClass]
public class CountBug
{
    public class MyIList : IList<int>
    {
        public MyIList()
        {
            GetEnumeratorWasCalled = false;
            CountWasCalled = false;
        }
        
        //...non-relevant IList members excluded for this post...
        
        public bool GetEnumeratorWasCalled { get; set; }
 
        public IEnumerator<int> GetEnumerator()
        {
            GetEnumeratorWasCalled = true;
            return null;
        }
 
        public bool CountWasCalled { get; set; }
 
        public int Count
        {
            get
            {
                CountWasCalled = true;
                return 0;
            }
        }
    }

    [TestMethod]
    public void CountBehavior()
    {
        var list = new MyIList();

        Assert.AreEqual(0, list.Count());
        Assert.IsTrue(list.CountWasCalled);
        Assert.IsFalse(list.GetEnumeratorWasCalled);
    }
}

The Enumerable.Count method in .NET 4.0 Beta 2 fails to recognize that MyIList implements ICollection<T>, so instead of returning the .Count property, it calls the MyIList’s GetEnumerator() method and walks every item in the list to determine the count. In .NET 3.5, MyIList is identified as an ICollection<T> implementer and the .Count property is correctly used.

It's also worth noting that it does not seem to be possible to create an MSTest test project in Visual Studio 2010 Beta 2 that targets .NET 3.5. Visual Studio will silently upgrade any of your .NET 3.5 test projects to .NET 4.0, even if you select 3.5 as the target framework when creating the project!

I've filed the following bugs on Connect, so hopefully they get resolved soon!

https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=514122

[update: I initially posted a link to the wrong bug. The above link has been corrected.]

https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=514130

Posted in C#, Elevate, Framework Bugs, Visual Studio | Leave a comment

Designing F# Functions for Currying and the |> Operator

Last week, I led a jam about F# at the Ann Arbor Study Group. One of my SRT Solutions coworkers, Ben Barefield, asked a question that warrants further discussion. After I introduced the forward pipe (|>) operator, Ben asked the following:

In F# programming, do you design functions so the last argument is one that you intend for users to pass via the forward pipe operator?

My first response was a tentative “yes”, but I felt like that put too much focus on the forward pipe operator. After some reflection, I think a better answer is to follow this more general best practice:

In F# programming, prefer ordering function arguments from least varying to most varying.

Normally, you’ll see this discussed in the context of currying and partial application, but I think that it is equally important when considering the forward pipe operator. Let’s take a look at some examples of each.

Currying and Partial Application

We’ll start with the Seq.reduce function. The signature for this function is:

Seq.reduce : ('T -> 'T -> 'T) -> seq<'T> -> 'T

The F# documentation states that reduce is used to “Apply a function to each element of the sequence, threading an accumulator argument through the computation.” In practice, reduce is used to compute a single value from a sequence of values. For example:

> Seq.reduce (+) {0..5};;
val it : int = 15

Here, the computation starts with the first two elements of the sequence, 0 and 1. Reduce applies the addition function to these elements to return 1. This is now our current “state” which we carry over into the next step of computation. Reduce will grab the next element in the list, 2, and call our addition function with that argument and our current state of 1 to produce 3. This process continues until we get our result of 15.
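
A quick way to visualize that accumulator being threaded through is the related Seq.scan function, which yields every intermediate state instead of just the final value (it takes an explicit starting state, 0 here):

> Seq.scan (+) 0 {0..5};;
val it : seq<int> = seq [0; 0; 1; 3; ...]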

Now that we know how reduce works, observe that the arguments are structured from least varying to most varying. When viewed from the standpoint of currying this is handy because it allows us to create useful residual functions through partial application:

> let mySum<'a> = Seq.reduce (+);;

val mySum<'a> : (seq<int> -> int)

> mySum {0..5};;
val it : int = 15

The |> Operator

In F#, it’s common to rewrite the first example from above using the forward pipe operator:

> {0..5}
   |> Seq.reduce (+);;
val it : int = 15

This compatibility with the forward pipe operator also comes naturally as a result of ordering arguments from least varying to most varying. Because the last argument is the one that is most likely to vary, it follows that it is also the argument that we are most likely to pass via the forward pipe operator.
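
The same guideline applies to your own functions. As a small, made-up example, putting the less frequently varying argument (the separator) first and the data last means the function plays nicely with both partial application and the forward pipe operator:

// hypothetical helper: the separator varies less often than the data
let joinWith (separator : string) (items : seq<string>) =
    String.concat separator items

// partial application gives us a residual function...
let commaSeparated = joinWith ", "

// ...and the data argument slots naturally onto the end of a pipeline
["alpha"; "beta"; "gamma"]
|> Seq.map (fun s -> s.ToUpper())
|> joinWith ", "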

Posted in Currying, F#, Functional | Leave a comment

Community Involvement and Elevate

First of all, thanks to all of you who have taken the time to look at Elevate. We have received a lot of excellent feedback. Most of it has been positive, and all of it has been extremely helpful. We didn’t expect to get this much feedback in such a short time, so we’ve been a little behind on providing good support for community involvement. Today, we took a couple steps to fix that problem.

First, we created a Google Group. We’d love to get feedback from you there. You can join here: http://groups.google.com/group/ElevateProject

Second, on the main CodePlex page (http://elevate.codeplex.com), we have added a road map and suggested practices for submitting patches.

Thanks again for all of the feedback we received so far!

Posted in Elevate | Leave a comment

Option Types vs Nullable Types

Some of the feedback that we've received about Elevate has to do with Option types and how they compare to Nullable types in C#. Luke Hoban does a great job of describing some of the differences here:

http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=470052

If you've played around with Option types in F# or another functional language, you should be able to easily follow his argument, but if you haven't been exposed to Option types before, you're probably a bit confused about how they differ from Nullable types. I'll do my best to paraphrase and explain across languages to help you bridge that gap.

In Theory…

In theory, both Option types and Nullable types can be used to accomplish similar goals. Both model calculations that may or may not return a value. For Options and Nullables, an instance either represents a concrete value or the lack of one. Let's take a look at some examples.

Nullable types in C#:

[Test]
public void Nullables()
{
    int? nullableWithValue = 10;
    Assert.IsTrue(nullableWithValue.HasValue);
    Assert.AreEqual(10, nullableWithValue.Value);

    int? nullableWithoutValue = null;
    Assert.IsFalse(nullableWithoutValue.HasValue);
}

Option Types in F#:

let optionWithValue = Some 10

Assert.IsTrue(optionWithValue.IsSome)
Assert.AreEqual(10, optionWithValue.Value)

let optionWithoutValue = None

Assert.IsFalse(optionWithoutValue.IsSome)

Note that in F#, it's common to combine Option types with a technique known as pattern matching, which is beyond the scope of this post. Most F# programmers probably wouldn't use the properties that I demonstrated in the sample code, but they are provided in case you do want to use them.
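
For the curious, the idiomatic F# version of the checks above would look something like this:

match optionWithValue with
| Some value -> Assert.AreEqual(10, value)
| None -> Assert.Fail("expected a value")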

Option Types with Elevate:

[Test]
public void OptionTypes()
{
    Option<int> optionWithValue = Option.Some(10);
    Assert.IsTrue(optionWithValue.IsSome);
    Assert.AreEqual(10, optionWithValue.Value);
 
    Option<int> optionWithoutValue = Option<int>.None;
    Assert.IsFalse(optionWithoutValue.IsSome);
}

In Practice

In practice, things don’t quite work out the way you’d like. For example, consider a “TryFind” function. This function, given a sequence of elements and a predicate, returns the first element where the predicate returns true, or “no element” if the predicate is not matched. In F#, this is written as “Seq.tryFind”. Let’s take a look at an example usage.

let numbers = [0..10]
let five = numbers
           |> Seq.tryFind ((=) 5) 

Despite the F# syntax, this should be easy for most programmers to understand, but let’s see what this looks like in C# using the (just added) TryFind function in Elevate.

[Test]
public void TryFind()
{
    var values = 0.Through(10);
 
    Option<int> result = values.TryFind(x => x == 5);
 
    Assert.IsTrue(result.IsSome);
    Assert.AreEqual(5, result.Value);
}

Those of you familiar with LINQ will recognize that this looks very similar to the overload of .First that accepts a predicate. The difference is in the case where the predicate does not match any element in the sequence. Instead of throwing an exception, TryFind will return None.

[Test]
public void TryFindOnFailure()
{
    var values = 0.Through(10);
 
    Option<int> result = values.TryFind(x => x == 20);
 
    Assert.IsTrue(result.IsNone);
}

Now that we’ve gone over the usage of TryFind, let’s focus on how we might implement it. Here’s how we currently do it in Elevate (minus a few exception checks).

public static Option<TSource> TryFind<TSource>(this IEnumerable<TSource> source,
                                               Func<TSource, bool> predicate)
{
    var results = source.Where(predicate).GetEnumerator();

    if (results.MoveNext())
    {
        return Option.Some(results.Current);
    }
    else
    {
        return Option<TSource>.None;
    }
}

Now, let’s say that we want to implement TryFind using Nullable types instead of an Option type. You’ll notice right away that there’s a problem. Nullable types only work for structs. TryFind needs to be able to return values of any type, not just value types, so right away, we’re stuck.
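
You can see the restriction directly from F#: Nullable<T> carries a struct constraint, while option types work for any type. A minimal sketch:

// fine: int is a value type
let nullableInt = System.Nullable<int>(10)

// will not compile: string fails the struct constraint on Nullable<T>
// let nullableString = System.Nullable<string>("ten")

// option types have no such restriction
let optionalString : string option = Some "ten"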

There's one other, slightly more insidious problem, though. Say that we were able to create Nullable types for classes. Our implementation of TryFind would look similar to what we have above for Option types: we would return null when no item in the input sequence matched, and otherwise we would return the value. But consider the following example.

IEnumerable<string> items = Seq.Build("Alpha", "Beta", "Gamma", null);

string result = items.TryFind(item => item == "Delta" || item == null);

Here, our result value would be null, but we wouldn’t know if null meant that no item was found, or that null was the string value that we matched. It’s a subtle and somewhat contrived example, but it does show one more way in which Option types help to clean up the code.
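
Option types don't have that ambiguity, because "found a null" and "found nothing" are two distinct values. A quick F# illustration:

let items = ["Alpha"; "Beta"; "Gamma"; null]

// found an element whose value happens to be null
let foundNull = items |> Seq.tryFind (fun item -> item = null)       // Some null

// found nothing at all
let foundNothing = items |> Seq.tryFind (fun item -> item = "Delta") // None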

The Bottom Line

To sum things up, Option types and Nullable types are similar in theory, but in practice, they accomplish different goals. In general, I find that using Option types makes for cleaner code and helps to communicate the intent of algorithms more clearly. In Elevate, we use Option types in a few places where Nullable types would not be reasonable. Although these aren't use cases that you touch every day, it's definitely good to have the option (no pun intended) to use whatever method makes the most sense for your situation, and that's why we offer Option types in Elevate.

Posted in C#, Elevate, F#, Functional | 4 Responses

Introducing Elevate

For the past few weeks, a few other SRT Solutions developers and I have been working on a new open source library called Elevate. We went public with the source on CodePlex this weekend, and although we're still in the early stages of development, I already rely on many of the functional programming features of the library in my day-to-day coding. So, without further ado, I'd like to formally announce the Elevate project.

What Is Elevate?

Let’s face it, no library has it all, and the BCL is no exception. If you’re anything like me, then you occasionally find yourself re-writing some utility methods over and over again for each project that you work on. Even though you know it’s wrong, you probably re-invent the wheel from time to time for “simple” things. Maybe you carry around your own “MyUtilities.cs” file from project to project. Either way, in the back of your mind, you know that there has to be a better way.

For C++ programmers, this void is filled with Boost. Boost contains a lot of functionality that is missing from the C++ STL for one reason or another. It’s a great library for C++ development. But what about us poor C# developers?

That’s where Elevate comes in. Elevate is a Boost-like library for .NET. Our goal at SRT Solutions is to capture the things that we think are missing from the BCL and put them in Elevate so that we can share them between our project groups and the rest of the world. By devoting some of our weekly learning time to add these common bits of code to Elevate, we can save ourselves, our clients, and hopefully other .NET developers time and money.

What Do We Have So Far?

We can't add everything overnight, so to start off, we're focusing on functional programming concepts. We've already taken some of the more useful methods and classes from languages like F#, Ruby, and Haskell and added them to our own collection of useful C# utilities. Here are some of the highlights:

Building sequences:

[Test]
public void MixingAndMatching()
{
    //if you have a couple sequences of values
    var first = Seq.Build("alpha", "beta");
    var second = Seq.Build("delta", "epsilon");

    //you can combine them along with some other values to create a new sequence
    var result = Seq.Build(first, "gamma", second, "zeta");
 
    var expected = Seq.Build("alpha", "beta", "gamma", "delta", "epsilon", "zeta");
    Assert.AreEqual(expected, result);
}
 
[Test]
public void Through()
{
    //if we want to easily generate a sequence of incrementing numbers,
    //we can do it like this
    IEnumerable<int> numbers = 1.Through(15);

    var expected = Seq.Build(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
    Assert.AreEqual(expected, numbers);
}

“LINQ Extensions”:

[Test]
public void SelectWithIndex()
{
    //given a sequence of values
    var values = 10.Through(50, 10);
 
    //we can apply a selector function to each element based
    //on the element's value and its index
    var result = values.SelectWithIndex((index, value) =>
                                             value / (index + 1));
 
    var expected = Seq.Build(10, 10, 10, 10, 10);
    Assert.AreEqual(expected, result);
}

[Test]
public void Chunk()
{
    //given a sequence
    var sequence = 1.Through(20);

    //we can split the sequence into a sequence of "chunks" that
    //are each a specific length
    var chunks = sequence.Chunk(5);
 
    var expectedChunks = 
        Seq.Build<IEnumerable<int>>( 
            1.Through(5).ToList(),
            6.Through(10).ToList(),
            11.Through(15).ToList(),
            16.Through(20).ToList());
 
    Assert.AreEqual(expectedChunks, chunks);
}

[Test]
public void Select2()
{
    //given two sequences
    var one = 0.Through(5);
    var two = 2.Through(12, 2);

    //we use Select2 to walk the sequences in parallel and apply a
    //selector function
    var result = one.Select2(two, (elementFromFirst, elementFromSecond) => 
            elementFromFirst * elementFromSecond);

    var expected = Seq.Build(0, 4, 12, 24, 40, 60);
    Assert.AreEqual(expected, result);
}

Pattern Matching:

 [Test]
public void PatternMatchingWithFunctions()
{
    //given a value
    var value = "alpha";

    //we can start a pattern match like this
    var result = value.Match()
        //causes the pattern match to return "empty" if value is null or empty
        .With(string.IsNullOrEmpty, stringValue => "empty")
        //match any string containing "a"
        .With(stringValue => stringValue.Contains("a"), stringValue => "contains a!")
        .EndMatch();
 
    Assert.AreEqual("contains a!", result);
}

[Test]
public void EasierTuplePatternMatching()
{
    //given a tuple
    Tuple<string, int> tuple = Tuple.Create("Da Bears", 2); 

    //We can avoid having to specify the arguments explicitly for the
    //match portion of the predicate like this
    var result = tuple.Match()
        .WithSecond(1, (teamName, wins) => wins + 1)
        .WithFirst("Da Bears", (teamName, wins) => wins)
        .EndMatch();

    Assert.AreEqual(result, 2);
}

Option Types:

[Test]
public void OptionTypesCanContainValues()
{
    //given a value
    var value = 10;
 
    //we can wrap it in an option type like this
    Option<int> option = Option.Some(value);
 
    Assert.IsTrue(option.IsSome);
    Assert.IsFalse(option.IsNone);
    Assert.AreEqual(10, option.Value);
}

[Test]
public void MultipleOperationsWithOptionTypes()
{
    //say we have a few functions that may or may not return a value.
    Func<int, Option<int>> divideIfEven = value =>
        ((value % 2) == 0) ? Option.Some(value / 2) : Option<int>.None;

    Func<int, Option<int>> subtractIfDivisibleByThree = value =>
        ((value % 3) == 0) ? Option.Some(value - 3) : Option<int>.None;

    Func<int, Option<int>> multiplyIfOdd = value =>
        ((value % 2) != 0) ? Option.Some(value * 2) : Option<int>.None;
 
    //we can chain these operations together like this:
    Option<int> result =
        divideIfEven(36)
        .Select(subtractIfDivisibleByThree)
        .Select(multiplyIfOdd);

    //the result of one carries on to the next to yield the expected result
    Assert.IsTrue(result.IsSome);
    Assert.AreEqual(30, result.Value);
}

These are just a few samples of the things you can do with Elevate, but there is a lot more to play around with in the actual library. Hopefully, you're convinced that Elevate already offers a number of interesting functional programming features.

Moving Forward

If you’re interested in the above samples, head on over to http://elevate.codeplex.com and check out the source. All of the above samples are copied right out of the “Elevate.Guide” test project. We wrote this project with the goal that someone who has no experience with Elevate can get up and running quickly just by reading through it.

Over the next few weeks, I will try to post about some of the features in Elevate in more detail. If you’re a C# programmer interested in functional programming, a functional programming guru who wants to see examples of functional programming in C#, or simply someone interested in seeing cool and useful language extensions, stay tuned for more detailed posts.

Finally, we would love to hear any feedback (good or bad) and any feature requests that you might have. There are a number of ways to get in contact with us. You can submit comments below, start a discussion or submit a review on the CodePlex page, send an email through CodePlex, or send me a tweet (my username is ChrisMarinos on both CodePlex and Twitter). Also, speaking of Twitter, be sure to follow @elevateproject for updates!

Update: Check out our Google Group at http://groups.google.com/group/ElevateProject!

Posted in C#, Elevate, F#, Functional, SrtInsights | 56 Responses