Simplify your workflow with Git Aliases

Git is a great tool.

When using it from the command line (which I would recommend to everyone), there can be a lot to remember.

As with everything, I like to customise how I work and Git is no exception.

Adding a git alias is a simple process.

From your terminal/PowerShell type:
git config --global alias.{alias} {git command}

Below are the aliases I use on a daily basis which really speed up my workflow (note that multi-word commands need quoting, and external commands such as gitk need a leading !):


git config --global alias.s status
git config --global alias.l log
git config --global alias.lo "log --oneline"
git config --global alias.cm "commit -m"
git config --global alias.cd "checkout development"
git config --global alias.co checkout
git config --global alias.ap "add -p"
git config --global alias.rhh "reset HEAD --hard"
git config --global alias.rcb "rebase -i"
git config --global alias.rlt "rebase -i HEAD~10"
git config --global alias.pick cherry-pick
git config --global alias.ka "!gitk --all"
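
If you want to check what has been registered, git config can list the aliases back out:

git config --global --get-regexp '^alias\.'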

If you are having trouble with Folder Redirection in Windows, you can copy these aliases into the local config of the git project: ./{gitProject}/.git/config, under the [alias] section:


[alias]
s = status
l = log
lo = log --oneline
cm = commit -m
cd = checkout development
co = checkout
ap = add -p
rhh = reset HEAD --hard
rcb = rebase -i
rlt = rebase -i HEAD~10
pick = cherry-pick
ka = !gitk --all
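
With these in place, the day-to-day commands shrink nicely:

git s                      # git status
git lo                     # git log --oneline
git cm "Add the widget"    # git commit -m "Add the widget"
git rlt                    # interactively rebase the last 10 commits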

Vim Keys in Visual Studio

Anyone who writes applications on the Windows platform will likely spend most of their time in Visual Studio.

Although I find Visual Studio pretty bloated, it ends up being a necessity when working with clients, as it’s a tool the rest of their team will be used to.

To make Visual Studio friendlier to Vim users there is a plugin called VsVim, which is available in the Marketplace:
VsVim – Visual Studio Marketplace

Below is the config I’ve been using for over a year; it really makes using Visual Studio a much nicer experience. VsVim should pick it up from a .vsvimrc (or _vsvimrc) file in your home directory. Note the trailing <CR> on each mapping: without it, the :vsc command would only be typed into the command line, not executed.

I’ve tried to group the features and really like using , as the leader key, especially when using the Halmak keyboard layout.

Happy Vimming!!


" Navigation
nnoremap ,. :vsc View.QuickActionsForPosition
nnoremap ,, :vsc Edit.GoToAll
nnoremap ,g :vsc Edit.GoToImplementation
nnoremap ,d :vsc Edit.GoToDefinition
nnoremap ,f :vsc Edit.FindAllReferences
nnoremap ,mn :vsc View.NavigateForward
nnoremap ,mp :vsc View.NavigateBackward
nnoremap ,mm :vsc View.NavigateBackward

nnoremap ,s :vsc File.SaveAll
nnoremap ,x :vsc File.Close

" Errors - ,eX
nnoremap ,ee :vsc View.ErrorList
nnoremap ,en :vsc View.NextError
nnoremap ,ep :vsc View.PreviousError

" Refactoring - ,rX
nnoremap ,ri :vsc Refactor.ExtractInterface
nnoremap ,rr :vsc Refactor.Rename
nnoremap ,rf :vsc Edit.FormatDocument
" comment isn't working
nnoremap ,cc :vsc Edit.ToggleBlockComment
"nnoremap ,cc :vsc Edit.ToggleLineComment

" Tests - ,tX
nnoremap ,ta :vsc TestExplorer.RunAllTests
nnoremap ,tt :vsc TestExplorer.RunAllTestsInContext
nnoremap ,tc :vsc TestExplorer.RunAllTestsInContext
nnoremap ,ts :vsc TestExplorer.RunSelectedTests
nnoremap ,td :vsc TestExplorer.DebugAllTestsInContext
nnoremap ,tf :vsc TestExplorer.RunFailedTests

" Build - ,bX
nnoremap ,bb :vsc Build.BuildSolution
nnoremap ,bc :vsc Build.CleanSolution
nnoremap ,br :vsc Build.RebuildSolution
nnoremap ,bd :vsc Debug.Start
nnoremap ,bw :vsc Debug.StartWithoutDebugging
nnoremap ,bq :vsc Debug.StopDebugging

" Window - ,wX
nnoremap ,wn :vsc Window.NextTab
nnoremap ,wp :vsc Window.PreviousTab
nnoremap ,ww :vsc Window.MoveToMainDocumentGroup
nnoremap ,wa :vsc File.CloseAllButThis
nnoremap ,w/ :vsc Window.NewVerticalTabGroup
nnoremap ,wd :vsc Window.CloseDocumentGroup
nnoremap ,wh :vsc Window.MovetoPreviousTabGroup
nnoremap ,wl :vsc Window.MovetoNextTabGroup
nnoremap ,' :vsc Window.NextDocumentWindowNav

" Ncrunch - ,nX
nnoremap ,nn :vsc NCrunch.GotoNextBuildorTestFailure
nnoremap ,nh :vsc NCrunch.HotSpots
nnoremap ,nm :vsc NCrunch.Metrics
nnoremap ,na :vsc NCrunch.RunAllTestsRightNow
nnoremap ,ns :vsc NCrunch.Showcoveringtests
nnoremap ,np :vsc NCrunch.PincoveringteststoTestsWindow
nnoremap ,nu :vsc NCrunch.UnpincoveringtestsfromTestsWindow
" nnoremap ,nu :vsc NCrunch.UnpinalltestsfromTestsWindow

nnoremap ' `

Concatenating files in PowerShell

When working with databases, it’s best practice to create migration scripts which apply your database changes, rather than using GUI/designer tools which need you to manually apply changes across multiple environments.

I’ve been working with a client where there isn’t a specific migration tool to run all these migration files; at the end of the project they need to be bundled into a single file and attached to the change tracking system.

I keep these files under version control, prefixing migration files with M_ and rollback files with R_.

In Bash concatenating these files is simple:
cat M_* > ./migration.sql

In PowerShell it isn’t too different:
Get-Content M_* | Set-Content .\migration.sql

When dealing with rollback scripts, I create the rollback which opposes the migration with the same number, which makes the files easy to navigate:


./M_001_AddsTable.sql
./M_002_AddsView.sql
./R_001_AddsTable.sql
./R_002_AddsView.sql

Rollback scripts are slightly more interesting as they need to be run in reverse.

We can also achieve this in PowerShell with a minor tweak to our previous command:
Get-ChildItem R_* | Sort-Object -Property Name -Descending | Get-Content | Set-Content .\rollback.sql

This first lists all our rollback files, reverses their order, then reads the content of each file and writes it to a single rollback file.
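
For completeness, the Bash equivalent just reverses the file list before concatenating (assuming the filenames contain no spaces):

cat $(ls -r R_*) > ./rollback.sql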

Although I prefer doing these types of tasks in Bash, the PowerShell commands are still easy enough to remember if you don’t have access to Bash on a Windows system.

Using a DualShock 3 controller with Steam on Linux via Bluetooth

I was really surprised as to how easy this ended up being.

Setup

Open up your favourite terminal, and type bluetoothctl.

Next, connect your controller by USB and you should be prompted as to whether you trust the device; type ‘yes’.

Disconnect the USB cable and press the PS button on the controller.

After a few flashes of the LEDs it should pair successfully.

You can repeat the above procedure for additional controllers as required.

Disconnecting

The next problem is how to disconnect the controller without needing to turn off bluetooth or power off your computer.

Back to the terminal and a simple command will disconnect all connected controllers, saving your precious battery.

bluetoothctl devices | awk '{print $2}' | xargs -I{} bluetoothctl disconnect {}

Let’s break this down.

Breakdown

bluetoothctl devices will list all known devices

All we care about is the ID of the controller, so piping the output to awk we can pick out just the second field (the ID).

Next we pipe the IDs back into the bluetoothctl command via xargs and disconnect each controller.

Nice and easy, and you could even map this command to a keyboard shortcut. I’ve simply created a bash alias, ds3disconnect, to save my fingers.
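
For reference, mine is defined in .bash_aliases as a small function rather than a literal alias, which sidesteps the quote-escaping an alias would need:

# Disconnect every known Bluetooth device, saving the controllers' batteries
ds3disconnect() {
    bluetoothctl devices | awk '{print $2}' | xargs -I{} bluetoothctl disconnect {}
}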

Steam should detect the controller without a hitch and you can get some wireless Session flip trick goodness.

You Are Not Netflix: Microservice Madness

It seems like in recent times the term “monolith” has become a dirty word. A behemoth, a beast, something to slay and revile. The “obvious” antithesis is the trendy “microservice”. Lean, agile, forward-thinking, future-proof. Watching tech talks from giants such as Netflix, we get a glimpse into how well-architected systems can scale globally, with distinct, isolated units working together to deliver a lifetime’s worth of content to all our devices.
Great, you might think, that is how my project should be structured. SOA at its finest. The problem is… you are not Netflix.

For the majority of companies I’ve worked for and with, the average development team size has been in the region of 3 – 10, with applications that need to handle hundreds of users daily.
Day to day, these developers build and maintain relatively straightforward applications to fulfil business and customer needs, or help streamline internal workflows.

I’d feel inclined to suggest this is a common environment most developers will find themselves in. Even within a larger company it’s likely you’ll be placed within a team focusing on a specific product.

It’s at this scale that I question whether microservices actually offer any benefit over the complexities they introduce. A talk from NDC 2017, Jimmy Bogard’s “Avoiding Microservice Megadisasters”, really highlights just how disastrous microservices can be when done wrong. The horrifying reality from Jimmy’s cautionary tale is this: 9.5 minutes to render a homepage, with enough HTTP requests bouncing around to saturate the internal network. Contrast this with their existing “monolithic” WebForms site, which was servicing thousands of requests and still generating billions in revenue, albeit whilst showing signs of ageing, decay and neglect.

In reality, the developers involved probably had the best intentions. Some might have wanted to show their ‘seniority’ and ability to formulate incomprehensible logic flows and network diagrams; others might have been unsure and just doing as they were directed. In the story told, the main architect jumped ship prior to the ill-fated maiden voyage, but with an 18-month development cycle it’s likely many other developers also left within that time.

The talk suggests scope creep and developers “inventing” requirements in order to further their own ambitions within the business, or to add the latest buzzwords to their soon-to-be-recirculated CVs. None of these are good “business reasons” to adopt such a risky strategy.

My experience suggests asking the following questions before deciding to dive head first into a microservice architecture:

What problem are you attempting to solve?

This is the first question when deciding to introduce any change to existing processes and procedures. Without a clear goal and a means of measuring success you are likely setting yourself up for failure.

Has the problem been identified through the collection of metrics, or is it just based on gut feeling and intuition?

It’s easier to point the finger and blame one part of the application for causing performance issues, but without hard evidence it’s just noise. Without a current metric and a desired outcome, how can you measure success?

Is there a simpler solution which doesn’t fragment the existing infrastructure?

It’s surprising to find that the solution to a bottleneck might be as simple as adding a missing database index, or lending some careful attention to unnecessarily repetitive or cumbersome logic. Try adding logging to capture how long particular functions/IO operations actually take, and identify where you can get the biggest wins.

How much knowledge is there of microservice architecture within the team?

A single point of knowledge suggests that there may be a skill shortage within your team, and initial training is required before moving forward. This training will empower your team to make better decisions and help handle any bumps along the way.

Is your existing deployment process automated and well-oiled?

If you are performing manual deployments of your existing application, adding more manual deployments will just compound your existing problems; even if a microservice is the solution, you’ll simply move the problem to deployments. More troublesome deployments will likely lead developers to deploy less frequently, reducing the business’s agility and its ability to implement new ideas and improvements quickly.

What monitoring and alerting is in place for existing infrastructure?

If there is little to no monitoring of existing applications/servers/databases/services, increasing any or all of these items will lead to problems that are more frequent, and harder to identify. Create a baseline of what good monitoring looks like and then ensure this is met on existing applications/infrastructure before adding more.

Are you adding the appearance of separation, but still maintaining a single point of failure?

If your microservice relies on the single “main” database, or on another microservice, then there is still a single point of failure and it is unlikely the microservice will actually offer any benefit. A microservice should operate independently, and any errors that occur should be handled gracefully by all consumers.

There are definitely circumstances where microservices do make sense. Scaling horizontally is more efficient and allows you to handle spikes in traffic without dealing with costly infrastructure on-site or in the cloud.

A microservice has the potential to increase security and reduce duplication; for instance, centralising authentication and authorisation into a single microservice can mitigate a rogue code change opening up your sensitive data to everyone and their dog, and save multiple developers reimplementing the same logic time and again. Obviously there are other processes that need to be applied, and adding a microservice won’t suddenly solve those problems too.

Likewise, the simpler solution might be to version and bundle your authentication code into an npm/NuGet package and import it where required. If spikes of traffic are tanking performance, maybe try piping requests to a queue and adopting an “eventually consistent” approach to your database reads, throttling requests to maintain overall system performance whilst still allowing business-critical functions to continue.

In a 2016 blog post titled “The Majestic Monolith”, David Heinemeier Hansson discusses how Basecamp has continued with their “majestic monolith”, delivering a product available over the web, in native mobile apps, and in desktop apps on Windows and Mac. At the time of writing, a team of 12 developers was maintaining and developing the software, supporting millions of users. Granted, it appears they have introduced a few “shared services” where appropriate (Basecamp ID is noted as falling into this camp, handling shared authentication for all generations of the Basecamp app), although this wasn’t without cost: the smaller systems made it much easier to silo knowledge and responsibilities.

He also raises an interesting point: keeping your system as a monolith can help avoid that very problem, keeping responsibility for the product firmly in the “team” realm rather than with an individual.

Ultimately, my final suggestion would be: walk before you run. Ticking the microservices box “just because” could sap valuable resources from other endeavours which might benefit your team, business and code base more significantly. Prefer quick “wins” that solve identified problems over adding extra complexity and significant changes to architecture. Once your team is confident there are no other improvements to make, maybe take a look into microservices. Or then again, maybe don’t.

Inlining JS and CSS files in an ASP.NET MVC View

As part of testing the performance of some views I’ve been working on recently, I kept seeing the following warning in the Chrome dev tools:

"[Deprecation] Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check https://xhr.spec.whatwg.org/.",

When creating new views I’d created my “components” as 3 separate files: Component.cshtml, Component.js and Component.css. These were then included with either link or script tags in the .cshtml file for the partial view. This is something I’ve become used to when working with Angular, so it feels wrong to throw everything in one file.

All my $.ajax requests in the .js files were async (as they are by default), so it had to be the import.

As Chrome was correctly identifying, including the files in this way wasn’t optimal, but what was the alternative?

Inlining

Now, this approach might still not be ideal, but I didn’t want to add any additional steps to the existing build process or include any extra dependencies, and I wanted it to be simple for other developers to use in their own code. So the first step was to get the script’s content into the .cshtml.

A simple solution was to use the answer from this SO question.

First, create an extension method which reads the text from the path provided and returns it as an HtmlString so it is included on the page.

/Helpers/HtmlHelperExtensions.cs

// Requires: using System.IO; and using System.Web.Mvc;
public static class HtmlHelperExtensions
{
    public static MvcHtmlString InlineScriptBlock<TModel>(this HtmlHelper<TModel> htmlHelper, string path)
    {
        var builder = new TagBuilder("script");
        builder.Attributes.Add("type", "text/javascript");

        // Resolve the virtual path (~/...) to a physical path on disk
        var physicalPath = htmlHelper.ViewContext.RequestContext.HttpContext.Server.MapPath(path);
        if (File.Exists(physicalPath))
        {
            builder.InnerHtml = File.ReadAllText(physicalPath);
        }

        return MvcHtmlString.Create(builder.ToString());
    }
}

This can then be used in a view file in place of the script import:

/Views/MyView/MyView.cshtml

@Html.InlineScriptBlock("~/Views/MyView/MyView.js")

<p>This view doesn't do much yet!</p>

Applying these changes to my code made the warning go away, but it still felt like it needed improving: since no modifications were being made to the included JavaScript, it still contained unnecessary whitespace and could be made smaller. This will be covered in the next post, about bundling.

Sharing bash functions between scripts

I’ve spent most of the past week writing bash functions to help automate build and deployment at work, and something I’ve never really paid much attention to is how to keep these .sh files structured. Trying to keep functions small and generic helps with reuse and saves me from having to write more code, and once I’ve tested that something works, hopefully I won’t need to touch it again.

To quote the first item of the summarised Unix philosophy:

Write programs that do one thing and do it well.

It’s also made me think about another horrible mess of code that I’ve come to depend on, my .bash_aliases file.

This has truly become a dumping ground over the years, saving me from having to remember almost everything I have done at one point or another. At a whopping 1474 lines, my little black book of functions is something I lean on a lot but have never really taken the time to trim or polish, apart from some headings in the form of comments.

Creating some structure

So the first step in organising this mess is to group similar functions together. Near the very top of my .bash_aliases file I find this:

##########  Git aliases ##################
alias gs='git status'
alias ga='git add'
alias gd='git diff'
alias gc='git commit'
alias gca='git commit -a'
alias gb='git branch'
alias gbr='git branch --remote'
#########################################

So there are some aliases I use because I’m lazy; shaving off those extra keystrokes should save some wear and tear on my little digits. I’ve already noted in the file that this group applies to Git, so that’s a great starting point.

Let’s create a new directory to store my newly structured files:
mkdir ~/scripts
Now there’s a new directory in my home directory called scripts. The next step is to copy the contents from .bash_aliases to a new file called git.sh within this folder.

Setting up the files

I could just copy and paste the contents in Emacs, but where’s the fun in that? Instead I can use sed to grab the lines I want and output them to the new file without having to leave the terminal.

sed -n '19,27p' ~/.bash_aliases > ~/scripts/git.sh

So here we are using sed to take lines 19 to 27 from ~/.bash_aliases and output them to ~/scripts/git.sh. Easy. Bash files should also have ‘#!/bin/bash’ at the top, to tell the OS which interpreter should be used for the file. The following way seems slightly hacky but, again, means I don’t need to leave the terminal.

sed -i '1s:^:#!/bin/bash\n:' ~/scripts/git.sh

This sed command inserts the required text at the start of the first line, followed by a new line, giving us what we want.

Next we need to delete those lines from .bash_aliases. Again, let’s stay in the terminal; we’ve almost written the command we need already.

sed -i '19,27d' ~/.bash_aliases

So, sed lessons are over (almost), our files are set up and I’ll try and get this post back on track.

If we were to open a new terminal at this point and try one of the aliases, we’d receive a command not found message. This can easily be fixed, and I’m going to use sed again, because why not?

sed -i '2s:^:\n\. ~/scripts/git\.sh\n\n:' ~/.bash_aliases

Here we are inserting a new line into our .bash_aliases file, using the . command to source the contents of our git.sh file. Because .bash_aliases is run every time we open a bash shell, the contents of ~/scripts/git.sh are now available at all times, whilst our git alias code is kept separate.
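
To recap the end state: ~/scripts/git.sh now starts with the shebang, followed by the aliases:

#!/bin/bash
alias gs='git status'
alias ga='git add'
# ...and the rest of the git aliases

while .bash_aliases simply sources it:

. ~/scripts/git.sh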

I’ve got lots of functions that can be grouped together, so I envisage creating an azure-cli.sh file, a docker.sh, a code-generation.sh and many others in the future.

Repeating this over the coming months should really help whip my .bash_aliases file into shape.

Conclusion

So this post devolved into a post about the power of sed rather than what it was initially supposed to be, but the TL;DR is that:

1) massive files are bad
2) the . command will source or import code from another file
3) grouping similar functions into separate files helps make navigation of code easier
4) with the . command we can share code between files without repeating ourselves
5) we can use sed to manipulate text files succinctly and precisely with a couple of keystrokes

If you’ve made it this far, thanks for reading.

Running MSBuild on Windows in Git Bash

So I’m in the process of automating the build and publish of a .NET WinForms application with Squirrel on Windows.

The first step in achieving this is to get the project building programmatically, outside of Visual Studio.

Coming from a Linux background I prefer working with Bash over PowerShell or Batch, and Git Bash is my terminal of choice within Windows.

Discovering the tools we need

The application uses .NET Framework 4.6.1 and I wasn’t sure which version of the build tools was needed. To find out what tools were already installed on my system, I ran the following from PowerShell:

dir HKLM:\SOFTWARE\Microsoft\MSBuild\ToolsVersions\

This listed a couple of versions: 2.0, 3.5 and 4.0.30319. I decided to try building the project with the latest installed tools using:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe C:\path\to\solution.sln

Which caused the following error:
Project file contains ToolsVersion="15.0".

So now I know I need version 15.0 of the build tools.

One mystery solved.
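
As an aside, the same registry keys can be queried without leaving Git Bash, since reg.exe is available on a standard Windows install:

reg query 'HKLM\SOFTWARE\Microsoft\MSBuild\ToolsVersions'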

Downloading and Installing the BuildTools

A quick search led me to this download page and after downloading the Build Tools for Visual Studio 2017, I ran the executable.

As I’m only currently trying to build for a particular framework, under the Windows section I checked .NET desktop build tools, and only ticked the optional Testing tools core features - Build Tools, as .NET Framework 4.6.1 SDK and targeting pack is included by default.

Now we’re cooking with gas.

So now we’ve got the correct build tools, let’s check we can build the project successfully. From a PowerShell terminal, type the following (the path contains spaces, so it needs quoting and PowerShell’s & call operator):

& "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe" "C:\path\to\project\solution.sln"

Build Successful!

Working with Bash

Now we’ve proven that the tools work, we just need to make them accessible from Git Bash.

In order to be able to access MSBuild.exe from Git Bash, we’ll need to add the directory containing MSBuild.exe to the $PATH environment variable. This is quick and easy.

PATH="$PATH:/c/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/MSBuild/15.0/Bin"

Some things to note from the above command:
– Forward slashes are used instead of backslashes for paths in Bash
– We can access the C:\ drive as /c in Git Bash
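
Note that this change only lasts for the current session. To make it permanent, append the same line to ~/.bashrc (the path below assumes the default Build Tools install location):

# ~/.bashrc
export PATH="$PATH:/c/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/MSBuild/15.0/Bin"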

Now we can run the following command from our Git Bash window:

MSBuild.exe "/c/path/to/solution/solution.sln"

We should get identical output to when we ran the command initially in Powershell.

Adding switches to MSBuild

The final thing we will want to do is add some switches to the build command so we can Clean/Build/Rebuild and set the Configuration to Debug or Release.

In PowerShell we could run the following:

MSBuild.exe "C:\path\to\solution\solution.sln" /t:Rebuild /p:Configuration=Release

Because Git Bash treats arguments starting with / as file paths (and tries to convert them to Windows paths), we need to double up the / on each switch. So in Git Bash the above command becomes:

MSBuild.exe "/c/path/to/solution/solution.sln" //t:Rebuild //p:Configuration=Release

And there we have it: building a .NET solution with Git Bash, and it only took a few minutes. By putting this in a script, we can automate the building of our WinForms solutions, ready for packaging and deployment with Squirrel.
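
As a starting point, the whole thing fits in a tiny script. This is just a sketch; the solution path is a placeholder to swap for your own:

#!/bin/bash
# build.sh - rebuild the solution in Release configuration
set -e
PATH="$PATH:/c/Program Files (x86)/Microsoft Visual Studio/2017/BuildTools/MSBuild/15.0/Bin"
MSBuild.exe "/c/path/to/solution/solution.sln" //t:Rebuild //p:Configuration=Release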

Resources

In discovering the above, the answers below on StackOverflow helped steer me in the right direction.

  1. https://stackoverflow.com/questions/328017/path-to-msbuild
  2. https://stackoverflow.com/questions/17904199/automate-git-bisect-for-msbuild-nunit-net-windows-batch-commands-in-msysgit

Bonus Points

As described in the answer to this question, we could also script the install of the build tools rather than using the GUI, because who in their right mind wants to use the GUI???

https://stackoverflow.com/questions/42696948/how-can-i-install-the-vs2017-version-of-msbuild-on-a-build-server-without-instal

That will be the next step so we can script the install of our build tools, making it effortless to configure a new build server.

Unable to access the Puppet Learning VM in VirtualBox

Puppet is an open-source automation platform designed to help automate software deployments and management of IT Infrastructure.

Helpfully, they have a VM which acts as a “self-contained learning environment” so you can get to grips with what is possible.

After downloading the VM from here, extracting the contents of the zip file and starting it in VirtualBox, I was presented with a screen listing an IP address to visit.

Trying to access the IP listed there didn’t work.

The fix was to shut down the VM, open the Settings in VirtualBox and update the Network > Adapter > “Attached To” value from “NAT” to “Bridged”.
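
If you’d rather stay in the terminal, the same change can be made with VBoxManage while the VM is powered off. The VM name below is an assumption; VBoxManage list vms will show yours, and you may also need --bridgeadapter1 to pick the host interface:

# VM name assumed; check yours with: VBoxManage list vms
VBoxManage modifyvm "Learn Puppet" --nic1 bridged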

After starting the VM again, the IP address had been updated to 192.168.1.119, which was accessible through a browser, and I could continue with the guide.

UPDATE

Funnily enough, slightly further down the Puppet documentation lists the above as a solution.

Puppet Adapter Bridged Configuration

At least the documentation was thorough enough that, had I read a little further, I’d have been able to easily sort this issue.

Using variables in strings in C#

Strings are great. Letters, numbers and symbols, all at the same time?

Awesome

Soon enough, hard-coded strings aren’t going to cut it and you’ll want them to be more dynamic.

Below are 3 methods of mixing variables into your strings:

The Concat Operator +

The + operator, when used on string variables, will allow you to join them together.

So given the variables firstName and lastName, we could join them together like so:

string firstName = "Frank";
string lastName = "Castle";
string fullName = firstName + lastName;
Console.WriteLine(fullName);

If you run this example your output will be “FrankCastle”.

Unfortunately this doesn’t give us exactly what we want.

To add a space to separate the names, update your code to the following:

string firstName = "Frank";
string lastName = "Castle";
string fullName = firstName + " " + lastName;
Console.WriteLine(fullName);

This will output “Frank Castle”. Exactly what we want.

As you can see, we’ve added an extra string between the 2 variables containing no characters other than a single space.

For those unfamiliar, Frank Castle has an alias; let’s add that between his firstName and lastName for full effect.

string firstName = "Frank";
string lastName = "Castle";
string fullName = firstName + " 'The Punisher' " + lastName;
Console.WriteLine(fullName);

The output will be: “Frank ‘The Punisher’ Castle”.

Something to note is that we’ve purposefully used single quotes (') rather than double quotes ("), so we don’t need to worry about “escaping” the double quotes. I’ll cover that in a future post.

Just to reiterate, we could have used the following to get our fullName variable, but we lose the flexibility and context that our variable names provide:

string fullName = "Frank" + " 'The Punisher' " + "Castle";

String.Format()

.NET comes with lots of handy functionality built in. The String.Format() function helps simplify string concatenation by allowing us to define how we want the output string to be structured, and then pass in the variables to use as arguments afterwards.

In order to achieve the output from our previous example we can use the following code:

string firstName = "Frank";
string lastName = "Castle";
string fullName = String.Format("{0} {1}", firstName, lastName);
Console.WriteLine(fullName);

The output again will be: “Frank Castle”.

To explain what is happening: the Format function is taking 3 arguments; the first is a format string, and the second and third are our variables. In the format string we define where we want our variables to appear in the output string. {0} is our firstName argument, and {1} is our lastName argument. The reason we access them as 0 and 1, and not 1 and 2 as you might expect, is that C# uses zero-based numbering. You will come across this when you use loops, arrays and lists. What it means for us is that our first item is located at position 0 and our second is at position 1.

Something nice is that we can now easily move our variables. If we wanted the output to read “Castle Frank”, we can do the following:

string firstName = "Frank";
string lastName = "Castle";
string fullName = String.Format("{1} {0}", firstName, lastName);
Console.WriteLine(fullName);

By swapping the index values our output string displays as expected.

We can also duplicate values by repeating their index. To get the output string “Frank Frank Castle” we do the following:

string firstName = "Frank";
string lastName = "Castle";
string fullName = String.Format("{0} {0} {1}", firstName, lastName);
Console.WriteLine(fullName);

Something else to be aware of is we can add in extra text as we please. To give Frank his full title, we just need:

string firstName = "Frank";
string lastName = "Castle";
string fullName = String.Format("{0} 'The Punisher' {1}", firstName, lastName);
Console.WriteLine(fullName);

We are still using single quotes so we don’t have to do any “escaping”.

String Interpolation

Saving the best ’til last: string interpolation. We get the simplicity offered by String.Format() and the clarity of the concatenation operator.

By simply adding a $ in front of the opening double quote, we can use our variables by name. Our 3 examples from above are recreated below:

string firstName = "Frank";
string lastName = "Castle";
string fullName = "{firstName} {lastName}";
Console.WriteLine(fullName);

Outputs: “Frank Castle”

string firstName = "Frank";
string lastName = "Castle";
string fullName = $"{firstName} {firstName} {lastName}";
Console.WriteLine(fullName);

Outputs: “Frank Frank Castle”

string firstName = "Frank";
string lastName = "Castle";
string fullName = $"{firstName} 'The Punisher' {lastName}";
Console.WriteLine(fullName);

Outputs: “Frank ‘The Punisher’ Castle”

Conclusion

By now this code should make sense; you should understand each method and be able to find one which suits your situation.

In practice I only use string interpolation now, but it’s good to know what options are available, even if you forget the ones you don’t need.

An honourable mention goes to the StringBuilder class, which offers increased performance when dealing with lots of concatenations, and is probably worth a post in itself.

That wraps up the first post in my series for beginners, hopefully it has been useful.

More Reading

If you’d like some more information on how the + operator works, see this StackOverflow answer for more details.