Converting a big project to .NET Standard without a big bang

When you have a big project on the .NET full framework and you want to convert it to .NET Standard / Core, multi-targeting is usually a viable way to avoid a big bang conversion. You start with the very first assembly in the dependency chain, the one that does not depend on any other assembly in the project, and you check .NET Standard compatibility for all of its referenced NuGet packages. Once the first project is done, you proceed with the remaining ones.

The very first step is converting all project files to the new project format, leaving every project targeting the full framework; then you can use a nice technique called multi-targeting, starting with the aforementioned first assembly of the chain.

Multi-targeting allows you to target both frameworks with a single Visual Studio project.

To enable it, just edit the project file and change TargetFramework to TargetFrameworks (mind the final s), specifying that you want the project compiled for both the full framework and .NET Standard. Including .NET Standard in the list of target frameworks requires removing all code that depends on the full framework, but this is not always achievable in a single big bang conversion, because the amount of code could be really high.

Figure 1: Multi Targeting in action
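
Since the screenshot is not reproduced here, this is a minimal sketch of the relevant csproj fragment (using the same two monikers referenced later in this post):

  <PropertyGroup>
    <TargetFrameworks>net461;netstandard2.0</TargetFrameworks>
  </PropertyGroup>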

To ease the transition I usually add some constants, so I can use #if directives to isolate pieces of code that should be compiled only when the project is targeting the full framework.

  <PropertyGroup Condition=" '$(TargetFramework)' == 'netstandard2.0'">
    <DefineConstants>NETCORE;NETSTANDARD;NETSTANDARD2_0</DefineConstants>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(TargetFramework)' == 'net461'">
    <DefineConstants>NET45;NETFULL</DefineConstants>
  </PropertyGroup>

Thanks to conditional compilation we can have code that is compiled only for the full framework, or a different implementation of a given class for each target (full or netstandard).
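
As a minimal sketch (class and member names are purely illustrative), the constants defined above are used like this:

  public class CacheInvalidator
  {
  #if NETFULL
      // Compiled only for net461: free to use full framework APIs here.
      public void Invalidate() { /* full framework implementation */ }
  #else
      // Compiled only for netstandard2.0.
      public void Invalidate() { /* netstandard implementation */ }
  #endif
  }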

After multi-targeting is enabled and netstandard is one of the targets, the project usually stops compiling, because it usually contains some code that depends on the full framework. There are two distinct problems: A) NuGet packages that do not support netstandard, and B) references to full framework assemblies. To solve both you must use conditional referencing, including each reference only for a specific framework version.

Figure 2: Conditional reference in action.

As you can see in Figure 2, I can reference different NuGet packages or assemblies depending on the framework version being compiled. The nice part is being able to reference a different version of a library (e.g. System.Buffers) for the full framework and for .NET Standard.
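
The image is not reproduced here, but a conditional reference block in the csproj looks roughly like this (package versions are illustrative):

  <ItemGroup Condition=" '$(TargetFramework)' == 'net461' ">
    <PackageReference Include="System.Buffers" Version="4.4.0" />
  </ItemGroup>

  <ItemGroup Condition=" '$(TargetFramework)' == 'netstandard2.0' ">
    <PackageReference Include="System.Buffers" Version="4.5.0" />
  </ItemGroup>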

At this point the project usually still does not compile, because code references classes that are not available in netstandard; as an example, in Figure 2 you can verify that netstandard misses lots of references, like System.Runtime.Caching and System.Runtime.Remoting. For all code that uses references not available in netstandard, just use an #if NETFULL directive to compile those classes only for the full framework. This is not a complete solution, but at the end you will have a project that is still fully functional on the .NET full framework, plus a nice list of #if NETFULL directives that identify exactly the parts not available in netstandard. Now you can continue working as usual while slowly upgrading all that code to netstandard.

Now repeat the process for every project in the solution. Usually there are two scenarios:

1) The vast majority of the code is fully functional with netstandard, and there are very few points in the code where you use #if NETFULL.
2) Some classes / functionality available only in the full framework are used extensively throughout your code, and the amount of code behind #if NETFULL becomes too big.

Point 1 is the good outcome: you can continue working with the full framework and convert the remaining code at your own pace. If you find yourself in point 2, for example because some code uses MSMQ, you should isolate the full framework dependent code in a single class, then use Inversion of Control to inject the concrete class. Instead of having lots of points in the code that use MSMQ directly, abstract all that code behind an IQueue interface and create an MsmqQueue class: now you have a single piece of code that is not available for netstandard. You can later write an implementation that uses RabbitMQ, and the problem is gone with a simple class rewrite, as in the sketch below.
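
Here is a minimal sketch of that abstraction (the interface shape and the names are illustrative, not the actual code of the project):

  public interface IQueue
  {
      void Send(string message);
  }

  #if NETFULL
  // MSMQ implementation, compiled only for the full framework
  // (requires a reference to the System.Messaging assembly).
  public class MsmqQueue : IQueue
  {
      private readonly System.Messaging.MessageQueue _queue;

      public MsmqQueue(string path)
      {
          _queue = new System.Messaging.MessageQueue(path);
      }

      public void Send(string message)
      {
          _queue.Send(message);
      }
  }
  #endif

A RabbitMQ based implementation of IQueue can later replace MsmqQueue without touching any consumer of the interface.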

Let's do an example: I have an InMemoryCacheHelper class to abstract the usage of MemoryCache, and since the MemoryCache class from System.Runtime.Caching is not available in netstandard, I simply protect the class with an #if NETFULL conditional compilation directive.

Figure 3: Conditional compilation; this is the version of the class for the full framework.

Looking at the documentation, there is a NuGet package called Microsoft.Extensions.Caching.Memory meant to be a replacement for the standard full framework MemoryCache class. I can use this NuGet package to create an implementation of InMemoryCacheHelper compatible with netstandard.

When you enable multi-targeting it is quite common to manually edit references in the project file, because references differ between the full framework build and the netstandard build.

Remember that you need to reference the full framework assembly (1) for the net461 compilation, but reference the NuGet package for the netstandard version. This can be done by manually editing the references in the csproj file, as shown in Figure 4.

Figure 4: Reference the correct assembly or NuGet package based on framework version.

Now you can come back to the InMemoryCacheHelper class, add an #else branch to the #if NETFULL directive, and start writing a version of the class that uses Microsoft.Extensions.Caching.Memory. One class after another, you will have all of your code able to target both net full and netcore. You can rewrite the entire class, or you can keep a single implementation and use #if NETFULL inside each method; I prefer the first approach, but this is what happens when I start editing the netstandard version of the class:

Figure 5: No highlighting and no IntelliSense, because we are in a conditional compilation branch that evaluates to false.

Ouch: since we are in a branch of conditional compilation that evaluates to false, VS offers no highlighting and no IntelliSense; everything is greyed out. At this point you should be aware that, when you use multi-targeting, the order of the frameworks in the project file matters. The first framework in the <TargetFrameworks> node is the one VS uses to evaluate conditional compilation. This means that, when you are working on classes or pieces of code that should target both the full framework and .NET Standard, you need to change the order to fit your needs.

In this example I needed to change <TargetFrameworks>net461;netstandard2.0;</TargetFrameworks> to <TargetFrameworks>netstandard2.0;net461</TargetFrameworks>, save the project file, then unload and reload the project (sometimes you need to force a project reload), and Visual Studio will consider netstandard2.0 during code editing.

Figure 6: Reversing the order of the frameworks enables IntelliSense for the netstandard branch of the conditional compilation directive.

Now you can edit the netstandard version of your class and convert, one by one, all the parts of the code that do not natively run on netstandard.
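
To give an idea of the end result, here is a minimal sketch of a dual implementation (the method names are illustrative, and the real class surely has more members):

  using System;
  #if NETFULL
  using System.Runtime.Caching;
  #else
  using Microsoft.Extensions.Caching.Memory;
  #endif

  public static class InMemoryCacheHelper
  {
  #if NETFULL
      // Full framework branch: System.Runtime.Caching.
      private static readonly MemoryCache _cache = MemoryCache.Default;

      public static void Put(string key, object value, TimeSpan duration) =>
          _cache.Set(key, value, DateTimeOffset.UtcNow.Add(duration));

      public static object Get(string key) => _cache.Get(key);
  #else
      // netstandard branch: Microsoft.Extensions.Caching.Memory.
      private static readonly MemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

      public static void Put(string key, object value, TimeSpan duration) =>
          _cache.Set(key, value, duration);

      public static object Get(string key) => _cache.Get(key);
  #endif
  }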

Happy coding.

How to delete content in Azure DevOps wiki

Today I got a simple but interesting question about Azure DevOps: how can I completely delete the content of a wiki? There are not many reasons to do this, but sometimes you really want to start from scratch. Suppose you have your wiki:

Figure 1: Wiki with a simple page

You created some pages, played a little with the wiki, attached some cute pet photos and other content, maybe just to gain familiarity with the tool.

Figure 2: Wiki with some content on it.

Now you want to delete everything, so that no member of the team can retrieve pages and content anymore.

An Azure DevOps wiki is nothing more than a Git repository with Markdown content, so you can directly manipulate the repository if you need to alter wiki history.

To do this low level manipulation of the wiki, simply clone the wiki repository locally; you can find the repository URL in the UI.

Figure 3: Clone the wiki repository from the UI.

That menu option simply lets you grab the URL of the repository; then you can clone the repository locally and inspect all the commits done in the wiki. (I use the command line, but you can use any UI of your choice.)
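
The whole inspection boils down to a few commands (the repository URL is a placeholder; grab the real one from the UI):

  git clone https://dev.azure.com/yourorganization/yourproject/_git/yourproject.wiki
  cd yourproject.wiki
  git log --oneline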

Figure 4: Content of the wiki, a simple git repository

If you look at Figure 4 you can see that the wiki is nothing more than a git repository with a commit for each modification done to the wiki. Now, if you really want to reset everything and start the wiki from scratch, you can simply issue a

git reset --hard SHA_OF_FIRST_COMMIT

where SHA_OF_FIRST_COMMIT is the hash of the very first commit, the one with the comment "Initializing wiki", in my example 86ec4c9. After the command executes, your local wikiMaster branch points to the very first commit of the repository, an empty wiki.

Figure 5: Your local wikiMaster branch was reset to the very first commit; wikiMaster now points to an empty wiki.

Now you can simply push with the --force option to reset the remote branch to that very same commit.

git push --force

Open the wiki page again to verify that it has reverted to the original version. The server actually still has the previous commits in its database, but they are not reachable anymore and they will be deleted over time by internal garbage collection.

Resetting to the very first commit deletes everything from the wiki, restoring it to its pristine state.

This scenario is not really common; a much more common one is when you mistakenly write something in the wiki, save the page and then want to delete what you have written. There are lots of reasons for this requirement: you mistakenly inserted sensitive data like passwords or tokens, or you simply wrote something that you want to permanently delete.

Looking at Figure 4, suppose you pasted a wrong image and you want to remove that image and all related content from the history of the page. If you simply edit the wiki page, remove the image, then save the page again, the data is still in the history, and anyone can find the content you wanted to remove. The only solution is to rewrite git history.

Since a wiki is a git repository, everything you did remains in the history of the page; if you included sensitive information, editing the page, removing that information and saving again is not enough.

From Figure 4 you can verify that the offending commit is 97e520e. Following my previous example, you could simply reset everything to the previous commit, actually deleting all content that was inserted after that commit.

git reset --hard 97e520e^

The special character ^ indicates the first parent of a commit, so the previous instruction tells git to reset to the parent of the bad commit. After this operation, a git push --force will reset the branch on the server. The offending content is now gone, along with all content that was inserted after it. You have effectively restored the wiki content to a past point in time.

A git reset --hard in your wiki repository allows you to restore the wiki to a point in time, but everything that happened after that moment will be lost.

This is not a perfect approach: suppose you realize that someone stored a password in the wiki some days ago; you do not want to lose everything, but simply remove that specific content, leaving the other commits unchanged. Thanks to git's flexibility you can achieve this with an interactive rebase.

git rebase 97e520e^ -i

This triggers a complete rewrite of the history, from the parent of the offending commit to the last commit of the wiki. I am not going to give you a complete explanation of an interactive rebase, but basically you are presented with the list of all commits, from the commit you want to delete up to the latest commit in the branch.

Figure 6: Delete the commit with interactive rebase.

In Figure 6 you are seeing an example in which I have a single commit after the one to remove, but nothing changes if you have tons of commits after it. You simply need to change the command for the first commit (the commit you want to delete) from pick to d (drop). Leave all other rows unchanged. Then save the script to continue (if you are not familiar with vim, simply press i to edit the file, change the file, then press ESC to come back to command mode and press :, then w, then q, then ENTER).
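
For reference, the todo list you save looks roughly like this (every hash and message except 97e520e is illustrative):

  d 97e520e Added the wrong image to the page
  pick 3f8a21b A following edit that must be kept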

This actually deletes only the commit you want to remove, leaving all the following commits unchanged. With surgical precision, you removed a single bad save from your wiki.

Figure 7: The commit was removed; the local branch no longer contains commit 97e520e.

Now, provided you are 100% sure that no one else modified the wiki in the short timespan you needed to clone and rebase the repository, you can issue a git push --force to overwrite the content of the repo on the Azure DevOps instance.

A git interactive rebase is an operation where you are rewriting history, so you can selectively remove a single commit from the history.

This preserves all the other content of the wiki; you only removed a single commit, and there is no more history of that commit inside the wiki. (The deleted commit is actually still on the server, unreachable, but there is no way for others to retrieve it.)

If you want to completely remove a page along with all the history of that page, you need to delete multiple commits; luckily git has filter-branch and other, more advanced commands. You can find more detail here: https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
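
As a taste of what is described in that article, the core of the approach is an invocation like this (the page file name is a placeholder):

  git filter-branch --force --index-filter "git rm --cached --ignore-unmatch Page-To-Delete.md" --prune-empty -- --all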

Have I ever told you how much I love Git? :)

Gian Maria.

Git and the Hell of case sensitiveness

If you know how git works, you are perfectly aware that, even if you work on an operating system with a case insensitive file system, all commits are case sensitive. If you change the case of a folder and then commit modifications to files inside that folder, you can run into problems, because when the casing of a path changes, the files are different for the git engine (but not for operating systems like Windows).

In the long run you will face some annoying problems, like git showing that some files are modified (while you did not touch them) while you are unable to undo the changes or work with those files. This becomes really annoying during rebase operations.

Having paths that differ only by case is one of the most annoying problems with git repositories on Windows.

Luckily, Azure DevOps has an option for git repositories that makes the engine reject pushes containing file names that differ only by case, avoiding this problem entirely.

Figure 1: Options for Cross platform compatibility can solve most headaches

The first option completely blocks pushes that contain files not compatible across platforms, and it is the option we are looking for, because it blocks you from pushing code that would lead to case sensitivity problems.

The other two options are equally useful: the second one prevents you from pushing paths with forbidden names or incompatible characters (remember that these differ between Windows and Linux), while the third one blocks pushes that contain paths with unsupported lengths, a problem that is really nasty for Windows users.

Finally, if you have case sensitivity problems in your repository because you already pushed your code without these options enabled, I can suggest a nice tool available on GitHub that finds all such problems in the repository and fixes them; it is called Git Unite. You can clone the project, compile it in Visual Studio, then just launch it from the command line giving the path of a local git repository as the single argument, and it will do everything automatically.
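
The invocation is as simple as this (the executable name is my assumption; check the actual build output of the project):

  Git.Unite.exe C:\src\MyRepository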

Gian Maria

Converting a big project to new VS2017 csproj format

Visual Studio 2017 introduced a new .csproj file format for C# projects. It is the key to moving to .NET Standard, but it is useful also for projects targeting the full framework, because it is much easier to manage by hand.

The main drawback of this approach is that you lose compatibility with older versions of VS: if you open the .csproj with VS2015 you are not able to compile the project anymore. For this reason, switching to the new format should be a decision that is well discussed within the team.

The new project format is really human friendly; the only drawback is losing compatibility with older versions of VS.

Luckily, when you have a solution with 68 projects, you do not need to do all the work manually, thanks to a nice tool found on GitHub: CsprojToVs2017. I have done manual conversions in the past; they are time consuming and frustrating. The above tool is not perfect, but it does most of the work for you.

First of all I ran the tool to convert the entire solution without creating backup files (why take a backup when you can do all the testing in a git branch isolated from the rest of the code?).

csproj-to-2017 --no-backup .\mysln.sln

The tool outputs lots of information, warnings and suggestions; I simply stored them in a file as a reference, and immediately opened the solution to verify whether it compiled successfully. The answer was no, but most of the work was already done for me.

An automated tool can do most of the work for you, but you will always need some manual work to finish and fine tune the conversion.

First of all, the tool removes all the assembly information from the assemblyinfo.cs file, and in my situation it generated an error, because I have AssemblyInformationalVersion in my assemblyinfo.cs; the value of this attribute is "localCompile", and here is what the tool generates.

Figure 1: Incorrect version attribute generated by the conversion tool

I still prefer all version information to be stored inside assemblyinfo.cs, so I removed all the version information from every .csproj file, added the property <GenerateAssemblyInfo>false</GenerateAssemblyInfo> to avoid the automatic generation of assembly info, and finally reverted all the modifications done to the assemblyinfo.cs files to restore their previous content.
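
A minimal sketch of the addition to each converted csproj:

  <PropertyGroup>
    <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
  </PropertyGroup>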

Then I started getting a strange error:

Assets file '…\obj\project.assets.json' not found. Run a NuGet package restore to generate this file

This error can be solved by simply running dotnet restore. In this particular situation it probably happened because the .sln file still contained a couple of web projects that were not converted, since web projects still have to use the old format. If we removed those two projects from the solution, everything compiled correctly; probably VS2017 got confused when a solution mixed the two types of project. The funny thing is that this error did not happen to every member of the team, and it happened only when we switched from a branch with the old format to a branch with the new format. This is usually not a big problem: when we switch branches we just issue a git clean -xdf to remove every file not in source control, run dotnet restore, and we are ready to go. Now that we no longer have active branches with the old format, this issue has gone away.
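
The branch switch routine therefore boils down to:

  git clean -xdf
  dotnet restore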

Then we started having some compilation errors, because the new project format automatically includes all the .cs files inside the project folder, and over the years some files were removed from the project but left on the file system, as you can see in the following picture, where we have the converted project on the left and the original one on the right.

As you can verify, the file ConfigurationManagerProcessManagerConfiguration is present on the file system but excluded from the project (right part of the picture), and it is incorrectly included by the new project format.

To solve this problem you should open the converted version of the project and the original version side by side and manually delete all the .cs files that are still on disk but were not referenced by the old project format, to avoid them being incorrectly included after the conversion.

One nice side effect of the conversion is that we found a bunch of files still in source control but not included in the solution.

Then we had a problem with build actions, because we use pre and post build actions to xcopy some files around, and with the new format they seemed to stop working. It turns out that in post build and pre build actions some properties, such as $(ConfigurationName), were not correctly populated, so all the post and pre build actions generated errors. The solution is really simple: convert them to a standard MSBuild Target, as seen in the following snippet, where you can see the correct way to use xcopy and the original code that gets converted.

The conversion is really simple. The original part is commented out in the lower portion of the snippet, and it is composed of a series of xcopy commands inside a PreBuildEvent. To make it work with the new csproj format, the solution is to add a Target with BeforeTargets="PreBuildEvent", which will run before the pre build event, and then add an Exec task for each xcopy instruction. Remember also that the output folder is no longer bin/debug: you need to append the framework version folder (net461).
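
Since the snippet is only available as an image, here is a minimal sketch of the kind of target that replaces a PreBuildEvent (the target name and paths are illustrative):

  <Target Name="CopyDependencies" BeforeTargets="PreBuildEvent">
    <!-- $(TargetDir) now ends with the framework folder, e.g. bin\Debug\net461\ -->
    <Exec Command="xcopy /Y /I &quot;$(ProjectDir)..\Dependencies\*.dll&quot; &quot;$(TargetDir)&quot;" />
  </Target>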

The whole conversion took almost a couple of hours for a 68 project solution, and thanks to the automatic conversion tool most of the work was automated; we only needed to tweak the project files a little and solve some problems due to the fact that this is a 5 year old solution.

This conversion also forces you to review your builds in VSTS or whatever build system you are using. I encountered a couple of modifications needed for every build.

Once you switch to the new project format it is time to check all the automated builds.

First of all, I need to be sure that the version of NuGet used to restore dependencies is at minimum 4.6. To be sure of the NuGet version used, you should use the NuGet Tool Installer task together with the NuGet task, as I described in a previous post.

If you are using the wrong NuGet version, the build will probably fail with an error like: Assets file '…\project.assets.json' not found. Run a NuGet package restore to generate this file.

Another problem is due to switching between a branch that still uses the old project format and a branch that uses the new one. It is imperative that obj folders are cleared before building again, or the build will probably fail. To accomplish this, I simply decided to clean everything before the get sources step. (This is related to the issue I explained before.)

Symptoms of not clearing the obj folder are weird errors like: Your project file doesn't list 'win' as a "RuntimeIdentifier". You should add 'win' to the "RuntimeIdentifiers" property in your project file and then re-run NuGet restore.

The overall process went smoothly, and now we are ready to start the migration to .NET Core.

Gian Maria.

NullReferenceException on Windows when doing a Git fetch or pull

After updating Git for Windows to the newer 2.19.1, it can happen that you are no longer able to use the credential manager. The symptom is: whenever you git fetch or pull, you get a NullReferenceException and/or the error unable to read askpass response from 'C:/Program Files/Git/mingw64/libexec/git-core/git-gui--askpass'.

The Git Credential Manager for Windows that ships with version 2.19.1 can have problems and generate a NullReferenceException.

Clearing the Windows Credential Manager does not solve the problem; you still get the same error even if you clone the repo again into another folder. To fix this, you can simply download and install the newest version of the Git Credential Manager for Windows. You can find everything at this address.

Figure 1: Download page for releases of Git Credential Manager for Windows

Just install the newest version and the problem should be solved.

Gian Maria