Error with dotnet restore, corrupted header

I’m trying to compile a project with .NET Core 2.0 on Linux, but I get this strange error when I run the dotnet restore command.


Figure 1: Error restoring packages

The exact error comes from NuGet.targets and tells me: a local file header is corrupt, and it points to my solution file. The same project builds just fine on another computer.

Since I’m experiencing an intermittent connection, I suspected that the NuGet cache could be corrupted, so I ran this command to clear all caches.

dotnet nuget locals all --clear

This clears all the caches. After the command ran, I simply re-ran the dotnet restore command and this time everything went well.
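For reference, here is a minimal sketch of the whole recovery sequence, assuming the dotnet CLI 2.0 is on the PATH; the small helper function is mine, not part of the CLI.

```shell
# Inspect the cache locations first, then clear everything and restore:
#   dotnet nuget locals all --list
#   dotnet nuget locals all --clear
#   dotnet restore
# Hypothetical helper that composes the clear command for one cache;
# valid cache names are all, http-cache, global-packages and temp.
nuget_clear_cmd() {
  printf '%s' "dotnet nuget locals $1 --clear"
}

nuget_clear_cmd http-cache   # → dotnet nuget locals http-cache --clear
```

Clearing only http-cache is a lighter first attempt; clearing all also wipes the global packages folder, which forces a full re-download on the next restore.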

Gian Maria.

.NET core 2.0, errors installing on linux

.NET Core 2.0 is finally out, and I immediately tried to install it on every machine, especially on my Linux test machines. On one of my Ubuntu machines I got an installation problem, a generic apt-get error, and I was a little puzzled about why the installation failed.

Since on Windows the most common cause of .NET Core installation errors is the presence of an old package (especially a preview version), I decided to uninstall all previous installations of .NET Core on that machine. Luckily, doing this on Linux is really simple: first of all, I listed all installed packages that have dotnet in the name.

sudo apt list --installed | grep dotnet

This is what I got after a clean installation of .NET core 2.0


Figure 1: List of packages that contains dotnet in the name

But on that specific virtual machine I had several versions and a preview of 2.0, so I decided to uninstall every package using the command sudo apt-get purge packagename. After all packages were uninstalled, I issued a sudo apt-get clean, then tried to install .NET Core 2.0 again, and everything went well.
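The cleanup above can be sketched as a small script, assuming a Debian/Ubuntu machine; the `filter_dotnet_pkgs` name is my own, not a standard tool.

```shell
# Extract the names of installed packages containing "dotnet" from
# `dpkg -l`-style output ("ii" in the first column marks an installed package).
filter_dotnet_pkgs() {
  awk '$1 == "ii" && $2 ~ /dotnet/ { print $2 }'
}

# Do a dry run first to see what would be removed, then purge and clean:
#   dpkg -l | filter_dotnet_pkgs
#   dpkg -l | filter_dotnet_pkgs | xargs -r -n1 sudo apt-get purge -y
#   sudo apt-get clean
```

The dry-run step matters: it shows exactly which packages the purge loop would touch before anything is removed.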

If you have any installation problem with .NET Core under Linux, just uninstall everything related to .NET Core with apt-get purge; this should fix your problems.

Gian Maria.

Publishing web project on disk during build

During a build you can ask MsBuild to deploy on build using the switch /p:DeployOnBuild=true, as I described in previous posts. This is mainly used to deploy the site on IIS thanks to WebDeploy, but you can also use WebDeploy to deploy to a disk path. The problem is that the path is stored in the publication settings file, so what about changing that path during a build?

The answer is simple: you can use /p:publishUrl=xxx to override what is specified inside the publication file and choose a different directory for the deploy. E.g.

msbuild WebApplication1.csproj /p:DeployOnBuild=true /p:PublishProfile=Profile1 /p:publishUrl=c:\temp\waptest

Thanks to this simple trick you can instruct MsBuild to store the deployed site in any folder of the build server.
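In a build script the switches can be composed like this; the helper function is hypothetical, and Profile1 and the output path come from the example above.

```shell
# Compose the MsBuild command line that deploys on build and overrides
# the publish location stored in the publish profile.
build_publish_cmd() {
  project="$1"; profile="$2"; outdir="$3"
  printf '%s' "msbuild $project /p:DeployOnBuild=true /p:PublishProfile=$profile /p:publishUrl=$outdir"
}

build_publish_cmd WebApplication1.csproj Profile1 'c:\temp\waptest'
```

Parameterizing the output path this way lets each build definition pick its own drop folder without touching the publish profile checked into source control.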

Gian Maria.

Detect Client-side reconnection with SignalR

SignalR is really good at keeping the connection between server and client alive and making sure the client automatically reconnects if there are connection issues. To verify this you can write a simple test with a simple hub that, each second, broadcasts the current server timestamp to all clients with a simple timer.

 private static Timer testTimer = null;

 static MyHub()
 {
     testTimer = new Timer();
     testTimer.Interval = 1000;
     testTimer.Elapsed += (sender, e) =>
     {
          var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
          context.Clients.All.setTime(DateTime.Now.ToString());
     };
     testTimer.Start();
 }

Now you can simply reference the hub in a page, register the setTime method, and watch the page dynamically update each second.

function SignalrTestViewModel(option) {

    var self = this;
    //signalr configuration
    self.myHub = $.connection.myHub;

    self.serverTime = ko.observable('no date from server');

    //called by the server every second with the current timestamp
    self.myHub.client.setTime = function (time) {
        self.serverTime(time);
    };
}

This is a simple KnockoutJS view model; you can now bind a simple span to the serverTime property and watch everything work.


Figure 1: Web page automatically updated from the server

The interesting part is that you can now kill the w3wp.exe process from Task Manager (if you are using IIS), or whatever hosting process you are using, and verify that almost immediately the w3wp.exe process is brought back to life and the timer continues to count. This happens because when the client detects that the server is dead, it automatically tries to reconnect; IIS then creates another worker process and everything starts working again.

The only drawback is that the server has lost all the volatile information it collected during its life. In my situation each client initializes some JavaScript code by calling certain methods on the hub (e.g. RegisterViewRoom), and I keep that information in static variables inside the hub. This works, except that if the server process goes down for whatever reason (e.g. a scheduled IIS worker process recycle), this information is lost. I do not want to bother with storing data on the server; my typical situation is no more than 5 clients at a time, and I want the simplest thing that could possibly work.

The simplest solution to this problem is letting the client JavaScript code detect when a reconnection occurs: whenever there is a reconnect, the client can call the registration function again. The registration call is idempotent, so there is no problem if the reconnection happens because of a connectivity problem rather than a restart of the server. To detect a reconnection in SignalR you can use this piece of code.

 $.connection.hub.stateChanged(function (change) {

        if (change.newState === $.signalR.connectionState.reconnecting) {
            //the connection was lost: call the registration function on the server again
        }
 });

This simple code detects when the state of the connection changes; I store this information inside a KnockoutJS view model variable to keep track of the actual status. Then I simply check whether the new state is reconnecting, and in that case I call the initialization function on the server again to re-register the information for this client connection.

SignalR is really one of the most powerful and interesting JavaScript libraries I have worked with in the past years :).

Gian Maria

Error publishing ClickOnce moving from .NET 3.5 to 4.5

I have a customer where we set up a TFS build that automatically compiles, obfuscates assemblies, and finally publishes with ClickOnce on an internal server. As part of the process, a tool is used to move the published packages from the internal server to a public server, to make them available to final customers. This tool uses mage.exe to change some properties of the package and then repacks it for publishing to the final server.

When the solution moved from .NET 3.5 to .NET 4.5, the published application failed to install with this error:

Below is a summary of the errors, details of these errors are listed later in the log.
* Activation of resulted in exception. Following failure messages were detected:
+ Application manifest has either a different computed hash than the one specified or no hash specified at all.
+ File, UploadTest.exe.manifest, has a different computed hash than specified in manifest.

We first thought the culprit was the obfuscation process or something related to the build process, because publishing directly from Visual Studio generates a correct installer. Then we were able to replicate the error with a simple application with a single form, so we were sure that something wrong happened during the build+publish process. Finally we determined that the problem occurs when the publish URL of the package is changed during the build to point to the final location (test server or production server).

Actually, the process of changing the publishing location was done with a direct call to the mage.exe command-line utility, and after some investigation we found that the signing algorithm of ClickOnce changed to SHA256RSA in .NET 4.5. Unfortunately mage.exe does not automatically detect the .NET version used by the application to apply the correct hash, and uses SHA1 by default. If you want to use mage.exe to change some properties of a ClickOnce application based on .NET 4.5 or a greater version, you must use the -a command-line option to choose the sha256RSA algorithm for the manifest hash. The correct command line must contain the -a sha256RSA option to generate a correct package.
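As a sketch, the corrected invocation can be composed like this; the wrapper function and the certificate file name are hypothetical, and the important part is the -a sha256RSA switch.

```shell
# Compose the mage.exe call that updates a ClickOnce manifest and
# re-signs it with SHA-256, as required for .NET 4.5+ packages.
mage_resign_cmd() {
  manifest="$1"; certfile="$2"
  printf '%s' "mage.exe -Update $manifest -a sha256RSA -CertFile $certfile"
}

mage_resign_cmd UploadTest.exe.manifest mycert.pfx
```

Without the -a option, mage.exe silently falls back to SHA1 and produces the "different computed hash" installation error described above.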

In my opinion, we have a couple of problems here that Microsoft could address.

1) The error message you get when you try to install the published application should state that the hash was computed with an incorrect algorithm, allowing you to better diagnose the error. Telling you that the hash is different from the one specified in the manifest is misleading, because it leads to the incorrect assumption that something modified the files after the hash was generated.

2) Mage.exe should automatically detect the version of the framework used by the package, or should at least give you a warning. If checking the framework version is not simple, it would be better to always display a warning telling the user: “Warning: starting with .NET 4.5 you should use the option -a sha256RSA to re-sign the manifest, because the algorithm has changed”.

This story applies to your applications too: if you change something so radical from one version to another, always display a clear and informative warning to the user, to avoid problems.

Gian Maria.