Recently, during a profiling session, one method caught my eye. In the profiler's decompiled version it looked like this:
public static double dec2f(Decimal value)
{
    if (value == new Decimal(-1, -1, -1, true, (byte) 0))
        return double.MinValue;
    try
    {
        return (double) value;
    }
    catch
    {
        return double.MinValue;
    }
}
This is part of legacy code written years ago, and according to the profiler (which was in sampling mode) too much time was being wasted in this method. My take was that the try-catch block prevents inlining, so I spent some time refreshing my memory on the details of decimal-to-double conversion. After making sure this conversion can't throw, I removed the try-catch block.
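As a quick sanity check (my own snippet, not from the original code base): decimal's range fits well inside double's range, so the explicit conversion only loses precision and never throws, even at the extremes:

```csharp
using System;

class ConversionCheck
{
    static void Main()
    {
        // Even the extreme decimal values convert without throwing;
        // the result is just an approximation in double precision.
        double min = (double) decimal.MinValue;
        double max = (double) decimal.MaxValue;

        Console.WriteLine(min < 0); // True
        Console.WriteLine(max > 0); // True
    }
}
```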
But when I looked again at the simplified version of the source code, I wondered why the decompiled version shows a strange Decimal constructor where the source is a simple Decimal.MinValue comparison:
public static double dec2f(decimal value)
{
    if (value == Decimal.MinValue)
    {
        return Double.MinValue;
    }
    return (double)value;
}
At first I thought it was a decompiler bug, but fortunately the decompiler also shows the IL version of the method:
public static double dec2f(Decimal value)
{
    if (value == new Decimal(-1, -1, -1, true, (byte) 0))

    .maxstack 6
    .locals init (
        [0] float64 V_0
    )

    IL_0000: ldarg.0      // 'value'
    IL_0001: ldc.i4.m1
    IL_0002: ldc.i4.m1
    IL_0003: ldc.i4.m1
    IL_0004: ldc.i4.1
    IL_0005: ldc.i4.0
    IL_0006: newobj       instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)
    IL_000b: call         bool [mscorlib]System.Decimal::op_Equality(valuetype [mscorlib]System.Decimal, valuetype [mscorlib]System.Decimal)
    IL_0010: brfalse.s    IL_001c

        return double.MinValue;
So it is indeed true that a Decimal constructor is invoked EVERY time you use a constant like Decimal.MinValue! The next question is: what is inside this constructor, and is there any difference between using Decimal.MinValue directly and defining a local field like this:
static readonly decimal DecimalMinValue = Decimal.MinValue;
Well, the answer is: there is a difference, if every penny counts in your case:
Method                     |        Mean |    StdDev |
-------------------------- |------------ |---------- |
CompareWithDecimalMinValue | 178.4235 ns | 0.4395 ns |
CompareWithLocalMinValue   |  98.0991 ns | 2.2803 ns |
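For reference, here is a crude, self-contained reconstruction of the comparison being measured (using Stopwatch instead of a proper benchmark harness; the method names match the table, but the class and loop count are my own):

```csharp
using System;
using System.Diagnostics;

public class Bench
{
    static readonly decimal DecimalMinValue = Decimal.MinValue;

    // Each call constructs a new Decimal for the constant before comparing.
    public static bool CompareWithDecimalMinValue(decimal v) => v == Decimal.MinValue;

    // Each call just loads the pre-built value from a static field.
    public static bool CompareWithLocalMinValue(decimal v) => v == DecimalMinValue;

    static void Main()
    {
        const int N = 10_000_000;
        decimal value = 42m;
        bool sink = false;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sink |= CompareWithDecimalMinValue(value);
        sw.Stop();
        Console.WriteLine($"Decimal.MinValue : {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sink |= CompareWithLocalMinValue(value);
        sw.Stop();
        Console.WriteLine($"local field      : {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sink); // False: 42m is not MinValue
    }
}
```

A Stopwatch loop is much noisier than BenchmarkDotNet-style measurement, but the gap between the two methods is still visible.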
And the reason for this behavior is the DecimalConstantAttribute, which specifies how to construct the decimal value every time you use one of decimal's "constants".