CLR 4.5: .Net Framework Kernel Improvements

In this post I’ll go through some of the enhancements the CLR team made as part of the performance improvements in .NET 4.5. In most cases developers will not have to do anything different to take advantage of the new stuff; it will just work whenever the new framework libraries are used.

Improved Large Object Heap Allocator

I’ll start with the most asked-for feature from the community: compaction of the LOH. As you may know, the .NET CLR has a very strict classification regime for its citizens (objects): any object that is 85,000 bytes or larger is considered special (a large object) and needs different treatment and placement. That’s why the CLR allocates a special heap for those guys, the Large Object Heap (LOH), while the poor small guys live in a different heap frankly called the Small Object Heap (SOH).
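A quick way to see this classification at runtime: freshly allocated large objects are reported as generation 2 (since the LOH is only collected with Gen 2), while small objects start in generation 0. A minimal sketch:

```csharp
using System;

class LohThresholdDemo
{
    static void Main()
    {
        byte[] small = new byte[80000]; // comfortably under 85,000 bytes -> Small Object Heap
        byte[] large = new byte[85000]; // 85,000 bytes or more -> Large Object Heap

        // LOH objects are treated as generation 2 from the moment they are allocated.
        Console.WriteLine(GC.GetGeneration(small)); // 0 (brand-new small objects start in Gen 0)
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```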

SOH Allocations and Garbage Collections

The main difference between these two communities is basically how much disturbance the CLR imposes on each society. When a collection happens on the SOH, the CLR clears the unreachable objects and reorganizes the survivors (all citizens of the SOH) to compact the space and group all the free space at the beginning of the heap. This decreases the chances of fragmentation in the heap, and it is only feasible because the citizens of the SOH are lighter and easier to move around. On the other hand, when a collection occurs on the LOH (which only happens during a Gen 2 collection), the CLR doesn’t compact the memory, because moving these large objects around is a very expensive process. You see where this is leading: now we have to deal with fragmentation issues in the LOH and, from time to time, an OutOfMemoryException.

LOH Allocations and Garbage Collections

Because the LOH is not compacted, memory management happens in the classical way: the CLR keeps a free list of available blocks of memory. When allocating a large object, the runtime first looks at the free list to see if it can satisfy the allocation request. When the GC discovers adjacent objects that have died, it combines the space they used into one free block that can be used for allocation.

Well, the CLR team hasn’t yet made the decision to compact the LOH, which is understandable given the cost of that operation. On the other hand, they improved the method by which the CLR manages the free lists, making more effective use of fragments. The CLR will now revisit the memory fragments (“free slots”) that earlier allocations didn’t use.

Also, in Server GC mode, the runtime now balances LOH allocations between the heaps. Prior to .NET 4.5, only the SOH was balanced.

Many of the citizens of the LOH are similar in nature, which lends itself to the idea of object pools, which can essentially reduce LOH fragmentation.
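As a sketch of that idea (the type and member names here are mine, not a framework API), a pool keeps a few large buffers alive and hands them out again instead of allocating fresh ones, so the LOH free list stays stable instead of filling with holes:

```csharp
using System;
using System.Collections.Concurrent;

// A minimal buffer pool: instead of repeatedly allocating and discarding
// large arrays (which fragments the LOH), reuse a fixed-size set of them.
public class LargeBufferPool
{
    private readonly ConcurrentBag<byte[]> _buffers = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public LargeBufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        return _buffers.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        // Only take back buffers of the expected size.
        if (buffer != null && buffer.Length == _bufferSize)
            _buffers.Add(buffer);
    }
}

class Demo
{
    static void Main()
    {
        var pool = new LargeBufferPool(85000);
        byte[] first = pool.Rent();   // pool empty, so this is allocated fresh
        pool.Return(first);
        byte[] second = pool.Rent();  // the same array comes back out
        Console.WriteLine(object.ReferenceEquals(first, second)); // True
    }
}
```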

Background mode for Server GC

A couple of years ago I blogged about the new background GC in workstation mode in CLR 4.0. The main idea: while executing a Gen 2 collection, the CLR checks at well-defined points whether a Gen 0/Gen 1 collection has been requested; if so, Gen 2 is paused until the lower-generation collection runs to completion, then it resumes. Read more about background mode for Workstation GC.

(.NET 4.5) Gen0/Gen1 collections can proceed during Gen2 GC

The new CLR supports background mode for Server GC, which brings concurrent collections to the server collector. It minimizes long blocking collections while continuing to maintain high application throughput.

In the above schematic diagram notice the following:

  • As in .NET 4.0, whenever a Gen 0 or Gen 1 collection happens, the managed threads allocating objects on the heap are paused until the collection is done.
  • Gen 2 now pauses in Server mode too (as in client/workstation mode) during Gen 0 and Gen 1 collections, giving them priority to finish first.
  • Gen 2, as usual, runs in the background while the managed threads keep allocating objects on the SOH.
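Server GC is opt-in through the standard runtime configuration. A minimal app.config that requests Server GC with concurrent collection enabled (which .NET 4.5 interprets as background Server GC):

```xml
<configuration>
  <runtime>
    <!-- Use the server collector (one heap per core). -->
    <gcServer enabled="true" />
    <!-- Allow concurrent (background) collection; this is the default. -->
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>
```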

Auto NGEN

Prior to v4.5, the .NET Framework installed both the IL and the NGen’d images for all its assemblies on dev machines, which consumed double the required space or more just for the framework. The CLR team conducted research to find out which framework assemblies are the most commonly used and named those the “Frequent Set”. Only this set of assemblies is NGen’d now, which saves a lot of space and dramatically decreases the framework’s disk footprint.


Now the perfectly good question that presents itself: what about performance when non-NGen’d assemblies are used?

The CLR team introduced a replacement for the NGen engine called the Auto NGen Maintenance Task. When you create a new application that uses non-NGen’d assemblies, or that is not NGen’d itself, here is what happens:

  1. The user runs the application.
  2. Every time the application runs, it writes a new type of log called “Assembly Usage Logs” into the AppData Windows directory.
  3. The new Auto NGen Maintenance Task takes advantage of a new Windows 8 feature called “Automatic Maintenance”, where efficient background maintenance jobs can run while the user is not heavily using the machine.
  4. The Auto NGen Maintenance Task goes through the logs from all the managed apps the user ran and builds a picture of which assemblies are frequently used but not NGen’d, and which are NGen’d but not used frequently.
  5. The task then NGens those assemblies on the fly and removes the native images that are no longer used (this is known as the Reclaiming Native Images process).
  6. The next time the app runs, it gets a boost because it uses the NGen’d images.


Some notes about the Auto NGen process

  • The assembly must target the .NET Framework 4.5 Beta or later.
  • Auto NGen runs only on Windows 8.
  • For desktop apps, Auto NGen applies only to GAC assemblies.
  • For Metro style apps, Auto NGen applies to all assemblies.
  • Auto NGen will not remove unused rooted native images (images NGen’d by the developers). Basically, Auto NGen removes only images created by the same process; read more about reclaiming native images here.

More Information

Related Posts

Hope this Helps,

Ahmed

Max Number of Threads Per Windows Process

Developers usually think that Windows has a default value for the maximum number of threads a process can hold, while in fact the limitation is due to the amount of address space each thread can have.

I was working lately on some mass facility simulation applications to test a complex RFID-based system. I needed a different thread for each person in the facility to simulate the parallel, independent movement of people at the same time... and that’s when the fun started. With the simulation running more than 3,000 individuals at the same time, it wasn’t too long before I got the unfriendly OutOfMemoryException.

When you create a new thread, it gets some memory in kernel mode, some memory in user mode, plus its stack; the limiting factor is usually the stack size. The secret is that the default stack size is 1 MB, and the user-mode address space assigned to a process under 32-bit Windows is about 2 GB. That allows around 2,000 threads per process (2,000 * 1 MB = 2 GB).

Hit The Limit!

[Image: virtual machine specs]

Running the following code on my virtual machine (specs in the image) yields around ~1,400 threads; the available memory on the machine also plays a part in deciding the maximum number of threads your process can allocate.

Running the same application multiple times on different machines with different configurations will give close or similar results.

using System;
using System.Collections.Generic;
using System.Threading;

namespace MaxNumberOfThreadsDemo
{
    public class Program
    {
        private static List<Thread> _threads = new List<Thread>();

        public static void Main(string[] args)
        {
            Console.WriteLine("Creating Threads ...");
            try
            {
                CreateThreadsWithDefaultStackSize();
                //CreateThreadsWithSmallerStackSize();
                //CreateThreadsWithSmallerThanMinStackSize();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
            }

            Console.ReadLine();
            Console.WriteLine("Cleaning up Threads ...");
            Cleanup();

            Console.WriteLine("Done");
        }


        public static void CreateThreadsWithDefaultStackSize()
        {
            // Each thread reserves the default 1 MB stack, so a 32-bit process
            // exhausts its address space after roughly 1,400-2,000 threads.
            for (int i = 0; i < 10000; i++)
            {
                Thread t = new Thread(DoWork);
                _threads.Add(t);
                t.Name = "Thread_" + i;
                t.Start();
                Console.WriteLine(string.Format("{0} started!", t.Name));
            }
        }
        private static void Cleanup()
        {
            foreach (var thread in _threads)
                thread.Abort();
            _threads.Clear();
            _threads = null;
        }

        private static void DoWork()
        {
            // Park the thread so it stays alive (and keeps its stack reserved).
            Thread.Sleep(Int32.MaxValue - 1);
        }
    }
}


How Can You Extend the Limit?

Well, there are several ways to get around that limit. One way is setting the IMAGE_FILE_LARGE_ADDRESS_AWARE flag, which extends the user-mode address space of the process to 3 GB on 32-bit Windows (when the system is booted with the appropriate option) or 4 GB under 64-bit Windows.

Another way is reducing the stack size of the created threads, which in return increases the maximum number of threads you can create. You can do that in C# by using the constructor overload that takes a maxStackSize parameter when creating a new thread.

Thread t = new Thread(DoWork, 256 * 1024); // 256 KB instead of the default 1 MB

Beginning with the .NET Framework version 4, only fully trusted code can set maxStackSize to a value that is greater than the default stack size (1 megabyte). If a larger value is specified for maxStackSize when code is running with partial trust, maxStackSize is ignored and the default stack size is used. No exception is thrown. Code at any trust level can set maxStackSize to a value that is less than the default stack size.

Running the same code after setting the maxStackSize of the created threads to 256 KB will increase the maximum number of threads created per process.


If maxStackSize is less than the minimum stack size, the minimum stack size is used. If maxStackSize is not a multiple of the page size, it is rounded up to the next larger multiple of the page size. For example, according to MSDN, if you are using the .NET Framework version 2.0 on Microsoft Windows Vista, 256 KB (262,144 bytes) is the minimum stack size, and the page size is 64 KB (65,536 bytes).

In closing, resources aren’t free, and working with many threads can be really challenging for the reliability, stability, and extensibility of your application. Pushing against the maximum number of threads is definitely something you want to think about carefully, and perhaps consider only in lab or simulation scenarios.

More Information

Hope this Helps,

Ahmed

CLR 4.5: Managed Profile Guided Optimization (MPGO)

Again, this performance enhancement (or technology, I would say) targets application startup time, focused on the large desktop mammoths in particular! The technology Microsoft is introducing with .NET 4.5 will not be brand new to you if you are a C++ developer.

The PGO build process in C++


In C++, as shipped with .NET 1.1, a multi-step compilation known as Profile Guided Optimization (PGO) can be used by the C++ developer as an optimization technique to enhance the startup time of the application: you run the application through some common scenarios (“exercising” your app) with data collection running in the background, then use the result of that for an optimized compilation of the code. You can read more about PGO here.

As a matter of fact, Microsoft has been using this technology since .NET 2.0 to generate optimized NGen’d assemblies (eating their own dog food), and now they are releasing it for you to take advantage of in your own managed applications.

Crash Course in .NET Compilation

In .NET, when you compile your managed code it is really translated into another language (IL), not compiled into the binaries that will run on the production machine; that happens at run time and is referred to as Just-In-Time (JIT) compilation. The alternative is to pre-compile your code prior to execution using the NGen tool (shipped with .NET).


Generating these native images for the production processor usually speeds up the launch time of the application, especially for large desktop apps that require a lot of JITting, since no JIT compilation is required at all.

Why would you want to use NGen?

If it’s not obvious yet here are some good reasons:

  • NGen typically improves the warm startup time of applications, and sometimes the cold startup time as well
  • NGen also improves the overall memory usage of the system by allowing different processes that use the same assembly to share the corresponding NGen image among them

Exercise your Assemblies!

The Managed Profile Guided Optimization (MPGO) technology can improve the startup time and working set (memory usage) of managed applications by optimizing the layout of precompiled native images. It organizes the layout of your native image so that the most frequently used data is located together in a minimal number of disk pages; each request for a disk page of image data then carries a higher density of useful data for the running program, which in return reduces the number of page requests from disk. If you think about it, this will mostly be useful on machines with mechanical disks; if you have already moved to Solid State Drive nirvana, the expected performance improvement will probably be unnoticeable.

How It Works

As I mentioned before, it’s a process very similar to PGO in the C++ compiler, where you exercise your assemblies. Here are the simplified steps:

  1. Compile and build your large desktop application to generate the normal IL assemblies.
  2. Run the MPGO tool on the IL assemblies you just built.
  3. Exercise a variety of representative user scenarios within your application (not too few, not too many; you don’t want the tool to be confused about how to organize the layout of your image).
  4. MPGO stores the profile created from your “training” with each trained IL assembly.
  5. When you then run the NGen tool on the trained assemblies, it generates an optimized native image.


How to use MPGO

You can find MPGO in the following directory:

C:\program files(x86)\microsoft visual studio 11.0\team tools\performance tools\mpgo.exe

  • Run a Visual Studio command prompt as administrator and execute MPGO with the correct parameters:

MPGO -scenario "c:\profiles\WinApp.exe" -OutDir "C:\profiles\optimized" -AssemblyList "c:\profiles\WinApp.exe"

  • Exercise the most-used scenarios in your application, then close the application.
  • The IL assemblies with embedded profiles will be generated in the output directory.
  • Run NGen on the IL+profile assemblies:

NGEN "C:\profiles\optimized\WinApp.exe"

  • Run the optimized application.

Final Words

MPGO is a very cool, innovative technology, but its results depend highly on the human factor: which scenarios will you exercise, and how will you run them when the profile gets created? You might end up running it multiple times with different scenarios before you get the launch time you’re looking for.

More Information:

Hope this Helps,

Ahmed

CLR 4.5: Multicore Just-in-Time (JIT)

Well, technically it’s not a new CLR, just an update to the CLR 4.0 libraries: .NET Framework 4.5 replaces your old CLR libraries with brand new bits, but the runtime is technically the same. Running the GetSystemVersion() function on the .NET 4.0 CLR or the .NET 4.5 CLR gives the same version: “v4.0.30319”.

System.Runtime.InteropServices.RuntimeEnvironment.GetSystemVersion()

The new CLR takes advantage of the multicore processor revolution to speed up the application compilation process; since the number of new machines equipped with multiple cores has increased in the last few years, more users can benefit from this feature.

The current startup routine for managed apps is depicted below: the application starts, and the JIT compiler kicks in, compiling methods into native code before they can execute. You can see how this can be a very lengthy process for big server applications with tens of dependencies on managed libraries.

[Image: startup sequence with JIT compilation]

.NET 4.5 introduces the concept of parallel JIT compilation, where a background thread runs on a separate processor core taking care of JIT compilation while the main execution thread runs on a different core. In the ideal scenario, the JIT compilation thread gets ahead of the main execution thread, so whenever a method is required it is already compiled. The question now is: which methods are required for startup, so the background thread can compile them ahead of time? That leads us to the new ProfileOptimization type.

Optimization Profilers

This is a new type introduced in .Net 4.5 to improve the startup performance of application domains in applications that require the just-in-time (JIT) compiler by performing background compilation of methods that are likely to be executed, based on profiles created during previous compilations.

Note that ProfileOptimization requires a multicore machine to take advantage of its algorithms; the CLR ignores it on single-core machines. The idea is fairly simple: ProfileOptimization runs in the background of your managed application’s AppDomain, identifying the methods that are likely to be executed at startup; the next time the application runs, it uses the profile created during the previous run to optimize JIT compilation by compiling those methods so they are ready when needed.

ProfileOptimization is enabled simply by setting the profile root directory location (the directory must exist!) and then starting the profile, which is saved to a file. So with two simple method calls you can create profile files that your application uses on the next run for a faster startup.

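The two calls boil down to something like this (the folder and profile file names here are just examples):

```csharp
using System;
using System.IO;
using System.Runtime;

class Program
{
    static void Main()
    {
        // The profile root must be an existing directory the application can write to.
        string profileDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Profiles");
        Directory.CreateDirectory(profileDir);
        ProfileOptimization.SetProfileRoot(profileDir);

        // Reads "TestProfile" (if present) to pre-JIT likely startup methods on a
        // background thread, and records an updated profile for the next run.
        ProfileOptimization.StartProfile("TestProfile");

        // ... normal application startup continues here ...
        Console.WriteLine("Started with multicore JIT profiling.");
    }
}
```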

There are some simple things to understand about the profiler, quoted from MSDN:

Profile optimization does not change the order in which methods are executed. Methods are not executed on the background thread; if a method is compiled but never called, it is simply not used. If a profile file is corrupt or cannot be written to the specified folder (for example, because the folder does not exist), program execution continues without optimization profiling.

Here’s a quick look at the TestProfile file created by the ProfileOptimization runtime type for the previous sample application:

[Image: contents of the TestProfile file]

JIT compiling with multiple cores is enabled by default in ASP.NET 4.5 and Silverlight 5, so you do not need to do anything to take advantage of this feature. If you want to disable it, make the following setting in the Web.config file:

[Image: Web.config setting]
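For reference, the setting shown is the profileGuidedOptimizations attribute on the compilation element; a sketch of the Web.config fragment (verify against the schema for your framework version):

```xml
<configuration>
  <system.web>
    <!-- "None" disables multicore JIT profiling; the default is "All". -->
    <compilation profileGuidedOptimizations="None" />
  </system.web>
</configuration>
```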

More Information:

Hope this helps,

Ahmed

Visual Debugging Experience in Modern IDEs

A couple of years ago, InfoQ wrote about a tool for Java/Eclipse developers called Code Bubbles. The idea is simple: the current nature of software development IDEs is really file-based (at least most of it, or the most used of it), and there is no convenient way to create and maintain a simultaneous view of different fragments of the code base organized by logical sequencing rather than by the files storing the code.

Each bubble is a fully editable and interactive piece of code (you can think of it as a tiny IDE in a bubble); it can contain a method or a group of methods. The bubbles exist on a 2-D canvas that represents the current working space.

Code Bubbles: Rethinking the User Interface Paradigm of Integrated Development Environments

A collaboration between Brown University (where Code Bubbles was born) and Microsoft Research integrated ideas from Brown’s Code Bubbles project into Visual Studio, resulting in Debugger Canvas, a new power tool for Visual Studio that enables a Code Bubbles-like experience: visual debugging based on code bubbles on a 2-D canvas, which I believe will greatly increase developer productivity and understanding of the code base.

Introducing Debugger Canvas

At the time of writing, Debugger Canvas v1.1 is released to work with Visual Studio 2010 SP1 Ultimate only. The reason, according to Microsoft, is that Debugger Canvas is built on top of Visual Studio Ultimate so that it can reuse the underlying technology for the Dependency Diagrams to identify and display the right fragments on the canvas.

Debugger Canvas offers a great set of features, as listed on the website:

  • Switch between Debugger Canvas and file based debugging with a single click, even in the middle of a debug session
  • Debug multiple threads side by side with each thread and its most recent stack frame easily identified
  • Get an overview of recursive calls by showing one bubble per invocation
  • Navigate easily up and down the call stack in the canvas itself
  • Step into methods on a canvas using the debugger
  • Use the normal debugger features in the canvas
  • Share a canvas as an XPS image
  • Take snapshots of local variables so you can make comparisons as you step through code multiple times
  • Add related methods to the canvas using Go to Definition and Find All References

My favorite feature is the ability to debug multiple threads. The default behavior, though, is to show only one bubble per canvas for all threads sharing that code. If you want different threads to get their own executing bubbles even for shared code, go to Debugger Options in the Visual Studio Options dialog, where you’ll see a new node for Debugger Canvas, and disable the option “Reuse Bubble when the content is the same”.

[Image: Debugger Canvas node in the Visual Studio Options dialog]

Now you can see the same code bubble highlighted with a different color, indicating a different thread’s call to the same function.

[Image: code bubbles highlighted per thread]

I’ve been using Debugger Canvas for a couple of months now and I find it really fun, and in some situations a little disturbing, especially when the visual canvas gets very cluttered with bubbles and requires a lot of scrolling and zooming in; but you can always switch back to the file-based view. Nevertheless, it’s a great tool that I highly recommend, especially for debugging sophisticated multithreaded algorithms, where it helps a lot in understanding what is happening. Give it a shot.

More Information:

Hope this helps,

Ahmed

Agile+

Note: This blog post transferred from my OLD BLOG and was originally posted in 2009.

It has been more than 3 years now that I’ve been part of projects run with Agile. I had the chance to participate in different roles and places across the whole project lifecycle, starting as a software engineer, passing through Architect, Business Analyst, and Team Leader, and finally Project Manager. It was absolutely a long journey with good and bad experiences, joyful and sad moments, difficult and stressful situations, and a lot of lessons learned and knowledge gained!

Going through my past 3 years, I can see clearly the main ideas that made this journey fruitful in the end. I can sum up these ideas in the following points:

  1. Don’t walk ever without a goal and a vision to achieve that goal.
  2. Believe in your work, yourself, your vision, and in your colleagues.
  3. Be Smart, Flexible, and Responsible.

Of course Agile is more than that on the second level of details, but on the first level of main ideas, I think this is what Agile is all about!

Working as a team lead and Project Manager in an Agile process gave me the chance to observe the whole situation from a higher view, a god’s-eye view if you like. Also, working in long-running Agile projects showed me how the process evolves and how people become more familiar with it and adapt to it every day. It gave me the chance to experience and see all the lessons and practices I’d been reading about in Agile books, because a long-running project is a complete life with its ups and downs, failures and successes, good people and bad people; and if you want, you can observe all of that during the long-running project.

This past week, my team and I successfully delivered a 1½-year-long project to the customer. Summing up the good practices and lessons from this experience leads me to the following list:

  1. Focus on People: they are the constructors in this industry; they are everything; they deserve all the attention!
    • Share the vision with them clearly and honestly; people perform better when they understand.
    • Build a bridge of trust between you and everyone on the team; speak with them individually, listen to their thoughts and complaints, take action when necessary, and when something is not possible, explain why!
    • People are more important than processes and tools! Never pay more attention to a tool than to a person on your team.
    • Give them space to perform; back off and let them finish the work; observe the progress closely but don’t let them feel it. The more the pressure increases in the project, the more space you need to give them so they can finish!
    • Show them which goals they are hitting; share the goal achievement lists with them; talk with them as a group, explaining what the goals were and what was achieved; speak with each member individually, showing him the goals he needed to achieve, which ones he hit, and which ones he missed!
    • Be positive. People are very sensitive to negative emotions; always be positive with them, give them hope and inspiration.
    • Praise them with any means you have at hand, even if it is just a few words of congratulations.
    • Make a family of your team; get them close to each other. Let them sit together regularly, work in the same location and space, and hang out with them.
    • Understand their needs, to be able to motivate them.
    • Celebrate success with them.
  2. Build a Team:
    • Think of a small group of members (5 to 8).
    • Hire the right team members; interview them yourself, test them yourself.
    • Use a cross-functional team; everyone works everywhere in the project.
    • Build a sense of responsibility in them; teach them how to value their work and care about their final output.
    • Agile teams are self-organized; don’t just pick someone on the team to be responsible, after you, for organizing people; instead, teach them how to plan and think about priorities, then leave them to organize themselves.
    • Evaluate the team as a unit; don’t just evaluate each member individually. Evaluate the whole team as one unit, including yourself, share the result with them, and think with them about where the problems are and how to fix them.
  3. Be Simple, Be Incremental:
    • Start by thinking about your system as advanced and huge as you can imagine.
    • Set clear, defined goals for your project, and incrementally build it step by step.
    • In your first design, focus on solving the project’s risks as simply as you can.
    • Keep extensibility in mind, but always remember that simplicity should be in the same equation!
    • Work iteratively.
  4. Simple, Executable, Collectively Owned Architecture:
    • Don’t build the system architecture by yourself; instead, lead the building process; give everyone on the team the chance to contribute their ideas to the design.
    • Build an executable architecture; use small spikes/prototypes to sketch your ideas in code; don’t just sketch ideas on paper.
    • Break the design down into small pieces and let each person on the team work on a piece.
    • Use simple tools (whiteboards) for communicating your thoughts and ideas; architectural diagrams and models were made for that purpose; teach your team to think and express themselves in visual ways.
    • The more you hear “I designed this...” from your team members, the more confident you can be that you are on the right track, because you have built a collectively owned architecture.
    • Start with a simple, executable, and extensible architecture at the project’s start, and cumulatively enhance it and add new features to your design all the way to the project’s end!
  5. Accept Changes All the Time:
    • Avoid defining all the requirements at the start of the project! Instead, try to understand the available requirements.
    • Hunt down as many assumptions as you can at the start; move your goal from knowing everything to having clearly identified unknowns; remember that unresolved assumptions become risks during project construction!
    • Build a clear, simple product backlog at the start containing the main project decisions and features.
    • Dive deeply into a feature’s details at the start of the iteration containing that feature!
    • Accept the fact that requirements and priorities change, and work with that from the beginning!
    • Be honest and advise the customer; customers accept negotiation, and sometimes they don’t see the clear picture, which is why they ask for changes. Be their technical eye and let them feel you are working for their benefit, and please do.
  6. Beautiful Code:
    • Select a standard way to write the code; the more people from the team are involved in choosing the standards, the more people will comply with them.
    • Emphasize to people why we need to follow standards.
    • Teach them to use comments wherever applicable, as a guide on the road for other colleagues.
    • Enhance and review your standards as the project goes on.
    • Use code review meetings as a tool to share experience between team members and enhance the project’s code.
    • Encourage code spikes, pair programming, TDD, and any practice the team feels will increase its productivity, as long as it doesn’t hurt the project’s tradeoffs.
  7. Testing Means Done:
    • Having a specialized testing role in your project is not a bad idea, but making that role responsible for the whole testing process is a bad practice.
    • Everybody, including you, should be part of the testing process.
    • Testing a feature is required to mark it “Done”; that’s why the developer must be involved in the testing scenarios.
    • Testers are no less than developers; they carry 50% of the product’s manufacturing.
    • Teach developers how important unit tests are for their code, as an early safety net that discovers issues before the functional tests.
    • Emphasize collective ownership of unit tests: anyone who changes anything needs to keep the previous unit tests green and add new green unit tests for his modifications.
    • Use different types of tests depending on your project type, but any project should have at least 3 main types of tests: (a) unit tests, (b) functional tests, (c) acceptance tests.
    • Everybody in the project should be part of the testing cycle, from the customer (acceptance tests) to the developers (unit tests); the more you can involve people in running tests on the project, the better the quality of the product you will have at the end.
    • Whenever possible, go for documented tests.
  8. Art of Documentation:
    • Fact: no one loves writing documents, and no one reads them; yet there is a very thin slice of docs that really matter.
    • Don’t make people fill in documents that are not important or that you know will not be useful.
    • Show people why a document is useful before they write it; teach people how to use the project documents instead of asking each other; whenever they use some doc they wrote before, remind them how that doc is helping them now.
    • Use visual documents: diagrams, flowcharts, charts, and images.
    • Minimize documentation to the absolute necessary.
    • Everyone owns every document; we are working together, and anyone is allowed to update any document as long as there is a good reason and we keep a history of the changes.
  9. Smart Process:
    • If you don’t have a process to start with, build a simple one that fits the current environment and people, then start enhancing it as the project moves.
    • If you do have a process, examine whether it fits the current environment and people; make the necessary changes, then start immediately.
    • Don’t copy processes and rules from books; be smart: read what others do, then select and adapt the practices you can use in your current project.
    • Discuss your process with the team; there is always room for improvement.
    • Enhance one practice or activity each phase; don’t make major changes. People hate major changes: they don’t believe they will do anything, and they take time to adapt to them.
    • Keep your process stable for some time; people need to be comfortable with the process before they can contribute ideas and enhancements!
    • Forget about tools; focus on practices.
    • Processes are not written in stone; make changes whenever you feel you need them, and be flexible when situations require it.
    • Make your process clear and understandable for everyone in the team; don’t hide anything or keep any secret routes in the process.
  10. A Manager for A Project:
    • Be transparent, honest, direct, and positive.
    • Teach people that management is one of the project roles; it’s not a higher level or position.
    • Know that “you can delegate tasks, but you can’t delegate responsibility”; you are always responsible, even if it’s not your fault.
    • Responsibility is shared among everyone in the team; if one fails, we all fail!
    • Smash the manager’s door and its red lamp; work closely with your team, sit next to them, be transparent about your work, share information about your tasks whenever possible, and don’t just say “I’m busy” when you are needed; explain why you are busy.
    • Lead people, manage projects.
    • Listen to the team, listen to everyone’s thoughts and complaints, build trust between you and the people who work with you, and make them comfortable talking about problems and criticizing anything they want, including you!
    • Delegate some of your tasks to the team whenever possible, so they can see and feel what kind of work you are doing.

Of course there is more to Agile. I tried to collect the best practices from my experience in this long project; I thought it might be helpful. As for myself, I’m turning this page on the past 3 years of my career and looking forward to the new phase of my Agile career life. Sometimes I think to myself: this is my Agile + Smile.

Hope this Helps,
Ahmed

The Future of Programming Languages

Note: This blog post transferred from my OLD BLOG and was originally posted in 2008.

A fascinating expert panel discussion at PDC 2008 about the future of programming, featuring some of the leading programming language experts in the world: how programming will be affected by a number of fundamental changes occurring, like many-core machines, cloud computing, and more, and what the biggest challenges facing the industry are.

The Panel’s Speakers:

Anders Hejlsberg
Technical Fellow in the Developer Division. He is an influential creator of development tools and programming languages. He is the chief designer of the C# programming language and a key participant in the development of the Microsoft .NET framework.

Gilad Bracha
The creator of the Newspeak programming language. He is currently Distinguished Engineer at Cadence Design Systems. Previously, he was a Computational Theologist and Distinguished Engineer at Sun Microsystems. He is co-author of the Java Language Specification, and a researcher in the area of object-oriented programming languages.

Douglas Crockford
Discovered the JSON data interchange format. He is leading an effort at ECMA to develop a secure successor to JavaScript. He is the author of JavaScript: The Good Parts.

PDC 2008: The Future of Programming Languages

Wolfram Schulte
A principal researcher and the founding manager of the Research in Software Engineering area at Microsoft Research Redmond, USA. Wolfram’s research concerns the practical application of formal methods. At Microsoft, Wolfram co-led research projects on language design and runtimes (the AsmL, Cw, TPL projects), software testing (the Pex, SpecExplorer, and nModel projects), software analysis and verification (the Spec#, Vcc, and Hypervisor projects), and lately model-driven engineering of applications for the cloud (Formula and BAM).

Jeremy Siek
An Assistant Professor at the University of Colorado. Jeremy’s areas of research include generic programming, programming language design, and compiler optimization. Jeremy received a Ph.D. at Indiana University in 2005. His thesis laid the foundation for the constrained templates feature in the next C++ Standard. Also while at Indiana, Jeremy developed the Boost Graph Library, a C++ generic library for graph algorithms and data structures.

Topics Discussed:


  • What’s the biggest problem the industry is facing today, and how do programming languages tend to solve it?
  • What’s the difference between modeling and programming?
  • Is programming language syntax important, or does it really not matter?
  • What’s the difference between creating a new library and creating a language extension?
  • When will we see metaprogramming concepts in other mainstream programming languages?
  • Is “eval” in metaprogramming languages evil?
  • In the time of IDEs, do languages really still matter?
  • Is C++ the universal language that has everything you need?
  • Why can’t we evolve and represent our programs in a structured way, to allow all sorts of metaprogramming?
  • What’s your opinion about transactional memory and software/hardware hybrids?
  • Why keep adding things to programming languages?
  • Can structural types and type inference bridge the gap between the dynamic and static language worlds?
  • Do contracts seem to be in the next generation of programming languages?
  • Why does C# want to be JavaScript?

Discussion Goodies:

  • Gilad: In today’s programming languages you can’t do the “What If” experiments easily.
  • Douglas: In the late 1950s there was some work on a new concept called automatic programming; the idea behind it was that programming is hard, and we just need a higher-level way to specify what needs to be done so the computer can write the program for you. The result of this work was “Fortran”!
  • Douglas: Programming is so hard that we keep pushing to some place that’s not so hard, but eventually that becomes programming too!
  • Jeremy: Modeling is always inherently a higher level of abstraction than programming, and it doesn’t have to be.
  • Anders: Our brain is a pattern recognizer, and syntax is one of the patterns that we feed in.
  • Gilad: The smaller the issue, the more people discuss it.
  • Douglas: We’d like to think of the process of programming language design as a scientific process, or at least an engineering process, but it’s really all about fashion!
  • Douglas: The success of programming languages has more to do with fashion than with the quality of the ideas behind them; I think that’s unfortunate, but I think that’s real.
  • Wolfram: If you actually get older, you care more about the inner values of things and don’t care so much about the outer interface.
  • Jeremy: As humans we come up with new notations, new abstractions, and new languages all the time. So in some sense I think when you try to limit that, you get weird things happening.
  • Jeremy: I think we are really hurting ourselves by not allowing full programming language extensions.
  • Anders: With C# 3.0 we built a DSL called LINQ.
  • Anders: I think enterprise apps today live and die by metaprogramming; pretty much every app we write has some form of code generation.
  • Douglas: Eval as it is currently implemented in JavaScript is clearly evil.
  • Wolfram: Metaprogramming at the operating system level is really, really a bad idea; you don’t want to open the box.
  • Gilad: Metaprogramming is all about one program manipulating another; reflection is about a program manipulating itself.
  • Gilad: It’s very important that you don’t live in these closed worlds; ideally you have the kind of IDE that supports multiple languages and lets them interoperate.
  • Anders: Frameworks and IDEs, in terms of the learning effort these days, are just as important as the languages. But that doesn’t mean that the languages are not important; they just occupy a different position in the ecosystem.
  • Wolfram: I think that languages, and maybe even the libraries, don’t really matter so much; what you really want to see is “What’s the problem you want to solve?”, and then based on the problem you pick the right languages, libraries, and IDEs.
  • Wolfram: Pick the language and IDE that are appropriate for your domain, and don’t stick to one language and use it for everything you want to do.
  • Anders: If you look at database systems, they are much more restrictive about what operations you can apply to data.
  • Anders: Completely functional programming languages outlaw program side effects.
  • All the speakers: Languages should be designed by dictators, not by committees!
  • Douglas: The community that uses a language is like a business: when the language is small and experimental, it is like a startup, and a startup is allowed to do crazy things. But once the community gets big, it’s like a big company, and big companies have to be staid; though a big company might want to claim it’s innovative, it really can’t. It has to have constraints on its ability to innovate, because if it gets wild it will tear itself apart.
  • Douglas: The more people you have using a language, the less license we should have to modify that language.
  • Douglas: Even though more people write JavaScript than any other language, most of them hate it!
  • Douglas: There is no love in the JavaScript community.
  • Anders: I think it’s crazy to say every language should be static all the way down, or that we should toss types. Neither is correct; it’s somewhere in between.
  • Jeremy: What we need is a gradual type system that lets you mix real dynamic typing, where the checks happen at runtime, with static typing, where you actually have the checks.
  • Douglas: A few years ago dynamic languages were definitely out of style; today they are in. Sounds like fashion to me.
  • Anders & Douglas: Learning lots of new languages is good for you if you program mainly in one language. It makes your mind think in different ways, and this always helps you.

I think this is a very interesting, informative, yet very funny discussion that you must see if you are in the software industry.

Download TL57 Panel: The Future of Programming Languages

Hope this Helps,

Ahmed

Top 10 Code Review Tips & Tricks

Note: This blog post transferred from my OLD BLOG and was originally posted in 2007.


Working closely with a lot of Agile teams, I discovered that people new to the Agile process have some issues related to the purpose of the code review process and code review meetings. Lately I’ve been coaching a new .NET development team on Agile, and I came across the same situation again, with the same old question: what is code review for?

Code review has two main purposes:

First, to make sure that the code being produced has sufficient quality to be released. In other words, it’s the acid test for whether the code should be promoted to the next step in the process.

Second, as a teaching tool to help developers learn when and how to apply techniques to improve code quality, consistency, and maintainability. By thoughtfully evaluating code on a recurring basis, developers have the opportunity to learn different and potentially better ways of coding.

To achieve the best case for code review scenarios, I created a simple top-10 list of techniques that helped me a lot through the projects I worked on (as a developer, team lead, architect, or even system designer) to enhance and accelerate the code review process for my code.

  1. CodeReview.Checklist.Add().

Start creating your own checklist of the areas code reviews will cover; you may follow the company’s coding standards to start creating your checklist. Go through your code with the checklist and fix whatever you find. Not only will you reduce the number of things the team finds, you’ll reduce the time to complete the code review meeting, and everyone will be happy to spend less time in the review.

  2. CodingStandards.Update().

One of the challenges a developer faces in a company with combative code review practices is that they frequently don’t know where the next problem will come from. Rapidly updating the coding standards will help developers learn from past lessons and minimize the chance of the same issue showing up in the next review meetings.

  3. Developer.Me != Developer.Code.

Remember, development is about creativity. If you have some bugs in your code, this doesn’t necessarily mean you are a bad developer. Your colleagues who are reviewing the code generally aren’t trying to say that you’re a bad developer (or person) by pointing out something that you missed, or a better way of handling things. They’re doing what they’re supposed to be doing by pointing out better ways. Even if they’re doing a bad job of conveying it, it’s your responsibility to hear past the attacking comments and focus on the learning that you can get out of the process. You need to strive not to get defensive.

  4. Developer.Code.UnitTest().

Always remember that writing comprehensive unit tests will save a lot of the time we would traditionally spend reviewing for proper functionality. Instead, we focus on improving the functionality of code that has already been shown to work.

  5. Developer.Code.IsObjectOriented.

Make sure you are always following object-oriented best practices. Every single piece of code can be written in the usual old structured way, BUT this is not the way we think; refactor your code to be object-oriented.

  6. Developer.Code.DesignPatterns.Add().

Whenever possible, use commonly known design patterns that suit your situation. Design patterns are widely known in the developer community, have gained its trust and confidence, and have been tested in a lot of situations, proving their fitness. So be on the safe side and use them if you need them.

  7. Developer.Code.IsComplex.

Think Agile and keep it simple all the time; assumptions lead to complex design and complex code. Focus on the current problem and use the shortest way to solve it. This will reduce your code’s complexity and, in turn, the probability of bugs.

  8. Developer.Code.HandleExceptions().

The code required to handle exceptions (try/catch) is the cheapest code in the world; it will cost you nothing. So don’t forget to handle your code’s expected and generic exceptions.

  9. Developer.Code.Build.IsFreeOfCodeAnalysisWarning.

The ideal case is for your code to build properly without any code analysis warnings. Always remember that a code analysis warning is your friend; it helps you avoid future bugs and security vulnerabilities that might affect your code. So pay attention to them.

  10. Developer.Code.IsClean.

Follow the naming conventions your company’s coding standards prescribe. Keep code files neat and clean. Remove any redundant code or comments. Organize your code into regions with descriptive names. Avoid grouping multiple types in one code file. Use XML tags to document types and members. Simply learn the art of beautiful code.

Hope this Helps

Ahmed

Silent Overflow in CLR

Note: This blog post transferred from my OLD BLOG and was originally posted in 2007.

One of the things you want to avoid in your arithmetic operations on primitive types is the silent overflow. For example:
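The original code image didn’t survive the transfer from the old blog; here is a minimal sketch of the kind of narrowing cast the paragraph below describes (the variable names are illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        int x = 300;
        // The cast keeps only the low 8 bits of 300 (0x012C),
        // so y silently becomes 44 (0x2C) instead of 300.
        byte y = (byte)x;
        Console.WriteLine(y); // prints 44
    }
}
```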

 
As you can see, the value of y is not the desired value; this is because of the illegal operation on two different primitive types. What just happened is known as a silent overflow. This kind of silent overflow is handled differently by each programming language: a language like C# doesn’t consider this kind of overflow an error, which is why it allows it by default; other languages like VB always consider this kind of overflow an error and throw an exception.
 
What matters here is the desired behavior the program needs; in most scenarios you will want this kind of silent overflow not to happen in your code, or you will get strange, unexpected behaviors.
 
Fortunately, the common language runtime (CLR) offers IL instructions that make it a very easy task for each .NET language compiler to choose the desired behavior:
 
add: adds two values together, with no overflow checking.
add.ovf: adds two values together; however, it throws System.OverflowException if an overflow occurs.
sub: subtracts two values, with no overflow checking.
sub.ovf: subtracts two values; throws System.OverflowException if an overflow occurs.
mul: multiplies two values, with no overflow checking.
mul.ovf: multiplies two values; throws System.OverflowException if an overflow occurs.
conv: converts a value, with no overflow checking.
conv.ovf: converts a value; throws System.OverflowException if an overflow occurs.

Now let’s take a closer look at how the C# compiler takes advantage of this set of IL instructions. First, by default the C# compiler doesn’t use the overflow-checking instructions, so the code runs faster; this means that the add, subtract, multiply, and conversion instructions generated from C# will not include overflow checking.

As you can see, the C# compiler generated IL that uses the add instruction, since the default behavior for C# is to allow silent overflow. Generally, you can turn that off by compiling your code with the /checked+ compiler switch, which tells the C# compiler to use the safe version of add with overflow checking, add.ovf, which prevents this kind of silent overflow and throws an OverflowException if an overflow occurs.

Sometimes you don’t want to turn on overflow checking globally for the whole application; rather, you want to enable it only in some places in your code. That’s where the checked and unchecked C# operators come into the picture.

Simply put, the checked operator tells the C# compiler to use the safe IL instruction for the operation, and unchecked uses the normal IL instruction (the default behavior in C#).

Now let’s modify our simple example to use the checked operator:
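The modified snippet was also an image that was lost; a sketch of what the checked-operator version looks like (values chosen to force an Int32 overflow):

```csharp
using System;

class Program
{
    static void Main()
    {
        int a = 2000000000;
        int b = 2000000000;
        try
        {
            // checked makes the compiler emit add.ovf instead of add,
            // so the overflowing addition now throws at runtime.
            int sum = checked(a + b);
            Console.WriteLine(sum);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}
```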

Using the checked operator in our example generates an OverflowException, because the C# compiler generated the IL using the safe add instruction, add.ovf, as you can see.

C# also offers checked and unchecked statements for a wider scope of overflow checking, still determined by the developer inside his/her code.
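A sketch of the statement form, where everything inside the block is compiled with overflow checking:

```csharp
using System;

class Program
{
    static void Main()
    {
        int a = int.MaxValue;
        checked
        {
            try
            {
                // Every arithmetic operation in this block uses the
                // .ovf IL instructions, so this addition throws.
                int bigger = a + 1;
                Console.WriteLine(bigger);
            }
            catch (OverflowException)
            {
                Console.WriteLine("overflow detected");
            }
        }
    }
}
```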

Same result, as you can see: an OverflowException has been thrown, and the generated IL uses add.ovf. The IL just gets larger because the C# compiler generated more no-operation (nop) instructions, which you can get rid of by generating a more optimized build of your app.

Hope this Helps,

Ahmed

CLR 4.0: Code Contracts

Note: This blog post transferred from my OLD BLOG and was originally posted in 2008.

As part of making the .NET Framework fully support Design by Contract approaches, the BCL team, along with the CLR team, integrated a very powerful project from Microsoft Research (MSR) named Code Contracts. What code contracts do for you is give you a declarative, integrated, and easy way to express all the facts you already know about your program by design, to make it work better.

Before, you used to do that using Assert statements, or even some comments on top of your API telling people which rules they should follow in order to use your methods correctly. One major thing about those approaches is that they weren’t a must to follow; on the contrary, Code Contracts are enforced, checked statically at compile time and dynamically at runtime.

Note: in this CTP build of Visual Studio and the .NET Framework, the static analysis tools are not supported yet, but dynamic runtime checking is supported.

Microsoft decided to provide Code Contracts as a library solution instead of a language solution, to make them widely usable by all languages on the .NET Framework. So, new in the .NET Framework 4.0 BCL, under System.Diagnostics.Contracts you will find the code contracts library.

All code contracts are grouped under the CodeContract static class as static methods that return void. Each contract method has two overloads: one takes a Boolean expression that must be true for the contract to be valid, and another takes the expression plus a message to display in case of contract violation. Things you should know about contracts:

  • They are declarative.
  • They come at the beginning of the method.
  • Think about them as part of the method signature.
  • Your contracts can contain only Boolean expressions.
  • You can put method calls inside your contracts, but those methods must be marked as Pure.

How to use Contracts

Defining contracts is as easy as adding a couple of checking lines at the start of the method; check the following sample calculator class to see how you can use contracts inside your classes:

public class Calculator
{
    /// <summary>
    /// Adds 2 numbers
    /// </summary>
    /// <param name="x">First number, must be greater than zero</param>
    /// <param name="y">Second number, must be greater than zero</param>
    /// <returns>Sum of the 2 numbers</returns>
    public int Add(int x, int y)
    {
        CodeContract.Requires(x > 0 && y > 0);

        return x + y;
    }

    /// <summary>
    /// Subtracts 2 positive numbers
    /// </summary>
    /// <param name="x">First number, must be larger than the second</param>
    /// <param name="y">Second number</param>
    /// <returns>Result of the subtraction, can't be negative</returns>
    public int Sub(int x, int y)
    {
        CodeContract.Requires(x > y);
        CodeContract.Ensures(CodeContract.Result<int>() > -1);

        return x - y;
    }
}
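A quick usage sketch against the Calculator class above (this assumes the CTP-era CodeContract API and that runtime contract checking is enabled):

```csharp
// Preconditions hold, so these calls run normally.
var calc = new Calculator();
Console.WriteLine(calc.Add(2, 3)); // 5
Console.WriteLine(calc.Sub(5, 2)); // 3

// calc.Sub(1, 5) would violate the precondition x > y and,
// with runtime contract checking enabled, fail at method entry.
```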

Types of Contracts

CodeContracts has 3 main types:

1. Preconditions

Use them to validate your method parameters; they must be true at method entry for the method to execute successfully. It is the caller’s responsibility to make sure these conditions are met.

    • CodeContract.Requires(x>= 0);
    • CodeContract.RequiresAlways(x>= 0);

The difference between Requires and RequiresAlways is that the latter is always included, even in release builds, so you can use it for contracts that you want to keep in your release.

2. Postconditions

They are declared at the beginning of a method, just like preconditions; the tools take care of checking them at the right times. Their job is to ensure that the method exits successfully.

    • CodeContract.Ensures(z != null); // must be true if the method exits successfully
    • CodeContract.EnsuresOnThrow<IOException>(z != null); // guarantees some variable’s state in case a specific exception is thrown

Also, it is very often necessary to refer to certain values in postconditions, such as the result of the method or the value of a variable at method entry. The CodeContract class allows this through some special referral methods; they are valid only in a postcondition.

    • CodeContract.Ensures(CodeContract.Result<Int32>() >= 0); // represents the value returned from the method
    • CodeContract.Ensures(x > CodeContract.OldValue(x)); // represents the value as it was at method entry; it captures the pre-call value in a shallow copy

3. Object Invariants

They are contracts that must be true at all public method exits of an object. They are contained in a separate method marked with the ContractInvariantMethodAttribute. The method must be parameterless and return void, and it contains some number of calls to the CodeContract.Invariant method.

[ContractInvariantMethod]
void ObjectInvariant()
{
    CodeContract.Invariant(someData >= 0);
}

What do you ship?

The final question that matters is what I’ll deliver to the customer. In reality, Code Contracts have a very intelligent way of generating their own contract assemblies: you get your normal .NET assemblies without contracts, which you can release to your customers without worrying about the performance hit, and another assembly that contains only the contracts, without any code. You can deliver those contract assemblies to the customers that you want to have the same code-checking experience you had at compile time.

More Information:

  1. Microsoft Research’s code contracts website.
  2. PDC 2008: PC49, Microsoft .NET Framework – CLR Futures (Video|PPTX).
  3. PDC 2008: TL51 Research: Contract Checking and Automated Test Generation with Pex (Video|PPTX).
  4. Introduction to Code Contracts [Melitta Andersen].

Related Articles:

  1. CLR 4.0: Type Embedding.
  2. CLR 4.0: New Enhancements in the Garbage Collection
  3. CLR 4.0: Corrupted State Exceptions

Hope this Helps,

Ahmed

CLR 4.0: Dynamic Languages Support

Note: This blog post transferred from my OLD BLOG and was originally posted in 2008.

As more and more dynamic languages start to show up in the .NET world, like IronPython, IronRuby, and F#, the need for some changes in the CLR and the BCL to natively support those languages becomes important. But what’s really beautiful about the CLR is its well-designed framework: eventually Microsoft discovered that the changes they needed to make were not that many. Jim Hugunin built a full Dynamic Language Runtime (DLR) on top of the CLR, and he didn’t ask them for many changes.

Some of the changes in CLR 4.0 to support functional and dynamic languages are:

    1. BigIntegers
    2. Tuples

BigIntegers

Jim Hugunin asked Anders Hejlsberg what the sum of 2 billion and 2 billion is; Anders answered: -2 billion :D. This joke was the primer for a lot of the last PDC sessions! And the fact that the C# compiler mirrors the fixed-size integer arithmetic of the current microprocessor architecture is not something the C# community should be ashamed of, because this is what the microprocessor is designed to do, and you don’t want different behaviors. But still, in normal numeric processing you expect the natural behavior: adding two large numbers like 2 billion and 2 billion should give you 4 billion, and that makes all the sense.

So, if you try to execute the operation (2000000000 + 2000000000) with the current C# compiler, you get the following result; of course, the compiler is smart enough to detect that you are doing an operation that will cause an arithmetic overflow, so you have to use the unchecked keyword to stop the compiler’s check for arithmetic overflow (for more details, review my post: Silent Overflow in CLR).

New in CLR 4.0 and BCL 4.0, you will find a new type, BigInteger, under the System.Numerics namespace. The CLR team cooperated with the Optima team (“Microsoft Solver Foundation”) to provide a reliable, optimal, and fast BigInteger type in the .NET base class library. Supporting this new type as a library solution means all languages can benefit from this functionality, dynamic or not; so C# will have BigIntegers, and so will IronPython.
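A minimal sketch of the new type in use, revisiting the 2-billion joke (assuming the System.Numerics namespace as shipped):

```csharp
using System;
using System.Numerics;

class Program
{
    static void Main()
    {
        BigInteger a = 2000000000;
        BigInteger b = 2000000000;
        // BigInteger grows as needed, so the sum is the mathematically
        // correct 4 billion rather than a wrapped 32-bit value.
        Console.WriteLine(a + b); // 4000000000
    }
}
```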

Tuples

The tuple is a mathematical term, and it simply means an ordered list of values; those values are the internal items of the tuple, and you can access them by referencing their position in the sequence directly. To be more specific, a tuple is a kind of on-the-fly struct: you don’t have to define names for its internal items; you just access them by index!

Tuples were requested by the F# and IronPython teams, and they are supported natively in those languages. The CLR team worked hard with all the language teams and came up with a final implementation of a Tuple library that everyone is happy with.

So, new in .NET 4.0 under the System namespace, you will find a generic type called Tuple that takes up to 7 generic type parameters, although I haven’t seen tuples carrying that many items on the fly! With a library solution, other languages like C# do not need native tuple support, so there will be no tuple keyword in C#. Here is an example of how to use tuples:

var tuple1 = Tuple.Create(4, "ahmed");
var item1 = tuple1.Item1;
var item2 = tuple1.Item2;



A typical usage for a tuple is as the return type of a method that needs to return multiple values; instead of using out parameters you can use a tuple, because it’s easy to create on the fly. For instance, in this quick example I’ve built a function that adds a new item to a collection. The logic of the function is to first check whether the item already exists in the collection; if the item is found, the function returns a Boolean value of false, indicating failure to add the item, together with the index of the existing item. You could of course use an out parameter to do the same job, which I still prefer!

public static Tuple<bool, int> AddItem(string item)
{
    Tuple<bool, int> result;

    // If the item exists in the collection, return a tuple with
    // false and the index of the existing item in the collection.
    if (collection.Contains(item))
    {
        result = Tuple.Create(false, collection.IndexOf(item));
    }
    // If the item doesn't exist in the collection, add it and return
    // a tuple with true and the index of the inserted item.
    else
    {
        collection.Add(item);
        result = Tuple.Create(true, collection.Count - 1);
    }

    return result;
}

One last piece of advice about tuples: don’t use them in public APIs and methods; it makes your code harder to understand. Although it’s easy to create tuples, the internal data structure of the tuple must be documented very well and known by the user of your function in order to use it right; with out parameters, on the contrary, the user of your method just needs to read your parameter name to understand its job!
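For comparison, here is a self-contained sketch of the same add-or-report logic in the out-parameter style the post compares tuples to (the Catalog class and its backing list are illustrative):

```csharp
using System;
using System.Collections.Generic;

static class Catalog
{
    private static readonly List<string> collection = new List<string>();

    // Same behavior as the tuple-returning AddItem above: the Boolean
    // return reports success, and the named out parameter carries the index.
    public static bool TryAddItem(string item, out int index)
    {
        if (collection.Contains(item))
        {
            index = collection.IndexOf(item);
            return false;
        }

        collection.Add(item);
        index = collection.Count - 1;
        return true;
    }
}
```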

More Information:

  1. PDC 2008: PC49, Microsoft .NET Framework – CLR Futures (Video|PPTX).
  2. PDC 2008: TL10 Deep Dive, Dynamic Languages in Microsoft .NET (Video|PPTX).
  3. Wikipedia: Tuple

Related Articles:

  1. CLR 4.0: Type Embedding.
  2. CLR 4.0: New Enhancements in the Garbage Collection.
  3. CLR 4.0: Corrupted State Exceptions.
  4. CLR 4.0: Code Contracts.

Hope this Helps,

Ahmed

CLR 4.0: Corrupted State Exceptions

Note: This blog post transferred from my OLD BLOG and was originally posted in 2008.


Thread Created by the CLR

Thread Created outside the CLR

Threads that can run managed code can be classified into two types.

  1. There are threads that are created by the CLR, and for such threads, the CLR controls the base (the starting frame) of the thread.
  2. There are also threads that are created outside the CLR but enter it at some later point to execute managed code; for such threads, the CLR does not control the thread base.
The CLR uses the following algorithm to trigger and handle exceptions thrown on CLR-created threads. The CLR walks up the exception call stack; if it can’t find a managed exception handler by the time it reaches Main, the exception reaches the native frame within the CLR where the thread started. In this frame, the CLR has established an exception filter that applies the policy of swallowing (blindly catching) exceptions, if applicable. If the policy indicates not to swallow the exception (the default in .NET Framework 2.0 and later), the filter triggers the CLR’s unhandled exception processing.

The Unhandled Exception Processing History in .NET

In the .NET Framework 1.0 and 1.1, unhandled exceptions on threads created within the CLR were swallowed at the thread base (the native function at which the thread started in the CLR). This is the behavior we don’t want, since the CLR has no clue about the reason the exception was raised in the first place. Swallowing such an exception is a mistake, since the extent of application or process state corruption cannot be determined. What if the exception was the kind that indicates a corrupted process state, such as an access violation?

In the .NET Framework 2.0, this behavior was changed. Unhandled exceptions on threads created by the CLR are no longer swallowed. If the exception is not handled by any managed frame on the stack, the CLR will let it go unhandled to the OS after triggering the unhandled exception process. The unhandled exception, then, will result in an application crash.
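If you want a last-chance hook before the crash (for logging, not for recovery), the AppDomain.UnhandledException event fires as part of that unhandled exception processing. Here is a minimal sketch of my own (the handler body and message are illustrative, not from the original post):

```csharp
using System;

public class CrashLogger
{
    public static void Main()
    {
        // Fires during unhandled exception processing; the process
        // still terminates afterwards -- this hook is for logging only.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            Console.Error.WriteLine("Unhandled: " + e.ExceptionObject);
        };

        throw new InvalidOperationException("boom"); // goes unhandled to the OS
    }
}
```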

And for backward compatibility with legacy applications built on top of CLR 1.0, CLR 2.0 provided a flag that could be set in the runtime section of the application configuration file to restore the old behavior of swallowing unhandled exceptions:

<configuration>
  <runtime>
    <legacyUnhandledExceptionPolicy enabled="true"/>
  </runtime>
</configuration>

Super Exceptions (New in CLR 4.0)

CLR 4.0 will no longer let you catch exceptions that may corrupt the state of the currently running process. CLR 4.0 introduces the new concept of Corrupted State Exceptions (I like to think of them as Super Exceptions): exceptions that could corrupt the state of the process, causing loss of user data or strange application behavior. Examples include:

  • Access Violation Exception
  • Invalid Memory Exception
  • etc ..

These kinds of exceptions are dangerous, and most of the time the best course of action is to stop processing as quickly as you can, before you lose important user data or cause the application to behave in unexpected ways.
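When your own code detects state it cannot trust, the same fail-fast philosophy applies. As a minimal sketch of my own (the type and message are illustrative), Environment.FailFast terminates the process immediately without running catch blocks or finalizers:

```csharp
using System;

public static class InvariantGuard
{
    public static void EnsureBufferValid(byte[] buffer)
    {
        // If a core invariant is broken, don't try to limp along:
        // FailFast writes an event-log entry and kills the process
        // without unwinding the stack or running finally blocks.
        if (buffer == null)
            Environment.FailFast("Buffer invariant violated; process state is untrusted.");
    }
}
```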

A sample scenario where such an exception could occur: if you run the following code while another process is accessing “file.txt”, you may get an Access Violation Exception. In .NET 2.0 you can catch and swallow this kind of exception, as in this code snippet; but that will no longer be the case. The catch statements will not be able to see these Corrupted State Exceptions any more: no matter what, they will surface and stop the process from continuing its work.

using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        SaveFile("file.txt");
        Console.ReadLine();
    }

    public static void SaveFile(string fileName)
    {
        try
        {
            FileStream fs = new FileStream(fileName, FileMode.Create);
        }
        catch (Exception ex)
        {
            Console.WriteLine("File open error");
            throw new IOException();
        }
    }
}

The CLR team knew that there are some rare circumstances in which you will need to handle those Corrupted State Exceptions (“the Super-Exceptions”), perhaps to run some logging routine, but not to continue processing. So CLR 4.0 provides a new attribute called HandleProcessCorruptedStateExceptions, which you can use to decorate the methods in which you want to run logging routines or other notification scenarios for those super exceptions.

[HandleProcessCorruptedStateExceptions]
public static void HandleCorruptedStateException()
{
    // Write your Super-Exception notification code here:
    // log, send mail, etc.
}
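To make the attribute’s effect concrete, here is a fuller sketch of my own (the method name and message are illustrative, not from the original post): a method decorated with the attribute can once again observe process-corrupting exceptions such as AccessViolationException, log them, and then rethrow rather than continue.

```csharp
using System;
using System.Runtime.ExceptionServices;

public static class CorruptedStateLogger
{
    [HandleProcessCorruptedStateExceptions]
    public static void RunWithLastChanceLogging(Action work)
    {
        try
        {
            work();
        }
        catch (AccessViolationException ex)
        {
            // Visible here only because of the attribute above.
            Console.Error.WriteLine("Corrupted state detected: " + ex.Message);
            throw; // log, then let the process die -- do not continue.
        }
    }
}
```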
Also, for backward compatibility purposes, CLR 4.0 provides a new process-wide configuration switch you can set to restore the old behavior of catching those Corrupted State Exceptions, as in previous versions of the CLR:
<configuration>
  <runtime>
    <legacyCorruptedStateExceptionsPolicy enabled="true"/>
  </runtime>
</configuration>

Related Stuff:

  1. PDC 2008: PC49, Microsoft .NET Framework – CLR Futures (Video|PPTX).
  2. Unhandled Exception Processing In The CLR (Gaurav Khanna)
  3. CLR 4.0: Type Embedding
  4. CLR 4.0: New Enhancements in the Garbage Collection

Hope this Helps,

Ahmed

CLR 4.0: New Enhancements in the Garbage Collection

Note: This blog post transferred from my OLD BLOG and was originally posted in 2008.

image

The current Garbage Collector does a pretty good job of reclaiming the memory of Gen 0 and Gen 1. Those generations’ objects live in ephemeral segments, which are very small, so the GC reclaims their memory very fast. By contrast, most Gen 2 objects live in other, larger segments, which makes Gen 2 collection slower than the other collections.
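You can observe how much more often the ephemeral generations are collected with the GC.CollectionCount API; the following small sketch is my own addition, not from the original post:

```csharp
using System;

public class GenerationStats
{
    public static void Main()
    {
        // Allocate short-lived garbage to drive ephemeral collections.
        for (int i = 0; i < 100000; i++)
        {
            var temp = new byte[1024];
        }

        // Gen 0 is typically collected far more often than Gen 2.
        Console.WriteLine("Gen 0 collections: " + GC.CollectionCount(0));
        Console.WriteLine("Gen 1 collections: " + GC.CollectionCount(1));
        Console.WriteLine("Gen 2 collections: " + GC.CollectionCount(2));
    }
}
```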

The GC team actually made great improvements in collection algorithms on both the server and the workstation to make it faster and reduce latency.

Enhancements in Server Garbage Collection

The current server GC is very efficient in terms of maximizing overall throughput; that is because the GC’s Gen 2 collection actually pauses all currently running managed code on the server while it runs. It turns out this makes the GC as fast as all of us need, BUT the cost is long pauses in the server’s managed code execution, which of course increases latency!
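For reference, the server GC flavor described above is opted into through the application configuration file; this fragment is a standard .NET runtime setting and is my addition, not from the original post:

```xml
<configuration>
  <runtime>
    <!-- Use the server GC flavor described above -->
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```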

What the CLR team did in v4.0 is allow you to be notified before a Gen 2 collection (which includes the Large Object Heap collection) happens. You might ask how this could help reduce that latency on your server. There is good news and bad news: the good news is that yes, this will help you reduce the latency and the long pauses on your server; the bad news is that it will not help everyone. In reality, it will only help you if you use some load-balancing technique.

What the CLR now offers is a notification model you can use to know when the GC is about to start a Gen 2 collection on the current server, so you can switch user traffic through your load balancer to another application server and then let the Gen 2 collection run on the first one; your users will not feel the same latency and long pauses as before.

 

I’m going to walk you through some sample code to show you how to benefit from this new enhancement in your server applications.

using System;
using System.Threading;

public class Program
{
    public static void Main(string[] args)
    {
        try
        {
            // Register for the FullGCNotification service.
            // Set the maximum generation threshold and
            // the large object heap threshold.
            GC.RegisterForFullGCNotification(10, 10);

            // Wait for the notification to happen on a new thread.
            Thread fullGCThread = new Thread(new ThreadStart(WaitForFullGC));
            fullGCThread.Start();
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    public static void WaitForFullGC()
    {
        while (true)
        {
            // This is a blocking call; once it returns with a succeeded
            // status, a Gen 2 collection is about to happen.
            GCNotificationStatus status = GC.WaitForFullGCApproach();

            if (status == GCNotificationStatus.Succeeded)
            {
                // Now call your custom procedure to switch
                // the traffic to another server.
                OnFullGCApproachNotify();
            }

            // Now wait for the GC to complete the Gen 2 collection.
            status = GC.WaitForFullGCComplete();

            // Once it finishes, call your custom procedure to switch
            // the traffic back to the first server.
            if (status == GCNotificationStatus.Succeeded)
            {
                OnFullGCCompleteNotify();
            }
        }
    }

    private static void OnFullGCApproachNotify()
    {
        // 1. Direct new traffic away from this server.
        // 2. Wait for the old traffic to finish.
        // 3. Call GC.Collect. This is the interesting part, because
        //    Microsoft always tells you not to call GC.Collect yourself;
        //    but here you need to, because no more traffic is being
        //    redirected to this server, so you might wait forever before
        //    the GC starts on its own.
        GC.Collect();
    }

    private static void OnFullGCCompleteNotify()
    {
        // Redirect the traffic back to this server.
    }
}
 
Enhancements in Workstation Garbage Collection
 
Today’s CLR has a Concurrent Collection algorithm for the workstation GC. This algorithm can do most of the Gen 2 collection without pausing the running managed code too much, at least not as much as the server GC algorithm.
The problem occurs when the ephemeral segment fills up while the GC is busy doing a Gen 2 collection on other segments, and new objects must be allocated from the ephemeral segment; that is when the pauses happen on the workstation and the user feels the latency. The Concurrent Collection algorithm cannot run Gen 0 and Gen 1 collections while a Gen 2 collection is in progress.
 
New in CLR 4.0, a new collection algorithm is used for the workstation GC instead of the Concurrent Collection algorithm. This new algorithm is called the Background Collection algorithm. One key thing about it is that it can do Gen 0 and Gen 1 collections at the same time a Gen 2 collection is occurring; that way, you will not see long pauses in your client application as before, except in very unusual circumstances.
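Background GC on the workstation is tied to the same configuration switch that previously controlled concurrent GC; this fragment is a standard .NET runtime setting (my addition, not from the original post), and the setting is on by default:

```xml
<configuration>
  <runtime>
    <!-- Controls concurrent/background collection for the workstation GC -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```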
[Chart: pause times, Concurrent GC vs. Background GC]
In the chart above you can see a statistical comparison between the old Concurrent algorithm and the Background algorithm. As you can see, at the start of the application both algorithms show one pause, but it takes half the time with the Background algorithm. Once the application continues working, the Concurrent GC shows multiple long pauses, while the Background GC shows far fewer long pauses than before.
 
With the new GC notification model for server managed applications and the new Background Collection algorithm for workstation managed applications, the CLR team delivers a new performance experience with fewer long pauses and lower latency.
 
Related Stuff:
  1. PDC 2008: PC49, Microsoft .NET Framework – CLR Futures (Video|PPTX).
  2. Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework (Jeffrey Richter).
Hope this Helps,
Ahmed

CLR 4.0: Type Embedding

Note: This post transferred from my OLD blog and was originally posted in 2008.

 

A new enhancement in CLR version 4.0 (to be released in 2010) is the concept of Type Embedding. The actual motivation for this new concept was the miserable story of deploying applications that use Primary Interop Assemblies (PIAs).

In previous versions of the CLR you needed to deploy your managed assembly plus the PIA to the client machine. In some scenarios, like developing against the MS Office PIAs, you had to deploy the whole Office PIA Redist, which is about 6.3 MB, just to make your little application work!

And the problem gets worse if you target a machine that has a different version of the MS Office PIAs.

This was a very bad experience if you have been through it before.


So the scenario of developing against PIAs led to:

  • Complex deployment scenarios.
  • Targeting multiple hosting environments.
  • Tight type coupling.

MS started a new project, code-named NOPIA, as part of CLR 4. Its goal was to eliminate the runtime dependency on interop assemblies at compile time. You can enable it just by flipping a switch in the Visual Studio assembly reference properties window, or by compiling your code with the /link switch.
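From the command line, the /link switch looks roughly like this (the file names below are placeholders for this example):

```shell
# Embed the interop types used by Program.cs instead of referencing the PIA
csc /link:Microsoft.Office.Interop.Word.dll Program.cs
```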

What’s NOPIA

Using this new capability, you are actually telling the compiler to embed all the information necessary to call the COM object into the managed assembly itself, so the PIA no longer needs to be deployed with your application.

The embedded information is represented as “local types”, which are partial copies of the types that exist in the PIA.

As an example, we will develop this sample Hello Buddy console application. It simply takes a name and interacts with the Word PIA (Microsoft.Office.Interop.Word.dll) from the Office PIA Redist to create a new Word document and write a “Hello name” statement.


I’m using Visual Studio 2010, .Net 4.0, and CLR v4.0 CTP to develop this application, you can download this CTP here.

  • Create a new C# console application.
  • Add a reference to Microsoft.Office.Interop.Word.dll.
  • Add this class to your application.
using System;
using Word = Microsoft.Office.Interop.Word;

namespace HelloBuddy
{
    public class Program
    {
        static void Main(string[] args)
        {
            SayHi("Ahmed");
            Console.ReadLine();
        }

        public static void SayHi(string name)
        {
            Word.Application wordApp = new Word.Application();

            wordApp.Visible = true;
            wordApp.Activate();

            object falseValue = false;
            object trueValue = true;
            object missing = Type.Missing;

            Word.Document doc = wordApp.Documents.Add(ref missing, ref missing, ref missing, ref missing);

            object start1 = 0;
            object end1 = 0;

            Word.Range rng = doc.Range(ref start1, ref missing);
            rng.Font.Name = "Tahoma";
            rng.InsertAfter("Hello " + name);
        }
    }
}
  • Compile and run
  • You should be able to see an MS Word window open with a “Hello Ahmed” statement. Check the compiled assemblies in your bin directory; you’ll find:
    1. HelloBuddy.exe
    2. Microsoft.Office.Interop.Word.dll

    Attach “HelloBuddy.exe” to your VS debugger and check the loaded modules. You will find the Office PIA assembly is loaded, and this is the one we are trying to get rid of.

NOPIA Mode

Now let’s switch to NOPIA Mode, and embed those PIA used types into our console assembly, to do that:

  • Select the PIA assembly reference and choose properties
  • In Properties window, change property “Embed Interop Types” to True
  • Recompile your application.

If you check the bin directory, you will find only the application assembly, “HelloBuddy.exe”; there is no PIA assembly. The only difference is that the application assembly becomes a little bit bigger, because it now embeds the partial type information from the original PIA, but it is still much smaller than deploying the whole PIA file.

Attach “HelloBuddy.exe” to your VS debugger and check the loaded modules; you will find only your application assembly, no more PIA.

Behind the Scene

If you are interested in what the CLR actually does behind the scenes, open your application assembly with Reflector and check the types inside it. You will find that the compiler has injected the Microsoft.Office.Interop.Word namespace into your application’s assembly.

In this namespace you will find only the set of “local types” that you actually used from the PIA in your application: types like Application, Document, Range, and Font.
In fact, the compiler rips out only the types necessary to complete the calls your application makes. More than that, if you check the emitted code for those types, for instance the Application type, you will see that only the functions you called were extracted, and all the other functions and type members were replaced with the magic _VtblGap calls. Those _VtblGap pseudo-methods are emitted in place of unused methods to maintain vtable compatibility.

NOPIA Limitations

Of course, nothing comes without limitations. NOPIA has some limitations on what you can embed into your assemblies:

  1. IL cannot be embedded (no classes or static methods).
  2. Only metadata is locally embedded (interfaces, delegates, enums, structs).
  3. Only types from interop assemblies can be embedded. Compilers check for these attributes:
     a. [assembly: Guid(…)]
     b. [assembly: ImportedFromTypeLib(…)]
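As a minimal sketch of what the compiler looks for, an interop assembly carries assembly-level attributes along these lines (the GUID and type-library name here are made-up placeholders, not real values):

```csharp
using System;
using System.Runtime.InteropServices;

// Attributes the compiler checks before allowing /link embedding.
// The GUID below is a placeholder, not a real type library ID.
[assembly: Guid("00000000-1111-2222-3333-444444444444")]
[assembly: ImportedFromTypeLib("ExampleTypeLib")]
```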

As you can see, the future CLR v4.0 provides an easy way to develop applications against COM interop assemblies, along with a very friendly, powerful deployment story.

Related stuff

Hope this helps,

Ahmed