
Posts

Running one process on multiple processors

A thread in a process can migrate from processor to processor, with each migration reloading the processor cache. Under heavy system loads, specifying which processor should run a specific thread can improve performance by reducing the number of times the processor cache is reloaded. The association between a processor and a thread is called processor affinity. Each processor is represented by a bit: bit 0 is processor one, bit 1 is processor two, and so forth. Setting a bit to 1 makes the corresponding processor eligible for thread assignment. When you set the ProcessorAffinity value to zero, the operating system's scheduling algorithms set the thread's affinity. When the ProcessorAffinity value is set to any nonzero value, it is interpreted as a bitmask that specifies which processors are eligible for selection. The following table shows a selection of ProcessorAffinity values for an eight-processor system, with columns Bitmask, Binary value, and Eligible processors.
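As a minimal C# sketch (not from the original post; the mask value is just an example), the bitmask can be applied through the Process.ProcessorAffinity property in System.Diagnostics:

    using System;
    using System.Diagnostics;

    class AffinityDemo
    {
        static void Main()
        {
            // Pin the current process to processors one and two:
            // bit 0 + bit 1 => bitmask 0b00000011 = 3.
            Process proc = Process.GetCurrentProcess();
            proc.ProcessorAffinity = (IntPtr)0x0003;

            Console.WriteLine("Affinity mask: 0x{0:X}", proc.ProcessorAffinity.ToInt64());
        }
    }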

WebSite vs WebApplication

The only similarity between a web site and a web application is that they both access HTML documents using the HTTP protocol over an internet or intranet connection. However, there are some differences, which I shall attempt to identify in the following matrix:

1. Web Site: Will usually be available on the internet, but may be restricted to an organisation's intranet. Web Application: Will usually be restricted to the intranet owned by a particular organisation, but may be available on the internet for employees who travel beyond the reach of that intranet.

2. Web Site: Can never be implemented as a desktop application. Web Application: May have exactly the same functionality as a desktop application; it may in fact be a desktop application with a web interface.

3. Web Site: Can be accessed by anybody. Web Application: Can be accessed by authorised users only.

4. Web Site: Can contain nothing but a collection of static pages. Although it is possible to pull the page content from a database, such pages are rarely updated after they have been created.

Are cloud storage providers good for primary data storage?

Why not use a cloud storage provider? The most persuasive argument against using cloud storage for primary storage is application performance. Application performance is highly sensitive to storage response times: the longer it takes for the application's storage to respond to a read or write request, the slower that application performs. Public cloud storage by definition resides in a location geographically distant from the applications that use it, measured in cable distance. Response time for an application is measured in round-trip time (RTT), and numerous factors add to that RTT. One is speed-of-light latency, which there is no getting around today. Another is TCP/IP latency. Then there is a little thing called packet loss, which can really gum up response time because of retransmissions. It is easy to see that for the vast majority of SMB (small and mid-sized business) primary applications, public cloud storage performance will be unacceptable. When do cloud storage
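To make the distance argument concrete, here is a rough back-of-the-envelope sketch; the distance and fiber speed are assumed example figures, not from the post:

    using System;

    class RttEstimate
    {
        static void Main()
        {
            // Light travels roughly 200 km per millisecond in optical fiber (~2/3 of c).
            double distanceKm = 2000;                 // assumed cable distance to a cloud region
            double kmPerMs = 200;
            double rttMs = 2 * distanceKm / kmPerMs;  // best-case round trip

            // Even before TCP/IP overhead, packet loss and retransmissions,
            // that is ~20 ms per request, versus well under 1 ms for local storage.
            Console.WriteLine("Best-case RTT: {0} ms", rttMs);
        }
    }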

Encapsulation: Local change - Local effect principle

One of the central principles of object-oriented programming is encapsulation. Encapsulation states that the implementation details of an object are hidden behind the methods that provide access to that data. But why is encapsulation a good idea? Why bother to do it in the first place? Just stating that it's "good OO design" isn't sufficient justification. There is one primary justification for encapsulation: a principle I call "Local Change - Local Effect". If you change code in one spot, it should only require changes in a small neighborhood surrounding the original change. When used properly, encapsulation allows software to change gradually without requiring bulk changes throughout the system (a change in one place that forces code changes in many other places is known as the domino effect). Encapsulation helps follow this principle by allowing changes in the representation of an object's state: the object's own methods may be affected, but callers of those methods are not.
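A minimal sketch of the idea (the Money class and its members are hypothetical, purely for illustration): callers only use the public members, so the internal representation can change without the change rippling outward.

    using System;

    public class Money
    {
        // Internal representation: whole cents stored as a long.
        // Switching this to decimal later would only touch this class.
        private long _cents;

        public Money(decimal amount)
        {
            _cents = (long)(amount * 100);
        }

        public decimal Amount
        {
            get { return _cents / 100m; }
        }

        public Money Add(Money other)
        {
            return new Money(Amount + other.Amount);
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            Money a = new Money(10.50m);
            Money b = new Money(2.25m);
            Console.WriteLine(a.Add(b).Amount);   // 12.75 -- callers never see _cents
        }
    }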

Limitations of COM Interop

Following is a list of some shortcomings:

Static/shared members: COM objects are fundamentally different from .NET types. One of the differences is the lack of support for static/shared members.

Parameterized constructors: COM types don't allow parameters to be passed into a constructor. This limits the control you have over initialization and rules out overloaded constructors.

Inheritance: One of the biggest issues is the limitations COM objects place on the inheritance chain. Members that shadow members in a base class aren't recognizable, and therefore aren't callable or usable in any real sense.

Portability: Operating systems other than Windows don't have a registry. Reliance on the Windows registry limits the number of environments a .NET application can be ported to.
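A hedged illustration of the constructor and registry points (Windows-only; Scripting.FileSystemObject is simply a commonly registered ProgID used here as an example):

    using System;

    class ComInteropDemo
    {
        static void Main()
        {
            // COM objects are activated through their registry-registered ProgID/CLSID,
            // and Activator.CreateInstance offers no way to pass constructor parameters.
            Type fsoType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
            dynamic fso = Activator.CreateInstance(fsoType);

            // Calls go through the runtime-callable wrapper via late binding.
            Console.WriteLine(fso.FolderExists(@"C:\Windows"));
        }
    }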

Why Visual Studio hangs

Every once in a while, VS seems to take forever to display a screen, to the point that it seems to hang. Most of the time it hangs while accessing the Fonts and Colors page in the Tools/Options dialog. The issue is not that some weird code executes very slowly; it happens that this page is implemented using .NET components. The majority of VS is built with native code, and during most of its execution the CLR is never loaded. However, when the user accesses one of these features, the CLR must be loaded before we can begin executing the relevant IL. It is this process that is time-consuming and annoying to the user. There are two problems for the users here: first, there is no feedback while the CLR is loading; second, the problem can occur multiple times within a single session of VS. I am trying to figure out the reason for this second issue. Let me know if any of you knows.