    NCache Concurrency Controls

    March 8th, 2012

    Recently I have been looking into .NET-friendly in-memory data grid (IMDG) software. One of the products I have been experimenting with is Alachisoft’s NCache.

    When I analyze a distributed caching product, I like to focus first on concurrency control. Before examining performance, scalability, or advanced architectural features, it is valuable to confirm that the software can be used to construct applications that remain reliable when data is shared and subject to updates. To do this I usually construct a test case involving extreme data hot spots, and exercise the features offered by the product for managing data access in these heavy contention situations.

    If the product I’m working with offers features for dividing work into ACID-style transactions, I explore those features as part of my analysis of concurrent data access. NCache does not support ACID transactions, so my work with it has focused on its locking features.

    NCache provides both optimistic and pessimistic concurrency control. The pessimistic locking model turns out to be unusual, at least in my experience. Although the documentation on this feature is quite sparse, with help from the (very responsive) technical support team at Alachisoft I was able to understand how NCache’s pessimistic locking works and how to control it through the NCache API.

    Here are some key points about NCache’s pessimistic locking:

    1. The NCache data access API provides specialized methods for acquiring locks, releasing locks and accessing items in the cache under locking constraints.

    2. Each lock represents the state of an item of data in the cache.

    3. Locks are not granted to a thread or a process. Every lock is equally visible to and manipulable by all cache client threads, regardless of which thread originally requested the lock.

    4. Locking is completely cooperative. A lock guarantees exclusive access to an item only if every potentially concurrent access to that item goes through the locking form of the NCache data access API, and only until the routine that requested the lock relinquishes it. Accessing a locked item with a non-locking API method simply bypasses the lock.

    5. In at least one case, using a non-locking API method will not only bypass the lock but will remove it.

    6. NCache’s locks will never cause a client thread to block. Data access methods that access a locked item in the cache will return right away. Clients must include logic to interpret the results of the method invocation and retry it if appropriate.

    7. In some cases, determining why a method call has failed requires examining the method’s side effects.
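    To make the points above concrete, here is a minimal C# sketch of the cooperative locking pattern. It is based on my reading of the NCache 4.x .NET API (Cache.Lock, Cache.Unlock, and the Insert overload that takes a LockHandle); method names and overloads may differ in other versions, and the cache name and key are placeholders:

    ```csharp
    using System;
    using System.Threading;
    using Alachisoft.NCache.Web.Caching;

    Cache cache = NCache.InitializeCache("myCache");   // cache name is a placeholder
    LockHandle lockHandle = new LockHandle();
    bool locked = false;

    // Lock() never blocks (point 6): it returns false immediately if the item
    // is already locked, so the caller supplies its own retry/back-off logic.
    for (int attempt = 0; attempt < 10 && !locked; attempt++)
    {
        locked = cache.Lock("account:42", TimeSpan.FromSeconds(30), out lockHandle);
        if (!locked)
            Thread.Sleep(100);                         // back off before retrying
    }

    if (locked)
    {
        try
        {
            // Every concurrent writer must also use the locking API (point 4);
            // a plain cache.Insert(key, item) here would bypass -- and in some
            // cases remove -- the lock (point 5).
            cache.Insert("account:42", new CacheItem("new value"), lockHandle,
                         true);   // releaseLock: true frees the lock with the write
        }
        catch
        {
            cache.Unlock("account:42", lockHandle);    // don't leave the item locked
            throw;
        }
    }
    ```

    The TimeSpan passed to Lock() is a lock timeout, after which the cache releases the lock on its own; without it, a crashed client could leave an item locked forever.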

    This final point requires a fuller explanation. Most of the pessimistic locking API methods take a reference to an object called a LockHandle as one of their arguments. A LockHandle stores the information that identifies a lock on a data item, and the values it holds may be changed by the method you pass it to. Depending on which method you use and the state (locked or unlocked) of the item you are trying to access, interpreting the results of a locking call may require inspecting the post-call values in the LockHandle you passed in. Here is an example:

    The Get() method takes a key and returns the associated cached item if there is one. The locking form of the Get() method takes three additional arguments, one of which is a LockHandle reference. Depending on the values of those arguments, the method can be used to lock an object and retrieve it in a single call. In this case the method will return null if there is no object in the cache with the specified key value, or if the object exists but is already locked. To distinguish between these two cases the caller must examine the post-call values in the LockHandle.
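    In code, the pattern looks something like the following sketch (again based on my reading of the 4.x API, with a placeholder key). I am assuming here that a null return with a populated LockId means the item exists but is locked, while a null return with an empty handle means the key is absent; the exact post-call contents of the handle are worth verifying against your NCache version:

    ```csharp
    using System;
    using Alachisoft.NCache.Web.Caching;

    Cache cache = NCache.InitializeCache("myCache");   // placeholder cache name
    LockHandle lockHandle = new LockHandle();

    // acquireLock = true: lock the item and retrieve it in a single call.
    object item = cache.Get("account:42", TimeSpan.FromSeconds(30),
                            ref lockHandle, true);

    if (item != null)
    {
        // We got the item and now hold its lock; update and release together.
        cache.Insert("account:42", new CacheItem("new value"), lockHandle, true);
    }
    else if (!String.IsNullOrEmpty(lockHandle.LockId))
    {
        // Null return, but the handle was filled in: the item exists and is
        // locked by another caller. Retry later.
    }
    else
    {
        // Null return and an empty handle: no item with this key in the cache.
    }
    ```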

    NCache’s concurrency features work as intended, and with them I was able to maintain the data consistency required by my test scenario. Both the optimistic and the pessimistic features delivered correct results when data items were subject to vigorous concurrent access. However, when compared with the concurrency features provided by well known competitors, NCache’s concurrency controls are quirky and not particularly developer-friendly. Its vulnerability to pessimistic locks being accidentally bypassed or removed by clients that don’t conform to the cooperative lock management protocol makes applications that depend on exclusive data access prone to bugs. The non-blocking approach may be welcome in some situations, but the lack of blocking calls is likely to lead to increased client code complexity and reduced performance in many cases. The lack of support for ACID transactions poses a significant obstacle to building applications that must maintain strict data consistency. For some use cases NCache’s pessimistic locking or its optimistic concurrency features may suffice, but developers designing a system with shared access to writable data should study its concurrency control feature set carefully before choosing NCache.