A Switching NamedCache Implementation
I decided to write this post after seeing the technique mentioned on the Oracle Coherence forum a few times as a solution to various requirements, but without any real detail and without covering some of the questions the technique raises. The technique in question is essentially this: you have two caches on the server side, and the client needs to use one or the other to get its data, depending on a flag held in another cache. One example of the requirement would be this forum post.
Personally I do not think this solution is always suitable, but if you do want to go down this route, it can be done in such a way that the client code is kept as simple as possible. In fact, using the techniques I am going to write about here, the client does not even know it is using anything other than a normal NamedCache.
So, to be clear, let's state our requirement. We have two caches on the server side, which we will call CacheA and CacheB, and we have a replicated cache containing a flag that tells us whether to use CacheA or CacheB for data; we will call this the FlagCache. The flag cache will hold an entry with a well-known key that we use to get the flag; we'll call this the FlagKey. As we are only flipping between two caches we can make the flag value a Boolean, although you could expand this to more than two underlying caches if you had a requirement that needed it.
To use this technique, every time we need to use the cache we need to know which cache to go to, so the steps are…
Boolean flag = (Boolean) flagCache.get(flagKey);
NamedCache cacheToUse;
if (flag) {
    cacheToUse = cacheA;
} else {
    cacheToUse = cacheB;
}
cacheToUse. ... do something ...
OK, not too much code, but do we really want to repeat it all over our client code? No, we do not. We could encapsulate the code above in some sort of utility method that returns the cache to use, but we would still need to call that method everywhere before doing anything. A better technique is for the client not to have to know that it is using more than one underlying cache, and this is how we do it.
Wrap The Underlying Caches
First we write a wrapper class that implements NamedCache. This wrapper encapsulates both CacheA and CacheB along with the code to check which cache to call. When any of its methods is called, it forwards the call to the correct cache using the code we used above. It sounds simple, and it is, but there are a few gotchas we will cover later.
Here is the wrapper class
import com.tangosol.net.CacheService;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.MapListener;
import com.tangosol.util.ValueExtractor;

import java.util.Collection;
import java.util.Comparator;
import java.util.Map;
import java.util.Set;

/**
 * @author Jonathan Knight
 */
public class SwitchingNamedCache implements NamedCache {

    private NamedCache cacheA;
    private NamedCache cacheB;
    private NamedCache flagCache;
    private Object flagKey;

    public SwitchingNamedCache(NamedCache cacheA, NamedCache cacheB,
                               NamedCache flagCache, Object flagKey) {
        this.cacheA = cacheA;
        this.cacheB = cacheB;
        this.flagCache = flagCache;
        this.flagKey = flagKey;
    }

    public NamedCache getCurrentCache() {
        Boolean flag = (Boolean) flagCache.get(flagKey);
        NamedCache currentCache;
        if (flag == null || flag) {
            currentCache = cacheA;
        } else {
            currentCache = cacheB;
        }
        return currentCache;
    }

    public Set entrySet() {
        return getCurrentCache().entrySet();
    }

    public Set keySet() {
        return getCurrentCache().keySet();
    }

    public Collection values() {
        return getCurrentCache().values();
    }

    public boolean containsKey(Object oKey) {
        return getCurrentCache().containsKey(oKey);
    }

    public boolean isEmpty() {
        return getCurrentCache().isEmpty();
    }

    public int size() {
        return getCurrentCache().size();
    }

    public boolean containsValue(Object oValue) {
        return getCurrentCache().containsValue(oValue);
    }

    public Object get(Object oKey) {
        return getCurrentCache().get(oKey);
    }

    public void addMapListener(MapListener listener) {
        getCurrentCache().addMapListener(listener);
    }

    public void removeMapListener(MapListener listener) {
        getCurrentCache().removeMapListener(listener);
    }

    public void addMapListener(MapListener listener, Object oKey, boolean fLite) {
        getCurrentCache().addMapListener(listener, oKey, fLite);
    }

    public void removeMapListener(MapListener listener, Object oKey) {
        getCurrentCache().removeMapListener(listener, oKey);
    }

    public void addMapListener(MapListener listener, Filter filter, boolean fLite) {
        getCurrentCache().addMapListener(listener, filter, fLite);
    }

    public void removeMapListener(MapListener listener, Filter filter) {
        getCurrentCache().removeMapListener(listener, filter);
    }

    public String toString() {
        return "SwitchingNamedCache currentCache=" + getCurrentCache().toString();
    }

    public boolean lock(Object oKey, long cWait) {
        return getCurrentCache().lock(oKey, cWait);
    }

    public boolean lock(Object oKey) {
        return getCurrentCache().lock(oKey);
    }

    public boolean unlock(Object oKey) {
        return getCurrentCache().unlock(oKey);
    }

    public void clear() {
        getCurrentCache().clear();
    }

    public Object put(Object oKey, Object oValue) {
        return getCurrentCache().put(oKey, oValue);
    }

    public void putAll(Map map) {
        getCurrentCache().putAll(map);
    }

    public Object remove(Object oKey) {
        return getCurrentCache().remove(oKey);
    }

    public String getCacheName() {
        return getCurrentCache().getCacheName();
    }

    public CacheService getCacheService() {
        return getCurrentCache().getCacheService();
    }

    public boolean isActive() {
        return getCurrentCache().isActive();
    }

    public void release() {
        getCurrentCache().release();
    }

    public void destroy() {
        getCurrentCache().destroy();
    }

    public Map getAll(Collection colKeys) {
        return getCurrentCache().getAll(colKeys);
    }

    public Object put(Object oKey, Object oValue, long cMillis) {
        return getCurrentCache().put(oKey, oValue, cMillis);
    }

    public Set keySet(Filter filter) {
        return getCurrentCache().keySet(filter);
    }

    public Set entrySet(Filter filter) {
        return getCurrentCache().entrySet(filter);
    }

    public Set entrySet(Filter filter, Comparator comparator) {
        return getCurrentCache().entrySet(filter, comparator);
    }

    public void addIndex(ValueExtractor extractor, boolean fOrdered, Comparator comparator) {
        // Indexes are added to both caches regardless of which is current
        cacheA.addIndex(extractor, fOrdered, comparator);
        cacheB.addIndex(extractor, fOrdered, comparator);
    }

    public void removeIndex(ValueExtractor extractor) {
        cacheA.removeIndex(extractor);
        cacheB.removeIndex(extractor);
    }

    public Object invoke(Object oKey, EntryProcessor agent) {
        return getCurrentCache().invoke(oKey, agent);
    }

    public Map invokeAll(Collection collKeys, EntryProcessor agent) {
        return getCurrentCache().invokeAll(collKeys, agent);
    }

    public Map invokeAll(Filter filter, EntryProcessor agent) {
        return getCurrentCache().invokeAll(filter, agent);
    }

    public Object aggregate(Collection collKeys, EntryAggregator agent) {
        return getCurrentCache().aggregate(collKeys, agent);
    }

    public Object aggregate(Filter filter, EntryAggregator agent) {
        return getCurrentCache().aggregate(filter, agent);
    }
}
Using the Wrapper Class
So how does our client code use the SwitchingNamedCache? Well, the client could construct an instance every time like this.
NamedCache cacheA = CacheFactory.getCache("cache-a");
NamedCache cacheB = CacheFactory.getCache("cache-b");
NamedCache flagCache = CacheFactory.getCache("flag-cache");

NamedCache cache = new SwitchingNamedCache(cacheA, cacheB, flagCache, "cache-a-b-flag");
... use cache as normal ...
Again, not too hard, but it would be far better if the client did not need to do this at all; as I said, we do not want the client to know that the data is coming from multiple caches.
To do this we can use some cache configuration to do the work for us. The magic that allows us to do this, and something that is probably not used much by many people, is the set of macros that can be used in the cache configuration file, described in the Coherence Docs here. In particular we are going to use the {cache-ref} macro.
Here is an example cache configuration file that uses our SwitchingNamedCache.
In our example, say we have a cache of exchange rates, and at a certain time we want to switch from one set of rates to another. We do this with two caches, A-exchange-rates and B-exchange-rates, but the client only ever needs to ask for the exchange-rates cache.
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">

    <defaults>
        <serializer>pof</serializer>
    </defaults>

    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>exchange-rates</cache-name>
            <scheme-name>switching-scheme</scheme-name>
            <init-params>
                <init-param>
                    <param-name>cache-A</param-name>
                    <param-value>A-exchange-rates</param-value>
                </init-param>
                <init-param>
                    <param-name>cache-B</param-name>
                    <param-value>B-exchange-rates</param-value>
                </init-param>
            </init-params>
        </cache-mapping>
        <cache-mapping>
            <cache-name>A-*</cache-name>
            <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
        <cache-mapping>
            <cache-name>B-*</cache-name>
            <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
        <cache-mapping>
            <cache-name>flag-cache</cache-name>
            <scheme-name>replicated-scheme</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>

    <caching-schemes>
        <distributed-scheme>
            <scheme-name>distributed-scheme</scheme-name>
            <service-name>DistributedService</service-name>
            <thread-count>10</thread-count>
            <backing-map-scheme>
                <local-scheme>
                    <unit-calculator>BINARY</unit-calculator>
                </local-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>

        <replicated-scheme>
            <scheme-name>replicated-scheme</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
            <autostart>true</autostart>
        </replicated-scheme>

        <class-scheme>
            <scheme-name>switching-scheme</scheme-name>
            <class-name>com.thegridman.coherence.cache.SwitchingNamedCache</class-name>
            <init-params>
                <init-param>
                    <param-type>{cache-ref}</param-type>
                    <param-value>{cache-A}</param-value>
                </init-param>
                <init-param>
                    <param-type>{cache-ref}</param-type>
                    <param-value>{cache-B}</param-value>
                </init-param>
                <init-param>
                    <param-type>{cache-ref}</param-type>
                    <param-value>flag-cache</param-value>
                </init-param>
                <init-param>
                    <param-type>String</param-type>
                    <param-value>flag-key</param-value>
                </init-param>
            </init-params>
        </class-scheme>
    </caching-schemes>
</cache-config>
There are probably a few slightly different ways we could have done the configuration above, but let's go through what we have done here.
First we have a simple distributed-scheme which both of our “A” and “B” exchange rate caches map to. We also have a replicated scheme that holds the flag that indicates whether we should use the “A” or “B” cache.
You can see we have defined a mapping for the exchange-rates cache to a class-scheme, which is our SwitchingNamedCache class. The class takes four parameters: the "A" and "B" caches, the cache to check for the flag, and the key of the entry that holds the flag. This is where we use the {cache-ref} macro, which converts the string value into the actual NamedCache reference that will be passed to the SwitchingNamedCache constructor.
Now all the client or application code ever needs to do is access the exchange-rates cache as it would any other cache. Any application code that uses exchange-rates has no idea that it is reading from the “A” or “B” cache, which is exactly what we want.
So if we do this in our code somewhere…
// Populate the A cache with some rates
NamedCache cacheA = CacheFactory.getCache("A-exchange-rates");
cacheA.put("GBP-USD", 1.53d);

// Set the flag to make the A cache the one to be used
NamedCache flagCache = CacheFactory.getCache("flag-cache");
flagCache.put("flag-key", true);
…we have now initialised the “A” cache with a rate and set the flag to true so application code will use the “A” cache.
In our client we do this to get a rate just like normal
NamedCache exchangeRates = CacheFactory.getCache("exchange-rates");
double rate = (Double) exchangeRates.get("GBP-USD");
The client will get the rate of 1.53. Simple code, no messing about with flags.
Now we want to switch to the “B” cache so somewhere we do this…
// Make sure the B cache has some rates
NamedCache cacheB = CacheFactory.getCache("B-exchange-rates");
cacheB.put("GBP-USD", 1.50d);

// Flip the flag so that the B cache is used
NamedCache flagCache = CacheFactory.getCache("flag-cache");
flagCache.put("flag-key", false);
In our client we still do this to get a rate, just like normal:
NamedCache exchangeRates = CacheFactory.getCache("exchange-rates");
double rate = (Double) exchangeRates.get("GBP-USD");
This time the client will get the rate of 1.50.
So, there we go, the client end is very simple with no messing about.
Caveats
Now, there are some things that will not work very well with this setup, or any sort of cache switching setup, which we will cover now.
Indexes
Adding and removing indexes should really be done on both the "A" and "B" caches, regardless of which is in use. An application may add an index when it starts, in which case when the switch is made the index will not be there, and queries will suddenly become slower and less efficient. The code shown for the SwitchingNamedCache therefore passes the addIndex and removeIndex calls to both the "A" and "B" caches.
Map Listeners
Adding MapListeners is not really supported, as an application could add a listener to the cache, and when the switch occurs the listener will be on the wrong cache. We could do something clever in the SwitchingNamedCache to keep track of listeners and move them to the correct cache when the switch occurs, but the move cannot easily be done in an atomic way that guarantees you do not receive events you do not want, or do not miss events you do want.
Continuous Query Cache
As listeners are not really supported, neither are CQCs, as these rely on MapListeners to do their job. In the example above you would not be able to create a CQC that wrapped the exchange-rates cache; or rather, you could create one, but it would not work very well after the switch.
Near Caching
Similar to MapListeners and CQCs, near caches also rely on events to work properly. In this case, though, you can still use near caches, but you need to apply them to the "A" and "B" caches rather than to the SwitchingNamedCache. To do this in the example above we would add a near-scheme to the caching-schemes section of the configuration, like this:
<near-scheme>
    <scheme-name>near-scheme</scheme-name>
    <front-scheme>
        <local-scheme/>
    </front-scheme>
    <back-scheme>
        <distributed-scheme>
            <scheme-ref>distributed-scheme</scheme-ref>
        </distributed-scheme>
    </back-scheme>
</near-scheme>
We would then change the mapping of the A-* and B-* caches to map to the near-scheme instead of the distributed-scheme.
<cache-mapping>
    <cache-name>A-*</cache-name>
    <scheme-name>near-scheme</scheme-name>
</cache-mapping>
<cache-mapping>
    <cache-name>B-*</cache-name>
    <scheme-name>near-scheme</scheme-name>
</cache-mapping>
The configuration above would be used on storage-disabled clients, as you would not normally have near caches on your storage-enabled nodes.
Timing
The one other caveat is to do with timing of calls to the wrapped cache by the application code.
Consider the following example…
NamedCache cache = CacheFactory.getCache("exchange-rates");
Filter filter = ... create some sort of Filter ...
Set<Map.Entry> entries = cache.entrySet(filter);
for (Map.Entry entry : entries) {
    ... do some processing ...
}
Now, the code above is pretty simple, but what happens if some other code flips the cache flag while this code is executing, say between the entrySet call and the start of the loop, or while the for loop is being iterated? In a number of cases this might not be a problem, but it depends on what the code inside the loop is doing. If the code relies on having the correct data for a point in time, that is no longer guaranteed. This problem is no different from a similar application that uses, say, a database rather than Coherence, where the table it is querying gets updated during processing, but it is at least something to be aware of.
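One way to reduce this risk, assuming the getCurrentCache() method on the SwitchingNamedCache shown earlier is public, is to resolve the underlying cache once at the start of a unit of work and use that reference throughout, so a mid-flight flag flip cannot change which cache the loop reads from. A sketch (the processRates method and class name here are my own, not part of the original code):

```java
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;

import java.util.Map;
import java.util.Set;

public class ConsistentReadExample {

    /**
     * Process all entries matching the filter against a single underlying cache.
     * Resolving getCurrentCache() once up front pins this unit of work to one
     * physical cache, even if the flag flips part-way through the iteration.
     */
    public static void processRates(SwitchingNamedCache cache, Filter filter) {
        NamedCache pinned = cache.getCurrentCache(); // check the flag once
        Set<Map.Entry> entries = pinned.entrySet(filter);
        for (Map.Entry entry : entries) {
            // ... process each entry, all read from the same cache ...
        }
    }
}
```

Note that this only pins which cache is read; the data inside that cache can of course still change underneath you, just as a database table can.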
Efficiency
Rather than the getCurrentCache method having to call the replicated cache every time, it might be more efficient to add a listener to the flag cache that is notified when the switch occurs. This could then set a field to the cache to use, rather than having to do the check on every call.
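A minimal sketch of that idea might look like the class below, which listens for changes to the flag entry and caches the choice in a volatile field (the class and field names are my own, not from the original post):

```java
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;

/**
 * Caches the A/B choice in a field and keeps it up to date by listening to
 * the flag entry, so the replicated cache is not queried on every call.
 * A sketch only - error handling and listener removal are omitted.
 */
public class CachedFlagSwitcher implements MapListener {

    private final NamedCache cacheA;
    private final NamedCache cacheB;
    private volatile NamedCache currentCache;

    public CachedFlagSwitcher(NamedCache cacheA, NamedCache cacheB,
                              NamedCache flagCache, Object flagKey) {
        this.cacheA = cacheA;
        this.cacheB = cacheB;
        // Listen for changes to just the flag entry; non-lite so events carry the new value
        flagCache.addMapListener(this, flagKey, false);
        updateCurrentCache((Boolean) flagCache.get(flagKey));
    }

    private void updateCurrentCache(Boolean flag) {
        currentCache = (flag == null || flag) ? cacheA : cacheB;
    }

    /** No cache call needed here - just return the cached field. */
    public NamedCache getCurrentCache() {
        return currentCache;
    }

    public void entryInserted(MapEvent evt) {
        updateCurrentCache((Boolean) evt.getNewValue());
    }

    public void entryUpdated(MapEvent evt) {
        updateCurrentCache((Boolean) evt.getNewValue());
    }

    public void entryDeleted(MapEvent evt) {
        updateCurrentCache(null); // missing flag defaults back to the A cache
    }
}
```

The SwitchingNamedCache could delegate its own getCurrentCache method to an instance of this class. There is a small window where a client may still see the old cache just after the flag flips, before the event arrives, but that window exists with the polling approach too.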
Conclusions
So there we have it; not too complicated, if you really need to do this sort of thing. Personally, in all my years of using Coherence I have not needed it, but as I said, I have seen it mentioned on the Coherence forum as a possible solution to requirements. The {cache-ref} macro is very useful for this sort of thing, and we have made the client side very simple. The main advantage is that anyone writing application code does not need to put in a lot of boilerplate code to work out which cache to use, which means fewer bugs where developers forget to do it. Application code can obtain a reference to the cache and pass it around without having to worry that the reference might become stale if the cache-to-use flag gets flipped at some point.
Is it legal to create our own implementation of NamedCache? Does it not constitute a derived work? Hmm… Just curious.
Of course it is legal; it is just an implementation of an interface. If this were illegal, so would be writing your own implementations of Filter, EntryProcessor, Aggregator, etc.
Hi Jk,
We had a nearly similar scenario and we took the approach below.

1) We maintain a cache registry, which consists of a cache:

Cache name: cache-registry

which holds key/value pairs whose values are CacheInfo objects with member variables:

CacheInfo {
    cacheName;
    namedCacheName;
    clusterName;
}

We have a wrapper for connecting to Coherence, where we connect to Coherence and have overridden the CacheFactory.getCache(..) method to look up the cache-registry cache and find out which namedCacheName should be used for the cache. This makes the client independent of the original named cache name in the cluster. In case a cache is renamed, or moved to a different cache or cluster, the cache registry will always have the right named cache name.