Memcached is a high-performance, distributed memory object caching system. The memcached implementation for Magnolia CMS brings you the advantages of a distributed cache:
- sharing of cache items between multiple Magnolia instances
- cached items survive a restart of a Magnolia instance
- memcached servers may run on any server in the network and thus don't consume your application server's memory
```xml
<dependency>
  <groupId>info.magnolia.cache</groupId>
  <artifactId>magnolia-cache-memcached</artifactId>
  <version>5.4.7</version>
</dependency>
```
If you have never used memcached, see how to install a memcached server first. You need at least one memcached server per cache. That means that for every cache configuration under /modules/cache/config/contentCaching you need one entry under /modules/cache/config/cacheFactory/caches. By default this means at least defaultPageCache and uuid-key-mapping.
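The pairing described above can be illustrated like this (node names assume the default cache configurations):

```
/modules/cache/config/contentCaching/defaultPageCache          <- cache policy (what gets cached)
/modules/cache/config/cacheFactory/caches/defaultPageCache     <- memcached cache backing it
```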
The Magnolia memcached implementation uses the Spymemcached client, which has its own configuration options that can be set in the cache factory configuration:

```
cacheFactory
  class: info.magnolia.cache.memcached.spy.MemcachedFactory
  caches
    defaultPageCache
      class: info.magnolia.cache.memcached.spy.MemcachedConnectionFactoryBuilder
      readBufSize: 10000
      useNagleAlgorithm: true
      protocol: BINARY
      timeoutExceptionThreshold: 998
      opQueueMaxBlockTime: -1
      shouldOptimize: false
      opTimeout: -1
      daemon: false
      maxReconnectDelay: 30
      servers
        0: localhost:11211
    uuid-key-mapping
      extends: ../defaultPageCache
      servers
        0: localhost:11212
```
|Parameter|Default value|Description / Available values|
|---|---|---|
|class|info.magnolia.cache.memcached.spy.MemcachedConnectionFactoryBuilder|Each cache needs to be set as a content node with a class property.|
|readBufSize|10000|Size of the read buffer.|
|useNagleAlgorithm|true|Whether to use the Nagle algorithm, which improves the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network.|
|protocol|BINARY|The protocol to use: TEXT or BINARY.|
|timeoutExceptionThreshold|998|Sets the maximum timeout exception threshold.|
|opQueueMaxBlockTime|-1|Sets the maximum amount of time (in milliseconds) a client is willing to wait for space to become available in an output queue.|
|shouldOptimize|false|Whether operation optimization is enabled. There are several elements of the design that each allow high throughput.|
|opTimeout|-1|Sets the default operation timeout in milliseconds.|
|daemon|false|Whether the client's I/O thread runs as a daemon thread.|
|maxReconnectDelay|30|Sets the maximum reconnect delay.|
|servers|localhost:11211|Memcached server(s) to use for this cache, in host:port format.|
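The interplay of opTimeout and timeoutExceptionThreshold can be illustrated with a small sketch. This is a simplified model, not Spymemcached's actual code: the assumption is that the client counts consecutive operation timeouts per node and treats the node as unhealthy once the count exceeds the threshold, while any successful operation resets the count.

```python
class TimeoutTracker:
    """Toy model of a per-node consecutive-timeout counter (illustrative only)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.consecutive = 0

    def record(self, timed_out):
        # Count consecutive timeouts; any success resets the counter.
        if timed_out:
            self.consecutive += 1
        else:
            self.consecutive = 0
        # The node is considered unhealthy once the threshold is exceeded.
        return self.consecutive > self.threshold


t = TimeoutTracker(threshold=2)
assert t.record(True) is False   # first timeout: still healthy
assert t.record(True) is False   # second timeout: still healthy
assert t.record(True) is True    # third consecutive timeout exceeds the threshold
assert t.record(False) is False  # a success resets the counter
```

With the default threshold of 998, a node would have to time out many times in a row before being treated as unhealthy.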
One of the advantages of the memcached implementation is the sharing of cache entries between multiple Magnolia instances. If a cache item is produced by one of the public instances, it is sent to the memcached server(s), and the other Magnolia instances don't need to render the content again; they use the item cached on the memcached servers.
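The sharing behaviour can be sketched with a toy model in which a plain dict stands in for the memcached server and two hypothetical "instances" consult it before rendering (names and the render counter are illustrative, not Magnolia APIs):

```python
# A dict stands in for the shared memcached server.
shared_cache = {}
renders = {"count": 0}


def render(path):
    # Stand-in for Magnolia rendering a page; we count how often it runs.
    renders["count"] += 1
    return f"<html>{path}</html>"


def serve(instance, path):
    # Every instance consults the same shared cache before rendering.
    if path not in shared_cache:
        shared_cache[path] = render(path)
    return shared_cache[path]


serve("public1", "/news")  # first request renders and caches the item
serve("public2", "/news")  # the second instance reuses the cached item
assert renders["count"] == 1  # the page was rendered only once
```

With a per-instance cache such as Ehcache, each instance would have rendered the page once; with the shared store it is rendered only once in total.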
For the tests we requested content from two Magnolia instances at the same time; the results are shown in the first graph. As you can see, the throughput is twice as high because the instances share the cache items. Of course, this applies only to the first requests for a piece of content, but that is the time right after a cache flush, when the load on the server is highest.
The second graph shows performance when the items are already cached. Memcached is a little slower than Ehcache in this case.
Memcached Client License
Please note that the Spymemcached client uses its own license:
* Copyright (C) 2006-2009 Dustin Sallings
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALING
* IN THE SOFTWARE.