Nuxeo provides a Redis integration via the nuxeo-core-redis module.
The idea is that, at least for now, Redis is not a hard requirement for running the Nuxeo Platform: Redis is simply used as a backend providing alternate implementations of some services inside the platform. These implementations are provided because we think they can be useful.
Nuxeo can use Redis to store both data to be persisted (e.g. the jobs list) and transient data (e.g. cache data). After a normal cluster shutdown, you can flush (erase) the transient data in Redis. Note however that Nuxeo cannot work with a Redis instance configured as an LRU cache; there must be no eviction under memory pressure.
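In practice this means the Redis instance dedicated to Nuxeo should have eviction disabled, for example in redis.conf:

```
# Never evict keys under memory pressure; Nuxeo relies on its data staying in Redis
maxmemory-policy noeviction
```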
Nuxeo Core Cache provides a service to declare and use caches. This cache system:
- is used in Nuxeo Directories (caching directories entries)
- is used in UserManager (caching principals)
- is used in Nuxeo Drive (caching synchronization roots)
- can be used in your custom code
The Cache service has a default "in memory" implementation, but
nuxeo-core-redis provides an alternate implementation that allows:
- To store cache data outside of the JVM memory. This opens the way to large caches without hurting the JVM.
- To share the same cache between several Nuxeo nodes. In cluster mode this can increase cache efficiency.
- To manage cluster wide invalidations. Updating the user on one node will impact the central cache: all nodes see the exact same data.
You can configure the backend storage on a per cache basis: Directory A could use Redis while directory B could use in memory.
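A cache is typically declared through an extension-point contribution. The sketch below is illustrative only: the extension target, point name, Redis implementation class, and options are assumptions to be checked against your Nuxeo version.

```xml
<!-- Illustrative sketch: contribution names may differ in your Nuxeo version -->
<extension target="org.nuxeo.ecm.core.cache.CacheService" point="caches">
  <cache name="my-custom-cache"
         class="org.nuxeo.ecm.core.redis.contribs.RedisCacheImpl">
    <ttl>20</ttl><!-- minutes -->
    <option name="maxSize">1000</option>
  </cache>
</extension>
```

Since the backend is chosen per cache, another cache declared alongside this one could keep the default in-memory class.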
The WorkManager handles asynchronous jobs:
- Schedule Jobs and store them in queues
- Assign execution slots to queues
- Execute the jobs
In the default implementation, job queues are in the JVM memory. But this model has some limitations:
- Stacking a lot of jobs will consume JVM Memory
- In cluster mode each Nuxeo node maintains its own queue
- When a Nuxeo server is restarted, all the queued jobs are lost
nuxeo-core-redis provides an alternate implementation of the queuing system based on Redis:
- Jobs are then stored outside of the JVM memory. That is why Work instances have to be serializable: they are serialized and stored inside Redis.
- Jobs can be shared across cluster nodes. This makes it possible to dedicate certain nodes to specific processing.
- Jobs survive a Nuxeo restart. When Redis persistence is activated, the jobs even survive a Redis restart.
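Conceptually, the Redis-backed queue works like the sketch below. Python is used purely as an illustration: a deque stands in for the Redis list and `pickle` stands in for Java serialization, and the `Work`/`WorkQueue` names are hypothetical, not the Nuxeo API.

```python
import pickle
from collections import deque


class Work:
    """A job; must be serializable so it can live outside the worker process."""
    def __init__(self, doc_id):
        self.doc_id = doc_id

    def run(self):
        return f"processed {self.doc_id}"


class WorkQueue:
    """Stand-in for a Redis list: jobs are serialized on enqueue and
    deserialized on dequeue, so the queue contents no longer depend
    on any single node's memory."""
    def __init__(self):
        self._items = deque()

    def schedule(self, work):
        # Serialize the whole job, as nuxeo-core-redis does with Java serialization.
        self._items.append(pickle.dumps(work))

    def next_work(self):
        return pickle.loads(self._items.popleft())


queue = WorkQueue()
queue.schedule(Work("doc-1"))
print(queue.next_work().run())  # -> processed doc-1
```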
By default, locks on documents are stored inside the repository backend. When the locking system is heavily used, this can create a lot of very small transactions against the database backend.
nuxeo-core-redis provides a Redis-based implementation of the locking system that is as resilient as the database implementation, but easier to scale.
Managing VCS row cache invalidations with Redis instead of using the database can improve performance on concurrent writes and provides synchronous invalidations.
Since Nuxeo 8.10 the DBS layer (used by the MongoDB or MarkLogic backends) has a cache; invalidating it in cluster mode requires Redis.
RedisTransientStore is a Redis-based implementation of the Transient Store.
It is the one used by the default Transient Store if Redis is enabled.
It allows the parameters associated with the stored blobs to be shared across cluster nodes.
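The idea can be sketched as a key/value store holding blob parameters with a time-to-live. This Python sketch is illustrative only; the class and method names are hypothetical, not the Nuxeo Transient Store API.

```python
import time


class TransientStore:
    """Illustrative TTL-based store for blob parameters, mimicking what
    RedisTransientStore shares across cluster nodes (hypothetical API)."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (expiry timestamp, params dict)

    def put_parameter(self, key, name, value):
        _, params = self._entries.get(key, (None, {}))
        self._entries[key] = (time.monotonic() + self.ttl, {**params, name: value})

    def get_parameter(self, key, name):
        entry = self._entries.get(key)
        if entry is None or entry[0] < time.monotonic():
            self._entries.pop(key, None)  # drop expired entries lazily
            return None
        return entry[1].get(name)


store = TransientStore(ttl_seconds=60)
store.put_parameter("batch-1", "filename", "report.pdf")
print(store.get_parameter("batch-1", "filename"))  # -> report.pdf
```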
The following script can be used with the Redis client to delete stale jobs still marked as running:
```lua
local keys = redis.call('KEYS', 'nuxeo:work:run:*')
for _, k in ipairs(keys) do
  redis.call('DEL', k)
end
```
Copy this code to a file named delete_running_works.lua, change the Redis prefix if needed (the code uses the default nuxeo prefix), then run the clean-up with:

```shell
redis-cli --eval /path/to/delete_running_works.lua
```
The script finds all keys prefixed with nuxeo:work:run: and deletes them. Note that KEYS scans the whole keyspace and blocks Redis while it runs, so on a large instance the incremental SCAN command is preferable.