Nuxeo supports the following PostgreSQL versions: 9.3 and 9.4.
We always recommend that you use the latest stable version, which is PostgreSQL 9.4 at the time of this writing.
The database needs to be configured to work properly with Nuxeo: some settings must be changed for Nuxeo to work at all, while others should be tuned for good performance.
This FAQ will give you some hints to configure your database, but please refer to your DBA or the PostgreSQL documentation for more information (http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server).
Most settings have to be changed in the postgresql.conf file. Some SQL commands may have to be executed directly at the PostgreSQL console (psql).
Mandatory Changes
Two-Phase Commit
You shouldn't need to change this, because two-phase commit isn't used in basic configurations.
However, if you use XA datasources (see the JDBC Datasource Configuration documentation for more on this), you will need to update the default max_prepared_transactions setting. You can use the same value as max_connections.
max_prepared_transactions = 100
Implicit Casts
Jena (used for relations and comments) and jBPM (used for workflows) assume some implicit value casting in the SQL they generate.
To make Nuxeo work with PostgreSQL >= 8.3, you must therefore execute the following commands in your PostgreSQL console while connected to the template1 database, so that any database created afterward will come with the required CASTs (if your database is already created, execute the commands in that database as well):
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
COMMENT ON FUNCTION pg_catalog.text(integer) IS 'convert integer to text';
CREATE FUNCTION pg_catalog.text(bigint) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int8out($1));';
CREATE CAST (bigint AS text) WITH FUNCTION pg_catalog.text(bigint) AS IMPLICIT;
COMMENT ON FUNCTION pg_catalog.text(bigint) IS 'convert bigint to text';
This change is mandatory for PostgreSQL >= 8.3, since implicit casts have been simplified in that release. If you don't make this change, you will get operator does not exist exceptions.
Possible errors if you don't update the casts as described above are:
org.postgresql.util.PSQLException: ERROR: operator does not exist: integer = character varying
org.postgresql.util.PSQLException: ERROR: operator does not exist: bigint = character varying
com.hp.hpl.jena.db.RDFRDBException: Exception while checking db format - com.hp.hpl.jena.db.RDFRDBException: Internal SQL error in driver - org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
com.hp.hpl.jena.db.RDFRDBException: Internal SQL error in driver - org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
For further details, please see this URL. You might also be interested in this migration helper.
Create the Role and Database for Nuxeo
For instance (please change the password and the nuxeo.conf file of your instance accordingly):
$ createuser -U postgres -W -P nuxeo
$ createdb -U postgres -W -O nuxeo -E UTF8 nuxeo
Or from the psql command prompt:
CREATE ROLE nuxeo WITH PASSWORD 'nuxeo' LOGIN;
CREATE DATABASE nuxeo ENCODING 'UTF8' OWNER nuxeo;
Note that using the UTF8 encoding for your database is important.
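If you need to check the encoding of an existing database, one simple way is to list the databases from the command line (psql -l shows an Encoding column):
$ psql -U postgres -l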
Edit the PostgreSQL configuration file to change how the nuxeo user authenticates to the database
Edit the pg_hba.conf file and make sure that the nuxeo user (or all users, if the nuxeo user is not individually listed) has an authentication METHOD of md5, rather than ident. Otherwise you will get FATAL: Ident authentication failed for user "nuxeo" errors in the logs.
# TYPE DATABASE USER CIDR-ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
Performance Tuning
Shared Buffers and System Cache
One of the most important things for PostgreSQL is to have lots of shared buffers along with free memory that can be used by the system cache.
Refer to the section "Adapt Your Configuration to Your Hardware" to get the correct value.
If you plan to use 1 GB of shared buffers, update the following property in your postgresql.conf file:
shared_buffers = 1GB
If you use PostgreSQL < 9.3, the shared memory must be made available on the system side using sysctl; you need to allow a little bit more than the shared buffers at the OS level, for instance 1 GB + 128 MB:
sysctl kernel.shmmax=1207959552
Then restart PostgreSQL.
If there is not enough shared memory, you will get an explicit error message and you should try a bigger kernel.shmmax value.
Once PostgreSQL starts properly with the chosen kernel.shmmax value, it should be registered in the /etc/sysctl.conf file by adding the following line:
kernel.shmmax = <SHMMAX_VALUE>
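The /etc/sysctl.conf entry is applied at the next boot; to reload it immediately without rebooting, you can for instance run:
$ sudo sysctl -p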
Using effective_cache_size
This setting informs PostgreSQL of how much memory the system will use for disk caching. It is used as a hint by the query planner; note that this memory is not allocated by PostgreSQL itself.
To choose the effective_cache_size value, run your application once and check how much memory is used by the system cache. This can be done with the free command, taking the free value of the -/+ buffers/cache line. Then set this value in the configuration:
effective_cache_size = 1536MB
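For example, reading the value from free -m (the output below is purely illustrative; use your own measurements):
$ free -m
             total       used       free     shared    buffers     cached
Mem:          4096       3900        196          0        250       1800
-/+ buffers/cache:       1850       2246
Here about 2246 MB can be used by the system cache, so an effective_cache_size around 2GB would be a reasonable starting point on such a machine.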
Memory for Workers
Increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts, which are much faster than disk sorts. Keep in mind that each connection can use up to work_mem (a pool of 20 connections can take up to 20 * work_mem).
work_mem = 12MB
Increasing maintenance_work_mem will speed up the vacuum procedure.
maintenance_work_mem = 512MB
Buffering Writes
The default wal_buffers value can be increased to improve write access time. Increasing the checkpoint segments and the completion target helps to spread out the writes.
wal_buffers = 16MB
checkpoint_segments = 32
checkpoint_completion_target = 0.8
Index vs Table Scan
The random_page_cost parameter influences the query planner's choice between an index scan and a sequential table scan. The value to use depends on your disk IO; here are some suggestions:
# random_page_cost = 4 # Slow disk AWS EBS
# random_page_cost = 2 # Recent HD
# random_page_cost = 1 # SSD
Updating the Planner Statistics
PostgreSQL computes statistics on table content in order to choose the best plan when executing queries with joins and complex filters. In PostgreSQL <= 8.3 the default_statistics_target is set to 10, which can lead to estimates that are not accurate enough; since 8.4 the default value is 100. We recommend a higher value like 500:
default_statistics_target = 500
If the database is already populated, you need to execute ANALYZE to update the statistics.
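For instance, from a psql session connected to the Nuxeo database:
ANALYZE;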
Vacuuming
Autovacuum is enabled by default since PostgreSQL 8.3.
Exceptionally, a full vacuum can be run during downtime to recover disk space; it should be followed by a reindexdb command.
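For example, during a maintenance window, assuming the database is named nuxeo and you run as the postgres user:
$ vacuumdb --full --analyze nuxeo
$ reindexdb nuxeo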
Monitoring
We recommend the following settings to have a meaningful log in production:
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
log_min_duration_statement = 400
log_checkpoints=on
log_lock_waits=on
log_temp_files=0
log_autovacuum_min_duration=0
log_statement = ddl
track_functions=pl
Also, for effective monitoring, you should install the following extensions: pg_stat_statements and pg_buffercache.
Install the postgresql-contrib package.
sudo apt-get install postgresql-contrib
Log in to your database as the postgres user and create the extensions (requires PostgreSQL >= 9.1).
sudo su postgres -c 'psql -U postgres -d nuxeo -c "CREATE EXTENSION pg_buffercache;"'
sudo su postgres -c 'psql -U postgres -d nuxeo -c "CREATE EXTENSION pg_stat_statements;"'
Update the configuration.
shared_preload_libraries = 'pg_stat_statements, auto_explain'
# custom_variable_classes = 'pg_stat_statements, auto_explain' # uncomment if you are on PostgreSQL 9.1
pg_stat_statements.max = 10000
pg_stat_statements.track = top
auto_explain.log_min_duration = -1
auto_explain.log_analyze = 'false'
Restart the database.
sudo /etc/init.d/postgresql restart
See the PostgreSQL section of the Monitoring and Maintenance page.
Adapt Your Configuration to Your Hardware
Here are some values that can be used as a starting point for a dedicated server depending on the amount of memory.
| Amount of RAM | 4 GB | 8 GB | 16 GB | 32 GB |
|---|---|---|---|---|
| shared_buffers | 1GB | 2GB | 4GB | 8GB |
| effective_cache_size | 1536MB | 4GB | 8GB | 16GB |
| work_mem | 12MB | 12MB | 16MB | 20MB |
| maintenance_work_mem | 512MB | 1GB | 1GB | 1GB |
| max_connections | 63 | 103 | 153 | 203 |
Specific Configuration
Accent-Insensitive Full-Text Search
If you want accent-insensitive full-text search, you'll need to install the unaccent contribution, create a new text search configuration, and specify its use in Nuxeo.
Unaccent is described at http://www.postgresql.org/docs/9.0/static/unaccent.html.
Install it by running the unaccent.sql script. For Ubuntu users, this file is located at /usr/share/postgresql/9.0/contrib/unaccent.sql.
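For instance, the script can be loaded into the Nuxeo database with psql (using the Ubuntu path mentioned above):
$ psql -U postgres -d nuxeo -f /usr/share/postgresql/9.0/contrib/unaccent.sql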
Connect to your database and run the following instructions:
CREATE TEXT SEARCH CONFIGURATION fr ( COPY = french );
ALTER TEXT SEARCH CONFIGURATION fr ALTER MAPPING FOR asciihword, asciiword, hword_asciipart, hword, hword_part, word WITH unaccent, french_stem;
Then, in your default-repository-config.xml file, replace the french analyzer with the one you just defined (fr in this example).
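As a minimal sketch only, assuming the analyzer is selected by the analyzer attribute of the fulltext element in your repository contribution (check the actual structure of your default-repository-config.xml and keep the rest of it unchanged):
<!-- hypothetical excerpt: only the analyzer attribute changes, pointing to the "fr" text search configuration -->
<fulltext analyzer="fr">
  <!-- existing full-text index definitions, if any -->
</fulltext>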
Mass Import Specific Tuning
When doing a mass import you can disable the full-text trigger and drop the full-text indexes. This must be done after a successful login on a running Nuxeo DM, because the DDL SQL commands are executed on first access.
ALTER TABLE fulltext DISABLE TRIGGER nx_trig_ft_update;
DROP INDEX IF EXISTS fulltext_fulltext_idx;
DROP INDEX IF EXISTS fulltext_fulltext_description_idx;
DROP INDEX IF EXISTS fulltext_fulltext_title_idx;
After the import, re-enable the trigger, let it update the full-text column, and recreate the indexes:
ALTER TABLE fulltext ENABLE TRIGGER nx_trig_ft_update;
-- Let the trigger update the fulltext column
UPDATE fulltext SET fulltext = ''::TSVECTOR WHERE length(fulltext) is NULL;
-- For Nuxeo up to 5.4
CREATE INDEX fulltext_fulltext_idx ON fulltext USING gin (fulltext);
-- For Nuxeo >= 5.5
CREATE INDEX fulltext_fulltext_title_idx ON fulltext USING gin (nx_to_tsvector(fulltext_title::character varying));
CREATE INDEX fulltext_fulltext_description_idx ON fulltext USING gin (nx_to_tsvector(fulltext_description::character varying));
CREATE INDEX fulltext_fulltext_idx ON fulltext USING gin (nx_to_tsvector(fulltext::character varying));
Temporarily changing the PostgreSQL configuration during the import provides performance benefits:
full_page_writes = off
fsync = off
synchronous_commit = off
Do not do this if there is already any data you care about in any database on your PostgreSQL cluster. Please refer to the PostgreSQL reference manual.
Using uuid idType
If you want to use the PostgreSQL uuid type instead of the default varchar(36) (this is the case when you set nuxeo.vcs.idtype=uuid in the nuxeo.conf file), you need to create a new operator class to support a GIN index on the uuid[] type.
CREATE OPERATOR CLASS _uuid_ops DEFAULT
FOR TYPE _uuid USING gin AS
OPERATOR 1 &&(anyarray, anyarray),
OPERATOR 2 @>(anyarray, anyarray),
OPERATOR 3 <@(anyarray, anyarray),
OPERATOR 4 =(anyarray, anyarray),
FUNCTION 1 uuid_cmp(uuid, uuid),
FUNCTION 2 ginarrayextract(anyarray, internal, internal),
FUNCTION 3 ginqueryarrayextract(anyarray, internal, smallint, internal, internal, internal, internal),
FUNCTION 4 ginarrayconsistent(internal, smallint, anyarray, integer, internal, internal, internal, internal),
STORAGE uuid;
A possible error if you don't create the operator class described above is:
ERROR: data type uuid[] has no default operator class for access method "gin"
Limitations
PostgreSQL is very good, very fast, and is our reference database. To be completely honest, it still has some limitations that other databases don't have:
- Its full-text engine doesn't know how to do full-text phrase search, which means we have to use a slower and not completely equivalent workaround when phrase search is required (NXP-6720).
Reporting Problems
If you have a database configuration problem and you want to file a JIRA ticket, here is the information to report:
- The PostgreSQL server state: is it dedicated or shared, which OS, how many CPUs, how much RAM, is it a virtual machine...
- How much memory is available on the database server (free -m output).
- The amount of Nuxeo documents and the PostgreSQL configuration, using the following commands:
Log in on the database server as the postgres user.
sudo su - postgres
Get the Nuxeo SQL script to dump your configuration.
wget --no-check-certificate https://gist.github.com/bdelbosc/5507796/raw/dump-nuxeo-postgres-config.sql
Execute the SQL script with psql against the Nuxeo DB (not the default database named postgres).
psql nuxeo -f dump-nuxeo-postgres-config.sql
Note that this SQL script can generate some error logs when optional tables are not present or depending on the PostgreSQL version; the result file is still relevant.
Attach the output file located in /tmp/pgconf.txt to the JIRA ticket. An example of such a result file is here, so that you can check that yours has the correct format.
- If you are monitoring the slowest queries (see the Monitoring section), you can zip and attach the postgresql log file to the JIRA ticket.