Maximizing performance
Now that you have ensured that the data for your Evergreen system is safe and recoverable in the event of a disaster, you can begin to improve its performance with confidence. While you can improve performance in many ways, this chapter will give you an overview of tuning your ejabberd XMPP server, optimizing the performance of your PostgreSQL database server, and enhancing the configuration of your OpenSRF application servers.
Making your ejabberd XMPP server faster
When you installed Evergreen, you had to change a number of ejabberd configuration settings to increase its performance. However, some Linux distributions prevent ejabberd from using more than one CPU core at a time by default. On a production system running ejabberd on Debian or Ubuntu, you can enable the ejabberd XMPP server to use more than one CPU core at a time to process requests.
- Edit /etc/default/ejabberd and change the SMP setting from #SMP=disable to SMP=auto
- Restart ejabberd.
- Restart the OpenSRF services.
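The steps above can be sketched as follows. This works on a temporary copy of the defaults file so you can see the edit safely; on a real Debian or Ubuntu system you would run the sed command against /etc/default/ejabberd itself, and the osrf_control invocation shown in the trailing comments is an assumption that may differ on your install:

```shell
# A minimal sketch of the edit (Debian/Ubuntu keep this file at
# /etc/default/ejabberd; we work on a temporary copy here).
conf=$(mktemp)
printf '#SMP=disable\n' > "$conf"

# Flip the SMP setting so ejabberd can use every CPU core.
sed -i 's/^#SMP=disable/SMP=auto/' "$conf"
cat "$conf"    # prints: SMP=auto

# On the real file, follow with:
#   sudo service ejabberd restart
#   osrf_control --localhost --restart-all   # control-script name is an assumption
```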
Improving the performance of your PostgreSQL database server
Almost every Evergreen operation requires communicating with a PostgreSQL database. If the PostgreSQL database is slow, then Evergreen is slow. This section describes how to ensure that PostgreSQL is as fast as it can be, given the resources that you have available for your Evergreen system.
Install your PostgreSQL database onto its own separate, dedicated hardware
The performance of your database server depends on having access to large amounts of RAM so that it can cache frequently-accessed data in memory and avoid having to read from the disk, which is a much slower operation. Correspondingly, if other software on the system blocks PostgreSQL from accessing the hard drives for read or write operations, then the performance of Evergreen will suffer. Therefore, one of the best options for maximizing the performance of Evergreen is to dedicate a separate server to the PostgreSQL database.
Write transaction logs to a separate physical drive
If your database server has enough disk drives, put the transaction log on a separate physical drive or volume from your data. This prevents writes to the transaction log from blocking reads and writes of your data.
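One common way to do this on an existing cluster is to move the transaction log directory (pg_xlog on the PostgreSQL versions of this era) onto the dedicated volume and leave a symlink behind. A minimal sketch, using throwaway directories in place of a real data directory and WAL volume; on a live system the paths would be something like /var/lib/postgresql/<version>/main and a mount point on the dedicated drive, and PostgreSQL must be stopped first:

```shell
# Stand-ins for the real locations (PostgreSQL would be stopped
# before doing this against a live data directory).
data=$(mktemp -d)/data
wal=$(mktemp -d)/wal-drive
mkdir -p "$data/pg_xlog" "$wal"

# Move the transaction log to the dedicated volume...
mv "$data/pg_xlog" "$wal/pg_xlog"
# ...and leave a symlink so PostgreSQL still finds it at the old path.
ln -s "$wal/pg_xlog" "$data/pg_xlog"

ls -l "$data" | grep pg_xlog    # shows the symlink to the new location
```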
Add RAM to your database server
If your server can cache the entire contents of your database in RAM, then the performance of your database will be many times faster than if it has to occasionally access data from disk. Your database size may not allow this, however; for example, an Evergreen system with 2.25 million bibliographic records currently uses over 120 GB of data. Alternatively, performance can still benefit significantly if you have enough RAM to keep the largest, most frequently accessed database tables cached in memory. The same Evergreen system holds approximately 20 GB of data in the core index tables.

Add CPU cores to your database server
If you have the luxury of having enough RAM to cache your database in memory, then adding CPU cores to your database server can improve performance, as each CPU core can handle one concurrent query. For example, if 10 users submit search requests at the same time on your Evergreen server, a database server with only 1 CPU core can work on only one query at a time, while a database server with 16 CPU cores can process all 10 queries simultaneously.
Tune your PostgreSQL database
In the interests of conserving RAM and CPU resources, the default configuration for PostgreSQL on Linux distributions tends to be quite conservative. While that configuration can support a freshly installed Evergreen system, production Evergreen systems must adjust that configuration to maximize their performance. On most Linux distributions, the PostgreSQL configuration is kept in a file named postgresql.conf. The pgtune utility can be used to generate some suggestions for your system, or you can start with the following rules of thumb:
- shared_buffers: This setting dedicates memory to caching frequently accessed data and blocks the operating system from accessing that memory. Set this to 1/4 of the available RAM on your server.
- max_connections: This setting determines how many concurrent physical connections your database server will support. If you are not using a connection pooling solution such as pgBouncer (http://wiki.postgresql.org/wiki/PgBouncer), this setting must exceed the sum of the maximum children defined in the opensrf.xml configuration file for every service that connects to the database, plus extra connections for manual access to the database and for any scripts you need to run. If this number is too low, your system may run out of available database connections and return seemingly random errors. As a caveat, each physical connection increases the maximum amount of memory consumed by PostgreSQL, so setting this value too high can lead to running out of memory on the database server.
- default_statistics_target: This setting determines how much of the data in each table PostgreSQL samples to determine how to access that data when it processes a query. If the statistics target is too low, PostgreSQL may choose a bad (slow) plan. Set this to 1000.
- effective_cache_size: This setting represents how much memory is available to the operating system to cache frequently accessed data, and affects PostgreSQL's choice of access plans. Set this to approximately 60% of the available RAM on your server.
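For example, on a dedicated database server with 16 GB of RAM, these rules of thumb might translate into the following postgresql.conf fragment. The numbers are illustrative, not prescriptive; scale them to your own hardware:

```
shared_buffers = 4GB              # 1/4 of 16 GB of RAM
effective_cache_size = 10GB       # roughly 60% of RAM
default_statistics_target = 1000
max_connections = 200             # must exceed total OpenSRF children plus extras
```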
Running reports against a replica database
If you have the luxury of having a replica database server, you can use it for more than just disaster recovery. PostgreSQL versions 9.0 and later support reads against replica databases. In this scenario, you can point the Evergreen reporter service at your replica database for an easy performance win:
- Reports cannot tie up your production database server with long-running queries, which can be a severe problem if a report template contains a particularly complicated set of relationships
- You can reduce the max_connections required in your database configuration, freeing up memory for caching data
- Your production database server can cache the data that is most necessary for searches and circulation transactions rather than the data required by arcane reports.
To run your reports against a replica database:
- In your opensrf.xml file, find the <reporter> section. When you installed Evergreen, the eg_db_config.pl script set the <database> and <state_store> connection information to point at your production database server.
- Change the <database> connection information to connect to the replica database.
- Keep the <state_store> connection information pointing at your production database server. The reporter process needs to be able to write to the database to update the status of each scheduled report, and a replica database built on streaming replication is, by definition, a read-only database.
Using a replication tool like Slony can result in a writable replica database, but that is outside the scope of this document.
- Restart the Perl OpenSRF services to load the changed configuration.
- Restart the clark-kent.pl report generator so that it loads the new database connection information.
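The resulting <reporter> section might look like the following abbreviated sketch, where replica.example.org and db.example.org stand in for your replica and production database hosts; the exact element layout in your opensrf.xml may differ slightly:

```xml
<reporter>
  <!-- ...other reporter settings elided... -->
  <database>
    <host>replica.example.org</host>  <!-- point reads at the replica -->
    <db>evergreen</db>
    <!-- user, pw, port unchanged -->
  </database>
  <state_store>
    <host>db.example.org</host>       <!-- keep writes on the production server -->
    <db>evergreen</db>
    <!-- user, pw, port unchanged -->
  </state_store>
</reporter>
```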
Adding another Evergreen application server
If you have the resources to add another server to your Evergreen system, consider using it as an additional Evergreen application server to run selected OpenSRF services. At the same time, you can use the divide and conquer strategy to distribute the ejabberd, Apache, and memcached services between your two Evergreen application servers. In the following scenario, we refer to the first Evergreen application server in your system as app_server_1 and the second Evergreen application server in your system as app_server_2.
app_server_1 configuration
We will use app_server_1 for the following services:
- Apache
- All OpenSRF applications defined in opensrf.xml
- A network share serving up /openils (typically via NFS)
- A network share serving up a writable location for MARC batch import (typically via NFS)
app_server_2 configuration
We will use app_server_2 for the following services:
- ejabberd
- memcached
- OpenSRF router
- All OpenSRF applications defined in opensrf.xml
- A read-only network share from app_server_1 mounted at /openils
- A writable network share from app_server_1 mounted at /nfs-cluster
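On app_server_2, those two mounts might appear in /etc/fstab along these lines; the hostname and mount options are examples to adapt to your environment:

```
# /etc/fstab on app_server_2 (illustrative)
app_server_1:/openils      /openils      nfs  ro,hard  0 0
app_server_1:/nfs-cluster  /nfs-cluster  nfs  rw,hard  0 0
```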
Enabling Apache to access the correct memcached server
When the Apache server is not on the same server as your memcached server, you must update the Apache configuration in eg_vhost.conf to set the OSRFTranslatorCacheServer value to the IP address (or hostname) and port on which the memcached server is available.
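For example, if memcached runs on app_server_2 on its default port of 11211, the directive in eg_vhost.conf would look like the following (the hostname is an example):

```
# eg_vhost.conf: point the translator at the remote memcached instance
OSRFTranslatorCacheServer app_server_2:11211
```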
Starting and stopping OpenSRF services across multiple servers
Within a single cluster of Evergreen servers, you can only have one OpenSRF router process running at a time. To stop the OpenSRF Perl and C services cleanly, perform the following steps:
- app_server_1: Stop the OpenSRF C services.
- app_server_2: Stop the OpenSRF C services.
- app_server_1: Stop the OpenSRF Perl services.
- app_server_2: Stop the OpenSRF Perl services.
- app_server_2: Stop the OpenSRF Router.
To start the OpenSRF services, perform the following steps:
- app_server_2: Start the OpenSRF Router.
- app_server_2: Start the OpenSRF Perl services.
- app_server_1: Start the OpenSRF Perl services.
- app_server_2: Start the OpenSRF C services.
- app_server_1: Start the OpenSRF C services.
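The ordering above can be sketched as a small dry-run script. The osrf_ctl.sh control script and its action names are assumptions from a classic OpenSRF install, and run() only prints each command rather than executing it:

```shell
# run() prints the command it would issue on each host (dry run);
# swap echo for ssh to actually execute the steps in order.
run() { echo "ssh $1 osrf_ctl.sh -l -a $2"; }

# Stop: C services first, then Perl services, router last.
run app_server_1 stop_c
run app_server_2 stop_c
run app_server_1 stop_perl
run app_server_2 stop_perl
run app_server_2 stop_router

# Start: router first, then Perl services, then C services.
run app_server_2 start_router
run app_server_2 start_perl
run app_server_1 start_perl
run app_server_2 start_c
run app_server_1 start_c
```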
Providing access to the network shares
The /openils directory on a single-server Evergreen instance is used to read and write many different files. However, on a multiple-server Evergreen instance, administrators typically turn the /openils directory into a network share that is writable by only one Evergreen application server and read-only for other application servers. This can complicate your configuration significantly, as a default install of OpenSRF and Evergreen attempts to write log and process ID (PID) files to the /openils/var/log directory.
Similarly, the MARC Batch Importer (open-ils.vandelay) service needs to be able to store files temporarily in a common directory, such as /nfs-cluster.
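On app_server_1, the two shares might be exported with an /etc/exports fragment like the following; the hostname and export options are illustrative:

```
# /etc/exports on app_server_1 (illustrative)
/openils      app_server_2(ro,sync,no_subtree_check)
/nfs-cluster  app_server_2(rw,sync,no_subtree_check)
```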
Defining which services to run on each server
The stock opensrf.xml configuration file includes a <hosts> section at the very end that lists:
- One or more hostnames as XML elements; for example, <localhost>...</localhost> in the stock configuration file.
- Each hostname contains one <activeapps> element, which in turn contains a set of one or more <appname> elements naming the OpenSRF applications that should be started on that given host.
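A two-server deployment might therefore end opensrf.xml with a <hosts> section along these lines. The hostnames and the particular split of services are examples only; the <appname> values come from a stock install, and each element name must match the hostname the server reports:

```xml
<hosts>
  <app_server_1>
    <activeapps>
      <appname>opensrf.settings</appname>
      <appname>open-ils.cat</appname>
      <!-- ...other services to run on this host... -->
    </activeapps>
  </app_server_1>
  <app_server_2>
    <activeapps>
      <appname>open-ils.search</appname>
      <appname>open-ils.circ</appname>
      <!-- ...other services to run on this host... -->
    </activeapps>
  </app_server_2>
</hosts>
```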