
Backlog Enterprise Operations Guide


Directory structure

Backlog is installed with the following directory structure:

<Installation directory>
├── backlog-job-worker
│     └── hikaricp.properties #Maximum number of connections for database access
├── data #Storage location for data
├── logs #Storage location for log files
├── .env #Configuration values
└── docker-compose.yml #Container definitions

Command lines to start and stop Backlog

Stop

Stop running Backlog.

cd <Installation directory>
docker-compose down

Start

Start Backlog.

cd <Installation directory>
docker-compose up -d
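
Restart

Restarting Backlog (for example, after changing configuration) is simply the Stop command followed by the Start command:

cd <Installation directory>
docker-compose down
docker-compose up -d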

Services

Service                Description
backlog-api            Backlog API
backlog-davsvn         Backlog file and Subversion features
backlog-job-worker     Internal data processing
backlog-solr           Backlog full-text search
backlog-web            Backlog web service
cron                   Internal data processing
database-migration     Database version control
elasticsearch          Backlog full-text search
elasticsearch-init     Initial setup for full-text search
fluentd                Collects Backlog logs
git-backlog-worker     Internal data processing
git-http               Backlog Git HTTP service
git-rpc                Internal data processing
git-ssh                Backlog Git SSH service
git-webhook-worker     Internal data processing
kanban-backend         Backlog Kanban board
kanban-notification    Internal data processing
memcached              Storage for various caches
nginx                  Reverse proxy
redis                  Storage for various caches
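
To check which of these services are currently running, you can use the standard Docker Compose status command from the installation directory:

cd <Installation directory>
docker-compose ps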

Port numbers

Services are published on the following port numbers. If you’re using iptables or other firewall software, allow external access to these ports.

           Port number
HTTPS      443
Git SSH    8972
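
For example, if you manage the firewall with iptables, rules along the following lines would allow access to these two ports (a sketch; adapt it to your existing rule set and persistence mechanism):

# Allow HTTPS (Backlog web) and Git SSH
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 8972 -j ACCEPT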

File data storage directory

Backlog file data is stored in the following directories:

data
├── attachment
│       └── pull_request
├── davsvn
│       ├── share
│       └── svn
├── elasticsearch
├── git
├── image
└── solr
        ├── issue
        ├── pull_request
        ├── shared_file
        └── wiki
Directory path                  Description
data/attachment/pull_request    Files attached to pull requests (Git feature)
data/davsvn/share               Files managed by the file feature
data/davsvn/svn                 Repositories for the Subversion feature
data/elasticsearch              Search index
data/git                        Repositories for the Git feature
data/image                      Image files
data/solr                       Search index
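
These directories grow with repository, attachment, and index data, so it can be useful to check their disk usage occasionally, for example:

cd <Installation directory>
du -sh data/*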

Log file storage directory

Logs for each service are stored in <Installation directory>/logs.

logs
├── ${tag[1]} #Directory where logs are temporarily stored
│       └── ${tag[1]}
├── backlog-api
├── backlog-davsvn
├── backlog-davsvn-mntlog
│       ├── dav-job-worker
│       ├── httpd-davsvn
│       ├── svn-hook
│       └── svnserve
├── backlog-job-worker
├── backlog-solr
├── backlog-web
├── database-migration
├── elasticsearch
├── elasticsearch-init
├── git-http
├── git-rpc
├── git-rpc-mntlog
├── git-ssh
├── kanban-backend
├── kanban-notification
├── memcached
├── nginx
└── redis
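
To follow a particular service's log, tail the files in the corresponding directory. The exact file names depend on the service, so the pattern below is illustrative:

cd <Installation directory>
tail -f logs/backlog-web/*.log # file names vary by service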

Parameters for .env

Parameter name | Initial value | Description | Input from configuration tool
BACKLOG_DB_HOST | | Host of the database to connect to | ✔︎
BACKLOG_DB_PORT | | Port of the database to connect to | ✔︎
BACKLOG_DB_NAME | | Schema name of the database to connect to | ✔︎
BACKLOG_DB_USER | | User name for the database connection | ✔︎
BACKLOG_DB_PASSWORD | | Password for the database connection | ✔︎
BACKLOG_SMTP_HOST | | Host of the SMTP server | ✔︎
BACKLOG_TIMEZONE | | Time zone | ✔︎ (first time only)
BACKLOG_MAIL_NOTIFICATIONS_ADDRESS_FORMAT | | From address of email sent from Backlog. If not specified, the address of the user who registered the issue, etc. is used; note that such email may be treated as spoofed or unsolicited. |
BACKLOG_WEB_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter |
BACKLOG_API_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter |
FIXED_IP_ADDRESS_PREFIX | 10.254.249 | Fixed IP prefix used by Docker Compose networking |
LOG_REMAIN_DAYS | 7 | Log retention period (days) |
LDAPS_USING | false | Turns the LDAPS connection on or off |
GIT_SSH_HOST_PRIVATE_KEY_ENC | Automatically generated by the configuration tool | Never change this parameter |
KANBAN_OAUTH2_CLIENT_ID | Automatically generated by the configuration tool | Never change this parameter |
BACKLOG_DATA_DIRECTORY | ./data | Directory where Backlog data is stored |
BACKLOG_LOG_DIRECTORY | ./logs | Directory where logs are stored |
BACKLOG_CERT_DIRECTORY | | Directory where the SSL certificate and private key are stored | ✔︎ (first time only)
BACKLOG_WEB_JAVA_OPTS | '-Xmx2048M -Xms512M -XX:MaxMetaspaceSize=512m' | JVM startup options for backlog-web |
BACKLOG_API_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=512m' | JVM startup options for backlog-api |
BACKLOG_JOBWORKER_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=256m' | JVM startup options for backlog-job-worker |
BACKLOG_KANBAN_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=256m' | JVM startup options for kanban-backend |
BACKLOG_WEB_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-web |
BACKLOG_API_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-api |
BACKLOG_DAVSVN_START_SERVERS | 5 | HTTP server configuration for backlog-davsvn. More information |
BACKLOG_DAVSVN_MIN_SPARE_SERVERS | 5 | HTTP server configuration for backlog-davsvn. More information |
BACKLOG_DAVSVN_MAX_SPARE_SERVERS | 10 | HTTP server configuration for backlog-davsvn. More information |
BACKLOG_DAVSVN_MAX_REQUEST_WORKERS | 256 | HTTP server configuration for backlog-davsvn. More information |
BACKLOG_DAVSVN_MAX_CONNECTIONS_PER_CHILD | 0 | HTTP server configuration for backlog-davsvn. More information |
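
For example, to give backlog-web a larger heap, you could edit BACKLOG_WEB_JAVA_OPTS in .env and then restart with the Stop and Start commands. The value below is illustrative, not a recommendation:

# .env (excerpt)
BACKLOG_WEB_JAVA_OPTS='-Xmx4096M -Xms512M -XX:MaxMetaspaceSize=512m'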

Back-up data

The data handled by Backlog is stored in the directories described under "File data storage directory" above; back these up regularly, together with the database.
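
A minimal backup sketch, assuming the default ./data directory and an external MySQL database as configured in .env (adjust paths, credentials, and scheduling to your environment):

cd <Installation directory>
docker-compose down # stop Backlog so the copy is consistent
tar czf backlog-data-$(date +%Y%m%d).tar.gz data # archive the data directory
mysqldump -h <DB host> -u <DB user> -p <DB name> > backlog-db-$(date +%Y%m%d).sql
docker-compose up -d # start Backlog again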

Creating an advanced configuration file

Create docker-compose.override.yml at the same level as docker-compose.yml. In docker-compose.override.yml, list the services you want to change and their configuration values. After it’s created, restart with the Start command.

For example:

services:
  backlog-web:
    environment:
    ...

  backlog-api:
    environment:
    ...
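
As a concrete illustration, an override that only changes environment variables documented later in this guide might look like the following (the values are examples only):

services:
  backlog-web:
    environment:
      - MAX_DISK_BUFFER=60MB

  backlog-api:
    environment:
      - MAX_DISK_BUFFER=60MB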

Get more information.

Integrating with Active Directory

Change the port number of LDAPS

For an LDAPS connection, set the LDAPS port number in the advanced configuration file you created. After editing, restart with the Start command.

backlog-api:
  environment:
    - LDAPS_PORT=8636 # Default 636

backlog-web:
  environment:
    - LDAPS_PORT=8636 # Default 636

Use Active Directory Certificate Service

If you use an LDAPS connection and the Active Directory Certificate Service, place the certificate issued by Active Directory Certificate Service in any directory. After it’s placed, add the following information to the advanced configuration file you created, and restart with the Start command.

backlog-api:
  volumes:
    - /path/to/cert/dir:/mnt/certs #Please replace `/path/to/cert/dir` with the directory where you placed the certificate
  command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true

backlog-web:
  volumes:
    - /path/to/cert/dir:/mnt/certs #Please replace `/path/to/cert/dir` with the directory where you placed the certificate
  command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true

Create users with Active Directory

Create Backlog users with information from Active Directory. Set up zone definitions and SRV resource records for Active Directory before operation. Active Directory is supported on Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. Get detailed instructions.
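
Before connecting, you can check that the SRV resource records for the domain controller resolve from the Backlog host; the domain name below is an example:

# Query the LDAP SRV record published by Active Directory
nslookup -type=SRV _ldap._tcp.example.local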

Regenerate the search index

If the results of a keyword search are not displayed correctly, run the command to regenerate the search index. Note that regeneration takes longer when there is a large amount of index data.

Problem with issue search (issue list)

Use the following:

docker-compose up elasticsearch-tool

Problem with global navigation keyword searches

Check if the following message is output in the backlog-solr log within the log storage directory:

java.lang.OutOfMemoryError: Java heap space

If the message appears in the log, there might be insufficient memory for Solr. Open the docker-compose.override.yml file and update the Solr memory settings.

backlog-solr:
  environment:
    - SOLR_JAVA_MEM=-Xms1024m -Xmx1024m # Default -Xms512m -Xmx512m

With the memory settings updated, restart using the same command you used to start.
After restarting, execute the following to recreate the search index:

docker-compose up solr-tool

If the message doesn’t appear in the log, use the following:

docker-compose up solr-tool

Change the speed of search index regeneration

Change the limits for indexing jobs in the advanced configuration file you created. After editing, restart with the Start command.

backlog-job-worker:
  environment:
    - BACKLOG_WORKER_JOB_INDEXING_ISSUE_JOB_PERMITS=16 # Default 6

Change the maximum request buffer size

If the wiki has too many characters and can’t be saved, for example, change the buffer size limits in the advanced configuration file you created. After editing, restart with the Start command.

backlog-api:
  environment:
    - MAX_MEMORY_BUFFER=25MB # Default 20MB
    - MAX_DISK_BUFFER=60MB # Default 55MB

backlog-web:
  environment:
    - MAX_MEMORY_BUFFER=25MB
    - MAX_DISK_BUFFER=60MB

Change the maximum number of simultaneous Nginx connections

Copy nginx.conf locally with the following command:

docker-compose cp nginx:/etc/nginx/nginx.conf .

Edit nginx.conf as needed, then mount it into the container by adding the following to docker-compose.override.yml. After editing, restart with the Start command.

nginx:
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf
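
Inside nginx.conf, the maximum number of simultaneous connections is governed primarily by the worker_connections directive; an edited excerpt might look like this (the value is illustrative):

events {
    worker_connections 2048;
}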

Get more information on configuring nginx.conf.

Notes on using MySQL 5.7

Troubleshooting

MySQL 5.7 may cause issues in Backlog if the MySQL query cache is enabled. If query_cache_type has an existing value in my.cnf, set it to 0. If not, there’s no need to add it, as the default value for query_cache_type in MySQL 5.7 is 0.

[mysqld]
max_allowed_packet=128M
character-set-server=utf8mb4
sql_mode = "NO_ENGINE_SUBSTITUTION"
query_cache_type = 0

[mysql]
default-character-set=utf8mb4

[client]
default-character-set=utf8mb4

Increase the file size limit for issue attachments

You can attach files up to 50MB to an issue. If you have any problems, try adjusting the following settings:

Increase the request packet size limit

Change the MAX_DISK_BUFFER in the advanced configuration file to 55MB or higher. Then, restart using the Start command.

backlog-api:
  environment:
    - MAX_DISK_BUFFER=55MB # Default 55MB

backlog-web:
  environment:
    - MAX_DISK_BUFFER=55MB

Modify or add MySQL settings

In the my.cnf file, set the value of max_allowed_packet to 128M or higher. This is required for both MySQL 8.0 and MySQL 5.7. After editing, restart MySQL.

[mysqld]
max_allowed_packet=128M

About the trademark

  • MySQL and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
  • Docker is a trademark or registered trademark of Docker, Inc. in the United States and/or other countries.
  • Nginx is a registered trademark of Nginx Software Inc. in the United States and/or other countries.
  • Active Directory is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries.
  • Other company and product names in this document may be trademarks or registered trademarks of their respective owners.

Author: Backlog Support <support@backlog.com>