Backlog Enterprise Operations Guide
Directory structure
Backlog is installed with the following directory structure:
<Installation directory>
├── backlog-job-worker
│ └── hikaricp.properties # Maximum number of connections for database access
├── data # Storage location for Backlog data
├── logs # Storage location for log files
├── .env # Settings file
└── docker-compose.yml # Container definition file
Command line to start and stop Backlog
Stop
Stops a running Backlog.
cd <Installation directory>
docker-compose down
Start
Start Backlog.
cd <Installation directory>
docker-compose up -d
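After starting, a quick sanity check is to list the containers and confirm each service came up (this assumes docker-compose is on your PATH; it is not a Backlog-specific command):

```shell
cd <Installation directory>
docker-compose ps          # each service listed below should show a running/"Up" state
docker-compose logs nginx  # inspect a specific service if one failed to start
```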
Services
Service | Description |
---|---|
backlog-api | Backlog API |
backlog-davsvn | Backlog file and subversion feature |
backlog-job-worker | Internal data processing |
backlog-solr | Full-text Backlog search |
backlog-web | Backlog web service |
cron | Internal data processing |
database-migration | Database version control |
elasticsearch | Full-text Backlog search |
elasticsearch-init | Initial setting for full-text search |
fluentd | Collect Backlog logs |
git-backlog-worker | Internal data processing |
git-http | Backlog Git HTTP service |
git-rpc | Internal data processing |
git-ssh | Backlog Git SSH service |
git-webhook-worker | Internal data processing |
kanban-backend | Backlog kanban board |
kanban-notification | Internal data processing |
memcached | Various cache storage locations |
nginx | Reverse proxy |
redis | Various cache storage locations |
Port numbers
Services are published on the following port numbers. If you're using iptables or other firewall software, please allow external access to these ports.
Service | Port number |
---|---|
HTTPS | 443 |
Git SSH | 8972 |
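For example, with firewalld the two ports can be opened as follows (a sketch only; adapt to whatever firewall tooling your distribution uses):

```shell
firewall-cmd --permanent --add-port=443/tcp    # HTTPS
firewall-cmd --permanent --add-port=8972/tcp   # Git SSH
firewall-cmd --reload
```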
File data storage directory
Backlog file data is stored in the following directories:
data
├── attachment
│ └── pull_request
├── davsvn
│ ├── share
│ └── svn
├── elasticsearch
├── git
├── image
└── solr
    ├── issue
    ├── pull_request
    ├── shared_file
    └── wiki
Directory path | Description |
---|---|
data/attachment/pull_request | Files attached to pull requests (Git feature) |
data/davsvn/share | Files managed by the file sharing feature |
data/davsvn/svn | Repositories for the Subversion feature |
data/elasticsearch | Search index (Elasticsearch) |
data/git | Repositories for the Git feature |
data/image | Image files |
data/solr | Search index (Solr) |
Log file storage directory
Logs for each service are stored in <Installation directory>/logs.
logs
├── ${tag[1]} # Directory for temporarily stored logs
│ └── ${tag[1]}
├── backlog-api
├── backlog-davsvn
├── backlog-davsvn-mntlog
│ ├── dav-job-worker
│ ├── httpd-davsvn
│ ├── svn-hook
│ └── svnserve
├── backlog-job-worker
├── backlog-solr
├── backlog-web
├── database-migration
├── elasticsearch
├── elasticsearch-init
├── git-http
├── git-rpc
├── git-rpc-mntlog
├── git-ssh
├── kanban-backend
├── kanban-notification
├── memcached
├── nginx
└── redis
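Besides reading the files under logs/, the live output of a service can be followed through docker-compose (service names are those listed in the Services table; this is standard docker-compose usage, not a Backlog-specific command):

```shell
cd <Installation directory>
docker-compose logs -f backlog-web
```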
Parameters for .env
Parameter name | Initial value | Description | Input from configuration tool |
---|---|---|---|
BACKLOG_DB_HOST | | Host of the database to connect to | ✔︎ |
BACKLOG_DB_PORT | | Port of the database to connect to | ✔︎ |
BACKLOG_DB_NAME | | Schema name of the database to connect to | ✔︎ |
BACKLOG_DB_USER | | User name for the database connection | ✔︎ |
BACKLOG_DB_PASSWORD | | Password for the database connection | ✔︎ |
BACKLOG_SMTP_HOST | | Host of the SMTP server | ✔︎ |
BACKLOG_TIMEZONE | | Time zone | ✔︎ (first time only) |
BACKLOG_MAIL_NOTIFICATIONS_ADDRESS_FORMAT | | From address of emails sent by Backlog. If not specified, the address of the user who registered the issue (or similar) is used; note that such mail may be treated as spoofed or unsolicited. | |
BACKLOG_WEB_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter | |
BACKLOG_API_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter | |
FIXED_IP_ADDRESS_PREFIX | 10.254.249 | Fixed IP prefix used by Docker Compose networking | |
LOG_REMAIN_DAYS | 7 | Log retention period (days) | |
LDAPS_USING | false | Turns the LDAPS connection on or off | |
GIT_SSH_HOST_PRIVATE_KEY_ENC | Automatically generated by the configuration tool | Never change this parameter | |
KANBAN_OAUTH2_CLIENT_ID | Automatically generated by the configuration tool | Never change this parameter | |
BACKLOG_DATA_DIRECTORY | ./data | Directory where Backlog data is stored | |
BACKLOG_LOG_DIRECTORY | ./logs | Directory where logs are stored | |
BACKLOG_CERT_DIRECTORY | | Directory where SSL certificates and private keys are stored | ✔︎ (first time only) |
BACKLOG_WEB_JAVA_OPTS | '-Xmx2048M -Xms512M -XX:MaxMetaspaceSize=512m' | JVM startup options for backlog-web | |
BACKLOG_API_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=512m' | JVM startup options for backlog-api | |
BACKLOG_JOBWORKER_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=256m' | JVM startup options for backlog-job-worker | |
BACKLOG_KANBAN_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MaxMetaspaceSize=256m' | JVM startup options for kanban-backend | |
BACKLOG_WEB_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-web | |
BACKLOG_API_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-api | |
BACKLOG_DAVSVN_START_SERVERS | 5 | HTTP server configuration for backlog-davsvn. More information | |
BACKLOG_DAVSVN_MIN_SPARE_SERVERS | 5 | HTTP server configuration for backlog-davsvn. More information | |
BACKLOG_DAVSVN_MAX_SPARE_SERVERS | 10 | HTTP server configuration for backlog-davsvn. More information | |
BACKLOG_DAVSVN_MAX_REQUEST_WORKERS | 256 | HTTP server configuration for backlog-davsvn. More information | |
BACKLOG_DAVSVN_MAX_CONNECTIONS_PER_CHILD | 0 | HTTP server configuration for backlog-davsvn. More information | |
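As an illustration, the tunable portion of a .env might look like this (the values are examples only, not recommendations; adjust to your host's resources):

```
BACKLOG_TIMEZONE=Asia/Tokyo
LOG_REMAIN_DAYS=7
BACKLOG_WEB_JAVA_OPTS='-Xmx2048M -Xms512M -XX:MaxMetaspaceSize=512m'
BACKLOG_WEB_DB_MAXIMUM_POOL_SIZE=10
```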
Back-up data
The data handled by Backlog is stored in the files and directories described above; back them up regularly.
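Backlog itself provides no single backup command; as a sketch, the data directory and settings files can be archived with tar while Backlog is stopped. The `backup_backlog` helper name below is ours, not part of Backlog, and the external database must be backed up separately with your MySQL tooling.

```shell
#!/bin/sh
# Sketch of a cold backup: run `docker-compose down` first so files are consistent.
# backup_backlog is an illustrative helper, not a Backlog command.
backup_backlog() {
    src="$1"    # installation directory containing data/, .env, docker-compose.yml
    out="$2"    # path of the archive to create
    tar czf "$out" -C "$src" data .env docker-compose.yml
}

# Example (after `docker-compose down`, before `docker-compose up -d`):
# backup_backlog /opt/backlog "/var/backups/backlog-$(date +%Y%m%d).tar.gz"
```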
Creating an advanced configuration file
Create docker-compose.override.yml at the same level as docker-compose.yml. In docker-compose.override.yml, list the services you want to change and their configuration values. After creating the file, restart with the Start command.
For example:
services:
  backlog-web:
    environment:
      ...
  backlog-api:
    environment:
      ...
Integrating with Active Directory
Change the port number of LDAPS
For an LDAPS connection, set the LDAPS port number in the advanced configuration file you created. After editing, restart with the Start command.
services:
  backlog-api:
    environment:
      - LDAPS_PORT=8636 # Default 636
  backlog-web:
    environment:
      - LDAPS_PORT=8636 # Default 636
Use Active Directory Certificate Service
If you use an LDAPS connection together with Active Directory Certificate Services, place the certificate issued by Active Directory Certificate Services in a directory of your choice. Then add the following to the advanced configuration file you created, and restart with the Start command.
services:
  backlog-api:
    volumes:
      - /path/to/cert/dir:/mnt/certs # Replace `/path/to/cert/dir` with the directory containing the certificate
    command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
  backlog-web:
    volumes:
      - /path/to/cert/dir:/mnt/certs # Replace `/path/to/cert/dir` with the directory containing the certificate
    command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
Create users with Active Directory
Create Backlog users with information from Active Directory. Set up zone definitions and SRV resource records for Active Directory before operation. Active Directory is supported on Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. Get detailed instructions.
Regenerate the search index
If keyword search results are not displayed correctly, run the command below to regenerate the search index. Note that regeneration takes longer when there is a large amount of index data.
Problem with issue search (issue list)
Run the following:
docker-compose up elasticsearch-tool
Problem with global navigation keyword searches
Check if the following message is output in the backlog-solr log within the log storage directory:
java.lang.OutOfMemoryError: Java heap space
If the message appears in the log, there might be insufficient memory for Solr. Open the docker-compose.override.yml file and update the Solr memory settings.
services:
  backlog-solr:
    environment:
      - SOLR_JAVA_MEM=-Xms1024m -Xmx1024m # Default -Xms512m -Xmx512m
With the memory settings updated, restart using the same command you used to start.
After restarting, run the following to recreate the search index:
docker-compose up solr-tool
If the message doesn’t appear in the log, use the following:
docker-compose up solr-tool
Change the speed of search index regeneration
Change the limits for indexing jobs in the advanced configuration file you created. After editing, restart with the Start command.
services:
  backlog-job-worker:
    environment:
      - BACKLOG_WORKER_JOB_INDEXING_ISSUE_JOB_PERMITS=16 # Default 6
Change the maximum request buffer size
If, for example, a wiki page has too many characters and can't be saved, increase the buffer size limits in the advanced configuration file you created. After editing, restart with the Start command.
services:
  backlog-api:
    environment:
      - MAX_MEMORY_BUFFER=25MB # Default 20MB
      - MAX_DISK_BUFFER=60MB # Default 55MB
  backlog-web:
    environment:
      - MAX_MEMORY_BUFFER=25MB
      - MAX_DISK_BUFFER=60MB
Change the maximum number of simultaneous Nginx connections
Copy nginx.conf from the container to the local machine with the following command:
docker-compose cp nginx:/etc/nginx/nginx.conf .
Edit nginx.conf, add the following to docker-compose.override.yml, then restart with the Start command.
services:
  nginx:
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
Get more information on configuring nginx.conf.
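The directive that governs simultaneous connections per worker is worker_connections in the events block; a sketch of the relevant part of nginx.conf (the value is illustrative, not a recommendation):

```
events {
    worker_connections 2048; # commonly shipped as 1024; raise to allow more simultaneous connections
}
```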
Notes on using MySQL 5.7
Troubleshooting
MySQL 5.7 may cause issues in Backlog if the MySQL query cache is enabled. If query_cache_type is set in my.cnf, change it to 0. If it is not set, there is no need to add it, as the default value of query_cache_type in MySQL 5.7 is 0. An example my.cnf:
[mysqld]
max_allowed_packet=128M
character-set-server=utf8mb4
sql_mode = "NO_ENGINE_SUBSTITUTION"
query_cache_type = 0
[mysql]
default-character-set=utf8mb4
[client]
default-character-set=utf8mb4
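To confirm the setting took effect, the current value can be checked with the mysql client (supply your own host and credentials; with query_cache_type = 0 the value should read OFF):

```shell
mysql -h <database host> -u <user> -p -e "SHOW VARIABLES LIKE 'query_cache_type';"
```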
Increase the file size limit for issue attachments
You can attach files up to 50MB to an issue. If you have any problems, try adjusting the following settings:
Increase the request packet size limit
Set MAX_DISK_BUFFER in the advanced configuration file to 55MB or higher. Then restart with the Start command.
services:
  backlog-api:
    environment:
      - MAX_DISK_BUFFER=55MB # Default 55MB
  backlog-web:
    environment:
      - MAX_DISK_BUFFER=55MB
Modify or add MySQL settings
In the my.cnf file, set the value of max_allowed_packet to 128M or higher. This is required for both MySQL 8.0 and MySQL 5.7. After editing, restart MySQL.
[mysqld]
max_allowed_packet=128M
About the trademark
- MySQL and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
- Docker is a trademark or registered trademark of Docker, Inc. in the United States and/or other countries.
- Nginx is a registered trademark of Nginx Software Inc. in the United States and/or other countries.
- Active Directory is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries.
- Other company and product names in this document may be trademarks or registered trademarks of their respective owners.