Backlog Enterprise Operations Guide
Version 2.0.0
Directory structure
Backlog is installed with the following directory layout:
<Installation directory>
├── backlog-job-worker
│   └── hikaricp.properties # Maximum number of connections for database access
├── data # Storage location for Backlog data
├── logs # Storage location for log files
├── .env # Configuration values (environment variables)
└── docker-compose.yml # Container definitions
Starting and stopping Backlog from the command line
Stop
Stop the running Backlog services.
cd <Installation directory>
docker compose down
Start
Start Backlog.
cd <Installation directory>
docker compose up -d
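To confirm that the services are up after starting, you can list the container status with the standard Docker Compose command:
cd <Installation directory>
docker compose ps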
Services
Service | Description |
---|---|
backlog-api | Backlog API |
backlog-davsvn | Backlog File and Subversion features |
backlog-job-worker | Internal data processing |
backlog-solr | Backlog full-text search (Solr) |
backlog-web | Backlog web service |
cron | Internal data processing |
database-migration | Database version control |
elasticsearch | Backlog full-text search (Elasticsearch) |
elasticsearch-init | Initial setting for full-text search |
fluentd | Collect Backlog logs |
git-backlog-worker | Internal data processing |
git-http | Backlog Git HTTP service |
git-rpc | Internal data processing |
git-ssh | Backlog Git SSH service |
git-webhook-worker | Internal data processing |
kanban-backend | Backlog kanban board |
kanban-notification | Internal data processing |
memcached | Various cache storage locations |
nginx | Reverse proxy |
redis | Various cache storage locations |
Port numbers
Services are published on the following port numbers. If you use iptables or other firewall software, allow external access to these ports (an illustrative iptables example follows the table).
Service | Port number |
---|---|
HTTPS | 443 |
Git SSH | 8972 |
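As an illustration only (assuming iptables is the firewall in use; adapt the rules to your own tooling and policy), permitting inbound access to these ports might look like:
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS
iptables -A INPUT -p tcp --dport 8972 -j ACCEPT  # Git SSH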
File data storage directory
Backlog file data is stored in the following directories:
data
├── attachment
│ └── pull_request
├── davsvn
│ ├── share
│ └── svn
├── elasticsearch
├── git
├── image
└── solr
    ├── issue
    ├── pull_request
    ├── shared_file
    └── wiki
Directory path | Description |
---|---|
data/attachment/pull_request | Files attached to pull requests (Git feature) |
data/davsvn/share | Files managed by the File feature |
data/davsvn/svn | Repositories for the Subversion feature |
data/elasticsearch | Search index (Elasticsearch) |
data/git | Repositories for the Git feature |
data/image | Image files |
data/solr | Search index (Solr) |
Log file storage directory
Logs for each service are stored in <Installation directory>/logs.
logs
├── ${tag[1]} # Directory for temporarily stored logs
│   └── ${tag[1]}
├── backlog-api
├── backlog-davsvn
├── backlog-davsvn-mntlog
│ ├── dav-job-worker
│ ├── httpd-davsvn
│ ├── svn-hook
│ └── svnserve
├── backlog-job-worker
├── backlog-solr
├── backlog-web
├── database-migration
├── elasticsearch
├── elasticsearch-init
├── git-http
├── git-rpc
├── git-rpc-mntlog
├── git-ssh
├── kanban-backend
├── kanban-notification
├── memcached
├── nginx
└── redis
Parameters for .env
Parameter name | Initial value | Description | Input from configuration tool |
---|---|---|---|
BACKLOG_DB_HOST | | Host of the database to connect to | ✔︎ |
BACKLOG_DB_PORT | | Port of the database to connect to | ✔︎ |
BACKLOG_DB_NAME | | Schema name of the database to connect to | ✔︎ |
BACKLOG_DB_USER | | User name for the database connection | ✔︎ |
BACKLOG_DB_PASSWORD | | Password for the database connection | ✔︎ |
BACKLOG_SMTP_HOST | | Host of the SMTP server | ✔︎ |
BACKLOG_TIMEZONE | | Time zone | ✔︎ (first time only) |
BACKLOG_MAIL_NOTIFICATIONS_ADDRESS_FORMAT | | Sender email address for email sent from Backlog. If not specified, the sender address is that of the user who registered the issue, etc.; note that such email may be treated as spoofed or unsolicited. | |
BACKLOG_WEB_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter | |
BACKLOG_API_PLAY_SESSION_SECRET_KEY | Automatically generated by the configuration tool | Never change this parameter | |
FIXED_IP_ADDRESS_PREFIX | 10.254.249 | Fixed IP address prefix used by Docker Compose networking | |
LOG_REMAIN_DAYS | 7 | Log retention period (days) | |
LDAPS_USING | false | Turns the LDAPS connection on/off | |
GIT_SSH_HOST_PRIVATE_KEY_ENC | Automatically generated by the configuration tool | Never change this parameter | |
KANBAN_OAUTH2_CLIENT_ID | Automatically generated by the configuration tool | Never change this parameter | |
BACKLOG_DATA_DIRECTORY | ./data | Directory where Backlog data is stored | |
BACKLOG_LOG_DIRECTORY | ./logs | Directory where logs are stored | |
BACKLOG_CERT_DIRECTORY | | Directory where SSL certificates and private keys are stored | ✔︎ (first time only) |
BACKLOG_WEB_JAVA_OPTS | '-Xmx2048M -Xms512M -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m' | JVM startup options for backlog-web | |
BACKLOG_API_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m' | JVM startup options for backlog-api | |
BACKLOG_JOBWORKER_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m' | JVM startup options for backlog-job-worker | |
BACKLOG_KANBAN_JAVA_OPTS | '-Xmx1024M -Xms512M -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m' | JVM startup options for kanban-backend | |
BACKLOG_WEB_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-web | |
BACKLOG_API_DB_MAXIMUM_POOL_SIZE | 10 | Maximum number of database connections for backlog-api | |
BACKLOG_DAVSVN_START_SERVERS | 5 | Configuration of the HTTP server for backlog-davsvn | |
BACKLOG_DAVSVN_MIN_SPARE_SERVERS | 5 | Configuration of the HTTP server for backlog-davsvn | |
BACKLOG_DAVSVN_MAX_SPARE_SERVERS | 10 | Configuration of the HTTP server for backlog-davsvn | |
BACKLOG_DAVSVN_MAX_REQUEST_WORKERS | 256 | Configuration of the HTTP server for backlog-davsvn | |
BACKLOG_DAVSVN_MAX_CONNECTIONS_PER_CHILD | 0 | Configuration of the HTTP server for backlog-davsvn | |
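For example, an .env tuned for a larger backlog-web heap, a bigger connection pool, and longer log retention might contain lines like the following (the values are illustrative, not recommendations):
BACKLOG_WEB_JAVA_OPTS='-Xmx4096M -Xms1024M -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m'
BACKLOG_WEB_DB_MAXIMUM_POOL_SIZE=20
LOG_REMAIN_DAYS=14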
Backing up data
The data handled by Backlog is stored in the directories and files described above (in particular the data directory and the .env file); back these up regularly, together with the database that Backlog connects to.
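As a minimal sketch (assuming the default ./data location, that Backlog is stopped during the copy so the files are consistent, and that the /backup destination is an arbitrary example), a file-level backup could look like this; the database referenced by the BACKLOG_DB_* parameters should be backed up separately with your database's own tools:
cd <Installation directory>
docker compose down
tar czf /backup/backlog-$(date +%F).tar.gz data .env docker-compose.yml
docker compose up -d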
Creating an advanced configuration file
Create docker-compose.override.yml in the same directory as docker-compose.yml. In docker-compose.override.yml, list the services you want to change and their configuration values. After creating the file, restart Backlog with the Start command.
For example:
version: "3"
services:
  backlog-web:
    environment:
      ...
  backlog-api:
    environment:
      ...
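As an illustration of how the fragments shown in later sections slot in under services:, an override that raises the request buffer limits (values illustrative) could look like this:
version: "3"
services:
  backlog-api:
    environment:
      - MAX_MEMORY_BUFFER=25MB
      - MAX_DISK_BUFFER=25MB
  backlog-web:
    environment:
      - MAX_MEMORY_BUFFER=25MB
      - MAX_DISK_BUFFER=25MB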
Integrating with Active Directory
Change the port number of LDAPS
For an LDAPS connection, set the LDAPS port number in the advanced configuration file you created. After editing the file, restart with the Start command.
backlog-api:
  environment:
    - LDAPS_PORT=8636 # Default 636
backlog-web:
  environment:
    - LDAPS_PORT=8636 # Default 636
Use Active Directory Certificate Services
If you use an LDAPS connection with Active Directory Certificate Services, place the certificate issued by Active Directory Certificate Services in a directory of your choice. Then add the following to the advanced configuration file you created and restart with the Start command.
backlog-api:
  volumes:
    - /path/to/cert/dir:/mnt/certs # Replace `/path/to/cert/dir` with the directory where you placed the certificate
  command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
backlog-web:
  volumes:
    - /path/to/cert/dir:/mnt/certs # Replace `/path/to/cert/dir` with the directory where you placed the certificate
  command: -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
Create users with Active Directory
Create Backlog users with information from Active Directory. Set up zone definitions and SRV resource records for Active Directory before operation. Active Directory is supported on Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. Detailed instructions are provided separately.
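As a quick sanity check (the domain example.local is hypothetical; replace it with your own Active Directory domain), you can verify from the Backlog host that the LDAP SRV record resolves:
nslookup -type=SRV _ldap._tcp.example.local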
Regenerate the search index
If keyword search results are not displayed correctly, run the command to regenerate the search index. Note that regeneration takes longer when there is a large amount of index data.
Problem with the issue search (issue list)
Use the following:
docker compose up elasticsearch-tool
Problem with global navigation keyword searches
Use the following:
docker compose up solr-tool
Change the speed of search index re-generation
Change the limits for indexing jobs in the advanced configuration file you created. After editing, restart with the Start command.
backlog-job-worker:
  environment:
    - BACKLOG_WORKER_JOB_INDEXING_ISSUE_JOB_PERMITS=16 # Default 6
Change the maximum request buffer size
If, for example, a wiki page has too many characters and cannot be saved, increase the request buffer size limits in the advanced configuration file you created. After editing, restart with the Start command.
backlog-api:
  environment:
    - MAX_MEMORY_BUFFER=25MB # Default 20MB
    - MAX_DISK_BUFFER=25MB # Default 20MB
backlog-web:
  environment:
    - MAX_MEMORY_BUFFER=25MB
    - MAX_DISK_BUFFER=25MB
Change the maximum number of simultaneous Nginx connections
Copy nginx.conf locally with the following command:
docker compose cp nginx:/etc/nginx/nginx.conf .
Edit nginx.conf, then add the following to docker-compose.override.yml so the edited file is mounted into the container. After editing, restart with the Start command.
nginx:
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf
See the Nginx documentation for more information on configuring nginx.conf.
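Within nginx.conf, the directive most directly tied to the number of simultaneous connections is typically worker_connections in the events block; a sketch of the edit (the value is illustrative):
events {
    worker_connections 2048; # commonly 1024 in the default nginx.conf
}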
About the trademark
- MySQL and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
- Docker is a trademark or registered trademark of Docker, Inc. in the United States and/or other countries.
- Nginx is a registered trademark of Nginx Software Inc. in the United States and/or other countries.
- Active Directory is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries.
- Other company and product names in this document may be trademarks or registered trademarks of their respective owners.