# Administration of a Teamscale Installation
This article gives a detailed reference of the configuration options of a Teamscale installation. It assumes that the basic installation of Teamscale is completed and Teamscale can be accessed via the web interface.
- Configuring Teamscale
- Separate Working Data from Installation Files
- Usage Data Reporting (optional)
- Configuring Session Timeout
- Performance Considerations
# Configuring Teamscale
The Teamscale installation can be adjusted to the host environment with configuration files, environment variables or JVM arguments.
Most configuration files are already shipped with the Teamscale installation in the `config` folder within the installation directory.
If not explicitly specified, this is the only location for loading configuration files.
Otherwise, configuration files are always loaded in the following order by Teamscale:
1. **Dedicated Configuration Directory:** If the environment variable `TEAMSCALE_CONFIG` is set, the folder the variable points to is the primary location for loading configuration files. The default installation does not specify this variable.
2. **Process Working Directory:** A folder named `config` within the working directory of the Teamscale Java process. The process working directory is the directory Teamscale is started from and equals the installation directory if not configured differently. See Separate Working Data from Installation Files for more information.
3. **Installation Directory:** A folder named `config` within the installation directory. The installation directory is determined by the environment variable `TEAMSCALE_HOME`, which is set by the startup scripts.
As soon as a configuration file is found in one of these directories, the remaining directories are no longer scanned for that file.
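For example, a dedicated configuration directory can be selected via the environment variable when starting Teamscale with the shell script (the path below is only illustrative):

```sh
# Use a dedicated configuration directory instead of the shipped config folder
export TEAMSCALE_CONFIG=/etc/teamscale/config
./teamscale.sh
```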
# Primary Settings – teamscale.properties
The `teamscale.properties` file contains the most central configuration options for running Teamscale,
e.g., the number of workers or where data is stored.
The file can be provided in any of the configuration directories.
Loading of a specific config file can be forced with a startup argument.
In addition, properties can be provided via the environment variable `TS_PROPERTIES`; these overwrite the values in the configuration file.
The table below shows the options available in `teamscale.properties`:
|Option|Default|Description|
|---|---|---|
| |8080|HTTP server port|
| | |Prefix of URLs|
|`server.bind-hostname`| |Bind address of the HTTP server|
| | |Database directory where all data is stored, relative to the process working directory|
| | |This is an expert setting and should not be changed.|
|`database.cache-size`|512|The cache size used by the database in MB|
|`engine.workers`| |The number of concurrent analysis worker jobs. See Configuring Workers for details.|
| | |Log level for logging service calls|
| | |Whether to log the user of service calls|
| | |Port to be used for HTTPS. If this option is not set, HTTPS is disabled. See this guide to enable HTTPS.|
| | |The absolute path to the Java keystore containing the certificate and private key|
| | |The password for the keystore|
| | |The alias of the certificate and private key in the keystore|
| | |The directory where custom check JARs can be deployed. A relative path is resolved relative to the process working directory first and then, if not found there, relative to the installation directory.|
To operate RocksDB on a Windows environment, please be aware that you also need to install the Visual C++ Redistributable for Visual Studio 2015.
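The following is a minimal sketch of a `teamscale.properties` file using only options referenced elsewhere in this article; the values are purely illustrative and not sizing recommendations:

```properties
# Only accept connections from the local machine (see Configuring the Webserver)
server.bind-hostname=localhost
# Number of concurrent analysis workers (see Configuring Workers)
engine.workers=4
# Database cache size in MB (see Database Cache)
database.cache-size=1024
```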
# Configuring the Webserver
In the default configuration, Teamscale starts a web server on your machine, using port 8080, so it is accessible via `http://localhost:8080`.
This web server will also be available from other machines in your network. If you do not want this, remove the comment before the line `server.bind-hostname=localhost` in the `teamscale.properties` file.
# Configuring Workers
When working with multiple projects, `engine.workers` can be used to parallelize analyses.
- Increasing this value requires the JVM memory settings to be adapted as well.
- Allocate about 2 GB per worker. For best performance, use as many workers as CPU cores are available (see the sketch below).
- On larger instances, use one less worker so one core remains free for handling service requests during high-load situations.
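As a sketch, on a hypothetical 8-core machine you could leave one core free for service requests and plan the JVM heap accordingly:

```properties
# 8 CPU cores available: use 7 workers and plan roughly 7 x 2 GB = 14 GB of JVM heap
engine.workers=7
```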
# JVM Settings – jvm.properties
The config file `jvm.properties` contains variables that are loaded before the JVM starts.
Please be aware that this file is not a regular shell or batch script:
multi-line variables with `\` escaping and environment variable expansion will not work.
# JVM Memory
By default, the Teamscale start script will launch a JVM with a maximum Java heap size of 2,500 MB.
You can change this by adjusting the memory settings in `jvm.properties`.
Alternatively, you can set the environment variable `TEAMSCALE_MEMORY`, which takes precedence over the value specified in `jvm.properties`.
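As a sketch, the heap size could be overridden at startup via the environment variable; note that the accepted value format is an assumption here and should be checked against the shipped `jvm.properties`:

```sh
# Give the Teamscale JVM 8 GB of heap (value format is an assumption)
export TEAMSCALE_MEMORY=8g
./teamscale.sh
```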
Dealing with Memory Problems
If Teamscale runs into memory-related problems, please refer to this troubleshooting section.
# JVM Arguments
Additional flags (e.g., `-Dmy.flag=value`) that should be passed to the JVM can be specified in `jvm.properties`.
In addition, you can specify flags using the regular environment variable `TEAMSCALE_VM_ARGS`, as in the example below.
The value of `TEAMSCALE_VM_ARGS` is passed to the JVM before the values from `jvm.properties`.
The JVM is always started with a set of default flags.
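For example, the flag mentioned above could be passed via the environment variable like this:

```sh
# Pass an additional JVM flag to Teamscale at startup
export TEAMSCALE_VM_ARGS="-Dmy.flag=value"
./teamscale.sh
```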
# License – teamscale.license
Teamscale needs a valid license to run. Teamscale automatically searches several locations for a valid Teamscale license (in this order):
- The content of the environment variable `TS_LICENSE`. The following example shows how to copy the content of a license file into the environment variable when starting Teamscale with the shell script:

  ```sh
  TS_LICENSE=$(cat ~/custom_folder/teamscale.license)
  export TS_LICENSE
  ./teamscale.sh
  ```

- A file named `teamscale.license` in any of the configuration directories
- A file named `teamscale.license` in the home directory of the user running Teamscale
You need to ensure that a valid license exists in one of these locations before starting Teamscale.
# Logging – log4j2.yaml
Teamscale writes a log file named `logs/teamscale.log` in the process working directory.
Teamscale's logging settings can be adjusted in the file `log4j2.yaml` in one of the configuration directories.
This is a Log4j 2 configuration file in YAML format, which you can adjust according to the guidelines available on the official documentation page.
If you need to override the file location for some reason, you can set the Java system property `log4j.configurationFile` to point to the desired path.
Alternatively, the environment variable `TS_LOGGING_CONF` may hold the content of a properties file as documented here.
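As a sketch, the system property mentioned above could be passed via `TEAMSCALE_VM_ARGS`; the path is only illustrative:

```sh
# Load the Log4j 2 configuration from a non-standard location
export TEAMSCALE_VM_ARGS="-Dlog4j.configurationFile=/etc/teamscale/log4j2.yaml"
./teamscale.sh
```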
Legacy Log4j 1 configuration file
Prior to the Teamscale release version 4.8, the given properties had to use the naming required by Log4j 1. Starting with Teamscale 4.8, the Log4j 2 property names are used instead. Please note that the new configuration format is not backwards-compatible.
Teamscale will refuse to start if the old configuration file is found.
# Encryption – teamscale.key
Teamscale encrypts all tables in the storage system that might contain sensitive information, such as passwords to your SVN server. The encryption algorithm used is AES-256. By default, Teamscale uses a hard-coded default key. To further improve security, you can provide your own key. For this, write your key or passphrase into a file named `teamscale.key` in one of the configuration directories. Teamscale then uses the content of this file for initializing the key that is used to protect your data. The file should be protected using the usual file-system permissions.
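For example, on Linux the key file could be created and protected like this (the passphrase and path are only placeholders):

```sh
# Store a custom encryption passphrase and restrict access to the file
echo "replace-with-your-own-passphrase" > config/teamscale.key
chmod 600 config/teamscale.key
```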
When creating backups, you can select to use this local key instead of Teamscale's default key for encrypting the backup. The benefit of this method is that nobody without your key file can read the encrypted parts of the backup. On the flip side, you cannot import this backup into an instance that does not know the secret key of the originating instance. To import into an instance that uses a different encryption key, you either have to use a backup that uses Teamscale's key instead of the local key, or you must provide the key as an alternative decryption key. For this, place the key into the `config` directory using any file name with the extension `.key`. When reading a backup, Teamscale will automatically find the correct key to use.
# Stylesheet – custom.css
The Teamscale installation can be customized with a separate stylesheet by creating a file `custom.css` containing CSS rules in one of the configuration directories.
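A minimal sketch of such a stylesheet; the rule shown is only an illustration, and actual selectors depend on the Teamscale UI:

```css
/* Example: change the base font used by the web interface */
body {
  font-family: "Segoe UI", sans-serif;
}
```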
# Separate Working Data from Installation Files
The process working directory is used to resolve configuration files and relative directories specified in configuration files, e.g., the directory for data storage, custom checks, or log files. If not explicitly configured, the process working directory of Teamscale is identical to the Teamscale installation directory.
For a simpler update process you may prefer to separate the files shipped with the Teamscale installation from manually edited configuration files and the data calculated during analysis. This way you can simply replace the whole installation directory on an update, which may reduce manual effort if Teamscale is installed as a Windows or Linux service.
Changing the process working directory depends on how Teamscale is installed on the host environment:
- Plain Docker: Specify `-w` when starting the container. The provided path should be mapped to the host or a volume.
- Docker Compose: Specify the `working_dir` key for the Teamscale service in your `docker-compose.yml` (see the sketch below). The provided path should be mapped to the host or a volume.
- Windows service: Specify the working directory in the service configuration.
- Linux systemd service: Specify the working directory (e.g., via `WorkingDirectory=`) in the service unit file.
- Stock startup script: `cd` to the directory you want to be the process working directory and start Teamscale by specifying the full path to the startup script.
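A sketch of the Docker Compose variant, assuming the service is named `teamscale`; the paths are only placeholders:

```yaml
services:
  teamscale:
    # Process working directory inside the container
    working_dir: /var/teamscale/working-dir
    volumes:
      # Map the working directory to the host so data survives container replacements
      - ./teamscale-working-dir:/var/teamscale/working-dir
```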
Please be aware that changing the process working directory after initial analysis may cause already calculated data to no longer be available in Teamscale as the storage directory will most likely be resolved to another location. You can, however, simply copy the existing storage directory to the new process working directory location.
# Usage Data Reporting (optional)
You can help us improve Teamscale by activating Usage Data Reporting in the Admin settings. This option will regularly transfer information about the used features and statistics about errors to our servers. You can configure which information you are willing to share and also see a preview of the shared information. The preview dialog also contains a link to a web form that allows one-time usage data reporting by copying the displayed preview information there. Please note that the automatic reporting needs an outbound HTTPS connection to our servers.
Teamscale will only report generic information, but never sensitive information such as code or user data.
# Configuring Session Timeout
By default, a user that logs into Teamscale obtains a session that lasts 48 hours. This allows users to stay logged in as long as they are active every day, even in case of time zone changes. If you prefer a different session timeout (e.g., 8 hours), you can configure it with a JVM argument.
# Performance Considerations
If your Teamscale instance processes only a moderate amount of code, performance will not be an issue. However, the more projects and users you add to an instance, the more you will have to think about performance. While obviously more CPU cores and RAM will help in most cases, often the main bottleneck is I/O performance.
# CPUs and RAM
More CPU cores mean that you can use more workers, which allows Teamscale to process more analysis steps in parallel. However, there is a limit to the parallelization of a single project, so if you have only a few huge projects, more workers will not necessarily help. You should never configure more workers than you have CPU cores. In case of many users or overall slow response times of the web UI, you should even configure fewer workers than CPU cores to keep some spare resources for the service layer.
You should plan with a minimum of 2 GB RAM per worker.
If you have few workers, add 2 GB RAM for the scheduler and the service layer; for larger instances, this extra RAM does not really matter.
For complex code bases (e.g., ABAP with taint analysis, a huge amount of cloning, etc.) you should add significantly more RAM per worker; for taint analysis in ABAP, we often allocate 5 GB per worker.
Also remember that you cannot use all of the machine's RAM for the Java VM, as the database itself needs some RAM for caching (outside of the JVM) and the operating system needs RAM as well.
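As a rough, purely illustrative calculation: an instance with 6 workers would need about 6 × 2 GB = 12 GB of JVM heap, plus 2 GB for the scheduler and service layer, plus the database cache (512 MB to a few GB) and some RAM for the operating system, i.e., a machine with well over 16 GB of RAM.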
As Teamscale can use RAM for caching intermediate results at many layers and stages, giving more RAM to the Java VM (in `jvm.properties`) will often help to improve performance.
# Disk Performance
Teamscale is very I/O sensitive, as it processes a lot of data. The most crucial factor is the number of random-access read/write operations that the disk can perform per second. We strongly recommend using a local SSD if possible. Never use a network drive, as the performance will usually be bad.
# Measuring I/O Performance
The number we use for comparing I/O performance is the number of I/O operations per second (IOPS).
There are different tools for measuring this number and the exact values are only comparable between measurements with the same tool, as they depend on implementation details.
We use the tool `fio` for measuring disk performance, which is available for Linux and Windows.
To determine the IOPS for your disk, run the following command:

```sh
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
```

The output will contain lines such as the following, which contain the IOPS for read and write operations (the first number in each line):

```
read: IOPS=169k, BW=659MiB/s (691MB/s)(3070MiB/4658msec)
write: IOPS=56.4k, BW=220MiB/s (231MB/s)(1026MiB/4658msec); 0 zone resets
```
To compare your numbers, use the following table:
|Server/Disk|Read IOPS|Write IOPS|Assessment|
|---|---|---|---|
|Old magnetic disk|175|60|Too slow for Teamscale|
|AWS instance (m5.x2large with 500 GB GPIO EBS volume/SSD)|2256|752|Too slow for Teamscale|
|Customer cloud server (backed by network drive)|6950|2300|Too slow for Teamscale|
|AWS instance (m5d.x2large with 1000 GB provisioned-IOPS SSD, 50,000 IOPS configured)|14.2k|4.7k|OK for medium-sized instances|
|Customer server with fast local SSD|62k|21k|OK for larger instances|
|Local SSD (laptop from 2018)|99.4k|33.2k|OK for larger instances|
|GCP instance with local SSD|105k|35.2k|OK for larger instances|
|AWS instance (m5d.x2large using local NVMe SSD)|143k|47.7k|For very large instances|
|Local SSD (laptop from 2021)|146k|48.9k|For very large instances|
# Database Cache
The parameter `database.cache-size` in the `teamscale.properties` file controls the amount of memory used for database caching.
The default value of 512 MB works well for small and medium instances.
If you encounter performance issues, you should experiment with larger cache sizes; for huge instances, we use database cache sizes of 10 GB and more.
Keep in mind that the database cache is separate from the memory allocated for the Java VM, so make sure that both numbers (plus some RAM for the operating system) fit well into the overall amount of RAM of the server.
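For example, to double the default cache size, you could set the following in `teamscale.properties` (test such changes on a non-production instance first):

```properties
# Database cache size in MB (default: 512)
database.cache-size=1024
```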
Too much database cache
Giving too much RAM to the database cache can even lead to reduced performance and stability of the instance in rare cases. Please make sure to test new settings in a non-productive environment first and be prepared to switch back to the original settings.
# Database Sharding
Database sharding describes the process of using multiple databases for one Teamscale instance. Note that this is an advanced topic and only needed for very large instances. The database layers used by Teamscale scale well up to an on-disk size of about 1 TB. Beyond this point, we often observe the database to slow down significantly. Additionally, the locking mechanisms in the database may cause delays and reduced parallelism when many analyses (workers) attempt to access the database at the same time.
To resolve this situation, you can configure Teamscale to distribute its data across multiple databases.
Ideally, these databases would be split across multiple disks, but we see significant performance improvements even when the databases reside on the same disk.
Sharding is activated via a parameter in `teamscale.properties`.
To activate randomized sharding, where projects are mapped to shards in a random fashion, use the corresponding randomized value, replacing `N` with the number of shards.
The databases for the shards are created in subdirectories of the storage directory configured in `teamscale.properties`.
While randomized sharding is helpful for initial tests, you usually want to control the mapping between projects and shards, e.g., to move large or heavily utilized projects to separate shards. For this, pattern-based sharding is used.
The first part of its value (`pattern:`) is fixed and denotes pattern-based sharding.
This is followed by the names of the shards and their mapping patterns.
Each project (and the global data) is mapped to the first shard whose regular expression pattern matches any of the public IDs of the project.
To make sure that all projects can be mapped, the last shard should catch all projects by using `.*` as its pattern.
Note that the database cache is applied per shard, so the amount of RAM used for caching is multiplied by the number of shards. Keep this in mind for the RAM allocation of Teamscale and the database caches.
Changing the sharding configuration will invalidate the storage directory, so you have to start from scratch with an empty storage directory (or restore from a backup). As you cannot change this configuration on the fly, make sure to test the sharding settings thoroughly on a non-production instance.
Too many shards
We have seen cases where too many shards led to memory exhaustion and hence crashes of the instance and even the server.