Administration of a Teamscale Installation

This article gives a detailed reference of the configuration options of a Teamscale installation. It assumes that the basic installation of Teamscale is completed and Teamscale can be accessed via the web interface.

Configuring Teamscale

The Teamscale installation can be adjusted to the host environment with configuration files, environment variables or JVM arguments.

Most configuration files ship with the Teamscale installation in the config directory. Unless configured otherwise, this is the only location from which configuration files are loaded. In general, Teamscale loads configuration files from the following locations, in this order:

  • Dedicated Configuration Directory: If the environment variable TEAMSCALE_CONFIG is set, the folder which the variable points to is the primary location for loading configuration files. The default installation does not specify this variable.

  • Process Working Directory: A folder named config within the working directory of the Teamscale Java process. The process working directory is the directory Teamscale is started from and equals the installation directory if not configured differently. See Separate Working Data from Installation Files for more information.

  • Installation Directory: A folder named config within the installation directory. The installation directory is determined by the environment variable TEAMSCALE_HOME. This variable is set by startup scripts.

This yields the following resolution order: $TEAMSCALE_CONFIG/config.file, $PWD/config/config.file, $TEAMSCALE_HOME/config/config.file. As soon as a configuration file is found, the remaining directories are no longer scanned.
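
For example, a minimal sketch of using a dedicated configuration directory (the path is an example):

```shell
# Make /etc/teamscale the primary location for configuration files
export TEAMSCALE_CONFIG=/etc/teamscale
./teamscale.sh
```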

Primary Settings – teamscale.properties

The teamscale.properties file contains the most central configuration options for running Teamscale, e.g., the number of workers or where data is stored.

The file can be provided in any of the configuration directories. Loading a specific configuration file can be forced with the startup argument -c /path/to/config.properties. In addition, properties can be provided via the environment variable TS_PROPERTIES; these override the values from the configuration file.
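
For example (a sketch; the path is a placeholder, and the exact format TS_PROPERTIES expects for multiple properties is an assumption, so only a single property is shown):

```shell
# Force loading a specific configuration file
./teamscale.sh -c /opt/teamscale/config/teamscale.properties

# Override a single property from the environment
TS_PROPERTIES='engine.workers=4' ./teamscale.sh
```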

The table below shows the options available in the teamscale.properties file.

| Option | Default | Description |
|---|---|---|
| server.port | 8080 | HTTP server port |
| server.urlprefix | | Prefix of URLs |
| server.bind-hostname | | Bind address of the HTTP server |
| database.directory | storage | Database directory where all data is stored, relative to the process working directory |
| database.type | leveldb | This is an expert setting and should not be changed. Valid options are: leveldb, rocksdb and xodus |
| database.cache-size | 512 | The cache size used by the database, in MB |
| engine.workers | 2 | The number of concurrent analysis worker jobs. See Configuring Workers for details. |
| servicelog.loglevel | WARN | Log level for logging service calls, one of OFF, INFO, WARN, ERROR |
| servicelog.logip | false | Whether to log the IP address of service calls |
| servicelog.loguser | false | Whether to log the user of service calls |
| https.port | | Port to be used for HTTPS. If this option is not set, HTTPS is disabled. See this guide to enable HTTPS. |
| https.keystore-path | | The absolute path to the Java keystore containing the certificate and private key |
| https.keystore-password | | The password for the keystore |
| https.certificate-alias | | The alias of the certificate and private key in the keystore |
| custom-checks.dir | custom-checks | The directory where custom check JARs can be deployed. A relative path is resolved relative to the process working directory first and then, if not found there, relative to the installation directory. |
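
A minimal teamscale.properties using options from the table above (the values are illustrative, not recommendations):

```properties
# teamscale.properties -- example values only
server.port=8080
database.directory=storage
database.cache-size=1024
engine.workers=4
```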

RocksDB

To operate RocksDB on Windows, please be aware that you also need to install the Visual C++ Redistributable for Visual Studio 2015.

Configuring the Webserver

In the default configuration, Teamscale starts a web server on your machine, using port 8080, accessible via http://localhost:8080.

This web server is also reachable from other machines in your network. If you do not want this, remove the comment before the line server.bind-hostname=localhost in the file teamscale.properties.
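
The relevant part of teamscale.properties then looks as follows:

```properties
# Bind only to localhost so the web server is not reachable from other machines
server.bind-hostname=localhost
```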

Configuring Workers

When working with multiple projects, engine.workers can be used to parallelize analyses.

  • Increasing this value requires the JVM memory settings to be adapted as well.
  • Allocate about 2 GB per worker. For best performance, use as many workers as there are CPU cores (see the example after this list).
  • On larger instances, use one worker less, so that one core remains free for handling service requests during high-load situations.
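
For example, on an 8-core machine, a possible configuration is (a sizing sketch, not a guarantee):

```properties
# 8 CPU cores: 7 workers, leaving one core free for service requests.
# Plan about 2 GB of JVM heap per worker (see JVM Memory below).
engine.workers=7
```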

JVM Settings – jvm.properties

The config file jvm.properties contains environment variables that are loaded before the JVM starts.

File format

Please be aware that this file is not a regular shell or batch script. Multiline variables with \ escaping and environment variable expansion will not work.

Alternatively, environment variables can be specified directly at the system or service level, e.g. in teamscale-service.xml, docker-compose.yml, or teamscale.service.

JVM Memory

By default, the Teamscale start script launches a JVM with a maximum Java heap size of 4 GB. You can change this by adjusting JVM_MEMORY in jvm.properties. Alternatively, you can set the environment variable TEAMSCALE_MEMORY, which takes precedence over the value specified in jvm.properties.
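
For example (a sketch; the exact value syntax these variables accept is an assumption based on common JVM heap notation):

```properties
# jvm.properties -- raise the maximum Java heap size to 8 GB
JVM_MEMORY=8g
```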

Dealing with Memory Problems

If Teamscale runs into memory-related problems, please refer to this troubleshooting section.

JVM temporary directory

The JVM uses the temporary directory of the executing user to store temporary files, e.g. /tmp or C:\Users\<username>\AppData\Local\Temp. This can be changed by setting the environment variable TEAMSCALE_TEMP. Relative paths are supported, and the directory is created if it does not exist.

TIP

If you are running multiple Teamscale instances on the same server, it is recommended to specify separate temporary directories.
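
For example, when running two instances on the same server (the paths are examples):

```shell
# Give each instance its own temporary directory (created if missing)
export TEAMSCALE_TEMP=/var/tmp/teamscale-instance1
./teamscale.sh
```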

JVM Arguments

Additional flags (e.g., -Dmy.flag=value) that should be passed to the JVM can be specified using the JVM_EXTRA_ARGS variable. In addition, you can specify flags using the environment variables JAVA_OPTS, TEAMSCALE_OPTS and TEAMSCALE_VM_ARGS.

The JVM is always started with a predefined set of flags, e.g. -Djava.awt.headless=true. These flags can be overridden by specifying them again in, e.g., JVM_EXTRA_ARGS.

The order of arguments passed to the JVM is (see the example after this list):

  • Any default JVM arguments
  • JAVA_OPTS
  • Memory and temporary directory
  • TEAMSCALE_OPTS
  • TEAMSCALE_VM_ARGS
  • JVM_EXTRA_ARGS
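
Since the JVM generally uses the last occurrence of a repeated flag, a variable later in this order can override flags from an earlier one. A sketch (the flag values are examples):

```shell
# JVM_EXTRA_ARGS comes last and therefore overrides the predefined headless flag
export JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError"
export JVM_EXTRA_ARGS="-Djava.awt.headless=false"
./teamscale.sh
```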

License – teamscale.license

Teamscale needs a valid license to run. It automatically searches several locations for a valid license (in this order):

  1. The content of the environment variable TS_LICENSE. The following example shows how to copy the content of a license file into the environment variable when starting Teamscale with the shell script:

```shell
TS_LICENSE=$(cat ~/custom_folder/teamscale.license)
export TS_LICENSE
./teamscale.sh
```

  2. A file named teamscale.license in any of the configuration directories

  3. A file named teamscale.license in the home directory of the user running Teamscale

You need to ensure that a valid license exists in one of these locations before starting Teamscale.

Logging – log4j2.yaml

Teamscale writes a log file named logs/teamscale.log in the process working directory.

Teamscale's logging settings can be configured in the file log4j2.yaml in one of the configuration directories. This is a Log4j 2 configuration file in YAML format, which you can adjust according to the guidelines on the official documentation page. If you need to override the file location, you can set the Java system property log4j.configurationFile to point to the desired path. Alternatively, the environment variable TS_LOGGING_CONF may hold the content of a properties file as documented here.
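
For example, to point Log4j 2 at a custom configuration file (the path is an example):

```shell
export JVM_EXTRA_ARGS="-Dlog4j.configurationFile=/etc/teamscale/log4j2.yaml"
./teamscale.sh
```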

Legacy Log4j 1 configuration file

Prior to the Teamscale release version 4.8, the given properties had to use the naming required by Log4j 1. Starting with Teamscale 4.8, the Log4j 2 property names are used instead. Please note that the new configuration format is not backwards-compatible.

Teamscale will refuse to start if the old configuration file is found.

Encryption – teamscale.key

Teamscale encrypts all tables in the storage system that might contain sensitive information, such as passwords for your SVN server. The encryption algorithm used is AES-256. By default, Teamscale uses a hard-coded key. To improve security, you can provide your own key: write your key or passphrase into the file teamscale.key in one of the configuration directories. Teamscale then uses the content of this file to initialize the key that protects your data. The file should be protected using the usual file-system permissions.

If you provide your own key as described, Teamscale will use it to encrypt every backup. The benefit is that nobody without your key file can read the encrypted parts of the backup. On the flip side, you cannot import such a backup into an instance that does not know the secret key of the originating instance. To import into an instance that uses a different encryption key, you must provide the key as an alternative decryption key: place it into the config directory using any file name with the extension .key. When reading a backup, Teamscale will automatically find the correct key to use.
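
A sketch of both steps (the file names and the passphrase are examples):

```shell
# Provide a custom encryption key and restrict access to it
echo 'correct-horse-battery-staple' > config/teamscale.key
chmod 600 config/teamscale.key

# Allow this instance to decrypt backups created with another instance's key;
# any file name with the extension .key works
cp /secure/other-instance.key config/legacy-backup.key
```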

Global Server Options – admin-settings.json

The file admin-settings.json allows configuring all values that are available in the Admin > Settings view. This configuration file can be used to set these values non-interactively to simplify provisioning of a Teamscale installation. The file contains a single JSON object, where each entry corresponds to one option that is used instead of the value configured in the server. Options that are set in the configuration file are no longer shown in the settings UI. Additionally, the configuration file allows specifying an array of option names under the hidden key; these options are likewise not shown in the settings UI. This is especially useful for options that support multiple instances.

The example below hides the TFS and Crowd authentication options (and hence essentially disallows their configuration). Additionally, the base URL is fixed and an LDAP server is configured. Note that the configured LDAP server will not show up in the settings UI, but the UI will still allow configuring additional LDAP servers. To construct the content of the JSON file, you can use the download buttons in the settings UI.

```json
{
  "hidden": [
    "auth.tfs.server",
    "auth.crowd.server"
  ],
  "baseurl": {
    "baseUrl": "https://teamscale.acme.com"
  },
  "auth.ldap.server/MainLdap": {
    "hostname": "ldap.acme.eu",
    "port": 389,
    "useSSL": false,
    "baseDN": "dc=acme,dc=com",
    "groupsBaseDN": "",
    "groupAttribute": "cn",
    "bindDN": "cn=admin,dc=acme,dc=com",
    "bindPassword": "secret-password",
    "loginAttribute": "sAMAccountName",
    "memberUid": "member",
    "firstNameAttribute": "givenName",
    "lastNameAttribute": "sn",
    "emailAttribute": "mail",
    "updateSchedule": "0 0 * * *",
    "userServer": ""
  }
}
```

Stylesheet – custom.css

The Teamscale installation can be customized with a separate stylesheet by creating a file custom.css containing CSS rules in one of the configuration directories.

Separate Working Data from Installation Files

The process working directory is used to resolve configuration files and relative directories specified in configuration files, e.g., the directories for data storage, custom checks, or log files. If not explicitly configured, the process working directory of Teamscale is identical to the Teamscale installation directory.

For a simpler update process, you may prefer to separate the files provided by the Teamscale installation from manually edited configuration files and from data calculated during analysis. This way you can simply replace the whole installation directory on an update, which may reduce manual effort if Teamscale is installed as a Windows or Linux service.

How the process working directory is changed depends on how Teamscale is installed on the host environment:

  • Plain Docker: Specify --workdir or -w when starting the container (see the sketch after this list). The provided path should be mapped to the host or a volume.
  • Docker Compose: Specify the working_dir key for the Teamscale service in your docker-compose.yml. The provided path should be mapped to the host or a volume.
  • Windows service: Specify workingdirectory in teamscale-service.xml.
  • Linux systemd service: Specify WorkingDirectory in the teamscale.service unit file.
  • Stock startup script: When using teamscale.sh or teamscale.bat, simply cd to the directory you want to be the process working directory and start Teamscale by specifying the path to the startup script, e.g., /path/to/teamscale/installation/teamscale.sh.
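
For the plain Docker case, a sketch (the image name cqse/teamscale and all paths are assumptions; adjust them to your setup):

```shell
# Run Teamscale with a dedicated working directory backed by a host mount
docker run \
  --workdir /teamscale-workdir \
  -v /srv/teamscale:/teamscale-workdir \
  -p 8080:8080 \
  cqse/teamscale
```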

DANGER

Please be aware that changing the process working directory after the initial analysis may cause already calculated data to no longer be available in Teamscale, as the storage directory will most likely resolve to another location. You can, however, simply copy the existing storage directory to the new process working directory.

Usage Data Reporting (optional)

You can help us improve Teamscale by activating Usage Data Reporting in the Admin settings. This option regularly transfers information about the used features and statistics about errors to our servers. You can configure which information you are willing to share and also see a preview of the shared information. The preview dialog also contains a link to a web form that allows one-time usage data reporting by copying the displayed preview information there. Please note that automatic reporting needs an outbound HTTPS connection to our servers.

Usage Reporting

Teamscale will only report generic information, but never sensitive information such as code or user data.

Configuring Session Timeout

By default, a user that logs into Teamscale obtains a session that lasts 8 hours. This allows users to stay logged in as long as they are active every day - even in case of time zone changes. If you prefer to have a different session timeout, you can configure it with a JVM argument. E.g., to enforce a session timeout of 12 hours, add:

-Dcom.teamscale.session-timeout=12
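
One possible place to set this flag is the JVM_EXTRA_ARGS variable described above (a sketch):

```shell
export JVM_EXTRA_ARGS="-Dcom.teamscale.session-timeout=12"
./teamscale.sh
```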

Configuring CORS Settings

TIP

Please note that if Teamscale is running behind a reverse proxy (e.g. NGINX), these settings should be applied within the reverse proxy and not in Teamscale. More information about running Teamscale behind a reverse proxy can be found here.

Background: Cross-Origin Resource Sharing (CORS) is a mechanism in the HTML5 standard that provides a means for browser and server to agree on the resources that may be loaded from domains outside of the web page. JavaScript, for instance, may request access to a specific dynamically generated URL. Among others, the following three HTTP headers allow this fine-grained approach to access control:

  • Origin
  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials

The client browser sends the Origin header to indicate that the client would like to share resources with that (external) origin. The web server responds with the header Access-Control-Allow-Origin to indicate which domains may access the resources of the server, and it may further grant permissions for specific options. If the request and the permissions match, the browser releases the request for processing, e.g. by JavaScript.

You can configure the CORS policy applied by Teamscale using the following system properties:

| Name | Default Value | Description |
|---|---|---|
| com.teamscale.server.cors.allowed-origins | Empty | Comma-separated list of origins that are allowed to access the resources. Note that using wildcards can result in security problems for requests identifying hosts that do not exist. If an allowed origin contains one or more * characters (for example http://*.domain.com), then * characters are converted to .*, . characters are escaped to \. and the resulting allowed origin is interpreted as a regular expression. Allowed origins can therefore be more complex expressions such as https?://*.domain.[a-z]{3} that match http or https, multiple subdomains and any 3-letter top-level domain (.com, .net, .org, etc.). |
| com.teamscale.server.cors.allowed-headers | X-Requested-With,Content-Type,Accept,Origin | Comma-separated list of HTTP headers that are allowed to be specified when accessing the resources. If the value is a single *, any headers will be accepted. |
| com.teamscale.server.cors.allowed-methods | GET,POST,HEAD | Comma-separated list of HTTP methods that are allowed to be used when accessing the resources. |
| com.teamscale.server.cors.allow-credentials | true | A boolean indicating if the resource allows requests with credentials. |

As the default value for com.teamscale.server.cors.allowed-origins is empty, Teamscale does not allow any cross-origin requests and is thus secure by default. If you intend to include Teamscale in other websites (e.g. Jira dashboards, the Azure DevOps extension), you have to configure the settings accordingly. Usually, you just have to adapt the allowed origins, e.g.:

  • Azure DevOps: -Dcom.teamscale.server.cors.allowed-origins=https://dev.azure.com,https://<your-domain>.visualstudio.com
  • Jira Cloud: -Dcom.teamscale.server.cors.allowed-origins=https://<your-domain>.atlassian.net

Performance Considerations

If your Teamscale instance processes only a moderate amount of code, performance will not be an issue. However, the more projects and users you add to an instance, the more you will have to think about performance. While obviously more CPU cores and RAM will help in most cases, often the main bottleneck is I/O performance.

CPUs and RAM

More CPU cores mean that you can use more workers, which allows Teamscale to process more analysis steps in parallel. However, there is a limit to how much a single project can be parallelized, so if you have only a few huge projects, more workers will not necessarily help. You should never configure more workers than you have CPU cores. In case of many users or overall slow response times of the web UI, you should even configure fewer workers than CPU cores, to keep some spare resources for the service layer.

You should plan with a minimum of 2 GB RAM per worker. If you have few workers, add 2 GB RAM for the scheduler and the service layer; for larger instances, this extra RAM does not really matter. For complex code bases (e.g. ABAP with taint analysis, huge amounts of cloning, etc.) you should add significantly more RAM per worker; for taint analysis in ABAP, we often use 5 GB per worker. Also remember that you cannot use all the RAM of the machine for the Java VM, as the database needs some RAM for caching (outside the JVM) and the operating system needs RAM as well. As Teamscale can use RAM for caching intermediate results at many layers and stages, giving more RAM to the Java VM (in jvm.properties) will often improve performance.

Disk Performance

Teamscale is very I/O-sensitive, as it processes a lot of data. The most crucial factor is the number of random-access read/write operations the disk can perform per second. We strongly recommend using a local SSD if possible. Never use a network drive, as its performance is usually poor.

Measuring I/O Performance

The number we use for comparing I/O performance is the number of I/O operations per second (IOPS). There are different tools for measuring this number, and the exact values are only comparable between measurements with the same tool, as they depend on implementation details. We use the tool fio for measuring disk performance, which is available for Linux and Windows. To determine the IOPS for your disk, run the command below matching your operating system.

Linux:

```shell
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
```

Windows:

```shell
fio --randrepeat=1 --ioengine=windowsaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
```

The output will contain lines such as the following, which contain the IOPS for read and write operations (the first number in each line):

```
  read: IOPS=169k, BW=659MiB/s (691MB/s)(3070MiB/4658msec)
  write: IOPS=56.4k, BW=220MiB/s (231MB/s)(1026MiB/4658msec); 0 zone resets
```

To compare your numbers, use the following table:

| Server/Disk | Read IOPS | Write IOPS | Assessment |
|---|---|---|---|
| Old magnetic disk | 175 | 60 | Too slow for Teamscale |
| AWS instance (m5.x2large with 500 GB GPIO EBS volume/SSD) | 2256 | 752 | Too slow for Teamscale |
| Customer cloud server (backed by network drive) | 6950 | 2300 | Too slow for Teamscale |
| AWS instance (m5d.x2large with 1000 GB provisioned-IOPS SSD, 50,000 IOPS configured) | 14.2k | 4.7k | OK for medium-sized instances |
| Customer server with fast local SSD | 62k | 21k | OK for larger instances |
| Local SSD (laptop from 2018) | 99.4k | 33.2k | OK for larger instances |
| GCP instance with local SSD | 105k | 35.2k | OK for larger instances |
| AWS instance (m5d.x2large using local NVMe SSD) | 143k | 47.7k | OK for very large instances |
| Local SSD (laptop from 2021) | 146k | 48.9k | OK for very large instances |

Database Cache

The parameter database.cache-size in the teamscale.properties file controls the amount of memory used for database caching. The default value of 512 MB works well for small and medium instances. If you encounter performance issues, you should experiment with larger cache sizes; for huge instances, we use database cache sizes of 10 GB and more. Keep in mind that the database cache is separate from the memory allocated to the Java VM, so make sure that both numbers (plus some RAM for the operating system) fit well into the overall amount of RAM of the server.
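
For example (the value is an example, not a recommendation; see the warning below):

```properties
# Increase the database cache to 2 GB (the value is in MB)
database.cache-size=2048
```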

Too much database cache

Giving too much RAM to the database cache can even lead to reduced performance and stability of the instance in rare cases. Please make sure to test new settings in a non-productive environment first and be prepared to switch back to the original settings.

Database Sharding

Database sharding describes the process of using multiple databases for one Teamscale instance. Note that this is an advanced topic and only needed for very large instances. The database layers used by Teamscale scale well up to an on-disk size of about 1 TB. Beyond this point, we often observe the database slowing down significantly. Additionally, the locking mechanisms in the database may cause delays and reduced parallelism when many analyses (workers) attempt to access the database at the same time.

To resolve this situation, you can configure Teamscale to distribute its data across multiple databases. Ideally, these databases would be split across multiple disks, but we see significant performance improvements even when the databases reside on the same disk. Sharding is activated with the parameter database.sharding in teamscale.properties. To activate randomized sharding, where projects are mapped to shards randomly, use the value randomized:N, where N is the number of shards. The databases for the shards are created in subdirectories of the storage directory given by the database.directory parameter.
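
For example:

```properties
# Distribute projects randomly across 4 shards
database.sharding=randomized:4
```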

While randomized sharding is helpful for initial tests, you usually want to control the mapping between projects and shards, e.g. to move large or heavily utilized projects to separate shards. For this, use pattern-based sharding, which is configured as in this example:

```properties
database.sharding=pattern:GLOB:.*global.*:CLUSTER1:foo_.*:CLUSTER2:.*
```

The first part (pattern:) is fixed and denotes pattern-based sharding. It is followed by pairs of shard names and mapping patterns. Each project (and the global data) is mapped to the first shard whose regular expression pattern matches any of the public IDs of the project. To make sure that all projects can be mapped, the last shard should catch all projects by using .* as its pattern.

Database cache

Note that the database cache is applied per shard, so the amount of RAM used for caching is multiplied by the number of shards. Keep this in mind when allocating RAM for Teamscale and the database caches.

Redeployment needed

Changing the sharding configuration invalidates the storage directory, so you have to start with an empty storage directory from scratch (or restore from a backup). As you cannot change this configuration on the fly, make sure to test the sharding settings thoroughly on a non-production instance.

Too many shards

We have seen cases where too many shards led to memory exhaustion and hence crashes of the instance and even the server.

Storage String Abbreviation Caching

Teamscale uses a global internal string abbreviator to compress keys and values of specific large database tables. Lookup results from the global abbreviation table are kept in a cache to improve lookup performance. The cache size in MB can be configured using a JVM argument:

-Dcom.teamscale.persistence.string-abbreviation-cache-mb=XYZ

The cache defaults to a size of 200 MB.

The Teamscale UI provides insight into the cache performance and utilization under System > System Information in the Internal String Abbreviator Cache Statistics section.