2 Installation and Configuration #
2.1 Planning #
For testing your own OBS instance, or for small setups (for example, if you only want to package a few scripts into RPMs and create proper installation sources from them), the ready-to-use obs-server appliance images are the easiest way. You can download them from http://openbuildservice.org/download/.
However, to use the OBS for large Linux software development with many packages, projects and users, consider setting up a regular installation. Depending on the number of users, projects, and architectures, you can split up the back-end (called partitioning) and have separate hosts for the front-end and the database.
For most installations, it is OK to run everything except workers on one host, if it has sufficient resources.
For flexibility, and if you want some kind of high availability, it is recommended to use virtualization for the different components.
2.1.1 Resource Planning #
Normally, for a small or medium-sized installation, a setup with everything on one host (except workers) is sufficient. You should have a separate /srv volume for the back-end data; we recommend XFS as the file system.
For each scheduler architecture, add 4 GB RAM and one CPU core. For each build distribution, add at least 50 GB of disk space per architecture.
A medium instance with about 50 users can easily run on a machine with 16GB RAM, 4 cores and 1 TB storage. The storage, of course, depends on the size of your projects and how often you have new versions.
For bigger installations, you can use separate networks for back-end communication, workers and front-end.
As of May 2021, the reference installation on build.opensuse.org, which has a lot of users and distributions, runs on a partitioned setup with:
a MySQL cluster as database
api-server: 16GB RAM, 4 cores, 50GB disk
separate binary back-ends (scheduler, dispatcher, reposerver, publisher, warden)
source server: 11 GB RAM, 4 cores, 3 TB disk. The RAM is used mainly for caching.
main back-end: 62 GB RAM (oversized), 16TB disk
a lot of workers (see https://build.opensuse.org/monitor)
For build times and overall throughput, the number and performance of the available worker hosts matter more than the other components.
2.2 Simple Installation #
In this document, a "simple installation" means an OBS installation where all OBS services run on the same machine.
It is very important that you read the README.SETUP file that comes with your OBS version and follow the instructions there, because it may contain additional, version-specific information.
Before you start the installation of the OBS, make sure that your hosts have correct fully qualified hostnames, and that DNS is working and can resolve all names.
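A quick sanity check on each host: the first command must print the fully qualified name, and the second must resolve it to the host's address.
hostname -f
getent hosts $(hostname -f)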
2.2.1 Back-end Installation #
The back-end hosts all sources and built packages; it also schedules the build jobs. To install it, install the "obs-server" package. After installation, it is a good idea to check the service configuration in /usr/lib/obs/server/BSConfig.pm, although the defaults should be good enough for simple cases.
Read more about configuring the backend in Section 2.4, “Distributed Setup”.
The back-end consists of a number of systemd units (services):
Service | Description | Remark |
---|---|---|
obssrcserver.service | Source server | |
obsrepserver.service | Repository server | |
obsservice.service | Source services server | |
obsdodup.service | Repository metadata download | since 2.7 |
obsdeltastore.service | Delta storage | since 2.7 |
obsscheduler.service | Scheduler | |
obsdispatcher.service | Dispatcher | |
obsservicedispatch.service | Source service dispatcher | |
obspublisher.service | Publisher | |
obssigner.service | Signer proxy | |
obssignd.service | Signer | |
obswarden.service | Warden | |
obsclouduploadworker.service | Cloud upload worker | Only needed for cloud upload feature |
obsclouduploadserver.service | Cloud upload server | Only needed for cloud upload feature |
These services are controlled via systemctl. Basically, you can enable/disable a service to start when the system boots, and you can start/stop/restart it in a running system as well. For more information, see the systemctl man page. For example, to restart the repository server, do:
systemctl restart obsrepserver.service
When starting the various services, obssrcserver.service (the source server) must be started first, and obsrepserver.service (the repository server) second, followed by the remaining services in any order. When installing manually, you will need to first enable the services with
systemctl enable <name>
so they start automatically at boot. In this case, the start order will be enforced via the respective systemd unit files. Should you want to start the services manually, you need to ensure the correct ordering yourself, by starting the source server first and the repository server second, like so:
systemctl start obssrcserver.service
systemctl start obsrepserver.service
followed by the remaining services in any order.
These start-up commands launch services that are accessible from the outside. If the system is connected to an untrusted network, either block the ports with a firewall or do not run the commands at all.
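For example, with firewalld (an assumption; use whatever firewall your distribution provides), you could open the back-end ports only for the zone that holds your internal interfaces:
firewall-cmd --permanent --zone=internal --add-port=5152/tcp
firewall-cmd --permanent --zone=internal --add-port=5252/tcp
firewall-cmd --permanent --zone=internal --add-port=5352/tcp
firewall-cmd --reload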
2.2.1.1 Cloud Upload Setup #
In order to set up the Cloud Upload feature, you need to configure the tools required for each cloud provider. Currently, only the AWS Amazon Cloud (https://aws.amazon.com) and Microsoft Azure (https://portal.azure.com) are supported as providers.
Before you can start uploading images to Amazon Web Services (AWS) and/or Microsoft Azure, you have to:
Install the obs-cloud-uploader package
zypper in obs-cloud-uploader
Start the cloud upload services
systemctl start obsclouduploadworker.service
systemctl start obsclouduploadserver.service
Finally, register the cloud uploader service in /usr/lib/obs/server/BSConfig.pm, for example, by adding the following line:
our $clouduploadserver = "http://$hostname:5452";
Ensure that the system time of your cloud uploader instance is correct. AWS relies on the timestamps of the requests it receives; an incorrect system time will cause cloud uploads to fail.
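On a systemd-based system, one way to check and enable time synchronization:
timedatectl status
timedatectl set-ntp true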
2.2.1.1.1 AWS Amazon Cloud #
2.2.1.1.1.1 Authentication Workflow #
We are going to use the role-based authentication provided by Amazon to enable the OBS instance to upload images to other users' accounts.
The user obtains an external ID (automatically created and unique) and the OBS account ID, and uses both to create an Identity and Access Management (IAM) role. After creating the role, the user needs to provide the Amazon Resource Name (ARN) of the role to OBS. OBS uses this ARN to obtain temporary credentials for the user's account to upload the appliance; for that, an uploader account is necessary, which we need to configure (see Credentials Setup below). The ARN and the external ID are not considered secrets.
The whole workflow is described in the AWS documentation.
2.2.1.1.1.2 Credentials Setup #
For uploading images to AWS, OBS uses the AWS CLI tool. Before you can start uploading your images, you have to enter the AWS credentials of the uploader account into the /etc/obs/cloudupload/.aws/credentials configuration file. These credentials are then used by OBS to retrieve the temporary credentials from the ARN provided by users. More information about IAM role-based authorization can be found in the Amazon documentation.
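The file uses the standard AWS CLI INI format; the values below are AWS's documented example placeholders, not real keys:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY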
2.2.1.1.2 Microsoft Azure #
2.2.1.1.2.1 Authentication Workflow #
The authentication is done via Microsoft's Active Directory. The user has to create a new application and needs to provide these two credentials to OBS:
Application ID
The Application ID is a unique ID that represents an Active Directory Application.
Application Key
An Application Key can be generated for every application and serves as its password.
OBS communicates with the REST API of Microsoft Azure to authenticate and upload images.
2.2.1.1.2.2 Configuration #
The Application ID and the Application Key are stored encrypted in the database. For that, an SSL secret and public key pair must be generated and stored on the server where the obs-cloud-uploader package has been installed.
To generate that SSL key pair, execute the following commands:
cd /etc/obs/cloudupload
openssl genrsa -out secret.pem
openssl rsa -in secret.pem -out _pubkey -outform PEM -pubout
2.2.1.1.2.3 Credentials setup #
It is important that the public key is named _pubkey, that the secret key is named secret.pem, and that both are kept in /etc/obs/cloudupload.
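Since the secret key should not be world-readable, it is advisable to restrict its permissions (adjust the ownership to whatever user the uploader service runs as):
chmod 600 /etc/obs/cloudupload/secret.pem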
2.2.2 Front-end Installation #
For the front-end, you need to install the "obs-api" package and a MySQL server.
2.2.2.1 MySQL Setup #
Make sure that the MySQL server is started on every system reboot (use "systemctl enable mysql" to enable it permanently). You should run mysql_secure_installation and follow the instructions.
Create the empty production databases:
# mysql -u root -p
mysql> create database api_production;
mysql> quit
Use a separate MySQL user (for example, obs) for the OBS access:
# mysql -u root -p
mysql> create user 'obs'@'%' identified by 'TopSecretPassword';
mysql> create user 'obs'@'localhost' identified by 'TopSecretPassword';
mysql> GRANT all privileges ON api_production.* TO 'obs'@'%', 'obs'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit
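You can verify the grants by connecting to the new database as the new user:
mysql -u obs -p api_production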
Configure your MySQL user and password in the "production" section of the api config: /srv/www/obs/api/config/database.yml
Example:
# MySQL (default setup). Versions 4.1 and 5.0 are recommended.
#
# Get the fast C bindings:
#   gem install mysql
#   (on OS X: gem install mysql -- --include=/usr/local/lib)
# And be sure to use new-style password hashing:
#   http://dev.mysql.com/doc/refman/5.0/en/old-client.html
production:
  adapter: mysql2
  database: api_production
  username: obs
  password: TopSecretPassword
  encoding: utf8
  timeout: 15
  pool: 30
Now populate the database
cd /srv/www/obs/api/
sudo RAILS_ENV="production" rake db:setup
sudo RAILS_ENV="production" rake writeconfiguration
sudo chown -R wwwrun.www log tmp
Now you are done with the database setup.
2.2.2.2 Apache Setup #
Now we need to configure the Web server. By default, you can reach the web user interface and the API on port 443 via https. Repositories can be accessed via http on port 82 (once some packages are built). An overview page about your OBS instance can be found at http://localhost.
The obs-api package comes with an Apache vhost file, /etc/apache2/vhosts.d/obs.conf, which does not need to be modified if you stay with these defaults.
Install the required packages via
zypper in obs-api apache2 apache2-mod_xforward rubygem-passenger-apache2 memcached
Add the following Apache modules in /etc/sysconfig/apache2:
APACHE_MODULES="... passenger rewrite proxy proxy_http xforward headers socache_shmcb"
Enable SSL in /etc/sysconfig/apache2 via
APACHE_SERVER_FLAGS="SSL"
For production systems, you should obtain officially signed SSL certificates. For testing, follow these instructions to create a self-signed SSL certificate:
mkdir /srv/obs/certs
openssl genrsa -out /srv/obs/certs/server.key 4096
openssl req -new -key /srv/obs/certs/server.key \
    -out /srv/obs/certs/server.csr
openssl x509 -req -days 365 -in /srv/obs/certs/server.csr \
    -signkey /srv/obs/certs/server.key -out /srv/obs/certs/server.crt
cat /srv/obs/certs/server.key /srv/obs/certs/server.crt \
    > /srv/obs/certs/server.pem
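To double-check the subject and validity period of the generated certificate:
openssl x509 -in /srv/obs/certs/server.crt -noout -subject -dates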
At the time of this writing (2024), we consider a 4K RSA key a safe choice, but you might want to check the current standards by consulting (on both client and server side) the output of
man crypto-policies
and
man update-crypto-policies
To allow the usage of the https API in the Web UI code, you need to trust your certificate as well:
cp /srv/obs/certs/server.pem /etc/ssl/certs/
c_rehash /etc/ssl/certs/
2.2.2.3 API Configuration #
Check and edit /srv/www/obs/api/config/options.yml
If you change the hostnames/IPs of the API, you need to adjust frontend_host accordingly. If you want to use LDAP, you need to change the LDAP settings as well; see Section 5.8, “Managing Users and Groups” for details. You will find examples and more details in Section 3.1, “Configuration Files”.
It is strongly recommended to enable
use_xforward: true
here as well. This tells Rails to forward requests to the back-end for asynchronous processing; without this setting, the front-end blocks while the back-end handles each request.
Afterwards, you can start the OBS API and make it permanent via
systemctl enable apache2
systemctl start apache2
systemctl enable obs-api-support.target
systemctl start obs-api-support.target
systemctl enable memcached.service
systemctl start memcached.service
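To verify that the stack came up, you can query the API's /about route (use -k while you are still using the self-signed certificate):
curl -k https://localhost/about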
Now you have your own empty instance running, and you can proceed with some online configuration steps.
2.2.3 Online Configuration #
To customize the OBS instance you may need to configure some settings via the OBS API and Web user interface.
First, change the password of the Admin account: log in as user Admin in the Web UI with the default password "opensuse", then click the Admin link (top right of the page), where you can change the password.
After changing the Admin password, set up osc to use the Admin account for further changes. Here is an example:
osc -c ~/.obsadmin_osc.rc -A https://api.testobs.org
Follow the instructions on the terminal.
By default, the password is stored in clear text in this file, so you need to give it restrictive access rights: only read/write access for your own user should be allowed. osc can also store the password in other ways (in keyrings, for example); refer to the osc documentation for this.
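For example:
chmod 600 ~/.obsadmin_osc.rc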
Now you can check out the main configuration of the OBS:
osc -c ~/.obsadmin_osc.rc api /configuration >/tmp/obs.config
cat /tmp/obs.config
<configuration>
  <title>Open Build Service</title>
  <description>
    <p class="description">
      The <a href="http://openbuildservice.org">Open Build Service (OBS)</a>
      is an open and complete distribution development platform that provides
      a transparent infrastructure for development of Linux distributions,
      used by openSUSE, MeeGo and other distributions. Supporting also Fedora,
      Debian, Ubuntu, RedHat and other Linux distributions.
    </p>
    <p class="description">
      The OBS is developed under the umbrella of the
      <a href="http://www.opensuse.org">openSUSE project</a>. Please find
      further informations on the
      <a href="http://wiki.opensuse.org/openSUSE:Build_Service">openSUSE Project wiki pages</a>.
    </p>
    <p class="description">
      The Open Build Service developer team is greeting you. In case you use
      your OBS productive in your facility, please do us a favor and add yourself at
      <a href="http://wiki.opensuse.org/openSUSE:Build_Service_installations">this wiki page</a>.
      Have fun and fast build times!
    </p>
  </description>
  <name>private</name>
  <download_on_demand>on</download_on_demand>
  <enforce_project_keys>off</enforce_project_keys>
  <anonymous>on</anonymous>
  <registration>allow</registration>
  <default_access_disabled>off</default_access_disabled>
  <allow_user_to_create_home_project>on</allow_user_to_create_home_project>
  <disallow_group_creation>off</disallow_group_creation>
  <change_password>on</change_password>
  <hide_private_options>off</hide_private_options>
  <gravatar>on</gravatar>
  <cleanup_empty_projects>on</cleanup_empty_projects>
  <disable_publish_for_branches>on</disable_publish_for_branches>
  <admin_email>unconfigured@openbuildservice.org</admin_email>
  <unlisted_projects_filter>^home:.+</unlisted_projects_filter>
  <unlisted_projects_filter_description>home projects</unlisted_projects_filter_description>
  <schedulers>
    <arch>armv7l</arch>
    <arch>i586</arch>
    <arch>x86_64</arch>
  </schedulers>
</configuration>
unlisted_projects_filter accepts only a regular expression (see the RLIKE specification of MySQL/MariaDB for more information), and unlisted_projects_filter_description is part of the link shown in the project list for filtering.
You should edit this file according to your preferences, then send it back to the server:
osc -c ~/.obsadmin_osc.rc api /configuration -T /tmp/obs.config
If you want to use an interconnect to another OBS instance to reuse its build targets, you can set this up as Admin via the Web UI, or create a project with a remoteurl tag (see Section 3.4.2, “Project Metadata”):
<project name="openSUSE.org">
  <title>openSUSE.org Project</title>
  <description>
    This project refers to projects hosted on the Build Service [...]
    Use openSUSE.org:openSUSE:12.3 for example to build against the
    openSUSE:12.3 project as specified on the opensuse.org Build Service.
  </description>
  <remoteurl>https://api.opensuse.org/public</remoteurl>
</project>
You can create the project using a file with the above content with osc like this:
osc -c ~/.obsadmin_osc.rc meta prj openSUSE.org -F /tmp/openSUSE.org.meta
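To verify the result, read the project meta back from the server; it should match the file you sent:
osc -c ~/.obsadmin_osc.rc meta prj openSUSE.org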
You can also import binary distributions, see Section 5.2.2, “Importing Distributions” for this.
The OBS has a list of available distributions used for building. This list is displayed to users when they add repositories to their projects. It can be managed via the API path /distributions:
osc -c ~/.obsadmin_osc.rc api /distributions > /tmp/distributions.xml
Example distributions.xml file:
<distributions>
  <distribution vendor="SUSE" version="SLE-12-SP1" id="137">
    <name>SLE-12-SP1</name>
    <project>SUSE:SLE-12-SP1</project>
    <reponame>SLE-12-SP1</reponame>
    <repository>standard</repository>
    <link>http://www.suse.com/</link>
    <icon url="https://static.opensuse.org/distributions/logos/suse-SLE-12-8.png" width="8" height="8"/>
    <icon url="https://static.opensuse.org/distributions/logos/suse-SLE-12-16.png" width="16" height="16"/>
    <architecture>x86_64</architecture>
  </distribution>
</distributions>
You can add your own distributions here and update the list on the server:
osc -c ~/.obsadmin_osc.rc api /distributions -T /tmp/distributions.xml
2.3 Worker Farm #
To avoid burdening your OBS back-end daemons with the unpredictable load that package builds can produce (think of someone building a monstrous package like LibreOffice), you should not run OBS workers on the same host as the rest of the back-end daemons.
Your back-end needs to be configured to use the correct hostnames for the repository and source servers, and their ports need to be reachable by the workers. Also, the IP addresses of the workers need to be allowed to connect to the services (see the $ipaccess array in /usr/lib/obs/server/BSConfig.pm).
You can deploy workers quite simply using the worker appliance, or install a minimal system plus the obs-worker package on the hardware.
Edit the /etc/sysconfig/obs-server file; at least OBS_SRC_SERVER, OBS_REPO_SERVERS and OBS_WORKER_INSTANCES need to be set. More details are in Section 3.1, “Configuration Files”.
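A minimal sketch, reusing the testobs.org hostnames from the distributed example in Section 2.4; the values, especially the instance count, are assumptions to adapt to your setup:
OBS_SRC_SERVER="srcsrv.testobs.org:5352"
OBS_REPO_SERVERS="mainbackend.testobs.org:5252"
OBS_WORKER_INSTANCES="4"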
Then start the worker:
systemctl enable obsworker
systemctl start obsworker
2.4 Distributed Setup #
All OBS back-end daemons can also be started on individual machines in your network. The front-end Web server and the MySQL server can run on different machines as well. Especially for large scale OBS installations, this is the recommended setup.
A setup with partitioning is very similar to the simple setup; here we only mention the differences.
You need to make sure that the different machines can communicate via the network. It is strongly recommended to use a separate network for this, to isolate it from the public part.
On all back-end hosts you need to install the obs-server package. On the front-end host you need to install the obs-api package.
Only one source server instance can exist in a single OBS installation.
The binary back-end can be split at the project level; this is called partitioning.
On each partition, the following services need to be configured and running:
repserver
schedulers
dispatcher
warden
publisher
You do not need to share any directories at the file system level between the partitions.
Here is an example partitioning:
A main partition for everything not in the others (host mainbackend)
A home partition for all home projects of the users (host homebackend)
A release partition for released software projects (host releasebackend)
The configuration is done in the back-end config file /usr/lib/obs/server/BSConfig.pm. Most parts of the file can be shared between the back-ends.
Here are the important parts for the mainbackend host of our testobs.org installation:
[...]
my $hostname = Net::Domain::hostfqdn() || 'localhost';
# IP corresponding to hostname (only used for $ipaccess); fallback to
# localhost since inet_aton may fail to resolve at shutdown.
my $ip = quotemeta inet_ntoa(inet_aton($hostname) || inet_aton("localhost"));

my $frontend = 'api.testobs.org'; # FQDN of the Web UI/API server if it's not $hostname

# If defined, restrict access to the backend servers (bs_repserver, bs_srcserver, bs_service)
our $ipaccess = {
   '127\..*' => 'rw',       # only the localhost can write to the backend
   "^$ip" => 'rw',          # Permit IP of FQDN
   "10.20.1.100" => 'rw',   # Permit IP of srcsrv.testobs.org
   "10.20.1.101" => 'rw',   # Permit IP of mainbackend.testobs.org
   "10.20.1.102" => 'rw',   # Permit IP of homebackend.testobs.org
   "10.20.1.103" => 'rw',   # Permit IP of releasebackend.testobs.org
   '10.20.2.*' => 'worker', # build results can be delivered from any client in the network
};

# IP of the Web UI/API Server (only used for $ipaccess)
if ($frontend) {
  my $frontendip = quotemeta inet_ntoa(inet_aton($frontend) || inet_aton("localhost"));
  $ipaccess->{$frontendip} = 'rw'; # in dotted.quad format
}

# also change the SLP reg files in /etc/slp.reg.d/ when you touch hostname or port
our $srcserver = "http://srcsrv.testobs.org:5352";
our $reposerver = "http://mainbackend.testobs.org:5252";
our $serviceserver = "http://service.testobs.org:5152";

# Needed if you want to use the cloud upload feature
our $clouduploadserver = "http://$hostname:5452";

# our @reposervers = ("http://mainbackend.testobs.org:5252",
#                     "http://homebackend.testobs.org:5252",
#                     "http://releasebackend.testobs.org:5252");

# you can use different ports for worker connections
our $workersrcserver = "http://w-srcsrv.testobs.org:5353";
our $workerreposerver = "http://w-mainbackend.testobs.org:5253";
[...]
our $partition = 'main';

# this defines how the projects are split. All home: projects are hosted
# on an own server in this example. Order is important.
our $partitioning = [
    'home:'   => 'home',
    'release' => 'release',
    '.*'      => 'main',
];

our $partitionservers = {
    'home'    => 'http://homebackend.testobs.org:5252',
    'release' => 'http://releasebackend.testobs.org:5252',
    'main'    => 'http://mainbackend.testobs.org:5252',
};
[...]
On the other partition servers, you need to change "our $reposerver", "our $workerreposerver" and "our $partition" accordingly.
On all partition servers you need to start:
systemctl start obsrepserver.service
systemctl start obsscheduler.service
systemctl start obsdispatcher.service
systemctl start obspublisher.service
systemctl start obswarden.service
On the worker machines, you should set the list of repo servers in the OBS_REPO_SERVERS variable. You can also define workers that use only a subset of the repo servers, to prioritize partitions.
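For example, a worker host dedicated to home project builds could list only that partition's repo server (hostname taken from the example above):
OBS_REPO_SERVERS="homebackend.testobs.org:5252"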
2.5 Monitoring #
In this chapter you will find some general monitoring instructions for the Open Build Service. All examples are based on Nagios plugins, but the information provided should be easily adaptable for other monitoring solutions.
2.5.1 Endpoint Checks #
2.5.1.1 HTTP Checks: Checking Whether the HTTP Server Responds #
This check will output a critical if the HTTP server with IP address 172.19.19.19 (-I 172.19.19.19) listening on port 80 (-p 80) does not answer, and output a warning if the HTTP return code is not 200. The server name that will be used is server (-H server), which is important if different virtual hosts are listening on the same port.
check_http -H server -I 172.19.19.19 -p 80 -u http://server
The same check, but this time for an SSL-enabled HTTP server:
check_http -S -H server -I 172.19.19.19 -p 443 -u https://server
It is also possible to check the presence of a certain string in the HTTP response. In this case it will check for the string Source Service Server.
check_http -s "Source Service Server" -S -H server -I 172.19.19.19 -p 5152
Open Build Service HTTP endpoints that should be checked:
Web Interface / API: port 443
Repository Server: port 82
Package Repository Server: port 5252
Source Repository Server: port 5352
Source Service Server: port 5152
Cloud Upload Server: port 5452
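Outside of a monitoring system, a quick ad-hoc sweep of these endpoints with curl might look like this (server is a placeholder for your host; -k accepts a self-signed certificate):
curl -k -s -o /dev/null -w "port 443: %{http_code}\n" https://server/
for port in 82 5252 5352 5152 5452; do
    curl -s -o /dev/null -w "port $port: %{http_code}\n" http://server:$port/
done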
2.5.2 Common Checks #
This is a list of common checks that should be run on each individual server.
2.5.2.1 Disk Space: Checking Available Disk Space #
This check will output a warning if less than 10 percent disk space is available (-w 10) and output a critical if less than 5 percent disk space is available (-c 5). It will check all file systems except file systems with type none (-x none).
check_disk -w 10 -c 5 -x none
2.5.2.2 Memory Usage: Checking Available Memory #
This check will output a warning if less than 10 percent memory is available (-w 10) and output a critical if less than 5 percent memory is available (-c 5). OS caches will be counted as free memory (-C) and it will check the available memory (-f). check_mem.pl is not a standard Nagios plugin and can be downloaded at https://exchange.nagios.org/.
check_mem.pl -f -C -w 10 -c 5
2.5.2.3 NTP: Checking Date and Time #
This check will compare the local time with the time provided by the NTP server pool.ntp.org (-H pool.ntp.org). It will output a warning if the time differs by 0.5 seconds (-w 0.5) and output a critical if the time differs by 1 second (-c 1).
check_ntp_time -H pool.ntp.org -w 0.5 -c 1
2.5.2.4 Ping: Checking That the Server Is Alive #
This plugin checks whether the server responds to ping requests. It will output a warning if the response time exceeds 200 ms or packet loss exceeds 30 percent (-w 200.0,30%), and output a critical if the response time exceeds 500 ms or packet loss exceeds 60 percent (-c 500.0,60%).
check_icmp -H server -w 200.0,30% -c 500.0,60%
2.5.2.5 Load: Checking the Load on the Server #
This check will output a warning if the load value exceeded 7.0 in the last minute, 6.0 in the last 5 minutes or 5.0 in the last 15 minutes (-w 7.0,6.0,5.0). It will output a critical if the load value exceeded 12.0 in the last minute, 8.0 in the last 5 minutes or 6.0 in the last 15 minutes (-c 12.0,8.0,6.0).
check_load -w 7.0,6.0,5.0 -c 12.0,8.0,6.0
2.5.2.6 Disk Health: Checking the Health of Local Hard Disks #
This check is only relevant on physical systems with local storage attached. It will check the disk status utilizing the S.M.A.R.T. interface, and it will output a critical if any of the S.M.A.R.T. values exceeds its critical limits. check_smartmon is not a standard Nagios plugin and can be downloaded at https://exchange.nagios.org/.
check_smartmon --drive /dev/sda --drive /dev/sdb
2.5.3 Other Checks #
2.5.3.1 MySQL: Checking That the MySQL Database Is Responding #
This check verifies that the MySQL database server is running and that the database api_production is available.
check_mysql -H localhost -u nagios -p xxxxxx -d api_production
MySQL Databases to check:
api_production
mysql
2.5.3.2 Backup Status: Checking That a Valid Backup Is Available #
It is always advisable to check that the last backup run was successful and that a recent backup is available. The check itself depends on the backup solution in use.