
4 Administration

4.1 Tools

4.1.1 obs_admin

obs_admin is a command-line tool used on the back-end server(s) to manage running services, submit maintenance tasks, and debug problems. It should only be used by experienced administrators.

It has built-in help which you can display with obs_admin --help.

Options to control the running services:

Job Controlling

 --shutdown-scheduler <architecture>
   Stops the scheduler gracefully, dumping its current state
   for fast startup.

 --check-project <project> <architecture>
 --check-project <project> <repository> <architecture>
 --check-all-projects <architecture>
   Check status of a project and its repositories again

 --deep-check-project <project> <architecture>
 --deep-check-project <project> <repository> <architecture>
   Check status of a project and its repositories again
   This deep check also includes the sources, in case of lost events.

 --check-package <project> <package> <architecture>
   Check status of a package in all repositories

 --publish-repository <project> <repository>
   Creates an event for the publisher. The scheduler is NOT scanning for new packages.
   The publisher may skip the event, if nothing has changed.
   Use --republish-repository when you want to enforce a publish.

 --unpublish-repository <project> <repository>
   Removes the prepared :repo collection and lets the publisher remove the result. This
   also updates the search database.
   WARNING: this works also for locked projects!

 --prefer-publish-event <name>
   prefers a publish event to be next. <name> is the file name inside of the publish
   event directory.

 --republish-repository <project> <repository>
   enforce to publish a repository

 --rebuild-full-tree <project> <repository> <arch>
   rebuild the content of :full/ directory

 --clone-repository <source project> <source repository> <destination repository>
 --clone-repository <source project> <source repository> <destination project> <destination repository>
   Clone an existing repo into another existing repository.
   Useful for creating snapshots.

 --rescan-repository <project> <repository> <architecture>
   Asks the scheduler to scan a repository for new packages and add
   them to the cache file.

 --force-check-project <project> <repository> <architecture>
   Enforces the check of a repository, even when it is currently blocked due to the
   amount of calculation time required.

 --create-patchinfo-from-updateinfo
   creates a patchinfo submission based on updateinfo information.

Options for maintenance are:

Maintenance Tasks

Note: the --update-*-db calls are usually only needed when corrupt data has been created, for
      example after a file system corruption.

 --update-source-db [<project>]
   Update the index for all source files.

 --update-request-db
   Updates the index for all requests.

 --remove-old-sources <days> <y> (--debug)
   WARNING: this is an experimental feature at the moment. It may trash your data, but
            you have a backup anyway, right?
   remove sources older than <days> days, but keep <y> number of revisions
   --debug for debug output

Options for debugging:

Debug Options

 --dump-cache <project> <repository> <architecture>
   Dumps out the content of a binary cache file.
   This shows all the content of a repository, including all provides
   and requires.

 --dump-state <architecture>

 --dump-project-from-state <project> <arch>
   dump the state of a project.

 --dump-relsync <file>
   To dump content of :relsync files.

 --set-relsync <file> <key> <value>
   Modify key content in a :relsync file.

 --check-meta-xml <project>
 --check-meta-xml <project> <package>
   Parses a project or package XML file and prints error messages in case of errors.

 --check-product-xml <file>
   Parses a product XML file and prints error messages in case of errors.
   It expands all xi:include references and validates the result.

 --check-product-group-xml <file>
   Parses a group XML file from a product definition and prints error messages in case of errors.

 --check-kiwi-xml <file>
 --check-kiwi-xml <project> <package>
   Parses a KIWI XML file and prints error messages in case of errors.

 --check-constraints <file>
 --check-constraints <project> <package>
   Validates a _constraints file

 --check-pattern-xml <file>
   Parses a pattern XML file and prints error messages in case of errors.

 --check-request-xml <file>
   Parses a request XML file and prints error messages in case of errors.

 --parse-build-desc <file> [<arch> [<buildconfigfile>]]
   Parse a spec, dsc or KIWI file with the Build script parser.

 --show-scheduler-architectures
   Show all architectures which are configured in configuration.xml to be supported by this instance.

 --show-delta-file <file>
   Show all instructions of an OBS delta file

 --show-delta-store <file>
   Show delta store statistics

4.1.2 osc

The osc command-line client is mainly used by developers and packagers, but for some tasks admins also need this tool. It too has built-in help: use osc --help. The tool needs to be configured first to know the OBS API URL and your user details.

To configure the osc tool the first time you need to call it with

osc -A <URL to the OBS API>
For example:
osc -A https://api.testobs.org

Follow the instructions on the terminal.


The password is stored in clear text in the .oscrc file by default, so you need to give this file restrictive access rights; only read/write access for your user should be allowed. osc can also store the password in other ways (in keyrings, for example) and may use different authentication methods such as Kerberos (see Section “Kerberos”).
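For example, the permissions can be tightened and verified like this (the path is the traditional default; newer osc versions may store the configuration elsewhere, for example under ~/.config/osc):

```shell
#!/bin/bash
# Restrict the osc configuration file to its owning user.
cfg="$HOME/.oscrc"        # default location; adjust if your osc uses another path
touch "$cfg"              # no-op if the file already exists
chmod 0600 "$cfg"
stat -c '%a' "$cfg"       # prints 600
```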

For the admins the most important osc subcommands are:

  • meta - to create or update projects or package data

  • api - to read and write online configuration data

osc meta Subcommand

meta: Show meta information, or edit it

Show or edit build service metadata of type <prj|pkg|prjconf|user|pattern>.

This command displays metadata on buildservice objects like projects,
packages, or users. The type of metadata is specified by the word after
"meta", like e.g. "meta prj".

prj denotes metadata of a buildservice project.
prjconf denotes the (build) configuration of a project.
pkg denotes metadata of a buildservice package.
user denotes the metadata of a user.
pattern denotes installation patterns defined for a project.

To list patterns, use 'osc meta pattern PRJ'. An additional argument
will be the pattern file to view or edit.

With the --edit switch, the metadata can be edited. Per default, osc
opens the program specified by the environmental variable EDITOR with a
temporary file. Alternatively, content to be saved can be supplied via
the --file switch. If the argument is '-', input is taken from stdin:
osc meta prjconf home:user | sed ... | osc meta prjconf home:user -F -

For meta prj and prjconf updates optional commit messages can be applied
with --message.

When trying to edit a non-existing resource, it is created implicitly.

    osc meta prj PRJ
    osc meta pkg PRJ PKG
    osc meta pkg PRJ PKG -e

    osc meta <prj|prjconf> [-r|--revision REV] ARGS...
    osc meta <prj|pkg|prjconf|user|pattern> ARGS...
    osc meta <prj|pkg|prjconf|user|pattern> [-m|--message TEXT] -e|--edit
    osc meta <prj|pkg|prjconf|user|pattern> [-m|--message TEXT] -F|--file
    osc meta pattern --delete PRJ PATTERN
    osc meta attribute PRJ [PKG [SUBPACKAGE]] [--attribute ATTRIBUTE]
    [--create|--delete|--set [value_list]]
    -h, --help          show this help message and exit
    --delete            delete a pattern or attribute
    --set=ATTRIBUTE_VALUES
                        set attribute values
    -R, --remove-linking-repositories
                        Try to also remove all repositories building against
                        removed ones.
    -c, --create        create attribute without values
    -e, --edit          edit metadata
    -m TEXT, --message=TEXT
                        specify log message TEXT. For prj and prjconf meta only
    -r REV, --revision=REV
                        checkout given revision instead of head revision.
                        prj and prjconf meta only
    -F FILE, --file=FILE
                        read metadata from FILE, instead of opening an editor.
                        '-' denotes standard input.
    -f, --force         force the save operation, allows one to ignore some
                        errors like depending repositories. For prj meta only.
    --attribute-project include project values, if missing in packages
    --attribute-defaults
                        include defined attribute defaults
    -a ATTRIBUTE, --attribute=ATTRIBUTE
                        affect only a given attribute

osc api Subcommand

api: Issue an arbitrary request to the API

Useful for testing.

URL can be specified either partially (only the path component), or fully
with URL scheme and hostname ('http://...').

Note the global -A and -H options (see osc help).

  osc api /source/home:user
  osc api -X PUT -T /etc/fstab source/home:user/test5/myfstab
  osc api -e /configuration

    osc api URL

    -h, --help          show this help message and exit
    -a NAME STRING, --add-header=NAME STRING
                        add the specified header to the request
    -T FILE, -f FILE, --file=FILE
                        specify filename to upload, uses PUT mode by default
    -d STRING, --data=STRING
                        specify string data for e.g. POST
    -e, --edit          GET, edit and PUT the location
    -X X, --method=X    specify HTTP method to use (GET|PUT|DELETE|POST)

The online API documentation is available at https://build.opensuse.org/apidocs

Some examples for admin stuff:

# Read the global configuration file
  osc api /configuration
# Update the global configuration
  osc api /configuration -T /tmp/configuration.xml

# Read the distributions list
  osc api /distributions
# Update the distributions list
  osc api /distributions -T /tmp/distributions.xml

# retrieve statistics
  osc api /statistics/latest_added

4.2 Managing Build Targets

4.2.1 Interconnect

Using another Open Build Service as a source for build targets is the easiest way to start. The advantage is that you save local resources and do not need to build everything from scratch. The disadvantage is that you depend on the remote instance: if it has a downtime, your instance cannot do any builds for these targets, and if the remote admins decide to remove some targets, you cannot use them anymore.

The easiest way to interconnect with some of the public OBS instances is to use the Web UI. You need to log in with an administrator account of your instance to do this. On the start page of an administrator account you will find a Configuration link. On the Configuration page you will find an Interconnect tab at the top; use it and select the public instance you want.

If you want to connect to an instance that is not listed, you can simply create a remote project using the osc meta prj command. A remote project differs from a local project in that it has a remoteurl tag (see Section 2.4.2, “Project Metadata”).


<project name="openSUSE.org">
  <title>openSUSE.org Project Link</title>
  <description>
    This project refers to projects hosted on the openSUSE Build Service
  </description>
  <remoteurl>https://api.opensuse.org/public</remoteurl>
</project>

Sending this via osc to the server:

osc meta prj -m "add openSUSE.org remote" -F /tmp/openSUSE.org.prj

4.2.2 Importing Distributions

With locally hosted distribution packages you are independent from other parties. On sites with no or unreliable Internet connection, this is the only way to go. You do not need to build the distribution packages on your instance; you can use binary packages instead. There are different ways to get a local build repository:

  1. mirror a distribution from another OBS instance

  2. mirror a binary distribution from a public mirror and import the binaries

  3. use already existing local install repositories (for example, from an SMT instance)

  4. use the install media to import the binaries

These tasks need to be run on the OBS back-end. In a partitioned setup you need to run them on the partition which is the owner of the project.

Mirroring from a Remote OBS Instance

Mirroring a project from a remote OBS instance can be done with the obs_mirror_project script which is supplied with the obs sources and via the obs-utils package. You can get the latest version from GitHub: https://raw.githubusercontent.com/openSUSE/open-build-service/master/dist/obs_mirror_project.

The usage:

Usage: obs_mirror_project.rb -p PROJECT -r REPOSITORY
                            [-a ARCHITECTURE] [-d DESTINATION] [-A APIURL] [-t] [-v]

Example: (mirror openSUSE 13.1 as base distro)
obs_mirror_project -p openSUSE:13.1 -r standard -a i586,x86_64
Options help:
    -p, --proj PROJECT         Project Name: eg. openSUSE:13.1,Ubuntu:14.04,etc.
    -r, --repo REPOSITORY      Repository Name:   eg. standard,qemu,etc.
    -a, --arch Architecture    Architecture Name: eg. i586,x86_64,etc.
    -d, --dest DESTINATION     Destination Path:  eg. /obs
                                           Default: PWD (current working directory)
    -A, --api APIURL           OSC API URL. Default: https://api.opensuse.org
    -t, --trialrun             Trial run: not executing actions
    -v, --verbose              Verbose
    -h, --help                 Display this screen

Importing Binary Packages

This is the same procedure for all local sources. If you have a local copy of a distribution, you can either use symbolic links to the binary packages or copy them into a directory on the back-end repo server under the /srv/obs/build directory. You should follow the common naming scheme for build repositories here. As a first step, create an empty project for the distribution; you can use the Web UI or the osc command-line tool. Then add a repository with the name standard and the build architectures you want. Here is an example project meta file:

<project name="SUSE:13.2">
  <title>openSUSE 13.2 build repositories</title>
  <description>openSUSE 13.2 build repositories</description>
  <person userid="Admin" role="maintainer"/>
  <publish>
    <disable repository="standard"/>
  </publish>
  <repository name="standard">
    <arch>x86_64</arch>
    <arch>i586</arch>
  </repository>
</project>

After you have created the project with these settings, the /srv/obs/build directory should have a tree for SUSE:13.2:

├── build
│   └── SUSE:13.2
│       └── standard
│           ├── i586
│           │   ├── :bininfo
│           │   └── :schedulerstate
│           └── x86_64
│               ├── :bininfo
│               └── :schedulerstate

All the directories under /srv/obs/build have to be owned by the obsrun user and group, and the obsrun user needs write access to them. Otherwise, the scheduler process will crash on your instance.

You also need to import the project configuration; you can get it, for example, from the openSUSE Build Service.

osc -A https://api.opensuse.org meta prjconf openSUSE:13.2 >/tmp/13.2.prjconf
osc meta prjconf -m 'Original version from openSUSE' SUSE:13.2 -F /tmp/13.2.prjconf

Now you need to create the directory :full for the binary packages under each architecture; it should be owned by obsrun too.

testobs:/srv/www/obs/api # mkdir /srv/obs/build/SUSE\:13.2/standard/i586/:full
testobs:/srv/www/obs/api # mkdir /srv/obs/build/SUSE\:13.2/standard/x86_64/:full
testobs:/srv/www/obs/api # chown obsrun:obsrun \
                             /srv/obs/build/SUSE\:13.2/standard/i586/:full
testobs:/srv/www/obs/api # chown obsrun:obsrun \
                             /srv/obs/build/SUSE\:13.2/standard/x86_64/:full

Now you can copy (or link) all binary packages for the architecture into the :full directory. You need the architecture-specific packages and the noarch packages as well.


If you import packages for enterprise distributions like SLES 12, you also need the packages from the SDK. You may need packages from add-on products as well, depending on what software you want to build.
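The copy-or-link step can be sketched as follows; the scratch directories stand in for a mounted install medium and for a real /srv/obs/build/.../:full directory, and the package names are made up for illustration:

```shell
#!/bin/bash
# sketch: link architecture-specific and noarch packages into :full
mirror=$(mktemp -d)   # stands in for a mounted install medium
full=$(mktemp -d)     # stands in for .../standard/x86_64/:full
touch "$mirror/foo-1.0-1.x86_64.rpm" "$mirror/bar-1.0-1.noarch.rpm"
ln -s "$mirror"/*.rpm "$full"/
ls "$full"
```

On a real back-end, remember to make the linked or copied files readable by the obsrun user.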

Finally you should trigger a rescan for the project on the back-end server using obs_admin:

testobs # obs_admin --rescan-repository SUSE:13.2 standard i586
testobs # obs_admin --rescan-repository SUSE:13.2 standard x86_64

This reads all packages and creates the dependency tree.

4.3 Source Services

Source Services are tools to validate, generate or modify sources in a trustable way. They are designed as the smallest possible tools and can be combined, following the powerful idea of the classic UNIX design.

Design goals of source services were:

  • server side generated files must be easy to identify and must not be modifiable by the user. This way other users can trust them to be generated in the documented way without modifications.

  • generated files must never create merge conflicts

  • generated files must be committed separately from the user's changes

  • services must be runnable at any time without user commit

  • services must be runnable on server and client side in the same way

  • services must be designed in a safe way. A source checkout and service run must never harm the system of a user.

  • services shall be designed in a way to avoid unnecessary commits. This means there shall be no time-dependent changes. In case the package already contains the same file, the newly generated file must be dropped.

  • local services can be added and used by everybody.

  • server side services must be installed by the admin of the OBS server.

  • services can be defined per package or project wide.

4.3.1 Using Services for Validation

Source Services may be used to validate sources. This can happen per package, which is useful when the packager wants to validate that downloaded sources are really from the original maintainer. Or validation can happen for an entire project to apply general policies. These services cannot be skipped for any package.

Validation can happen by validating files (for example, using the verify_file or source_validator services). These services just fail in the error case, which leads to the build state "broken". Or validation can happen by redoing a certain action and storing the result as a new file, as download_files does. In this case the newly generated file will be used during the build instead of the committed one.

4.3.2 Different Modes When Using Services

Each service can be used in a special mode defining when it should run and how to use the result. This can be done per package or globally for an entire project.

Default Mode

The default mode of a service is to run after each commit on the server side and locally before every local build.

trylocal Mode

The trylocal mode runs the service locally when using current osc versions. The result gets committed as standard files, not named with a _service: prefix. Additionally, the service runs on the server by default, but usually the service should detect that the result is the same and skip the generated files. If they differ (for example, because the Web UI or API was used), they are generated and added on the server.

localonly Mode

The localonly mode runs the service locally when using current osc versions. The result gets committed as standard files, not named with a _service: prefix. The service never runs on the server side. It is also not possible to trigger it manually.

serveronly Mode

The serveronly mode runs the service on the server only. This can be useful when the service is not available or cannot work on developer workstations.

buildtime Mode

The service runs inside of the build job, for local and server side builds. A side effect is that the service package becomes a build dependency and must be available. Every user can provide and use a service this way in their projects. The generated sources are not part of the source repository, but part of the generated source packages. Network access is not available when the workers are running in a secure mode.

disabled Mode

The disabled mode runs the service neither locally nor on the server side. It can be used to temporarily disable a service while keeping its definition as part of the _service file. Or it can be used to define the way how to generate the sources and to do so only by manually calling osc service runall. The result will get committed as standard files again.

4.3.3 Storage of Source Service Definitions

The called services are always defined in a _service file. It is either part of the package sources or used project-wide when stored inside the _project package.

The _service file contains a list of services which get called in this order. Each service may define a list of parameters and a mode. The project-wide services get called after the per-package defined services. The _service file is an XML file like this example:

<services>
  <service name="download_files" mode="trylocal" />
  <service name="verify_file">
    <param name="file">krabber-1.0.tar.gz</param>
    <param name="verifier">sha256</param>
    <param name="checksum">7f535a96a834b31ba2201a90c4d365990785dead92be02d4cf846713be938b78</param>
  </service>
  <service name="update_source" mode="disabled" />
</services>

This example downloads the files listed in the spec file via the download_files service. When using osc, these files get committed as part of the commit. Afterwards the krabber-1.0.tar.gz file will always be compared with the sha256 checksum. Last but not least, there is the update_source service, which is usually not executed, except when osc service runall is called, which will try to upgrade the package to a newer source version available online.

4.3.4 Dropping a Source Service Again

Sometimes it is useful to continue working on generated files manually. In this situation the _service file needs to be dropped, but all generated files need to be committed as standard files. The OBS provides the "mergeservice" command for this. It can also be used via osc by calling osc service merge.

4.4 Source Publisher

The job of the source publish service is to publish the sources belonging to published binaries. This includes the sources of repackaged binaries, for example the sources of RPMs which are used inside of product, appliance or container images. A prerequisite for this is that OBS has enabled content tracking for the used projects.

4.4.1 Configuring Source Publisher

The source publishing can be configured via the file /usr/lib/obs/server/BSConfig.pm, where it can be enabled globally or just for some projects. It is possible to use regular expressions here.

Publishing can be enabled by defining the rsync module to push the content:

our $sourcepublish_sync = 'rsync://your_rsync_server/rsync_module';

By default every project gets published, but it is possible to define a whitelist via:

our $sourcepublish_filter = [ "openSUSE:.*", "SUSE:.*" ];
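Assuming the filter entries are matched as anchored regular expressions against project names (an assumption; the publisher source is authoritative for the exact semantics), the effect of the example whitelist can be sketched as:

```shell
#!/bin/bash
# sketch: which example projects would pass the whitelist above
for prj in openSUSE:Factory SUSE:SLE-15 home:king; do
  if echo "$prj" | grep -qE '^(openSUSE:.*|SUSE:.*)$'; then
    echo "$prj: published"
  else
    echo "$prj: skipped"
  fi
done
```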

4.4.2 Considerations

The source publishing service publishes the sources as they are hosted in the Open Build Service. This means these are the unprocessed sources, and the content is not identical to the content of source RPMs, for example. Instead these are the sources on which the source RPMs are based.

As a consequence, hints like NoSource: tags in spec files are ignored. The only way for the user to disable publishing is to disable access or sourceaccess via the flags.

The filesystem structure is $project/$package/$srcmd5/. An identifier inside of the binary builds can be used to find the right sources.

The Open Build Service does not take care of de-duplication on the rsync server; if needed, this must be implemented there.

4.5 Dispatch Priorities

The dispatcher takes a job from the scheduler and assigns it to a free worker. It tries to share the available build time fairly between all the project repositories with pending jobs. To achieve this, the dispatcher calculates a load of the used build time per project repository (similar to the system load in Unix operating systems). The dispatcher assigns jobs to build clients from the repository with the lowest load (thereby increasing its load). It is possible to tweak this mechanism via dispatching priorities assigned to the repositories, either via the /build/_dispatchprios API call or via the dispatch_adjust array in the BSConfig.pm configuration file (see Section “BSConfig.pm”).

4.5.1 The /build/_dispatchprios API Call

The /build/_dispatchprios API call allows an admin to set a priority for defined projects and repositories using the HTTP PUT method. With the HTTP GET method the current XML priority file can be read.

  <prio project="ProjectName" repository="RepoName" arch="Architecture" adjust="Number" />

The attributes project, repository and arch are all optional, if for example arch and repository are missing the entry is used for all repositories and architectures for the given project. It is not supported to use regular expressions for the names. The adjust value is taken as logarithmic scale factor to the current load of the repositories during the compare. Projects without any entry get a default priority of 0, higher values cause the matching projects to get more build time.
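Assuming the logarithmic factor is base 2 with a divisor of 10 (an assumed formula consistent with the description above; the dispatcher source code is authoritative), priorities map to scale factors roughly as follows:

```shell
#!/bin/bash
# sketch: assumed mapping from priority to scale factor, 2^(p/10)
for p in -10 -5 0 5 10 20; do
  awk -v p="$p" 'BEGIN { printf "priority %3d -> factor %.2f\n", p, 2^(p/10) }'
done
```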

Example dispatchprios XML file

   <prio project="DemoProject1" repository="openSUSE_Leap_42.1" adjust="10" />
   <prio project="Test1" adjust="5" />
   <prio project="Test11" repository="openSUSE_13.2" arch="i586" adjust="-10"/>
Table 4.1: Rounded Scale Factors Resulting from a Priority
4.5.2 dispatch_adjust Array

With the dispatch_adjust array in the BSConfig.pm file, the dispatch priorities of project repositories can be adjusted based on regular expressions for the project, the repository name and optionally the architecture. Each match adds or subtracts a value to the priority of the repository. The default priority is 0; higher values cause the matching projects to get more build time.

Each entry in the dispatch_adjust array has the format

'regex string'  => priority adjustment

The full name of a build repository looks like

<project name>/<repository name>

If a repository matches a string the adjustment is added to the current value. The final value is the sum of the adjustments of all matched entries. This sum is the same logarithmic scale factor as described in the previous section.

Example dispatch_adjust definition in the BSConfig.pm

our $dispatch_adjust = [
    'Devel:' => 7,
    'HotFix:' => +20,
    '.+:test.*' => -10,
    'home:' => -3,
    'home:king' => +30,
    '.+/SLE12-SP2' => -40,
];

The above example could have the following background: All Devel projects should get a higher priority so that developer jobs get more build time. The projects under HotFix contain very important fixes for customers, so they should get a worker as soon as possible. All projects with test in the name get some penalty; home projects also get only about half the build time of a normal project, with the exception of the home project of king, the user account of the boss. The SLE12-SP2 repository is not in real use yet; but if there is nothing else to do, build for it as well.


The dispatcher calculates the values from the dispatch_adjust array first; if the same project and repository also has an entry in the dispatchprios XML file, the XML file entry will overwrite the calculated priority. The best practice is to use only one of the methods.

4.6 Publisher Hooks

The job of the publisher service is to publish the built packages and/or images by creating repositories that are made available through a web server.

It can be configured to use custom scripts to copy the build results to different servers or do anything with them that comes to mind. These scripts are called publisher hooks.

4.6.1 Configuring Publisher Hooks

Hooks are configured via the configuration file /usr/lib/obs/server/BSConfig.pm, where one script per project is linked to the repository that should be run if the project/repository combination is published. It is possible to use regular expressions here.

The script is called by the user obsrun with the following parameters:

  1. information about the project and its repository (for example, training/SLE11-SP1)

  2. path to published repository (for example, /srv/obs/repos/training/SLE11-SP1)

  3. changed packages (for example, x86_64/test.rpm x86_64/utils.rpm)

The hooks are configured by adding a hash reference named $publishedhook to the BSConfig.pm configuration file. The key contains the project, and the value references the accompanying script. If the value is written as an array reference it is possible to call the hook with self-defined parameters.

The publisher will add the 3 listed parameters at the end, after the self-defined parameters (in /usr/lib/obs/server/BSConfig.pm):

our $publishedhook = {
 "Product/SLES12"     => "/usr/local/bin/script2run_sles12",
 "Product/SLES11-SP3" => "/usr/local/bin/script2run_sles11",
 "Product/SLES11-SP4" => "/usr/local/bin/script2run_sles11",
};

Regular expressions or substrings can be used to define a script for more than one repository in one project. The use of regular expressions has to be activated by defining $publishedhook_use_regex = 1; as follows (in /usr/lib/obs/server/BSConfig.pm):

our $publishedhook_use_regex = 1;
our $publishedhook = {
    "Product\/SLES12"     => "/usr/local/bin/script2run_sles12",
    "Product\/SLES11.*"   => "/usr/local/bin/script2run_sles11",
};

With self-defined parameters:

our $publishedhook_use_regex = 1;
our $publishedhook = {
    "Product\/SLES11.*" => ["/usr/local/bin/script2run", "sles11", "/srv/www/public_mirror"],
};

The configuration is read by the publisher at startup only, so it has to be restarted after configuration changes have been made. The hook script’s output is not logged by the publisher and should be written to a log file by the script itself. In case of a broken script, this is logged in the publisher’s log file (/srv/obs/log/publisher.log by default):

Mon Mar  7 14:34:17 2016 publishing Product/SLES12
    fetched 0 patterns
    running createrepo
    calling published hook /usr/local/bin/script2run_sles12
    /usr/local/bin/script2run_sles12 failed: 65280
    syncing database (6 ops)

Interactive scripts do not work and will fail immediately.

If you need to do a lot of work in the hook script and do not want to block the publisher all the time, you should consider using a separate daemon that does all the work and just gets triggered by the configured hook script.

The scripts are called without a timeout.
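A minimal sketch of such a trigger-only hook, which just records the published repository for a separate daemon to pick up (the spool location and record format are assumptions, not part of OBS; a real deployment might use a directory under /var/spool owned by obsrun):

```shell
#!/bin/bash
# trigger-only publisher hook: record the event and return immediately so the
# publisher is never blocked; a separate daemon processes the spool file
spool="${OBS_HOOK_SPOOL:-$HOME/obs-hook-spool}"   # assumed spool directory
mkdir -p "$spool"
# $1 = project/repository, $2 = published path (as passed by the publisher)
echo "$(date +%s) ${1:-} ${2:-}" >> "$spool/pending"
```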

4.6.2 Example Publisher Scripts

Simple Publisher Hook

The following example script ignores the packages that have changed and copies all RPMs from the repository directory to a target directory:

#!/bin/bash
PRJ=$1                                # for example, Product/SLES11-SP1
PATH_TO_REPO=$2                       # for example, /srv/obs/repos/Product/SLE11-SP1
DST_REPO_DIR=/srv/www/public_mirror   # assumed target directory; adjust for your site
LOGFILE=/srv/obs/hook.log             # assumed log file location
# Global substitution! To handle strings like Foo:Bar:testing - two or more
# colons: every ":" becomes ":/"
PRJ_PATH=${PRJ//:/:\/}
rsync -a --log-file=$LOGFILE $PATH_TO_REPO/ $DST_REPO_DIR/$PRJ_PATH/
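The substitution that maps a project name like Foo:Bar:testing onto the published directory layout can be checked in isolation (assuming every colon becomes ":/", as on download.opensuse.org-style mirror trees):

```shell
#!/bin/bash
# each ":" in the project part becomes ":/" in the destination path
PRJ="Foo:Bar:testing/standard"
echo "${PRJ//:/:\/}"    # prints Foo:/Bar:/testing/standard
```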

For testing purposes, it can be invoked as follows:

$ sudo -u obsrun /usr/local/bin/publish-hook.sh Product/SLES11-SP1 \
    /srv/obs/repos/Product/SLE11-SP1

Advanced Publisher Hook

The following example script reads the destination path from a parameter that is configured with the hook script:

#!/bin/bash
DST_REPO_DIR=$1                       # destination is passed as a hook parameter
PRJ=$2
PATH_TO_REPO=$3
LOGFILE=/srv/obs/hook.log             # assumed log file location
# Global substitution! To handle strings like Foo:Bar:testing - two or more
# colons: every ":" becomes ":/"
PRJ_PATH=${PRJ//:/:\/}
rsync -a --log-file=$LOGFILE $PATH_TO_REPO/ $DST_REPO_DIR/$PRJ_PATH/

For testing purposes, it can be invoked as follows:

$ sudo -u obsrun /usr/local/bin/publish-hook.sh \
    /srv/www/public_mirror Product/SLES11-SP1 \
    /srv/obs/repos/Product/SLE11-SP1

The following example script only copies packages that have changed, but does not delete packages that have been removed:


#!/bin/bash
DST_REPO_DIR=$1                       # destination is passed as a hook parameter
PRJ=$2
PATH_TO_REPO=$3
shift 3

# Global substitution! To handle strings like Foo:Bar:testing - two or more
# colons: every ":" becomes ":/"
PRJ_PATH=${PRJ//:/:\/}

while [ $# -gt 0 ]
do
  dir=(${1//\// })
  if [ ! -d  "$DST_REPO_DIR/$PRJ_PATH/$dir" ]; then
    mkdir -p $DST_REPO_DIR/$PRJ_PATH/$dir
  fi
  cp $PATH_TO_REPO/$1 $DST_REPO_DIR/$PRJ_PATH/$1
  shift
done

createrepo $DST_REPO_DIR/$PRJ_PATH/.

For testing purposes, it can be invoked as follows:

$ sudo -u obsrun /usr/local/bin/publish-hook.sh  /srv/www/public_mirror \
   Product/SLES11-SP1 /srv/obs/repos/Product/SLE11-SP1 \
   src/icinga-1.13.3-1.3.src.rpm x86_64/icinga-1.13.3-1.3.x86_64.rpm

4.7 Unpublisher Hooks

The job of the publisher service is to publish the built packages and/or images by creating repositories that are made available through a web server.

The OBS Publisher can be configured to use custom scripts to be called whenever already published packages get removed. These scripts are called unpublisher hooks. Unpublisher hooks are run before the publisher hooks.

4.7.1 Configuring Unpublisher Hooks

Hooks are configured via the configuration file /usr/lib/obs/server/BSConfig.pm. For each project/repository combination, one script can be configured; it is run when that project/repository combination is removed. It is possible to use regular expressions here.

The script is called by the user obsrun with the following parameters:

  1. information about the project and its repository (for example, training/SLE11-SP1)

  2. repository path (for example, /srv/obs/repos/training/SLE11-SP1)

  3. removed packages (for example, x86_64/test.rpm x86_64/utils.rpm)

The hooks are configured by adding a hash reference named $unpublishedhook to the BSConfig.pm configuration file. The key contains the project and the value references the accompanying script. If the value is written as an array reference, it is possible to call the hook with custom parameters.

The publisher adds the three listed parameters at the end, directly after the custom parameters (in /usr/lib/obs/server/BSConfig.pm):

our $unpublishedhook = {
    "Product/SLES12"     => "/usr/local/bin/script2run_sles12",
    "Product/SLES11-SP3" => "/usr/local/bin/script2run_sles11",
    "Product/SLES11-SP4" => "/usr/local/bin/script2run_sles11",
};

Regular expressions or substrings can be used to define a script for more than one repository in one project. The use of regular expressions needs to be activated by defining $unpublishedhook_use_regex = 1; (in /usr/lib/obs/server/BSConfig.pm):

our $unpublishedhook_use_regex = 1;
our $unpublishedhook = {
    "Product\/SLES12"     => "/usr/local/bin/script2run_sles12",
    "Product\/SLES11.*"   => "/usr/local/bin/script2run_sles11",

With custom parameters:

our $unpublishedhook_use_regex = 1;
our $unpublishedhook = {
    "Product\/SLES11.*" => [
      "/usr/local/bin/script2run", "sles11", "/srv/www/public_mirror"
    ],
};

The configuration is read by the publisher at startup only, so it has to be restarted after configuration changes have been made. The hook script’s output is not logged by the publisher and should be written to a log file by the script itself. In case of a broken script, this is logged in the publisher’s log file (/srv/obs/log/publisher.log by default):

Mon Mar  7 14:34:17 2016 publishing Product/SLES12
    fetched 0 patterns
    running createrepo
    calling unpublished hook /usr/local/bin/script2run_sles12
    /usr/local/bin/script2run_sles12 failed: 65280
    syncing database (6 ops)

Interactive scripts do not work and will fail immediately.

If you need to do a lot of work in the hook script and do not want to block the publisher all the time, consider using a separate daemon that does all the work and just gets triggered by the configured hook script.

The scripts are called without a timeout.


Reminder: If unpublish hooks and publish hooks are defined, the unpublish hook runs before the publish hook.

4.7.2 Example Unpublisher Scripts

Simple Unpublisher Hook

The following example script deletes all packages from the target directory that have been removed from the repository.

#!/bin/bash

# DST_REPO_DIR and LOGFILE are examples; adjust them for your setup.
DST_REPO_DIR="/srv/www/public_mirror"
LOGFILE="/srv/obs/log/unpublish-hook.log"

PRJ_PATH=${1//:/:\/}   # Global substitution! To handle strings like Foo:Bar:testing
shift 2                # drop project and repository path; the rest are the removed packages

while [ $# -gt 0 ]; do
  rm -v $DST_REPO_DIR/$PRJ_PATH/$1 >>$LOGFILE 2>&1
  shift
done

For testing purposes, it can be invoked as follows:

$ sudo -u obsrun /usr/local/bin/unpublish-hook.sh \
    Product/SLES11-SP1                            \
    /srv/obs/repos/Product/SLE11-SP1              \
    src/icinga-1.13.3-1.3.src.rpm                 \
    x86_64/icinga-1.13.3-1.3.x86_64.rpm           \
    x86_64/icinga-devel-1.13.3-1.3.x86_64.rpm

Advanced Unpublisher Hook

The following example script reads the destination path from a parameter that is configured via the hook script:

#!/bin/bash

# LOGFILE is an example; adjust it for your setup.
LOGFILE="/srv/obs/log/unpublish-hook.log"

DST_REPO_DIR=$1        # destination, passed as custom hook parameter
PRJ_PATH=${2//:/:\/}   # Global substitution! To handle strings like Foo:Bar:testing
shift 3                # drop custom parameter, project, and repository path

while [ $# -gt 0 ]; do
  rm -v $DST_REPO_DIR/$PRJ_PATH/$1 >>$LOGFILE 2>&1
  shift
done

For testing purposes, it can be invoked as follows:

$ sudo -u obsrun /usr/local/bin/unpublish-hook.sh \
    /srv/www/public_mirror/Product/SLES11-SP1     \
    /srv/obs/repos/Product/SLE11-SP1              \
    src/icinga-1.13.3-1.3.src.rpm                 \
    x86_64/icinga-1.13.3-1.3.x86_64.rpm
4.8 Managing Users and Groups

OBS has integrated user and group management with a role-based access rights model. In every OBS instance, at least one user needs to exist and have the global Admin role assigned. Groups can be defined by the Admin. Instead of adding a list of users to a project or package role, a user can be added to a group, and the group can then be added to the project or package role.

4.8.1 User and Group Roles

The OBS role model has one global role: Admin, which can be granted to users. An OBS admin has access to all projects and packages via the API interface and the web user interface. Some menus in the Web UI do not allow changes by an Admin (for example, the Repository menu) as long as the Admin is not also a Maintainer of the project. The same change can, however, be done by editing the metadata directly. The other roles are specific to projects and packages and can be assigned to a user or a group.

Table 4.2: Roles in OBS

Role        Description                                     Remarks

Maintainer  Read and write access to projects or packages

Bugowner    Read access to projects or packages             should be unique per package

Reader      Read access to sources

Downloader  Read access to the binaries

Reviewer    Default reviewer for a package or project

4.8.2 Standalone User and Group Database

OBS provides its own user database, which can also store a password. Authentication to the API happens via HTTP BASIC AUTH. See the API documentation to find out how to create, modify, or delete user data. There is also a call for changing the password.

Users can be added by the maintainer or, if registration is allowed, via the registration menu on the Web UI. It can be configured that a confirmation is needed after registration before the user may log in.

4.8.3 Users and Group Maintainers

Administrators can create groups, add users to them, remove users from them, and give Maintainer rights to users. A group maintainer is then also able to add users, remove users, and give Maintainer rights to other users.

osc api -X PUT "/group/<group-title>" -d "<group><title><group-title></title><email><group-email></email><maintainer userid='<user-name>'/><person><person userid='<user-name>'/></person></group>"

4.8.4 Gravatar for Groups

In certain cases, it might be desirable to show a Gravatar for a group, similar to the users. In order to show a Gravatar, an email address is needed. Therefore, it is necessary that an admin adds an email address to the group through the API. This can be achieved by

osc api -X POST "/group/<group-title>?cmd=set_email&email=<groups-email-address>"

4.8.5 Proxy Mode

The proxy mode can be used for specially secured instances where the OBS web server shall not be connected to the network directly. There are authentication proxy products which do the authentication and send the user name via an HTTP header to OBS. Originally, this was developed for IChain, a legacy single sign-on authentication method from Novell. This also has the advantage that the user password never reaches OBS.

The proxy mode can also be used for LDAP or Active Directory, but only for authentication.


With proxy mode enabled, OBS trusts the user name in the HTTP header. Since this was verified by the Web server, and the Web server only forwards requests for a verified and authenticated session, this is safe as long as you make sure that the direct web/API interface of OBS is not reachable from the outside.

With the proxy mode, the user still needs to be registered in OBS, and all OBS roles and user properties are managed inside OBS.

OBS Proxy Mode Configuration

Currently the proxy mode configuration is in the options.yml file.

Table 4.3: Options for Proxy Mode Configuration

Config item      Description              Values (default)    Remarks

proxy_auth_mode  turn proxy mode on/off   :off :on (:off)     needs to be :off if ldap_mode: is :on
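A minimal options.yml fragment enabling the proxy mode could look like this (the option name is taken from the standard OBS options.yml; verify it against your installed template):

```yaml
# Enable the authentication proxy mode; OBS then trusts the user name
# passed in the HTTP header by the authenticating proxy.
proxy_auth_mode: :on
```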

4.8.6 LDAP/Active Directory


LDAP support was long considered experimental and not officially supported. Since the 2.8.3 release, it is officially supported.

Using LDAP or Active Directory as the source for user and, optionally, group information in environments which already have such a server has the advantage that user-related information only needs to be maintained in one place. In the following sections we write LDAP, but this includes Microsoft's Active Directory as well. Only in parts where differences exist is Active Directory (AD) mentioned explicitly.

In this mode, the OBS API contacts the LDAP server directly. If the user is found and provides the correct password, the user is added transparently to the OBS user database. The password or password hash is not stored in the OBS database. Because the user database password field is mandatory, a random hash is stored instead. The LDAP interface allows restricting access to users who are in a special LDAP group. Optionally, groups can also be discovered from the LDAP server; these can be filtered as well.

Before anybody can add a user to a package or project with a role, the user needs to have logged in at least once, since the check for available users is local only. If the LDAP group mode is enabled, LDAP groups are also added transparently when an existing group on the LDAP server is added to a project or package.

On bigger installations, this mode can result in many search requests to the LDAP server and slow down access to projects and packages, because every role check triggers an LDAP search operation against the LDAP server. As an alternative, group mirroring was implemented. With it, the internal OBS group database is updated with the group membership information during user authentication. All role checks are then made locally against the OBS database and do not need additional LDAP operations.


The local user group membership in :mirror mode is updated as follows: when the user logs in, the user's memberOf attributes are parsed and compared with the global OBS group list. If a group matches, the user is added; if they are no longer a group member, they are removed. Since this may be a costly operation, depending on the number of groups, it is only done on a full login. After a full login, the user status is cached for two minutes; if the user logs in again during this time, nothing is checked or updated. There is a second mechanism to update user membership: if somebody adds a new group in OBS, the member attributes of the group are parsed and all current users which are in the local database become members.

OBS LDAP Configuration

Currently the main OBS LDAP configuration is in the file options.yml. Beside the settings in that file, the openLDAP configuration file is also evaluated by the Ruby LDAP implementation. This configuration file is usually located at /etc/openldap/ldap.conf. You can set here additional TLS/SSL directives like TLS_CACERT, TLS_CACERTDIR and TLS_REQCERT. For more information refer to the openLDAP man page (man ldap.conf).


When LDAP mode is activated, users can only log in via LDAP. This also includes existing admin accounts. To make an LDAP user an admin, use a rake task which can be run on the OBS instance. For example, to give admin rights to user tux, use:

cd /srv/www/obs/api
bundle exec rake user:give_admin_rights tux RAILS_ENV=production
Table 4.4: LDAP Configuration Options

Config item                  Description                                                Values (default)       Remarks

ldap_mode                    OBS LDAP mode on/off                                       :off :on

ldap_servers                 List of LDAP servers                                       colon-separated list

ldap_max_attempts            Tries to ping the LDAP server                              int (15)

ldap_search_timeout          Timeout of an LDAP search                                  int 0…N (5)            0 waits forever

ldap_user_memberof_attr      User attribute for group membership                                               case sensitive

ldap_group_member_attr       Group attribute for members

ldap_ssl                     Use ldaps port and protocol                                :off :on

ldap_start_tls               Use StartTLS on LDAP protocol                              :off :on

ldap_port                    LDAP port number                                                                  if not set: 389 for LDAP, 636 for LDAPS

ldap_referrals               Windows 2003 AD requires this                              :off :on

ldap_search_base             Company's LDAP search base for the users who will use OBS

ldap_search_attr             User ID attribute                                          sAMAccountName, uid    sAMAccountName for AD, uid for openLDAP

ldap_name_attr               Full user name

ldap_mail_attr               Attribute for the user's email

ldap_search_user             Bind user for LDAP search                                                         for example, cn=ldapbind,ou=system,dc=mycompany,dc=com

ldap_search_auth             Password for the ldap_search_user

ldap_user_filter             Search filter for OBS users                                                       for example, a group membership; empty: all users allowed

ldap_authenticate            How the credentials are verified                           :ldap :local           only use :ldap

ldap_auth_mech               Used auth mechanism                                        :md5 :cleartext        only if :local is used

ldap_auth_attr               Used auth attribute for :local                                                    do not use

ldap_group_support           Import OBS groups from LDAP                                :off :on :mirror       see text

ldap_group_search_base       Company's LDAP search base for groups

ldap_group_title_attr        Attribute of the group name

ldap_group_objectclass_attr  Object class for group

ldap_obs_admin_group         Group name for OBS Admins                                                         if set, members of that group get the global admin role

Example LDAP section of the options.yml file:

# LDAP options

ldap_mode: :on
# LDAP Servers separated by ':'.
# OVERRIDE with your company's ldap servers. Servers are picked randomly for
# each connection to distribute load.
ldap_servers: ldap1.mycompany.com:ldap2.mycompany.com

# Max number of times to attempt to contact the LDAP servers
ldap_max_attempts: 15

# timeout of an ldap search requests to avoid infinitely lookups (in seconds, 0 no timeout)
ldap_search_timeout: 5

# The attribute the user member of is stored in (case sensitive !)
ldap_user_memberof_attr: memberOf

# Perform the group_user search with the member attribute of group entry or memberof attribute of user entry
# It depends on your ldap define
# The attribute the group member is stored in
ldap_group_member_attr: member

# If you're using ldap_authenticate=:ldap then you should ensure that
# ldaps is used to transfer the credentials over SSL or use the StartTLS extension
ldap_ssl: :on

# Use StartTLS extension of LDAP
ldap_start_tls: :off

# LDAP port defaults to 636 for ldaps and 389 for ldap and ldap with StartTLS
# Authentication with Windows 2003 AD requires
ldap_referrals: :off

# OVERRIDE with your company's ldap search base for the users who will use OBS
ldap_search_base: ou=development,dc=mycompany,dc=com
# Account name attribute (sAMAccountName for Active Directory, uid for openLDAP)
ldap_search_attr: sAMAccountName
# The attribute the users name is stored in
ldap_name_attr: cn
# The attribute the users email is stored in
ldap_mail_attr: mail
# Credentials to use to search ldap for the username
ldap_search_user: "cn=ldapbind,ou=system,dc=mycompany,dc=com"
ldap_search_auth: "top secret"

# By default any LDAP user can be used to authenticate to the OBS
# In some deployments this may be too broad and certain criteria should
# be met; eg group membership
# To allow only users in a specific group uncomment this line:
ldap_user_filter: (memberof=cn=obsusers,ou=groups,dc=mycompany,dc=com)
# Note this is joined to the normal selection like so:
# (&(#{ldap_search_attr}=#{login})#{ldap_user_filter})
# giving an ldap search of:
#  (&(sAMAccountName=#{login})(memberof=CN=group,OU=Groups,DC=Domain Component))
# Also note that openLDAP must be configured to use the memberOf overlay

# ldap_authenticate says how the credentials are verified:
#   :ldap = attempt to bind to ldap as user using supplied credentials
#   :local = compare the credentials supplied with those in
#            LDAP using #{ldap_auth_attr} & #{ldap_auth_mech}
#       if :local is used then ldap_auth_mech can be
#       :md5
#       :cleartext
ldap_authenticate: :ldap
ldap_auth_mech: :md5
# This is a string
ldap_auth_attr: userPassword

# Whether to search group info from ldap, it does not take effect it is not set
# Please also set below ldap_group_* configs correctly to ensure the operation works properly
# Possible values:
#         :off     disabled
#         :on      enabled; every group member operation ask the LDAP server
#         :mirror  enabled; group membership is mirrored and updated on user login
ldap_group_support: :mirror

# OVERRIDE with your company's ldap search base for groups
ldap_group_search_base: ou=obsgroups,dc=mycompany,dc=com

# The attribute the group name is stored in
ldap_group_title_attr: cn

# The value of the group objectclass attribute
# group for Active Directory, groupOfNames in openLDAP
ldap_group_objectclass_attr: group

# The LDAP group for obs admins
# if this group is set and a user belongs to this group they get the global admin role
ldap_obs_admin_group: obsadmins
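The comments in the example above show how the final search filter is composed from ldap_search_attr, the login, and ldap_user_filter. As a quick illustration of that string interpolation (values taken from the example; tux is a hypothetical login):

```shell
# Compose the effective LDAP search filter the way the comments describe:
ldap_search_attr="sAMAccountName"
login="tux"
ldap_user_filter="(memberof=cn=obsusers,ou=groups,dc=mycompany,dc=com)"
filter="(&(${ldap_search_attr}=${login})${ldap_user_filter})"
echo "$filter"
```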

4.8.7 Authentication Methods

LDAP Methods

The LDAP mode has two methods to check authorization:

  1. LDAP bind method. With the provided credentials, an LDAP bind request is tried.

  2. Local method. The provided credentials are checked locally against the content of the userPassword attribute.


The local method should not be used, since in most LDAP installations the userPassword attribute is not available unless you are bound as a privileged user.

Kerberos

In OBS you can use single sign on via Kerberos tickets.

OBS Kerberos configuration resides in the options.yml file.

Table 4.5: Kerberos Configuration Options

Config item                 Description                                                                          Example

kerberos_keytab             Kerberos key table: file where long-term keys for one or more principals are stored  /etc/krb5.keytab

kerberos_service_principal  Kerberos OBS principal: OBS unique identity to which Kerberos can assign tickets     HTTP/hostname.example.com@EXAMPLE.COM

kerberos_realm              Kerberos realm: authentication administrative domain                                 EXAMPLE.COM

Example Kerberos section of the options.yml file:


# Kerberos options

kerberos_mode: true
kerberos_keytab: "/etc/krb5.keytab"
kerberos_service_principal: "HTTP/hostname.example.com@EXAMPLE.COM"
kerberos_realm: "EXAMPLE.COM"


Once Kerberos is enabled, only users with logins that match users known to Kerberos will be able to authenticate to OBS. It is recommended to give admin rights to a matching user before enabling Kerberos mode.

OBS Token Authorization

OBS 2.5 provides a mechanism to create tokens for specific operations. This can be used to allow others to perform certain operations in the name of a user, which is especially useful when integrating external infrastructure. The created token should be kept secret by default, but it can also be revoked at any time if it becomes obsolete or leaks.

Managing Tokens of a User

Tokens always belong to a user. A list of active tokens can be received via

osc token
osc token --delete <TOKEN>

Executing a Source Service

A token can be used to execute a source service. The source service has to be set up for the package first; check the source service chapter for this. A typical example is to update the sources of a package from git. A source service for that can be set up with

osc add git://....

A token can be registered as a generic token, which allows executing all source services in OBS, provided the user has the permission. You can create such a token and execute the operation with

osc token --create
osc token --trigger <TOKEN> <PROJECT> <PACKAGE>
osc api -X POST "/trigger/runservice?token=<TOKEN>&project=<PROJECT>&package=<PACKAGE>"

You can also limit the token to a specific package. The advantage is that the operation is limited to that package, so less harm can be done if the token leaks. Also, you do not need to specify the package at execution time. Create and execute it with

osc token --create <PROJECT> <PACKAGE>
osc token --trigger <TOKEN>
osc api -X POST "/trigger/runservice?token=<TOKEN>"

4.9 Message Bus for Event Notifications

OBS has an integrated notification subsystem for sending events that happen in the application through a message bus. We have chosen RabbitMQ (https://www.rabbitmq.com/) as our message bus server technology, based on the AMQP (https://www.amqp.org/) protocol.

4.9.1 RabbitMQ

RabbitMQ claims to be "the most popular open source message broker": it can deliver asynchronous messages in many different exchange patterns (one to one, broadcasting, based on topics). It also includes a flexible routing system based on queues.

RabbitMQ is lightweight and easy to deploy, on premises and in the cloud. It supports multiple messaging protocols, and it can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

Configuration

Currently the RabbitMQ configuration is in the file options.yml. All those options start with the prefix amqp. These configuration items match some of the calls we do using the Bunny (http://rubybunny.info/) gem.

Table 4.6: RabbitMQ Configuration Options

Config item            Description                                       Values               Remarks

amqp_namespace         Namespace for the queues of this instance                              Is a prefix for the queue names

amqp_options           Connection configuration                                               See this guide (http://rubybunny.info/articles/connecting.html) to know which parameters are allowed.

amqp_options: host     Server host                                       A valid hostname

amqp_options: port     Server port

amqp_options: user     User account

amqp_options: pass     Account password

amqp_options: vhost    Virtual host

amqp_exchange_name     Name for the exchange

amqp_exchange_options  Exchange configuration                                                 See this guide (http://rubybunny.info/articles/exchanges.html) to know more about exchanges.

amqp_exchange_options: type         Type of communication for the exchange

amqp_exchange_options: auto_delete  If set, the exchange is deleted when all queues have finished using it

amqp_exchange_options: arguments    More configuration for plugins / extensions

amqp_queue_options     Queues configuration                                                   See this guide (http://rubybunny.info/articles/queues.html) to know more about queues.

amqp_queue_options: durable      Should this queue be durable?

amqp_queue_options: auto-delete  Should this queue be automatically deleted when the last consumer disconnects?

amqp_queue_options: exclusive    Should this queue be exclusive (it can only be used by this connection and is removed when the connection is closed)?

amqp_queue_options: arguments    Additional optional arguments (typically used by RabbitMQ extensions and plugins)

Example of the RabbitMQ section of the options.yml file:

# RabbitMQ based message bus
#
# Prefix of the message bus rooting key
amqp_namespace: 'opensuse.obs'

# Connection options -> http://rubybunny.info/articles/connecting.html
amqp_options:
  host: rabbit.example.com
  port: 5672
  user: guest
  pass: guest
  vhost: /vhost

# Exchange options -> http://rubybunny.info/articles/exchanges.html
amqp_exchange_name: pubsub
amqp_exchange_options:
  type: topic
  auto_delete: false
  arguments:
    persistent: true
    passive: true

# Queue options -> http://rubybunny.info/articles/queues.html
amqp_queue_options:
  durable: false
  auto-delete: false
  exclusive: false
  arguments:
    extension_1: blah

Table 4.7: List of event messages / queues for the message bus
Queue Name Description Payload


A package build has succeeded

:repository, :arch, :release, :readytime, :srcmd5, :rev, :reason, :bcnt, :verifymd5, :hostarch, :starttime, :endtime, :workerid, :versrel, :previouslyfailed


A package build has failed

:repository, :arch, :release, :readytime, :srcmd5, :rev, :reason, :bcnt, :verifymd5, :hostarch, :starttime, :endtime, :workerid, :versrel, :previouslyfailed, :faillog


A package build has succeeded with unchanged result

:repository, :arch, :release, :readytime, :srcmd5, :rev, :reason, :bcnt, :verifymd5, :hostarch, :starttime, :endtime, :workerid, :versrel, :previouslyfailed


A new package was created

:project, :package, :sender


The package metadata was updated

:project, :package, :sender


A package was deleted

:project, :package, :sender, :comment


A package was undeleted

:project, :package, :sender, :comment


A package was branched

:project, :package, :sender, :targetproject, :targetpackage, :user


A package has committed changes

:project, :package, :sender, :comment, :user, :files, :rev, :requestid


Sources of a package were uploaded

:project, :package, :sender, :comment, :filename, :requestid, :target, :user


Source service succeeded for a package

:comment, :project, :package, :sender, :rev, :user, :requestid


Source service failed for a package

:comment, :error, :project, :package, :sender, :rev, :user, :requestid


A package has changed its version

:project, :package, :sender, :comment, :requestid, :files, :rev, :newversion, :user, :oldversion


A new comment for the package was created

:project, :package, :sender, :commenters, :commenter, :comment_body, :comment_title


A new project was created

:project, :sender


The project configuration was updated

:project, :sender, :files, :comment


A project was updated

:project, :sender


A project was deleted

:project, :comment, :requestid, :sender


A project was undeleted

:project, :comment, :sender


A new comment for the project was created

:project, :commenters, :commenter, :comment_body, :comment_title


Binary was published in the repository

:project, :repo, :payload


Publish State of Repository has changed

:project, :repo, :state


A repository was published

:project, :repo


Repository (re)started building

:project, :repo, :arch, :buildid


Repository finished building

:project, :repo, :arch, :buildid


Status Check for Finished Repository Created

:project, :repo, :arch, :buildid


A request was created

:author, :comment, :description, :number, :actions, :state, :when, :who, :diff (local projects)


A request was changed (admin only)

:author, :comment, :description, :number, :actions, :state, :when, :who


A request was deleted

:author, :comment, :description, :number, :actions, :state, :when, :who


The state of a request was changed

:author, :comment, :description, :number, :actions, :state, :when, :who, :oldstate


A request requires a review

:author, :comment, :description, :number, :actions, :state, :when, :who, :reviewers, :by_user, :by_group, :by_project, :by_package, :diff (local projects)


Request was reviewed

:reviewers, :by_user, :by_group, :by_project, :by_package


All reviews of request have been completed

:author, :comment, :description, :number, :actions, :state, :when, :who, :reviewers, :by_user, :by_group, :by_project, :by_package, :diff (local projects)


A new comment for the request was created

:author, :comment, :description, :number, :actions, :state, :when, :who, :commenters, :commenter, :comment_body, :comment_title, :request_number




Status Check for Published Repository Created

:project, :repo, :buildid

4.10 Backup

Open Build Service configuration and content usually needs a backup. The following explains suggested strategies and the places to consider for a backup.

4.10.1 Places to consider

The following points to the places with admin configurations or user content. The default locations are considered here.

Frontend Configuration

  • /srv/www/obs/api/config

  • /srv/www/obs/api/log (optional)

The configuration does not change usually. It is enough to back it up after configuration changes.

Frontend Database

The MySQL/MariaDB database backup can be done in different ways. Please consult the database manual for details. One possible way is to create dumps via the mysqldump tool. The backup should be done at the same point in time as the source server backup. Inconsistencies can be resolved using the check_consistency tool.

Backend Configuration

The backend has a single configuration file which may have been altered, by default /usr/lib/obs/server/BSConfig.pm. The file is usually not supposed to be changed, and changes can only be made by the system root user. A backup after a change is sufficient.

Backend Content

All backend content is below the /srv/obs directory. This includes the sources, build results, and also all configuration changes done by the OBS admin users.
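As a minimal sketch of archiving such a content tree, the following uses tar on a throwaway directory; in a real backup the source would be the relevant subdirectories of /srv/obs, and the archive would go to dedicated backup storage:

```shell
# Sketch: archive backend content directories. /tmp paths are stand-ins
# for /srv/obs and the backup target, so this can run anywhere.
SRC=/tmp/obs-backup-demo/srv/obs
mkdir -p "$SRC/projects" "$SRC/sources"
echo '<project/>' > "$SRC/projects/demo.xml"
tar -C "$SRC" -czf /tmp/obs-backup-demo/content-backup.tar.gz projects sources
tar -tzf /tmp/obs-backup-demo/content-backup.tar.gz   # list the archived files
```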

4.10.2 Backup strategies

A backup is ideally taken only from a service that is not running. In real life this is usually not possible, so it is important that a backup can run on a production system.

Database

Take the MySQL backup either directly from a non-primary node in the Galera cluster (a table dump locks the database during the operation) or from a MySQL replica attached to the cluster.

Sources

The sources are supposed to be backed up at the same time as the database. This can be achieved either by having a dedicated instance for the source server or by having a backup of the following directories.

  • /srv/obs/projects

  • /srv/obs/sources

Build Results

Full backups via snapshots, either offered by the SAN storage or via LVM snapshot methods. Consistency is normally on repository level. Any inconsistency will be found by the scheduler, and content will be retriggered. This is not true for disabled builds, for example, released builds.

4.11 Restore

A restored system might contain inconsistencies if it was taken from a running service. These can be resolved as follows.

4.11.1 Check and repair database inconsistencies

If either database portions or sources were restored, there is a chance of inconsistencies. These can be found via

geeko > cd /srv/www/obs/api/
geeko > ./bin/rails c
geeko > ConsistencyCheckJob.new.perform

Single projects can be checked with

geeko > cd /srv/www/obs/api/
geeko > ./bin/rake check_project project="YOUR_PROJECT"

or inconsistencies fixed via

geeko > cd /srv/www/obs/api/
geeko > ./bin/rake fix_project project="YOUR_PROJECT"

4.11.2 Binaries

All build results are evaluated by the scheduler. Therefore, any inconsistency can be detected by the scheduler. One way is to enforce a cold start, which means that the scheduler rescans all sources and binaries and triggers builds where needed. This can be achieved by

geeko > rcobsscheduler stop     # ensure no scheduler is running
geeko > rm /srv/obs/run/*.state # remove all state files
geeko > rcobsscheduler start

The scheduler state will be visible as in a cold start. It may take a long time, so it might be more efficient to check only certain projects or architectures if needed. This can be triggered in a running system by executing

geeko > obs_admin --check-project PROJECT ARCHITECTURE

A deep check is necessary in case sources have been restored:

geeko > obs_admin --deep-check-project PROJECT ARCHITECTURE

4.12 Repair Data Corruption

On-disk data might be corrupted independent of a restore, for example, due to a power outage or filesystem or disk errors. A MySQL/MariaDB database in a cluster should repair itself in that case. Data on disk in the backend parts can be checked and fixed using a dedicated tool. See the help of the tool for further details, or run

geeko > /usr/lib/obs/server/bs_check_consistency --check-all

Data can be repaired using the fix options.

4.13 Spider Identification

OBS hides specific parts/pages of the application from search crawlers (DuckDuckGo, Google, etc.), mostly for performance reasons. Which user agent strings are identified as crawlers is configured in the file /srv/www/obs/api/config/crawler-user-agents.json.

To update that list, you must run the command bundle exec rake voight_kampf:import_user_agents in the root directory of your OBS instance. This downloads the current crawler list of user agents as a JSON file into the config/ directory of the Rails application.

If you want to extend or edit this list, switch to the config/ directory and open the crawler-user-agents.json file with the editor of your choice. The content can look like this:

       "pattern": "Googlebot\\/",
       "url": "http://www.google.com/bot.html"
       "pattern": "Googlebot-Mobile"
       "pattern": "Googlebot-Image"

To add a new bot to this list, a pattern must be defined; this is required to identify the bot. Almost all bots have their own user agent that they send to a Web server to identify themselves. For example, the user agent of the Googlebot looks like this:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

To choose the pattern for the new bot, compare the user agent of the bot you want to identify with others and look for a part that is unique (like in the Googlebot example, the part: Googlebot).

Let's assume we want to add the bot Geekobot to the list of bots and the user agent looks like this:

Mozilla/5.0 (compatible; Geekobot/2.1; +https://www.opensuse.org)

Our unique part would be Geekobot. So we add a new entry to the list of bots:

       "pattern": "Googlebot\\/",
       "url": "http://www.google.com/bot.html"
       "pattern": "Googlebot-Mobile"
       "pattern": "Googlebot-Image"
       "pattern": "Geekobot"

You can also use regular expressions in the pattern element.
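Before adding a pattern, you can quickly check whether it matches the bot's user agent string with grep, which uses comparable regular expression semantics (a sketch; the user agent is the Geekobot example from above):

```shell
# Check whether the Geekobot pattern matches its user agent string:
ua="Mozilla/5.0 (compatible; Geekobot/2.1; +https://www.opensuse.org)"
if printf '%s' "$ua" | grep -qE "Geekobot"; then
  echo "identified as crawler"
fi
```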

Save the file and restart the Rails application, and the bot Geekobot should be identified properly.

4.14 Worker in Kubernetes

Warning: Alpha Implementation

This is an Alpha implementation and not recommended for production.

The Kubernetes device plugin deployed here makes several assumptions about which and how many containers will have access to the KVM device.

The plugin also assumes the availability of /dev/kvm on every node where the device-plugin container is running.

The build service worker supports several backends for running its jobs; one of the preferred backends is KVM.

This backend allows building inside a VM, which has many advantages from a security and isolation perspective.

When a build worker runs inside a containerized environment (for example, Kubernetes), access to KVM is not directly available.

For such situations, Kubernetes provides access to host devices (for example KVM or GPUs) through device plugins.

So /dev/kvm can be made available to containers via the Kubernetes device plugin API (https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
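Once a device plugin has registered /dev/kvm with the kubelet, a pod consumes it by requesting the plugin's extended resource name in its resource limits, as the worker manifest in step 3 does. A minimal container-spec fragment (devices.kubevirt.io/kvm is the resource name registered by the kubevirt plugin):

```yaml
# Container spec fragment: request one KVM device from the device plugin
resources:
  limits:
    devices.kubevirt.io/kvm: "1"
```

The scheduler then only places the pod on nodes where the plugin has advertised at least one free KVM device.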

One of the implementations of the Kubernetes device plugin for KVM is available here: https://github.com/kubevirt/kubernetes-device-plugins

  1. Use the following manifest to deploy the KVM device plugin in a container.

    This plugin is packaged as k8s-device-plugin-kvm and the corresponding container is built here: https://build.opensuse.org/package/show/home:sjamgade:branches:devel:CaaSP:Head:ControllerNode/kubernetes-device-plugins-docker

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kvm-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: kvm-server
      template:
        metadata:
          labels:
            app: kvm-server
        spec:
          containers:
            - name: kvm-pod
              command: ["/usr/bin/k8s-kvm"]
              args: ["-v", "3", "-logtostderr"]
              image: registry.opensuse.org/home/sjamgade/branches/devel/caasp/head/controllernode/containers/my_container
              imagePullPolicy: IfNotPresent
              securityContext:
                capabilities:
                  add:
                    - NET_ADMIN
                    - SYS_NICE
                privileged: true
                runAsUser: 0
              volumeMounts:
                - name: device-plugins-socket
                  mountPath: /var/lib/kubelet/device-plugins
          hostname: kvm-server
          volumes:
            - name: device-plugins-socket
              hostPath:
                path: /var/lib/kubelet/device-plugins
  2. Build a container image of the build service locally and load it on all worker nodes. There is a sample project file here: https://build.opensuse.org/package/show/home:sjamgade:branches:OBS:Server:Unstable/OBS-Appliance

    docker load < "/path/to/docker.archive.tar.gz"

  3. Use the following manifest to deploy the build service worker.

    Here, ports are hard-coded to allow easy integration with the local kubelet without requiring a separate ingress controller.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: worker-deployment-1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: obsworkerappname
      template:
        metadata:
          labels:
            app: obsworkerappname
        spec:
          containers:
            - name: test-worker-pod
              command: ["/bin/bash"]
              args: ["-c", "sleep 1d && echo Sleep expired > /dev/termination-log"]
              image: docker.io/library/obsworker
              imagePullPolicy: Never
              resources:
                limits:
                  devices.kubevirt.io/kvm: "1"
                requests:
                  cpu: 100m
                  devices.kubevirt.io/kvm: "1"
              securityContext:
                capabilities:
                  add:
                    - NET_ADMIN
                    - SYS_NICE
                privileged: false
                runAsUser: 0
              volumeMounts:
                - name: boot-dir
                  mountPath: /boot
                - name: modules-dir
                  mountPath: /lib/modules
          volumes:
            - name: boot-dir
              hostPath:
                path: /boot
            - name: modules-dir
              hostPath:
                path: /lib/modules
          hostname: obs-worker-1
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myobsservice
      labels:
        servicename: obsworkerservicename
    spec:
      selector:
        app: obsworkerappname
      type: NodePort
      externalTrafficPolicy: "Local"
      ports:
        - name: woker-1
          protocol: TCP
          port: 32515
          targetPort: 32515
          nodePort: 32315
        - name: woker-2
          protocol: TCP
          port: 32516
          targetPort: 32516
          nodePort: 32516
  4. Save the following into a file launchworker.sh; this file is used later to launch the worker. Make sure you uncomment the OBS_REPO_SERVERS line and change the IP address to your build server's address.

    cat << EOH > /etc/buildhost.config
    # (remaining worker configuration elided in this document)
    # Uncomment the following line and set your build server's address:
    #OBS_REPO_SERVERS="<build-server-ip>:5252"
    EOH
    obsworker restart
  5. Use the following command to launch the build service worker.

    kubectl exec -i test-worker-pod -- bash < launchworker.sh

    Note that the -t (TTY) flag must not be used here, because standard input is redirected from the script file.