Project configuration structures

See also

Please refer to Project configuration for examples and an explanation of how these structures are used in Universum.

class Step(name: str = '', command: List[str] | None = None, environment: Dict[str, str] | None = None, artifacts: str = '', report_artifacts: str = '', artifact_prebuild_clean: bool = False, directory: str = '', critical: bool = False, background: bool = False, finish_background: bool = False, code_report: bool = False, pass_tag: str = '', fail_tag: str = '', if_env_set: str = '', if_succeeded: Configuration | None = None, if_failed: Configuration | None = None, **kwargs)[source]

An instance of this class defines a single launch of an external command by Universum. It determines execution parameters and result handling. Individual build steps are collected in a Configuration object, which allows creating linear and nested step structures. See https://universum.readthedocs.io/en/latest/configuring.html#build-step for details.

Step keys

Parameters supplied via named attributes during construction are annotated and can be type-checked for safety. Users may also define custom parameters via kwargs; these parameters are stored internally in dict fields and can be retrieved by indexing. The following list enumerates all the keys used by Universum:

name

The human-readable name of a build step. The name is used as the title of the build log block corresponding to the execution of this step. It is also used to generate the name of the log file if the option for storing logs to files is selected. If several build steps have the same name and logs are stored to files (see --out / -o command-line parameter for details), all logs for such steps will be stored to one file in order of their appearance.

command

The command line for the build step launch. For every step the first list item should be a console command (e.g. a script name, or any other command such as ls), and the other list items should be the arguments added to the command (e.g. --debug or -la). Every command line element separated by a space character should be passed as a separate string argument. Lists like ["ls -a"] will not be processed correctly and should therefore be split into ["ls", "-a"]; likewise, ["build.sh", "--platform A"] should be split into ["build.sh", "--platform", "A"]. A build step can have an empty list as a command; such a step won't do anything except show the step name in Universum execution logs. Some common actions, such as echo, are bash features and not actual programs to run. These features should be called as ["bash", "-c", "echo -e 'Some line goes here'"]. Any other shell can also be used instead of bash. Note that in this case the string passed to bash is one argument containing spaces and is therefore deliberately not split into separate list items.

Note

When a process is launched via a shell, that shell also parses the arguments, which does not happen with a direct process launch. In the case of a direct process launch, arguments containing special characters (e.g. *.txt) are treated as plain strings. To make the lack of filename expansion (globbing) more obvious, and to make relaunching the executed command by copy-pasting more convenient, such arguments are printed within quotes in the log. To use bash for globbing, please also use bash -c as explained above.
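As a side note, the standard-library shlex.split (a plain Python helper, not part of Universum) follows POSIX shell splitting rules and can be used to produce correctly split command lists from a single string:

```python
import shlex

# shlex.split turns a shell-style command line into the list form
# expected by the 'command' key, splitting on unquoted whitespace
print(shlex.split("build.sh --platform A"))
# ['build.sh', '--platform', 'A']

# quoted arguments survive as single list items, which matches the
# ["bash", "-c", "..."] pattern described above
print(shlex.split("bash -c 'echo -e hello'"))
# ['bash', '-c', 'echo -e hello']
```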

environment

Required environment variables, e.g. environment={"VAR1": "String", "VAR2": "123"}. Can be set at any step level, but re-declaring variables is not supported, so please make sure to mention every variable at most once.
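For illustration, a minimal configuration-file sketch declaring step-level variables might look like this (the step name, script, and variable names are hypothetical):

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([Step(
    name="Run with environment",
    command=["./run.sh"],
    # hypothetical variables made available to the launched process
    environment={"BUILD_MODE": "release", "VERBOSITY": "2"},
)])
```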

artifacts

Path to the file or directory to be copied to the working directory as an execution result immediately after the step finishes. Can contain shell-style pattern matching (e.g. "out/*.html"), including recursive wildcards (e.g. "out/**/index.html"). Unless stated otherwise (see --no-archive command-line parameter for details), artifact directories are copied as archives. If the artifact_prebuild_clean key is either absent or set to False and the stated artifacts are present in the downloaded sources, it is considered a failure and configuration execution will not proceed. If the required artifacts were not found at the end of the Universum run, it is also considered a failure. In the case of shell-style patterns, the build fails if no files or directories matching the pattern are found.

Note

Universum checks both the artifact itself and an archive with the same name, because it does not know in advance whether the step is going to create a file or a directory.
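A configuration-file sketch using a recursive wildcard as an artifact might look as follows (build command and paths are hypothetical):

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([Step(
    name="Build docs",
    command=["make", "html"],
    # recursive shell-style pattern; the build fails if nothing matches
    artifacts="out/**/index.html",
)])
```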

report_artifacts

Path to the special artifacts used for reporting (e.g. to Swarm). Unlike the artifacts key, report_artifacts are not obligatory and their absence is not considered a build failure. A directory cannot be stored as a separate artifact, so when using the --no-archive option, do not claim directories as report_artifacts. Please note that any file can be declared as an artifact, a report_artifact, or both. A file that is in both artifacts and report_artifacts will be mentioned in a report and will cause a build failure when missing.

Note

GitHub Actions doesn’t support this key.

artifact_prebuild_clean

A flag to signal that artifacts must be cleaned before the build. Cleaning of single files, directories, and sets of files defined by shell-style patterns is supported. By default, artifacts are not stored in VCS, and artifact presence before the build most likely means that the working directory was not cleaned after a previous build and therefore might influence build results. But sometimes deliverables have to be stored in VCS, and in this case, instead of stopping the build, they should simply be cleaned before it. This is where the artifact_prebuild_clean=True key is supposed to be used. This flag is ignored if both artifacts and report_artifacts are not set.
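A sketch of the VCS-stored-deliverable case described above (the artifact path and build command are hypothetical):

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([Step(
    name="Rebuild committed binary",
    command=["make", "dist"],
    artifacts="dist/app.bin",
    # 'dist/app.bin' is committed to VCS; clean the stale copy
    # before the build instead of failing on its presence
    artifact_prebuild_clean=True,
)])
```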

directory

Path to the current working directory for the launched process. An absent directory has the same meaning as an empty string passed as the directory value: the command will be launched from the project root directory. See https://universum.readthedocs.io/en/latest/configuring.html#execution-directory for details.

critical

A flag used in linear step execution, when the result of some step is critical for the execution of subsequent steps. If a step has the critical key set to True and that step fails, no further steps will be executed during this run. However, all background steps that have already started will be finished regardless of the critical step results.
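A sketch of a linear configuration relying on this flag (step names and commands are hypothetical): if "Build" fails, "Test" and "Deploy" are skipped.

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([
    # a failure here stops the whole run
    Step(name="Build", command=["make"], critical=True),
    Step(name="Test", command=["make", "test"]),
    Step(name="Deploy", command=["./deploy.sh"]),
])
```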

background

A flag used to signal that the current step should be executed independently, in parallel with all other steps. All logs from such steps are written to files, and the results of execution are collected at the end of the Universum run. The next step's execution begins immediately after a background step is started, without waiting for it to complete. Several background steps can be executed simultaneously.

finish_background

A flag used to signal that the current step should be executed only after all ongoing background steps (if any) have finished.
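The interplay of the two flags can be sketched as follows (step names and scripts are hypothetical): the two long-running jobs start in parallel, the lint step runs while they are in progress, and the report step waits for both.

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([
    Step(name="Long test A", command=["./test_a.sh"], background=True),
    Step(name="Long test B", command=["./test_b.sh"], background=True),
    # starts immediately, while A and B are still running
    Step(name="Quick lint", command=["make", "lint"]),
    # waits until both background steps have finished
    Step(name="Report", command=["./report.sh"], finish_background=True),
])
```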

code_report

A flag used to signal that the current step performs static or syntax analysis of the code. Usually set in conjunction with adding --result-file="${CODE_REPORT_FILE}" to the command arguments. Analyzers currently provided by Universum are pylint, svace, and uncrustify. See https://universum.readthedocs.io/en/latest/code_report.html for details.
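A sketch of a static-analysis step might look like this; treat the exact analyzer module path, flags, and file patterns as illustrative and check the linked code_report documentation for the real invocation:

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([Step(
    name="Pylint check",
    code_report=True,  # marks this step as an analysis step
    # ${CODE_REPORT_FILE} is a pseudo-variable substituted by Universum
    command=["python", "-m", "universum.analyzers.pylint",
             "--result-file", "${CODE_REPORT_FILE}",
             "--files", "*.py"],
)])
```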

pass_tag

A tag used to mark successful TeamCity builds. This tag can be set independently of the fail_tag value for each step. The value should be a string without spaces, acceptable to TeamCity as a tag. Every tag is added (if its condition matches) right after executing the build step it is set in, not at the end of the whole run. Not applicable to conditional steps.

fail_tag

A tag used to mark failed TeamCity builds. See pass_tag for details. Not applicable to conditional steps.

if_succeeded

Another Configuration that will be executed in case this step succeeds. Setting this parameter to a non-None value makes the current step conditional.

if_failed

Another Configuration that will be executed in case this step fails. Setting this parameter to a non-None value makes the current step conditional.
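A sketch of a conditional step built from these two keys (step names and commands are hypothetical): exactly one of the two branches runs, depending on the result of "Try build".

```python
from universum.configuration_support import Configuration, Step

on_success = Configuration([Step(name="Package", command=["make", "package"])])
on_failure = Configuration([Step(name="Collect logs", command=["./save_logs.sh"])])

configs = Configuration([Step(
    name="Try build",
    command=["make"],
    if_succeeded=on_success,  # non-None makes this step conditional
    if_failed=on_failure,
)])
```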

Each parameter is optional and is substituted with a falsy value if omitted.

Example of a simple Step construction:

>>> Step(foo='bar')
{'foo': 'bar'}
>>> step = Step(name='foo', command=['1', '2', '3'], _extras={'_extras': 'test', 'bool': True}, myvar=1)
>>> step
{'name': 'foo', 'command': ['1', '2', '3'], '_extras': {'_extras': 'test', 'bool': True}, 'myvar': 1}
>>> step['name']
'foo'
>>> step['_extras']
{'_extras': 'test', 'bool': True}
>>> step['myvar']
1
Example of using Configuration objects to wrap individual build steps:

>>> make = Configuration([Step(name="Make ", command=["make"], pass_tag="pass_")])
>>> target = Configuration([
...              Step(name="Linux", command=["--platform", "Linux"], pass_tag="Linux"),
...              Step(name="Windows", command=["--platform", "Windows"], pass_tag="Windows")
...          ])
>>> configs = make * target
>>> configs.dump()
"[{'name': 'Make Linux', 'command': 'make --platform Linux', 'pass_tag': 'pass_Linux'},\n{'name': 'Make Windows', 'command': 'make --platform Windows', 'pass_tag': 'pass_Windows'}]"

This means that the tags "pass_Linux" and "pass_Windows" will be sent to the TeamCity build.

Note

All paths specified in the command, artifacts, and directory parameters can be absolute or relative. All relative paths start from the project root (see get_project_root()).

__repr__() str[source]

This function simulates a dict-like representation for string output. This is useful when printing the contents of Configuration objects, as internally they wrap a list of Step objects.

Returns:

a dict-like string

>>> step = Step(name='foo', command=['bar'], my_var='baz')
>>> repr(step)
"{'name': 'foo', 'command': 'bar', 'my_var': 'baz'}"
__eq__(other: Any) bool[source]

This function simulates a dict-like equality check.

Parameters:

other – dict to compare values with, or Step object to check equality against

Returns:

True if other matches

>>> step1 = Step(name='foo', my_var='bar')
>>> step2 = Step(name='foo', my_var='bar')
>>> step3 = Step(name='foo', my_var='bar', critical=True)
>>> step1 == step1
True
>>> step1 == step2
True
>>> step1 == step3
False
>>> step1 == {'name': 'foo', 'my_var': 'bar'}
True
>>> step1 == {'name': 'foo', 'my_var': 'bar', 'critical': False}
True
>>> step1 == {'name': 'foo', 'my_var': 'bar', 'test': None}
True
>>> step1 == {'name': 'foo', 'my_var': 'bar', 'test': ''}
True
>>> step1 == {'name': 'foo', 'my_var': 'bar', 'test': ' '}
False
__getitem__(key: str) Any[source]

This function simulates dict-like legacy read access.

Note

It is recommended to use field-like access to non user-defined attributes of Step. This preserves data type information for mypy static analysis.

Parameters:

key – client-defined item

Returns:

client-defined value

>>> step = Step(name='foo', my_var='bar')
>>> step['name'] == step.name == 'foo'
True

Note that step['name'] has type 'Any', but step.name has type 'str'

>>> step['my_var']
'bar'
>>> step['test']
__setitem__(key: str, value: Any) None[source]

This function simulates dict-like legacy write access.

Note

It is recommended to use field-like access to non user-defined attributes of Step. This allows static analysis to catch possible type mismatch.

Parameters:
  • key – client-defined key

  • value – client-defined value

>>> import warnings
>>> def do_and_get_warnings(f):
...     with warnings.catch_warnings(record=True) as w:
...         warnings.simplefilter("always")
...         f()
...         return w
>>> def assign_legacy(o, k, v):
...     o[k] = v
>>> def assign_name_legacy(o, v):
...     o['name'] = v
>>> def assign_name_new(o, v):
...     o.name = v
>>> step = Step(name='foo', my_var='bar')
>>> do_and_get_warnings(lambda : assign_legacy(step, 'name', 'bar'))  
[<warnings.WarningMessage object at ...>]
>>> step['name']
'bar'
>>> do_and_get_warnings(lambda : assign_name_new(step, 'baz'))  
[]
>>> step['name']
'baz'
>>> do_and_get_warnings(lambda : assign_legacy(step, 'directory', 'foo'))  
[<warnings.WarningMessage object at ...>]
>>> do_and_get_warnings(lambda : assign_legacy(step, 'test', 42))
[]
>>> do_and_get_warnings(lambda : assign_legacy(step, '_extras', {'name': 'baz'}))
[]
>>> step
{'name': 'baz', 'directory': 'foo', 'my_var': 'bar', 'test': 42, '_extras': {'name': 'baz'}}
get(key: str, default: Any | None = None) Any[source]

This function simulates dict-like legacy read access.

Note

It is recommended to use field-like access to non user-defined attributes of Step. This preserves data type information for mypy static analysis.

Parameters:
  • key – client-defined item

  • default – value to return if key is absent in dict

Returns:

client-defined value

>>> import warnings
>>> def do_and_get_warnings(f):
...     with warnings.catch_warnings(record=True) as w:
...         warnings.simplefilter("always")
...         f()
...         return w
>>> step = Step(name='foo', my_var='bar', t1=None, t2=False)
>>> do_and_get_warnings(lambda : step.get('name', 'test'))  
[<warnings.WarningMessage object at ...>]

Note that step.get('name') has type 'Any', but step.name has type 'str'

>>> step.get('my_var', 'test')
'bar'
>>> step.get('my_var_2', 'test')
'test'
>>> step.get('command', 'test')
'test'
>>> step.get('t1') is None
True
>>> step.get('t2')
False
__add__(other: Step) Step[source]

This function defines the + operator for Step class objects by concatenating strings and the contents of dictionaries. Note that the critical attribute is always taken from the second operand.

Parameters:

other – a Step object

Returns:

new Step object, including all attributes from both self and other objects

>>> step1 = Step(name='foo', command=['foo'], critical=True, my_var1='foo')
>>> step2 = Step(name='bar', command=['bar'], background=True, my_var1='bar', my_var2='baz')
>>> step1 + step2
{'name': 'foobar', 'command': ['foo', 'bar'], 'background': True, 'my_var1': 'foobar', 'my_var2': 'baz'}
>>> step2 + step1
{'name': 'barfoo', 'command': ['bar', 'foo'], 'critical': True, 'background': True, 'my_var1': 'barfoo', 'my_var2': 'baz'}
replace_string(from_string: str, to_string: str) None[source]

Replace instances of a string, used for pseudo-variables

Parameters:
  • from_string – string to replace, e.g. ${CODE_REPORT_FILE}

  • to_string – value to put in place of from_string

>>> step = Step(name='foo test', command=['foo', 'baz', 'foobar'], myvar1='foo', myvar2='bar', myvar3=1)
>>> step.replace_string('foo', 'bar')
>>> step
{'name': 'foo test', 'command': ['bar', 'baz', 'barbar'], 'myvar1': 'bar', 'myvar2': 'bar', 'myvar3': 1}
>>> step = Step(artifacts='foo', report_artifacts='foo', directory='foo')
>>> step.replace_string('foo', 'bar')
>>> step
{'directory': 'bar', 'artifacts': 'bar', 'report_artifacts': 'bar'}
stringify_command() bool[source]

Concatenates components of a command into one element

Returns:

True if any of the command components contain spaces

>>> step = Step(name='stringify test', command=['foo', 'bar', '--baz'])
>>> step.stringify_command()
False
>>> step
{'name': 'stringify test', 'command': 'foo bar --baz'}
>>> step.stringify_command()
True
__hash__ = None
class Configuration(lst: List[Dict[str, Any]] | List[Step] | None = None)[source]

Configuration is a class for establishing project configurations. Each Configuration object wraps a list of build steps created either from pre-constructed Step objects or from the supplied dict data.

>>> cfg1 = Configuration([{"field1": "string"}])
>>> cfg1.configs
[{'field1': 'string'}]

The built-in method all() generates an iterable over all configuration dictionaries for further usage:

>>> for i in cfg1.all(): i
{'field1': 'string'}

The built-in method dump() will generate a printable string representation of the object. This string will be printed to console output:

>>> cfg1.dump()
"[{'field1': 'string'}]"

Adding two objects will extend the list of dictionaries:

>>> cfg2 = Configuration([{"field1": "line"}])
>>> for i in (cfg1 + cfg2).all(): i
{'field1': 'string'}
{'field1': 'line'}

While multiplying them will combine matching fields of the dictionaries:

>>> for i in (cfg1 * cfg2).all(): i
{'field1': 'stringline'}

When a field value is a list itself -

>>> cfg3 = Configuration([dict(field2=["string"])])
>>> cfg4 = Configuration([dict(field2=["line"])])

multiplying them will extend the inner list:

>>> for i in (cfg3 * cfg4).all(): i
{'field2': ['string', 'line']}
__eq__(other: Any) bool[source]

This function checks the wrapped configurations for a match.

Parameters:

other – a Configuration object or a list of Step objects

Returns:

True if stored configurations match

>>> l = [{'name': 'foo', 'critical': True}, {'name': 'bar', 'myvar': 'baz'}]
>>> v1 = Configuration(l)
>>> step1 = Step(name='foo', critical=True)
>>> step2 = Step(name='bar', myvar='baz')
>>> v2 = Configuration([step1]) + Configuration([step2])
>>> v3 = Configuration() + Configuration([l[0]]) + Configuration([step2])
>>> v1 == v1
True
>>> v1 == v2
True
>>> v1 == v3
True
>>> v1 == v1 + Configuration()
True
>>> v1 == v1 * 1
True
>>> v1 == v1 * Configuration()
True
>>> v1 == v2 + v3
False
__bool__() bool[source]

This function defines the truthiness of a Configuration object.

Returns:

True if the Configuration object is not empty

>>> cfg1 = Configuration()
>>> bool(cfg1)
False
>>> cfg2 = Configuration([])
>>> bool(cfg2)
False
>>> cfg3 = Configuration([{}])
>>> bool(cfg3)
True
__add__(other: Configuration | List[Step]) Configuration[source]

This function defines the + operator for Configuration class objects by concatenating their lists of dictionaries into one list. The order of list members in the resulting list is preserved: first all dictionaries from self, then all dictionaries from other.

Parameters:

other – a Configuration object or a list of project configurations to be added to self

Returns:

new Configuration object, including all configurations from both self and other objects

__mul__(other: Configuration | int) Configuration[source]

This function defines the * operator for Configuration class objects. The resulting object is created by combining every self list member with every other list member using the combine() function.

Parameters:

other – a Configuration object or an integer value to multiply self by

Returns:

new Configuration object, consisting of the list of combined configurations

all() Iterable[Step][source]

Function for iterating over configurations.

Returns:

iterable for all dictionary objects in Configuration list

dump(produce_string_command: bool = True) str[source]

Function for Configuration objects pretty printing.

Parameters:

produce_string_command – if set to False, prints “command” as list instead of string

Returns:

a user-friendly string representation of all configurations list

filter(checker: Callable[[Step], bool], parent: Step | None = None) Configuration[source]

This function is supposed to be called from the main script, not the configuration file. It uses the provided checker to find all configurations that pass the check, removing those that do not match the conditions.

Parameters:
  • checker – a function that returns True if configuration passes the filter and False otherwise

  • parent – an inner parameter for recursive usage; should be None when function is called from outside

Returns:

new Configuration object without configurations not matching checker conditions
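A sketch of main-script usage (the custom release_only key, step names, and commands are hypothetical): only steps carrying the custom key survive the filter.

```python
from universum.configuration_support import Configuration, Step

configs = Configuration([
    Step(name="Unit tests", command=["make", "test"]),
    Step(name="Sign package", command=["./sign.sh"], release_only=True),
])

# keep only the steps that carry the custom 'release_only' key
release_configs = configs.filter(lambda step: bool(step.get("release_only")))
```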

__hash__ = None
set_project_root(project_root: str) None[source]

Function to be called from the main script; not supposed to be used in the configuration file. Stores the generated project location for further usage. This function is needed because project sources will most likely be copied to a temporary directory at some automatically generated location, and the CI run will be performed there.

Parameters:

project_root – path to actual project root

get_project_root() str[source]

Function to be used in the configuration file. Inserts the actual project location after that location is generated. If the project root is not set, the function returns the current directory.

Returns:

actual project root
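A configuration-file sketch using this function to build a path relative to the run-time project root (the script location is hypothetical):

```python
import os.path

from universum.configuration_support import Configuration, Step, get_project_root

configs = Configuration([Step(
    name="Run helper script",
    # resolved against the actual project root at run time,
    # instead of hard-coding an absolute location
    command=["python", os.path.join(get_project_root(), "scripts", "helper.py")],
)])
```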