API reference



An AbstractDependency is a binary dependency of the JLL package. Dependencies are installed to ${prefix} in the build environment.

Concrete subtypes of AbstractDependency are

  • Dependency: a JLL package that is necessary both to build the package and to load the generated JLL package.
  • BuildDependency: a JLL package that is necessary only to build the package. This will not be a dependency of the generated JLL package.
  • HostBuildDependency: similar to BuildDependency, but it will install the artifact for the host platform, instead of that for the target platform.
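In a build recipe these three types typically appear together in a single dependencies vector. A minimal sketch, with illustrative JLL package names:

dependencies = [
    # Needed at build time and at runtime of the generated JLL package:
    Dependency("Zlib_jll"),
    # Needed only while building; not a dependency of the generated JLL:
    BuildDependency("LibFooHeaders_jll"),
    # A build tool that must run on the host platform, not the target:
    HostBuildDependency("CMake_jll"),
]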

An AbstractSource is something used as source to build the package. Sources are installed to ${WORKSPACE}/srcdir in the build environment.

Concrete subtypes of AbstractSource are:


A special platform to be used to build platform-independent tarballs, like those containing only header files. FileProduct is the only product type allowed with this platform.

ArchiveSource(url::String, hash::String; unpack_target::String = "")

Specify a remote archive in one of the supported archive formats (e.g., TAR or ZIP balls) to be downloaded from the Internet from url. hash is the 64-character SHA256 checksum of the file.

In the builder environment, the archive will be automatically unpacked to ${WORKSPACE}/srcdir, or in its subdirectory pointed to by the optional keyword unpack_target, if provided.
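For example, the following (with a placeholder URL and checksum; the hash must be the actual SHA256 of the archive) downloads a tarball and unpacks it into ${WORKSPACE}/srcdir/libfoo:

ArchiveSource("https://example.com/libfoo-1.0.0.tar.gz",
              "0000000000000000000000000000000000000000000000000000000000000000";
              unpack_target="libfoo")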


Define a binary dependency that is necessary only to build the package. The argument can be either a string with the name of the JLL package or a Pkg.PackageSpec.

Dependency(dep::Union{PackageSpec,String}, build_version; compat)

Define a binary dependency that is necessary to build the package and load the generated JLL package. The argument can be either a string with the name of the JLL package or a Pkg.PackageSpec.

The optional argument build_version can be used to specify the version of the dependency to be installed when building it.

The optional keyword argument compat can be used to specify a string for use in the Project.toml of the generated Julia package. If compat is non-empty and build_version is not passed, the latter defaults to the minimum version compatible with the compat specifier.
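For example, with a hypothetical version constraint:

Dependency("Zlib_jll"; compat="1.2.12")

Here build_version is not passed, so it defaults to the minimum version compatible with the "1.2.12" compat specifier.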

DirectorySource(path::String; target::String = basename(path), follow_symlinks=false)

Specify a local directory to mount from path.

The content of the directory will be mounted in ${WORKSPACE}/srcdir, or in its subdirectory pointed to by the optional keyword target, if provided. Symbolic links are replaced by a copy of the target when follow_symlinks is true.
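For example, to mount a local directory of patches (path illustrative) at ${WORKSPACE}/srcdir/patches:

DirectorySource("./bundled"; target="patches")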


Use docker as an execution engine; a reasonable backup for platforms that do not have user namespaces (e.g. macOS, Windows).


An ExecutableProduct is a Product that represents an executable file.

On all platforms, an ExecutableProduct checks for existence of the file. On non-Windows platforms, it will check for the executable bit being set. On Windows platforms, it will check that the file ends with ".exe" (adding the extension automatically if it is not already present).

ExecutableProduct(binname, varname::Symbol, dir_path="bin")

Declares an ExecutableProduct that points to an executable located within the prefix. binname specifies the basename of the executable, varname is the name of the variable in the JLL package that can be used to call into the library. By default, the library is searched in the bindir, but you can specify a different directory within the prefix with the dir_path argument.
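For example, with an illustrative name:

ExecutableProduct("fooify", :fooify)

This is satisfied by ${prefix}/bin/fooify on Unix-like platforms, or ${prefix}/bin/fooify.exe on Windows.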

FileProduct(path::AbstractString, varname::Symbol, dir_path = nothing)

Declares a FileProduct that points to a file located relative to the root of a Prefix; the file must simply exist to be satisfied.

FileSource(url::String, hash::String; filename::String = basename(url))

Specify a remote file to be downloaded from the Internet from url. hash is the 64-character SHA256 checksum of the file.

In the builder environment, the file will be saved under ${WORKSPACE}/srcdir with the same name as the basename of the originating URL, unless the keyword argument filename is specified.


A FrameworkProduct is a Product that encapsulates a macOS Framework. It behaves mostly as a LibraryProduct for now, but is a distinct type. This implies that for cross-platform builds where a library is provided as a Framework on macOS and as a normal library on other platforms, two calls to BinaryBuilder's build_tarballs are needed: one with the LibraryProduct and all non-macOS platforms, and one with the FrameworkProduct and the macOS platforms.

FrameworkProduct(fwnames, varname::Symbol)

Declares a macOS FrameworkProduct that points to a framework located within the prefix, with a name containing fwname appended with .framework. As an example, given that fwname is equal to QtCore, this would be satisfied by the following path:

GitSource(url::String, hash::String; unpack_target::String = "")

Specify a remote Git repository to clone from url. hash is the 40-character SHA1 revision to check out after cloning.

The repository will be cloned in ${WORKSPACE}/srcdir, or in its subdirectory pointed to by the optional keyword unpack_target, if provided.
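For example (placeholder URL and revision; the hash must be a real 40-character SHA1):

GitSource("https://github.com/example/libfoo.git",
          "0000000000000000000000000000000000000000")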


Define a binary dependency that is necessary only to build the package. Unlike BuildDependency, the artifact for the host platform will be installed, instead of the one for the target platform.

The argument can be either a string with the name of the JLL package or a Pkg.PackageSpec.


A LibraryProduct is a special kind of Product that not only needs to exist, but needs to be dlopen()'able. You must know which directory the library will be installed to, and its name, e.g. to build a LibraryProduct that refers to "/lib/libnettle.so", the "directory" would be "/lib", and the "libname" would be "libnettle". Note that a LibraryProduct can support multiple libnames, as some software projects change the libname based on the build configuration.

LibraryProduct(libname, varname::Symbol; dir_paths=String[],
               dlopen_flags=Symbol[], dont_dlopen=false)

Declares a LibraryProduct that points to a library located within the prefix. libname specifies the basename of the library, varname is the name of the variable in the JLL package that can be used to call into the library. By default, the library is searched in the libdir, but you can add other directories within the prefix to the dir_paths keyword argument. You can specify the flags to pass to dlopen as a vector of Symbols with the dlopen_flags keyword argument. If the library should not be dlopen'ed automatically by the JLL package, set dont_dlopen=true.

For example, if the libname is libnettle, this would be satisfied by the following paths:

  • lib/libnettle.so or lib/libnettle.so.6 on Linux and FreeBSD;
  • lib/libnettle.6.dylib on macOS;
  • lib/libnettle-6.dll on Windows.

Libraries matching the search pattern are rejected if they are not dlopen()'able.

If you are unsure what value to use for libname, you can use Base.BinaryPlatforms.parse_dl_name_version:

julia> using Base.BinaryPlatforms

julia> parse_dl_name_version("sfml-audio-2.dll", "windows")[1]
"sfml-audio"

If the library would have different basenames on different operating systems (e.g., libz.so on Linux and FreeBSD, libz.dylib on macOS, and zlib.dll on Windows), libname can also be a vector of Strings with the different alternatives:

LibraryProduct(["libz", "zlib"], :libz)

A Product is an expected result after building or installation of a package.

Examples of Products include LibraryProduct, FrameworkProduct, ExecutableProduct and FileProduct. All Product types must define the following minimum set of functionality:

  • locate(::Product): given a Product, locate it within the wrapped Prefix returning its location as a string

  • satisfied(::Product): given a Product, determine whether it has been successfully satisfied (e.g. it is locateable and it passes all callbacks)

  • variable_name(::Product): return the variable name assigned to a Product

  • repr(::Product): Return a representation of this Product, useful for auto-generating source code that constructs Products, if that's your thing.


A UserNSRunner represents an "execution context", an object that bundles all necessary information to run commands within the container that contains our crossbuild environment. Use run() to actually run commands within the UserNSRunner, and runshell() as a quick way to get an interactive shell within the crossbuild environment.


Building large dependencies can take a lot of time. This state object captures all relevant state of this function. It can be passed back to the function to resume where we left off. This can aid debugging when code changes are necessary. It also holds all necessary metadata such as input/output streams.




Strip out any tags that are not the basic annotations like libc and call_abi.

accept_apple_sdk(ins::IO, outs::IO) -> Bool

Ask the user whether they accept the terms of the macOS SDK, and return a boolean with their choice. Write messages to outs, read input from ins.

choose_shards(p::AbstractPlatform; rootfs_build, ps_build, GCC_builds,
              LLVM_builds, archive_type)

This method chooses, given a Platform, which shards to download, extract and mount, returning a list of CompilerShard objects. At the moment, this always consists of four shards, but that may not always be the case.


On Linux, the user id inside of the docker container doesn't correspond to ours on the outside, so permissions get all kinds of screwed up. To fix this, we have to chown -R $(id -u):$(id -g) $prefix, which really sucks, but is still better than nothing. This is why we prefer the UserNSRunner on Linux.

collect_jlls(manifest::Dict, dependencies::Vector{<:AbstractString})

Return a Set of all JLL packages in the manifest with dependencies being the list of direct dependencies of the environment.

             compressor_stream = GzipCompressorStream,
             level::Int = 9,
             extension::AbstractString = ".gz",
             verbose::Bool = false)

Compress all files in dir using the specified compressor_stream with compression level equal to level, appending extension to the filenames. Remove the original uncompressed files at the end.

download_all_artifacts(; verbose::Bool=false)

Helper function to download all shards/helper binaries so that no matter what happens, you don't need an internet connection to build your precious, precious binaries.

download_source(source::AbstractSource; verbose::Bool = false)

Download the given source. All downloads are cached within the BinaryBuilder downloads storage directory.


Return the path to a file that, if it exists, indicates that the user agrees to download the macOS SDK. The file is automatically created when the package is loaded if the environment variable BINARYBUILDER_AUTOMATIC_APPLE is set to "true".

expand_cxxstring_abis(p::AbstractPlatform; skip=Sys.isbsd)

Given a Platform, returns an array of Platforms with a spread of identical entries with the exception of the cxxstring_abi tag within the Platform object. This is used to take, for example, a list of supported platforms and expand them to include multiple GCC versions for the purposes of ABI matching.

If the given Platform already specifies a cxxstring_abi (as opposed to nothing) only that Platform is returned. If skip is a function for which skip(platform) evaluates to true, the given platform is not expanded. By default FreeBSD and macOS platforms are skipped, due to their lack of a dependence on libstdc++ and not needing this compatibility shim.
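For example, a Linux platform with no cxxstring_abi tag is expanded into one entry per supported C++ string ABI:

julia> using BinaryBuilderBase

julia> expand_cxxstring_abis(Platform("x86_64", "linux"))
2-element Vector{Platform}:
 Linux x86_64 {cxxstring_abi=cxx03, libc=glibc}
 Linux x86_64 {cxxstring_abi=cxx11, libc=glibc}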


Given a Platform, returns an array of Platforms with a spread of identical entries with the exception of the libgfortran_version tag within the Platform. This is used to take, for example, a list of supported platforms and expand them to include multiple GCC versions for the purposes of ABI matching. If the given Platform already specifies a libgfortran_version (as opposed to nothing) only that Platform is returned.


Given a Platform, returns a vector of Platforms with differing march attributes as specified by the ARCHITECTURE_FLAGS mapping. If the given Platform already has a march tag specified, only that platform is returned.

julia> using BinaryBuilderBase

julia> expand_microarchitectures(Platform("x86_64", "freebsd"))
4-element Vector{Platform}:
 FreeBSD x86_64 {march=x86_64}
 FreeBSD x86_64 {march=avx}
 FreeBSD x86_64 {march=avx2}
 FreeBSD x86_64 {march=avx512}

julia> expand_microarchitectures(Platform("armv7l", "linux"))
2-element Vector{Platform}:
 Linux armv7l {call_abi=eabihf, libc=glibc, march=armv7l}
 Linux armv7l {call_abi=eabihf, libc=glibc, march=neonvfpv4}

julia> expand_microarchitectures(Platform("aarch64", "linux"))
4-element Vector{Platform}:
 Linux aarch64 {libc=glibc, march=armv8_0}
 Linux aarch64 {libc=glibc, march=armv8_4_crypto_sve}
 Linux aarch64 {libc=glibc, march=armv8_2_crypto}
 Linux aarch64 {libc=glibc, march=armv8_1}

julia> expand_microarchitectures(Platform("i686", "windows"))
2-element Vector{Platform}:
 Windows i686 {march=pentium4}
 Windows i686 {march=prescott}

Expand all platforms in the given vector with the supported microarchitectures.

julia> using BinaryBuilderBase

julia> expand_microarchitectures(filter!(p -> Sys.islinux(p) && libc(p) == "glibc", supported_platforms()))
13-element Vector{Platform}:
 Linux i686 {libc=glibc, march=pentium4}
 Linux i686 {libc=glibc, march=prescott}
 Linux x86_64 {libc=glibc, march=x86_64}
 Linux x86_64 {libc=glibc, march=avx}
 Linux x86_64 {libc=glibc, march=avx2}
 Linux x86_64 {libc=glibc, march=avx512}
 Linux aarch64 {libc=glibc, march=armv8_0}
 Linux aarch64 {libc=glibc, march=armv8_4_crypto_sve}
 Linux aarch64 {libc=glibc, march=armv8_2_crypto}
 Linux aarch64 {libc=glibc, march=armv8_1}
 Linux armv7l {call_abi=eabihf, libc=glibc, march=armv7l}
 Linux armv7l {call_abi=eabihf, libc=glibc, march=neonvfpv4}
 Linux powerpc64le {libc=glibc, march=power8}
gcc_version(p::AbstractPlatform, GCC_builds::Vector{GCCBuild};

Returns the closest matching GCC version number for the given particular platform, from the given set of options. The compiler ABI and the microarchitecture of the platform will be taken into account. If no match is found, returns an empty list. If the keyword argument llvm_version is passed, it is used to filter the version of GCC for FreeBSD platforms.

This method assumes that the compiler ABI of the platform represents a platform that binaries will be run on, and thus versions are always rounded down; e.g. if the platform supports a libstdc++ version that corresponds to GCC 5.1.0, but the only GCC versions available to be picked from are 4.8.5 and 5.2.0, it will return 4.8.5, as binaries compiled with that version will run on this platform, whereas binaries compiled with 5.2.0 may not.

generate_compiler_wrappers!(platform::AbstractPlatform; bin_path::AbstractString,
                            host_platform::AbstractPlatform = Platform("x86_64", "linux"; libc = "musl", cxxstring_abi = "cxx11"),
                            compilers::Vector{Symbol} = [:c],
                            allow_unsafe_flags::Bool = false,
                            lock_microarchitecture::Bool = true)

We generate a set of compiler wrapper scripts within our build environment to force all build systems to honor the necessary sets of compiler flags to build for our systems. Note that while platform_envs() sets many environment variables, those values are intended to be optional/overridable. These values, while still overridable by invoking a compiler binary directly (e.g. /opt/{target}/bin/{target}-gcc), are much more difficult to override, as the flags embedded in these wrappers are absolutely necessary, and even simple programs will not compile without them.

generate_per_uid_squashfs(cs, new_uid = getuid())

In order for the sandbox to work well, we need to have the uids of the squashfs images match the uid of the current unprivileged user. Unfortunately there is no mount-time option to do this for us. Fortunately, squashfs is simple enough that if the ID table is uncompressed, we can just manually patch the uids to be what we need. This function performs this operation, by rewriting all UIDs and GIDs to the given new_uid (which defaults to the current user's UID).

get_concrete_platform(platform::AbstractPlatform;
                      preferred_gcc_version = nothing,
                      preferred_llvm_version = nothing,
                      compilers = nothing)

Return the concrete platform for the given platform based on the GCC compiler ABI. The set of shards is chosen by the keyword arguments (see choose_shards).

get_concrete_platform(platform::Platform, shards::Vector{CompilerShard})

Return the concrete platform for the given platform based on the GCC compiler ABI in the shards.

import_docker_image(rootfs::CompilerShard; verbose::Bool = false)

Checks to see if the given rootfs has been imported into docker yet; if it hasn't, then do so, so that we can run things like:

docker run -ti binarybuilder_rootfs:v2018.08.27 /bin/bash

Which, after all, is the foundation upon which this whole doodad is built.

is_ecryptfs(path::AbstractString; verbose::Bool=false)

Checks to see if the given path (or any parent directory) is placed upon an ecryptfs mount. This is known not to work on current kernels, see this bug for more details: https://bugzilla.kernel.org/show_bug.cgi?id=197603

This method returns whether it is encrypted or not, and what mountpoint it used to make that decision.

is_mounted(cs::CompilerShard, build_prefix::String)

Return true if the given shard is mounted. Uses run() so will error out if something goes awry.

libdirs(prefix::Prefix, platform = HostPlatform())

Returns the library directories for the given prefix (note that this differs between unix systems and windows systems, and between 32- and 64-bit systems).

locate(ep::ExecutableProduct, prefix::Prefix;
       platform::AbstractPlatform = HostPlatform(),
       verbose::Bool = false,
       isolate::Bool = false)

If the given executable file exists and is executable, return its path.

On all platforms, an ExecutableProduct checks for existence of the file. On non-Windows platforms, it will check for the executable bit being set. On Windows platforms, it will check that the file ends with ".exe" (adding the extension automatically if it is not already present).

locate(fp::FileProduct, prefix::Prefix;
       platform::AbstractPlatform = HostPlatform(),
       verbose::Bool = false,
       isolate::Bool = false)

If the given file exists, return its path. The platform and isolate arguments are ignored here, but included for uniformity. For ease of use, we support a limited number of custom variable expansions such as ${target} and ${nbits}, so that the detection of files within target-specific folders named things like /lib32/i686-linux-musl is simpler.

locate(lp::LibraryProduct, prefix::Prefix;
       verbose::Bool = false,
       platform::AbstractPlatform = HostPlatform())

If the given library exists (under any reasonable name) and is dlopen()able, (assuming it was built for the current platform) return its location. Note that the dlopen() test is only run if the current platform matches the given platform keyword argument, as cross-compiled libraries cannot be dlopen()ed on foreign platforms.

logdir(prefix::Prefix; subdir::AbstractString="")

Returns the logs directory for the given prefix. If subdir is a non-empty string, it is appended to the logdir of the given prefix.


Return the location this compiler shard should be mounted at. We basically analyze the name and platform of this shard and return a path based on that.

mount(cs::CompilerShard, build_prefix::String)

Mount a compiler shard, if possible. Uses run() so will error out if something goes awry. Note that this function only does something when using a .squashfs shard, with a UserNS or Docker runner, on Linux. All other combinations of shard archive type, runner and platform result in a no-op from this function.

package(prefix::Prefix, output_base::AbstractString,
        platform::AbstractPlatform = HostPlatform(),
        verbose::Bool = false, force::Bool = false)

Build a tarball of the prefix, storing the tarball at output_base, appending a version number, a platform-dependent suffix and a file extension. If no platform is given, defaults to the current platform. Returns the full path to the generated tarball, its SHA256 hash, and its git tree SHA1.


Given a platform, generate a Dict representing all the environment variables to be set within the build environment to force compiles toward the defined target architecture. Examples of things set are PATH, CC, RANLIB, as well as nonstandard things like target.

preferred_cxxstring_abi(platform::AbstractPlatform, shard::CompilerShard;
                        gcc_builds::Vector{GCCBuild} = available_gcc_builds)

Return the C++ string ABI preferred by the given platform or GCCBootstrap shard.

preferred_libgfortran_version(platform::AbstractPlatform, shard::CompilerShard;
                              gcc_builds::Vector{GCCBuild} = available_gcc_builds)

Return the libgfortran version preferred by the given platform or GCCBootstrap shard.

runshell(platform::AbstractPlatform = HostPlatform())

Launch an interactive shell session within the user namespace, with environment setup to target the given platform.

satisfied(p::Product, prefix::Prefix;
          platform::AbstractPlatform = HostPlatform(),
          verbose::Bool = false,
          isolate::Bool = false)

Given a Product, return true if that Product is satisfied, e.g. whether a file exists that matches all criteria set up for that Product. If isolate is set to true, all checks will be isolated from the main Julia process in the event that dlopen()'ing a library might cause issues.

setup_dependencies(prefix::Prefix, dependencies::Vector{PackageSpec}, platform::AbstractPlatform; verbose::Bool = false)

Given a list of JLL package specifiers, install their artifacts into the build prefix. The artifacts are installed into the global artifact store, then copied into a temporary location, then finally symlinked into the build prefix. This allows us to (a) save download bandwidth by not downloading the same artifacts over and over again, (b) maintain separation in the event of catastrophic containment failure, avoiding hosing the main system if a build script decides to try to modify the dependent artifact files, and (c) keep a record of what files are a part of dependencies as opposed to the package being built, in the form of symlinks to a specific artifacts directory.

setup_workspace(build_path::String, sources::Vector{SetupSource};
                verbose::Bool = false)

Sets up a workspace within build_path, creating the directory structure needed by further steps, unpacking the source within build_path, and defining the environment variables that will be defined within the sandbox environment.

This method returns the Prefix to install things into, and the runner that can be used to launch commands within this workspace.


Return the path to this shard on-disk; for unpacked shards, this is a directory. For squashfs shards, this is a file. This will not cause a shard to be downloaded.


Return the list of supported platforms as an array of Platforms. These are the platforms we officially support building for, if you see a mapping in get_shard_hash() that isn't represented here, it's probably because that platform is still considered "in beta".

Platforms can be excluded from the list by specifying an array of platforms to exclude, e.g. supported_platforms(exclude=[Platform("i686", "windows"), Platform("x86_64", "windows")]), or by passing a function that returns true for the platforms to be excluded.
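For example, a predicate can be used to exclude all Windows platforms at once:

supported_platforms(exclude=Sys.iswindows)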


Create a temporary prefix, passing the prefix into the user-defined function so that build/packaging operations can occur within the temporary prefix, which is then cleaned up after all operations are finished. If the path provided exists already, it will be deleted.

Usage example:

out_path = abspath("./libfoo")
temp_prefix() do p
    # <insert build steps here>

    # tarball up the built package
    tarball_path, tarball_hash = package(p, out_path)
end

On Linux systems, return the strings returned by the uname() function in libc.

unmount(cs::CompilerShard, build_prefix::String)

Unmount a compiler shard from a given build prefix, if possible. Uses run() so will error out if something goes awry. Note that this function only does something when using a squashfs shard on Linux. All other combinations of shard archive type and platform result in a no-op.

autobuild(dir::AbstractString, src_name::AbstractString,
          src_version::VersionNumber, sources::Vector,
          script::AbstractString, platforms::Vector,
          products::Vector, dependencies::Vector;
          verbose = false, debug = false,
          skip_audit = false, ignore_audit_errors = true,
          autofix = true, code_dir = nothing,
          meta_json_file = nothing, require_license = true, kwargs...)

Runs the boilerplate code to download, build, and package a source package for a list of platforms. This method takes a veritable truckload of arguments; here are the relevant actors, broken down in brief:

  • dir: the root of the build; products will be placed within dir/products, and mountpoints will be placed within dir/build/.

  • src_name: the name of the source package being built; it also determines the name of the built tarballs.

  • src_version: the version of the source package.

  • platforms: a list of platforms to build for.

  • sources: a vector of all sources to download and unpack before building begins, as AbstractSources.

  • script: a string representing a shell script to run as the build.

  • products: the list of Products which shall be built.

  • dependencies: a vector of JLL dependency packages as AbstractDependency that should be installed before building begins.

  • verbose: Enable verbose mode. What did you expect?

  • debug: cause a failed build to drop into an interactive shell so that the build can be inspected easily.

  • skip_audit: disable the typical audit that occurs at the end of a build.

  • ignore_audit_errors: do not kill a build even if a problem is found.

  • autofix: give BinaryBuilder permission to automatically fix issues it finds during audit passes. Highly recommended.

  • code_dir: sets where autogenerated JLL packages will be put.

  • require_license: enables a special audit pass that requires licenses to be installed by all packages.
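Putting these pieces together, a minimal invocation might look like the following sketch (package name, URL, revision, and build script are all illustrative; most recipes call the higher-level build_tarballs, which forwards to autobuild):

using BinaryBuilder

sources = [GitSource("https://github.com/example/libfoo.git",
                     "0000000000000000000000000000000000000000")]

script = raw"""
cd ${WORKSPACE}/srcdir/libfoo
./configure --prefix=${prefix} --host=${target}
make -j${nproc}
make install
"""

products = [LibraryProduct("libfoo", :libfoo)]
dependencies = [Dependency("Zlib_jll")]

autobuild(pwd(), "libfoo", v"1.0.0", sources, script,
          supported_platforms(), products, dependencies; verbose=true)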

detect_cxxstring_abi(oh::ObjectHandle, platform::AbstractPlatform)

Given an ObjectFile, examine its symbols to discover which (if any) C++11 std::string ABI it's using. We do this by scanning the list of exported symbols, triggering off of instances of St7__cxx11 or _ZNSs to give evidence toward a constraint on cxx11, cxx03 or neither.

detect_libstdcxx_version(oh::ObjectHandle, platform::AbstractPlatform)

Given an ObjectFile, examine its dynamic linkage to discover which (if any) libstdc++ it's linked against. The major SOVERSION will determine which GCC version we're restricted to.

analyze_instruction_set(oh::ObjectHandle, platform::AbstractPlatform; verbose::Bool = false)

Analyze the instructions within the binary located at the given path for which minimum instruction set it requires, taking note of groups of instruction sets used such as avx, sse4.2, i486, etc....

Some binary files (such as libopenblas) contain multiple versions of functions, internally determining which version to call by using the cpuid instruction to determine processor support. In an effort to detect this, we make note of any usage of the cpuid instruction, disabling our minimum instruction set calculations if such an instruction is found, and notifying the user of this if verbose is set to true.

Note that this function only really makes sense for x86/x64 binaries. Don't run this on armv7l, aarch64, ppc64le etc... binaries and expect it to work.

audit(prefix::Prefix, src_name::AbstractString = "";
                      platform::AbstractPlatform = HostPlatform(),
                      verbose::Bool = false,
                      silent::Bool = false,
                      autofix::Bool = false,
                      has_csl::Bool = true,
                      require_license::Bool = true,

Audits a prefix to attempt to find deployability issues with the binary objects that have been installed within. This auditing will check for relocatability issues such as dependencies on libraries outside of the current prefix, usage of advanced instruction sets such as AVX2 that may not be usable on many platforms, linkage against newer glibc symbols, etc...

This method is still a work in progress, only some of the above list is actually implemented, be sure to actually inspect Auditor.jl to see what is and is not currently in the realm of fantasy.

check_license(prefix, src_name; verbose::Bool = false, silent::Bool = false)

Check that there are license files for the project called src_name in the prefix.

collect_files(path::AbstractString, predicate::Function = f -> true)

Find all files that satisfy predicate() when the full path to that file is passed in, returning the list of file paths.
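For example, to collect all log files under a prefix (path illustrative); the predicate receives the full path of each file:

collect_files("/workspace/destdir", f -> endswith(f, ".log"))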


Return a (sorted) list of compatible microarchitectures, starting from the most compatible to the most highly specialized. If no microarchitecture is specified within p, returns the most generic microarchitecture possible for the given architecture.

detect_libgfortran_version(oh::ObjectHandle, platform::AbstractPlatform)

Given an ObjectFile, examine its dynamic linkage to discover which (if any) libgfortran it's linked against. The major SOVERSION will determine which GCC version we're restricted to.

instruction_mnemonics(path::AbstractString, platform::AbstractPlatform)

Dump a binary object with objdump, returning a list of instruction mnemonics for further analysis with analyze_instruction_set().

Note that this function only really makes sense for x86/x64 binaries. Don't run this on armv7l, aarch64, ppc64le etc... binaries and expect it to work.

This function returns the list of mnemonics as well as the counts of each, binned by the mapping defined within instruction_categories.

is_for_platform(h::ObjectHandle, platform::AbstractPlatform)

Returns true if the given ObjectHandle refers to an object of the given platform; e.g. if the given platform is for AArch64 Linux, then h must be an ELFHandle with h.header.e_machine set to ELF.EM_AARCH64.

In particular, this method and platform_for_object() both exist because the latter is not smart enough to deal with :glibc and :musl yet.

minimum_march(counts::Dict, p::AbstractPlatform)

This function returns the minimum instruction set required, depending on whether the object file being pointed to is a 32-bit or 64-bit one:

  • For 32-bit object files, this returns one of ["i686", "prescott"]

  • For 64-bit object files, this returns one of ["x86_64", "avx", "avx2", "avx512"]


Returns the platform the given ObjectHandle should run on. E.g. if the given ObjectHandle is an x86_64 Linux ELF object, this function will return Platform("x86_64", "linux"). This function does not yet distinguish between different libc's such as :glibc and :musl.


We require that all shared libraries are accessible on disk through their SONAME (if it exists). While this is almost always true in practice, it doesn't hurt to make doubly sure.

translate_symlinks(root::AbstractString; verbose::Bool=false)

Walks through the root directory given within root, finding all symlinks that point to an absolute path within root, and rewriting them to be a relative symlink instead, increasing relocatability.

update_linkage(prefix::Prefix, platform::AbstractPlatform, path::AbstractString,
               old_libpath, new_libpath; verbose::Bool = false)

Given a binary object located at path within prefix, update its dynamic linkage to point to new_libpath instead of old_libpath. This is done using a tool within the cross-compilation environment such as install_name_tool on macOS or patchelf on Linux. Windows platforms are completely skipped, as they do not encode paths or RPaths within their executables.


Walks through the given root directory, finding broken symlinks and warning the user about them. This is used to catch instances such as a build recipe copying a symlink that points to a dependency; by doing so, it implicitly breaks relocatability.

clone(url::String, source_path::String)

Clone a git repository hosted at url into source_path, with a progress bar displayed to stdout.


Ask the user where the source code is coming from, then download and record the relevant parameters, returning the source url, the local path it is stored at after download, and a hash identifying the version of the code. In the case of a git source URL, the hash will be a git treeish identifying the exact commit used to build the code; in the case of a tarball, it is the sha256 hash of the tarball itself.
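For the tarball case, the recorded hash is an ordinary SHA-256 digest, which can be computed with Julia's SHA standard library (tarball_hash is an illustrative helper name, not part of the API):

```julia
using SHA

# Compute the 64-character SHA-256 checksum of a downloaded tarball.
tarball_hash(path::AbstractString) = bytes2hex(sha256(read(path)))
```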

edit_script(state::WizardState, script::AbstractString)

For consistency (and security), use the sandbox for editing a script, launching vi within an interactive session to edit a buildscript.

interactive_build(state::WizardState, prefix::Prefix,
                  ur::Runner, build_path::AbstractString)

Runs the interactive shell for building, then captures the bash history to save
reproducible steps for building this source. Shared between steps 3 and 5.

match_files(state::WizardState, prefix::Prefix,
            platform::AbstractPlatform, files::Vector; silent::Bool = false)

Inspects all binary files within a prefix, matching them with a given list of files, complaining if there are any files that are not properly matched and returning the set of normalized names that were not matched, or an empty set if all names were properly matched.


Given a filename, normalize it, stripping out extensions. E.g. the file path "foo/libfoo.tar.gz" would get mapped to "libfoo".
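A minimal sketch of that normalization (normalize_name is an illustrative name; the real auditor logic handles additional cases such as versioned library suffixes):

```julia
# Strip the directory part and all extensions from a file path,
# e.g. "foo/libfoo.tar.gz" -> "libfoo".
function normalize_name(path::AbstractString)
    name = basename(path)
    while true
        stripped, ext = splitext(name)
        isempty(ext) && return name
        name = stripped
    end
end
```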


Pick the first platform to run on. We prefer Linux x86_64 because that's generally the host platform, so it's usually easiest. After that, we choose by the following preferences:

  • OS (in order): Linux, Windows, OSX
  • Architecture (in order): x86_64, i686, aarch64, powerpc64le, armv7l
  • The first platform remaining after this selection

provide_hints(state::WizardState, path::AbstractString)

Given an unpacked source directory, provide hints on how a user might go about building the binary bounty they so richly desire.


It all starts with a single step, the unabashed ambition to leave your current stability and engage with the universe on a quest to create something new, beautiful and unforeseen. It all ends with compiler errors.

This step selects the relevant platform(s) for the built binaries.


Starts initial build for Linux x86_64, which is our initial test target platform. Sources that build properly for this platform continue on to attempt builds for more complex platforms.

step3_interactive(state::WizardState, prefix::Prefix, platform::AbstractPlatform,
                  ur::Runner, build_path::AbstractString)

The interactive portion of step3, moving on to either rebuild with an edited script or proceed to step 4.

step4(state::WizardState, ur::Runner, platform::AbstractPlatform,
      build_path::AbstractString, prefix::Prefix)

The fourth step selects the build products after the first build is done.

with_gitcreds(f, username::AbstractString, password::AbstractString)

Calls f with an LibGit2.UserPasswordCredential object as an argument, constructed from the username and password values. with_gitcreds ensures that the credentials object gets properly shredded after it's no longer necessary. E.g.:

    with_gitcreds(user, token) do creds
        LibGit2.clone("https://github.com/foo/bar.git", "bar"; credentials=creds)
    end


Return the relative path within a Yggdrasil clone where this project (given its name) would be stored. This is useful for things like generating the build_tarballs.jl file and checking to see if it already exists, etc...

Note that we do not allow case-ambiguities within Yggdrasil; we check for this using the utility function case_insensitive_file_exists(path).

yn_prompt(state::WizardState, question::AbstractString, default = :y)

Perform a [Y/n] or [y/N] question loop, using default to choose between the prompt styles, and looping until a proper response (e.g. "y", "yes", "n" or "no") is received.
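A hypothetical sketch of such a loop, parameterized over an input stream so it can be exercised non-interactively (the real function reads from the wizard's terminal state; yn_loop is an illustrative name):

```julia
# Loop until the stream yields "y"/"yes", "n"/"no", or an empty line
# (which selects the default).  Case-insensitive.
function yn_loop(io::IO, question::AbstractString, default::Symbol = :y)
    prompt = default == :y ? " [Y/n] " : " [y/N] "
    while true
        print(question, prompt)
        answer = lowercase(strip(readline(io)))
        isempty(answer) && return default
        answer in ("y", "yes") && return :y
        answer in ("n", "no") && return :n
    end
end
```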


Command Line

build_tarballs(ARGS, src_name, src_version, sources, script, platforms,
               products, dependencies; kwargs...)

This should be the top-level function called from a build_tarballs.jl file. It takes in the information baked into a build_tarballs.jl file such as the sources to download, the products to build, etc... and will automatically download, build and package the tarballs, generating a build.jl file when appropriate.

Generally, ARGS should be the top-level Julia ARGS command-line arguments object. build_tarballs does some rudimentary parsing of the arguments. To see what it can do, you can call it with --help in the ARGS or see the Command Line section in the manual.

The kwargs are passed on to autobuild; see there for a list of supported ones. A few additional keyword arguments are accepted:

  • julia_compat can be set to a version string which is used to set the supported Julia version in the [compat] section of the Project.toml of the generated JLL package. The default value is "1.0".

  • lazy_artifacts sets whether the artifacts should be lazy.

  • init_block may be set to a string containing Julia code; if present, this code will be inserted into the initialization path of the generated JLL package. This can for example be used to invoke an initialization API of a shared library.


The init_block keyword argument is experimental and may be removed in a future version of this package. Please use it sparingly.
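A minimal, illustrative build_tarballs.jl recipe fragment might look like the following; the project name, URL, checksum, and dependency below are placeholders, not a real build:

```julia
using BinaryBuilder

name = "Foo"
version = v"1.0.0"

# Placeholder source: URL and SHA-256 checksum are not real.
sources = [
    ArchiveSource("https://example.com/foo-1.0.0.tar.gz",
                  "0000000000000000000000000000000000000000000000000000000000000000"),
]

# Bash script run inside the build environment.
script = raw"""
cd ${WORKSPACE}/srcdir/foo-1.0.0
./configure --prefix=${prefix} --host=${target}
make -j${nproc}
make install
"""

platforms = supported_platforms()
products = [LibraryProduct("libfoo", :libfoo)]
dependencies = [Dependency("Zlib_jll")]

build_tarballs(ARGS, name, version, sources, script, platforms,
               products, dependencies; julia_compat="1.6")
```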


The build_tarballs function also parses command line arguments. The syntax is described in the --help output:

Usage: build_tarballs.jl [target1,target2,...] [--help]
                         [--verbose] [--debug]
                         [--deploy] [--deploy-bin] [--deploy-jll]
                         [--register] [--meta-json]

    targets             By default `build_tarballs.jl` will build a tarball
                        for every target within the `platforms` variable.
                        To override this, pass in a list of comma-separated
                        target triplets for each target to be built.  Note
                        that this can be used to build for platforms that
                        are not listed in the 'default list' of platforms
                        in the build_tarballs.jl script.

    --verbose           This streams compiler output to stdout during the
                        build which can be very helpful for finding bugs.
                        Note that it is colorized if you pass the
                        --color=yes option to julia, see examples below.

    --debug=<mode>      This causes a failed build to drop into an
                        interactive shell for debugging purposes.  `<mode>`
                        can be one of `error`, `begin` or `end`.  `error`
                        drops you into the interactive shell only when there
                        is an error during the build, this is the default
                        when no mode is specified.  `begin` forces an error
                        at the beginning of the build, before any command in
                        the script is run.  `end` forces an error at the end
                        of the build script, useful to debug a successful
                        build for which the auditor would fail.

    --deploy=<repo>     Deploy binaries and JLL wrapper code to a github
                        release of an autogenerated repository.  Uses
                        `github.com/JuliaBinaryWrappers/<name>_jll.jl` by
                        default, unless `<repo>` is set, in which case it
                        should be set as `<owner>/<name>_jll.jl`.  Setting
                        this option is equivalent to setting `--deploy-bin`
                        and `--deploy-jll`.  If `<repo>` is set to "local"
                        then nothing will be uploaded, but JLL packages
                        will still be written out to `~/.julia/dev/`.

    --deploy-bin=<repo> Deploy just the built binaries

    --deploy-jll=<repo> Deploy just the JLL code wrappers

    --register=<depot>  Register into the given depot.  If no path is
                        given, defaults to `~/.julia`.  Registration
                        requires deployment of the JLL wrapper code, so
                        using `--register` without `--deploy` or the more
                        specific `--deploy-jll` is an error.

    --meta-json         Output a JSON representation of the given build
                        instead of actually building.  Note that this can
                        (and often does) output multiple JSON objects for
                        multiple platforms, multi-stage builds, etc...

    --help              Print out this message.

    julia --color=yes build_tarballs.jl --verbose
        This builds all tarballs, with colorized output.

    julia build_tarballs.jl x86_64-linux-gnu,i686-linux-gnu
        This builds two tarballs for the two platforms given, with a
        minimum of output messages.