warning

🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

cortex engines

This command allows you to manage various engines available within Cortex.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


```sh
# Stable
cortex engines [options] [subcommand]
# Beta
cortex-beta engines [options] [subcommand]
# Nightly
cortex-nightly engines [options] [subcommand]
```
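
For instance, to trace the internal steps while listing engines, prefix the subcommand with the flag:

```sh
# Print detailed output of the internal processes while the subcommand runs.
cortex --verbose engines list
```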

Options:

| Option | Description | Required | Default value | Example |
| --- | --- | --- | --- | --- |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

cortex engines get

info

This CLI command calls the corresponding Engines API endpoint.

This command returns the details of the engine specified by `engine_name`.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


```sh
# Stable
cortex engines get <engine_name>
# Beta
cortex-beta engines get <engine_name>
# Nightly
cortex-nightly engines get <engine_name>
```

For example, `cortex engines get onnx` returns the following:


```
┌─────────────┬────────────────────────────────────────────────────────────────────────────┐
│   (index)   │                                   Values                                   │
├─────────────┼────────────────────────────────────────────────────────────────────────────┤
│    name     │ 'onnx'                                                                     │
│ description │ 'This extension enables chat completion API calls using the Cortex engine' │
│   version   │ '0.0.1'                                                                    │
│ productName │ 'Cortex Inference Engine'                                                  │
└─────────────┴────────────────────────────────────────────────────────────────────────────┘
```
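
If you prefer to query the API directly, the sketch below is a minimal example assuming a locally running Cortex server on its default port and a `GET /v1/engines/<engine_name>` route mirroring this command; both the port and the route shape are assumptions here, so verify them against your server's configuration.

```sh
# Assumption: the local API server listens on 127.0.0.1:39281 and exposes
# GET /v1/engines/<engine_name> for engine details (check your setup).
curl http://127.0.0.1:39281/v1/engines/onnx
```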

info

To get an engine name, run the `cortex engines list` command first.

Options:

| Option | Description | Required | Default value | Example |
| --- | --- | --- | --- | --- |
| `engine_name` | The name of the engine that you want to retrieve. | Yes | - | `llama-cpp` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

cortex engines list

info

This CLI command calls the corresponding Engines API endpoint.

This command lists all of Cortex's engines.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


```sh
# Stable
cortex engines list [options]
# Beta
cortex-beta engines list [options]
# Nightly
cortex-nightly engines list [options]
```

For example, it returns the following:


```
+---+--------------+-------------------+---------+----------------------------+---------------+
| # | Name         | Supported Formats | Version | Variant                    | Status        |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 1 | onnxruntime  | ONNX              |         |                            | Incompatible  |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 2 | llama-cpp    | GGUF              | 0.1.34  | linux-amd64-avx2-cuda-12-0 | Ready         |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 3 | tensorrt-llm | TensorRT Engines  |         |                            | Not Installed |
+---+--------------+-------------------+---------+----------------------------+---------------+
```
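
Since the Status column tells you which engines are usable, a quick shell filter over the list output can surface only the ones marked `Ready` (a minimal sketch; it relies on the table layout shown above):

```sh
# Keep only rows whose Status column reads "Ready".
cortex engines list | grep "Ready"
```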

Options:

| Option | Description | Required | Default value | Example |
| --- | --- | --- | --- | --- |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

cortex engines install

info

This CLI command calls the corresponding Engines API endpoint.

This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:

  • llama-cpp
  • onnxruntime
  • tensorrt-llm

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


```sh
# Stable
cortex engines install [options] <engine_name>
# Beta
cortex-beta engines install [options] <engine_name>
# Nightly
cortex-nightly engines install [options] <engine_name>
```

For example:


```sh
# Llama.cpp engine
cortex engines install llama-cpp
# ONNX engine
cortex engines install onnxruntime
# TensorRT-LLM engine
cortex engines install tensorrt-llm
```
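
A common follow-up is to confirm the installation succeeded; the sketch below installs `llama-cpp` and then re-checks its row in the engines table (the `Ready` status text is taken from the `engines list` example above):

```sh
# Install the engine, then verify its status in the engines table.
cortex engines install llama-cpp
cortex engines list | grep "llama-cpp"
```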

Options:

| Option | Description | Required | Default value | Example |
| --- | --- | --- | --- | --- |
| `engine_name` | The name of the engine you want to install. | Yes | - | - |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

cortex engines uninstall

This command uninstalls an engine from Cortex.

Usage:

info

You can use the --verbose flag to display more detailed output of the internal processes. To apply this flag, use the following format: cortex --verbose [subcommand].


```sh
# Stable
cortex engines uninstall [options] <engine_name>
# Beta
cortex-beta engines uninstall [options] <engine_name>
# Nightly
cortex-nightly engines uninstall [options] <engine_name>
```

For example:


```sh
# Llama.cpp engine
cortex engines uninstall llama-cpp
# ONNX engine
cortex engines uninstall onnxruntime
# TensorRT-LLM engine
cortex engines uninstall tensorrt-llm
```
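
As with installation, you can verify the result by listing engines again (a minimal sketch; the exact status text shown after removal may vary):

```sh
# Uninstall the engine, then confirm its row no longer shows Ready.
cortex engines uninstall llama-cpp
cortex engines list
```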

Options:

| Option | Description | Required | Default value | Example |
| --- | --- | --- | --- | --- |
| `engine_name` | The name of the engine you want to uninstall. | Yes | - | - |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |