A reusable framework for computer vision pipelines.
![Configure and Launch Vision Pipelines, Intel](https://d1qg7561fu8ubi.cloudfront.net/blog/configure-and-lauch-vision-pipelines.jpg)
Photo by Nathália Rosa on Unsplash
Computer vision (CV) pipelines rely on many AI models, chosen according to the use case and its complexity. In the retail checkout space, developers design and assemble their applications from models such as object detection, person recognition, instance segmentation, and more.
The profile launcher, developed by Intel, introduces a repeatable framework for automated self-checkout use cases, enabling developers to easily configure and launch pipelines in a consistent and maintainable way. In this post, we’ll provide an overview of the profile launcher and share how to configure it for various CV models.
Overview of the profile launcher
The profile launcher is designed to work with OpenVINO™ Model Server. The profile launcher offers pipeline profiles for target use cases, making it easy for developers to configure pipelines for their use cases, such as using object detection via YOLOv5* to detect grocery products like carbonated beverages or potato chips. The launcher also enables retailers to include multiple models in a single pipeline, such as running object detection and product classification simultaneously. The launcher handles all phases of the pipeline — decoding, preprocessing, inferencing, and postprocessing — allowing the developer to focus on integrating the pipeline into the use case.
![Configure and Launch Vision Pipelines, Intel](https://d1qg7561fu8ubi.cloudfront.net/blog/configure-and-lauch-vision-pipelines-1.png)
Figure 1: Vision Data Flow. Source: Intel.
The profile launcher supports the full end-to-end vision pipeline for automated self-checkouts.
The profile launcher supports pipelines on two types of OpenVINO Model Server deployments: a distributed client/server mode that spans multiple containers, and a single-container CAPI mode that cross-compiles everything, including the entire pipeline and application, into one image. The overall view of the profile launcher looks like this:
![Configure and Launch Vision Pipelines, Intel](https://d1qg7561fu8ubi.cloudfront.net/blog/configure-and-lauch-vision-pipelines-2.jpg)
Figure 2: Profile launcher
The profile launcher supports both monolithic and distributed deployments.
For single-container use cases, the profile launcher starts the pipeline profile with one integrated Docker* container running the pipeline models, the OpenVINO Model Server, preprocessing, decoding, postprocessing, and result rendering:
![Configure and Launch Vision Pipelines, Intel](https://d1qg7561fu8ubi.cloudfront.net/blog/configure-and-lauch-vision-pipelines-3.jpg)
Figure 3: Profile launcher – Single Container
For single-container deployments, the profile launcher runs all parts of the pipeline in one Docker container.
For distributed architecture deployments, multiple containers are instantiated. In the following example, the profile launcher starts two container instances: one for the pipeline application (the client) and one for the OpenVINO Model Server instance. The communication between the client and the server is based on gRPC API calls. This client/server architecture enables greater scalability and pipeline distribution, such as running multiple pipelines with object detection and classification in one ensemble profile.
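As a concrete sketch of the client side of this gRPC connection, a pipeline client might resolve the server's address as shown below. The environment variable names and defaults here are illustrative assumptions (in practice, the profile's .env files supply the real values), as are the model and input names in the commented call:

```python
import os

def ovms_address(host=None, port=None):
    """Build the gRPC target for the OVMS server container.

    OVMS_HOST/OVMS_PORT and their defaults are hypothetical names;
    the profile's env files (e.g. ovms_server.env) set the real values.
    """
    host = host or os.environ.get("OVMS_HOST", "ovms-server")
    port = port or int(os.environ.get("OVMS_PORT", "9000"))
    return f"{host}:{port}"

# With the ovmsclient package (pip install ovmsclient), the inference call
# from the client container would then look roughly like:
#
#   from ovmsclient import make_grpc_client
#   client = make_grpc_client(ovms_address())
#   outputs = client.predict(
#       inputs={"images": batch},        # input name is model-specific
#       model_name="classification",     # matches the server's config.json
#   )
```

The client code never needs to know how the server was launched; it only needs the address and the model name registered in the server's configuration.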
![Configure and Launch Vision Pipelines, Intel](https://d1qg7561fu8ubi.cloudfront.net/blog/configure-and-lauch-vision-pipelines-4.jpg)
Figure 4: Profile launcher – Distributed
For distributed deployments, the pipeline launcher creates a container for each client and server, enabling multiple models to run simultaneously.
The profile launcher is driven by a configuration file. The following is a sample configuration for the classification profile:
```yaml
OvmsSingleContainer: false
OvmsServer:
  ServerDockerScript: start_ovms_server.sh
  ServerDockerImage: openvino/model_server:2023.1-gpu
  ServerContainerName: ovms-server
  ServerConfig: "/models/config.json"
  StartupMessage: Starting OVMS server
  InitWaitTime: 10s
  EnvironmentVariableFiles:
    - ovms_server.env
  # StartUpPolicy: when there is an error launching the OVMS server, choose one
  # of these values for the behavior of the profile-launcher:
  #   remove-and-restart: remove any existing container with the same name, then restart it
  #   exit: exit the profile-launcher
  #   ignore: ignore the error and continue (the default if not given, or if none of the above)
  StartUpPolicy: ignore
OvmsClient:
  DockerLauncher:
    Script: docker-launcher.sh
    DockerImage: python-demo:dev
    ContainerName: classification
    Volumes:
      - "$RUN_PATH/results:/tmp/results"
      - ~/.Xauthority:/home/dlstreamer/.Xauthority
      - /tmp/.X11-unix
  PipelineScript: ./classification/python/entrypoint.sh
  PipelineInputArgs: "" # space-delimited, as when passing arguments to the script on the command line
  EnvironmentVariableFiles:
    - classification.env
```
This configuration can easily be repeated across all the profiles you'd like to deploy. For single-container deployments, the OvmsServer section is not required. A detailed description of the configuration objects is available in the repository documentation.
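For comparison, a minimal single-container profile might look like the following sketch. The image name, container name, and paths here are illustrative, not values shipped with the launcher:

```yaml
OvmsSingleContainer: true    # one integrated container; no OvmsServer section needed
OvmsClient:
  DockerLauncher:
    Script: docker-launcher.sh
    DockerImage: object-detection:dev            # illustrative image name
    ContainerName: object-detection              # illustrative container name
    Volumes:
      - "$RUN_PATH/results:/tmp/results"
  PipelineScript: ./object_detection/python/entrypoint.sh   # illustrative path
  PipelineInputArgs: ""
  EnvironmentVariableFiles:
    - object_detection.env                       # illustrative env file
```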
Installing the profile launcher
Developers can take advantage of the profile launcher by downloading it from the GitHub* repo. Once downloaded, the configuration can be used as is for automated self-checkout use cases or modified for other vision pipelines, such as object detection pipelines.
To create a new profile, add a new folder under the res/ folder and copy an existing configuration.yaml into it, such as the classification example shown earlier. Then update the configuration to point at the models you'd like to use.
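The scaffolding step above amounts to a folder copy. As a minimal sketch in Python (the helper and the profile names are illustrative; a plain `mkdir` and `cp` accomplish the same thing):

```python
from pathlib import Path
import shutil

def scaffold_profile(repo_root, new_profile, template="classification"):
    """Copy an existing profile's configuration.yaml into a new res/ folder.

    repo_root is the automated self-checkout checkout; 'classification' is
    assumed to be an existing profile to use as a template.
    """
    src = Path(repo_root) / "res" / template / "configuration.yaml"
    dst_dir = Path(repo_root) / "res" / new_profile
    dst_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dst_dir / "configuration.yaml")
    return dst_dir / "configuration.yaml"
```

After copying, edit the new configuration.yaml to reference your models, scripts, and environment files.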
Get started with the profile launcher
The profile launcher provides a reusable framework for AI applications, making it easy for developers to configure and launch CV pipelines via a repeatable and consistent process. This framework removes the overhead of building end-to-end pipelines and provides developers with the bandwidth to create unique and differentiating features for their applications. To get started, head over to the automated self-checkout repository and follow the links to the framework.
- AI-Enabled Computer Vision Checkout Framework
- Automated Self-Checkout Retail Reference Implementation
About the author
Dr. Jim Wang, Senior Software Engineer, Intel
Jim Wang has a Ph.D. in electrical engineering from Arizona State University and has been working on developing client/server and web applications for over 20 years. Prior to Intel, Jim worked for Boeing*, where he developed a predictive application that monitors the health of aircraft turbine engines. In Intel's Network and Edge (NEX) Group, Jim is currently focused on building edge frameworks and applications like the automated self-checkout retail reference implementation and contributes to the development of EdgeX Foundry within the NEX software group. Connect with him on LinkedIn and GitHub.