VSS Frontend UI Cannot Connect to Backend (localhost:60000) on Video Upload

I am trying to deploy the NVIDIA Video Search & Summarization (VSS) system using the official instructions, but I encounter an issue where the frontend UI on port 9100 fails to connect to the backend (localhost:60000) when uploading a video.

Environment Details

  • OS: Ubuntu 22.04

  • GPU: NVIDIA H100 (Single GPU)

  • Docker Compose version: v2.37.1

  • Repo/notebook I followed: 1_Deploy_VSS_docker_Crusoe.ipynb

  • My script: Pastebin link

Setup Output:

🔧 Starting VSS Docker Setup…
✓ Environment variables configured
✓ Changed directory to: /home/syedmobassir.hossain/NVIDIA_NIM/video-search-and-summarization/deploy/docker/local_deployment_single_gpu
✓ NIM cache directory created
🔐 Logging into NVIDIA Container Registry…
✓ Successfully logged in
🚀 Starting LLaMA 3.1, 3.2 Rerank, and Embedding containers…
✓ Docker Compose available: Docker Compose version v2.37.1
🚀 Starting VSS services…
[+] Running 3/3
✔ local_deployment_single_gpu-via-server-1 Started
✔ local_deployment_single_gpu-graph-db-1 Started

via-server-1 | Backend is running at http://0.0.0.0:60000
via-server-1 | Frontend is running at http://0.0.0.0:9100

🎉 VSS Server started successfully!

docker ps Output:

CONTAINER ID IMAGE … PORTS
11d816d61994 nvcr.io/nvidia/blueprint/vss-engine:2.3.0 … 0.0.0.0:9100->9100/tcp, 0.0.0.0:60000->60000/tcp
86ff1bde56e1 neo4j:5.26.4 … 7474->7474/tcp, 7687->7687/tcp
82727baa8465 nvcr.io/nim/nvidia/llama-3.2-nv-embedqa-1b-v2:latest … 8006->8000/tcp
725b52024600 nvcr.io/nim/nvidia/llama-3.2-nv-rerankqa-1b-v2:latest … 8007->8000/tcp

Problem

Although all containers appear to be running fine and the frontend UI is accessible at port 9100, trying to upload a video results in an error:

“Can’t connect to localhost:60000”

Analysis

  • The backend is clearly exposed on port 60000 and is running inside the same Docker network as the frontend.
  • But the frontend browser code seems to make a request to localhost:60000, which resolves to the user’s local machine, not the container’s internal network.

This leads to a failed connection: the browser's localhost cannot reach services running inside a container unless the relevant ports are correctly published and routed to the host.
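If the root cause really is a browser-style localhost URL being used for container-to-container requests, the fix can be illustrated with a small helper that rewrites such URLs to a Docker service hostname. This is only a sketch: `via-server` is an assumed compose service name, and the actual VSS frontend may build its backend URL differently.

```python
from urllib.parse import urlparse, urlunparse

def rewrite_backend_host(url: str, service_host: str = "via-server") -> str:
    """Rewrite a 'localhost' URL so a container-to-container request
    resolves via the compose service name instead.

    'via-server' is an assumed service name; check your compose.yml."""
    parts = urlparse(url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        netloc = f"{service_host}:{parts.port}" if parts.port else service_host
        parts = parts._replace(netloc=netloc)
    return urlunparse(parts)

print(rewrite_backend_host("http://localhost:60000/upload"))
# → http://via-server:60000/upload
```

Note that in this deployment the frontend and backend run in the same `vss-engine` container, so a rewrite like this only matters for requests that originate in the browser rather than inside the container.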

Expected Behavior

When a video is uploaded from the frontend UI, the frontend should successfully send API requests to the backend server on port 60000 to process the video.

Questions

  1. How should the frontend be configured to reach the backend container (e.g., proxy setup, environment variable, or compose.yml)?
  2. Should the frontend container not use localhost:60000 and instead use an internal Docker network hostname like via-server:60000?
  3. Is there an environment setting I missed to make this work in a single-machine, local GPU setup?

Please advise how to correctly bridge the frontend and backend containers to make video upload functional.

It seems to be a problem with the network port. If you use the default backend port 8100, does it work properly?

via-server-1 | INFO:     Started server process [291]
via-server-1 | INFO:     Waiting for application startup.
via-server-1 | INFO:     Application startup complete.
via-server-1 | INFO:     Uvicorn running on http://127.0.0.1:60000 (Press CTRL+C to quit)

Port 60000 is already used by the server as the ALERT_CALLBACK_PORT. You can change the backend to another port; I have tried 50000 and it works well.
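The clash described above can be checked mechanically before launch. A minimal sketch, using the two variable names mentioned in this thread (the actual names in your deployment may differ, and the 60000 defaults here are chosen to reproduce the reported conflict):

```python
import os

# Defaults mirror the thread: BACKEND_PORT and ALERT_CALLBACK_PORT both
# defaulting to 60000 reproduces the clash. Variable names are assumptions.
ports = {
    "FRONTEND_PORT": int(os.environ.get("FRONTEND_PORT", "9100")),
    "BACKEND_PORT": int(os.environ.get("BACKEND_PORT", "60000")),
    "ALERT_CALLBACK_PORT": int(os.environ.get("ALERT_CALLBACK_PORT", "60000")),
}

conflicts = []
seen = {}
for name, port in ports.items():
    if port in seen:
        conflicts.append((seen[port], name, port))
    else:
        seen[port] = name

for a, b, port in conflicts:
    print(f"Port conflict: {a} and {b} both use {port}")
```

Running a check like this in the setup script would have flagged the BACKEND_PORT / ALERT_CALLBACK_PORT overlap before the containers started.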

Hello,
Thank you for your response. In my script (nim_vlm_blueprint - Pastebin.com), I replaced os.environ["BACKEND_PORT"] = "60000" with os.environ["BACKEND_PORT"] = "8100", and this time even the frontend didn't start. I saw only this log:
via-server-1 | INFO: Started server process [291]
via-server-1 | INFO: Waiting for application startup.
via-server-1 | INFO: Application startup complete.
via-server-1 | INFO: Uvicorn running on http://127.0.0.1:60000 (Press CTRL+C to quit)
and it looks like it is stuck there. Can you kindly give me step-by-step instructions? I think I have made a mistake that I am not able to catch yet. Where did you set the 50000 port number?

Please refer to our single-gpu-deployment-full-local-deployment first. If you want to modify the port, you can change the source code.

I have changed the backend port from 8100 to 50000 but I am still facing the same issue; the frontend does not start this time. The frontend only starts when BACKEND_PORT is set to 60000.

I used 50000 as the backend port and 9100 as the frontend port, and it works properly on my side. Have you referred to the source code I attached: https://github.com/NVIDIA-AI-Blueprints/video-search-and-summarization/blob/main/deploy/docker/local_deployment_single_gpu/.env#L8?

You can also run sudo netstat -tulpn first to check which ports are already in use on your device.
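An equivalent check without netstat, as a standard-library Python sketch:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Example: probe the ports discussed in this thread before starting VSS.
for p in (9100, 8100, 50000, 60000):
    print(p, "in use" if port_in_use(p) else "free")
```

Unlike netstat, this only detects listeners reachable on the given interface, but it needs no sudo.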

Can you successfully deploy using our default configuration?

FRONTEND_PORT=9100
BACKEND_PORT=8100

You can also refer to our FAQ Network Ports first.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.