Convert Dataset to TFRecords for TAO


For Course “Synthetic Data Generation for Perception Model Training in Isaac Sim” Part: “Training a Model With Synthetic Data”

I cloned the repo, started a Docker container from the Docker Desktop app, opened an Ubuntu CLI (I’m running Windows 11), opened Jupyter Notebook, and opened local_train.ipynb.

I ran all steps successfully inside the notebook.

I replaced the local project path here:

os.environ["LOCAL_PROJECT_DIR"] = os.path.dirname(os.getcwd())  # This is the location of the root of the cloned repo
print(os.environ["LOCAL_PROJECT_DIR"])

But after running:

"print(“Converting Tfrecords for palletjack with additional distractors”)

!mkdir -p $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_additional && rm -rf $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_additional/*

!docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER
detectnet_v2 dataset_convert
-d /workspace/tao-experiments/local/training/tao/specs/tfrecords/distractors_additional.txt
-o /workspace/tao-experiments/local/training/tao/tfrecords/distractors_additional/"

I got this error:

"Converting Tfrecords for palletjack with additional distractors
docker: invalid spec: :/workspace/tao-experiments: empty section between colons

Run ‘docker run --help’ for more information"

I’m not very familiar with Docker, so I don’t know if the issue is related to that.

Your $LOCAL_PROJECT_DIR is most likely empty; that is what the “empty section between colons” error means. You can check it and then set it.

Also, you can use an explicit path instead of the env variable. For example:

-v /local/path:/workspace/tao-experiments
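Or, as a minimal notebook cell to verify and set the variable before the docker command runs (a sketch; the path below is a placeholder, not your real one):

import os

# If this prints None or an empty string, the -v flag expands to
# ':/workspace/tao-experiments', which is exactly the "empty section between colons" error.
print(os.environ.get("LOCAL_PROJECT_DIR"))

# Set it explicitly to the root of the cloned repo (placeholder path).
os.environ["LOCAL_PROJECT_DIR"] = "/home/user/synthetic_data_generation_training_workflow"
assert os.path.isdir(os.environ["LOCAL_PROJECT_DIR"]), "LOCAL_PROJECT_DIR does not point to an existing folder"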

OK, and two more things. The code in the notebook seems to have been written for Linux, but I’m running it from Jupyter Notebook on a Windows machine. Would it be possible to provide the code for Windows?

And the folder paths in the code below seem to be wrong:

print("Converting Tfrecords for palletjack warehouse distractors dataset")  !mkdir -p $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse && rm -rf $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse/*  !docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER \                    detectnet_v2 dataset_convert \                   -d /workspace/tao-experiments/local/training/tao/specs/tfrecords/distractors_warehouse.txt \                   -o /workspace/tao-experiments/local/training/tao/tfrecords/distractors_warehouse/ 

For instance, there is no “/workspace/tao-experiments” in the repo I cloned to my computer. Could you please check?

E.g., when running the code for “Convert Dataset to TFRecords for TAO” in the local_train notebook,

I get

FileNotFoundError: [Errno 2] No such file or directory: '/workspace/tao-experiments/palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb'
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
Execution status: FAIL

With -v, you mount the local folder $LOCAL_PROJECT_DIR onto the path /workspace/tao-experiments inside the Docker container.

So /workspace/tao-experiments only exists inside the container; if you run a shell inside the container, you can find it.

An easy way to get a shell inside the container is:

$ docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER /bin/bash
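Before that, a quick host-side sanity check from the notebook can confirm that the data the converter expects actually exists under the folder being mounted (a sketch; the relative path is taken from the error message above):

import os

root = os.environ["LOCAL_PROJECT_DIR"]
# Inside the container this resolves to
# /workspace/tao-experiments/palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb,
# so the same relative path must exist under the local folder you mount.
rgb_dir = os.path.join(root, "palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb")
print(rgb_dir, "exists:", os.path.isdir(rgb_dir))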

If I run (original code):

print("Converting Tfrecords for palletjack warehouse distractors dataset")  !mkdir -p $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse && rm -rf $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse/*  !docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER \                    detectnet_v2 dataset_convert \                   -d /workspace/tao-experiments/local/training/tao/specs/tfrecords/distractors_warehouse.txt \                   -o /workspace/tao-experiments/local/training/tao/tfrecords/distractors_warehouse/ 

I get error:

Converting Tfrecords for palletjack warehouse distractors dataset

==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.0-TensorFlow (build )
TAO Toolkit Version 4.0.0

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
   insufficient for TAO Toolkit. NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

Using TensorFlow backend.
2025-08-09 15:15:59.970302: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Using TensorFlow backend.
2025-08-09 15:16:07,922 [INFO] iva.detectnet_v2.dataio.build_converter: Instantiating a kitti converter
Traceback (most recent call last):
  File "</usr/local/lib/python3.6/dist-packages/iva/detectnet_v2/scripts/dataset_convert.py>", line 3, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 135, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 124, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 119, in main
  File "<frozen iva.detectnet_v2.dataio.dataset_converter_lib>", line 70, in convert
  File "<frozen iva.detectnet_v2.dataio.kitti_converter_lib>", line 155, in _partition
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/tao-experiments/palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb'
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
Execution status: FAIL

If I run it with the line of code replaced by what you suggested:

!mkdir -p $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse && rm -rf $LOCAL_PROJECT_DIR/local/training/tao/tfrecords/distractors_warehouse/*

!docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER /bin/bash \
    detectnet_v2 dataset_convert \
    -d /workspace/tao-experiments/local/training/tao/specs/tfrecords/distractors_warehouse.txt \
    -o /workspace/tao-experiments/local/training/tao/tfrecords/distractors_warehouse/

I get another error:

Converting Tfrecords for palletjack warehouse distractors dataset

==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.0-TensorFlow (build )
TAO Toolkit Version 4.0.0

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
   insufficient for TAO Toolkit. NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

/usr/local/bin/detectnet_v2: line 3: import: command not found
/usr/local/bin/detectnet_v2: line 4: import: command not found
/usr/local/bin/detectnet_v2: line 5: from: command not found
/usr/local/bin/detectnet_v2: detectnet_v2: line 7: syntax error near unexpected token `('
/usr/local/bin/detectnet_v2: detectnet_v2: line 7: `    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])'

Can you run in a terminal instead of the notebook?

Please open a terminal, then

$ docker run -it --rm --gpus all -v your/local/dir:/workspace/tao-experiments nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5 /bin/bash

The above command will take you into the container.

Then, check if your files exist.

# ls /workspace/tao-experiments/

Then,

# detectnet_v2 dataset_convert xxx

I ran it from my Ubuntu CLI terminal:

I logged into the NVIDIA registry with: docker login nvcr.io

The login succeeded.

I ran

docker run -it --rm --gpus all -v your/local/dir:/workspace/tao-experiments nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5 /bin/bash 

I was able to run the toolkit which showed:

==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.0-TensorFlow (build )
TAO Toolkit Version 4.0.0

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
   insufficient for TAO Toolkit. NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

Then, I was able to go into /workspace/tao-experiments from my terminal.

There I ran

detectnet_v2 dataset_convert \
    -d /workspace/tao-experiments/local/training/tao/specs/tfrecords/distractors_warehouse.txt \
    -o /workspace/tao-experiments/local/training/tao/tfrecords/distractors_warehouse

and got the same error

Using TensorFlow backend.
2025-08-11 13:07:01.003432: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Using TensorFlow backend.
2025-08-11 13:07:09,189 [INFO] iva.detectnet_v2.dataio.build_converter: Instantiating a kitti converter
Traceback (most recent call last):
  File "</usr/local/lib/python3.6/dist-packages/iva/detectnet_v2/scripts/dataset_convert.py>", line 3, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 135, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 124, in <module>
  File "<frozen iva.detectnet_v2.scripts.dataset_convert>", line 119, in main
  File "<frozen iva.detectnet_v2.dataio.dataset_converter_lib>", line 70, in convert
  File "<frozen iva.detectnet_v2.dataio.kitti_converter_lib>", line 155, in _partition
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/tao-experiments/palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb'
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
Execution status: FAIL

The problem is still:

No such file or directory: '/workspace/tao-experiments/palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb' 

There is a folder “palletjack_sdg” inside “tao-experiments”. However, when I go inside “palletjack_sdg” I only see “palletjack_datagen.sh” and “standalone_palletjack_sdg.py”, and NOT “palletjack_data/distractors_warehouse/Camera/rgb”.

It seems that the script that “detectnet_v2 dataset_convert” runs looks for the folders “palletjack_data/distractors_warehouse/Camera/rgb”, which do not exist in the TAO toolkit inside this docker image.

Also, I believe the whole point of this project (Course | NVIDIA) is to run everything from a Jupyter notebook, to give us more flexibility to work with machine learning code, and not to run everything from a terminal.

Can you show the exact command? And what is in your local folder?

$ ls your/local/dir

OK I got it to work.

The problem is that it wasn’t clear in the exercise’s description that I had to run the notebook from inside the repo already cloned in the previous step, “Generating a Synthetic Dataset Using Replicator”.

The confusion happened when I got to “Fine-Tuning and Validating an AI Perception Model > Lecture: Training a Model With Synthetic Data” and ran into this instruction:

Optional: Training Your Own Model

For those interested in training their own model, follow these steps using the Synthetic Data Generation Training Workflow:

  1. Clone the GitHub project and navigate to the local_train.ipynb notebook.

  2. Set up the TAO Toolkit via a Docker container.

  3. Download a pre-trained object detection model.

  4. Convert your dataset into TFRecords (a format optimized for faster data iteration).

  5. Specify training parameters such as batch size and learning rate.

  6. Train the model using TAO Toolkit.

  7. Evaluate its performance on test data.

  8. Visualize results to assess how well the model detects objects.

I know it’s optional but I want to train my model anyways.

So I cloned the GitHub repo from step 1 above and downloaded it to a different folder from the repo I used to generate synthetic data in the previous section. This repo does NOT contain “palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb” etc. So when I ran local_train.ipynb from this repo, it gave me that error.

What the exercise should have told me to do is: “go to the folder where you saved the repo from the previous step (where you generated synthetic data) and run local_train.ipynb from there”. This repo DOES contain “palletjack_sdg/palletjack_data/distractors_warehouse/Camera/rgb” and the other needed folders, because they were generated during the synthetic data generation step.

Also, we should replace os.environ["LOCAL_PROJECT_DIR"] = "<LOCAL_PATH_OF_CLONED_REPO>" with the path of the project run in the synthetic generation step, not the repo linked from the list above.

Here are the steps that I used to make it work:

0 - Make sure you have completed all the previous steps to generate synthetic data:
. Clone the repo: NVIDIA-AI-IOT/synthetic_data_generation_training_workflow (workflow for generating synthetic data and training CV models)
. Configure generate_data.sh
. Run generate_data.sh

1- Open the Ubuntu CLI (a Linux environment, if running on Windows)

2- Create a separate Conda environment that uses Python 3.10: conda create -n tao-py310 python=3.10
If this environment has already been created, skip this step.

3- Activate the Python 3.10 Conda env: conda activate tao-py310

4- Connect to NVIDIA’s docker container:
. Open the Docker Desktop application and click the play button on the container
. In the Ubuntu CLI, run docker login nvcr.io
. Log in to the NVIDIA registry (if already logged in, the user and password were saved; if not, get the user (API key) from https://org.ngc.nvidia.com/setup/api-keys and rotate the password to get a new one)
. Run docker ps -a to check active containers

5- Navigate, in the Ubuntu CLI, to the folder where the GitHub project for synthetic data generation was cloned:

. from the course section “Generating a Synthetic Dataset Using Replicator > Activity: Understanding Basics of the SDG Script”

6- Open the notebook in this folder from the Ubuntu CLI: jupyter notebook local_train.ipynb --allow-root

. Copy the provided URL into my web browser, then click on the notebook to open it

7- Inside the notebook, replace os.environ["LOCAL_PROJECT_DIR"] = "<LOCAL_PATH_OF_CLONED_REPO>" with the actual path where I saved the cloned project used during the synthetic data generation step (see the sketch after this list)

8- Run all cells in the notebook
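For step 7, the cell I ended up with looks roughly like this (a sketch; the path below is a placeholder for wherever you cloned the repo used in the data generation step):

import os

# Point at the repo cloned for "Generating a Synthetic Dataset Using Replicator",
# NOT a fresh clone; only that copy contains palletjack_sdg/palletjack_data/...
os.environ["LOCAL_PROJECT_DIR"] = "/home/user/synthetic_data_generation_training_workflow"  # placeholder path
print(os.environ["LOCAL_PROJECT_DIR"])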

In my opinion, the exercise should include the steps I describe above to make it easier to follow and clearer.

In brief, the description and steps in the exercise should be clearer to guide the user through all necessary steps and make it explicit that we need to refer to the repo from the previous step, not the one from the list.

Although I managed to run the train and test parts of the notebook (local_train.ipynb), I’m now getting an error on step 7, “Visualize Model Performance on Real World Data”.

After I run

from IPython.display import Image

results_dir = os.path.join(os.environ["LOCAL_PROJECT_DIR"], "local/training/tao/detectnet_v2/resnet18_palletjack/test_loco/images_annotated")

pil_img = Image(filename=os.path.join(os.getenv("LOCAL_PROJECT_DIR"), 'detecnet_v2/july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated/1564562568.298206.jpg'))

image_names = ["1564562568.298206.jpg", "1564562628.517229.jpg", "1564562843.0618184.jpg", "593768,3659.jpg", "516447400,977.jpg"]

images = [Image(filename=os.path.join(results_dir, image_name)) for image_name in image_names]

display(*images)

I get

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Cell In[18], line 5
      1 from IPython.display import Image
      3 results_dir = os.path.join(os.environ["LOCAL_PROJECT_DIR"], "local/training/tao/detectnet_v2/resnet18_palletjack/test_loco/images_annotated")
----> 5 pil_img = Image(filename=os.path.join(os.getenv("LOCAL_PROJECT_DIR"), 'detecnet_v2/july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated/1564562568.298206.jpg'))
      7 image_names = ["1564562568.298206.jpg", "1564562628.517229.jpg", "1564562843.0618184.jpg", "593768,3659.jpg", "516447400,977.jpg"]
      9 images = [Image(filename = os.path.join(results_dir, image_name)) for image_name in image_names]

File ~/miniconda3/envs/tao-py310/lib/python3.10/site-packages/IPython/core/display.py:1053, in Image.__init__(self, data, url, filename, format, embed, width, height, retina, unconfined, metadata, alt)
   1051 self.unconfined = unconfined
   1052 self.alt = alt
-> 1053 super(Image, self).__init__(data=data, url=url, filename=filename,
   1054         metadata=metadata)
   1056 if self.width is None and self.metadata.get('width', {}):
   1057     self.width = metadata['width']

File ~/miniconda3/envs/tao-py310/lib/python3.10/site-packages/IPython/core/display.py:371, in DisplayObject.__init__(self, data, url, filename, metadata)
    368 elif self.metadata is None:
    369     self.metadata = {}
--> 371 self.reload()
    372 self._check_data()

File ~/miniconda3/envs/tao-py310/lib/python3.10/site-packages/IPython/core/display.py:1088, in Image.reload(self)
   1086 """Reload the raw data from file or URL."""
   1087 if self.embed:
-> 1088     super(Image,self).reload()
   1089     if self.retina:
   1090         self._retina_shape()

File ~/miniconda3/envs/tao-py310/lib/python3.10/site-packages/IPython/core/display.py:397, in DisplayObject.reload(self)
    395 if self.filename is not None:
    396     encoding = None if "b" in self._read_flags else "utf-8"
--> 397     with open(self.filename, self._read_flags, encoding=encoding) as f:
    398         self.data = f.read()
    399 elif self.url is not None:
    400     # Deferred import

FileNotFoundError: [Errno 2] No such file or directory: '[PATH FOR MY LOCAL FOLDERS]/synthetic_data_generation_training_workflow/detecnet_v2/july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated/1564562568.298206.jpg'

Please check that the env variable is set correctly. Also, synthetic_data_generation_training_workflow/local/local_train.ipynb (NVIDIA-AI-IOT/synthetic_data_generation_training_workflow on GitHub) is not a release from the TAO team. You may create a ticket with the owner if this GitHub repo has any issue. Thanks.

This is the Git repo the NVIDIA course told us to follow: NVIDIA-AI-IOT/synthetic_data_generation_training_workflow.

Is the link in the Nvidia course page pointing to the wrong repo? If so, which one should I use?

Also, could you guide me through the steps to check if my env is set correctly?

thank you

The repo should be correct, but it is not an official release from the TAO team; the TAO team does not maintain this repo. It may be from the Isaac team. I suggest asking questions of the GitHub owner.

For the above error, the obvious cause is that [PATH FOR MY LOCAL FOLDERS] is not set correctly. Please double-check.

The problem is that none of these folders exist in the repo: “detecnet_v2/july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated”

Was the python script supposed to create them? If so this part of the code has a problem.

The [PATH FOR MY LOCAL FOLDERS] should be set. You can double-check it.

“[PATH FOR MY LOCAL FOLDERS]” was added by me. I copied the error message, erased the section that shows my local folder path so I don’t expose it on a public forum, and manually replaced it with “[PATH FOR MY LOCAL FOLDERS]”. But the original error message shows my actual path.

The issue seems to be that the code is expecting a specific folder path but the actual folder path from the repo is different.

From the error line:

FileNotFoundError: [Errno 2] No such file or directory: 'C:.../synthetic_data_generation_training_workflow/detecnet_v2/july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated/1564562568.298206.jpg'

I can infer that it expected the path

/synthetic_data_generation_training_workflow/detecnet_v2 

But, when I open the repo folders I have:

C:...synthetic_data_generation_training_workflow\local\training\tao\detectnet_v2 

Also, when I open the folder “detectnet_v2” I only see one folder, “resnet18_palletjack”, with the folders “5k_model_synthetic”, “events”, and “weights” inside. But it expects the folders:

july_resnet18_trials/new_pellet_distractors_10k/test_loco/images_annotated 

which I haven’t found in my local files.
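If it helps, the pil_img line also hard-codes a path (detecnet_v2/july_resnet18_trials/…) that looks like a leftover from the notebook author’s own experiments; here is a minimal sketch of the cell without that line, assuming the annotated images land in results_dir once inference has produced them:

import os
from IPython.display import Image, display

results_dir = os.path.join(os.environ["LOCAL_PROJECT_DIR"],
                           "local/training/tao/detectnet_v2/resnet18_palletjack/test_loco/images_annotated")

# Only display images that actually exist, so one missing file doesn't abort the cell.
image_names = ["1564562568.298206.jpg", "1564562628.517229.jpg", "1564562843.0618184.jpg"]
paths = [os.path.join(results_dir, n) for n in image_names]
images = [Image(filename=p) for p in paths if os.path.isfile(p)]
display(*images)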

This kind of folder should be the inference result folder.

Have you already run the inference?

I ran everything, including step “6. Evaluate Trained Model”. I’m assuming it includes inference? But then it stopped on step “7. Visualize Model Performance on Real World Data” due to the error.
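A quick way to check what the training and evaluation steps actually wrote is to walk the detectnet_v2 output folder (a sketch; the root path matches results_dir above):

import os

out_root = os.path.join(os.environ["LOCAL_PROJECT_DIR"], "local/training/tao/detectnet_v2")
# If inference ran, an images_annotated folder containing .jpg files should show up
# somewhere under resnet18_palletjack; otherwise only folders like events/ and weights/ appear.
for dirpath, dirnames, filenames in os.walk(out_root):
    jpgs = [f for f in filenames if f.endswith(".jpg")]
    print(dirpath, "-", len(jpgs), "jpg files")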

For

!docker run -it --rm --gpus all -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments $DOCKER_CONTAINER \
    detectnet_v2 inference -e /workspace/tao-experiments/local/training/tao/specs/inference/new_inference_specs.txt \
    -o /workspace/tao-experiments/local/training/tao/detectnet_v2/resnet18_palletjack/5k_model_synthetic \
    -i /workspace/tao-experiments/images/sample_synthetic \
    -k $KEY

Please note that -v $LOCAL_PROJECT_DIR:/workspace/tao-experiments will map your local folder onto the path /workspace/tao-experiments inside the docker.

Please double-check that you have already mapped it.

After mapping, when you log in to the docker, you should find your local files under /workspace/tao-experiments.

This is what I got after I ran this line of code:

Converting Tfrecords for palletjack warehouse distractors dataset

==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.0-TensorFlow (build )
TAO Toolkit Version 4.0.0

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
   insufficient for TAO Toolkit. NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

Using TensorFlow backend.
2025-08-13 11:20:26.120978: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Using TensorFlow backend.
2025-08-13 11:20:32,696 [INFO] iva.detectnet_v2.dataio.build_converter: Instantiating a kitti converter
2025-08-13 11:20:32,707 [INFO] iva.detectnet_v2.dataio.kitti_converter_lib: Num images in Train: 1658  Val: 184
2025-08-13 11:20:32,707 [INFO] iva.detectnet_v2.dataio.kitti_converter_lib: Validation data in partition 0. Hence, while choosing the validationset during training choose validation_fold 0.
2025-08-13 11:20:32,708 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 0
2025-08-13 11:20:33,044 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 1
2025-08-13 11:20:33,306 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 2
2025-08-13 11:20:33,690 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 3
2025-08-13 11:20:33,912 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 4
2025-08-13 11:20:34,239 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 5
2025-08-13 11:20:34,697 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 6
2025-08-13 11:20:35,203 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 7
2025-08-13 11:20:35,787 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 8
2025-08-13 11:20:36,426 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 0, shard 9
2025-08-13 11:20:37,151 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Wrote the following numbers of objects:
b'palletjack': 543

2025-08-13 11:20:37,151 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 0
2025-08-13 11:20:40,580 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 1
2025-08-13 11:20:44,718 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 2
2025-08-13 11:20:49,985 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 3
2025-08-13 11:20:52,324 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 4
2025-08-13 11:20:54,322 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 5
2025-08-13 11:20:58,761 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 6
2025-08-13 11:21:06,092 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 7
2025-08-13 11:21:09,851 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 8
2025-08-13 11:21:12,253 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Writing partition 1, shard 9
2025-08-13 11:21:17,178 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Wrote the following numbers of objects:
b'palletjack': 5042

2025-08-13 11:21:17,178 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Cumulative object statistics
2025-08-13 11:21:17,178 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Wrote the following numbers of objects:
b'palletjack': 5585

2025-08-13 11:21:17,178 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Class map.
Label in GT: Label in tfrecords file
b'palletjack': b'palletjack'
For the dataset_config in the experiment_spec, please use labels in the tfrecords file, while writing the classmap.

2025-08-13 11:21:17,178 [INFO] iva.detectnet_v2.dataio.dataset_converter_lib: Tfrecords generation complete.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
Execution status: PASS