Device booting into initrd cannot mount folders from the NFS server

Hardware: NVIDIA Jetson Orin NX 16GB
Environment: Ubuntu 20.04 Docker container running inside an Ubuntu 20.04 host VM
SDK Manager 2.3, JetPack 6.2.1

I have successfully flashed my device with SDK Manager running in the Ubuntu 20.04 VM before; now I’m following this guidance, Docker Images — SDK Manager, to build my own Docker image.

The Download & Pre-Installation steps went smoothly and the images were created. However, after the device booted into initrd and the flashing was supposed to start, the device failed to mount the exported folders, reporting a “stale file handle” error.
I also tried entering the device and performing the mount manually from there, but that failed as well, no matter how often I refreshed the host-side /etc/exports or restarted the nfs-kernel-server service.
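
For reference, the manual attempt from the device’s initrd shell was along these lines (the host IP 192.168.55.100 and the mount point /mnt are illustrative placeholders, not the exact values from my logs):

    # on the device, inside the initrd shell
    mkdir -p /mnt
    mount -t nfs -o nolock 192.168.55.100:/home/thong/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/rootfs /mnt

    # on the host, between attempts
    sudo exportfs -ra
    sudo systemctl restart nfs-kernel-server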

Here are the SDKM logs:
sdkm_download-2025-07-16-01-35-19.log (7.8 KB)
NV_L4T_FLASH_JETSON_LINUX_COMP (4).log (404.2 KB)

I also recorded the serial log:
putty_NFS.log (96.7 KB)

I would appreciate any support. Thank you very much.

Hi,

We need to double-check this with our internal team.
Will provide more info to you later.

Thanks.

Hi,

Thank you for your reply. While waiting for your guidance, I’ll try to provide as much context as possible.

On my VM Host, I have:

  • created a user named “thong”, then switched to that user and created these two folders (initially empty):
    /home/thong/nvidia/nvidia_sdk/JetPack…/Linux_for_Tegra/rootfs
    /home/thong/nvidia/nvidia_sdk/JetPack…/Linux_for_Tegra/tools/kernel_flash/images

  • set their owner to root:root and their permissions to 755

  • exported them by adding these lines to /etc/exports:
    /home/<new_user>/nvidia_VM/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/rootfs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
    /home/<new_user>/nvidia_VM/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/tools/kernel_flash/images *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)

  • used the user’s name, UID, and GID to create a matching user inside the Docker container, and mapped the folder like this when starting it (I can only map the outermost parent folder; if I map the full paths, they cannot be removed and re-created during the flash process; see also the consolidated sketch after this list):
    -v /home/thong/nvidia:/home/thong/nvidia
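
Putting the host-side steps together, the preparation looked roughly like this (a sketch: $L4T abbreviates the JetPack path, and the sdkmanager:latest image tag plus the extra docker run flags are assumptions based on a typical USB-flashing setup, not necessarily our exact command):

    L4T=/home/thong/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra

    # ownership and permissions on the two folders to be exported
    sudo chown root:root "$L4T/rootfs" "$L4T/tools/kernel_flash/images"
    sudo chmod 755 "$L4T/rootfs" "$L4T/tools/kernel_flash/images"

    # apply the /etc/exports entries
    sudo exportfs -ra

    # start the container with the outermost parent folder mapped
    docker run -it --privileged --network host \
        -v /dev/bus/usb:/dev/bus/usb \
        -v /home/thong/nvidia:/home/thong/nvidia \
        sdkmanager:latest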

Inside the container, I created /run/nvidia_initrd_flash/docker_host_network
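
Concretely, that was along these lines (an assumption: I am sketching it as an empty marker file; check the Docker Images — SDK Manager guide for the expected contents):

    # inside the container
    sudo mkdir -p /run/nvidia_initrd_flash
    sudo touch /run/nvidia_initrd_flash/docker_host_network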

During the flash process, I continuously monitored the contents of the two folders (from when they were still empty until they had received all the files). In some tests I also ran sudo exportfs -ra to refresh the export status after seeing changes inside those folders, but the flash still failed to proceed.
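
For reference, the refresh itself was just exportfs -ra; exportfs -v and showmount -e are handy for checking what the server is actually exporting:

    sudo exportfs -ra          # re-read /etc/exports and re-export everything
    sudo exportfs -v           # list the active exports and their options
    showmount -e localhost     # confirm the paths are visible to NFS clients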

Hi,

Do you work together with @vuh81hc?
Or did you also hit the same issue and are adding your info here?

Thanks.

Dear,

We work together. Sorry for the confusion.
The info above is consistent with the issue described in the original post.

Thanks for the confirmation.
We will share the info with our internal team as well.

Will get back to you later.

Dear,

Somehow we were able to work around the issue.

Previously, we had exported the folders by adding these lines to /etc/exports:
/home/<new_user>/nvidia_VM/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/rootfs *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/home/<new_user>/nvidia_VM/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/tools/kernel_flash/images *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)

Instead of directly exporting the folders that the device is going to mount (which fails with a “stale file handle” error, presumably because the flash process removes and re-creates those folders and thereby invalidates their NFS file handles), we export the stable parent folder, keeping the no_subtree_check flag:

/home/thong/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra *(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
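
After applying the change with sudo exportfs -ra, the device can still mount the two subfolders through this single parent export, e.g. (placeholder host IP again):

    mount -t nfs -o nolock 192.168.55.100:/home/thong/nvidia/nvidia_sdk/JetPack_6.2.1_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/rootfs /mnt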

The flash then finished successfully.

Hi,

Have you solved the issue already?

Please note this statement in our documentation:

The SDK Manager Docker image does not currently support flashing to external storages on all Jetson devices.

This flow is not stable and could have many issues.
Thanks.

Yes, we did.

We understand the limitation, and thankfully we found a workaround for the moment.

Thank you for your support.
