Install TrueNAS SCALE on a partition instead of the full disk

The TrueNAS installer doesn't offer a way to use anything less than the full device. This usually wastes resources when installing to a modern NVMe drive, which is often several hundred GB, while TrueNAS SCALE uses only a few GB for its system files. Installing to a 16GB partition instead leaves the rest of the disk for other uses.

The easiest way to solve this is to modify the installer script before starting the installation process.

  1. Boot TrueNAS Scale installer from USB stick/ISO

  2. Select shell in the first menu (instead of installing)

  3. While in the shell, run the following commands:

    sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install
    /usr/sbin/truenas-install
    

    For TrueNAS Scale 24.10+ see this comment.

    The first command modifies the installer script so that it creates a 16GiB boot-pool partition instead of using the full disk. The second command restarts the TrueNAS Scale installer.

  4. Continue installing according to the official docs.

Steps 7-12 in the deprecated guide below have instructions on how to allocate the remaining space to a partition you can use for data. If you are using a single drive, just ignore the steps that have to do with mirroring.
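If you want to see what the sed one-liner actually changes before running it against the real installer, you can rehearse it on a scratch file. The sample line below is a paraphrase of the relevant installer line, not a verbatim copy:

```shell
# Rehearse the installer edit on a scratch file (the sample line is
# paraphrased from truenas-install, not copied verbatim).
printf 'sgdisk -n3:0:0 -t3:BF01 "${_disk}"\n' > /tmp/install-sample
sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /tmp/install-sample
grep -- '-n3:0:+16384M' /tmp/install-sample   # the 16GiB size cap is now in place
```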

Deprecated guide using a USB stick as an intermediary

Unfortunately this is only possible by using an intermediate device to act as the installation disk and later move this data to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE to run from a mirrored 16GB partition on NVMe disks.

For an easier initial partitioning, see this comment and the discussion that follows. It should remove the need to use a USB stick as an intermediate medium.

  1. Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.

  2. Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.

  3. Check available devices

     $ parted
     (parted) print devices
     /dev/sdb (15.4GB)  # boot device
     /dev/nvme0n1 (500GB)
     /dev/nvme1n1 (512GB)
     (parted) quit
    

If you only have one NVMe disk just ignore the instructions that include the second disk (nvme1n1). This disk is used to create a ZFS mirror to handle disk failures.

  1. Clone the boot device to the other devices

     $ cat /dev/sdb > /dev/nvme0n1
     $ cat /dev/sdb > /dev/nvme1n1
    
  2. Check the partition layout. Fix all the GPT space warning prompts that show up.

     $ parted -l
     [...]
     Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the
     space (an extra 946741296 blocks) or continue with the current setting?
     Fix/Ignore? f
     [...]
     Model:  USB  SanDisk 3.2Gen1 (scsi)
     Disk /dev/sdb: 15.4GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start   End     Size    File system  Name  Flags
      1      20.5kB  1069kB  1049kB                     bios_grub
      2      1069kB  538MB   537MB   fat32              boot, esp
      3      538MB   15.4GB  14.8GB  zfs
     [...]
    

    The other disks' partition tables should look identical to this.

  3. Remove the zfs partition from the new devices, number 3 in this case. This is the boot-pool partition and we will recreate it later. We remove it because ZFS would otherwise recognize leftover metadata and think the partition is part of the pool when it is not.

     $ parted /dev/nvme0n1 rm
     Partition number? 3
     Information: You may need to update /etc/fstab.
    
  4. Recreate the boot-pool partition as a 16GiB partition with a slightly later start than before. Make sure it starts on a value divisible by 2048 for best performance (526336 % 2048 = 0). Moving the start also ensures that ZFS doesn't find any metadata from the old partition.

    Start with the smaller disk if they are not identical.

     $ parted
     (parted) unit kiB
     (parted) select /dev/nvme0n1
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start    End        Size       File system  Name  Flags
      1      20.0kiB  1044kiB    1024kiB                       bios_grub
      2      1044kiB  525332kiB  524288kiB  fat32              boot, esp
    
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start      End          Size         File system  Name       Flags
      1      20.0kiB    1044kiB      1024kiB                              bios_grub
      2      1044kiB    525332kiB    524288kiB    fat32                   boot, esp
      3      526336kiB  17303552kiB  16777216kiB               boot-pool
    
  5. Now you can create a partition allocating the rest of the disk.

     (parted) mkpart pool 17303552kiB 100%
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  6. Do the same for the next device, but this time use the same values as in the printout above. We do this to make sure that the partitions are exactly the same size. In this example the disks are slightly different in size so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk.

     (parted) select /dev/nvme1n1
     Using /dev/nvme1n1
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) mkpart pool 17303552kiB 488386560kiB
     (parted) print
     Model: TS512GMTE220S (nvme)
     Disk /dev/nvme1n1: 500107608kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  7. Make the new system partitions part of the boot-pool. This is done by attaching them to the existing pool while detaching the USB drive.

    $ zpool attach boot-pool sdb3 nvme0n1p3
    

    Wait for resilvering to complete, check progress with

    $ zpool status
    

    When resilvering is complete we can detach the USB device.

    $ zpool offline boot-pool sdb3
    $ zpool detach boot-pool sdb3
    

    Finally add the last drive to create a mirror of the boot-pool.

    $ zpool attach boot-pool nvme0n1p3 nvme1n1p3
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    

    At this point you can remove the USB device and when the machine is rebooted it will start up from the NVMe devices instead. Check BIOS boot order if it doesn't.

  8. Now that the boot-pool is mirrored we want to create a mirror pool using the remaining partitions.

    $ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    
    pool: pool1
    state: ONLINE
    config:
    
            NAME           STATE     READ WRITE CKSUM
            pool1          ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p4  ONLINE       0     0     0
                nvme1n1p4  ONLINE       0     0     0
    

    But to be able to import it in the Web UI we need to export it.

    $ zpool export pool1
    
  9. All done! Import pool1 using the Web UI and start enjoying the additional space.
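As a quick sanity check on the numbers used in steps 4 and 5, the constants below are the kiB values from the example printouts above:

```shell
# Verify the boot-pool partition bounds used in the example.
start=526336       # boot-pool start in kiB (step 4)
end=17303552       # boot-pool end in kiB
echo $(( start % 2048 ))                  # 0  -> start is 2048-aligned
echo $(( (end - start) / 1024 / 1024 ))   # 16 -> partition size in GiB
```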

@gaetan-31 commented May 12, 2024

Hi all and thank you for this tutorial =)

@gangefors is it possible to add the continuation after the number 4 please?

I had trouble understanding in my case: a user who uses a single NVMe for boot and uses the second partition for apps.

  5. After installation, select Shell.

  6. Run parted -l | more and note your device name.

  7. Run parted.

  8. Select your NVMe:

     (parted) select /dev/nvme0n1

  9. Create a partition:
     • Use kiB:

       (parted) unit kiB

     • Get your starting point:

       (parted) print

     • Create the partition:

       (parted) mkpart pool 17303552kiB 100%

     • Quit:

       (parted) quit

  10. Create the ZFS pool, check it, and export it for the Web UI:

     zpool create vmpool /dev/nvme0n1p4
     zpool status
     zpool export vmpool

  11. Reboot into the TrueNAS Web UI.

  12. In the Web UI, go to Storage => Import Pool, select the pool and confirm.

@gangefors (Author)

@gangefors is it possible to add the continuation after the number 4 please?

@gaetan-31 You've just added it to the guide :)

But to properly answer your question: this gist was written mainly to document the creation of a boot partition of a specific size (and mirroring it). I want to keep the guide as concise as possible.

The information about partitioning is available in the deprecated guide in step 8 (and partly in step 7), as mentioned above.

@gaetan-31

No problem ;)
Thanks a lot anyway =)

@transfairs

The suggested approach does not work for the next version (currently ElectricEel-24.10-BETA.1). Is there a simple solution for this as well?

@AndreasSchwalb

In the beta a python script is used instead of the shell script.
The interesting part seems to be here:
https://github.com/truenas/truenas-installer/blob/40338b0e4b4bc0870001539ff4d7f4f64fd56d4f/truenas_installer/install.py#L81

I'll give it a try tomorrow

@transfairs commented Sep 5, 2024

Thanks for pointing me in the right direction! I tried the following and it seems to work.
sed -i 's/-n3:0:0/-n3:0:+16384M/g' /usr/lib/python3/dist-packages/truenas_installer/install.py

Then exit shell with Ctrl+D and continue installation.

Edit: Typo fixed.

@Moorsy-AU

Hello, I've read through this page and all its comments about 10 times, but the install process keeps failing for me. I have a new system build with 2x 1TB identical NVMe drives; /dev/nvme0n1 and /dev/nvme1n1 show in parted etc. The sed command appears to work and the install begins; I can see the creation of the partitions, but then it tells me "install on nvme0n1 & 1n1 failed, device is too small for install". I've tried this with 16GB and 32GB, with and without swap. Trying to install Dragonfish 24.04. Any thoughts or direction would be much appreciated.

@oriaj3 commented Sep 30, 2024

Thanks for pointing me in the right direction! I tried the following and it seems to work. sed -i 's/-n3:0:0/-n3:0:+16384M/g' /usr/lib/python3/dist-packages/truenas-installer/install.py

Then exit the shell with Ctrl+D and continue with the installation.

I'm testing the install in a VM to make sure it works before doing it on my actual machine. I do the above, and I can see that it modifies line 84 of the install script, but during installation I can only select the whole disk, and afterwards the Disks tab in TrueNAS tells me that boot-pool is using the full disk size. What's going on? Does this not work with the 24.10 beta from the official TrueNAS site?

Thank you very much for your help!

@transfairs

Did you try to let it run through? I also tested this in a VM and it is working, although TrueNAS tells me wrongly that the whole disk will be used.

@oriaj3 commented Sep 30, 2024

Did you try to let it run through? I also tested this in a VM and it is working, although TrueNAS tells me wrongly that the whole disk will be used.

It has worked for me as follows:

  1. I run the command:
    sed -i ‘s/-n3:0:0/-n3:0/-n3:0:+16384M/g’ /usr/lib/python3/dist-packages/truenas-installer/install.py.
  2. I install TrueNAS and do not reboot.
  3. I create the partition in the remaining space and follow the steps 7 - 12 of the USB guide.

So everything is fine, thank you very much @transfairs, you're a star!

@WyekS commented Oct 29, 2024

Hi @oriaj3 there is a minor mistake in your command. It has to be:
sed -i ‘s/-n3:0:0/-n3:0:+16384M/g’ /usr/lib/python3/dist-packages/truenas-installer/install.py

(-n3:0/ has been duplicated in your command)
Cheers

@gaetan-31

The suggested approach does not work for the next version (currently ElectricEel-24.10-BETA.1). Is there a simple solution for this as well?

Hi, I've updated to ElectricEel-24.10-RC.2 after install.
No problem with the update.
Was yours a fresh install?

@jtenniswood

The recommendations didn't work for me, as the truenas installer folder uses an underscore rather than a dash.
sed -i 's/-n3:0:0/-n3:0:+16384M/g' /usr/lib/python3/dist-packages/truenas_installer/install.py

@siryoav commented Nov 2, 2024

The recommendations didn't work for me, as the truenas installer folder uses an underscore rather than a dash. sed -i 's/-n3:0:0/-n3:0:+16384M/g' /usr/lib/python3/dist-packages/truenas_installer/install.py

+1

@siryoav commented Nov 2, 2024

For TrueNAS SCALE 24.10 (Electric Eel)
Some parts are missing from the initial guide, so a few steps from the deprecated guide are needed.

  1. Login to shell (if you are connected to the NAS with a display, press 7 to get shell with root user).
  2. Use parted to edit partitions.
  3. Use print list to find your boot device, for me it was /dev/nvme0n1.
  4. Use select <path to your boot device>.
  5. Change units using unit kiB.
  6. Use print to get exact info on your boot device current partition status.
  7. Find the end of the last partition in your boot device (filesystem should be zfs) - for me it was 17304576kiB
  8. Create the new partition using mkpart <new partition name> <last partition end in kiB> 100%, for me it was mkpart ssd-pool 17304576kiB 100%
  9. Use print to verify (You can change to unit giB for ease of use). Note your new partition number (for me it was 4)
  10. quit to exit parted.
  11. To create a zpool visible to TrueNAS, use zpool create <pool name> <path to your boot device>p<your new partition number>. (For me it was zpool create ssd-pool /dev/nvme0n1p4.)
  12. Received an error cannot mount '/ssd-pool': failed to create mountpoint: Read-only file system; the pool is still created, so this can be ignored.
  13. Verified with zpool status that my pool was created.
  14. Use zpool export <pool name> to allow the TrueNAS Web UI to see this pool.
  15. Verified in the web interface that I can see the new zpool (Storage page, Import Pool).
  16. exit to exit the shell.

Some extras on how to create an encrypted pool (too many details to cover fully; here are some pointers and commands, but I don't have time to add the exact details):

No Encryption:

zpool create ssd-pool /dev/nvme0n1p4
zpool export ssd-pool

With Encryption


openssl rand -hex 32 > /etc/ssd-pool.key
chmod 600 /etc/ssd-pool.key

zpool create -f -o ashift=12 -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa -O encryption=aes-256-gcm -O keylocation=file:///etc/ssd-pool.key -O keyformat=hex -O mountpoint=none ssd-pool /dev/nvme0n1p4

zpool export ssd-pool

This will also require manually copying the key from the shell to the Web UI when importing the pool.

Adding second SSD and mirroring both boot-pool and ssd-pool

  1. Check your new device path (under /dev) - for me /dev/nvme1n1.
  2. Copy partition layout: sfdisk -d <first disk path> | sfdisk <new disk path> - for me sfdisk -d /dev/nvme0n1 | sfdisk /dev/nvme1n1.
  3. Attach both partitions to existing pools, using zpool attach <pool name> <partition already in pool> <new partition added to pool>, for me - zpool attach boot-pool nvme0n1p3 nvme1n1p3 and zpool attach ssd-pool nvme0n1p4 nvme1n1p4

Validation of step 2 & 3 done with parted (print list) and zpool status respectively.

@sthames42 commented Nov 23, 2024

Installed TrueNAS SCALE 24.10.0.2 (ElectricEel) to an M.2 SSD without first installing to a USB.
Thanks to all the great work from everyone, especially @siryoav.

  • Gigabyte B360M-D3H-GSM MOBO.
  • 32GB DDR4-3200 RAM.
  • HP SSD EX900 Plus 1TB (nvme).
  • 4xSeagate 6TB HDD.
  1. Download TrueNAS Scale and burn to USB with Rufus (DD Mode).
  2. Boot from USB and select Start TrueNAS SCALE Installation from the Grub loader menu.
  3. Select Shell from the Console Setup menu.
  4. From @transfairs, modify the installer to create a 16GB boot partition instead of using the entire drive.
    sed -i 's/-n3:0:0/-n3:0:+16384M/g' /usr/lib/python3/dist-packages/truenas_installer/install.py
    • Note there is a typo in @transfairs command which includes truenas-installer. Should be truenas_installer.
    • Quick note to help those as dumb as me, I first thought I was modifying the USB, not realizing the installer partition had been mounted in memory. Imagine my surprise when I rebooted from the USB before installing and found my change had not worked. Fortunately, I rebooted again, found the change was missing, and realized my mistake.
  5. exit to return from shell.
  6. Select Install/Upgrade from the Console Setup menu (without rebooting, first) and install to NVMe drive.
  7. Remove the USB and reboot.

Follow @siryoav for the partitioning.

Here's what I did:

  1. Login to shell (if you are connected to the NAS with a display, press 7 to get shell with root user).
  2. Use parted to edit partitions.
    1. unit KiB to change size display units.

    2. print list to find your boot device, for me it was /dev/nvme0n1.

    3. select <path to your boot device>.

    4. print to get exact info on your boot device current partition status.
      It will look something like this:

       Model: HP SSD EX900 Plus 1TB (nvme)
       Disk /dev/nvme0n1: 1024GB
       Sector size (logical/physical): 512B/512B
       Partition Table: gpt
       Disk Flags: 
       
       Number  Start        End            Size          File system  Name       Flags
        1      2048kiB      3072kiB        1024kiB                               bios_grub, legacy_boot
        2      3072kiB      527360kiB      524288kiB     fat32                   boot, esp
        3      527360kiB    17304576kiB    16777216kiB   zfs
      
    5. name 3 boot-pool to name the boot partition.

      • I don't think this is actually necessary but I wanted clarity in the list.
    6. Find the end of the last partition in your boot device (filesystem should be zfs) - for me it was 17304576kiB

    7. Create the new partition using mkpart <new partition name> <last partition end in kiB> 100%,
      for me it was mkpart nvme-pool 17304576kiB 100%.

      • @siryoav used ssd-pool, here, but I might install other SSDs, in the future.
    8. print to verify

       Model: HP SSD EX900 Plus 1TB (nvme)
       Disk /dev/nvme0n1: 1000204632kiB
       Sector size (logical/physical): 512B/512B
       Partition Table: gpt
       Disk Flags: 
       
       Number  Start        End            Size          File system  Name       Flags
        1      2048kiB      3072kiB        1024kiB                               bios_grub, legacy_boot
        2      3072kiB      527360kiB      524288kiB     fat32                   boot, esp
        3      527360kiB    17304576kiB    16777216kiB   zfs          boot-pool
        4      17304576kiB  1000204288kiB  982899712kiB  zfs          nvme-pool
      

      Note your new partition number (for me it was 4).

    9. quit to exit parted.

  3. To create a zpool visible to TrueNAS, use zpool create <pool name> <path to your boot device>p<your new partition number>.
    For me it was zpool create nvme-pool /dev/nvme0n1p4.

    • Received an error:
      cannot mount '/nvme-pool': failed to create mountpoint: Read-only file system
      This can be ignored.
  4. zpool status to verify.

       pool: boot-pool
      state: ONLINE
     config:
     
             NAME         STATE     READ WRITE CKSUM
             boot-pool    ONLINE       0     0     0
               nvme0n1p3  ONLINE       0     0     0
     
     errors: No known data errors
     
       pool: nvme-pool
      state: ONLINE
     config:
     
             NAME         STATE     READ WRITE CKSUM
             nvme-pool    ONLINE       0     0     0
               nvme0n1p4  ONLINE       0     0     0
     
     errors: No known data errors
    
  5. zpool export <pool name> to allow the TrueNAS Web UI to see this pool.

  6. exit to exit the shell.

  7. In the Web UI, go to Storage/Import Pool, and select the new pool in the dropdown list.

Results

This all worked well but there were a couple things I took note of in the UI:

  • Following the import, Storage page showed 907 GiB usable capacity in the nvme-pool.
    zpool list from the shell showed the correct size:

      NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
      boot-pool  15.5G  2.27G  13.2G        -         -     0%    14%  1.00x    ONLINE  -
      nvme-pool   936G  17.6M   936G        -         -     0%     0%  1.00x    ONLINE  /mnt
    

    I don't know how to account for the 30 GiB discrepancy. Perhaps a bug in the UI?

  • Under Storage/Disks, the NVMe drive showed nvme-pool in the Pool column. After I rebooted, it showed boot-pool.
    Disk Size was correct, though, as 953.87 GiB.
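One possible explanation for the 30 GiB gap (my own guess, not confirmed anywhere in this thread): ZFS reserves 1/32 of pool capacity as slop space by default, and the UI may report usable capacity with that reservation already subtracted. The integer arithmetic happens to land exactly on the reported figure:

```shell
# ZFS slop space is 1/32 of capacity by default (spa_slop_shift=5).
pool_gib=936                          # raw size from `zpool list`
echo $(( pool_gib - pool_gib / 32 ))  # prints 907
```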

I've worked with FreeNAS for many years but this is my first experience with TrueNAS and I just set up this server. If I learn anything important, I'll try and update this info.
