ljcucc blog

Hi! Welcome to my new blog (https://blog.ljc.cc)!! (。・ω・。)ノ I'll post here about news, research, ideas, and more... Welcome, and enjoy your stay! Come join the official discussion chat on Matrix: #ljcucc-blog-discuss:matrix.org


Transform Your Chromebook into a Docker Dev Machine with Virtual Disks

Estimated reading time: 8 minutes

Available in 繁體中文  

Hello everyone! Recently, I had an old, unused Chromebook lying around. Thinking it’s a shame to let it gather dust, I got the idea to turn it into a simple Docker development server. After all, using a lightweight machine for lightweight services makes many things much more convenient.

However, I quickly ran into the first, and most direct, bottleneck: storage space. This Chromebook’s built-in eMMC drive only had a pathetic 32GB. That might be fine for casual use, but you have to understand, with Docker, just building an image or pulling a few layers quickly eats up space – it’s nowhere near enough!

The Problem: exFAT Permission Issues

Since the built-in space wasn’t enough, my immediate reaction was to mount Docker’s data root directory onto an external USB hard drive that I rarely used. This seemed to solve the storage space problem at a stroke.

However, new problems quickly followed. Previously, to ensure this USB drive could be smoothly read from and written to on both my Mac and Linux machines, I had formatted it as exFAT. Now, the problem emerged: while I had the storage space, using it with Docker introduced permission issues! Specifically, some images perform operations like chown (changing file ownership) during the build process, which completely failed on the exFAT format.

While exFAT offers good cross-platform compatibility, it fundamentally doesn’t support the POSIX permission model required by Linux systems. This is precisely why commands like chown fail to work. It took several trial-and-error attempts before I realized I had formatted the drive as exFAT (where ’ex’ stands for ’exasperating,’ or ’excessively fat and sickening’).

The Solution: Virtual Disk

Since exFAT itself can’t provide POSIX permissions, could I “fake” a POSIX-compliant file system on top of the exFAT drive?

My solution was this: on the exFAT drive, first create a large virtual disk file (.img) using fallocate or dd. Then, mount this .img file to the system and format it as ext4. This way, Docker runs within the ext4 file system, and all permission issues are resolved! This is like setting up an “embassy” area in a country that doesn’t support a specific language, allowing all activities related to that language to proceed normally.

This method not only perfectly solved the permission problem but also allowed me to flexibly plan the size of Docker’s storage space, preventing me from being limited by the Chromebook’s cramped built-in eMMC storage.

Step 1: Create the Virtual Disk File (.img)

First, I need to create a sufficiently large empty file on my exFAT formatted USB drive. This file will serve as our virtual disk. I named it docker_disk.img.

# First, navigate to your USB drive's mount point.
# Assuming your USB drive is mounted at /mnt/usb_drive
cd /mnt/usb_drive

# Create a 20GB virtual disk file. You can adjust the size according to your needs.
# I used status=progress to print live progress output, which makes waiting less anxious.
sudo dd if=/dev/zero of=docker_disk.img bs=1M count=20000 status=progress
  • if=/dev/zero: Reads data from /dev/zero, essentially filling the file with zeros.
  • of=docker_disk.img: The name of the output file.
  • bs=1M: Sets the block size to 1 Megabyte.
  • count=20000: Writes 20,000 blocks, so the total size is 20,000 × 1 MB ≈ 20 GB. If you want the size to be an exact power of two, pick a count that is a multiple of 1024 (i.e. $\log_2(\text{size}) \in \mathbb{N}$).
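For an exact power-of-two size, the block count can be computed from 1024-MiB multiples. A sketch, reusing the same file name (fallocate, which allocates space without writing zeros, may not be supported by every exFAT driver):

```shell
# Exactly 20 GiB: 20 * 1024 blocks of 1 MiB each
sudo dd if=/dev/zero of=docker_disk.img bs=1M count=$((20 * 1024)) status=progress

# Or, where the file system supports it, allocate the space instantly:
# sudo fallocate -l 20G docker_disk.img
```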

Step 2: Format the Virtual Disk File as ext4

Now, this docker_disk.img file is just an empty shell. We need to format its interior with the Linux native ext4 file system so it can support POSIX permissions.

# Note: mkfs may ask for confirmation because the target is a regular file, not a block device
sudo mkfs.ext4 docker_disk.img

After execution, you will see mkfs.ext4 creating the file system, including information like inode count, block size, etc.
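You can verify the result before mounting: file identifies what the image now contains, and dumpe2fs (part of e2fsprogs) prints the superblock details mentioned above.

```shell
# file should now identify the image as an ext4 file system
file docker_disk.img

# Inspect the superblock: block count, inode count, enabled features
sudo dumpe2fs -h docker_disk.img
```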

Step 3: Mount the Virtual Disk

Once the file system is formatted, we need to mount this virtual disk file to a directory on the system so it can be used like a normal disk partition.

# Create a mount point, for example, in /mnt or your home directory
sudo mkdir /mnt/docker_data

# Mount the virtual disk
sudo mount -o loop docker_disk.img /mnt/docker_data
  • -o loop: This option is crucial; it tells the mount command to treat the docker_disk.img file as a loop device, mounting it as if it were a physical hard drive.
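To confirm the mount worked, df and findmnt should both show the new file system (findmnt also reveals which loop device was assigned):

```shell
# Capacity and usage of the newly mounted virtual disk
df -h /mnt/docker_data

# Source device (e.g. /dev/loop0) and file system type
findmnt /mnt/docker_data
```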

Step 4: Integrate Docker Storage

Now, /mnt/docker_data is a directory backed by an ext4 file system, fully supporting chown and other POSIX permission operations. Next, I’ll integrate it with Docker.

Option A: Change Docker’s Data Root Directory (data-root) – For Advanced Needs

If you want all of Docker’s data (including images, containers, volumes, etc.) to be stored on this virtual disk, you can modify Docker’s daemon.json configuration file. This is a more comprehensive change, recommended when you don’t have existing Docker data, or after creating a backup.

  1. Edit or create the /etc/docker/daemon.json file:

    {
      "data-root": "/mnt/docker_data/docker_root"
    }
    
  2. Create the directory specified by data-root and restart the Docker service:

    sudo mkdir -p /mnt/docker_data/docker_root
    sudo systemctl restart docker
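One caution: a syntax error in daemon.json will prevent dockerd from starting at all, so it can be worth validating the file before the restart (python3 is used here purely as a convenient JSON checker). Afterwards, Docker itself can confirm the new location:

```shell
# Validate the JSON before restarting the daemon
python3 -m json.tool /etc/docker/daemon.json

# After the restart, Docker should report the new data root
docker info --format '{{ .DockerRootDir }}'
```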
    

Option B: Mount into Containers via Volume (My personal recommendation, more flexible)

This is my most recommended and more flexible approach. You can mount specific directories from /mnt/docker_data into Docker containers that require permission support, using them as volumes. This means only data requiring special permissions needs to go through this ext4 virtual disk; other Docker data can remain in the default location.

For example, if your container needs an /app/data directory for chown operations:

# Create a directory in the virtual disk to store container data
sudo mkdir /mnt/docker_data/my_container_data

# Run your Docker container and mount that directory inside
sudo docker run -v /mnt/docker_data/my_container_data:/app/data my_image

Now, within the container’s /app/data directory, chown or any other permission-related operations will be correctly handled by the underlying ext4 file system.
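A quick smoke test of the permission fix might look like this (alpine is just an arbitrary small image; anything that can run chown works):

```shell
# chown inside the container now succeeds because the volume is backed by ext4
sudo docker run --rm \
  -v /mnt/docker_data/my_container_data:/app/data \
  alpine sh -c 'touch /app/data/probe && chown 1000:1000 /app/data/probe && ls -ln /app/data'
```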

Note

Mount Point Permissions: After mounting, the owner of the /mnt/docker_data directory is usually root. If you want your regular user to be able to write data directly into this directory (e.g., create files or directories), you might need to change its ownership:

sudo chown -R youruser:youruser /mnt/docker_data

Replace youruser with your actual username.

Performance Considerations: This virtual disk method will incur a slight performance overhead compared to running directly on a native ext4 partition, due to an extra layer of abstraction. However, for most development or lightweight applications, this overhead is usually acceptable.
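If you want to gauge that overhead on your own hardware, a crude sequential-write comparison with dd is usually enough. oflag=direct bypasses the page cache where the underlying file system supports it; the numbers are machine-specific:

```shell
# Write 256 MiB through the loop-mounted virtual disk
dd if=/dev/zero of=/mnt/docker_data/bench.tmp bs=1M count=256 oflag=direct
rm /mnt/docker_data/bench.tmp

# Same write against a native file system for comparison
dd if=/dev/zero of=/var/tmp/bench.tmp bs=1M count=256 oflag=direct
rm /var/tmp/bench.tmp
```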

How to Grow the Virtual Disk File’s Capacity

As your Docker projects grow larger, a 20GB virtual disk might no longer be enough. Don’t worry, the size of this virtual disk file (docker_disk.img) can be modified. Growing the virtual disk is a relatively simple and safe operation.

Two preliminary steps before performing any resizing:

1. Unmount the File System: Although ext4 supports hot resizing (adjusting size while mounted), for safety, I recommend unmounting it first.

sudo umount /mnt/docker_data

2. File System Check: Before resizing, check to ensure its integrity.

sudo e2fsck -f docker_disk.img
  • e2fsck: Checks and repairs ext2/ext3/ext4 file systems.
  • -f: Forces the check even if the file system appears clean.

1. Extend the Disk Image File

You can use truncate or dd to increase the size of the underlying image file. truncate is typically much faster because it simply extends the file’s logical size without writing any data.

Using truncate (recommended, quickly adds space):

# Add an additional 10GB of space to the existing file.
# If the original file was 20GB, this will make it 30GB.
sudo truncate -s +10G docker_disk.img
  • +10G: Indicates an increase of 10 gigabytes on top of the current file size. You can also use M for megabytes, etc.

Using dd (if you prefer, but slower for large increases):

# Append 10GB of zeros to the end of the file
sudo dd if=/dev/zero of=docker_disk.img bs=1M count=10000 oflag=append conv=notrunc
  • oflag=append: Opens the file in append mode, so the new zeros are written starting at the current end of the file. (Beware that seek= counts in units of bs, not bytes, so seeking by the byte size reported by stat would land far past the end.)
  • count=10000: Appends 10,000 MB (10GB).
  • conv=notrunc: Ensures dd does not truncate the existing file when opening it.

2. Re-mount the Disk Image

sudo mount -o loop docker_disk.img /mnt/docker_data

3. Resize the ext4 File System

Once the underlying file is larger, you can tell the ext4 file system to use this newly added space.

# Find the loop device assigned to your virtual disk (e.g., /dev/loop0)
losetup -a | grep docker_disk.img

# Execute the resize2fs command to extend the file system
sudo resize2fs /dev/loop0 # Replace /dev/loop0 with the actual loop device name
# Alternatively, while the image is unmounted, you can point resize2fs at the file itself:
# sudo resize2fs docker_disk.img
  • resize2fs will automatically detect the maximum available space and grow the file system to fill it.
  • If the file system is mounted, resize2fs can usually grow it “online” through its loop device, meaning you don’t have to unmount first. When possible, though, unmounting remains the safest practice.
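Because the image is just a file, the whole grow sequence can also run while it is unmounted, with resize2fs pointed directly at the file. A condensed sketch of the steps above:

```shell
sudo umount /mnt/docker_data            # skip if not currently mounted
sudo e2fsck -f docker_disk.img          # check integrity before resizing
sudo truncate -s +10G docker_disk.img   # grow the backing file
sudo resize2fs docker_disk.img          # grow the file system to fill it
sudo mount -o loop docker_disk.img /mnt/docker_data
df -h /mnt/docker_data                  # confirm the new capacity
```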

Conclusion

While the Chromebook’s built-in eMMC space and the exFAT drive’s permission issues initially left me frustrated, by cleverly creating an ext4 virtual disk on the exFAT drive, I successfully opened up a new world for my Docker environment. This not only resolved both the storage and permission pain points but also deepened my understanding of Linux file systems and Docker’s underlying operations.

This is my first time using an LLM to help me polish a draft, transforming my notes into a full blog post. Because my writing isn’t inherently very good, using an LLM to refine it truly made it much more readable. Many parts still have a strong “AI flavor,” though, such as excessive bullet points and overly dramatic narration, which aren’t typical of how I would express myself. Next time, I’ll try to figure out how to write more readable articles. (This paragraph, by the way, I wrote myself; otherwise, if I let the AI polish it, I’d have no idea what monstrosity it would produce.)

Unless specified, all blog posts are licensed under CC BY-NC-ND 4.0 . Please credit "ljcucc" and this site's URL when reposting.