A complete walkthrough of a Chia cryptocurrency farming (mining) system, from component selection to software configuration.
Chia is a new cryptocurrency that aims to achieve decentralized consensus with lower energy consumption per transaction than its peers. It does this via a proof-of-space algorithm instead of proof of work.
While proof of work relies on continuously computing a math problem (usually hashes such as SHA256) in an attempt to find results that match a desired pattern, Chia instead relies on checking for the existence of certain properties in unique pre-generated files called plots.
This is done in a two-step process called farming, consisting of:
- Plotting, where large unique files (usually 101.6GB each) are created.
- Harvesting, where the previously created files are checked for a chance to win that block's reward (currently 2 Chia).
The more plots (greater storage used) a node has, the higher the chance of winning that block’s reward. There are 4608 chances to win daily and the probability of winning that block’s reward is based primarily on the number of plots one owns in proportion to the total network size. An earnings calculator is provided here.
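To make the proportionality concrete, here is a rough back-of-envelope sketch. The farm size and netspace figures below are made-up examples for illustration, not live network data:

```shell
# Expected reward sketch: 4608 signage points/day, 2 XCH per win,
# win probability proportional to your share of total netspace.
my_space_tib=100                      # hypothetical farm size, in TiB
net_space_tib=$((30 * 1024 * 1024))   # hypothetical 30 EiB netspace, in TiB
xch_per_day=$(awk -v s="$my_space_tib" -v n="$net_space_tib" \
  'BEGIN { printf "%.4f", 4608 * 2 * s / n }')
echo "expected XCH/day: $xch_per_day"
```

At these example numbers, a 100 TiB farm would expect only a few hundredths of an XCH per day, which is why maximizing total storage matters.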
Therefore, to maximize rewards we want to have the largest amount of storage possible and enough computing resources to fill up said storage with plots.
As with most design problems, there is an effectively infinite number of valid device combinations that can be used to farm Chia. To help narrow the options, I started by deciding on some constraints and priorities:
- The cost in $/TB should be minimized despite additional complexity (within reason).
- Due to my limited time available to dedicate to this project, the system should be mostly composed of off the shelf parts (no custom PCBs, custom chassis, etc).
Plotting — Processing
Depending on the hardware setup, a single plot can take anywhere from 4 to 20 hours to create. Therefore, choosing the right plotting hardware can be the difference between filling your entire farm in days versus months.
It is important to consider not only the speed at which each plot is made but also the number of concurrent plots that can be created. For instance, a setup that creates 10 concurrent plots at 12 hours each will produce 20 plots (~2TB/day) whereas a setup that plots 1 concurrent plot every 4 hours will only produce 6 plots per day (0.6TB/day).
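The comparison above is just arithmetic, sketched below (the ~101.6GB-per-plot figure is the k=32 size quoted earlier):

```shell
# plots/day = concurrent jobs * 24 / hours per plot
plots_a=$(( 10 * 24 / 12 ))   # setup A: 10 concurrent, 12 h each
plots_b=$((  1 * 24 /  4 ))   # setup B: 1 at a time, 4 h each
tb_a=$(awk -v p="$plots_a" 'BEGIN { printf "%.1f", p * 0.1016 }')
echo "setup A: $plots_a plots/day (~${tb_a} TB), setup B: $plots_b plots/day"
```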
To minimize cost, I decided to search for decommissioned servers. These servers have some interesting properties that could make them powerful plotters, including:
- A large number of cores allows several plots to run in parallel. Phase 1 (around half the plot time) is multithreaded (usually configured to use between 2 and 4 threads), but the remaining phases are single threaded.
- Available RAM is usually in excess of that required to plot (~4GB/concurrent plot).
- Designed to operate at high load for extended periods of time.
After going over dozens of options and possible configurations, I settled on an HP ProLiant DL380p Gen8 with 2x Intel E5-2670 CPUs and 192GB of DDR3 RAM. With hyper-threading enabled, this server could hypothetically run up to 16 simultaneous plots in phase 1, for under $400.
Plotting — Temporary Storage
Due to the high write load during the plot creation (around 1.6TB written for a k=32 plot), the choice of temporary storage medium can have a significant impact on the plotting time and cost. A few points to consider:
- The faster the storage, the faster plots can be created. This thread discusses 4 hour plot times using RAMDISK (mount a folder to RAM).
- Given the large write volume, it is important to consider the endurance of the medium. A consumer grade 1TB NVMe SSD usually has around 600 TBW (terabytes written) endurance before it is expected to fail which would mean it could make around 375 plots.
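The endurance estimate above works out as follows (the 600 TBW and ~1.6 TB-per-plot figures are the ones quoted in the text):

```shell
# Plots a consumer SSD can make before reaching its rated endurance.
tbw=600          # rated terabytes written for a typical 1TB NVMe SSD
tb_per_plot=1.6  # approximate data written per k=32 plot
plots_before_wear=$(awk -v t="$tbw" -v w="$tb_per_plot" \
  'BEGIN { printf "%.0f", t / w }')
echo "plots before rated wear-out: $plots_before_wear"
```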
I ended up opting to buy a 12-bay HP StorageWorks D2600 with 15K 450GB SAS drives. With this setup, I can simultaneously plot to each drive without risking IO contention at the drive level. The JBOD + drives was significantly cheaper than an equivalent setup with NVMe drives (without even factoring the replacement cost after TBW has been reached).
Farm — Long Term Storage
Unlike the plotting storage, the harvesting storage does not require high throughput or IOPS. For this reason, minimizing cost was the highest priority.
There are several possible setups some of which are discussed here. However, after searching on eBay for second-hand options, a few things became clear:
- The larger 3.5″ hard drives tend to have the lowest cost. This makes sense since most devices nowadays have moved to the smaller 2.5″ or 1.8″ sizes.
- SAS hard drives (not SSDs) seemed to have the lowest cost per TB with some lots as low as $10/TB (as of May 2021). These SAS drives are not compatible with SATA (consumer) boards and are usually sold on eBay after being decommissioned from a company’s data center.
Once I narrowed in on using 3.5″ SAS hard drives, I needed to find a way to actually connect them to my harvester/plotter. Sticking with the constraint of not building the enclosure/backplane I started looking for used enclosures that could house these drives while maintaining the lowest cost per bay.
I came across several options on eBay and ended up purchasing a few different models including a 24-bay HP 3PAR and two 12-bay IBM DS3512.
The final list of components is shown below. It includes the main components discussed above but also ancillary parts that are required to put the system together.
A worthy mention is the HBA card: the PCIe card that exposes the external SAS connectors to which the cables from the JBODs will connect. When buying one, ensure you are getting an initiator target (IT) mode card, which exposes the drives directly to the OS, as opposed to an IR (integrated RAID) mode card. As a nice-to-have, you may want a card already flashed with newer firmware (version 20).
Setting Up — Hardware
The setup is mostly intuitive: cables connect to the holes in which they fit. One thing worth noting is that the SAS connections between the JBODs and the plotting/harvesting machine can be daisy-chained.
In my case, I have two cables leaving the server (one on each port from the HBA). One of the cables connects to the input of the temporary storage array (HP D2600) and the output of that array connects to the HP 3PAR array. The other cable connects to the first IBM array and the output of that connects to the second IBM array. The JBODs usually have an input (primary) port and an output port (usually labeled with an outward arrow).
Setting Up — Software
I installed Ubuntu 20.04 LTS on the server since it's a widely used Linux distribution, which means troubleshooting forums are easy to find if anything goes wrong. The steps below outline the remaining configuration.
Step 1: Ensure All Drives are Available
The first step is to check which drives are being detected by the OS. This can be accomplished by running the `lsscsi` command; its output is shown below.
Note that the above command does not provide information on the file system or size of the drives. For that, run the `lsblk` command.
Important: not all drives appear under both commands! Drives that appear under `lsscsi` but not `lsblk` may have an incompatibility that prevents the OS from making them available for mounting. An example of this is shown below.
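One way to spot such drives is to diff the two listings. The helper below is a hedged sketch: it operates on plain device lists, fed here with illustrative data; on a live system you would generate the inputs from `lsscsi` and `lsblk` as shown in the usage comment (`lsblk -o PATH` requires a reasonably recent util-linux).

```shell
# missing_devices: given two files listing device paths (one per line),
# print the paths present in the first list but absent from the second,
# i.e. drives the kernel detected but refused to expose as block devices.
missing_devices() {
  sort "$1" > /tmp/scsi.sorted
  sort "$2" > /tmp/blk.sorted
  comm -23 /tmp/scsi.sorted /tmp/blk.sorted
}

# Illustrative data: sdaw is visible to lsscsi but absent from lsblk.
printf '/dev/sda\n/dev/sdaw\n' > /tmp/scsi.txt
printf '/dev/sda\n'            > /tmp/blk.txt
missing=$(missing_devices /tmp/scsi.txt /tmp/blk.txt)
echo "not exposed as block devices: $missing"
```

On a live system the inputs would be produced with `lsscsi | awk '{print $NF}' > /tmp/scsi.txt` and `lsblk -dn -o PATH > /tmp/blk.txt`.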
In my case, the issue was that the drives' sector size was 520 bytes, which is not supported by my installed Linux kernel (discussion on this topic can be found here and here). You can check whether this applies to you by looking through the `dmesg` log for an error message like `[sdaw] Unsupported sector size 520`.
To solve this, I reformatted the drives to a 512-byte block size with `sg_format -v --format --size=512 /dev/sdX`. This command can take a significant amount of time to run (several hours); its output is shown below. Upon completion, the drive should show up in the `lsblk` output.
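If several drives are affected, the reformat can be queued per drive. The sketch below only prints the commands: the drive list is hypothetical, and `sg_format` (from sg3_utils) destroys all data on the target and runs for hours per drive, so drop the leading `echo` only when you are sure.

```shell
# Dry run: print one sg_format invocation per affected drive.
# DRIVES is a hypothetical list; adjust to the devices dmesg complained about.
DRIVES="/dev/sdaw /dev/sdax"
for dev in $DRIVES; do
  echo "sg_format -v --format --size=512 $dev"
done
```

Since each reformat is drive-local, the real commands can also be run in parallel (one per drive) to save time.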
Step 2: Create File System on Drives
To format the drives with the ext4 file system, I ran the following command: `sudo mkfs -t ext4 --verbose /dev/sda`.
Step 3: Mount Drives
Now that we can access the drives and they have been formatted with the desired file system, we can mount these drives.
- Create the folders where we will be mounting the drives. For example: `/mnt/farm/23` for the drives that will store the final plots, and `/mnt/plot-tmp/11` for the temporary plotting locations.
- Run `sudo blkid` to get the unique IDs of your drives (or partitions). It will output several lines such as `/dev/sdae: UUID="29494f44-2f75-4c01-a766-18755eb583d7" TYPE="ext4"`.
- Edit the fstab file with `sudo vim /etc/fstab` and associate each of the drives with its corresponding `/mnt/...` folder. Be careful not to edit the first lines of the file since those are required to mount the OS root drive. My final file is shown below.
- Run `sudo mount -a` to mount all drives specified in the fstab file. It only mounts drives that are not already mounted, so it is safe to run multiple times.
- Ensure users have access to the drives and their files by running `sudo chmod -R 777 /mnt/farm/00`.
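For illustration, a single fstab entry might look like the fragment below. The UUID shown is the one from the `blkid` example above, and its pairing with this mount point is hypothetical; substitute the UUIDs that `sudo blkid` reports for your own drives.

```
# /etc/fstab fragment -- one line per farm or plot-tmp drive
UUID=29494f44-2f75-4c01-a766-18755eb583d7  /mnt/farm/23  ext4  defaults  0  0
```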
Step 4: Run Chia Blockchain Software
- Follow the official instructions specified here to install the Chia blockchain (I did not install the GUI).
- Run `chia start farmer` to start the daemons for the wallet, harvester, etc.
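A sketch of the CLI steps at this point, assuming the stock chia-blockchain CLI is on your PATH. The plot directory is an example, and the block is guarded so it does nothing on machines where `chia` is not installed:

```shell
# Start the daemons, register a plot directory, and check farm status.
if command -v chia >/dev/null 2>&1; then
  chia start farmer                # start node, wallet, harvester, farmer daemons
  chia plots add -d /mnt/farm/12   # tell the harvester where plots live
  chia farm summary                # confirm plot count and sync status
  status="chia commands issued"
else
  status="chia CLI not installed"
fi
echo "$status"
```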
Step 5: Setup Plotman (Optional)
Plotman is a plotting manager that will take over the creation of new plotting jobs. It is a convenience tool (not required).
- Install Plotman following the instructions here.
- Edit `plotman.yaml` to match your plotter's specifications. My final file is shown below.
```yaml
user_interface:
  use_stty_size: True

directories:
  log: /home/plotter/plotman-logs
  tmp:
    - /mnt/plot-tmp/f00
    - /mnt/plot-tmp/f01
    - /mnt/plot-tmp/f02
    - /mnt/plot-tmp/f03
    - /mnt/plot-tmp/f04
    - /mnt/plot-tmp/f05
    - /mnt/plot-tmp/f06
    - /mnt/plot-tmp/f07
    - /mnt/plot-tmp/f08
    - /mnt/plot-tmp/f09
    - /mnt/plot-tmp/f10
    - /mnt/plot-tmp/f11
  dst:
    #- /mnt/farm/00 FULL
    #- /mnt/farm/01 FULL
    #- /mnt/farm/02 FULL
    #- /mnt/farm/03 FULL
    #- /mnt/farm/04 FULL
    #- /mnt/farm/05 FULL
    #- /mnt/farm/06 FULL
    #- /mnt/farm/07 FULL
    #- /mnt/farm/08 FULL
    #- /mnt/farm/09 FULL
    #- /mnt/farm/10 FULL
    #- /mnt/farm/11 FULL
    - /mnt/farm/12
    - /mnt/farm/13
    - /mnt/farm/14
    - /mnt/farm/15
    - /mnt/farm/16
    - /mnt/farm/17
    - /mnt/farm/18
    - /mnt/farm/19
    - /mnt/farm/20
    - /mnt/farm/21
    - /mnt/farm/22
    - /mnt/farm/23

scheduling:
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_stagger_phase_limit: 1
  tmpdir_max_jobs: 1
  global_max_jobs: 20
  global_stagger_m: 40
  polling_time_s: 30

plotting:
  k: 32
  e: False         # Use -e plotting option
  n_threads: 2     # Threads per job
  n_buckets: 128   # Number of buckets to split data into
  job_buffer: 8096 # Per job memory
```
Some points worth mentioning:
- Plotman does not stop scheduling to farm drives (as of the time of this writing) when the drive is full. Therefore, you need to remove them (or comment them out as above).
- Plotman will automatically add farm drives to the chia harvester.
- I set `tmpdir_max_jobs` to 1 since I am plotting to hard disks, which have poor seek performance compared to SSDs.
Step 6: Run the Plotter
At this point, all that is needed to start plotting is to run Plotman.
Note: the very long-running job plotting to `/dev/farm/usb2` is a debug run that is not meant to run to completion.
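As a hedged sketch of what running the plotter looks like with Plotman (subcommand names are from the Plotman README and may vary by version; the block is guarded so it is a no-op where `plotman` is absent):

```shell
# Check what Plotman is doing; `plotman plot` / `plotman interactive`
# are the usual entry points that actually schedule new jobs.
if command -v plotman >/dev/null 2>&1; then
  plotman status   # one-shot table of active plotting jobs and their phases
  note="plotman available"
else
  note="plotman not installed"
fi
echo "$note"
```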
Hope this was helpful in giving you an idea of what is required to farm Chia!
As of now, my farm is 1/3 of the way full and I plan on posting updates as it fills up and when I start re-plotting for pools.
Special thanks to Katie Gandomi for help with development.
If you found this article helpful, feel free to hit the clap button or donate some Chia (XCH) to my address: