I was given a dual 8-core Xeon SuperMicro server (32 threads) with 8 HD bays in use, 96GB RAM, 8x 6TB Western Digital drives in RAID1 ZFS mirrors (24TB usable), and a 120GB SSD boot volume tucked behind the front power panel, running non-GUI Debian. (Thanks to Kilo Sierra for the donation.) My first job was to calculate whether my PSU was up to the task I intended for it. I used a 500W PSU. From my calculations,
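As a rough sanity check, that kind of power budget can be sketched with shell arithmetic. All per-component wattages below are assumptions for illustration, not measured values for this hardware:

```shell
# Back-of-the-envelope power budget (all figures are assumed, not measured)
cpu_w=$((2 * 105))   # two Xeon sockets at ~105W TDP each
hdd_w=$((8 * 9))     # eight 3.5" platters at ~9W active each
ssd_w=3              # one SATA SSD boot volume
ram_w=$((12 * 3))    # twelve DIMMs at ~3W each
total=$((cpu_w + hdd_w + ssd_w + ram_w))
echo "estimated draw: ${total}W of a 500W budget"
```

Under these assumptions the estimate lands near 321W, comfortably inside a 500W envelope, though peak spin-up current on eight platters deserves its own margin.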
+ | |||
**Update**: I am now running a newer server with 48 threads, 12 hard drive bays, 384GB RAM, 4 two-way mirrors of Samsung enterprise SSDs for the primary VM zpool, and 2 two-way mirrors of 16TB platters for the backup zpool and some mailservers. These are SAS drives now, not SATA. The server can handle up to 1.5TB of RAM.
  zpool create -f -m /mnt/pool pool mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
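For the newer server described in the update, the equivalent layout might be created along these lines. Pool names and device letters here are hypothetical; on a real SAS backplane, stable `/dev/disk/by-id` paths are safer than `sdX` letters, which can reorder across reboots:

```shell
# Hypothetical pools for the newer box (device names are placeholders):
# four two-way SSD mirrors for VMs, two two-way 16TB platter mirrors
# for backups and mailservers.
zpool create -f -m /mnt/vms vms \
  mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh
zpool create -f -m /mnt/backup backup \
  mirror sdi sdj mirror sdk sdl
zpool status   # confirm every vdev reports ONLINE before loading data
```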