In this part, I keep notes on tools and software that I have not yet decided to merge into the main workflow on HPC2; they are kept here for possible future technology selection.
List of typical HPCs:
Not included, using spack instead
Official repo, note it has a GitHub wiki page
Cool wiki list of the tools they contain, and possibly an important list of software that might be useful for HPC in general.
Not a valid option for Debian?
Not included, using spack instead
Alternatives to spack.
Some comparison posts: 1, 2
Not included, using lmod via spack
site
Not a fan of this due to the language it is written in.
Alternatives to modules; merged into the main workflow under spack. See the sketch below.
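For reference, a minimal shell sketch of how spack can provide lmod and regenerate module files; the modules.yaml setting that enables lmod and the arch-specific path below are assumptions, not taken from this cluster's actual configuration:

```bash
# Install Lmod itself through spack and source its init script
spack install lmod
. $(spack location -i lmod)/lmod/lmod/init/bash

# Regenerate Lmod module files for installed packages
# (assumes `lmod` is enabled under modules: in modules.yaml)
spack module lmod refresh

# Point `module` at the generated hierarchy; the arch directory is a
# placeholder and will differ on the real machines
module use $SPACK_ROOT/share/spack/lmod/linux-ubuntu20.04-x86_64/Core
module avail
```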
Lustre or BeeGFS
file system for HPC
Lustre: needs a kernel patch, so not considered for mini clusters.
Some configuration reference on Lustre: post
Small-file benchmarking on distributed FS: post
Analysis of Six Distributed File Systems
Seems very promising and easy to configure, and more suitable for a small cluster. But after some reading, it does not seem to be a good choice as the main FS for HPC.
Discussion on the comparison between Gluster and Lustre
Install on Ubuntu 18.04; see the volume-setup sketch below.
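To back up the "easy to configure" impression above, a rough sketch of a small replicated GlusterFS volume on Ubuntu; hostnames node1..node3 and the brick paths are placeholders, and bricks should live on a dedicated partition rather than the root filesystem:

```bash
# on every node
sudo apt install glusterfs-server
sudo systemctl enable --now glusterd

# on node1: form the trusted storage pool
sudo gluster peer probe node2
sudo gluster peer probe node3

# 3-way replicated volume, one brick per node (paths are placeholders)
sudo gluster volume create gv0 replica 3 \
    node1:/data/brick1/gv0 node2:/data/brick1/gv0 node3:/data/brick1/gv0
sudo gluster volume start gv0

# on a client node
sudo mount -t glusterfs node1:/gv0 /mnt/gv0
```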
A remote server specially designed for clusters: site
See here. Merged into the main workflow.
large file transfer service
Name Service Switch
man nsswitch.conf
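A minimal sketch of what an LDAP-aware /etc/nsswitch.conf fragment can look like; this assumes the classic `ldap` NSS module (nss-pam-ldapd), and the service name would be `sss` when sssd is used instead:

```
# /etc/nsswitch.conf (fragment): check local files first, then LDAP
passwd:  files ldap
group:   files ldap
shadow:  files ldap

hosts:   files dns
```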
OpenLDAP for user management: tutorial
OpenLDAP configuration: post
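As a reminder of what user management through OpenLDAP looks like in practice, a minimal LDIF sketch for a POSIX user; the suffix dc=example,dc=com, the IDs, and all attribute values are placeholders:

```
# user.ldif: minimal posixAccount entry (all values are placeholders)
dn: uid=alice,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
cn: Alice Example
sn: Example
uid: alice
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/alice
loginShell: /bin/bash
```

Such an entry can be loaded with `ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f user.ldif`, with the admin DN again being an assumption.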
Backup and sync
decided to try this in the main workflow
very cool software with friendly and clear docs!
general resources
Parallel Scheme beyond MPI
language designed for parallel computing