Further considerations

This section collects notes on tools and software that I have not yet decided to merge into the main workflow on HPC2; they are kept here for possible future technology selection.

List of typical HPC software stacks:

OpenHPC

Not included, using spack instead

Official repo; note it has a GitHub wiki page.

Cool wiki list of the tools it contains, and possibly an important list of software that might be useful for HPC in general.

Not a valid option for Debian?

Easybuild

Not included, using spack instead

An alternative to Spack.

Some comparison posts: 1, 2

Modules

Not included; using Lmod via Spack

site

Not a fan of this due to the language it is written in (Tcl).

Lmod

An alternative to Modules; merged into the main workflow via Spack.
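As a sketch of how this works (assumes `spack` is already on `PATH`; the init-script path follows Spack's documented layout but may differ per installation):

```shell
# Install Lmod through Spack.
spack install lmod

# Source Lmod's bash init script from the Spack-installed location.
. $(spack location -i lmod)/lmod/lmod/init/bash

# Module commands are now available for Spack-generated modulefiles.
module avail
```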

Lustre or BeeGFS

File systems for HPC.

Lustre: needs kernel patches; not considered for mini clusters.

Some configuration reference on Lustre: post

Small-files benchmarking on distributed file systems: post

Analysis of Six Distributed File Systems

GlusterFS

Seems very promising and easy to configure, and more suitable for a small cluster. But after some reading, it does not seem to be a good choice as the main file system for HPC.

Discussion comparing GlusterFS and Lustre

Install on Ubuntu 18.04
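For reference, a minimal replicated-volume setup sketch (hostnames `node1`/`node2`, the brick path, and the mount point are all placeholders, not a tested config):

```shell
# On node1: add the second node to the trusted storage pool.
gluster peer probe node2

# Create a two-way replicated volume from one brick per node.
gluster volume create gv0 replica 2 \
    node1:/data/brick1/gv0 node2:/data/brick1/gv0

gluster volume start gv0

# On a client: mount the volume via the FUSE client.
mount -t glusterfs node1:/gv0 /mnt/gv0
```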

Desktop

ThinLinc

A remote desktop server specially designed for clusters: site

X11 forwarding

See here. Merged into the main workflow.
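The basic usage, for the record (hostname is a placeholder):

```shell
# Forward X11 from the cluster back to the local display.
ssh -X user@hpc2.example.org

# Trusted forwarding (-Y) bypasses X11 SECURITY extension restrictions;
# faster, but only for hosts you trust.
ssh -Y user@hpc2.example.org

# On the remote side, X clients then render on the local screen, e.g.:
xeyes
```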

Globus

Large-file transfer service.

Name Service Switch

man nsswitch.conf
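For example, a typical `/etc/nsswitch.conf` fragment that checks local files first and then LDAP (the `ldap` entries assume an NSS LDAP module such as `libnss-ldap` or sssd is configured; this is an illustrative fragment, not my current setup):

```
passwd: files ldap
group:  files ldap
shadow: files ldap
hosts:  files dns
```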

LDAP

OpenLDAP for user management: tutorial

OpenLDAP configuration: post
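A quick sanity-check sketch once the server is up (the URI and base DN are placeholders for whatever the deployment uses):

```shell
# Anonymous simple-bind search against a local OpenLDAP server.
ldapsearch -x -H ldap://localhost -b "dc=example,dc=com"

# Confirm NSS actually resolves LDAP users (requires the NSS side wired up).
getent passwd someuser
```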

Backup and sync

Borg Backup

restic

Decided to try this in the main workflow.

Very cool software with friendly and clear docs!
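The basic workflow I would start from (repository path and source directory are placeholders):

```shell
# One-time: create an encrypted repository.
restic init --repo /srv/backup/repo

# Back up a directory; repeat runs are incremental and deduplicated.
restic -r /srv/backup/repo backup /home/user

# List snapshots and verify repository integrity.
restic -r /srv/backup/repo snapshots
restic -r /srv/backup/repo check
```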

General resources

Parallel schemes beyond MPI

Hadoop

Spark

Chapel

A language designed for parallel computing.
