Further considerations

In this part, I keep notes on tools and software that I haven't yet decided to merge into the main workflow on HPC2; they are recorded here for possible future technology selections.

List of typical HPC software stacks and tools:

OpenHPC

Not included, using spack instead

Official repo, note it has a github wiki page

The wiki has a useful list of the tools it contains, and possibly an important list of software that might be useful for HPC in general.

Possibly not a valid option for Debian?

Easybuild

Not included, using spack instead

An alternative to spack.

Some comparison posts: 1, 2

Modules

Not included; using Lmod installed via spack instead

site

Not a fan of this, due to the language it is written in (Tcl).

Lmod

An alternative to Modules; merged into the main workflow under spack
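
A rough sketch of this workflow (assuming spack is already installed and on PATH; the `gcc` module is just an example of an installed package):

```shell
# Install Lmod itself through spack
spack install lmod

# Initialize Lmod in the current bash session; the init script lives
# under the spack-resolved install prefix
. "$(spack location -i lmod)/lmod/lmod/init/bash"

# (Re)generate spack's Lmod module files for installed packages
spack module lmod refresh --delete-tree -y

# Installed packages can now be loaded as modules
module avail
module load gcc
```

The nice part of this route is that module files are generated automatically from spack's install tree instead of being written by hand.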

Lustre or BeeGFS

Parallel file systems for HPC

Lustre: needs kernel patches, so not considered for mini clusters.

Some configuration references on Lustre: post

Small-file benchmarking on distributed file systems: post

Analysis of Six Distributed File Systems

GlusterFS

Seems very promising and easy to configure, and more suitable for a small cluster. But after some reading, it does not appear to be a good choice as the main file system for HPC.

Discussion comparing GlusterFS and Lustre

Installation on Ubuntu 18.04
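
A minimal setup sketch on Ubuntu 18.04 (the host names `node1`/`node2`, the brick path, and the volume name are placeholders for illustration):

```shell
# On every node: install and start the gluster daemon
sudo apt-get update && sudo apt-get install -y glusterfs-server
sudo systemctl enable --now glusterd

# From node1: form the trusted storage pool
sudo gluster peer probe node2

# Create and start a 2-replica volume backed by one brick per node
sudo gluster volume create gv0 replica 2 \
    node1:/data/brick1 node2:/data/brick1
sudo gluster volume start gv0

# On a client: mount the volume via the FUSE client
sudo mkdir -p /mnt/gv0
sudo mount -t glusterfs node1:/gv0 /mnt/gv0
```

Note that `replica 2` setups are prone to split-brain and gluster will warn about it; an arbiter or `replica 3` is the usually recommended layout.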

Desktop

ThinLinc

A remote desktop server specially designed for clusters: site

X11 forward

See here. Merged into the main workflow
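
For reference, the client side is just (host name is a placeholder; the server must have `X11Forwarding yes` in its sshd_config):

```shell
# Forward X11 from the remote host (use -Y for trusted forwarding)
ssh -X user@cluster.example.org

# On the remote side, DISPLAY should now be set, e.g. localhost:10.0
echo "$DISPLAY"

# Any X client (xclock here as an example) then renders locally
xclock
```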

Globus

A large-file transfer service

Name Service Switch

See man nsswitch.conf.
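
For example, pointing user and group lookups at LDAP in /etc/nsswitch.conf looks roughly like this (assuming an NSS LDAP backend such as libnss-ldap or sssd is installed; order determines lookup precedence):

```
passwd: files ldap
group:  files ldap
shadow: files ldap
hosts:  files dns
```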

LDAP

OpenLDAP for user management: tutorial

OpenLDAP configuration: post
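
Once the server is up, a quick sanity check with the OpenLDAP client tools (the base DN `dc=example,dc=com`, the admin DN, and the user `alice` are placeholders):

```shell
# Anonymous simple-bind search against the local server
ldapsearch -x -H ldap://localhost -b "dc=example,dc=com"

# Authenticated search as the admin user (-W prompts for the password)
ldapsearch -x -H ldap://localhost \
    -D "cn=admin,dc=example,dc=com" -W \
    -b "ou=People,dc=example,dc=com" "(uid=alice)"
```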

Backup and sync

Borg Backup

restic

Decided to try this in the main workflow.

Very cool and nice software with friendly and clear docs!
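
The basic restic loop I would try first (the repository path and the backed-up directory are examples; sftp/s3 backends also work):

```shell
# Initialize a repository; restic encrypts everything in it
restic init --repo /srv/backup/restic-repo

# Back up a directory; data is deduplicated across snapshots
restic -r /srv/backup/restic-repo backup /home

# List snapshots, then expire old ones by a retention policy
restic -r /srv/backup/restic-repo snapshots
restic -r /srv/backup/restic-repo forget \
    --keep-daily 7 --keep-weekly 4 --prune
```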

General resources

Parallel schemes beyond MPI

Hadoop

Spark

Chapel

A language designed for parallel computing
