Further considerations

In this part I keep notes on tools and software that I have not fully decided to merge into the main HPC2 workflow; they are recorded for possible future technology selection.

List of typical HPC sites for reference: uchicago, princeton, ulhpc, qmul, pku-math, CWRU, sherlock-stanford, UMBC, PKU, UIOWA, Yale, Sheffield, Odyssey-Harvard, Niflheim, nersc.

OpenHPC

Not included; we use spack instead. Possibly not a valid option for Debian? See the official repo and the wiki list of the tools it contains.

Easybuild

Not included; we use spack instead. EasyBuild is an alternative to spack.
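
Since spack is the tool chosen over OpenHPC and EasyBuild, a minimal sketch of its typical cycle may be useful here; the package name hdf5 is only an illustrative choice.

    spack list hdf5        # search for packages by name
    spack install hdf5     # build and install the package plus its dependencies
    spack find             # list everything installed so far
    spack load hdf5        # make the package available in the current shell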

Modules

Not included; we use Lmod installed via spack instead. Not a fan of this one because of the language it is written in (Tcl).

Lmod

An alternative to Modules; merged into the main workflow as part of the spack setup.
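
For reference, a minimal Lmod session as it looks with spack-generated modules; the module name gcc/9.3.0 is a hypothetical example.

    module avail           # list modules visible on the current MODULEPATH
    module spider gcc      # Lmod-specific search across all modules
    module load gcc/9.3.0  # hypothetical module name
    module list            # show what is currently loaded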

Lustre or BeeGFS

Parallel file systems for HPC.

Lustre needs kernel patches, so it is not under consideration for mini clusters.
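
For completeness, mounting a Lustre file system on a client looks roughly like this; the MGS address and fsname below are hypothetical.

    # client side, with the lustre client modules installed
    mount -t lustre mgs01@tcp0:/lfs01 /mnt/lustre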

GlusterFS

Seems very promising and easy to configure, and more suitable for a small cluster. But after some reading, it does not appear to be a good choice as the main file system for HPC.
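
"Easy to configure" means something like the following sketch of a two-node replicated volume; hostnames and brick paths are hypothetical.

    # on node1, after installing glusterfs-server on both nodes
    gluster peer probe node2          # form the trusted storage pool
    gluster volume create gv0 replica 2 \
        node1:/data/brick1/gv0 node2:/data/brick1/gv0
    gluster volume start gv0
    # on a client
    mount -t glusterfs node1:/gv0 /mnt/gv0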

Desktop

ThinLinc

A remote desktop server specially designed for clusters.

X11 forward

Merged into the main workflow.
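
A quick check that forwarding works from a login node; the host name is a placeholder and xeyes is just a convenient test client.

    ssh -X user@login.hpc2.example   # hypothetical login node
    xeyes                            # a window should pop up on the local display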

Globus

A large-file transfer service.
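
If adopted, transfers would be driven by something like the globus-cli sketch below; the endpoint IDs and paths are placeholders, and the exact invocation should be checked against the current docs.

    pip install globus-cli
    globus login                     # browser-based authentication
    globus transfer SRC_ENDPOINT_ID:/path/src \
        DST_ENDPOINT_ID:/path/dst --label "hpc2-test"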

Name Service Switch

See man nsswitch.conf.
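
The relevant excerpt of /etc/nsswitch.conf when LDAP accounts are in play looks like this sketch; whether the extra source is ldap (nss-pam-ldapd) or sss (sssd) depends on the setup.

    # /etc/nsswitch.conf (excerpt)
    passwd: files ldap
    group:  files ldap
    shadow: files ldap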

LDAP

openLDAP for user management; see the tutorial and the configuration post.
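
A quick sanity check against an openLDAP server; the server URI and base DN are hypothetical.

    # anonymous simple bind, looking up a single user
    ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=alice)"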

BACKUP and sync

See the Arch wiki page on backup programs for a broad comparison.

Borg Backup
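
A minimal Borg cycle for orientation; the repository path and archive name are hypothetical.

    borg init --encryption=repokey /srv/backup/borg-repo
    borg create /srv/backup/borg-repo::home-2020-01-01 /home
    borg list /srv/backup/borg-repo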

restic

Decided to try this in the main workflow. Very cool and nice software, with friendly and clear docs!
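
A minimal restic cycle following those docs; the repository path is hypothetical.

    restic init --repo /srv/backup/restic-repo
    restic -r /srv/backup/restic-repo backup /home
    restic -r /srv/backup/restic-repo snapshots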

general resources

Some comparison posts: Analysis of Six Distributed File Systems, and a discussion on the comparison between gluster and lustre.

Some configuration references on lustre, including an install guide for Ubuntu 18.04.

Small-file benchmarking on distributed file systems.

Parallel Scheme beyond MPI

A nice blog post reflects on the drawbacks of the MPI style and compares it with spark and chapel.

hadoop

spark
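
Running the bundled SparkPi example is a common smoke test; the jar path and Spark/Scala versions below are assumptions that depend on the installed release.

    spark-submit --master local[4] \
        --class org.apache.spark.examples.SparkPi \
        $SPARK_HOME/examples/jars/spark-examples_2.12-2.4.5.jar 100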

chapel

A language designed for parallel computing. Note that it has a GitHub wiki page, and possibly an important list of software that might be useful for HPC in general.
