How to Use $HOME area
$HOME is a filesystem associated with your login account. It has fixed quotas:
50GB quota for $HOME
Keep it clean and organized. You have read and write access to this area.
100GB total quota for $HOME + ZFS snapshots
ZFS snapshots are copies of added/deleted/rewritten data. They give you some data protection/backup capability. You have READ-ONLY access to the snapshots area. If your snapshots consume X GB more than 50GB, your $HOME quota is automatically reduced by that X amount.
|Snapshots are "point-in-time" copies of data. Your home area is snapshotted daily at a random time. Snapshots are kept for a period of time and then automatically deleted. Under normal use, the 100GB total limit for $HOME + snapshots is rarely reached. A file or directory is permanently deleted when the last snapshot that holds it is removed.|
|Snapshots do not protect you from all possible data loss. For example, if you create a file and then delete it a few hours later, that file is likely irretrievable. Lost data can only be recovered if it existed at the time a snapshot was taken and the snapshot is still available.|
|The ZFS snapshot capability is not the same as a selective backup. Selective backup was created to automatically save important files located in various file paths, including DFS filesystems. See the BeeGFS howto.|
|Every time a snapshot is taken, a virtual copy of all files at that time resides in the snapshot. When you delete a file, it is still in the snapshot. If you constantly create and delete files, many of the deleted files will remain in snapshots and consume unwanted space. This is why it is important to never put transient files in $HOME.|
Snapshots are kept in $HOME/.zfs/snapshot/. All files and directories in your $HOME are included in snapshots; you cannot exclude any file or directory from a snapshot. Snapshot schedule:
daily, keep last 8
weekly, keep last 6
Per this schedule, you have about 6 weeks before a file is permanently deleted. Any changes or file deletions that occurred more than 6 weeks ago are gone forever.
1.1. What to Store
STORE only important files that change relatively infrequently
DO NOT store large job output and error files
DO NOT store and then delete large data files. Such data is considered transient and should be stored on DFS filesystems.
DO NOT store large input data files used by computational jobs. Use DFS filesystems for this data.
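The relocation the rules above call for can be sketched as a small helper. Everything here is an assumption for illustration: the function name, and the DFS destination path in the usage comment, which you should replace with your own allocation.

```shell
# Sketch: move transient job data out of $HOME onto a DFS filesystem.
move_transient() {
    src="$1"    # directory in $HOME holding transient files
    dest="$2"   # destination directory on a DFS filesystem
    mkdir -p "$dest"         # create the destination if needed
    mv "$src"/* "$dest"/     # move the files; they stop accruing in snapshots
}

# hypothetical usage (substitute your lab's DFS path):
# move_transient "$HOME/job-output" /dfs6/pub/panteater/job-output
```

Once moved, the files no longer appear in future $HOME snapshots, although copies already captured in existing snapshots remain until those snapshots expire.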
2. Check $HOME quota
Your $HOME quota is 50GB, and another 50GB is for the snapshots. Changes to the contents of your $HOME are recorded daily in snapshots. How frequently and how much data you add, delete, or overwrite affects how much data you can store in $HOME. If you change the contents very often, the snapshots will exceed their quota very quickly.
To see your current quota usage do:
[user@login-x:~]$ df -h ~
Filesystem                        Size  Used Avail Use% Mounted on
10.240.58.6:/homezvol0/panteater   50G  3.5G   47G   7% /data/homezvol0/panteater
The ~ stands for your $HOME. The output above shows that user panteater has used 3.5GB of the 50GB allocation.
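The Use% column from df can also be parsed in a script so you get a warning before the quota is hit. This is a sketch under assumptions: the function name and the 90% threshold are illustrative, not part of the cluster's tooling. df -P prints one portable record per filesystem, which makes the awk parsing safe.

```shell
# Sketch: report $HOME usage as a bare percentage, then warn near quota.
home_usage_pct() {
    # field 5 of the second df -P line is Use%; strip the % sign
    df -P "${1:-$HOME}" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

pct=$(home_usage_pct)
if [ "$pct" -ge 90 ]; then
    echo "WARNING: \$HOME is ${pct}% full -- clean up before jobs start failing"
fi
```

Dropping a check like this into your shell profile makes the warning appear at login.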
If you want to see the usage by files and directories in $HOME:
[user@login-x:~]$ cd          (1)
[user@login-x:~]$ ls          (2)
bin                examples     local        perl5
biojhub3_dir       info         mat.yaml     R
classify-image.py  keras-nn.py  modulefiles  sbank-out
[user@login-x:~]$ du -s -h *  (3)
7.0M    bin
166M    biojhub3_dir
8.5K    classify-image.py
647K    examples
91K     info
4.5K    keras-nn.py
126M    local
4.5K    mat.yaml
60K     modulefiles
512     perl5
1.2G    R
25K     sbank-out
|1||change to your $HOME directory|
|2||list contents of $HOME|
|3||find disk usage for each file and directory in $HOME. The output shows disk usage in kilobytes (K), megabytes (M), or gigabytes (G). For directories, all contents inside are included. For example, the directory R uses 1.2GB of disk space.|
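When hunting for what is eating the quota, it helps to sort the du output by size. A minimal sketch, wrapped in an assumed helper name for convenience; sort -h understands the human-readable suffixes (K, M, G) that du -h prints:

```shell
# Sketch: list everything in a directory by disk usage, largest last.
largest_items() {
    # subshell so the cd does not affect the caller's working directory
    ( cd "${1:-$HOME}" && du -sh -- * | sort -h )
}

# usage:
# largest_items ~
```

The last few lines of the output are the directories worth cleaning first.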
3. Over quota
It is important to never put transient files in $HOME.
Every time you change files in your $HOME you are adding to your quota. When snapshots are taken they record the addition and removal of files.
Once you fill your quota you will not be able to write to your $HOME until some of the space is freed. Your applications and jobs will exhibit various errors and will fail. Most of the errors are Cannot write to ‘file name’ or Disk quota exceeded.
The only way to free space is to remove some snapshots, and users CAN NOT do this themselves. You will have to submit a ticket to email@example.com
After your snapshots are removed, you will need to free enough space in your $HOME in order to continue working.
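Before filing the ticket, it is worth identifying the large recently-changed files that most likely filled the snapshots, so you can free space as soon as the snapshots are cleared. A sketch with assumed defaults; the function name and the 100MB / 7-day thresholds are illustrative:

```shell
# Sketch: find large files in $HOME modified recently -- likely snapshot bloat.
recent_large_files() {
    dir="${1:-$HOME}"
    size="${2:-+104857600c}"   # larger than ~100MB, expressed in bytes
    days="${3:-7}"             # modified within the last week
    find "$dir" -type f -size "$size" -mtime "-$days" 2>/dev/null
}

# usage:
# recent_large_files ~
```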
4. Restore from snapshots
You can use snapshots to restore files and directories, provided the existing snapshots still hold the desired data. There is no way to restore files changed more than 6 weeks ago. Below is an example of how to restore an accidentally deleted file. A similar technique can be used for multiple files and directories.
File is accidentally deleted
[user@login-x:~]$ ls -l out
-rw-rw-r-- 1 panteater panteater 4004 Sep 17 15:13 out
[user@login-x:~]$ rm -rf out
[user@login-x:~]$ ls -l out
ls: cannot access out: No such file or directory
Check the existing snapshots
[user@login-x:~]$ ls .zfs/snapshot/
zfs-auto-snap_daily-2020-09-16-1017  zfs-auto-snap_daily-2020-09-17-1045  zfs-auto-snap_daily-2020-09-18-1048
The output indicates there are 3 snapshots taken on the indicated dates. The missing file from the first step above has a time stamp of Sep 17 15:13, which means the file was created or last modified at that time.
Snapshots with an earlier time stamp will not have the needed file:
[user@login-x:~]$ ls .zfs/snapshot/zfs-auto-snap_daily-2020-09-17-1045/out
ls: cannot access .zfs/snapshot/zfs-auto-snap_daily-2020-09-17-1045/out: No such file or directory
Search the snapshots that have a later time stamp:
[user@login-x:~]$ ls .zfs/snapshot/zfs-auto-snap_daily-2020-09-18-1048/out
.zfs/snapshot/zfs-auto-snap_daily-2020-09-18-1048/out
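Instead of checking snapshots one by one as above, you can loop over all of them and print every snapshot copy of the file that still exists. A minimal sketch; the function name is an assumption, while the default snapshot directory matches the $HOME/.zfs/snapshot location described earlier:

```shell
# Sketch: print every snapshot copy of a file that still exists.
list_snapshots_with() {
    file="$1"
    snapdir="${2:-$HOME/.zfs/snapshot}"
    for snap in "$snapdir"/*/; do          # one iteration per snapshot
        [ -e "$snap$file" ] && echo "$snap$file"
    done
}

# usage on the cluster:
# list_snapshots_with out
```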
Restore file from a snapshot
Copy found file:
[user@login-x:~]$ cp .zfs/snapshot/zfs-auto-snap_daily-2020-09-18-1048/out .
[user@login-x:~]$ ls -l out
-rw-rw-r-- 1 panteater panteater 4004 Sep 18 10:53 out
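The same copy works for whole directories. A sketch under assumptions: the function name and the directory name in the usage comment are illustrative; cp -a copies recursively and preserves permissions and timestamps, which plain cp does not.

```shell
# Sketch: restore a file or directory from one snapshot into the
# current working directory, preserving permissions and timestamps.
restore_from_snapshot() {
    snap="$1"    # full path to one snapshot directory
    path="$2"    # file or directory to restore, relative to $HOME
    cp -a "$snap/$path" .
}

# hypothetical usage, restoring a whole directory:
# cd && restore_from_snapshot .zfs/snapshot/zfs-auto-snap_daily-2020-09-18-1048 myproject
```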