S3 Compatible

This guide shows how to add an S3 compatible backup destination to JetBackup.

First make sure that you have the appropriate S3 destination plugin installed and enabled.

Create a bucket on the backup destination (for instructions on creating buckets for your corresponding S3 Vendor, please click here).

If you do not see your S3 Provider listed as an available Vendor, please note that you may be able to add your S3 Compatible Destination as a Custom Vendor.

Create a new Access Key and Secret Key pair and save them; you will need these credentials on the destination creation page.

An example S3 configuration page for the destination:

S3 destination


Destination Name

Specify a unique name for your destination. This will be used internally for your convenience.

Read Only

This option allows you to add the destination as a "Read Only" destination. You can only restore and download existing backups from a read-only destination. Read Only is required if the Backup Directory is already owned by another destination/server.

Owner

Specify the owner for this destination. This grants the owner and its parent account(s) permission to modify and delete this destination. Please note that the owner specified will also need the "Can Manage Destinations" permission granted under the Permissions section.

Vendor

Choose which S3 compatible vendor you will be using as your S3 destination.

Custom Vendor

A Custom Vendor must be fully S3-Compatible to be used with JetBackup. Choosing to use a custom vendor will require the additional Endpoint field. Please note that JetBackup automatically substitutes the Region specified below into the endpoint. The correct format for the endpoint uses {region} in place of the actual region.

Example Configuration:

destination custom endpoint

When connecting to the destination, JetBackup will automatically take the Region us-east-1 and replace {region} with that value wherever needed.
For custom endpoint configurations, you can also specify {bucket} depending on the endpoint format. This is not required for most S3 destinations.
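As an illustration of the substitution only (this is not JetBackup's internal code, and the endpoint template below is hypothetical), the placeholders resolve along these lines:

# Hypothetical endpoint template showing how {region} and {bucket} are substituted.
endpoint_template = "https://{bucket}.s3.{region}.example-provider.com"
region = "us-east-1"
bucket = "my-jetbackup-bucket"

endpoint = endpoint_template.replace("{region}", region).replace("{bucket}", bucket)
print(endpoint)  # https://my-jetbackup-bucket.s3.us-east-1.example-provider.com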

Example Custom Vendors:

  • MinIO (self-hosted)
  • Linode Object Storage
  • Backblaze

Backup Directory

This will be our starting point when accessing this destination. Always use the absolute path and make sure to start with "/".
For example: /PATH/TO/FOLDER.
Note: The absolute path must be unique and cannot be used by two different Destination Configurations. If a backup directory is already owned by another JetBackup Destination, you may create a "Read Only" Destination to that path.

Only the following characters are allowed for the Backup Directory Path: A-Z, a-z, 0-9, and /. This is in accordance with the Amazon S3 Object key naming guidelines.
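If you would like to sanity-check a path before saving the destination, the character restriction can be expressed as a minimal sketch (an illustrative regular expression check, not JetBackup's own validation):

import re

# Illustrative check for the restriction above (A-Z, a-z, 0-9 and "/").
# This is not JetBackup's own validation logic.
def is_valid_backup_directory(path: str) -> bool:
    return path.startswith("/") and re.fullmatch(r"[A-Za-z0-9/]+", path) is not None

print(is_valid_backup_directory("/PATH/TO/FOLDER"))   # True
print(is_valid_backup_directory("/backups/my_host"))  # False ("_" is not allowed)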

Access/Secret Key Pair

Every S3 Destination has an Access/Secret Key pair you must use in order to create the destination. Please visit the specific documentation on creating these pairs for each S3 Destination.
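If you want to confirm a key pair works before adding the destination, a minimal sketch using the boto3 library can list your buckets; the endpoint, region, and credentials below are placeholders:

import boto3

# Placeholder endpoint, region, and credentials; substitute your provider's values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.example-provider.com",
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Listing buckets is a quick way to confirm the key pair is accepted.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])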

Bucket

Please visit the specific documentation on creating buckets for each S3 Destination.

Region

JetBackup will list known regions for the S3 Destination chosen. For a Custom Vendor, the region can be entered manually if required by the vendor. For more information, please see Custom Vendor.

It is best practice to choose the closest region to your server, if available, or consider a vendor with closer regions for best performance.

Verify SSL

Whether SSL verification is enabled or disabled on the S3 compatible destination. [ default: Yes ]

HTTP Retries

Controls how many retries will be attempted if the destination returns an error for an HTTP call (upload / download / delete). [ default: 3 retries ]
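The Verify SSL and HTTP Retries options have rough analogues in most S3 client libraries. As an illustration only (this is not JetBackup's code, and the endpoint and credentials are placeholders), an equivalent boto3 client configuration might look like this:

import boto3
from botocore.config import Config

# Illustrative equivalent of the two settings above; not JetBackup's code.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.example-provider.com",  # placeholder endpoint
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    verify=True,  # "Verify SSL: Yes"
    config=Config(retries={"max_attempts": 4, "mode": "standard"}),  # initial request + 3 retries
)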

Additional information

Object storage is a type of storage where data is stored as objects rather than blocks. This concept is useful for data backup, archiving, and scalability for high-load environments.

Objects are the entities of data storage in S3 buckets. S3 objects are comprised of three main parts: the content of the object (the data contained in the object, including directories or files), the unique object identifier (a unique string used to identify the item within the bucket), and metadata. Metadata contains information such as name, size, date, security attributes, content type, and URL. Each object has an ACL (Access Control List) which dictates who is permitted access to the object. S3 object storage has distinct advantages such as allowing you to avoid network bottlenecks, as well as a high degree of scalability and data security.
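To make the object model concrete, here is a small sketch with boto3 that uploads one object and reads back its metadata; the bucket name, key, and client settings are placeholders:

import boto3

# Placeholder client settings; reuse the endpoint and credentials for your destination.
s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-provider.com",
                  region_name="us-east-1", aws_access_key_id="YOUR_ACCESS_KEY",
                  aws_secret_access_key="YOUR_SECRET_KEY")

# Store content under a unique key (the object identifier) with custom metadata.
s3.put_object(Bucket="my-jetbackup-bucket", Key="example/hello.txt",
              Body=b"hello", ContentType="text/plain", Metadata={"purpose": "demo"})

# head_object returns the object's metadata without downloading its content.
info = s3.head_object(Bucket="my-jetbackup-bucket", Key="example/hello.txt")
print(info["ContentLength"], info["ContentType"], info["Metadata"])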

S3 Object Storage vs Traditional File System Comparison

The S3 protocol uses object storage, where data is stored as objects with unique identifiers, compared to standard file storage, where data is stored in human-readable files and directories.

As an example, this is how a snap is structured in JetBackup 5's backup format, organized per backup item plus additional metadata for JetBackup's use:

.
├── snap.1
│   ├── certificate
│   ├── config
│   ├── cronjob
│   ├── database
│   ├── database_user
│   ├── domain
│   ├── email
│   ├── ftp
│   ├── homedir
│   ├── jetbackup.configs
│   └── jetbackup.index
├── snap.2
 ....

Once this data is transferred to an S3 Compatible Destination, it is converted and stored as objects and will look similar to the following:

.
├── export
│   ├── snap.1
│   ├── snap.2
│    ....
├── files
│   ├── 6258c0a30008839f00001146
│   ├── 6258c0a30009115600001148
│   ├── 6258c0a3000914d000001149
│   ├── 6258c0a30009180b0000114b
│   ├── 6258c0a300091a820000114c
│   ├── 6258c0a300091f3400001151
│    ....
├── snap.1
│   └── jetbackup.index
├── snap.2
│   └── jetbackup.index
 ....

Under "files", this is where backup data across all the snaps are actually stored, and each object represents a file in the backup. Additionally, there will also be a corresponding "snap" and "export" data which stores the information required for JetBackup to read this backup.

S3 Lifecycle Settings

It is important to verify proper Lifecycle Settings when setting up an S3 destination to prevent excess storage usage. Some providers will keep all versions of a file stored by default, which can unnecessarily increase storage utilization and costs.
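As one hedged example (lifecycle support and semantics vary by provider), the AWS-style lifecycle API can expire noncurrent object versions so that old versions do not accumulate; the bucket name, retention period, and client settings below are placeholders:

import boto3

# Placeholder client settings; reuse the endpoint and credentials for your destination.
s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-provider.com",
                  region_name="us-east-1", aws_access_key_id="YOUR_ACCESS_KEY",
                  aws_secret_access_key="YOUR_SECRET_KEY")

# Expire noncurrent (old) object versions after one day so they do not accumulate.
# Lifecycle rule support varies between S3 compatible providers.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-jetbackup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-noncurrent-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
        }]
    },
)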

Resources

Additional reading on the S3 storage type can be found here