S3 Compatible

Learn how to configure and manage S3 Compatible backup destinations.

When using JetBackup with an S3 Compatible destination, it is expected to see download egress, as JetBackup pulls the full and snap.x SQLite databases (backup indexing) from the remote destination during each backup job run. Storing and downloading the remote SQLite database files is required to ensure the integrity of the backups stored on the remote destination.

This guide shows how to add an S3 compatible backup destination to JetBackup.

First, make sure that you have the appropriate S3 destination plugin installed and enabled.

Create a bucket on the backup destination (for instructions on creating buckets for your corresponding S3 Vendor, please click here).
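If your vendor supports the standard S3 API, the bucket can also be created programmatically. Below is a minimal sketch using Python's boto3 library; the endpoint, credentials, and bucket name are placeholder examples, and some vendors require extra parameters (such as a location constraint), so check your vendor's documentation.

    import boto3

    # Placeholder endpoint and credentials for an S3-compatible vendor.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-east-1.example-vendor.com",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        region_name="us-east-1",
    )

    # Create the bucket that will hold the JetBackup destination's data.
    s3.create_bucket(Bucket="jetbackup-backups")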

Create a new Access Key and Secret Key pair and save it; you will need these credentials on the destination creation page.

An example S3 configuration page for the destination:

Destination Name

Specify a unique name for your destination. This will be used internally for your convenience.

Read Only

This option allows you to add the destination as a “Read Only” destination. You can only restore and download existing backups from a read-only destination. Read Only is required if the Backup Directory is already owned by another destination/server. This option is only selectable upon creating the destination and will be automatically enabled for any destinations added during a Disaster Recovery.

Owner

Specify the owner for this destination. This grants the owner and its parent account(s) permission to modify and delete this destination. Please note that the owner specified will also need Can Manage Destinations permission granted under the Permissions section.

Maximum Concurrent Threads

This option allows you to set the number of worker threads that can be used by the destination for upload/download/get/delete requests. This will be shared evenly across the number of Concurrent Tasks set for Backup or Restore/Download. As tasks complete, the available threads will be allocated to the remaining active tasks, up to the set limit; for example, with 8 threads and two concurrent tasks, each task initially receives 4 threads, and once one task finishes, the remaining task can use all 8 (see the sketch below). You can lower this value if you receive rate-limit errors from the remote destination.
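A toy illustration of that even split (not JetBackup's actual scheduler):

    # Hypothetical illustration of dividing worker threads across active tasks.
    def threads_per_task(max_threads: int, active_tasks: int) -> int:
        return max_threads // active_tasks if active_tasks else 0

    print(threads_per_task(8, 2))  # 4 threads per task while both tasks run
    print(threads_per_task(8, 1))  # 8 threads once only one task remains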

Vendor

Choose which S3 compatible vendor you will be using as your S3 destination.

Custom Vendor

A Custom Vendor must be fully S3-Compatible to be used with JetBackup. Choosing a custom vendor will require the additional Endpoint field. Please note that JetBackup automatically updates the endpoint with the Region selected below; the correct format for the endpoint uses {region} in place of the actual region.

Example Configuration:

When connecting to the destination, JetBackup will automatically take the Region (for example, us-east-1) and substitute it for {region} in the endpoint whenever needed. For custom endpoint configurations, you can also specify {bucket}, depending on the endpoint format; this is not required for most S3 destinations. A sketch of the substitution follows the list below. Example Custom Vendors:

  • MinIO (self-hosted)
  • Linode Object Storage
  • Backblaze
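As referenced above, here is a minimal sketch of the placeholder substitution; the build_endpoint function and the vendor hostname are hypothetical illustrations, not JetBackup's internal code.

    # Hypothetical illustration of {region}/{bucket} placeholder substitution.
    def build_endpoint(template: str, region: str, bucket: str = "") -> str:
        return template.replace("{region}", region).replace("{bucket}", bucket)

    print(build_endpoint("https://s3.{region}.example-vendor.com", "us-east-1"))
    # https://s3.us-east-1.example-vendor.com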

Backup Directory

This will be the starting point when accessing this destination. Always use the absolute path and make sure it starts with “/”.

For example: /PATH/TO/FOLDER.

Note: The absolute path must be unique and cannot be used by two different Destination Configurations. If a backup directory is already owned by another JetBackup Destination, you may create a “Read Only” Destination to that path.

Only the following characters are allowed in the Backup Directory path: A-Z, a-z, 0-9, and /. This is in accordance with the Amazon S3 Object key naming guidelines.
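If you want to pre-check a path against that character set, a quick sketch (the function name is ours, not part of JetBackup):

    import re

    # Allowed: A-Z, a-z, 0-9, and "/", and the path must start with "/".
    def is_valid_backup_directory(path: str) -> bool:
        return re.fullmatch(r"/[A-Za-z0-9/]*", path) is not None

    print(is_valid_backup_directory("/PATH/TO/FOLDER"))  # True
    print(is_valid_backup_directory("PATH/TO/FOLDER"))   # False: missing leading "/"
    print(is_valid_backup_directory("/backups 2024"))    # False: space not allowed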

Access/Secret Key Pair

Every S3 Destination has an Access/Secret Key pair that you must use in order to create the destination. Please visit the specific documentation on creating these pairs for each S3 Destination.

Bucket

Please visit the specific documentation on creating buckets for each S3 Destination.

Region

JetBackup will list known regions for the S3 Destination chosen. For a Custom Vendor, the region can be entered manually if required by the vendor. For more information, please see Custom Vendor.

Verify SSL

Whether SSL verification is enabled or disabled on the S3 Compatible destination. [default: Yes]

HTTP Retries

Controls how many retries will be attempted if the destination returns an error for an HTTP call (upload / download / delete). [default: 3 retries]
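JetBackup applies both of these settings internally; purely as an illustration, the boto3 equivalents of the Verify SSL and HTTP Retries options look like this (the endpoint is a placeholder):

    import boto3
    from botocore.config import Config

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-east-1.example-vendor.com",  # placeholder
        verify=True,  # SSL verification, as in Verify SSL [default: Yes]
        config=Config(retries={"max_attempts": 3, "mode": "standard"}),  # HTTP Retries
    )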

Large Files Upload Chunk Size

Select the multipart upload chunk size for large files. Lower this value if you find large files are failing to upload. [default: 5120 MB]
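For context, in boto3 the analogous setting is the multipart chunk size on a TransferConfig; the value below mirrors the 5120 MB default above (JetBackup manages this internally):

    from boto3.s3.transfer import TransferConfig

    MB = 1024 ** 2
    # Multipart upload chunk size equivalent to the 5120 MB default above.
    transfer_config = TransferConfig(multipart_chunksize=5120 * MB)

    # Usage (with an S3 client "s3" as configured earlier):
    # s3.upload_file("large-backup.tar.gz", "jetbackup-backups",
    #                "snap.1/large-backup.tar.gz", Config=transfer_config)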

Keep Alive Timeout

The time, in seconds, that the host will allow an idle connection to remain open before it is closed. [default: 60 seconds]

Max Keep Alive Requests

The maximum number of requests that can be sent on a single connection before closing it. Choose “Determined by remote vendor” to fetch the maximum keep alive requests automatically. [default: 100]

Additional Information

Object storage is a type of storage where data is stored as objects rather than blocks. This model is useful for data backup, archiving, and scaling high-load environments.

Objects are the entities of data storage in S3 buckets. An S3 object is composed of three main parts: the content (the data contained in the object, including directories or files), the unique object identifier (a unique string, or key, used to identify the item within the bucket), and metadata. Metadata contains information such as name, size, date, security attributes, content type, and URL. Each object also has an ACL (Access Control List), which dictates who is permitted to access the object. S3 object storage has distinct advantages, such as helping you avoid network bottlenecks, a high degree of scalability, and strong data security.
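To make those parts concrete, here is a boto3 sketch that writes and inspects an object; the endpoint, bucket, key, and metadata are hypothetical examples:

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-vendor.com")

    # content (Body), unique object identifier (Key), metadata, and ACL:
    s3.put_object(
        Bucket="jetbackup-backups",
        Key="files/6258c0a30008839f00001146",  # unique key within the bucket
        Body=b"...object content...",
        Metadata={"source": "jetbackup"},      # user-defined metadata
        ACL="private",                         # access control list
    )

    head = s3.head_object(Bucket="jetbackup-backups", Key="files/6258c0a30008839f00001146")
    print(head["ContentLength"], head["LastModified"], head["Metadata"])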

S3 Object Storage vs Traditional File System Comparison

The S3 protocol uses object storage, where data is stored as objects with unique identifiers, in contrast to standard file storage, where data is stored in human-readable files and directories.

As an example, this is how a snap is structured in JetBackup 5’s backup format, organized per backup item plus additional metadata for JetBackup’s use:

.
├── snap.1
│   ├── certificate
│   ├── config
│   ├── cronjob
│   ├── database
│   ├── database_user
│   ├── domain
│   ├── email
│   ├── ftp
│   ├── homedir
│   ├── jetbackup.configs
│   └── jetbackup.index
├── snap.2
 ....

Once this data is transferred to an S3 Compatible destination, it is converted and stored as objects, which will look similar to the following:

.
├── export
│   ├── snap.1
│   ├── snap.2
│    ....
├── files
│   ├── 6258c0a30008839f00001146
│   ├── 6258c0a30009115600001148
│   ├── 6258c0a3000914d000001149
│   ├── 6258c0a30009180b0000114b
│   ├── 6258c0a300091a820000114c
│   ├── 6258c0a300091f3400001151
│    ....
├── snap.1
│   └── jetbackup.index
├── snap.2
│   └── jetbackup.index
....

The “files” prefix is where the backup data across all snaps is actually stored; each object represents a file in the backup. Additionally, there will be corresponding “snap” and “export” data, which stores the information required for JetBackup to read the backup.
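If you want to inspect this layout yourself, listing the objects under the backup directory with boto3 looks roughly like this (the bucket and prefix are placeholders; note that S3 keys normally omit the leading “/”):

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-vendor.com")

    # List the objects under the destination's "files" prefix.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="jetbackup-backups", Prefix="PATH/TO/FOLDER/files/"):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["Size"])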

S3 Lifecycle Settings

It is important to verify proper Lifecycle Settings when setting up an S3 destination to prevent excess storage usage. Some providers keep all versions of a file stored by default, which can unnecessarily increase storage utilization and costs.
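For vendors that follow the AWS lifecycle API, a sketch of rules that expire old versions and abort stale multipart uploads might look like the following (the rule IDs and values are examples; consult your provider's documentation for the settings it supports):

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-vendor.com")

    s3.put_bucket_lifecycle_configuration(
        Bucket="jetbackup-backups",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-noncurrent-versions",  # drop old file versions
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                },
                {
                    "ID": "abort-stale-multipart-uploads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                },
            ]
        },
    )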

S3 Minimum Storage Retention Policies

Some providers may apply a charge for stored objects that have been deleted before a certain number of days have elapsed. With this in mind, even though JetBackup removes files during the integrity check and snapshot cleanup processes, the storage usage and costs may not reflect these changes accurately.

Wasabi Minimum Storage Duration Policy

S3 Object Integrity

Please be aware that when utilizing S3 storage destinations, a file hashing mechanism is employed to ensure object integrity during the file upload process. This procedure involves generating a hash of the file, transmitting it to the S3 server, uploading the file, and then hashing the file again on the S3 server using the same algorithm. S3 compares these two hashes, and if they do not match, the file upload is declined. As this is required for all file uploads to S3 storage, it cannot be disabled at the JetBackup level. With this in mind, we recommend excluding files that are regularly written to (logs, temp files, etc.) to mitigate partial backups.
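The exact hashing algorithm JetBackup uses is not specified here; for single-part uploads, the standard S3 mechanism is the Content-MD5 header, sketched below, where the server recomputes the hash and rejects a mismatch with a BadDigest error (the endpoint, bucket, and key are placeholders):

    import base64
    import hashlib

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example-vendor.com")

    with open("backup.tar.gz", "rb") as f:
        data = f.read()

    # Hash locally and send the hash with the upload; S3 re-hashes server-side
    # and declines the upload (BadDigest) if the two hashes differ.
    md5_b64 = base64.b64encode(hashlib.md5(data).digest()).decode()
    s3.put_object(
        Bucket="jetbackup-backups",
        Key="snap.1/homedir/backup.tar.gz",
        Body=data,
        ContentMD5=md5_b64,
    )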

Resources

Additional reading on the S3 storage type can be found here.