Restore a database dump
The neo4j-admin database load
command can be used to load a database from an archive created with the neo4j-admin database dump
command.
Starting from Neo4j 5.20, the neo4j-admin database load
command also supports loading a full backup artifact created by the neo4j-admin database backup
command from Neo4j Enterprise.
If you are replacing an existing database, you have to shut it down before running the command and use the --overwrite-destination
option.
Enterprise Edition: If you are not replacing an existing database, you must create the database (using CREATE DATABASE
against the system
database) after the load operation finishes.
The command can be run from either an online or offline Neo4j DBMS, and it must be executed as the neo4j
user to ensure the appropriate file permissions.
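A minimal sketch of invoking the command as the neo4j user on a typical Linux installation, where the store files are owned by the neo4j system account (the dump path below is an assumed location, not one mandated by the docs):

```shell
# Run the load as the neo4j user so the loaded store files keep the
# correct ownership; /var/lib/neo4j/dumps is an illustrative path.
sudo -u neo4j neo4j-admin database load --from-path=/var/lib/neo4j/dumps neo4j --overwrite-destination=true
```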
Change Data Capture does not capture any data changes resulting from the use of neo4j-admin database load.
Syntax
neo4j-admin database load [-h] [--expand-commands] [--info] [--verbose] [--overwrite-destination[=true|false]]
[--additional-config=<file>] [--from-path=<path> | --from-stdin] <database>
Description
Load a database from an archive.
<archive-path> must be a directory containing one or more archives.
An archive can be a database dump created with the dump command, or a full backup artifact created by the backup command from Neo4j Enterprise.
If neither --from-path nor --from-stdin is supplied, the server.directories.dumps.root setting is searched for the archive.
Existing databases can be replaced by specifying --overwrite-destination.
It is not possible to replace a database that is mounted in a running Neo4j server.
If --info
is specified, then the database is not loaded, but information (i.e. file count, byte count, and format of load file) about the archive is printed instead.
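For instance, an archive can be inspected without loading it (the dump directory below is an assumed path):

```shell
# Print file count, byte count, and archive format instead of loading
bin/neo4j-admin database load --info --from-path=/full-path/data/dumps neo4j
```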
Parameters
Parameter | Description
---|---
<database> | Name of the database to load. Can contain * and ? for globbing. Note that * and ? have special meaning in some shells and might need to be escaped or used with quotes.
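Since * and ? also have meaning to the shell, a globbed database name should be quoted so that neo4j-admin, not the shell, expands it. A hedged sketch, with an assumed dump directory:

```shell
# Load every archive whose database name starts with "db";
# the quotes stop the shell from expanding the * itself.
bin/neo4j-admin database load --from-path=/full-path/data/dumps "db*" --overwrite-destination=true
```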
Options
Option | Description | Default
---|---|---
--additional-config=<file> | Configuration file with additional configuration.[1] |
--expand-commands | Allow command expansion in config value evaluation. |
--from-path=<path> | Path to directory containing archive(s). It is possible to load databases from AWS S3 buckets, Google Cloud storage buckets, and Azure blob storage containers using the appropriate URI as the path. |
--from-stdin | Read archive from standard input. |
-h, --help | Show this help message and exit. |
--info | Print meta-data information about the archive file, instead of loading the contained database. |
--overwrite-destination[=true|false] | If an existing database should be replaced. | false
--verbose | Enable verbose output. |

1. See Tools → Configuration for details.
Examples
The following are examples of how to load a dump of a database (database.dump) created in the section Back up an offline database, using the neo4j-admin database load
command.
When replacing an existing database, you have to shut it down before running the command.
The --overwrite-destination
option is required because you are replacing an existing database.
If you are not replacing an existing database, you must create the database (using CREATE DATABASE
against the system
database) after the load operation finishes.
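For example, the database can be created with cypher-shell once the load completes (the user name, password, and database name are placeholders):

```shell
# Create the newly loaded database against the system database
bin/cypher-shell -d system -u neo4j -p <password> "CREATE DATABASE mydatabase"
```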
The command looks for a file called <database>.dump, where <database> is the database name specified in the command.
Load a dump from a local directory
You can load a dump from a local directory using the following command:
bin/neo4j-admin database load --from-path=/full-path/data/dumps neo4j --overwrite-destination=true
Starting from Neo4j 5.20, you can use the same command to load the database from its full backup artifact:
bin/neo4j-admin database load --from-path=/full-path/to/backups neo4j --overwrite-destination=true
The following example shows how to pipe a specific archive into the load command via standard input:
cat foo.dump | neo4j-admin database load --from-stdin mydatabase
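Because --from-stdin reads the archive from standard input, the dump does not have to exist on the local machine first; for example, it can be streamed over SSH (the host and remote path are hypothetical):

```shell
# Stream a remote dump straight into the load command
ssh user@remote-host "cat /remote/dumps/foo.dump" | neo4j-admin database load --from-stdin mydatabase
```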
Load a dump from cloud storage
The following examples show how to load a database dump located in a cloud storage bucket using the --from-path
option.
Neo4j uses the AWS SDK v2 to call the APIs on AWS using AWS URLs.
Alternatively, you can override the endpoints so that the AWS SDK can communicate with alternative storage systems, such as Ceph, MinIO, or LocalStack, by setting the corresponding system variables.
- Install the AWS CLI by following the instructions in the AWS official documentation — Install the AWS CLI version 2.
- Create an S3 bucket and a directory to store the backup files using the AWS CLI:

  aws s3 mb --region=us-east-1 s3://myBucket
  aws s3api put-object --bucket myBucket --key myDirectory/

  For more information on how to create a bucket and use the AWS CLI, see the AWS official documentation — Use Amazon S3 with the AWS CLI and Use high-level (s3) commands with the AWS CLI.
- Verify that the ~/.aws/config file is correct by running the following command:

  cat ~/.aws/config

  The output should look like this:

  [default]
  region=us-east-1
- Configure the access to your AWS S3 bucket by setting the aws_access_key_id and aws_secret_access_key in the ~/.aws/credentials file and, if needed, using a bucket policy. For example:
  - Use the aws configure set aws_access_key_id and aws configure set aws_secret_access_key commands to set your IAM credentials from AWS, and verify that the ~/.aws/credentials file is correct:

    cat ~/.aws/credentials

    The output should look like this:

    [default]
    aws_access_key_id=this.is.secret
    aws_secret_access_key=this.is.super.secret
  - Additionally, you can use a resource-based policy to grant access permissions to your S3 bucket and the objects in it. Create a policy document with the following content and attach it to the bucket. Note that both resource entries are important to be able to download and upload files.

    {
      "Version": "2012-10-17",
      "Id": "Neo4jBackupAggregatePolicy",
      "Statement": [
        {
          "Sid": "Neo4jBackupAggregateStatement",
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject"
          ],
          "Resource": [
            "arn:aws:s3:::myBucket/*",
            "arn:aws:s3:::myBucket"
          ]
        }
      ]
    }
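Assuming the policy document above is saved as policy.json, it can be attached to the bucket with the AWS CLI:

```shell
# Attach the resource-based policy to the bucket
aws s3api put-bucket-policy --bucket myBucket --policy file://policy.json
```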
- Run the neo4j-admin database load command to load a dump from your AWS S3 storage. The example assumes that you have dump artifacts located in the myBucket/myDirectory folder in your bucket.

  bin/neo4j-admin database load mydatabase --from-path=s3://myBucket/myDirectory/ --overwrite-destination=true
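Before loading, you can check that the expected archives are actually present under the path given to --from-path, for example:

```shell
# List the dump artifacts stored in the bucket directory
aws s3 ls s3://myBucket/myDirectory/
```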
- Ensure you have a Google account and a project created in the Google Cloud Platform (GCP).
- Install the gcloud CLI by following the instructions in the Google official documentation — Install the gcloud CLI.
- Create a service account and a service account key using the Google official documentation — Create service accounts and Creating and managing service account keys.
- Download the JSON key file for the service account.
- Set the GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT environment variables to the path of the JSON key file and the project ID, respectively:

  export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"
  export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID
- Authenticate the gcloud CLI with the e-mail address of the service account you have created, the path to the JSON key file, and the project ID:

  gcloud auth activate-service-account service-account@example.com --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project=$GOOGLE_CLOUD_PROJECT

  For more information, see the Google official documentation — gcloud auth activate-service-account.
- Create a bucket in the Google Cloud Storage using the Google official documentation — Create buckets.
- Verify that the bucket is created by running the following command:

  gcloud storage ls

  The output should list the created bucket.
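If the dump was created locally, it can be copied into the bucket first; the local path and mydatabase.dump file name are assumptions for illustration:

```shell
# Upload a local dump artifact into the bucket directory
gcloud storage cp /full-path/data/dumps/mydatabase.dump gs://myBucket/myDirectory/
```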
- Run the neo4j-admin database load command to load a dump from your Google storage bucket. The example assumes that you have dump artifacts located in the myBucket/myDirectory folder in your bucket.

  bin/neo4j-admin database load mydatabase --from-path=gs://myBucket/myDirectory/ --overwrite-destination=true
- Ensure you have an Azure account, an Azure storage account, and a blob container.
  - You can create a storage account using the Azure portal. For more information, see the Azure official documentation on Create a storage account.
  - Create a blob container in the Azure portal. For more information, see the Azure official documentation on Quickstart: Upload, download, and list blobs with the Azure portal.
- Install the Azure CLI by following the instructions in the Azure official documentation.
- Authenticate the neo4j or neo4j-admin process against Azure using the default Azure credentials. See the Azure official documentation on default Azure credentials for more information.

  az login

  Then you should be ready to use Azure URLs in either neo4j or neo4j-admin.
- To validate that you have access to the container with your login credentials, run the following commands:

  # Upload a file:
  az storage blob upload --file someLocalFile --account-name accountName --container-name someContainer --name remoteFileName --auth-mode login

  # Download the file:
  az storage blob download --account-name accountName --container-name someContainer --name remoteFileName --file downloadedFile --auth-mode login

  # List container files:
  az storage blob list --account-name accountName --container-name someContainer --auth-mode login
- Run the neo4j-admin database load command to load a dump from your Azure blob storage container. The example assumes that you have dump artifacts located in the myStorageAccount/myContainer/myDirectory folder in your Azure account.

  bin/neo4j-admin database load mydatabase --from-path=azb://myStorageAccount/myContainer/myDirectory --overwrite-destination=true