Overview
- OneDrive, part of the Microsoft 365 (M365) suite of products, contains important documents that need to be backed up frequently.
- Using the LegacyFlo OneDrive-S3-ZIP request type, you can back up files from any user's OneDrive to an S3 bucket in your AWS account.
Step 1: Preparing your S3 bucket
A. Create an S3 bucket on AWS
Create your S3 bucket in a supported region and give LegacyFlo list and write access to it. The steps are given in the links below:
- Get an AWS account.
- Create an S3 bucket in a region supported by Mithi.
- Set a temporary bucket policy to allow LegacyFlo to get the name and region of the bucket, as mentioned here.
- When the S3 bucket is used as a source, give LegacyFlo list and read access to the objects in the bucket; when it is used as a destination, give LegacyFlo list and write access. The steps to do so are mentioned here.
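The write-access bucket policy referred to above can be sketched as follows. This is a minimal illustration only: the bucket name and the LegacyFlo principal ARN below are placeholders, and the actual principal to allow is given in the linked Mithi documentation.

```python
import json

# Placeholder values -- substitute your bucket name and the LegacyFlo
# principal ARN given in Mithi's documentation.
BUCKET = "Backupbucket"
LEGACYFLO_PRINCIPAL = "arn:aws:iam::111111111111:root"  # placeholder

# List and write access for a destination bucket: ListBucket on the
# bucket itself, PutObject on the objects inside it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": LEGACYFLO_PRINCIPAL},
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": LEGACYFLO_PRINCIPAL},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

The printed JSON can be pasted into the bucket's Permissions > Bucket policy editor. For a source bucket, the second statement would grant `s3:GetObject` instead of `s3:PutObject`.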
B. Register your S3 bucket as a source/destination S3 bucket on LegacyFlo
For LegacyFlo to upload data to your S3 bucket, the bucket needs to be added as a destination S3 bucket in LegacyFlo. The steps for doing so are given below:
Step 2: Getting access to the OneDrive data
This is a two-step process: first generate the API key (one time), and then grant OneDrive access for all the accounts whose OneDrive data needs to be accessed.
Step 1: Get access to the M365 Graph API
Step 2: Make the user's OneDrive content available for access via the API. (This step has to be done for all users whose OneDrive data needs to be accessed.)
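Once the app registration from Step 1 is in place, a user's OneDrive is reachable through the Microsoft Graph endpoint for that user's default drive. The sketch below only builds the URL involved, using the sample user and domain from the tables in the next step; obtaining the access token (client-credentials flow with the tenant ID, client ID, and client secret of the registration) is indicated in a comment.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def drive_children_url(user_id: str, domain: str) -> str:
    """URL that lists the top-level items in a user's OneDrive."""
    return f"{GRAPH_BASE}/users/{user_id}@{domain}/drive/root/children"

# An access token would be obtained first with the client-credentials
# flow, then sent as a Bearer header with the GET request, e.g.:
#
#   requests.get(url, headers={"Authorization": f"Bearer {token}"})

print(drive_children_url("john", "baya.com"))
```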
Step 3: Submitting a LegacyFlo request
Source Information
FIELD NAME | DESCRIPTION | SAMPLE VALUE |
---|---|---|
User ID | User ID on the M365 domain whose data needs to be imported. (Note: the ID should not contain the domain name.) | "john" |
Domain | Source domain on M365 | "baya.com" |
OneDrive Folder name | The OneDrive folder whose contents need to be backed up. All the contents of this folder and its subfolders will be backed up. Note that the folder name should not start with a /. To back up all the OneDrive data, leave this value empty. | "23-Jun-2020" |
FIELD NAME | DESCRIPTION | SAMPLE VALUE |
---|---|---|
Since (YYYY-MM-DD) | Start date from which to copy/move content. All content created/modified ON or AFTER the start date and BEFORE the end date will be copied/moved. This is an optional parameter and if left blank, all contents before the Before date will be moved or copied. | 2019-01-01 |
Before (YYYY-MM-DD) | End date before which to copy/move content. The content created/modified ON or AFTER the start date and BEFORE the end date will be copied/moved. This is an optional parameter and if left blank, all contents added or modified ON or AFTER the Since date will be copied/moved. | 2019-02-01 |
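The Since/Before window described above is half-open: an item is selected when its created/modified date is on or after Since and strictly before Before, and a blank bound simply drops that side of the check. A sketch of this selection logic:

```python
from datetime import date
from typing import Optional

def in_window(modified: date, since: Optional[date], before: Optional[date]) -> bool:
    """True when an item falls inside the copy/move window.

    A blank (None) bound means that side is unbounded, matching the
    behaviour described for the Since and Before fields.
    """
    if since is not None and modified < since:
        return False
    if before is not None and modified >= before:
        return False
    return True

# With Since=2019-01-01 and Before=2019-02-01, January items match;
# an item dated exactly on the Before date does not.
print(in_window(date(2019, 1, 15), date(2019, 1, 1), date(2019, 2, 1)))  # True
print(in_window(date(2019, 2, 1), date(2019, 1, 1), date(2019, 2, 1)))   # False
```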
Destination Information
FIELD NAME | DESCRIPTION | SAMPLE VALUE |
---|---|---|
S3 bucket name | The name of the S3 bucket in your account which will hold the output of the LegacyFlo request. (Refer to the steps here to prepare your own S3 bucket to receive LegacyFlo output.) | Backupbucket |
S3 folder path | The folder created on your S3 bucket for the output | GDrivebackup, FreshdeskSolution |
Doc Type | | |
File name | The name of the ZIP file that will hold the output | john-drive.zip, freshdesk-nov-backup.zip |
Report Information
FIELD NAME | DESCRIPTION | SAMPLE VALUE |
---|---|---|
Send report to mail id | Email address which will get the notifications | "postmaster@baya.com" |
Step 4 (optional): Submitting multiple LegacyFlo requests to back up data for multiple users
To submit multiple jobs using a CSV, create a CSV file listing all the users and the other details required for each request.
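As an illustration, such a CSV could be assembled with a short script. The column names below are hypothetical placeholders; the actual headers must match the format expected by the LegacyFlo CSV upload.

```python
import csv
import io

# Hypothetical column names -- replace with the headers required by
# the LegacyFlo CSV upload format.
FIELDS = ["RequestType", "UserID", "Domain", "S3Bucket", "S3FolderPath", "Filename"]

users = ["john", "jane"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
for user in users:
    writer.writerow({
        "RequestType": "OneDrive-S3-ZIP",
        "UserID": user,
        "Domain": "baya.com",
        "S3Bucket": "Backupbucket",
        "S3FolderPath": "OneDrivebackup",
        "Filename": f"{user}-drive.zip",  # one output ZIP per user
    })

print(buf.getvalue())
```

The resulting file has one row per user, each describing a complete OneDrive-S3-ZIP request.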