Building upon the series on migrating to Bitbucket Cloud, this third post addresses a critical aspect that the Bitbucket Cloud Migration Assistant (BCMA) doesn’t automatically handle: migrating reviewers. After migrating repositories and updating references, ensuring that your pull request (PR) review workflows remain intact is essential. This installment will guide you through automating the migration of default reviewers from Bitbucket Server (Data Center) to Bitbucket Cloud using Python scripts.
Be aware that the reviewers functionality differs slightly in the Cloud environment, as reported in the post Bitbucket Data Center vs Cloud – Detailed Comparison:
| Feature | Bitbucket Data Center | Bitbucket Cloud | Parity |
| --- | --- | --- | --- |
| Default reviewers | Per branch source/target | Per repository | ⚠️ |
The script in this post adds every default reviewer as a repository-level reviewer, regardless of the source/target branch conditions configured on the server. If that change is acceptable to you, go ahead and proceed with this solution.
Prerequisites
Before proceeding, it’s imperative to have completed the steps outlined in our previous posts:
- Migrating repositories and Git LFS objects to Bitbucket Cloud.
- Updating repository references within your codebase to point to the new locations in Bitbucket Cloud.
These initial steps ensure that your repositories are properly set up in Bitbucket Cloud, paving the way for migrating reviewer settings.
Why Migrate Reviewers?
Reviewers play a crucial role in the PR review process, ensuring code quality and compliance with project standards. Migrating these settings manually can be time-consuming and error-prone, especially for organizations with numerous repositories. Automating this process ensures a seamless transition, preserving your project’s governance and review standards.
Automating Reviewer Migration
The process involves two main scripts: the first to create a mapping of users between Bitbucket Server and Cloud, and the second to migrate the reviewer settings based on this mapping.
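Both scripts import their connection details from a shared config module that isn't shown in this series. Based on the keys the scripts access, a minimal config.py sketch could look like the following (every value is a placeholder to replace with your own):

# config.py - minimal sketch; the keys below are inferred from how the scripts use them
cloud = {
    'workspace': 'your-workspace-id',            # Bitbucket Cloud workspace
    'username': 'your-cloud-username',           # Bitbucket Cloud username
    'token': 'your-app-password-or-api-token',   # credential with permission to read members and administer repositories
}

on_prem = {
    'base_url': 'https://bitbucket.example.com',           # Bitbucket Server (Data Center) base URL
    'username': 'your-server-username',
    'password': 'your-server-password',
    'bitbucket_server_repositories': 'repositories.csv',   # CSV listing the repositories to process (used by Script 2)
}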
Script 1: Creating a User Mapping
This script fetches user information from both Bitbucket Server and Cloud and creates a CSV file mapping users between the two platforms. It matches users by display name, normalizing case and accented characters (via unidecode) to handle discrepancies, and records each user's server email address alongside the mapping for reference.
- Collect user data from both environments.
- Normalize and compare user information to create accurate mappings.
- Output a CSV file (bitbucket_users_match.csv) containing the mappings.
from requests.auth import HTTPBasicAuth
import csv
import os
import requests
from unidecode import unidecode
from config import cloud, on_prem
def merge_csv_rows_in_place(csv_path):
"""
Merges rows in a CSV file based on matching server_slug or cloud_nickname, modifying the file in place.
Parameters:
- csv_path: Path to the CSV file to be modified.
"""
# Read the input CSV file into a list of dictionaries
with open(csv_path, mode='r', newline='', encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile)
rows = [row for row in reader]
# Merge rows with the same server_slug or cloud_nickname
merged_rows = {}
for row in rows:
key = row['server_slug'] if row['server_slug'] else row['cloud_nickname']
if key in merged_rows:
# Merge values, preferring non-empty values
for field in row:
if row[field]: # If the current row's field is not empty
merged_rows[key][field] = row[field] # Update the merged row's field
else:
merged_rows[key] = row
# Overwrite the original CSV file with merged rows
with open(csv_path, mode='w', newline='', encoding='utf-8') as csvfile:
fieldnames = rows[0].keys() # Assuming all rows have the same fields
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for row in merged_rows.values():
writer.writerow(row)
print(f'The CSV file {csv_path} has been modified in place with merged rows.')
# Function to normalize keys
def hash_key(value):
return unidecode(value.lower())
# Function to get cloud users
def get_cloud_users(workspace):
auth = HTTPBasicAuth(cloud['username'], cloud['token'])
headers = {"Accept": "application/json"}
    # Note: only the first page of members (pagelen=100) is fetched; larger workspaces
    # would need to follow the paginated 'next' links in the response.
    response = requests.get(
f"https://api.bitbucket.org/2.0/workspaces/{workspace}/members?pagelen=100",
auth=auth,
headers=headers)
return response.json()['values']
# Function to get server users
def get_server_users():
auth = HTTPBasicAuth(on_prem['username'], on_prem['password'])
headers = {"Accept": "application/json"}
response = requests.get(
f"{on_prem['base_url']}/rest/api/latest/users?limit=3000",
auth=auth,
headers=headers
)
return response.json()['values']
# Main process
def main():
user_map = {}
cloud_user_names = set()
server_user_names = set()
script_location = os.path.dirname(os.path.abspath(__file__))
default_output_file = os.path.join(script_location, "bitbucket_users_match.csv")
# Processing cloud users
for user in get_cloud_users(cloud['workspace']):
key = hash_key(user['user']['display_name'])
cloud_user_names.add(key)
user_map[key] = {
'cloud_account_id': user['user']['account_id'],
'cloud_uuid': user['user']['uuid'],
'cloud_nickname': user['user']['nickname'],
'cloud_display_name': user['user']['display_name'],
}
# Processing server users
for user in get_server_users():
key = hash_key(user['displayName'])
server_user_names.add(key)
if key not in user_map:
user_map[key] = {}
user_map[key].update({
'server_id': user['id'],
'server_slug': user['slug'],
'server_displayName': user['displayName'],
'server_emailAddress': user['emailAddress']
})
with open(default_output_file, 'w', newline='') as file:
writer = csv.writer(file, delimiter=',')
headers = ['server_slug', 'server_id', 'server_displayName', 'server_emailAddress', 'cloud_uuid', 'cloud_account_id', 'cloud_nickname', 'cloud_display_name']
writer.writerow(headers)
for user in user_map.values():
            writer.writerow([
                user.get('server_slug'),
                user.get('server_id'),
                user.get('server_displayName'),
                user.get('server_emailAddress'),  # without this column the data rows are misaligned with the header
                user.get('cloud_uuid'),
                user.get('cloud_account_id'),
                user.get('cloud_nickname'),
                user.get('cloud_display_name'),
            ])
merge_csv_rows_in_place(default_output_file)
if __name__ == "__main__":
main()
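For reference, the generated bitbucket_users_match.csv has one row per matched user; an illustrative example (with entirely made-up values) looks like this:

server_slug,server_id,server_displayName,server_emailAddress,cloud_uuid,cloud_account_id,cloud_nickname,cloud_display_name
jdoe,101,Jane Doe,jane.doe@example.com,{1f2e3d4c-5b6a-4789-9abc-def012345678},5d1234567890ab0123456789,Jane Doe,Jane Doe

Rows where a user exists on only one side will have the corresponding columns left empty; those are the entries worth reviewing by hand before running the second script.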
Script 2: Migrating Reviewer Settings
With the user mappings in place, the second script utilizes this information to replicate reviewer settings for each repository.
- Fetch reviewer settings from each repository in Bitbucket Server.
- Identify corresponding users in Bitbucket Cloud using the mapping CSV.
- Apply reviewer settings to the respective repositories in Bitbucket Cloud.
from requests.auth import HTTPBasicAuth
import csv
import json
import os
import requests
import logging
from config import cloud, on_prem
# Initialize logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s: %(message)s')
# Define file paths directly
script_location = os.path.dirname(os.path.abspath(__file__))
user_file = os.path.join(script_location, "bitbucket_users_match.csv")
input_file = on_prem['bitbucket_server_repositories']
# Function to get server reviewer
def get_server_reviewer(project_key, repo_slug):
try:
response = requests.get(
f"{on_prem['base_url']}/rest/default-reviewers/latest/projects/{project_key}/repos/{repo_slug}/conditions",
auth=HTTPBasicAuth(on_prem['username'], on_prem['password']), # Authentication setup
headers={"Accept": "application/json"} # Request headers
)
if response.status_code == 200:
return response.json()
else:
logging.error(f"Failed to fetch server reviewer for {repo_slug}: {response.text}")
return []
except Exception as e:
logging.exception("Error fetching server reviewer")
return []
# Function to add cloud reviewer
def add_cloud_reviewer(workspace, repo_slug, username):
try:
response = requests.request(
"PUT",
f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/default-reviewers/{username}",
auth=HTTPBasicAuth(cloud['username'], cloud['token']), # Authentication setup
headers={"Accept": "application/json"} # Request headers
)
return response
except Exception as e:
logging.exception(f"Error adding cloud reviewer for {repo_slug}")
return None
# Main function to copy server reviewers to cloud
def copy_server_reviewers_to_cloud(project_key, repository_slug, workspace, user_map):
reviewer_config_list = get_server_reviewer(project_key, repository_slug)
for config in reviewer_config_list:
for reviewer in config['reviewers']:
key = str(reviewer['id'])
if key not in user_map:
logging.warning(f"Unable to add {reviewer['displayName']} to {repository_slug}, missing user on server.")
continue
user_data = user_map.get(key)
            if not user_data.get('cloud_uuid'):  # CSV rows always contain the column, so check for an empty value
                logging.warning(f"Unable to add {reviewer['displayName']} to {repository_slug}, missing user on cloud.")
                continue
response = add_cloud_reviewer(workspace, repository_slug, user_data['cloud_uuid'])
if response and response.ok:
logging.info(f"{reviewer['displayName']} added to {repository_slug} ({response.status_code})")
else:
logging.error(f"Failed to add {reviewer['displayName']} to {repository_slug}. Error: {response.text if response else 'No response'}")
# Loading the user map from the CSV file
def load_user_map(user_file):
user_map = {}
with open(user_file) as f:
reader = csv.DictReader(f, delimiter=",")
for row in reader:
user_map[row['server_id']] = row
return user_map
# Load user map and process each repository from the input CSV
if __name__ == "__main__":
user_map = load_user_map(user_file)
with open(input_file) as f:
reader = csv.DictReader(f, delimiter=",")
for row in reader:
logging.info(f"Copying server reviewers to cloud for project {row['project_key']} and repository {row['slug']}")
copy_server_reviewers_to_cloud(row['project_key'], row['slug'], cloud['workspace'], user_map)
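Note that the repositories CSV referenced by on_prem['bitbucket_server_repositories'] must provide at least the project_key and slug columns, since those are the fields the script reads for each row. A minimal example with placeholder values:

project_key,slug
PROJ,my-first-repo
PROJ,my-second-repo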
Step-by-Step Execution
- Ensure prerequisites are met: Complete the previous steps in the series to set up your repositories in Bitbucket Cloud.
- Run the user mapping script: Generate the bitbucket_users_match.csv file, creating a link between Bitbucket Server and Cloud users.
- Execute the reviewer migration script: Use the mapping to migrate reviewer settings to Bitbucket Cloud.
Verifying the Migration
After running the scripts, verify that:
- The user mappings in bitbucket_users_match.csv accurately reflect your user base.
- Reviewer settings in Bitbucket Cloud match the configurations in Bitbucket Server.
Check a few repositories manually and ensure that the automated PR review processes function as expected in the cloud environment.
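If you prefer to spot-check through the API rather than the web UI, a small helper like the one below (a sketch reusing the same config module; the repository slug is a placeholder) lists the default reviewers currently configured on a Cloud repository so you can compare them against the server-side conditions:

# list_cloud_reviewers.py - sketch for spot-checking a repository's default reviewers in Bitbucket Cloud
from requests.auth import HTTPBasicAuth
import requests

from config import cloud

def list_cloud_default_reviewers(workspace, repo_slug):
    """Return the display names of the default reviewers configured on a Cloud repository."""
    response = requests.get(
        f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/default-reviewers",
        auth=HTTPBasicAuth(cloud['username'], cloud['token']),
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    # Only the first page is read here; repositories with many reviewers would need pagination
    return [user['display_name'] for user in response.json().get('values', [])]

if __name__ == "__main__":
    print(list_cloud_default_reviewers(cloud['workspace'], 'my-repository'))  # 'my-repository' is a placeholder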
Stay tuned for further posts, where I’ll cover more aspects of a comprehensive migration strategy to Bitbucket Cloud, ensuring a smooth transition for your team and projects.